
This Week’s Awesome Tech Stories From Around the Web (Through February 7)

2026-02-08 02:02:27

ARTIFICIAL INTELLIGENCE

Moltbook Was Pure AI Theater
Will Douglas Heaven | MIT Technology Review ($)

“As the hype dies down, Moltbook looks less like a window onto the future and more like a mirror held up to our own obsessions with AI today. It also shows us just how far we still are from anything that resembles general-purpose and fully autonomous AI.”

COMPUTING

‘Quantum Twins’ Simulate What Supercomputers Can’t
Dina Genkina | IEEE Spectrum

“What analog quantum simulation lacks in flexibility, it makes up for in feasibility: quantum simulators are ready now. ‘Instead of using qubits, as you would typically in a quantum computer, we just directly encode the problem into the geometry and structure of the array itself,’ says Sam Gorman, quantum systems engineering lead at Sydney-based startup Silicon Quantum Computing.”

ARTIFICIAL INTELLIGENCE

A New AI Math Startup Just Cracked 4 Previously Unsolved Problems
Will Knight | Wired ($)

“‘What AxiomProver found was something that all the humans had missed,’ Ono tells Wired. The proof is one of several solutions to unsolved math problems that Axiom says its system has come up with in recent weeks. The AI has not yet solved any of the most famous (or lucrative) problems in the field of mathematics, but it has found answers to questions that have stumped experts in different areas for years.”

BIOTECHNOLOGY

Nasal Spray Could Prevent Infections From Any Flu Strain
Alice Klein | New Scientist ($)

“An antibody nasal spray has shown promise for protecting against flu in preliminary human trials, after first being validated in mice and monkeys. It may be useful for combatting future flu pandemics because it seems to neutralize any kind of influenza virus, including ones that spill over from non-human animals.”

ROBOTICS

A Peek Inside Physical Intelligence, the Startup Building Silicon Valley’s Buzziest Robot Brains
Connie Loizos | TechCrunch

“‘Think of it like ChatGPT, but for robots,’ Sergey Levine tells me, gesturing toward the motorized ballet unfolding across the room. …What I’m watching, he explains, is the testing phase of a continuous loop: data gets collected on robot stations here and at other locations—warehouses, homes, wherever the team can set up shop—and that data trains general-purpose robotic foundation models.”

ARTIFICIAL INTELLIGENCE

This Is the Most Misunderstood Graph in AI
Grace Huckins | MIT Technology Review ($)

“To some, METR’s ‘time horizon plot’ indicates that AI utopia—or apocalypse—is close at hand. The truth is more complicated. …’I think the hype machine will basically, whatever we do, just strip out all the caveats,’ he says. Nevertheless, the METR team does think that the plot has something meaningful to say about the trajectory of AI progress.”

TECH

AI Bots Are Now a Significant Source of Web Traffic
Will Knight | Wired ($)

“The viral virtual assistant OpenClaw—formerly known as Moltbot, and before that Clawdbot—is a symbol of a broader revolution underway that could fundamentally alter how the internet functions. Instead of a place primarily inhabited by humans, the web may very soon be dominated by autonomous AI bots.”

ENERGY

Fast-Charging Quantum Battery Built Inside a Quantum Computer
Karmela Padavic-Callaghan | New Scientist ($)

“Quach and his colleagues have previously theorized that quantum computers powered by quantum batteries could be more efficient and easier to make larger, which would make them more powerful. ‘This was a theoretical idea that we proposed only recently, but the new work could really be used as the basis to power future quantum computers,’ he says.”

SCIENCE

Expansion Microscopy Has Transformed How We See the Cellular World
Molly Herring | Quanta Magazine

“Rather than invest in more powerful and more expensive technologies, some scientists are using an alternative technique called expansion microscopy, which inflates the subject using the same moisture-absorbing material found in diapers. ‘It’s cheap, it’s easy to learn, and indeed, on a cheap microscope, it gives you better images,’ said Omaya Dudin, a cell biologist at the University of Geneva who studies multicellularity.”

BIOTECHNOLOGY

CRISPR Grapefruit Without the Bitterness Are Now in Development
Michael Le Page | New Scientist ($)

“It has been shown that disabling one gene via gene editing can greatly reduce the level of the chemicals that make grapefruit so bitter. …He thinks this approach could even help save the citrus industry. A bacterial disease called citrus greening, also known as huanglongbing, is having a devastating impact on these fruits. The insects that spread the bacteria can’t survive in areas with cold winters, says Carmi, but cold-hardy citrus varieties are so bitter that they are inedible.”

FUTURE

What We’ve Been Getting Wrong About AI’s Truth Crisis
James O’Donnell | MIT Technology Review ($)

“We were well warned of this, but we responded by preparing for a world in which the main danger was confusion. What we’re entering instead is a world in which influence survives exposure, doubt is easily weaponized, and establishing the truth does not serve as a reset button. And the defenders of truth are already trailing way behind.”


Scientists Want to Give ChatGPT an Inner Monologue to Improve Its ‘Thinking’

2026-02-06 23:00:00

A new approach would help AI assess its own confidence, detect confusion, and decide when to think harder.

Have you ever had the experience of rereading a sentence multiple times only to realize you still don’t understand it? As scores of incoming college freshmen are taught, when you realize you’re spinning your wheels, it’s time to change your approach.

This process, becoming aware of something not working and then changing what you’re doing, is the essence of metacognition, or thinking about thinking.

It’s your brain monitoring its own thinking, recognizing a problem, and controlling or adjusting your approach. In fact, metacognition is fundamental to human intelligence and, until recently, has been understudied in artificial intelligence systems.

My colleagues Charles Courchaine, Hefei Qiu, and Joshua Iacoboni and I are working to change that. We’ve developed a mathematical framework designed to allow generative AI systems, specifically large language models like ChatGPT or Claude, to monitor and regulate their own internal “cognitive” processes. In some sense, you can think of it as giving generative AI an inner monologue, a way to assess its own confidence, detect confusion, and decide when to think harder about a problem.

Why Machines Need Self-Awareness

Today’s generative AI systems are remarkably capable but fundamentally unaware. They generate responses without genuinely knowing how confident or confused their response might be, whether it contains conflicting information, or whether a problem deserves extra attention. This limitation becomes critical in high-stakes applications such as medical diagnosis, financial advice, and autonomous vehicle decision-making, where an AI’s inability to recognize its own uncertainty can have serious consequences.

For example, consider a medical generative AI system analyzing symptoms. It might confidently suggest a diagnosis without any mechanism to recognize situations where it might be more appropriate to pause and reflect, like “These symptoms contradict each other” or “This is unusual, I should think more carefully.”

Developing such a capacity would require metacognition, which involves both the ability to monitor one’s own reasoning through self-awareness and to control the response through self-regulation.

Inspired by neurobiology, our framework aims to give generative AI a semblance of these capabilities by using what we call a metacognitive state vector, which is essentially a quantified measure of the generative AI’s internal “cognitive” state across five dimensions.

5 Dimensions of Machine Self-Awareness

One way to think about these five dimensions is to imagine giving a generative AI system five different sensors for its own thinking.

We quantify each of these dimensions within an overall mathematical framework to create the metacognitive state vector and use it to control ensembles of large language models. In essence, the metacognitive state vector converts a large language model’s qualitative self-assessments into quantitative signals that it can use to control its responses.

For example, when a large language model’s confidence in a response drops below a certain threshold or the conflicts in the response exceed some acceptable levels, it might shift from fast, intuitive processing to slow, deliberative reasoning. This is analogous to what psychologists call System 1 and System 2 thinking in humans.
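
To make the switching rule concrete, here is a minimal sketch in Python. The two dimension names, the thresholds, and the interface are illustrative assumptions for this article, not the team’s published framework:

```python
from dataclasses import dataclass

@dataclass
class MetacognitiveState:
    confidence: float  # how sure the model is of its current answer, 0 to 1
    conflict: float    # degree of internal contradiction detected, 0 to 1
    # ...the framework tracks five dimensions in total; the rest would go here.

def choose_mode(state: MetacognitiveState,
                min_confidence: float = 0.7,
                max_conflict: float = 0.3) -> str:
    """Return 'system2' (slow, deliberative) when a threshold is crossed,
    otherwise 'system1' (fast, intuitive)."""
    if state.confidence < min_confidence or state.conflict > max_conflict:
        return "system2"
    return "system1"

# A low-confidence, high-conflict self-assessment triggers deliberate reasoning.
print(choose_mode(MetacognitiveState(confidence=0.4, conflict=0.5)))  # system2
```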

[Figure] This conceptual diagram shows the basic idea for giving a set of large language models an awareness of the state of its processing. Credit: Ricky J. Sethi

Conducting an Orchestra

Imagine a large language model ensemble as an orchestra where each musician—an individual large language model—comes in at certain times based on the cues received from the conductor. The metacognitive state vector acts as the conductor’s awareness, constantly monitoring whether the orchestra is in harmony, whether someone is out of tune, or whether a particularly difficult passage requires extra attention.

When performing a familiar, well-rehearsed piece, like a simple folk melody, the orchestra easily plays in quick, efficient unison with minimal coordination needed. This is the System 1 mode. Each musician knows their part, the harmonies are straightforward, and the ensemble operates almost automatically.

But when the orchestra encounters a complex jazz composition with conflicting time signatures, dissonant harmonies, or sections requiring improvisation, the musicians need greater coordination. The conductor directs the musicians to shift roles: Some become section leaders, others provide rhythmic anchoring, and soloists emerge for specific passages.

This is the kind of system we’re hoping to create in a computational context by implementing our framework, orchestrating ensembles of large language models. The metacognitive state vector informs a control system that acts as the conductor, telling it to switch modes to System 2. It can then tell each large language model to assume different roles—for example, critic or expert—and coordinate their complex interactions based on the metacognitive assessment of the situation.
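
A minimal sketch of that conductor logic follows. The roles (“critic,” “expert”) come from the description above, but the draft-critique-revise loop and the call interface are hypothetical assumptions, not the team’s implementation:

```python
from typing import Callable, List

# Stand-in type for an LLM call: takes a prompt and a role, returns text.
LLM = Callable[[str, str], str]

def conduct(prompt: str, models: List[LLM], mode: str) -> str:
    """Route a prompt through the ensemble based on the metacognitive mode."""
    if mode == "system1":
        # Familiar territory: one model answers directly, no coordination.
        return models[0](prompt, "soloist")
    # System 2: assign roles and run a draft-critique-revise loop.
    draft = models[0](prompt, "expert")
    critique = models[1](f"Critique this answer: {draft}", "critic")
    return models[0](f"Revise the draft given this critique.\n"
                     f"Draft: {draft}\nCritique: {critique}", "expert")

# Toy stand-in for a real model, just to show the control flow.
echo = lambda prompt, role: f"[{role}] response to: {prompt[:40]}..."
print(conduct("Diagnose these conflicting symptoms", [echo, echo], "system2"))
```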

Impact and Transparency

The implications extend far beyond making generative AI slightly smarter. In health care, a metacognitive generative AI system could recognize when symptoms don’t match typical patterns and escalate the problem to human experts rather than risking misdiagnosis. In education, it could adapt teaching strategies when it detects student confusion. In content moderation, it could identify nuanced situations requiring human judgment rather than applying rigid rules.

Perhaps most importantly, our framework makes generative AI decision-making more transparent. Instead of a black box that simply produces answers, we get systems that can explain their confidence levels, identify their uncertainties, and show why they chose particular reasoning strategies.

This interpretability and explainability are crucial for building trust in AI systems, especially in regulated industries or safety-critical applications.

The Road Ahead

Our framework does not give machines consciousness or true self-awareness in the human sense. Instead, our hope is to provide a computational architecture for allocating resources and improving responses that also serves as a first step toward more sophisticated approaches for full artificial metacognition.

The next phase in our work involves validating the framework with extensive testing, measuring how metacognitive monitoring improves performance across diverse tasks, and extending the framework to start reasoning about reasoning, or metareasoning. We’re particularly interested in scenarios where recognizing uncertainty is crucial, such as in medical diagnoses, legal reasoning, and generating scientific hypotheses.

Our ultimate vision is generative AI systems that don’t just process information but understand their cognitive limitations and strengths. This means systems that know when to be confident and when to be cautious, when to think fast and when to slow down, and when they’re qualified to answer and when they should defer to others.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


This Robotic Hand Detaches and Skitters About Like Thing From ‘The Addams Family’

2026-02-06 07:42:51

Each finger can bend backwards for ultra-flexible crawling and grasping.

Here’s a party trick: Try opening a bottle of water using just your thumb and pointer finger while holding it, without spilling. It sounds simple, but the feat requires strength, dexterity, and coordination. Our hands have long inspired robotic mimics, but mechanical facsimiles still fall far short of their natural counterparts.

To Aude Billard and colleagues at the Swiss Federal Institute of Technology, trying to faithfully recreate the hand may be the wrong strategy. Why limit robots to human anatomy?

Billard’s team has now developed a prototype similar to Thing from The Addams Family.

Mounted on a robotic arm, the hand detaches at the wrist and transforms into a spider-like creature that can navigate nooks and crannies to pick up objects with its finger-legs. It then skitters on its fingertips back to the arm while holding on to its stash.

At a glance, the robot looks like a human hand. But it has an extra trick up its sleeve: It’s symmetrical, in that every finger is the same. The design essentially provides the hand with multiple thumbs. Any two fingers can pinch an object as opposing finger pairs. This makes complex single-handed maneuvers, like picking up a tube of mustard and a Pringles can at the same time, far easier. The robot can also bend its fingers forwards and backwards in ways that would break ours.
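
A quick back-of-envelope calculation shows how much that symmetry multiplies the grasping options (an illustration, not a figure from the paper): with five interchangeable fingers, any of the 10 possible finger pairs can oppose each other, versus only the four thumb-finger pairs on a human-like hand.

```python
from itertools import combinations

# A symmetric five-finger hand where any finger can oppose any other allows
# C(5, 2) = 10 pinch pairings; a hand where only the thumb opposes allows 4.
n_fingers = 5
symmetric_pairs = len(list(combinations(range(n_fingers), 2)))
thumb_only_pairs = n_fingers - 1
print(symmetric_pairs, thumb_only_pairs)  # 10 4
```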

“The human hand is often viewed as the pinnacle of dexterity, and many robotic hands adopt anthropomorphic designs,” wrote the team. But by departing from anatomical constraints, the robot is both a hand and a walking machine capable of tasks that elude our hands.

Out of Reach

If you’ve ever tried putting a nut on a bolt in an extremely tight space, you’re probably very familiar with the limits of our hands. Grabbing and orienting tiny bits of hardware while holding a wrench in position can be extremely frustrating, especially if you have to bend your arm or wrist at an uncomfortable angle for leverage.

Sculpted by evolution, our hands can dance around a keyboard, perform difficult surgeries, and do other remarkable things. But their design can be improved. For one, our hands are asymmetrical and only have one opposable thumb, limiting dexterity in some finger pairs. Try screwing on a bottle cap with your middle finger and pinkie, for example. And to state the obvious, wrist movement and arm length restrict our hands’ overall capabilities. Also, our fingers can’t fully bend backwards, limiting the scope of their movement.

“Many anthropomorphic robotic hands inherit these constraints,” wrote the authors.

Partly inspired by nature, the team re-envisioned the concept of a hand or a finger. Rather than just a grasping tool, a hand could also have crawling abilities, a bit like octopus tentacles that seamlessly switch between movement and manipulation. Combining the two could extend the hand’s dexterity and capabilities.

Handy Upgrade

The team’s design process began with a database of standard hand models. Using a genetic algorithm, a type of machine learning inspired by natural selection, the team ran simulations on how different finger configurations changed the hand’s abilities.

By playing with the parameters, like how many fingers are needed to crawl smoothly, they zeroed in on a few guidelines. Five or six fingers gave the best performance, balancing grip strength and movement. Adding more digits caused the robot to stumble over its extra fingers.
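
For readers unfamiliar with the technique, here is a toy genetic algorithm in the same spirit: evolve a population of candidate designs, keep the fittest, and mutate the survivors. The genome (just a finger count), the fitness function, and the settings are illustrative assumptions; the team’s actual search simulated how full finger configurations affected grasping and crawling.

```python
import random

def fitness(n_fingers: int) -> float:
    # Stand-in score: reward grip capability up to six digits, penalize extra
    # digits that a simulated hand would stumble over while crawling.
    grip = min(n_fingers, 6) / 6
    clutter = max(0, n_fingers - 6) * 0.2
    return grip - clutter

def evolve(pop_size: int = 20, generations: int = 30) -> int:
    population = [random.randint(2, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # selection
        children = [
            max(2, min(10, random.choice(survivors) + random.choice([-1, 0, 1])))
            for _ in range(pop_size // 2)
        ]  # mutation
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # settles on six digits under this toy fitness
```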

In the final design, each three-jointed finger can bend towards the palm or to the back of the hand. The fingertips are coated in silicone for a better grip. Strong magnets at the base of the palm allow the hand to snap onto and detach from a robotic arm. The team made five- and six-fingered versions.

When attached to the arm, the hand easily pinches a Pringles can, tennis ball, and pen-shaped rod between two fingers. Its symmetrical design allows for some odd finger pairings, like using the equivalent of a ring and middle finger to tightly clutch a ball.

Other demos showcase its maneuverability. In one test, the robot twists off a mustard bottle cap while keeping the bottle steady. And because its fingers bend backwards, the hand can simultaneously pick up two objects, securing one on each side of its palm.

“While our robotic hand can perform common grasping modes like human hands, our design exceeds human capabilities by allowing any combination of fingers to form opposing finger pairs,” wrote the team. This allows “simultaneous multi-object grasping with fewer fingers.”

When released from the arm, the robot turns into a spider-like crawler. In another test, the six-fingered version grabs three blocks, none of which could be reached without detaching. The hand picks up the first two blocks by wrapping individual fingers around each. The same fingers then pinch the third block, and the robot skitters back to the arm on its remaining fingers.

The robot’s superhuman agility could let it explore places human hands can’t reach or traverse hazardous conditions during disaster response. It might also handle industrial inspection, like checking for rust or leakage in narrow pipes, or pick objects just out of reach in warehouses.

The team is also eyeing a more futuristic use: The hand could be adapted for prosthetics or even augmentation. Studies of people born with six fingers or those experimenting with an additional robotic finger have found the brain rapidly remaps to incorporate the digit in a variety of movements, often leading to more dexterity.

“The symmetrical, reversible functionality is particularly valuable in scenarios where users could benefit from capabilities beyond normal human function,” said Billard in a press release, but more work is needed to test the cyborg idea.


Humanity’s Last Exam Stumps Top AI Models—and That’s a Good Thing

2026-02-04 05:05:21

The test uses thousands of graduate-level questions to track AI performance across academic disciplines.

How do you translate a Roman inscription found on a tombstone? How many pairs of tendons are supported by one bone in hummingbirds? Here is a chemical reaction that requires three steps: What are they? Based on the latest research on Tiberian pronunciation, identify all syllables ending in a consonant sound from this Hebrew text.

These are just a few example questions from the latest attempt to measure the capability of large language models, the algorithms that power ChatGPT and Gemini. They’re getting “smarter” in specific domains—math, biology, medicine, programming—and developing a sort of common sense.

Researchers have long relied on benchmarks, like the dreaded standardized tests we endured in school, to track AI performance. But as cutting-edge algorithms now regularly score over 90 percent on such tests, older benchmarks are increasingly obsolete.

An international team has now developed a kind of new SAT for language models. Dubbed Humanity’s Last Exam (HLE), the test has 2,500 challenging questions spanning math, the humanities, and the natural sciences. A human expert crafted and carefully vetted each question so the answers are non-ambiguous and can’t be easily found online.

Although the test captures some general reasoning in models, it measures task performance, not “intelligence.” The exam focuses on expert-level academic problems, which are a far cry from the messy scenarios and decisions we face daily. But as AI increasingly floods many research fields, the HLE benchmark offers an objective way to measure improvement.

“HLE no doubt offers a useful window into today’s AI expertise,” wrote MIT’s Katherine Collins and Joshua Tenenbaum, who were not involved in the study. “But it is by no means the last word on humanity’s thinking or AI’s capacity to contribute to it.”

Moving Scale

It seems that AI has steadily become smarter over the past few years. But what exactly does “smart” mean for an algorithm?

A common way to measure AI “smarts” is to challenge different AI models—or upgraded versions of the same model—with standardized benchmarks. These collections of questions cover a wide range of topics and can’t be answered with a simple web search. They require both an extensive representation of the world, and more importantly, the ability to use it to answer questions. It’s like taking a driver’s license test: You can memorize the entire handbook of rules and regulations but still need to figure out who has the right of way in any scenario.

However, benchmarks are only useful if they still stump AI. And the models have become expert test takers. Cutting-edge large language models are posting near-perfect scores across benchmark tests, making the tests less effective at detecting genuine advances.

The problem “has grown worse because as well as being trained on the entire internet, current AI systems can often search for information online during the test,” essentially learning to cheat, wrote Collins and Tenenbaum.

Working with the non-profit Center for AI Safety and Scale AI, the HLE Contributors Consortium designed a new benchmark tailor-made to confuse AI. They asked thousands of experts from 50 countries to submit graduate-level questions in specific fields. The questions come in two formats: free-response answers that must exactly match the actual solution, and multiple choice. This makes it easy to automatically score test results.

Notably, the team avoided incorporating questions requiring longer or open-ended answers, such as writing a scientific paper, a law brief, or other cases where there isn’t a clearly correct answer or a way to gauge if an answer is right.
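
Both formats reduce grading to string comparison, which is what makes scoring automatic. Here is a minimal sketch of such a grader, with an assumed item schema that may well differ from HLE’s actual harness:

```python
def grade(item: dict, model_answer: str) -> bool:
    normalized = model_answer.strip().lower()
    if item["type"] == "exact_match":
        return normalized == item["answer"].strip().lower()
    if item["type"] == "multiple_choice":
        return normalized == item["answer"].lower()  # e.g., "b"
    raise ValueError("open-ended items were deliberately excluded")

items = [
    {"type": "exact_match", "answer": "42"},
    {"type": "multiple_choice", "answer": "B"},
]
answers = ["42", "c"]
score = sum(grade(i, a) for i, a in zip(items, answers)) / len(items)
print(f"{score:.0%}")  # 50%
```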

They chose questions in a multi-step process to gauge difficulty and originality. Roughly 70,000 submissions were tested on multiple AI models. Only those that stumped models advanced to the next stage, where experts judged their usefulness for AI evaluation using strict guidelines.

The team has released 2,500 questions from the HLE collection. They’ve kept the rest private to prevent AI systems from gaming the test and outperforming on questions they’ve seen before.

When the team first released the test in early 2025, leading AI models from Google, OpenAI, and Anthropic scored in the single digits. The test subsequently caught the eye of AI companies, and many adopted it to demonstrate the performance of new releases. Newer algorithms have shown some improvement, though even leading models still struggle: OpenAI’s GPT-4o scored a measly 2.7 percent, whereas GPT-5’s success rate increased to 25 percent.

A New Standard?

Like IQ tests and standardized college admission exams, HLE has come under fire. Some people object to the test’s bombastic name, which could lead the general public to misunderstand an AI’s capabilities compared to human experts.

Others question what the test actually measures. Expertise across a wide range of academic fields and model improvement are obvious answers. However, HLE’s current curation inherently limits “the most challenging and meaningful questions that human experts engage with,” which require thoughtful responses, often across disciplines, that can hardly be captured with short answers or multiple-choice questions, wrote Collins and Tenenbaum.

Expertise also involves far more than answering existing questions. Beyond solving a given problem, experts can also evaluate whether the question makes sense—for example, if it has answers the test-maker didn’t consider—and gauge how confident they are of their answers.

“Humanity is not contained in any static test, but in our ability to continually evolve both in asking and answering questions we never, in our wildest dreams, thought we would—generation after generation,” Subbarao Kambhampati, former president of the Association for the Advancement of Artificial Intelligence, who was not involved in the study, wrote on X.

And although an increase in HLE score could be due to fundamental advances in a model, it could also be because model-makers gave an algorithm extra training on the public dataset—like studying the previous year’s exam questions before a test. In this case, the exam mainly reflects the AI’s test performance, not that it has gained expertise or “intelligence.”

The HLE team embraces these criticisms and is continuing to improve the benchmark. Others are developing completely different scales. Using human tests to benchmark AI has been the norm, but researchers are looking into other approaches that could better capture an AI’s scientific creativity or collaborative thinking with humans in the real world. A consensus on AI intelligence, and how to measure it, remains a hot topic for debate.

Despite its shortcomings, HLE is a useful way to measure AI expertise. But looking forward, “as the authors note, their project will ideally make itself obsolete by forcing the development of innovative paradigms for AI evaluation,” wrote Collins and Tenenbaum.


Waymo Closes in on Uber and Lyft Prices, as More Riders Say They Trust Robotaxis

2026-02-03 07:49:07

Robotaxis have been more expensive with longer wait times. A study by Obi suggests that may be changing.

Robotaxis have long promised cheaper trips and shorter wait times, but so far, providers have struggled to match traditional platforms. New pricing and timing data from San Francisco shows that driverless services are now narrowing the gap with Uber and Lyft.

While it’s been possible to hail a driverless taxi in the US since 2020, robotaxis have long felt like an expensive novelty. Tourists and tech enthusiasts often piled in for some not-so-cheap thrills, but higher prices and longer wait times meant few people relied on them on a regular basis.

But a new study from ride-hailing price aggregator Obi suggests that may be about to change. Data on nearly 100,000 rides in San Francisco between Thanksgiving and New Year’s Day shows Waymo is now much more competitive with Uber and Lyft on both cost and availability. And while Tesla’s robotaxis still require a human safety driver and wait times remain long, the company is now undercutting everyone on price.

“That’s the biggest change to me,” Ashwini Anburajan, CEO of Obi, told Business Insider. “It’s the convergence in price as well as the reduced wait times because now you can actually compare them. It’s a more honest comparison between the three platforms.”

The last time Obi analyzed these two key metrics was in June 2025, when it found that Waymo rides cost 30 to 40 percent more than conventional ride-hailing. By late 2025, that premium had shrunk to 12.7 percent over Uber and 27.3 percent over Lyft. And for longer rides, between 2.7 and 5.8 miles, the gap nearly disappears, with Waymo only 2 percent pricier than Uber and 17 percent more than Lyft.

Tesla, on the other hand, is now the cheapest service by a significant margin. The average Tesla ride costs $8.17 and rarely exceeds $10, compared to Lyft’s $15.47 average, Uber’s $17.47, and Waymo’s $19.69, which suggests the company is making a concerted play to boost its market share.
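
The quoted premiums follow directly from those average fares; a quick sanity check:

```python
waymo, uber, lyft = 19.69, 17.47, 15.47
print(f"Waymo premium over Uber: {waymo / uber - 1:.1%}")  # 12.7%
print(f"Waymo premium over Lyft: {waymo / lyft - 1:.1%}")  # 27.3%
```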

“They’re using the playbook that Uber and Lyft used when they first came into the market—dramatically lower pricing, undercutting what’s existing in the market, and really just driving adoption,” Anburajan told Business Insider.

It could be a winning strategy. Price remains the top concern for customers in a survey Obi conducted as part of the research. However, Tesla is lagging considerably on their second biggest priority—wait times.

Operating with fewer than 200 vehicles across a 400-square-mile area, Tesla’s average wait time is 15.32 minutes—roughly three times longer than its competitors’. Waymo, meanwhile, is within touching distance of the traditional ride-hailing companies, with an average wait of 5.74 minutes, compared to Lyft’s 4.20 minutes and Uber’s industry-leading 3.28 minutes.

Obi also notes that Waymo’s longer average wait time is largely due to a capacity crunch during the 4 pm to 6 pm rush. During less busy periods, in particular early in the morning, Waymo often has the lowest wait times of all service providers.

Perhaps most importantly, the study suggests consumer attitudes towards driverless technology are shifting. Obi’s survey found 63 percent of adults in areas with robotaxi services are now comfortable or somewhat comfortable with self-driving cars, up from just 35 percent in the previous survey.

Attitudes towards safety have also turned around significantly. Last year, only 30.8 percent of people said they believed autonomous rideshares would be safer than regular taxis within five years, but in the latest survey this jumped to 52.5 percent.

While the research suggests robotaxis are rapidly making up ground on their conventional counterparts, it remains to be seen whether they can fully close the gap in a consumer segment where a few minutes or dollars make all the difference. But if they can keep up the momentum, it may not be long before there are fewer human drivers on the road.


This Week’s Awesome Tech Stories From Around the Web (Through January 31)

2026-01-31 23:00:00

ARTIFICIAL INTELLIGENCE

A Yann LeCun–Linked Startup Charts a New Path to AGI
Joel Khalili | Wired ($)

“As the world’s largest companies pour hundreds of billions of dollars into large language models, San Francisco-based Logical Intelligence is trying something different in pursuit of AI that can mimic the human brain. …The road to AGI, Bodnia contends, begins with the layering of these different types of AI: LLMs will interface with humans in natural language, EBMs will take up reasoning tasks, while world models will help robots take action in 3D space.”

ARTIFICIAL INTELLIGENCE

Google Project Genie Lets You Create Interactive Worlds From a Photo or Prompt
Ryan Whitwam | Ars Technica

“World models are exactly what they sound like—an AI that generates a dynamic environment on the fly. …The system first generates a still image, and from that you can generate the world. This is what Google calls ‘world sketching.'”

BIOTECHNOLOGY

The First Human Test of a Rejuvenation Method Will Begin ‘Shortly’
Antonio Regalado | MIT Technology Review ($)

“[Life Biosciences] plans to try to treat eye disease with a radical rejuvenation concept called ‘reprogramming’ that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech. The technique attempts to restore cells to a healthier state by broadly resetting their epigenetic controls—switches on our genes that determine which are turned on and off.”

FUTURE

The Wall Street Star Betting His Reputation on Robots and Flying Cars
Becky Peterson | The Wall Street Journal ($)

“Jonas will guide the bank’s clients on what he’s calling the ‘Cambrian explosion of bots’—a time in the not-so-distant-future in which fully autonomous vehicles, drones, humanoids and industrial robots grow large enough in population to rival the human race. His theory is deceptive in its simplicity: Anything that can be automated will be automated, he says, even humans.”

SPACE

Mapping 6,000 Worlds: The New Era of Exoplanetary Data
Eliza Strickland | IEEE Spectrum

“[Astronomers can now] compare planet sizes, masses, and compositions; track how tightly planets orbit their stars; and measure the prevalence of different kinds of planetary systems. Those statistics allow astronomers to estimate how frequently planets form, and to start making informed guesses about how often conditions arise that could support life. The Drake Equation uses such estimates to tackle one of humanity’s most profound questions: Are we alone in the universe?”

FUTURE

Stratospheric Internet Could Finally Start Taking Off This Year
Tereza Pultarova | MIT Technology Review ($)

“Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery.”

ROBOTICS

Waymo Robotaxi Hits a Child Near a School, Causing Minor Injuries
Andrew J. Hawkins | The Verge

“In a blog post, Waymo said its vehicle was traveling at 17mph when its autonomous system detected the child and then ‘braked hard,’ reducing its speed to 6mph before ‘contact was made.’ The child ‘stood up immediately, walked to the sidewalk,’ and Waymo said it called 911. ‘The vehicle moved to the side of the road, and stayed there until law enforcement cleared the vehicle to leave the scene,’ it said.”

ARTIFICIAL INTELLIGENCE

Ex-OpenAI Researcher’s Startup Targets Up to $1 Billion in Funding to Develop a New Type of AI
Stephanie Palazzolo and Wayne Ma | The Information ($)

“[Jerry] Tworek represents a small but growing group of AI researchers who believe the field needs an overhaul because today’s most popular model development techniques seem unlikely to be able to develop advanced AI that can achieve major breakthroughs in biology, medicine and other fields while also managing to avoid silly mistakes.”

ROBOTICS

Waymo’s Price Premium To Lyft and Uber Is Closing, Report Finds
Anita Ramaswamy | The Information ($)

“The average price to ride in Waymo’s robotaxis has dropped by 3.6% since March to $19.69 per ride, according to a new report by ride-hailing analytics firm Obi. Riding in a Waymo is now, on average, 12.7% more expensive than riding in an Uber and 27.4% more expensive than riding in a Lyft, down from a 30% to 40% premium for Waymo rides last April, the month covered by Obi’s previous report.”
