2026-02-06 07:42:51
Each finger can bend backwards for ultra-flexible crawling and grasping.
Here’s a party trick: Try unscrewing the cap of a water bottle with your thumb and pointer finger while holding the bottle in the same hand, without spilling. It sounds simple, but the feat requires strength, dexterity, and coordination. Our hands have long inspired robotic mimics, but mechanical facsimiles still fall far short of their natural counterparts.
To Aude Billard and colleagues at the Swiss Federal Institute of Technology, trying to faithfully recreate the hand may be the wrong strategy. Why limit robots to human anatomy?
Billard’s team has now developed a prototype similar to Thing from The Addams Family.
Mounted on a robotic arm, the hand detaches at the wrist and transforms into a spider-like creature that can navigate nooks and crannies to pick up objects with its finger-legs. It then skitters on its fingertips back to the arm while holding on to its stash.
At a glance, the robot looks like a human hand. But it has an extra trick up its sleeve: It’s symmetrical, in that every finger is the same. The design essentially provides the hand with multiple thumbs. Any two fingers can pinch an object as opposing finger pairs. This makes complex single-handed maneuvers, like picking up a tube of mustard and a Pringles can at the same time, far easier. The robot can also bend its fingers forwards and backwards in ways that would break ours.
“The human hand is often viewed as the pinnacle of dexterity, and many robotic hands adopt anthropomorphic designs,” wrote the team. But by departing from anatomical constraints, the robot is both a hand and a walking machine capable of tasks that elude our hands.
If you’ve ever tried putting a nut on a bolt in an extremely tight space, you’re probably very familiar with the limits of our hands. Grabbing and orienting tiny bits of hardware while holding a wrench in position can be extremely frustrating, especially if you have to bend your arm or wrist at an uncomfortable angle for leverage.
Sculpted by evolution, our hands can dance around a keyboard, perform difficult surgeries, and do other remarkable things. But their design can be improved. For one, our hands are asymmetrical and only have one opposable thumb, limiting dexterity in some finger pairs. Try screwing on a bottle cap with your middle finger and pinkie, for example. And to state the obvious, wrist movement and arm length restrict our hands’ overall capabilities. Also, our fingers can’t fully bend backwards, limiting the scope of their movement.
“Many anthropomorphic robotic hands inherit these constraints,” wrote the authors.
Partly inspired by nature, the team re-envisioned the concept of a hand or a finger. Rather than just a grasping tool, a hand could also have crawling abilities, a bit like octopus tentacles that seamlessly switch between movement and manipulation. Combining the two could extend the hand’s dexterity and capabilities.
The team’s design process began with a database of standard hand models. Using a genetic algorithm, an optimization technique inspired by natural selection, the team ran simulations on how different finger configurations changed the hand’s abilities.
By playing with the parameters, like how many fingers are needed to crawl smoothly, they zeroed in on a few guidelines. Five or six fingers gave the best performance, balancing grip strength and movement. Adding more digits caused the robot to stumble over its extra fingers.
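The search process described above can be sketched as a toy genetic algorithm. Everything below is illustrative: the fitness function is a made-up stand-in for the team’s simulated grasp-and-crawl scores, not their actual objective, and the evolution loop is a minimal version of selection plus mutation.

```python
import random

# Hypothetical fitness: grip improves with finger count but saturates,
# while digits beyond six interfere with crawling (a stand-in for the
# stumbling the researchers observed, NOT their real scoring function).
def fitness(n_fingers: int) -> float:
    grip = min(n_fingers, 6) / 6
    interference = max(0, n_fingers - 6) * 0.5
    return grip - interference

def evolve(generations: int = 50, pop_size: int = 20, seed: int = 0) -> int:
    rng = random.Random(seed)
    # start from a random population of hand designs with 2-10 fingers
    population = [rng.randint(2, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: the fitter half of the designs survives
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # mutation: each offspring gains or loses a finger at random
        children = [max(2, s + rng.choice([-1, 0, 1])) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()  # under this toy fitness, the search settles on six fingers
```

Under this stand-in fitness the search converges on five or six fingers, echoing the guideline the team reported: more digits hurt crawling more than they help grip.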
In the final design, each three-jointed finger can bend towards the palm or to the back of the hand. The fingertips are coated in silicone for a better grip. Strong magnets at the base of the palm allow the hand to snap onto and detach from a robotic arm. The team made five- and six-fingered versions.
When attached to the arm, the hand easily pinches a Pringles can, tennis ball, and pen-shaped rod between two fingers. Its symmetrical design allows for some odd finger pairings, like using the equivalent of a ring and middle finger to tightly clutch a ball.
Other demos showcase its maneuverability. In one test, the robot twists off a mustard bottle cap while keeping the bottle steady. And because its fingers bend backwards, the hand can simultaneously pick up two objects, securing one on each side of its palm.
“While our robotic hand can perform common grasping modes like human hands, our design exceeds human capabilities by allowing any combination of fingers to form opposing finger pairs,” wrote the team. This allows “simultaneous multi-object grasping with fewer fingers.”
When released from the arm, the robot turns into a spider-like crawler. In another test, the six-fingered version grabs three blocks, none of which could be reached without detaching. The hand picks up the first two blocks by wrapping individual fingers around each. The same fingers then pinch the third block, and the robot skitters back to the arm on its remaining fingers.
The robot’s superhuman agility could let it explore places human hands can’t reach or traverse hazardous conditions during disaster response. It might also handle industrial inspection, like checking for rust or leakage in narrow pipes, or pick objects just out of reach in warehouses.
The team is also eyeing a more futuristic use: The hand could be adapted for prosthetics or even augmentation. Studies of people born with six fingers or those experimenting with an additional robotic finger have found the brain rapidly remaps to incorporate the digit in a variety of movements, often leading to more dexterity.
“The symmetrical, reversible functionality is particularly valuable in scenarios where users could benefit from capabilities beyond normal human function,” said Billard in a press release, but more work is needed to test the cyborg idea.
The post This Robotic Hand Detaches and Skitters About Like Thing From ‘The Addams Family’ appeared first on SingularityHub.
2026-02-04 05:05:21
The test uses thousands of graduate-level questions to track AI performance across academic disciplines.
How do you translate a Roman inscription found on a tombstone? How many pairs of tendons are supported by one bone in hummingbirds? Here is a chemical reaction that requires three steps: What are they? Based on the latest research on Tiberian pronunciation, identify all syllables ending in a consonant sound from this Hebrew text.
These are just a few example questions from the latest attempt to measure the capability of large language models, the algorithms that power ChatGPT and Gemini. They’re getting “smarter” in specific domains—math, biology, medicine, programming—and developing a sort of common sense.
Like the dreaded standardized tests we endured in school, researchers have long relied on benchmarks to track AI performance. But as cutting-edge algorithms now regularly score over 90 percent on such tests, older benchmarks are rapidly becoming obsolete.
An international team has now developed a kind of new SAT for language models. Dubbed Humanity’s Last Exam (HLE), the test has 2,500 challenging questions spanning math, the humanities, and the natural sciences. A human expert crafted and carefully vetted each question so the answers are unambiguous and can’t be easily found online.
Although the test captures some general reasoning in models, it measures task performance, not “intelligence.” The exam focuses on expert-level academic problems, which are a far cry from the messy scenarios and decisions we face daily. But as AI increasingly floods many research fields, the HLE benchmark is an objective way to measure its improvement.
“HLE no doubt offers a useful window into today’s AI expertise,” wrote MIT’s Katherine Collins and Joshua Tenenbaum, who were not involved in the study. “But it is by no means the last word on humanity’s thinking or AI’s capacity to contribute to it.”
It seems that AI has steadily become smarter over the past few years. But what exactly does “smart” mean for an algorithm?
A common way to measure AI “smarts” is to challenge different AI models—or upgraded versions of the same model—with standardized benchmarks. These collections of questions cover a wide range of topics and can’t be answered with a simple web search. They require both an extensive representation of the world, and more importantly, the ability to use it to answer questions. It’s like taking a driver’s license test: You can memorize the entire handbook of rules and regulations but still need to figure out who has the right of way in any scenario.
However, benchmarks are only useful if they still stump AI. And the models have become expert test takers. Cutting-edge large language models are posting near-perfect scores across benchmark tests, making the tests less effective at detecting genuine advances.
The problem “has grown worse because as well as being trained on the entire internet, current AI systems can often search for information online during the test,” essentially learning to cheat, wrote Collins and Tenenbaum.
Working with the non-profit Center for AI Safety and Scale AI, the HLE Contributors Consortium designed a new benchmark tailor-made to confuse AI. They asked thousands of experts from 50 countries to submit graduate-level questions in specific fields. Answers take one of two formats: an exact response that must completely match the actual solution, or multiple choice. This makes it easy to automatically score test results.
Notably, the team avoided incorporating questions requiring longer or open-ended answers, such as writing a scientific paper, a law brief, or other cases where there isn’t a clearly correct answer or a way to gauge if an answer is right.
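Restricting answers to exact matches and multiple-choice letters is what makes fully automated grading possible. Here is a minimal sketch of such a scorer; the normalization rules and question formats are assumptions for illustration, not HLE’s published grading code.

```python
def normalize(text: str) -> str:
    # simple normalization; real benchmarks often also strip units, LaTeX, etc.
    return " ".join(text.strip().lower().split())

def is_correct(kind: str, prediction: str, answer: str) -> bool:
    if kind == "multiple_choice":
        # only the chosen option letter matters, e.g. "B"
        return prediction.strip().upper()[:1] == answer.strip().upper()[:1]
    # exact-match: the normalized prediction must completely match the solution
    return normalize(prediction) == normalize(answer)

def grade(responses: dict, answer_key: dict) -> float:
    """Return the fraction of questions answered correctly."""
    correct = sum(
        is_correct(kind, responses.get(qid, ""), ans)
        for qid, (kind, ans) in answer_key.items()
    )
    return correct / len(answer_key)
```

With a two-question key, `grade({"q1": "2.7 Percent", "q2": "B"}, {"q1": ("exact", "2.7 percent"), "q2": ("multiple_choice", "C")})` scores one of two answers correct, returning 0.5.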
They chose questions in a multi-step process to gauge difficulty and originality. Roughly 70,000 submissions were tested on multiple AI models. Only those that stumped models advanced to the next stage, where experts judged their usefulness for AI evaluation using strict guidelines.
The team has released 2,500 questions from the HLE collection. They’ve kept the rest private to prevent AI systems from gaming the test and outperforming on questions they’ve seen before.
When the team first released the test in early 2025, leading AI models from Google, OpenAI, and Anthropic scored in the single digits. The test subsequently caught the eye of AI companies, and many adopted it to demonstrate the performance of new releases. Newer algorithms have shown some improvement, though even leading models still struggle. OpenAI’s GPT-4o scored a measly 2.7 percent, whereas GPT-5’s success rate increased to 25 percent.
Like IQ tests and standardized college admission exams, HLE has come under fire. Some people object to the test’s bombastic name, which could lead the general public to misunderstand an AI’s capabilities compared to human experts.
Others question what the test actually measures. Expertise across a wide range of academic fields and model improvement are obvious answers. However, HLE’s current curation inherently limits “the most challenging and meaningful questions that human experts engage with,” which require thoughtful responses, often across disciplines, that can hardly be captured with short answers or multiple-choice questions, wrote Collins and Tenenbaum.
Expertise also involves far more than answering existing questions. Beyond solving a given problem, experts can also evaluate whether the question makes sense—for example, if it has answers the test-maker didn’t consider—and gauge how confident they are of their answers.
“Humanity is not contained in any static test, but in our ability to continually evolve both in asking and answering questions we never, in our wildest dreams, thought we would—generation after generation,” Subbarao Kambhampati, former president of the Association for the Advancement of Artificial Intelligence, who was not involved in the study, wrote on X.
And although an increase in HLE score could be due to fundamental advances in a model, it could also be because model-makers gave an algorithm extra training on the public dataset—like studying the previous year’s exam questions before a test. In this case, the exam mainly reflects the AI’s test performance, not that it has gained expertise or “intelligence.”
The HLE team embraces these criticisms and is continuing to improve the benchmark. Others are developing completely different scales. Using human tests to benchmark AI has been the norm, but researchers are looking into other ways that could better capture an AI’s scientific creativity or collaborative thinking with humans in the real world. A consensus on AI intelligence, and how to measure it, remains a hot topic for debate.
Despite its shortcomings, HLE is a useful way to measure AI expertise. But looking forward, “as the authors note, their project will ideally make itself obsolete by forcing the development of innovative paradigms for AI evaluation,” wrote Collins and Tenenbaum.
The post Humanity’s Last Exam Stumps Top AI Models—and That’s a Good Thing appeared first on SingularityHub.
2026-02-03 07:49:07
Robotaxis have been more expensive than conventional ride-hailing, with longer wait times. A study by Obi suggests that may be changing.
Robotaxis have long promised cheaper trips and shorter wait times, but so far, providers have struggled to match traditional platforms. New pricing and timing data from San Francisco shows that driverless services are now narrowing the gap with Uber and Lyft.
While it’s been possible to hail a driverless taxi in the US since 2020, they have long felt like an expensive novelty. Tourists and tech enthusiasts often piled in for some not-so-cheap thrills, but higher prices and longer wait times meant few people were relying on them on a regular basis.
But a new study from ride-hailing price aggregator Obi suggests that may be about to change. Data on nearly 100,000 rides in San Francisco between Thanksgiving and New Year’s Day shows Waymo is now much more competitive with Uber and Lyft on both cost and availability. And while Tesla’s robotaxis still require a human safety driver and wait times remain long, the company is now undercutting everyone on price.
“That’s the biggest change to me,” Ashwini Anburajan, CEO of Obi, told Business Insider. “It’s the convergence in price as well as the reduced wait times because now you can actually compare them. It’s a more honest comparison between the three platforms.”
The last time Obi analyzed data on these two key metrics was in June 2025, when it found that Waymo rides cost 30 to 40 percent more than conventional ride-hailing. By late 2025, that premium had shrunk: Waymo was just 12.7 percent more expensive than Uber and 27.3 percent more than Lyft. And for longer rides, between 2.7 and 5.8 miles, the gap nearly disappears, with Waymo only 2 percent pricier than Uber and 17 percent more than Lyft.
Tesla, on the other hand, is now the cheapest service by a significant margin. The average Tesla ride costs $8.17 and rarely exceeds $10, compared to Lyft’s $15.47 average, Uber’s $17.47, and Waymo’s $19.69, which suggests the company is making a concerted play to boost its market share.
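The premium percentages reported in the study follow directly from these average fares. A quick sanity check (fares are from the Obi figures quoted above; the one-decimal rounding convention is an assumption):

```python
# average fares per ride from the Obi study
avg_price = {"tesla": 8.17, "lyft": 15.47, "uber": 17.47, "waymo": 19.69}

def premium(service: str, baseline: str) -> float:
    """Percent by which `service` is pricier than `baseline`, to one decimal."""
    return round(
        (avg_price[service] - avg_price[baseline]) / avg_price[baseline] * 100, 1
    )

print(premium("waymo", "uber"))  # 12.7, matching the reported Uber premium
print(premium("waymo", "lyft"))  # 27.3, matching the reported Lyft premium
```

The same arithmetic shows how aggressive Tesla’s pricing is: the average Uber ride costs more than double the average Tesla ride.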
“They’re using the playbook that Uber and Lyft used when they first came into the market—dramatically lower pricing, undercutting what’s existing in the market, and really just driving adoption,” Anburajan told Business Insider.
It could be a winning strategy. Price remains the top concern for customers in a survey Obi conducted as part of the research. However, Tesla is lagging considerably on their second biggest priority—wait times.
Tesla operates fewer than 200 vehicles across a 400-square-mile area, and its average wait time is 15.32 minutes—roughly three times longer than its competitors’. Waymo, on the other hand, is within touching distance of the traditional ride-hailing companies, with an average wait of 5.74 minutes, compared to Lyft’s 4.20 minutes and Uber’s industry-leading 3.28 minutes.
Obi also notes that Waymo’s longer average wait time is largely due to a capacity crunch during the 4 pm to 6 pm rush. During less busy periods, in particular early in the morning, Waymo often has the lowest wait times of all service providers.
Perhaps most importantly, the study discovered consumer attitudes towards driverless technology appear to be shifting. Obi’s survey found 63 percent of adults in areas with robotaxi services are now comfortable or somewhat comfortable with self-driving cars, up from just 35 percent in the previous survey.
Attitudes towards safety have also turned around significantly. Last year, only 30.8 percent of people said they believed autonomous rideshares would be safer than regular taxis within five years, but in the latest survey this jumped to 52.5 percent.
While the research suggests robotaxis are rapidly making up ground on their conventional counterparts, it remains to be seen whether they can fully close the gap in a consumer segment where a few minutes or dollars makes all the difference to customers. But if they can keep up the momentum, it may not be long until there are fewer human drivers on the road.
The post Waymo Closes in on Uber and Lyft Prices, as More Riders Say They Trust Robotaxis appeared first on SingularityHub.
2026-01-31 23:00:00
A Yann LeCun–Linked Startup Charts a New Path to AGI
Joel Khalili | Wired ($)
“As the world’s largest companies pour hundreds of billions of dollars into large language models, San Francisco-based Logical Intelligence is trying something different in pursuit of AI that can mimic the human brain. …The road to AGI, Bodnia contends, begins with the layering of these different types of AI: LLMs will interface with humans in natural language, EBMs will take up reasoning tasks, while world models will help robots take action in 3D space.”
Google Project Genie Lets You Create Interactive Worlds From a Photo or Prompt
Ryan Whitwam | Ars Technica
“World models are exactly what they sound like—an AI that generates a dynamic environment on the fly. …The system first generates a still image, and from that you can generate the world. This is what Google calls ‘world sketching.'”
The First Human Test of a Rejuvenation Method Will Begin ‘Shortly’
Antonio Regalado | MIT Technology Review ($)
“[Life Biosciences] plans to try to treat eye disease with a radical rejuvenation concept called ‘reprogramming’ that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech. The technique attempts to restore cells to a healthier state by broadly resetting their epigenetic controls—switches on our genes that determine which are turned on and off.”
The Wall Street Star Betting His Reputation on Robots and Flying Cars
Becky Peterson | The Wall Street Journal ($)
“Jonas will guide the bank’s clients on what he’s calling the ‘Cambrian explosion of bots’—a time in the not-so-distant-future in which fully autonomous vehicles, drones, humanoids and industrial robots grow large enough in population to rival the human race. His theory is deceptive in its simplicity: Anything that can be automated will be automated, he says, even humans.”
Mapping 6,000 Worlds: The New Era of Exoplanetary Data
Eliza Strickland | IEEE Spectrum
“[Astronomers can now] compare planet sizes, masses, and compositions; track how tightly planets orbit their stars; and measure the prevalence of different kinds of planetary systems. Those statistics allow astronomers to estimate how frequently planets form, and to start making informed guesses about how often conditions arise that could support life. The Drake Equation uses such estimates to tackle one of humanity’s most profound questions: Are we alone in the universe?”
Stratospheric Internet Could Finally Start Taking Off This Year
Tereza Pultarova | MIT Technology Review ($)
“Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery.”
Waymo Robotaxi Hits a Child Near a School, Causing Minor Injuries
Andrew J. Hawkins | The Verge
“In a blog post, Waymo said its vehicle was traveling at 17mph when its autonomous system detected the child and then ‘braked hard,’ reducing its speed to 6mph before ‘contact was made.’ The child ‘stood up immediately, walked to the sidewalk,’ and Waymo said it called 911. ‘The vehicle moved to the side of the road, and stayed there until law enforcement cleared the vehicle to leave the scene,’ it said.”
Ex-OpenAI Researcher’s Startup Targets Up to $1 Billion in Funding to Develop a New Type of AI
Stephanie Palazzolo and Wayne Ma | The Information ($)
“[Jerry] Tworek represents a small but growing group of AI researchers who believe the field needs an overhaul because today’s most popular model development techniques seem unlikely to be able to develop advanced AI that can achieve major breakthroughs in biology, medicine and other fields while also managing to avoid silly mistakes.”
Waymo’s Price Premium To Lyft and Uber Is Closing, Report Finds
Anita Ramaswamy | The Information ($)
“The average price to ride in Waymo’s robotaxis has dropped by 3.6% since March to $19.69 per ride, according to a new report by ride-hailing analytics firm Obi. Riding in a Waymo is now, on average, 12.7% more expensive than riding in an Uber and 27.4% more expensive than riding in a Lyft, down from a 30% to 40% premium for Waymo rides last April, the month covered by Obi’s previous report.”
The post This Week’s Awesome Tech Stories From Around the Web (Through January 31) appeared first on SingularityHub.
2026-01-31 06:02:40
Our universe does not simply exist in time. Time is something the universe continuously writes into itself.
Time feels like the most basic feature of reality. Seconds tick, days pass, and everything from planetary motion to human memory seems to unfold along a single, irreversible direction. We are born and we die, in exactly that order. We plan our lives around time, measure it obsessively, and experience it as an unbroken flow from past to future. It feels so obvious that time moves forward that questioning it can seem almost pointless.
And yet, for more than a century, physics has struggled to say what time actually is. This struggle is not philosophical nitpicking. It sits at the heart of some of the deepest problems in science.
Modern physics relies on several different, but equally important, frameworks. One is Albert Einstein’s theory of general relativity, which describes the gravity and motion of large objects such as planets. Another is quantum mechanics, which rules the microcosmos of atoms and particles. And on an even larger scale, the standard model of cosmology describes the birth and evolution of the universe as a whole. All rely on time, yet they treat it in incompatible ways.
When physicists try to combine these theories into a single framework, time often behaves in unexpected and troubling ways. Sometimes it stretches. Sometimes it slows. Sometimes it disappears entirely.
Einstein’s theory of relativity was, in fact, the first major blow to our everyday intuition about time. Time, Einstein showed, is not universal. It runs at different speeds depending on gravity and motion. Two observers moving relative to one another will disagree about which events happened at the same time. Time became something elastic, woven together with space into a four-dimensional fabric called spacetime.
Quantum mechanics made things even stranger. In quantum theory, time is not something the theory explains. It is simply assumed. The equations of quantum mechanics describe how systems evolve with respect to time, but time itself remains an external parameter, a background clock that sits outside the theory.
This mismatch becomes acute when physicists try to describe gravity at the quantum level, a step that is crucial for developing the much-coveted theory of everything linking the main fundamental theories. But in many attempts to create such a theory, time vanishes as a parameter from the fundamental equations altogether. The universe appears frozen, described by equations that make no reference to change.
This puzzle is known as the problem of time, and it remains one of the most persistent obstacles to a unified theory of physics. Despite enormous progress in cosmology and particle physics, we still lack a clear explanation for why time flows at all.
Now a relatively new approach to physics, built on the mathematical framework of information theory developed by Claude Shannon in the 1940s, has started coming up with surprising answers.
When physicists try to explain the direction of time, they often turn to a concept called entropy. The second law of thermodynamics states that disorder tends to increase. A glass can fall and shatter into a mess, but the shards never spontaneously leap back together. This asymmetry between past and future is often identified with the arrow of time.
This idea has been enormously influential. It explains why many processes are irreversible, including why we remember the past but not the future. If the universe started in a state of low entropy and is getting messier as it evolves, that appears to explain why time moves forward. But entropy does not fully solve the problem of time.
For one thing, the fundamental quantum mechanical equations of physics do not distinguish between past and future. The arrow of time emerges only when we consider large numbers of particles and statistical behavior. This also raises a deeper question: Why did the universe start in such a low-entropy state to begin with? Statistically, there are more ways for a universe to have high entropy than low entropy, just as there are more ways for a room to be messy than tidy. So why would it start in a state that is so improbable?
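The statistical character of this arrow is easy to demonstrate with a toy model. The sketch below is an Ehrenfest-style two-box gas, purely illustrative: all particles start in one box (a low-entropy state), each step moves one randomly chosen particle across, and the entropy of the left/right split climbs toward its maximum and, for large particle numbers, essentially never returns.

```python
import math
import random

def ehrenfest(n_particles: int = 1000, steps: int = 5000, seed: int = 1):
    """Two connected boxes; each step, one randomly chosen particle hops
    to the other box. Start with all particles in the left box."""
    rng = random.Random(seed)
    left = n_particles
    history = [left]
    for _ in range(steps):
        # a uniformly chosen particle is in the left box with prob left/n
        if rng.random() < left / n_particles:
            left -= 1
        else:
            left += 1
        history.append(left)
    return history

def entropy(left: int, n: int) -> float:
    # Shannon entropy of the left/right split, in bits (max 1 at 50/50)
    p = left / n
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

h = ehrenfest()
# entropy starts at 0 and rises toward 1 bit as the split approaches 50/50
```

Each individual hop is as reversible as its mirror image; the one-way behavior appears only in the statistics of many particles, which is exactly the point made above.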
Over the past few decades, a quiet but far-reaching revolution has taken place in physics. Information, once treated as an abstract bookkeeping tool used to track states or probabilities, has increasingly been recognized as a physical quantity in its own right, just like matter or radiation. While entropy measures how many microscopic states are possible, information measures how physical interactions limit and record those possibilities.
This shift did not happen overnight. It emerged gradually, driven by puzzles at the intersection of thermodynamics, quantum mechanics, and gravity, where treating information as merely mathematical began to produce contradictions.
One of the earliest cracks appeared in black hole physics. When Stephen Hawking showed that black holes emit thermal radiation, it raised a disturbing possibility: Information about whatever falls into a black hole might be permanently lost as heat. That conclusion conflicted with quantum mechanics, which demands that the entirety of information be preserved.
Resolving this tension forced physicists to confront a deeper truth. Information is not optional. If we want a full description of the universe that includes quantum mechanics, information cannot simply disappear without undermining the foundations of physics. This realization had profound consequences. It became clear that information has thermodynamic cost, that erasing it dissipates energy, and that storing it requires physical resources.
In parallel, surprising connections emerged between gravity and thermodynamics. It was shown that Einstein’s equations can be derived from thermodynamic principles that link spacetime geometry directly to entropy and information. In this view, gravity doesn’t behave exactly like a fundamental force.
Instead, gravity appears to be what physicists call “emergent”—a phenomenon describing something that’s greater than the sum of its parts, arising from more fundamental constituents. Take temperature. We can all feel it, but on a fundamental level, a single particle can’t have temperature. It’s not a fundamental feature. Instead it only emerges as a result of many molecules moving collectively.
Similarly, gravity can be described as an emergent phenomenon, arising from statistical processes. Some physicists have even suggested that gravity itself may emerge from information, reflecting how information is distributed, encoded, and processed.
These ideas invite a radical shift in perspective. Instead of treating spacetime as primary, and information as something that lives inside it, information may be the more fundamental ingredient from which spacetime itself emerges. Building on this research, my colleagues and I have explored a framework in which spacetime itself acts as a storage medium for information—and it has important consequences for how we view time.
In this approach, spacetime is not perfectly smooth, as relativity suggests, but composed of discrete elements, each with a finite capacity to record quantum information from passing particles and fields. These elements are not bits in the digital sense, but physical carriers of quantum information, capable of retaining memory of past interactions.
A useful way to picture them is to think of spacetime like a material made of tiny, memory-bearing cells. Just as a crystal lattice can store defects that appeared earlier in time, these microscopic spacetime elements can retain traces of the interactions that have passed through them. They are not particles in the usual sense described by the standard model of particle physics, but a more fundamental layer of physical structure that particle physics operates on rather than explains.
This has an important implication. If spacetime records information, then its present state reflects not only what exists now, but everything that has happened before. Regions that have experienced more interactions carry a different imprint of information than regions that have experienced fewer. The universe, in this view, does not merely evolve according to timeless laws applied to changing states. It remembers.
This memory is not metaphorical. Every physical interaction leaves an informational trace. Although the basic equations of quantum mechanics can be run forwards or backwards in time, real interactions never happen in isolation. They inevitably involve surroundings, leak information outward and leave lasting records of what has occurred. Once this information has spread into the wider environment, recovering it would require undoing not just a single event, but every physical change it caused along the way. In practice, that is impossible.
This is why information cannot be erased and broken cups do not reassemble. But the implication runs deeper. Each interaction writes something permanent into the structure of the universe, whether at the scale of atoms colliding or galaxies forming.
Geometry and information turn out to be deeply connected in this view. In our work, we have shown that how spacetime curves depends not only on mass and energy, as Einstein taught us, but also on how quantum information, particularly entanglement, is distributed. Entanglement is a quantum process that mysteriously links particles in distant regions of space—it enables them to share information despite the distance. And these informational links contribute to the effective geometry experienced by matter and radiation.
From this perspective, spacetime geometry is not just a response to what exists at a given moment, but to what has happened. Regions that have recorded many interactions tend, on average, to behave as if they curve more strongly, have stronger gravity, than regions that have recorded fewer.
This reframing subtly changes the role of spacetime. Instead of being a neutral arena in which events unfold, spacetime becomes an active participant. It stores information, constrains future dynamics and shapes how new interactions can occur. This naturally raises a deeper question. If spacetime records information, could time emerge from this recording process rather than being assumed from the start?
Recently, we extended this informational perspective to time itself. Rather than treating time as a fundamental background parameter, we showed that temporal order emerges from irreversible information imprinting. In this view, time is not something added to physics by hand. It arises because information is written in physical processes and, under the known laws of thermodynamics and quantum physics, cannot be globally unwritten again. The idea is simple but far-reaching.
Every interaction, such as two particles colliding, writes information into the universe. These imprints accumulate. Because they cannot be erased, they define a natural ordering of events. Earlier states are those with fewer informational records. Later states are those with more.
Quantum equations do not prefer a direction of time, but the process of information spreading does. Once information has been spread out, there is no physical path back to a state in which it was localized. Temporal order is therefore anchored in this irreversibility, not in the equations themselves.
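As a loose illustration (a toy model of my own, not the authors' mathematics), imagine a miniature universe in which every interaction appends a record to an immutable ledger. The record count alone is enough to order events, with fewer records meaning earlier:

```python
import random

def informational_clock(n_particles=5, n_steps=20, seed=0):
    """Toy model: every collision writes an immutable record to a ledger.
    A state's 'informational time' is simply how many records exist."""
    rng = random.Random(seed)
    ledger = []                    # only ever grows; nothing is erased
    ticks = []
    for _ in range(n_steps):
        a, b = rng.sample(range(n_particles), 2)  # two particles interact
        ledger.append((a, b))      # the interaction leaves a trace
        ticks.append(len(ledger))  # record count after this event
    return ticks

ticks = informational_clock()
# Record counts strictly increase, so they order events: fewer = earlier
assert all(later > earlier for earlier, later in zip(ticks, ticks[1:]))
```

The point of the toy is only that a monotonically growing record gives a direction, even though nothing in the collision rule itself singles one out.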
Time, in this view, is not something that exists independently of physical processes. It is the cumulative record of what has happened. Each interaction adds a new entry, and the arrow of time reflects the fact that this record only grows.
The future differs from the past because the universe contains more information about the past than it ever can about the future. This explains why time has a direction without relying on special, low-entropy initial conditions or purely statistical arguments. As long as interactions occur and information is irreversibly recorded, time advances.
Interestingly, this accumulated imprint of information may have observable consequences. The unknown substance called dark matter was introduced to explain why galaxies and galaxy clusters rotate faster than their visible mass alone would allow. At galactic scales, the residual information imprint behaves like an additional gravitational component, shaping how galaxies rotate without invoking new particles.
In the informational picture, this extra gravitational pull does not come from invisible dark matter, but from the fact that spacetime itself has recorded a long history of interactions. Regions that have accumulated more informational imprints respond more strongly to motion and curvature, effectively boosting their gravity. Stars orbit faster not because more mass is present, but because the spacetime they move through carries a heavier informational memory of past interactions.
From this viewpoint, dark matter, dark energy and the arrow of time may all arise from a single underlying process: the irreversible accumulation of information.
But could we ever test this theory? Ideas about time are often accused of being philosophical rather than scientific. Because time is so deeply woven into how we describe change, it is easy to assume that any attempt to rethink it must remain abstract. An informational approach, however, makes concrete predictions and connects directly to systems we can observe, model, and in some cases experimentally probe.
Black holes provide a natural testing ground because they seem to suggest information is erased, in apparent conflict with the principle that information cannot be destroyed. In the informational framework, this conflict is resolved by recognizing that information is not destroyed but imprinted into spacetime before crossing the horizon. The black hole records it.
This has an important implication for time. As matter falls toward a black hole, interactions intensify and information imprinting accelerates. Time continues to advance locally because information continues to be written, even as classical notions of space and time break down near the horizon and appear to slow or freeze for distant observers.
As the black hole evaporates through Hawking radiation, the accumulated informational record does not vanish. Instead, it affects how radiation is emitted. The radiation should carry subtle signs that reflect the black hole’s history. In other words, the outgoing radiation is not perfectly random. Its structure is shaped by the information previously recorded in spacetime. Detecting such signs remains beyond current technology, but they provide a clear target for future theoretical and observational work.
The same principles can be explored in much smaller, controlled systems. In laboratory experiments with quantum computers, qubits (the quantum computer equivalent of bits) can be treated as finite-capacity information cells, just like the spacetime ones. Researchers have shown that even when the underlying quantum equations are reversible, the way information is written, spread, and retrieved can generate an effective arrow of time in the lab. These experiments allow physicists to test how information storage limits affect reversibility, without needing cosmological or astrophysical systems.
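The lab effect described above can be mimicked with a few lines of linear algebra. The sketch below is a standard textbook calculation, not a model of any specific experiment: a "system" qubit and an "environment" qubit evolve under a reversible CNOT gate, the joint state stays pure, yet the system alone ends up maximally mixed once its information has spread into the correlations:

```python
import numpy as np

# System qubit in |+>, one environment qubit in |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
psi = np.kron(plus, zero)            # joint pure state, 4-dimensional

# CNOT: system controls, environment is target (copies which-basis info out)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi_after = CNOT @ psi               # still a pure, perfectly reversible step

def system_purity(state):
    """Purity Tr(rho^2) of the system qubit after tracing out the environment."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    rho_sys = np.trace(rho, axis1=1, axis2=3)  # sum over environment index
    return float(np.real(np.trace(rho_sys @ rho_sys)))

print(system_purity(psi))        # ~1.0: no information has leaked yet
print(system_purity(psi_after))  # ~0.5: the phase now lives in correlations
```

The global evolution is unitary and could in principle be undone, yet from the system's own viewpoint the purity drop looks irreversible: exactly the asymmetry the article describes.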
Extensions of the same framework suggest that informational imprinting is not limited to gravity. It may play a role across all fundamental forces of nature, including electromagnetism and the nuclear forces. If this is correct, then time’s arrow should ultimately be traceable to how all interactions record information, not just gravitational ones. Testing this would involve looking for limits on reversibility or information recovery across different physical processes.
Taken together, these examples show that informational time is not an abstract reinterpretation. It links black holes, quantum experiments, and fundamental interactions through a shared physical mechanism, one that can be explored, constrained, and potentially falsified as our experimental reach continues to grow.
Ideas about information do not replace relativity or quantum mechanics. In everyday conditions, informational time closely tracks the time measured by clocks. For most practical purposes, the familiar picture of time works extremely well. The difference appears in regimes where conventional descriptions struggle.
Near black hole horizons or during the earliest moments of the universe, the usual notion of time as a smooth, external coordinate becomes ambiguous. Informational time, by contrast, remains well defined as long as interactions occur and information is irreversibly recorded.
All this may leave you wondering what time really is. This shift reframes the longstanding debate. The question is no longer whether time must be assumed as a fundamental ingredient of the universe, but whether it reflects a deeper underlying process.
In this view, the arrow of time can emerge naturally from physical interactions that record information and cannot be undone. Time, then, is not a mysterious background parameter standing apart from physics. It is something the universe generates internally through its own dynamics. It is not ultimately a fundamental part of reality, but emerges from more basic constituents such as information.
Whether this framework turns out to be a final answer or a stepping stone remains to be seen. Like many ideas in fundamental physics, it will stand or fall based on how well it connects theory to observation. But it already suggests a striking change in perspective.
The universe does not simply exist in time. Time is something the universe continuously writes into itself.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Is Time a Fundamental Part of Reality? A Quiet Revolution in Physics Suggests Not appeared first on SingularityHub.
2026-01-30 06:46:16
Thousands of scientists are already experimenting with the AI to study cancer and brain disorders.
DNA stores the body’s operating playbook. Some genes encode proteins. Other sections change a cell’s behavior by regulating which genes are turned on or off. For yet others, the dark matter of the genome, the purpose remains mysterious—if they have any at all.
Normally, these genetic instructions conduct the symphony of proteins and molecules that keep cells humming along. But even a tiny typo can throw molecular programs into chaos. Scientists have painstakingly connected many DNA mutations—some in genes, others in regulatory regions—to a range of humanity’s most devastating diseases. But a full understanding of the genome remains out of reach, largely because of its overwhelming complexity.
AI could help. In a paper published this week in Nature, Google DeepMind formally unveiled AlphaGenome, a tool that predicts how mutations shape gene expression. The model takes in up to one million DNA letters—an unprecedented length—and simultaneously analyzes 11 types of genomic mutations that could torpedo the way genes are supposed to function.
Built on a previous iteration called Enformer, AlphaGenome stands out for its ability to predict the purpose of DNA letters in non-coding regions of the genome, which largely remain mysterious.
Computational gene expression prediction tools already exist, but they’re usually tailored to one type of genetic change and its consequences. AlphaGenome is a jack-of-all-trades that tracks multiple gene expression mechanisms, allowing researchers to rapidly capture a comprehensive picture of a given mutation and potentially speed up therapeutic development.
Since its initial launch last June, roughly 3,000 scientists from 160 countries have experimented with the AI to study a range of diseases including cancer, infections, and neurodegenerative disorders, said DeepMind’s Pushmeet Kohli in a press briefing.
AlphaGenome is now available for non-commercial use through a free online portal, and the DeepMind team also plans to release the model to scientists so they can customize it for their research.
“We see AlphaGenome as a tool for understanding what the functional elements in the genome do, which we hope will accelerate our fundamental understanding of the code of life,” said study author Natasha Latysheva in the news conference.
Our genetic blueprint seems simple. DNA consists of four basic molecules represented by the letters A, T, C, and G. These letters are grouped in threes called codons. Most codons call for the production of an amino acid, a type of molecule the body strings together into proteins. Mutations can prevent the cell from making healthy proteins, potentially causing disease.
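To make the codon picture concrete, here is a toy translator with a deliberately tiny, partial codon table (real tables cover all 64 codons). The GAG-to-GTG flip mirrors the well-known sickle-cell mutation, in which a single letter change swaps glutamate for valine:

```python
# Minimal codon table (partial; for illustration only)
CODON_TABLE = {
    "ATG": "Met", "GAG": "Glu", "GTG": "Val",
    "CAC": "His", "CTG": "Leu", "ACT": "Thr",
    "TAA": "Stop",
}

def translate(dna):
    """Read DNA three letters at a time, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "Stop":
            break
        protein.append(aa)
    return "-".join(protein)

healthy = "ATGGAGTAA"      # codons: ATG, GAG, TAA
mutant  = "ATGGTGTAA"      # one A->T flip turns GAG (Glu) into GTG (Val)
print(translate(healthy))  # Met-Glu
print(translate(mutant))   # Met-Val
```

One changed letter, one changed amino acid, and potentially a very different protein.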
The actual genetic playbook is far more complex.
When scientists pieced together the first draft of the human genome in the early 2000s, they were surprised by how little of it directed protein manufacturing. Just two percent of our DNA encoded proteins. The other 98 percent didn’t seem to do much, earning the nickname “junk DNA.”
Over time, however, scientists have realized those non-coding letters have a say about when and in which cells a gene is turned on. These regions were originally thought to be physically close to the gene they regulated. But DNA snippets thousands of letters away can also control gene expression, making it tough to hunt them down and figure out what they do.
It gets messier.
Cells first transcribe genes into messenger molecules that shuttle DNA instructions to the cell’s protein factories. Along the way, in a process called splicing, some sequences are cut out and the rest stitched together. This lets a single gene create multiple proteins with different purposes. Think of it as multiple cuts of the same movie: The edits result in different but still-coherent storylines. Many rare genetic diseases are caused by splicing errors, but it’s been hard to predict where a gene is spliced.
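The movie-cut analogy can be sketched in code. The snippet below uses invented exon sequences purely for illustration; in real cells the choice of which pieces to keep is guided by signals in the RNA itself:

```python
# Toy gene: "exons" are the pieces that can appear in the final message
# (sequences invented for illustration)
exons = ["ATGGCC", "GAAACT", "TGGTAA"]

def splice(keep):
    """Join the selected exons, as alternative splicing would."""
    return "".join(exon for exon, k in zip(exons, keep) if k)

full    = splice([True, True, True])   # all three exons kept
skipped = splice([True, False, True])  # middle (cassette) exon skipped
print(full)     # ATGGCCGAAACTTGGTAA
print(skipped)  # ATGGCCTGGTAA
```

Two different "cuts" of the same gene, each a plausible protein recipe, from one underlying DNA sequence.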
Then there’s the accessibility problem. DNA strands are tightly wrapped around a protein spool. This can physically block the proteins involved in gene expression from latching on. Some molecules dock onto tiny bits of DNA and tug them away from the spool to provide access, but these sites are tough to hunt down.
The DeepMind team thought AI would be well-suited to take a crack at these problems.
“The genome is like the recipe of life,” said Kohli in a press briefing. “And really understanding ‘What is the effect of changing any part of the recipe?’ is what AlphaGenome sort of looks at.”
Previous work linking genes to function inspired AlphaGenome. It works in three steps. The first detects short patterns of DNA letters. Next the algorithm communicates this information across the entire analyzed DNA section. In the final step, AlphaGenome maps detected patterns into predictions like, for example, how a mutation affects splicing.
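Those three steps follow a pattern common to modern sequence models. The sketch below is emphatically not AlphaGenome's architecture, which DeepMind describes only at a high level; it is a generic toy with random weights showing how a local pattern detector, a global mixing pass, and per-position output heads fit together:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(dna):
    """Encode A/C/G/T as 4-channel one-hot vectors."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(dna), 4))
    for i, base in enumerate(dna):
        out[i, idx[base]] = 1.0
    return out

def local_motifs(x, W):
    """Step 1: a 1-D convolution detects short letter patterns."""
    k = W.shape[0]                  # kernel width in letters
    n = x.shape[0] - k + 1
    return np.stack([np.tensordot(x[i:i + k], W, axes=([0, 1], [0, 1]))
                     for i in range(n)])

def global_mixing(h):
    """Step 2: self-attention lets every position see the whole sequence."""
    scores = h @ h.T / np.sqrt(h.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ h

def heads(h, W_out):
    """Step 3: task heads map features to per-position predictions."""
    return h @ W_out

seq = "ATGGCCGAAACTTGG"
x = one_hot(seq)
W  = rng.normal(size=(5, 4, 8))    # width-5 kernel, 8 motif detectors
Wo = rng.normal(size=(8, 2))       # e.g. two scores per position
pred = heads(global_mixing(local_motifs(x, W)), Wo)
print(pred.shape)                  # (11, 2): a prediction pair per position
```

The real model is vastly larger and trained on experimental data, but the division of labor, local pattern detection followed by long-range communication followed by task-specific outputs, matches the three steps described above.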
The team trained AlphaGenome on a variety of publicly available genetic libraries amassed by biologists over the past decade. Each captures overlapping aspects of gene expression, including differences between cell types and species. AlphaGenome can analyze sequences that are as long as a million DNA letters from humans or mice. It can then predict a range of molecular outcomes at the resolution of single letter changes.
“Long sequence context is important for covering regions regulating genes from far away,” wrote the team in a blog post. The algorithm’s high resolution captures “fine-grained biological details.” Older methods often sacrifice one for the other; AlphaGenome optimizes both.
The AI is also extremely versatile. It can make sense of 11 different gene regulation processes at once. When pitted against state-of-the-art programs, each focused on just one of these processes, AlphaGenome was as good or better across the board. It readily detected areas engaged in splicing and scored how much DNA letter changes would likely affect gene expression.
In one test, the AI tracked down DNA mutations roughly 8,000 letters away from a gene involved in blood cancer. Normally, the gene helps immune cells mature so they can fight off infections. Then it turns off. But mutations can keep it switched on, causing immune cells to replicate out of control and turn cancerous. That the AI could predict the impact of these far-off DNA influences showcases its genome-deciphering potential.
There are limitations, however. The algorithm struggles to capture the roles of regulatory regions over 100,000 DNA letters away. And while it can predict molecular outcomes of mutations—for example, what proteins are made—it can’t gauge how they cause complex diseases, which involve environmental and other factors. It’s also not set up to predict the impact of DNA mutations for any particular individual.
Still, AlphaGenome is a baseline model that scientists can fine-tune for their area of research, provided there’s enough well-organized data to further train the AI.
“This work is an exciting step forward in illuminating the ‘dark genome.’ We still have a long way to go in understanding the lengthy sequences of our DNA that don’t directly encode the protein machinery whose constant whirring keeps us healthy,” said Rivka Isaacson at King’s College London, who was not involved in the work. “AlphaGenome gives scientists whole new and vast datasets to sift and scavenge for clues.”
The post Google DeepMind AI Decodes the Genome a Million ‘Letters’ at a Time appeared first on SingularityHub.