2026-01-31 23:00:00
A Yann LeCun–Linked Startup Charts a New Path to AGI
Joel Khalili | Wired ($)
“As the world’s largest companies pour hundreds of billions of dollars into large language models, San Francisco-based Logical Intelligence is trying something different in pursuit of AI that can mimic the human brain. …The road to AGI, Bodnia contends, begins with the layering of these different types of AI: LLMs will interface with humans in natural language, EBMs will take up reasoning tasks, while world models will help robots take action in 3D space.”
Google Project Genie Lets You Create Interactive Worlds From a Photo or Prompt
Ryan Whitwam | Ars Technica
“World models are exactly what they sound like—an AI that generates a dynamic environment on the fly. …The system first generates a still image, and from that you can generate the world. This is what Google calls ‘world sketching.'”
The First Human Test of a Rejuvenation Method Will Begin ‘Shortly’
Antonio Regalado | MIT Technology Review ($)
“[Life Biosciences] plans to try to treat eye disease with a radical rejuvenation concept called ‘reprogramming’ that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech. The technique attempts to restore cells to a healthier state by broadly resetting their epigenetic controls—switches on our genes that determine which are turned on and off.”
The Wall Street Star Betting His Reputation on Robots and Flying Cars
Becky Peterson | The Wall Street Journal ($)
“Jonas will guide the bank’s clients on what he’s calling the ‘Cambrian explosion of bots’—a time in the not-so-distant-future in which fully autonomous vehicles, drones, humanoids and industrial robots grow large enough in population to rival the human race. His theory is deceptive in its simplicity: Anything that can be automated will be automated, he says, even humans.”
Mapping 6,000 Worlds: The New Era of Exoplanetary Data
Eliza Strickland | IEEE Spectrum
“[Astronomers can now] compare planet sizes, masses, and compositions; track how tightly planets orbit their stars; and measure the prevalence of different kinds of planetary systems. Those statistics allow astronomers to estimate how frequently planets form, and to start making informed guesses about how often conditions arise that could support life. The Drake Equation uses such estimates to tackle one of humanity’s most profound questions: Are we alone in the universe?”
Stratospheric Internet Could Finally Start Taking Off This Year
Tereza Pultarova | MIT Technology Review ($)
“Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery.”
Waymo Robotaxi Hits a Child Near a School, Causing Minor Injuries
Andrew J. Hawkins | The Verge
“In a blog post, Waymo said its vehicle was traveling at 17mph when its autonomous system detected the child and then ‘braked hard,’ reducing its speed to 6mph before ‘contact was made.’ The child ‘stood up immediately, walked to the sidewalk,’ and Waymo said it called 911. ‘The vehicle moved to the side of the road, and stayed there until law enforcement cleared the vehicle to leave the scene,’ it said.”
Ex-OpenAI Researcher’s Startup Targets Up to $1 Billion in Funding to Develop a New Type of AI
Stephanie Palazzolo and Wayne Ma | The Information ($)
“[Jerry] Tworek represents a small but growing group of AI researchers who believe the field needs an overhaul because today’s most popular model development techniques seem unlikely to be able to develop advanced AI that can achieve major breakthroughs in biology, medicine and other fields while also managing to avoid silly mistakes.”
Waymo’s Price Premium To Lyft and Uber Is Closing, Report Finds
Anita Ramaswamy | The Information ($)
“The average price to ride in Waymo’s robotaxis has dropped by 3.6% since March to $19.69 per ride, according to a new report by ride-hailing analytics firm Obi. Riding in a Waymo is now, on average, 12.7% more expensive than riding in an Uber and 27.4% more expensive than riding in a Lyft, down from a 30% to 40% premium for Waymo rides last April, the month covered by Obi’s previous report.”
The post This Week’s Awesome Tech Stories From Around the Web (Through January 31) appeared first on SingularityHub.
2026-01-31 06:02:40
Our universe does not simply exist in time. Time is something the universe continuously writes into itself.
Time feels like the most basic feature of reality. Seconds tick, days pass, and everything from planetary motion to human memory seems to unfold along a single, irreversible direction. We are born and we die, in exactly that order. We plan our lives around time, measure it obsessively, and experience it as an unbroken flow from past to future. It feels so obvious that time moves forward that questioning it can seem almost pointless.
And yet, for more than a century, physics has struggled to say what time actually is. This struggle is not philosophical nitpicking. It sits at the heart of some of the deepest problems in science.
Modern physics relies on different, but equally important, frameworks. One is Albert Einstein’s theory of general relativity, which describes the gravity and motion of large objects such as planets. Another is quantum mechanics, which rules the microcosmos of atoms and particles. And on an even larger scale, the standard model of cosmology describes the birth and evolution of the universe as a whole. All rely on time, yet they treat it in incompatible ways.
When physicists try to combine these theories into a single framework, time often behaves in unexpected and troubling ways. Sometimes it stretches. Sometimes it slows. Sometimes it disappears entirely.
Einstein’s theory of relativity was, in fact, the first major blow to our everyday intuition about time. Time, Einstein showed, is not universal. It runs at different speeds depending on gravity and motion. Two observers moving relative to one another will disagree about which events happened at the same time. Time became something elastic, woven together with space into a four-dimensional fabric called spacetime.
Quantum mechanics made things even stranger. In quantum theory, time is not something the theory explains. It is simply assumed. The equations of quantum mechanics describe how systems evolve with respect to time, but time itself remains an external parameter, a background clock that sits outside the theory.
This mismatch becomes acute when physicists try to describe gravity at the quantum level, a step crucial for developing the much-coveted theory of everything that would link the main fundamental theories. But in many attempts to create such a theory, time vanishes as a parameter from the fundamental equations altogether. The universe appears frozen, described by equations that make no reference to change.
This puzzle is known as the problem of time, and it remains one of the most persistent obstacles to a unified theory of physics. Despite enormous progress in cosmology and particle physics, we still lack a clear explanation for why time flows at all.
Now a relatively new approach to physics, building on a mathematical framework called information theory, developed by Claude Shannon in the 1940s, has started coming up with surprising answers.
When physicists try to explain the direction of time, they often turn to a concept called entropy. The second law of thermodynamics states that disorder tends to increase. A glass can fall and shatter into a mess, but the shards never spontaneously leap back together. This asymmetry between past and future is often identified with the arrow of time.
This idea has been enormously influential. It explains why many processes are irreversible, including why we remember the past but not the future. If the universe started in a state of low entropy and is getting messier as it evolves, that appears to explain why time moves forward. But entropy does not fully solve the problem of time.
For one thing, the fundamental quantum mechanical equations of physics do not distinguish between past and future. The arrow of time emerges only when we consider large numbers of particles and statistical behaviour. This also raises a deeper question: why did the universe start in such a low-entropy state to begin with? Statistically, there are more ways for a universe to have high entropy than low entropy, just as there are more ways for a room to be messy than tidy. So why would it start in a state that is so improbable?
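One way to see how a statistical arrow can emerge from reversible rules is with a toy simulation: particles hop at random between the two halves of a box under a rule that runs just as happily backwards, yet a simple counting entropy almost always rises when the number of particles is large. The sketch below is purely illustrative and is not drawn from the research described in this article.

```python
import math
import random

def toy_entropy(n_left, n_total):
    """Boltzmann-style entropy: log of the number of ways to split
    n_total distinguishable particles with n_left on the left side."""
    return math.log(math.comb(n_total, n_left))

def simulate(n_total=1000, steps=5000, seed=0):
    """Start with every particle on the left, then repeatedly pick a
    random particle and flip which side it is on (a reversible rule)."""
    rng = random.Random(seed)
    left = set(range(n_total))          # all particles start on the left
    history = []
    for _ in range(steps):
        p = rng.randrange(n_total)      # pick any particle
        if p in left:
            left.remove(p)              # move it to the right
        else:
            left.add(p)                 # or move it back left
        history.append(toy_entropy(len(left), n_total))
    return history

entropies = simulate()
print(f"start: {entropies[0]:.1f}  end: {entropies[-1]:.1f}")
# With many particles the entropy climbs toward its maximum and stays
# near it; with only a handful of particles it regularly fluctuates back
# down, which is why the arrow of time is a statistical, many-particle effect.
```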
Over the past few decades, a quiet but far-reaching revolution has taken place in physics. Information, once treated as an abstract bookkeeping tool used to track states or probabilities, has increasingly been recognized as a physical quantity in its own right, just like matter or radiation. While entropy measures how many microscopic states are possible, information measures how physical interactions limit and record those possibilities.
This shift did not happen overnight. It emerged gradually, driven by puzzles at the intersection of thermodynamics, quantum mechanics, and gravity, where treating information as merely mathematical began to produce contradictions.
One of the earliest cracks appeared in black hole physics. When Stephen Hawking showed that black holes emit thermal radiation, it raised a disturbing possibility: Information about whatever falls into a black hole might be permanently lost as heat. That conclusion conflicted with quantum mechanics, which demands that the entirety of information be preserved.
Resolving this tension forced physicists to confront a deeper truth. Information is not optional. If we want a full description of the universe that includes quantum mechanics, information cannot simply disappear without undermining the foundations of physics. This realization had profound consequences. It became clear that information has thermodynamic cost, that erasing it dissipates energy, and that storing it requires physical resources.
In parallel, surprising connections emerged between gravity and thermodynamics. It was shown that Einstein’s equations can be derived from thermodynamic principles that link spacetime geometry directly to entropy and information. In this view, gravity doesn’t behave exactly like a fundamental force.
Instead, gravity appears to be what physicists call “emergent”—a phenomenon describing something that’s greater than the sum of its parts, arising from more fundamental constituents. Take temperature. We can all feel it, but on a fundamental level, a single particle can’t have temperature. It’s not a fundamental feature. Instead it only emerges as a result of many molecules moving collectively.
Similarly, gravity can be described as an emergent phenomenon, arising from statistical processes. Some physicists have even suggested that gravity itself may emerge from information, reflecting how information is distributed, encoded, and processed.
These ideas invite a radical shift in perspective. Instead of treating spacetime as primary, and information as something that lives inside it, information may be the more fundamental ingredient from which spacetime itself emerges. Building on this research, my colleagues and I have explored a framework in which spacetime itself acts as a storage medium for information—and it has important consequences for how we view time.
In this approach, spacetime is not perfectly smooth, as relativity suggests, but composed of discrete elements, each with a finite capacity to record quantum information from passing particles and fields. These elements are not bits in the digital sense, but physical carriers of quantum information, capable of retaining memory of past interactions.
A useful way to picture them is to think of spacetime like a material made of tiny, memory-bearing cells. Just as a crystal lattice can store defects that appeared earlier in time, these microscopic spacetime elements can retain traces of the interactions that have passed through them. They are not particles in the usual sense described by the standard model of particle physics, but a more fundamental layer of physical structure that particle physics operates on rather than explains.
This has an important implication. If spacetime records information, then its present state reflects not only what exists now, but everything that has happened before. Regions that have experienced more interactions carry a different imprint of information than regions that have experienced fewer. The universe, in this view, does not merely evolve according to timeless laws applied to changing states. It remembers.
This memory is not metaphorical. Every physical interaction leaves an informational trace. Although the basic equations of quantum mechanics can be run forwards or backwards in time, real interactions never happen in isolation. They inevitably involve surroundings, leak information outward and leave lasting records of what has occurred. Once this information has spread into the wider environment, recovering it would require undoing not just a single event, but every physical change it caused along the way. In practice, that is impossible.
This is why information cannot be erased and broken cups do not reassemble. But the implication runs deeper. Each interaction writes something permanent into the structure of the universe, whether at the scale of atoms colliding or galaxies forming.
Geometry and information turn out to be deeply connected in this view. In our work, we have shown that how spacetime curves depends not only on mass and energy, as Einstein taught us, but also on how quantum information, particularly entanglement, is distributed. Entanglement is a quantum process that mysteriously links particles in distant regions of space—it enables them to share information despite the distance. And these informational links contribute to the effective geometry experienced by matter and radiation.
From this perspective, spacetime geometry is not just a response to what exists at a given moment, but to what has happened. Regions that have recorded many interactions tend, on average, to behave as if they curve more strongly (that is, as if they have stronger gravity) than regions that have recorded fewer.
This reframing subtly changes the role of spacetime. Instead of being a neutral arena in which events unfold, spacetime becomes an active participant. It stores information, constrains future dynamics and shapes how new interactions can occur. This naturally raises a deeper question. If spacetime records information, could time emerge from this recording process rather than being assumed from the start?
Recently, we extended this informational perspective to time itself. Rather than treating time as a fundamental background parameter, we showed that temporal order emerges from irreversible information imprinting. In this view, time is not something added to physics by hand. It arises because information is written in physical processes and, under the known laws of thermodynamics and quantum physics, cannot be globally unwritten again. The idea is simple but far-reaching.
Every interaction, such as two particles colliding, writes information into the universe. These imprints accumulate. Because they cannot be erased, they define a natural ordering of events. Earlier states are those with fewer informational records. Later states are those with more.
Quantum equations do not prefer a direction of time, but the process of information spreading does. Once information has been spread out, there is no physical path back to a state in which it was localized. Temporal order is therefore anchored in this irreversibility, not in the equations themselves.
Time, in this view, is not something that exists independently of physical processes. It is the cumulative record of what has happened. Each interaction adds a new entry, and the arrow of time reflects the fact that this record only grows.
The future differs from the past because the universe contains more information about the past than it ever can about the future. This explains why time has a direction without relying on special, low-entropy initial conditions or purely statistical arguments. As long as interactions occur and information is irreversibly recorded, time advances.
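This ordering claim can be phrased almost algorithmically: if every interaction appends a record that is never erased, then snapshots of the universe's ledger can be put back in temporal order from their contents alone. The sketch below is a toy piece of bookkeeping meant only to illustrate that logic, not a physical model.

```python
import random

rng = random.Random(42)

# Each interaction writes one permanent, never-erased record.
ledger = set()
snapshots = []
for step in range(10):
    ledger.add(f"interaction-{step}-{rng.randrange(10**6)}")
    snapshots.append(frozenset(ledger))   # the ledger as it looks "now"

# Shuffle the snapshots, then recover their order purely from content.
shuffled = snapshots[:]
rng.shuffle(shuffled)
recovered = sorted(shuffled, key=len)     # fewer records = earlier

print(recovered == snapshots)             # True: the order is implied by the records
# "Earlier" snapshots are strict subsets of "later" ones, so the arrow of
# time in this toy is nothing but the growth of the irreversible record.
```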
Interestingly, this accumulated imprint of information may have observable consequences. At galactic scales, the residual information imprint behaves like an additional gravitational component, shaping how galaxies rotate without invoking new particles. Indeed, the unknown substance called dark matter was introduced to explain why galaxies and galaxy clusters rotate faster than their visible mass alone would allow.
In the informational picture, this extra gravitational pull does not come from invisible dark matter, but from the fact that spacetime itself has recorded a long history of interactions. Regions that have accumulated more informational imprints respond more strongly to motion and curvature, effectively boosting their gravity. Stars orbit faster not because more mass is present, but because the spacetime they move through carries a heavier informational memory of past interactions.
From this viewpoint, dark matter, dark energy and the arrow of time may all arise from a single underlying process: the irreversible accumulation of information.
But could we ever test this theory? Ideas about time are often accused of being philosophical rather than scientific. Because time is so deeply woven into how we describe change, it is easy to assume that any attempt to rethink it must remain abstract. An informational approach, however, makes concrete predictions and connects directly to systems we can observe, model, and in some cases experimentally probe.
Black holes provide a natural testing ground, as they seem to suggest information is erased. In the informational framework, this conflict is resolved by recognizing that information is not destroyed but imprinted into spacetime before crossing the horizon. The black hole records it.
This has an important implication for time. As matter falls toward a black hole, interactions intensify and information imprinting accelerates. Time continues to advance locally because information continues to be written, even as classical notions of space and time break down near the horizon and appear to slow or freeze for distant observers.
As the black hole evaporates through Hawking radiation, the accumulated informational record does not vanish. Instead, it affects how radiation is emitted. The radiation should carry subtle signs that reflect the black hole’s history. In other words, the outgoing radiation is not perfectly random. Its structure is shaped by the information previously recorded in spacetime. Detecting such signs remains beyond current technology, but they provide a clear target for future theoretical and observational work.
The same principles can be explored in much smaller, controlled systems. In laboratory experiments with quantum computers, qubits (the quantum computer equivalent of bits) can be treated as finite-capacity information cells, just like the spacetime ones. Researchers have shown that even when the underlying quantum equations are reversible, the way information is written, spread, and retrieved can generate an effective arrow of time in the lab. These experiments allow physicists to test how information storage limits affect reversibility, without needing cosmological or astrophysical systems.
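A standard way to make that effective arrow quantitative is to track how entangled a small subsystem becomes with everything else: the global evolution stays perfectly reversible, yet the entropy of a single qubit's reduced state typically climbs toward its maximum and stays there. The numpy sketch below illustrates the idea with a small random circuit; it is a generic toy, not a reproduction of any specific experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # number of simulated qubits
dim = 2 ** n

def random_unitary(d):
    """Haar-style random unitary via QR decomposition of a Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def apply_two_qubit(state, u, a, b):
    """Apply a 4x4 unitary u to qubits a and b of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, [a, b], [0, 1]).reshape(4, -1)
    psi = (u @ psi).reshape([2, 2] + [2] * (n - 2))
    psi = np.moveaxis(psi, [0, 1], [a, b])
    return psi.reshape(dim)

def qubit0_entropy(state):
    """Von Neumann entropy (in bits) of the first qubit's reduced state."""
    psi = state.reshape(2, -1)
    rho = psi @ psi.conj().T            # partial trace over the other qubits
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(max(0.0, -(evals * np.log2(evals)).sum()))

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                          # all qubits start in |0>: no entanglement
for step in range(20):
    a, b = rng.choice(n, size=2, replace=False)
    state = apply_two_qubit(state, random_unitary(4), int(a), int(b))
    print(step, round(qubit0_entropy(state), 3))
# The whole evolution is unitary, so it is reversible in principle, yet the
# single qubit's entropy quickly climbs toward its one-bit maximum and stays
# close to it: effective irreversibility from information spreading, with no
# erasure anywhere.
```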
Extensions of the same framework suggest that informational imprinting is not limited to gravity. It may play a role across all fundamental forces of nature, including electromagnetism and the nuclear forces. If this is correct, then time’s arrow should ultimately be traceable to how all interactions record information, not just gravitational ones. Testing this would involve looking for limits on reversibility or information recovery across different physical processes.
Taken together, these examples show that informational time is not an abstract reinterpretation. It links black holes, quantum experiments, and fundamental interactions through a shared physical mechanism, one that can be explored, constrained, and potentially falsified as our experimental reach continues to grow.
Ideas about information do not replace relativity or quantum mechanics. In everyday conditions, informational time closely tracks the time measured by clocks. For most practical purposes, the familiar picture of time works extremely well. The difference appears in regimes where conventional descriptions struggle.
Near black hole horizons or during the earliest moments of the universe, the usual notion of time as a smooth, external coordinate becomes ambiguous. Informational time, by contrast, remains well defined as long as interactions occur and information is irreversibly recorded.
All this may leave you wondering what time really is. This shift reframes the longstanding debate. The question is no longer whether time must be assumed as a fundamental ingredient of the universe, but whether it reflects a deeper underlying process.
In this view, the arrow of time can emerge naturally from physical interactions that record information and cannot be undone. Time, then, is not a mysterious background parameter standing apart from physics. It is something the universe generates internally through its own dynamics. It is not ultimately a fundamental part of reality, but emerges from more basic constituents such as information.
Whether this framework turns out to be a final answer or a stepping stone remains to be seen. Like many ideas in fundamental physics, it will stand or fall based on how well it connects theory to observation. But it already suggests a striking change in perspective.
The universe does not simply exist in time. Time is something the universe continuously writes into itself.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Is Time a Fundamental Part of Reality? A Quiet Revolution in Physics Suggests Not appeared first on SingularityHub.
2026-01-30 06:46:16
Thousands of scientists are already experimenting with the AI to study cancer and brain disorders.
DNA stores the body’s operating playbook. Some genes encode proteins. Other sections change a cell’s behavior by regulating which genes are turned on or off. For yet others, the dark matter of the genome, the purpose remains mysterious—if they have one at all.
Normally, these genetic instructions conduct the symphony of proteins and molecules that keep cells humming along. But even a tiny typo can throw molecular programs into chaos. Scientists have painstakingly connected many DNA mutations—some in genes, others in regulatory regions—to a range of humanity’s most devastating diseases. But a full understanding of the genome remains out of reach, largely because of its overwhelming complexity.
AI could help. In a paper published this week in Nature, Google DeepMind formally unveiled AlphaGenome, a tool that predicts how mutations shape gene expression. The model takes in up to one million DNA letters—an unprecedented length—and simultaneously predicts how a mutation could torpedo 11 different gene regulation processes that determine the way genes are supposed to function.
Built on a previous iteration called Enformer, AlphaGenome stands out for its ability to predict the purpose of DNA letters in non-coding regions of the genome, which largely remain mysterious.
Computational gene expression prediction tools already exist, but they’re usually tailored to one type of genetic change and its consequences. AlphaGenome is a jack-of-all-trades that tracks multiple gene expression mechanisms, allowing researchers to rapidly capture a comprehensive picture of a given mutation and potentially speed up therapeutic development.
Since its initial launch last June, roughly 3,000 scientists from 160 countries have experimented with the AI to study a range of diseases including cancer, infections, and neurodegenerative disorders, said DeepMind’s Pushmeet Kohli in a press briefing.
AlphaGenome is now available for non-commercial use through a free online portal, but the DeepMind team plans to release the model to scientists so they can customize it for their research.
“We see AlphaGenome as a tool for understanding what the functional elements in the genome do, which we hope will accelerate our fundamental understanding of the code of life,” said study author Natasha Latysheva in the news conference.
Our genetic blueprint seems simple. DNA consists of four basic molecules represented by the letters A, T, C, and G. These letters are grouped in threes called codons. Most codons call for the production of an amino acid, a type of molecule the body strings together into proteins. Mutations can prevent the cell from making healthy proteins and potentially cause disease.
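To make that reading frame concrete, here is a minimal sketch that translates a short stretch of DNA codon by codon and shows how a single-letter change swaps one amino acid for another. The lookup table covers only a handful of the 64 codons, and the sequences are made up for illustration.

```python
# A small slice of the standard genetic code (DNA coding-strand codons).
CODON_TABLE = {
    "ATG": "Met",   # also the usual "start" signal
    "GAG": "Glu",
    "GTG": "Val",
    "AAA": "Lys",
    "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the sequence three letters at a time and look up each codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    return [CODON_TABLE.get(c, "???") for c in codons]

healthy = "ATGGAGAAA"          # Met-Glu-Lys
mutant  = "ATGGTGAAA"          # one letter changed: A -> T in the second codon
print(translate(healthy))      # ['Met', 'Glu', 'Lys']
print(translate(mutant))       # ['Met', 'Val', 'Lys']
# A single-letter substitution (GAG -> GTG) swaps glutamate for valine,
# the kind of change that can alter how the finished protein behaves.
```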
The actual genetic playbook is far more complex.
When scientists pieced together the first draft of the human genome in the early 2000s, they were surprised by how little of it directed protein manufacturing. Just two percent of our DNA encoded proteins. The other 98 percent didn’t seem to do much, earning the nickname “junk DNA.”
Over time, however, scientists have realized those non-coding letters have a say about when and in which cells a gene is turned on. These regions were originally thought to be physically close to the gene they regulated. But DNA snippets thousands of letters away can also control gene expression, making it tough to hunt them down and figure out what they do.
It gets messier.
Cells transcribe genes into messenger molecules that shuttle DNA instructions to the cell’s protein factories. Along the way, in a process called splicing, some sequences are skipped. This lets a single gene create multiple proteins with different purposes. Think of it as multiple cuts of the same movie: The edits result in different but still-coherent storylines. Many rare genetic diseases are caused by splicing errors, but it’s been hard to predict where a gene is spliced.
Then there’s the accessibility problem. DNA strands are tightly wrapped around a protein spool. This makes it physically impossible for the proteins involved in gene expression to latch on. Some molecules dock onto tiny bits of DNA and tug them away from the spool to provide access, but the sites are tough to hunt down.
The DeepMind team thought AI would be well-suited to take a crack at these problems.
“The genome is like the recipe of life,” said Kohli in a press briefing. “And really understanding ‘What is the effect of changing any part of the recipe?’ is what AlphaGenome sort of looks at.”
Previous work linking genes to function inspired AlphaGenome. It works in three steps. The first detects short patterns of DNA letters. Next the algorithm communicates this information across the entire analyzed DNA section. In the final step, AlphaGenome maps detected patterns into predictions, such as how a mutation affects splicing.
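That three-step shape (local pattern detection, long-range communication, then task-specific prediction heads) is a common pattern in genomics models and is easy to sketch. The PyTorch snippet below is only a schematic illustration with made-up layer sizes and head names, not DeepMind's actual architecture or code.

```python
import torch
import torch.nn as nn

class ToyGenomeModel(nn.Module):
    """Schematic conv -> attention -> heads model for one-hot DNA input."""

    def __init__(self, channels=128, heads=4):
        super().__init__()
        # Step 1: convolutions detect short local DNA motifs.
        self.local = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Step 2: self-attention shares information across the whole window,
        # so distant regulatory elements can influence each position.
        self.attend = nn.MultiheadAttention(channels, heads, batch_first=True)
        # Step 3: separate heads map the shared representation to different
        # per-position predictions (e.g. expression, splicing, accessibility).
        self.heads = nn.ModuleDict({
            "expression": nn.Linear(channels, 1),
            "splicing": nn.Linear(channels, 1),
            "accessibility": nn.Linear(channels, 1),
        })

    def forward(self, one_hot_dna):             # (batch, 4, length)
        x = self.local(one_hot_dna)             # (batch, channels, length)
        x = x.transpose(1, 2)                   # (batch, length, channels)
        x, _ = self.attend(x, x, x)             # long-range mixing
        return {name: head(x) for name, head in self.heads.items()}

# Toy usage: one random "sequence" 1,000 letters long, one-hot encoded.
model = ToyGenomeModel()
dna = torch.zeros(1, 4, 1000)
dna[0, torch.randint(0, 4, (1000,)), torch.arange(1000)] = 1.0
outputs = model(dna)
print({k: tuple(v.shape) for k, v in outputs.items()})
# To score a variant, one would run the reference and mutated sequences
# through the model and compare the predicted tracks.
```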
The team trained AlphaGenome on a variety of publicly available genetic libraries amassed by biologists over the past decade. Each captures overlapping aspects of gene expression, including differences between cell types and species. AlphaGenome can analyze sequences that are as long as a million DNA letters from humans or mice. It can then predict a range of molecular outcomes at the resolution of single letter changes.
“Long sequence context is important for covering regions regulating genes from far away,” wrote the team in a blog post. The algorithm’s high resolution captures “fine-grained biological details.” Older methods often sacrifice one for the other; AlphaGenome optimizes both.
The AI is also extremely versatile. It can make sense of 11 different gene regulation processes at once. When pitted against state-of-the-art programs, each focused on just one of these processes, AlphaGenome was as good or better across the board. It readily detected areas engaged in splicing and scored how much DNA letter changes would likely affect gene expression.
In one test, the AI tracked down DNA mutations roughly 8,000 letters away from a gene involved in blood cancer. Normally, the gene helps immune cells mature so they can fight off infections. Then it turns off. But mutations can keep it switched on, causing immune cells to replicate out of control and turn cancerous. That the AI could predict the impact of these far-off DNA influences showcases its genome-deciphering potential.
There are limitations, however. The algorithm struggles to capture the roles of regulatory regions over 100,000 DNA letters away. And while it can predict molecular outcomes of mutations—for example, what proteins are made—it can’t gauge how they cause complex diseases, which involve environmental and other factors. It’s also not set up to predict the impact of DNA mutations for any particular individual.
Still, AlphaGenome is a baseline model that scientists can fine-tune for their area of research, provided there’s enough well-organized data to further train the AI.
“This work is an exciting step forward in illuminating the ‘dark genome.’ We still have a long way to go in understanding the lengthy sequences of our DNA that don’t directly encode the protein machinery whose constant whirring keeps us healthy,” said Rivka Isaacson at King’s College London, who was not involved in the work. “AlphaGenome gives scientists whole new and vast datasets to sift and scavenge for clues.”
The post Google DeepMind’s AlphaGenome Decodes the Genome a Million ‘Letters’ at a Time appeared first on SingularityHub.
2026-01-28 09:06:17
A study tested several AI models and 100,000 people. AI was better than average but trailed top performers.
Creativity is a trait that AI critics say is likely to remain the preserve of humans for the foreseeable future. But a large-scale study finds that leading generative language models can now exceed the average human performance on linguistic creativity tests.
The question of whether machines can be creative has gained new salience in recent years thanks to the rise of AI tools that can generate text and images with both fluency and style. While many experts say true creativity is impossible without lived experience of the world, the increasingly sophisticated outputs of these models challenge that idea.
In an effort to take a more objective look at the issue, researchers at the Université de Montréal, including AI pioneer Yoshua Bengio, conducted what they say is the largest comparative evaluation of machine and human creativity to date. The team compared outputs from leading AI models against responses from 100,000 human participants using a standardized psychological test for creativity and found that the best models now outperform the average human, though they still trail top performers by a significant margin.
“This result may be surprising—even unsettling—but our study also highlights an equally important observation: even the best AI systems still fall short of the levels reached by the most creative humans,” Karim Jerbi, who led the study, said in a press release.
The test at the heart of the study, published in Scientific Reports, is known as the Divergent Association Task and involves participants generating 10 words with meanings as distinct from one another as possible. The higher the average semantic distance between the words, the higher the score.
Performance on this test in humans correlates with other well-established creativity tests that focus on idea generation, writing, and creative problem solving. But crucially, it is also quick to complete, which allowed the researchers to test a much larger cohort of humans over the internet.
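Scoring the task is mechanical once each word has a vector representation: take the pairwise semantic distances between the words and average them. The snippet below sketches that calculation in numpy; the tiny hand-made vectors are stand-ins for whatever pretrained word embeddings (such as GloVe) an actual implementation would use.

```python
from itertools import combinations
import numpy as np

def dat_score(words, embeddings):
    """Average pairwise cosine distance between the words' vectors.
    `embeddings` maps each word to a 1-D numpy vector."""
    vecs = [np.asarray(embeddings[w], dtype=float) for w in words]
    distances = []
    for a, b in combinations(vecs, 2):
        cosine_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        distances.append(1.0 - cosine_sim)
    return 100.0 * float(np.mean(distances))    # scale by 100 for readability

# Made-up vectors just to show the mechanics: near-synonyms score low,
# unrelated words score higher.
toy_embeddings = {
    "cat":     np.array([1.0, 0.1, 0.0]),
    "kitten":  np.array([0.9, 0.2, 0.0]),
    "algebra": np.array([0.0, 0.1, 1.0]),
    "volcano": np.array([0.1, 1.0, 0.2]),
}
print(dat_score(["cat", "kitten"], toy_embeddings))             # low score
print(dat_score(["cat", "algebra", "volcano"], toy_embeddings)) # higher score
```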
What they found was striking. OpenAI’s GPT-4, Google’s Gemini 1.5 Pro, and Meta’s Llama 3 and Llama 4 all outperformed the average human. However, when they measured the average performance of the top 50 percent of human participants, it exceeded all tested models. The gap widened further when they took the average of the top 25 percent and top 10 percent of humans.
The researchers wanted to see if these scores would translate to more complex creative tasks, so they also got the models to generate haikus, movie plot synopses, and flash fiction. They analyzed the outputs using a measure called Divergent Semantic Integration, which estimates the diversity of ideas integrated into a narrative. While the models did relatively well, the team found that human-written samples were still significantly more creative than AI-written ones.
However, the team also discovered they could boost the AI’s creativity with some simple tweaks. The first involved adjusting a model setting called temperature, which controls the randomness of the model’s output. When this was turned all the way up on GPT-4, the model exceeded the creativity scores of 72 percent of human participants.
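Temperature has a simple mathematical meaning: the model's raw scores are divided by the temperature before being converted to probabilities, so higher values flatten the distribution and give less likely words a better chance of being sampled. Below is a minimal numpy illustration with made-up scores for four candidate words.

```python
import numpy as np

def sample_probs(logits, temperature):
    """Softmax with temperature: higher T flattens the distribution."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                  # for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0, 0.5]               # illustrative scores, not real model output
for t in (0.2, 1.0, 2.0):
    print(t, np.round(sample_probs(logits, t), 3))
# At T=0.2 nearly all probability sits on the top word; at T=2.0 the
# alternatives get a real chance, which is why turning temperature up
# tends to make a model's word choices more varied.
```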
The researchers also found that carefully tuning the prompt given to the model helped too. When explicitly instructed to use “a strategy that relies on varying etymology,” both GPT-3.5 and GPT-4 did better than when given the original, less-specific task prompt.
For creative professionals, Jerbi says the persistent gap between top human performers and even the most advanced models should provide some reassurance. But he also thinks the results suggest people should take these models seriously as potential creative collaborators.
“Generative AI has above all become an extremely powerful tool in the service of human creativity,” he says. “It will not replace creators, but profoundly transform how they imagine, explore, and create—for those who choose to use it.”
Either way, the study adds to a growing body of research that is raising uncomfortable questions about what it means to be creative and whether it is a uniquely human trait. Given the strength of feeling around the issue, the study is unlikely to settle the matter, but the findings do mark one of the more concrete attempts to measure the question objectively.
The post AI Now Beats the Average Human in Tests of Creativity appeared first on SingularityHub.
2026-01-25 23:00:00
Aristotle said there were five senses. But he also told us the world was made of five elements, and we no longer believe that.
Stuck in front of our screens all day, we often ignore our senses beyond sound and vision. And yet they are always at work. When we’re more alert we feel the rough and smooth surfaces of objects, the stiffness in our shoulders, the softness of bread.
In the morning, we may feel the tingle of toothpaste, hear and feel the running water in the shower, smell the shampoo, and later the aroma of freshly brewed coffee.
Aristotle told us there were five senses. But he also told us the world was made up of five elements, and we no longer believe that. And modern research is showing we may actually have dozens of senses.
Almost all of our experience is multisensory. We don’t see, hear, smell, and touch in separate parcels. They occur simultaneously in a unified experience of the world around us and of ourselves.
What we feel affects what we see, and what we see affects what we hear. Different odors in shampoo can affect how you perceive the texture of hair. The fragrance of rose makes hair seem silkier, for instance.
Odors in low-fat yogurts can make them feel richer and thicker on the palate without adding more emulsifiers. Perception of odors in the mouth, rising to the nasal passage, is modified by the viscosity of the liquids we consume.
My long-term collaborator, professor Charles Spence from the Crossmodal Laboratory in Oxford, told me his neuroscience colleagues believe there are anywhere between 22 and 33 senses.
These include proprioception, which enables us to know where our limbs are without looking at them. Our sense of balance draws on the vestibular system of ear canals as well as sight and proprioception.
Another example is interoception, by which we sense changes in our own bodies such as a slight increase in our heart rate and hunger. We also have a sense of agency when moving our limbs: a feeling that can go missing in stroke patients who sometimes even believe someone else is moving their arm.
There is the sense of ownership. Stroke patients sometimes feel that a limb, an arm for instance, is not their own, even though they may still feel sensations in it.
Some of the traditional senses are combinations of several senses. Touch, for instance, involves pain, temperature, itch, and tactile sensations. When we taste something, we are actually experiencing a combination of three senses: touch, smell, and taste—or gustation—which combine to produce the flavors we perceive in food and drinks.
Gustation covers sensations produced by receptors on the tongue that enable us to detect salt, sweet, sour, bitter, and umami (savory). What about mint, mango, melon, strawberry, raspberry?
We don’t have raspberry receptors on the tongue, nor is raspberry flavor some combination of sweet, sour, and bitter. There is no taste arithmetic for fruit flavors.
We perceive them through the combined workings of the tongue and the nose. It is smell that contributes the lion’s share to what we call tasting.
This is not inhaling odors from the environment, though. Odor compounds are released as we chew or sip, traveling from the mouth to the nose through the nasal pharynx at the back of the throat.
Touch plays its part too, binding tastes and smells together and fixing our preferences for runny or firm eggs and the velvety, luxurious gooeyness of chocolate.
Sight is influenced by our vestibular system. When you are on board an aircraft on the ground, look down the cabin. Look again when you are in the climb.
It will “look” to you as though the front of the cabin is higher than you are, although optically, everything is in the same relation to you as it was on the ground. What you “see” is the combined effect of sight and your ear canals telling you that you are tilting backwards.
The senses offer a rich seam of research, and philosophers, neuroscientists, and psychologists work together at the Center for the Study of the Senses at the University of London’s School of Advanced Study.
In 2013, the center launched its Rethinking the Senses project, directed by my colleague, the late Professor Sir Colin Blakemore. We discovered how modifying the sound of your own footsteps can make your body feel lighter or heavier.
We learned how audioguides in the Tate Britain art museum that address the listener as if the model in a portrait were speaking enable visitors to remember more visual details of the painting. We discovered how aircraft noise interferes with our perception of taste and why you should always drink tomato juice on a plane.
While our perception of salt, sweet, and sour is reduced in the presence of white noise, umami is not, and tomatoes and tomato juice are rich in umami. This means the aircraft’s noise will actually enhance the savory flavor.
At our latest interactive exhibition, Senses Unwrapped at Coal Drops Yard in London’s King’s Cross, people can discover for themselves how their senses work and why they don’t work as we think they do.
For example, the size-weight illusion is illustrated by a set of small, medium, and large curling stones. People can lift each one and decide which is heaviest. The smallest one feels heaviest, but people can then place them on balancing scales and discover that they are all the same weight.
But there are always plenty of things around you to show how intricate your senses are, if you only pause for a moment to take it all in. So next time you walk outside or savor a meal, take a moment to appreciate how your senses are working together to help you feel all the sensations involved.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Humans Could Have as Many as 33 Senses appeared first on SingularityHub.
2026-01-24 23:00:00
Your First Humanoid Robot Coworker Will Probably Be Chinese
Will Knight | Wired ($)
“[In addition to Unitree] a staggering 200-plus other Chinese companies are also developing humanoids, which recently prompted the Chinese government to warn of overcapacity and unnecessary replication. The US has about 16 prominent firms building humanoids. With stats like that, one can’t help but suspect that the first country to have a million humanoids will be China.”
CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story.
Lindsay Ellis | The Wall Street Journal ($)
“The gulf between senior executives’ and workers’ actual experience with generative AI is vast, according to a new survey from the AI consulting firm Section of 5,000 white-collar workers. Two-thirds of nonmanagement staffers said they saved less than two hours a week or no time at all with AI. More than 40% of executives, in contrast, said the technology saved them more than eight hours of work a week.”
mRNA Cancer Vaccine Shows Protection at 5-Year Follow-Up, Moderna and Merck Say
Beth Mole | Ars Technica
“In a small clinical trial, customized mRNA vaccines against high-risk skin cancers appeared to reduce the risk of cancer recurrence and death by nearly 50 percent over five years when compared with standard treatment alone.”
Not to Be Outdone by OpenAI, Apple Is Reportedly Developing an AI Wearable
Lucas Ropek | TechCrunch
“Apple may be developing its own AI wearable, according to a report published Wednesday by The Information. The device will be a pin that users can wear on their clothing, and that comes equipped with two cameras and three microphones, the report says.”
The Math on AI Agents Doesn’t Add Up
Steven Levy | Wired ($)
“The big AI companies promised us that 2025 would be ‘the year of the AI agents.’ It turned out to be the year of talking about AI agents, and kicking the can for that transformational moment to 2026 or maybe later. But what if the answer to the question ‘When will our lives be fully automated by generative AI robots that perform our tasks for us and basically run the world?’ is, like that New Yorker cartoon, ‘How about never?'”
Extreme Closeup of the ‘Eye of God’ Reveals Fiery Pillars in Stunning Detail
Passant Rabie | Gizmodo
“The Webb space telescope has stared deep into the darkness of the Helix Nebula [nicknamed the Eye of God], revealing layers of gas shed by a dying star to seed the cosmos with future generations of stars and planets. …At its center is a blazing white dwarf—the leftover core of a dying star—releasing an avalanche of material that crashes into a colder surrounding shell of gas and dust.”
China’s Renewable Energy Revolution Is a Huge Mess That Might Save the World
Jeremy Wallace | Wired ($)
“The resulting, onrushing utopia is anything but neat. It is a panorama of coal communities decimated, price wars sweeping across one market after another, and electrical grids destabilizing as they become more central to the energy system. And absolutely no one—least of all some monolithic ‘China’ at the control switch—knows how to deal with its repercussions.”
Zanskar Thinks 1 TW of Geothermal Power Is Being Overlooked
Tim De Chant | TechCrunch
“‘They underestimated how many undiscovered systems there are, maybe by an order of magnitude or more,’ Hoiland said. With modern drilling techniques, ‘you can get a lot more out of each of them, maybe even an order of magnitude or more from each of those. All of a sudden the number goes from tens of gigawatts to what could be a terawatt-scale opportunity.'”
Some Immune Systems Defeat Cancer. Could That Become a Drug?
Gina Kolata | The New York Times ($)
“Dr. Edward Patz, who spent much of his career researching cancer at Duke, has long been intrigued by cancers that are harmless and has thought they might hold important clues for drug development. The result, after years of research, is an experimental drug, tested so far only in small numbers of lung cancer patients.”
Another Jeff Bezos Company Has Announced Plans to Develop a Megaconstellation
Eric Berger | Ars Technica
“The space company founded by Jeff Bezos, Blue Origin, said it was developing a new megaconstellation named TeraWave to deliver data speeds of up to 6Tbps anywhere on Earth. The constellation will consist of 5,408 optically interconnected satellites, with a majority in low-Earth orbit and the remainder in medium-Earth orbit.”
Waymo Continues Robotaxi Ramp up With Miami Service Now Open to Public
Kirsten Korosec | TechCrunch
“The company said Thursday it will initially open the service, on a rolling basis, to the nearly 10,000 local residents on its waitlist. Once accepted, riders will be able to hail a robotaxi within a 60-square-mile service area in Miami that covers neighborhoods such as the Design District, Wynwood, Brickell, and Coral Gables.”
Mars Once Had a Vast Sea the Size of the Arctic Ocean
Taylor Mitchell Brown | New Scientist ($)
“This would have been the largest ocean on Mars. ‘Our research suggests that around 3 billion years ago, Mars may have hosted long-lasting bodies of surface water inside Valles Marineris, the largest canyon in the Solar System,’ says Indi. ‘Even more exciting, these water bodies may have been connected to a much larger ocean that once covered parts of Mars’ northern lowlands.'”
The post This Week’s Awesome Tech Stories From Around the Web (Through January 24) appeared first on SingularityHub.