2025-04-16 02:15:51
Intercepting interstellar objects could transform fleeting encounters into profound scientific opportunities.
In late 2017, a mysterious object tore through our solar system at breakneck speed. Astronomers scrambled to observe the fast-moving body using the world’s most powerful telescopes. It was found to be one quarter mile (400 meters) long and very elongated—perhaps 10 times as long as it was wide. Researchers named it ‘Oumuamua, Hawaiian for “scout.”
‘Oumuamua was later confirmed to be the first object from another star known to have visited our solar system. While these interstellar objects (ISOs) originate around a star, they end up as cosmic nomads, wandering through space. They are essentially planetary shrapnel, having been blasted out of their parent star systems by catastrophic events, such as giant collisions between planetary objects.
Astronomers say that ‘Oumuamua could have been traveling through the Milky Way for hundreds of millions of years before its encounter with our solar system. Just two years after this unexpected visit, a second ISO—Comet Borisov—was spotted, this time by an amateur astronomer in Crimea. These celestial interlopers have given us tantalizing glimpses of material from far beyond our solar system.
But what if we could do more than just watch them fly by?
Studying ISOs up close would offer scientists a rare opportunity to learn more about far-off star systems, which are too distant to send missions to.
There may be over 10 septillion (a 10 followed by 24 zeros) ISOs in the Milky Way alone. But if there are so many of them, why have we only seen two? Put simply, we cannot accurately predict when they will arrive. Large ISOs like ‘Oumuamua, which are more easily detected, do not seem to visit the solar system very often, and they travel incredibly fast.
Ground- and space-based telescopes struggle to respond quickly to incoming ISOs, meaning that we are mostly looking at them after they pass through our cosmic neighborhood. However, innovative space missions could get us closer to objects like ‘Oumuamua, by using breakthroughs in artificial intelligence (AI) to guide spacecraft safely to future visitors. Getting closer means we can get a better understanding of their composition, geology, and activity—gaining insights into the conditions around other stars.
Emerging technologies being used to approach space debris could help us approach other unpredictable objects, transforming these fleeting encounters into profound scientific opportunities. So how do we get close? Speeding past Earth at an average of 32 kilometers per second, ISOs give our spacecraft less than a year to intercept them after detection. Catching up is not impossible—it could be done via gravitational slingshot maneuvers, for example. However, it is difficult and costly and would take years to execute.
The good news is that the first wave of ISO-hunting missions is already in motion: NASA’s mission concept is called Bridge and the European Space Agency (ESA) has a mission called Comet Interceptor. Once an incoming ISO is identified, Bridge would depart Earth to intercept it. However, launching from Earth currently requires a 30-day launch window after detection, which would cost valuable time.
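To see what a 30-day wait actually costs, consider a quick back-of-envelope calculation. This Python sketch uses only the figures quoted above and is illustrative, not mission analysis:

```python
# How far does an ISO travel while a spacecraft waits out a 30-day
# launch window? Illustrative figures from the article text.
AU_KM = 149_597_870.7          # one astronomical unit in kilometers
SECONDS_PER_DAY = 86_400

iso_speed_km_s = 32            # average ISO speed quoted above
window_days = 30               # launch window after detection

distance_km = iso_speed_km_s * window_days * SECONDS_PER_DAY
print(f"ISO covers {distance_km:,.0f} km (~{distance_km / AU_KM:.2f} AU)")
# -> roughly 83 million km, about 0.55 AU (more than half the
#    Earth-sun distance), before the chase can even begin
```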
Comet Interceptor is scheduled for launch in 2029 and comprises a larger spacecraft and two smaller robotic probes. Once launched, it will lie in wait a million miles from Earth, poised to ambush a long-period comet (a slower comet arriving from much further away)—or potentially an ISO. Placing spacecraft in a “storage orbit” allows for rapid deployment when a suitable ISO is detected.
Another proposal, Project Lyra from the Institute for Interstellar Studies, assessed the feasibility of chasing down ‘Oumuamua, which has already sped far beyond Neptune’s orbit. Its authors found that catching up with the object would be possible in theory but very technically challenging.
These missions are a start, but as described, their biggest limitation is speed. To chase down ISOs like ‘Oumuamua, we’ll need to move a lot faster—and think smarter.
Future missions may rely on cutting-edge AI and related fields such as deep learning—which seeks to emulate the decision-making power of the human brain—to identify and respond to incoming objects in real time. Researchers are already testing small spacecraft that operate in coordinated “swarms,” allowing them to image targets from multiple angles and adapt mid-flight.
At the Vera C. Rubin Observatory in Chile, a 10-year survey of the night sky is due to begin soon. This astronomical survey is expected to find dozens of ISOs each year. Simulations suggest we may be on the cusp of a detection boom.
Any spacecraft would need to reach high speeds once an object is spotted and ensure that its energy source doesn’t degrade, potentially after years waiting in “storage orbit.” A number of missions have already utilized a form of propulsion called a solar sail.
These use the pressure of sunlight on a lightweight, reflective sail to push the spacecraft through space, dispensing with the need for heavy fuel tanks. The next generation of solar sail spacecraft could use lasers trained on the sails to reach even higher speeds, offering a nimble, low-cost solution compared to other futuristic options, such as nuclear propulsion.
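The physics behind the appeal is simple to sketch. Sunlight carries momentum, and a perfectly reflective sail feels twice the photon momentum flux that hits it. The Python snippet below works through the numbers for a hypothetical craft; the sail area and mass are assumptions for illustration, not the specs of any planned mission:

```python
# Rough estimate of solar-sail acceleration near Earth's orbit.
SOLAR_FLUX = 1361.0      # W/m^2, solar irradiance at 1 AU
C = 299_792_458.0        # speed of light, m/s

sail_area_m2 = 100.0     # assumed sail area (hypothetical)
craft_mass_kg = 10.0     # assumed spacecraft mass (hypothetical)

force_n = 2 * SOLAR_FLUX * sail_area_m2 / C    # perfect reflection doubles the push
accel_m_s2 = force_n / craft_mass_kg
dv_km_s = accel_m_s2 * 365.25 * 86_400 / 1000  # one year of constant thrust

print(f"Thrust: {force_n * 1e3:.2f} mN, acceleration: {accel_m_s2:.1e} m/s^2")
print(f"Velocity gained in a year: ~{dv_km_s:.1f} km/s")
# Tiny thrust, but it never runs out. It does weaken with the square of
# the distance from the sun, which is part of why lasers are attractive.
```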
A spacecraft approaching an ISO will also need to withstand high temperatures and possibly erosion from dust being ejected from the object as it moves. While traditional shielding materials can protect spacecraft, they add weight and may slow them down.
To address this, researchers are exploring novel technologies for lightweight, durable, erosion-resistant materials, such as advanced carbon fibers. Some could even be 3D printed. They are also looking at innovative uses of traditional materials such as cork and ceramics.
A suite of different approaches is needed, with ground-based telescopes and space-based missions working together to anticipate, chase down, and observe ISOs.
New technology could allow the spacecraft itself to identify and predict the trajectories of incoming objects. However, potential cuts to space science in the US, including to observatories like the James Webb Space Telescope, threaten such progress.
Emerging technologies must be embraced to make an approach and rendezvous with an ISO a real possibility. Otherwise, we will be left scrambling, taking pictures from afar as yet another cosmic wanderer speeds away.
Disclosure statement:
Billy Bryan works on projects at RAND Europe that are funded by the UK Space Agency and DG DEFIS. He is affiliated with RAND Europe’s Space Hub and is lead of the civil space theme, the University of Sussex Students’ Union as a Trustee, and Rocket Science Ltd. as an advisor.
Chris Carter works on projects at RAND Europe that are funded by the UK Space Agency and DG DEFIS. He is affiliated with RAND Europe’s Space Hub and is a researcher in the civil space theme.
Theodora (Teddy) Ogden is a Senior Analyst at RAND Europe, where she works on defense and security issues in space. She was previously a fellow at Arizona State University, and before that was briefly at NATO.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Mystery Objects From Other Stars Are Visiting Our Solar System. These Missions Will Study Them Up Close appeared first on SingularityHub.
2025-04-15 04:48:10
In a first, the 3D reconstruction of a mouse brain links structure to activity.
Let a mouse nose around a house, and it will rapidly find food and form a strategy to return to it without getting caught. Given the same task, an AI would require millions of training examples and consume a boatload of energy and time.
Evolution has crafted the brain to quickly learn and adapt to an ever-changing world. Detailing its algorithms—the ways it processes information as revealed by its structure and wiring—could inspire more advanced AI.
This month, the Machine Intelligence From Cortical Networks (MICrONS) consortium released the most comprehensive map ever assembled of a mammalian brain. The years-long effort painstakingly charted a cubic millimeter of mouse brain—all its cells and connections—and linked this wiring diagram to how the animal sees its world.
Although just the size of a poppyseed, the brain chunk was packed with an “astonishing” 84,000 neurons, half a billion synapses—these are the hubs connecting brain cells—and over 3 miles of neural wiring, wrote Harvard’s Mariela Petkova and Gregor Schuhknecht, who were not involved in the project.
Brain maps are nothing new. Some capture basic anatomy. Others highlight which genes activate as neurons spark with activity. The new dataset, imaged at nanoscale resolution and reconstructed with AI, differs in that it connects the brain’s hardware to how it works.
The project “created the most comprehensive dataset ever assembled that links mammalian brain structure to neuronal functions in an active animal,” wrote Petkova and Schuhknecht.
The new resource could help scientists crack the neural code—the brain’s computational framework. Distilling seemingly random electrical activity into algorithms could illuminate how our brains form memories, perceive the outside world, and make calculated decisions. Similar principles could also inspire future generations of more flexible AI.
“Looking at it [the map] really gives you an awe about the sense of complexity in the brain that is very much akin to looking up at the stars of night,” Forrest Collman at the Allen Institute for Brain Science, who was part of MICrONS, told Nature. The results are “really stunningly beautiful.”
The brain is nature’s most prized computational engine.
Although recent AI advances allow algorithms to learn and adapt faster, the squishy three-pound blob in our heads somehow perceives, learns, and memorizes encounters in a flash using far less energy. It then stores important information to guide future decision-making.
The brain’s internal wiring is the heart of its computational abilities. Neurons and other brain cells dynamically connect to one another through multiple synapses. New learning alters the wiring by tweaking synaptic strength to form memories and generate thoughts.
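As a cartoon of that principle (a toy illustration, not a model of the MICrONS data), here is the classic “Hebbian” rule in Python, in which synapses between co-active neurons are strengthened:

```python
import numpy as np

# Toy Hebbian update: "cells that fire together wire together."
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, size=(4, 4))  # synaptic strengths among 4 neurons

pre = np.array([1.0, 0.0, 1.0, 0.0])       # presynaptic activity pattern
post = np.array([0.0, 1.0, 1.0, 0.0])      # postsynaptic activity pattern
learning_rate = 0.05

# Strengthen each synapse in proportion to the co-activity of its two cells.
weights += learning_rate * np.outer(pre, post)
```

Real synaptic plasticity is far richer than this one-line rule, but the core idea, experience nudging connection strengths, is the same.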
Scientists have already found molecules and genes that connect and change these networks across large brain regions (albeit at low resolution). But a deeper dive into the brain’s neural connections could yield new insights.
Mapping a whole mouse brain at nanoscale resolution is still technologically challenging. Here, the MICrONS team zeroed in on a poppyseed-sized chunk of the visual cortex. Often dubbed “the seat of higher cognition,” the cortex is the most recently evolved brain structure. It supports some of our most treasured abilities: logical thinking, decision-making, and perception.
Despite the cortex’s seemingly different functions, previous theoretical studies have suggested there’s a common wiring “blueprint” embedded across the region.
Deciphering this blueprint is like “working out the principles of a combustion engine by looking at many cars—there are different engine models, but the same fundamental mechanics apply,” wrote Petkova and Schuhknecht. For the brain, we’ll need a cellular parts list and an idea of how they work together.
The project analyzed a tiny chunk of a mouse’s visual cortex sliced into over 28,000 pieces, each more than a thousand times thinner than a human hair.
The sections were imaged with an electron beam to capture nanoscale structures. AI-based software then stitched the individual sections into a 3D recreation of the original brain region, with brain cells, wiring, and synapses each highlighted in different colors.
The map contains over 200,000 brain cells, half a billion synapses, and more than 5.4 kilometers of neural wiring—roughly one and a half times the length of New York City’s Central Park.
Although it’s just a tiny speck of mouse brain, the map pushes the technological limits for mapping brain connections at scale. Previous landmark maps from a roundworm and fruit fly contained a fraction of the total neurons and synapses included in the new release. The only study comparable in volume mapped the human cortex, but with far fewer identified brain cells and synapses.
The dataset is unusual because the team recorded specific activity from the mouse’s brain before imaging it.
The team showed a mouse multiple videos on a screen, including scenes from The Matrix, as it ran on a treadmill. The mouse’s brain had been genetically altered so that any activated neurons emitted a fluorescent light to mark those cells. Almost 76,000 neurons in the visual cortex sparked to life over multiple sessions. This information was then precisely mapped onto the connectome, highlighting individual activated neurons and charting their networks.
“This is where the study truly breaks new ground,” wrote Petkova and Schuhknecht. Rather than compiling a list of brain components, which only maps anatomy, the dataset also decodes functional connections at unprecedented scale.
Other projects have already made use of the dataset. A few showed how the reconstruction can identify different types of neurons. Mapping structural wiring to activity also revealed a recurring circuit—a generic pattern of brain activity—that occurs throughout the cortex. In AI terms, the connections formed a sort of “foundation model” of the brain that can generalize, with the ability to predict neural activity in other mice.
The database isn’t perfect. Most of the wiring was reconstructed using AI, a process that leaned heavily on human editing to find errors. Reconstructing larger samples will need further technological improvements to speed up the process.
Then there are fundamental mysteries of the brain that the new brain map can’t solve. Though it offers a way to tally neural components and their wiring, higher-level computations—for example, comprehending what you’re seeing—could involve neural activity beyond what was captured in the study. And cortex circuits have vast reach, which means the neural connections in the sample are incomplete.
The consortium is releasing the database, along with a new set of AI-based computational tools to link wiring diagrams to neural activity. Meanwhile, they’re planning to use the technology to map larger portions of the brain.
The release “marks a major leap forwards and offers an invaluable community resource for future discoveries in neuroscience,” such as the basic rules of cognition and memory, wrote Petkova and Schuhknecht.
The post Largest Brain Map Ever Reveals Hidden Algorithms of the Mammalian Brain appeared first on SingularityHub.
2025-04-13 00:27:08
A vast galactic survey suggests dark energy may not be constant after all.
The great Russian physicist and Nobel laureate Lev Landau once remarked that “cosmologists are often in error, but never in doubt.” In studying the history of the universe itself, there is always a chance that we have got it all wrong, but we never let this stand in the way of our inquiries.
Last month, a press release announced groundbreaking findings from the Dark Energy Spectroscopic Instrument (DESI), which is installed on the Mayall Telescope in Arizona. This vast survey, containing the positions of 15 million galaxies, constitutes the largest three-dimensional mapping of the universe to date. For context, the light from the most remote galaxies recorded in the DESI catalogue was emitted 11 billion years ago, when the universe was about a fifth of its current age.
DESI researchers studied a feature in the distribution of galaxies that astronomers call “baryon acoustic oscillations.” By comparing it to observations of the very early universe and supernovae, they suggest that dark energy—the mysterious force propelling our universe’s expansion—may not be constant throughout the history of the universe.
An optimistic take on the situation is that sooner or later the nature of dark matter and dark energy will be discovered. The first glimpses of DESI’s results offer at least a small sliver of hope of achieving this.
However, that might not happen. We might search and make no headway in understanding the situation. If that happens, we would need to rethink not just our research, but the study of cosmology itself. We would need to find an entirely new cosmological model, one that works as well as our current one but that also explains this discrepancy. Needless to say, it would be a tall order.
To many who are interested in science this is an exciting, potentially revolutionary prospect. However, this kind of reinvention of cosmology, and indeed all of science, is not new, as argued in the 2023 book The Reinvention of Science.
Back in 1970, Allan Sandage wrote a much-quoted paper pointing to two numbers that bring us closer to answers about the nature of cosmic expansion. His goal was to measure them and discover how they change with cosmic time. Those numbers are the Hubble constant, H₀, and the deceleration parameter, q₀.
The first of these two numbers tells us how fast the universe is expanding. The second is the signature of gravity: as an attractive force, gravity should be pulling against cosmic expansion. Some data has shown a deviation from the Hubble-Lemaître Law, of which Sandage’s second number, q₀, is a measure.
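In standard textbook notation (these definitions are not drawn from the article itself), the two numbers are

$$ v = H_0 d, \qquad q_0 = -\left.\frac{\ddot{a}\,a}{\dot{a}^2}\right|_{t_0}, $$

where $a(t)$ is the cosmic scale factor. At greater distances, $q_0$ shows up as a curvature in the Hubble diagram, for example through the luminosity distance $d_L \approx \frac{c}{H_0}\left[z + \tfrac{1}{2}(1-q_0)\,z^2 + \cdots\right]$.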
No significant deviation from Hubble’s straight line could be found until breakthroughs were made in 1997 by Saul Perlmutter’s Supernova Cosmology Project and the High-Z Supernova Search Team led by Adam Riess and Brian Schmidt. The goal of these projects was to search for and follow supernovae exploding in very distant galaxies.
These projects found a clear deviation from the simple straight line of the Hubble-Lemaître Law, but with one important difference: the universe’s expansion is accelerating, not decelerating. Perlmutter, Riess, and Schmidt attributed this deviation to Einstein’s cosmological constant, which is represented by the Greek letter Lambda, Λ, and is related to the deceleration parameter.
Their work earned them the 2011 Nobel Prize in Physics.
Astonishingly, this Lambda term, also known as dark energy, is the dominant component of the universe. It has been speeding up the universe’s expansion to the point where the force of gravity is overridden, and it accounts for almost 70 percent of the total density of the universe.
We know little or nothing about the cosmological constant, Λ. In fact, we do not even know that it is a constant. Einstein first said there was a constant energy field when he created his first cosmological model derived from General Relativity in 1917, but his solution was neither expanding nor contracting. It was static and unchanging, and so the field had to be constant.
Constructing more sophisticated models that contained this constant field was an easier task: they were derived by the Belgian physicist Georges Lemaître, a friend of Einstein’s. The standard cosmology models today are based on Lemaître’s work and are referred to as Λ Cold Dark Matter (ΛCDM) models.
The DESI measurements on their own are completely consistent with this model. However, when combined with observations of the cosmic microwave background and supernovae, the best-fitting model is one in which dark energy evolved over cosmic time and will (potentially) no longer be dominant in the future. In short, this would mean the cosmological constant does not explain dark energy.
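A common way to express such an evolution is an equation of state that changes with the cosmic scale factor $a$. This is the standard parametrization used in analyses of this kind, though the exact fitted values are not given in this article:

$$ w(a) = w_0 + w_a\,(1 - a). $$

A true cosmological constant corresponds to $w_0 = -1$ and $w_a = 0$; the combined fits described above prefer values that deviate from this, pointing to dark energy that weakens over cosmic time.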
In 1988, P.J.E. Peebles, the 2019 physics Nobel laureate, wrote a paper with Bharat Ratra on the possibility that the cosmological constant varies with time. When they published it, there was no serious body of opinion in favor of Λ.
This is an attractive suggestion. In this case the current phase of accelerated expansion would be transient and would end at some point in the future. Other phases in cosmic history have had a beginning and an end: inflation, the radiation-dominated era, the matter-dominated era, and so on.
The present dominance of dark energy may therefore decline over cosmic time, meaning it would not be a cosmological constant. The new paradigm would imply that the current expansion of the universe could eventually reverse into a “Big Crunch.”
Other cosmologists are more cautious, mindful of Carl Sagan’s wise dictum that “extraordinary claims require extraordinary evidence.” It is crucial to have multiple, independent lines of evidence pointing to the same conclusion. We are not there yet.
Answers may come from one of today’s ongoing projects—not just DESI but also Euclid and J-PAS—which aim to explore the nature of dark energy through large-scale galaxy mapping.
While the workings of the cosmos itself are up for debate, one thing is for sure—a fascinating time for cosmology is on the horizon.
Licia Verde receives funding from the AEI (Spanish State Research Agency) project number PID2022-141125NB-I00, and has previously received funding from the European Research Council. Licia Verde is a member of the DESI collaboration team.
Vicent J. Martínez receives funding from the European Union NextGenerationEU and the Generalitat Valenciana in the 2022 call “Programa de Planes Complementarios de I+D+i”, Project (VAL-JPAS), reference ASFAE/2022/025, the research Project PID2023-149420NB-I00 funded by MICIU/AEI/10.13039/501100011033 and ERDF/EU, and the project of excellence PROMETEO CIPROM/2023/21 of the Conselleria de Educación, Universidades y Empleo (Generalitat Valenciana). He is a member of the Spanish Astronomy Society, the Spanish Royal Physics Society and the Royal Spanish Mathematical Society.
Bernard J.T. Jones and Virginia L Trimble do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Dark Energy Discovery Could Undermine Our Entire Model of Cosmological History appeared first on SingularityHub.
2025-04-11 22:58:15
The AI made a “mental map” of the world to collect the game’s most sought-after material.
My nephew couldn’t stop playing Minecraft when he was seven years old.
One of the most popular games ever, Minecraft is an open world in which players build terrain and craft various items and tools. No one showed him how to navigate the game. But over time, he learned the basics through trial and error, eventually figuring out how to craft intricate designs, such as theme parks and entire working cities and towns. But first, he had to gather materials, some of which—diamonds in particular—are difficult to collect.
Now, a new DeepMind AI can do the same.
Without access to any human gameplay as an example, the AI taught itself the rules, physics, and complex maneuvers needed to mine diamonds. “Applied out of the box, Dreamer is, to our knowledge, the first algorithm to collect diamonds in Minecraft from scratch without human data or curricula,” wrote study author Danijar Hafner in a blog post.
But playing Minecraft isn’t the point. AI scientists have long been after general algorithms that can solve tasks across a wide range of problems—not just the ones they’re trained on. Although some of today’s models can generalize a skill across similar problems, they struggle to transfer those skills across more complex tasks requiring multiple steps.
In the limited world of Minecraft, Dreamer seemed to have that flexibility. After learning a model of its environment, it could “imagine” future scenarios to improve its decision making at each step and ultimately was able to collect that elusive diamond.
The work “is about training a single algorithm to perform well across diverse…tasks,” said Harvard’s Keyon Vafa, who was not involved in the study, to Nature. “This is a notoriously hard problem and the results are fantastic.”
Children naturally soak up their environment. Through trial and error, they quickly learn to avoid touching a hot stove and, by extension, a recently used toaster oven. Dubbed reinforcement learning, this process incorporates experiences—such as “yikes, that hurt”—into a model of how the world works.
A mental model makes it easier to imagine or predict consequences and generalize previous experiences to other scenarios. And when decisions don’t work out, the brain updates its model of the consequences of actions—“I dropped a gallon of milk because it was too heavy for me”—so that kids eventually learn not to repeat the same behavior.
Scientists have adopted the same principles for AI, essentially raising algorithms like children. OpenAI previously developed reinforcement learning algorithms that learned to play the fast-paced multiplayer Dota 2 video game with minimal training. Other such algorithms have learned to control robots capable of solving multiple tasks or beat the hardest Atari games.
Learning from mistakes and wins sounds easy. But we live in a complex world, and even simple tasks, like, say, making a peanut butter and jelly sandwich, involve multiple steps. And if the final sandwich turns into an overloaded, soggy abomination, which step went wrong?
That’s the problem with sparse rewards. We don’t immediately get feedback on every step and action. Reinforcement learning in AI struggles with a similar problem: How can algorithms figure out where their decisions went right or wrong?
Minecraft is a perfect AI training ground.
Players freely explore the game’s vast terrain—farmland, mountains, swamps, and deserts—and harvest specialized materials as they go. In most modes, players use these materials to build intricate structures—from chicken coops to the Eiffel Tower—craft objects like swords and fences, or start a farm.
The game also resets: Every time a player joins a new game, the world map is different, so remembering a previous strategy or place to mine materials doesn’t help. Instead, the player has to learn the world’s physics more generally and how to accomplish goals—say, mining a diamond.
These quirks make the game an especially useful test for AI that can generalize, and the AI community has focused on collecting diamonds as the ultimate challenge. This requires players to complete multiple tasks, from chopping down trees to making pickaxes and carrying water to an underground lava flow.
Kids can learn how to collect diamonds from a 10-minute YouTube video. But in a 2019 competition, AI struggled even after up to four days of training on roughly 1,000 hours of footage from human gameplay.
Algorithms mimicking gamer behavior outperformed those learning purely by reinforcement. At the time, one of the competition’s organizers commented that the latter wouldn’t stand a chance on their own.
Rather than relying on human gameplay, Dreamer explored the game by itself, learning through experimentation to collect a diamond from scratch.
The AI comprises three main neural networks. The first of these models the Minecraft world, building an internal “understanding” of its physics and how actions work. The second network is basically a parent that judges the outcome of the AI’s actions. Was that really the right move? The last network then decides the best next step to collect a diamond.
All three components were simultaneously trained using data from the AI’s previous tries—a bit like a gamer playing again and again as they aim for the perfect run.
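To make the division of labor concrete, here is a heavily simplified sketch in PyTorch. It captures the three-network layout and the imagination loop, but the dimensions, horizon, and training details are illustrative guesses; the published Dreamer uses recurrent, stochastic latent dynamics and far more careful training:

```python
import torch
import torch.nn as nn

STATE, ACTION, HORIZON = 32, 4, 15  # illustrative sizes, not Dreamer's

# (1) World model: predicts the next latent state and the reward.
world_model = nn.Sequential(nn.Linear(STATE + ACTION, 128), nn.ELU(),
                            nn.Linear(128, STATE + 1))
# (2) Critic: judges how promising an imagined state is.
critic = nn.Sequential(nn.Linear(STATE, 128), nn.ELU(), nn.Linear(128, 1))
# (3) Actor: picks the next action.
actor = nn.Sequential(nn.Linear(STATE, 128), nn.ELU(), nn.Linear(128, ACTION))

def imagined_return(state, gamma=0.99):
    """Roll the world model forward under the actor's policy and
    accumulate predicted rewards, bootstrapping the tail with the critic."""
    total, discount = 0.0, 1.0
    for _ in range(HORIZON):
        action = torch.softmax(actor(state), dim=-1)
        pred = world_model(torch.cat([state, action], dim=-1))
        state, reward = pred[..., :STATE], pred[..., STATE:]
        total = total + discount * reward.mean()
        discount *= gamma
    return total + discount * critic(state).mean()

# One actor update: adjust the policy to maximize the imagined return.
# (In the real system, the world model is trained on replayed experience
# and the critic on imagined returns, all concurrently.)
optimizer = torch.optim.Adam(actor.parameters(), lr=3e-4)
loss = -imagined_return(torch.randn(16, STATE))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```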
World modeling is the key to Dreamer’s success, Hafner told Nature. This component mimics the way human players see the game and allows the AI to predict how its actions could change the future—and whether that future comes with a reward.
“The world model really equips the AI system with the ability to imagine the future,” said Hafner.
To evaluate Dreamer, the team pitted it against several state-of-the-art single-purpose algorithms in over 150 tasks. Some tested the AI’s ability to sustain longer decisions. Others gave either constant or sparse feedback to see how the programs fared in 2D and 3D worlds.
“Dreamer matches or exceeds the best [AI] experts,” wrote the team.
They then turned to a far harder task: collecting diamonds, which requires a dozen steps. Intermediate rewards helped Dreamer pick the next move with the largest chance of success. As an extra challenge, the team reset the game every half hour to ensure the AI didn’t form and remember a specific strategy.
Dreamer collected a diamond after roughly nine days of continuous gameplay. That’s far slower than expert human players, who need just 20 minutes or so. However, the AI wasn’t specifically trained on the task. It taught itself how to mine one of the game’s most coveted items.
The AI “paves the way for future research directions, including teaching agents world knowledge from internet videos and learning a single world model” so they can increasingly accumulate a general understanding of our world, wrote the team.
“Dreamer marks a significant step towards general AI systems,” said Hafner.
The post DeepMind’s New AI Teaches Itself to Play Minecraft From Scratch appeared first on SingularityHub.
2025-04-10 23:36:47
The thalamus is a gateway, shuttling select information into consciousness.
How consciousness emerges in the brain is the ultimate mystery. Scientists generally agree that consciousness relies on multiple brain regions working in tandem. But the areas and neural connections supporting our perception of the world have remained elusive.
A new study, published in Science, offers a potential answer. A Chinese team recorded the neural activity of people with electrodes implanted deep in their brains as they performed a visual task. Scientists have long hypothesized that the region they targeted, an egg-shaped area called the thalamus, is a central relay conducting information across multiple brain regions.
Previous studies hunting for the brain mechanisms underlying consciousness have often focused on the cortex—the outermost regions of the brain. Very little is known about how deeper brain structures contribute to our sense of perception and self.
Simultaneously recording neural activity from both the thalamus and the cortex, the team found a wave-like signal that only appeared when participants reported seeing an image in a test. Visual signals specifically designed not to reach awareness had a different brain response.
The results support the idea that parts of the thalamus “play a gate role” for the emergence of conscious perception, wrote the team.
The study is “really pretty remarkable,” said Christopher Whyte at the University of Sydney, who was not involved in the work, to Nature. One of the first to simultaneously record activity in both deep and surface brain regions in humans, it reveals how signals travel across the brain to support consciousness.
Consciousness has teased the minds of philosophers and scientists for centuries. Thanks to modern brain mapping technologies, researchers are beginning to hunt down its neural underpinnings.
At least half a dozen theories now exist, two of which are going head-to-head in a global research effort using standardized tests to probe how awareness emerges in the human brain. The results, alongside other work, could potentially build a unified theory of consciousness.
The problem? There still isn’t definitive agreement on what we mean by consciousness. But practically, most scientists agree it has at least two modes. One is dubbed the “conscious state,” which is when, for example, you’re awake, asleep, or in a coma. The other mode, “conscious content,” captures awareness or perception.
We’re constantly bombarded with sights, sounds, touch, and other sensations. Only some stimuli—the smell of a good cup of coffee, the sound of a great playlist, the feel of typing on a slightly oily keyboard—reach our awareness. Others are discarded by a web of neural networks long before we perceive them.
In other words, the brain filters signals from the outside world and only brings a sliver of them into conscious perception. The entire process from sensing to perceiving takes just a few milliseconds.
Brain imaging technologies such as functional magnetic resonance imaging (fMRI) can capture the brain’s inner workings as we process these stimuli. But like a camera with a slow shutter speed, the technology struggles to map activated brain areas in real time at high resolution. The delay also makes it difficult to track how signals flow from one brain area to another. And because a sense of awareness likely emerges from coherent activation across multiple brain regions, that lag makes it all the harder to decipher how consciousness emerges from neural chatter.
Most scientists have focused on the cortex, with just a few exploring the function of deeper brain structures. “Capturing neural activity in the thalamic nuclei [thalamus] during conscious perception is very difficult” because of technological restrictions, wrote the authors.
The new study solved the problem by tapping a unique resource: People with debilitating and persistent headaches that can’t be managed with medication but who are otherwise mentally sharp and healthy.
Each participant in the study already had up to 20 electrodes implanted in different parts of the thalamus and cortex as part of an experimental procedure to dampen their headache pain. Unlike fMRI studies that cover the whole brain with time lag and relatively low resolution, these electrodes could directly pick up neural signals in the implanted areas with minimal delay.
Often dubbed the brain’s Grand Central Station, the thalamus is a complex structure housing multiple neural “train tracks” originating from different locations. Each track routes and ferries a unique combination of incoming sensations to other brain regions for further processing.
The thalamus likely plays “a crucial role in regulating the conscious state” based on previous theoretical and animal studies, wrote the team. But testing its role in humans has been difficult because of its complex structure and location deep inside the brain. The five participants, each with electrodes already implanted in their thalamus and cortex for treatment, were the perfect candidates for a study matching specific neural signals to conscious perception.
Using a custom task, the team measured whether participants could consciously perceive a visual cue—a blob of alternating light and dark lines—blinking on a screen. Roughly half the trials were designed so the cue appeared too briefly for the person to register it, as determined by previous work. The participants were then asked to move their eyes toward the left or right of the screen depending on whether they noticed the cue.
Throughout the experiment the team captured electrical activity from parts of each participant’s thalamus and prefrontal cortex—the front region of the brain that’s involved in higher-level thinking such as reasoning and decision making.
Two parts of the thalamus sparked with activity when a person consciously perceived the cue, and the areas orchestrated synchronized waves of activity to the cortex. This synchronized activity disappeared when the participants weren’t consciously aware of the cue.
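One standard way to quantify this kind of synchrony is spectral coherence between two recordings. The sketch below is illustrative only: the signals are synthetic stand-ins, not data from the study, and this is not the team’s actual analysis pipeline:

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                                   # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(1)

shared_wave = np.sin(2 * np.pi * 8 * t)     # a shared 8 Hz rhythm
thalamus = shared_wave + 0.5 * rng.standard_normal(t.size)
cortex = np.roll(shared_wave, 25) + 0.5 * rng.standard_normal(t.size)  # 25 ms lag

# High coherence near 8 Hz indicates the two regions share that rhythm.
freqs, cxy = coherence(thalamus, cortex, fs=fs, nperseg=1024)
print(f"Coherence at ~8 Hz: {cxy[np.argmin(np.abs(freqs - 8))]:.2f}")
```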
The contributions to “consciousness-related activity were strikingly different” across the thalamus, wrote the authors. In other words, these specific deep-brain regions may form a crucial gateway for processing visual experiences so they rise to the level of perception.
The findings are similar to results from previous studies in mice and non-human primates. One study tracked how mice react to subtle prods to their whiskers. The rodents were trained to lick water only when they felt a touch but otherwise go about their business. Each mouse’s thalamus and cortex sparked when they went for the water, forming neural circuits similar to those observed in humans during conscious perception. Other studies in monkeys have also identified the thalamus as a hot zone for consciousness, although they implicate slightly different areas of the structure.
The team is planning to conduct similar visual experiments in monkeys to clarify which parts of the thalamus support conscious perception. For now, the full nature of consciousness in the brain remains an enigma. But the new results offer a peek inside the human mind as it perceives the world with unprecedented detail.
Liad Mudrik at Tel Aviv University, who was not involved in the study, told Nature it is “one of the most elaborate and extensive investigations of the role of the thalamus in consciousness.”
The post Our Conscious Perception of the World Depends on This Deep Brain Structure appeared first on SingularityHub.
2025-04-09 02:42:15
Our closest relatives in the animal kingdom are wired up differently.
Scientists have long tried to understand the human brain by comparing it to other primates. Researchers are still trying to understand what makes our brain different from those of our closest relatives. Our recent study may have brought us one step closer by taking a new approach—comparing the way brains are internally connected.
The Victorian palaeontologist Richard Owen incorrectly argued that the human brain was the only brain to contain a small area called the hippocampus minor. This, he claimed, made it unique in the animal kingdom, and the human brain was therefore, he argued, clearly unrelated to that of any other species. We’ve learned a lot since then about the organization and function of our brain, but there is still much to learn.
Most studies comparing the human brain to that of other species focus on size. This can be the size of the brain, size of the brain relative to the body, or the size of parts of the brain to the rest of it. However, measures of size don’t tell us anything about the internal organization of the brain. For instance, although the enormous brain of an elephant contains three times as many neurons as the human brain, these are predominantly located in the cerebellum, not in the neocortex, which is commonly associated with human cognitive abilities.
Until recently, studying the brain’s internal organization was painstaking work. The advent of medical imaging techniques, however, has opened up new possibilities to look inside the brains of animals quickly, in great detail, and without harming the animal.
Our team used publicly available MRI data of white matter, the fibers connecting parts of the brain’s cortex. Communication between brain cells runs along these fibers. This costs energy and the mammalian brain is therefore relatively sparsely connected, concentrating communications down a few central pathways.
The connections of each brain region tell us a lot about its functions. The set of connections of any brain region is so specific that brain regions have a unique connectivity fingerprint.
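The idea is easy to make concrete: represent each region as a vector of connection strengths to a shared set of targets, then compare the vectors. The sketch below uses invented numbers purely for illustration:

```python
import numpy as np

# Hypothetical connectivity fingerprints for one region (e.g., a
# mid-temporal area) measured against the same five targets.
targets = ["prefrontal", "parietal", "temporal", "occipital", "subcortical"]
human_region = np.array([0.9, 0.4, 0.8, 0.2, 0.3])
chimp_region = np.array([0.5, 0.4, 0.7, 0.3, 0.4])

def cosine_similarity(a, b):
    """1.0 means identical fingerprints; lower values mean diverging wiring."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(f"Fingerprint similarity: {cosine_similarity(human_region, chimp_region):.2f}")
```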
In our study, we compared these connectivity fingerprints across the human, chimpanzee, and macaque monkey brain. The chimpanzee is, together with the bonobo, our closest living relative. The macaque monkey is the non-human primate best known to science. Comparing the human brain to both species meant we could not only assess which parts of our brain are unique to us, but also which parts are likely to be shared heritage with our non-human relatives.
Much of the previous research on human brain uniqueness has focused on the prefrontal cortex, a group of areas at the front of our brain linked to complex thought and decision making. We indeed found that aspects of the prefrontal cortex had a connectivity fingerprint in the human that we couldn’t find in the other animals, particularly when we compared the human to the macaque monkey.
But the main differences we found were not in the prefrontal cortex. They were in the temporal lobe, a large part of the cortex located approximately behind the ear. In the primate brain, this area is devoted to deep processing of information from our two main senses: vision and hearing. One of the most dramatic findings was in the middle part of the temporal cortex.
The feature driving this distinction was the arcuate fasciculus, a white matter tract connecting the frontal and temporal cortex and traditionally associated with language processing in humans. Most, if not all, primates have an arcuate fasciculus, but it is much larger in the human brain.
However, we found that focusing solely on language may be too narrow. The brain areas that are connected via the arcuate fasciculus are also involved in other cognitive functions, such as integrating sensory information and processing complex social behavior. Our study was the first to find that the arcuate fasciculus is involved in these functions. This insight underscores the complexity of human brain evolution, suggesting that our advanced cognitive abilities arose not from a single change, as scientists once thought, but through several interrelated changes in brain connectivity.
While the middle temporal arcuate fasciculus is a key player in language processing, we also found differences between the species in a region more at the back of the temporal cortex. This temporoparietal junction area is critical in processing information about others, such as understanding others’ beliefs and intentions, a cornerstone of human social interaction.
In humans, this brain area has much more extensive connections to other parts of the brain processing complex visual information, such as facial expressions and behavioral cues. This suggests that our brain is wired to handle more intricate social processing than those of our primate relatives. Our brain is wired up to be social.
These findings challenge the idea of a single evolutionary event driving the emergence of human intelligence. Instead, our study suggests brain evolution happened in steps. Our findings suggest changes in frontal cortex organization occurred in apes, followed by changes in temporal cortex in the lineage leading to humans.
Richard Owen was right about one thing. Our brains are different from those of other species—to an extent. We have a primate brain, but it’s wired up to make us even more social than other primates, allowing us to communicate through spoken language.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post What Makes the Human Brain Unique? Scientists Compared It With Monkeys and Apes to Find Out appeared first on SingularityHub.