2026-03-26 22:00:00
Visual experiments suggest just a small fraction of the information our brains process enters awareness.
What can you see right now? This might seem like a silly question, but what enters your consciousness is not the whole story when it comes to vision. A great deal of visual processing in the brain goes on well below our conscious awareness.
Some studies have probed the unconscious depths of vision. One source of evidence comes from the neurological condition known as blindsight, which is caused by damage to areas of the brain involved in processing visual information. People with blindsight report that they are unable to see, either entirely or in a portion of their visual field. However, when asked to guess what is there, they can often do so with remarkable accuracy.
For example, in an experiment published in 2004, a person with blindsight was shown a black bar in the portion of the visual field to which they were blind. The person was asked to “guess” whether the bar was vertical or horizontal.
Despite denying any conscious awareness of the bar, the participant could answer correctly at a level well above chance. The participant even showed evidence of being able to pay attention to the bar—they were faster to respond when an arrow (placed in a healthy area of their visual field) correctly indicated the location of the bar.
The most popular interpretation (though not the only one) is that people with blindsight can see these objects, but not see them consciously. They see what is there, but it all goes on unconsciously, below their awareness.
The phenomenon of inattentional blindness seems to show you can see without the information crossing into your consciousness. Anyone can experience inattentional blindness. The phenomenon has been known about for a long time, but we can most easily get a handle on it by looking at a well-known experiment reported in 1999.
In this experiment, participants are shown a video of people playing basketball and told to count the number of passes between the players wearing white shirts. If you’ve never done this before, I urge you to stop reading now and watch the video.
In many cases, people are so busy counting the passes that they completely miss a large gorilla walking across the middle of the scene and beating its chest, then walking off. The gorilla’s right there, in the center of your visual field. Light from the gorilla enters your eyes and is processed in the visual system, but somehow you missed it, because you weren’t paying attention to it.
The gorilla has more to teach us. In another experiment reported in 2013, radiologists were given a series of lung scans. They were told to look for nodules (which show up as small, light-colored circles) on each scan. In one of the scans, a large picture of a dancing gorilla was superimposed on top of the lung scan. In this study, 83 percent of the radiologists failed to spot it, even though it was 48 times bigger than the average nodule they were looking for. Some of them even looked directly at the gorilla and still didn’t notice it!
The interpretation of these experiments is controversial. Some scientists suggest that in these kinds of cases, you consciously see the gorilla, but immediately forget it (although a dancing gorilla in someone’s lung doesn’t seem like the kind of thing you’d forget). Others argue that you see the gorilla, but the information never made its way into consciousness. You saw the gorilla, but unconsciously.
Let’s assume that in both blindsight and inattentional blindness, the information is seen but doesn’t make it all the way to consciousness. The question then becomes: What makes some information conscious while other information stays unconscious? This is one of the central questions for consciousness studies in philosophy, psychology, and neuroscience.
There’s no agreement on which is the best theory of consciousness, but in my opinion, the strongest contender is the global neuronal workspace theory.
According to this theory, consciousness is tied to a particular area of the brain that is the seat of the “workspace.” The workspace is a system with a small capacity, so it can’t hold much information at any one time. Its job is to take unconscious information and broadcast it to many different networks all across the brain. Global neuronal workspace theorists say that broadcasting information in this way is what makes it conscious.
The job of the workspace is to act like the brain’s loudspeaker, and consciousness is the information that gets broadcast. The workspace takes unconscious information and boosts it so that many of the different systems in the brain hear about it and can use that information in their own processes. The late philosopher Daniel Dennett used to call consciousness “fame in the brain.” The workspace idea is similar.
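To make the broadcast metaphor concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the capacity number, the salience scores, the module names), so read it as a cartoon of the theory rather than a neuroscientific model.

```python
# Toy "global workspace": many signals compete, only a few fit the
# small-capacity workspace, and those few are broadcast to every module.
WORKSPACE_CAPACITY = 2  # the workspace can only hold a little at a time

def workspace_step(signals, modules):
    """signals: dict of name -> salience; modules: list of callables."""
    # Select the most salient signals, up to the workspace's capacity.
    conscious = sorted(signals, key=signals.get, reverse=True)[:WORKSPACE_CAPACITY]
    # Broadcast: every subscribed module "hears about" the winners.
    for module in modules:
        module(conscious)
    # Everything else is processed but never broadcast: it stays unconscious.
    return conscious

modules = [
    lambda contents: print("speech system received:", contents),
    lambda contents: print("memory system received:", contents),
    lambda contents: print("planning system received:", contents),
]

# The low-salience "gorilla" is processed but never broadcast: a cartoon
# version of inattentional blindness.
signals = {"gorilla": 0.2, "pass count": 0.9, "white shirts": 0.7}
print("conscious contents:", workspace_step(signals, modules))
```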
One of the most striking implications of the global neuronal workspace theory is how little information makes it to consciousness. Since the workspace has quite a small capacity, it follows that we can only ever be conscious of a little at a time. We might think there’s a rich visual world in front of us, full of details, all of which we’re conscious of, but really—according to the theory—we’re only ever conscious of a small portion of that.
Some philosophers and scientists have objected to the theory on these grounds. They suggest that consciousness “overflows” the workspace: We are conscious of more information than can “fit” into the workspace at any one time. Even with these debates still ongoing, I think the global neuronal workspace theory gives us a reasonably clear answer to the question of what consciousness is for and how it interacts with other systems in the brain.
In our brains, consciousness is only the tip of a very large iceberg. But the global neuronal workspace theory might give us insight into what makes that tip so special.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post What We Actually See—and Don’t See—Shows Consciousness Is Only the Tip of the Iceberg appeared first on SingularityHub.
2026-03-25 03:38:32
In a step toward biological computing, brain organoids rewired their networks as they learned to balance a digital pole on a cart.
Try balancing a ruler vertically on the palm of your hand while walking. It’s not easy. Your eyes constantly track its movement. Your arm and hand make tiny adjustments to prevent tilting. All the while, your brain sparks with activity with one clear goal: Keep the ruler upright.
Scientists have now trained mini brains, or brain organoids, to master the same problem, simulated in the digital realm, with electrical zaps alone.
Mini brains have grown popular with researchers since their invention over a decade ago. Commonly made from stem cells, organoids are jam-packed with neurons that form densely connected networks. Earlier versions loosely resembled the developing brains of preterm babies; now they can mimic the neural wiring of a kindergartener. As the blobs become more sophisticated, scientists are asking: Can they learn?
In the new study, researchers challenged the mini brains with a classic engineering task similar to balancing a ruler on your hand. Mastering the task takes practice, but our brains are wired to receive feedback, often in the form of a small jolt of electrical activity. Called reinforcement learning, the technique has already been adapted to train AI—and now, mini brains too.
The goal isn’t to replace silicon-based controllers with living tissue. It’s to test the organoids’ ability to listen and learn, and to reveal how those abilities break down.
“We’re trying to understand the fundamentals of how neurons can be adaptively tuned to solve problems,” study author Ash Robbins at the University of California, Santa Cruz said in a press release. “If we can figure out what drives that in a dish, it gives us new ways to study how neurological disease can affect the brain’s ability to learn.”
Attaching living brain tissue to computers sounds like science fiction. But brain organoids have already made it reality.
These blobs of brain cells often start life as skin cells that have been turned back into stem cells. After bathing in a special cocktail of nutrients, they develop into various types of brain cells that self-organize into intricate three-dimensional structures similar to parts of the brain. Neurons form networks, ripple with electrical waves, and when connected to other tissues—such as an artificial spinal cord and lab-grown muscles—can control them.
Bioengineers have taken notice, envisioning organoids as potential living processors. Our brains use far less power and are more adaptable than the most advanced neuromorphic chips and brain-inspired AI. Brain organoids linked together into computers could theoretically enable computation in a dish at a fraction of the energy cost.
There are hints this blue-sky idea could work. Scientists have taught hundreds of thousands of isolated neurons to play the video games Pong and, more recently, Doom. Separately, researchers used cultured neurons to control the simple movements of a vehicle.
But mini brains are different. Unlike isolated neurons, organoids’ 3D structures and connections are harder to decipher. Yet predictable learning is essential to realizing “organoid intelligence.” Their electrical activity needs to rapidly adapt to inputs, strengthening or weakening circuits.
Reinforcement learning from trial and error is a perfect test. When we succeed at a new task, neurons in the brain’s reward center blast dopamine and rewire their connections. Failures don’t bring about similar activity. Over time, we learn not to touch a hot pan, take care when hammering a nail, and other life lessons.
But cortical organoids, which resemble the outermost part of the brain, lack neurons that communicate using dopamine. Can they still learn through experience?
The new study tackled the question with a hybrid organoid-computer system. The team grew cortical organoids from mouse stem cells. These then self-organized into neural networks and developed a layered structure within a month.
The researchers chose this type of brain organoid “due to the cortex’s well-established role in adaptive information processing and its ability to encode, decode, and modify responses to novel inputs,” they wrote.
The team embedded the brain blobs on a chip that captures their electrical pulses and interacts with a computer to “teach” the mini brains and process data. (Unlike more recent devices, the chip’s sensors don’t cover the entire organoid.)
After recording spontaneous activity, the team figured out how best to stimulate the organoids and built a programmable system with a simple interface.
“From an engineering perspective, what makes this powerful is that we can record, stimulate, and adapt in the same system,” said study author Mircea Teodorescu.
Next, the team challenged the organoids with the cartpole problem, a classic engineering task that asks the player to balance an upright pole on a moving cart. If the pole tips over a certain angle, it’s a fail. The player has to constantly adjust the cart as its cargo wobbles.
To train the organoids, the scientists delivered electrical zaps after the pole tipped too far to either side and tracked the responses. In essence, the mini brains played a video game, with human coaches nudging them toward success. The team grouped performance—how long the system balanced the pole—into sets of five trials, each ending when the pole fell. If the most recent performance improved over the previous 20 trials, they considered it a success and delivered no zaps. If performance didn’t improve, the team gave the organoids a zap.
“You could think of it like an artificial coach that says, ‘you’re doing it wrong, tweak it a little bit in this way,’” said Robbins.
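For readers who want to see the shape of that closed loop, here is a minimal sketch in Python. Gymnasium’s CartPole-v1 stands in for the simulated cart, and an invented `Organoid` class stands in for the chip interface; the decoding and stimulation details are placeholders rather than the study’s actual methods, and the five-trial grouping follows one reading of the protocol described above.

```python
import random
import gymnasium as gym

class Organoid:
    """Placeholder for the organoid-on-a-chip interface (hypothetical)."""
    def act(self, observation):
        # A real system would decode a cart movement (left/right) from
        # recorded neural activity; here we simply choose at random.
        return random.randint(0, 1)
    def zap(self):
        # Stand-in for the corrective stimulation pattern.
        pass

env = gym.make("CartPole-v1")
organoid = Organoid()
history = []  # how long the pole stayed up in each completed trial

for trial in range(100):
    obs, _ = env.reset()
    steps, done = 0, False
    while not done:
        obs, _, terminated, truncated, _ = env.step(organoid.act(obs))
        done = terminated or truncated  # pole tipped too far: trial over
        steps += 1
    history.append(steps)

    # After each set of five trials, compare recent performance with the
    # previous 20 trials; zap only when there was no improvement.
    if trial >= 24 and (trial + 1) % 5 == 0:
        recent = sum(history[-5:]) / 5
        baseline = sum(history[-25:-5]) / 20
        if recent <= baseline:
            organoid.zap()  # "you're doing it wrong, tweak it a little"
```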
Compared to random or no zaps, the rewarding zaps boosted the success rate from 4.5 to 46.5 percent in continuous trials, suggesting the organoids learned from electrical cues alone—without dopamine. A closer look showed the cells released another chemical that strengthens neural connections, and blocking the process prevented them from learning.
“This demonstrates that biological neural networks can be systematically modified through precise electronic control,” wrote the team.
However, the learning didn’t last. After roughly 45 minutes without stimulation, the organoids’ performance reset to baseline. Their fleeting memory may reflect the lack of neural highways required for long-term memory. The team is now culturing multiple types of brain organoids together—each mimicking a different region—to potentially preserve learning and memory.
“These are incredibly minimal neural circuits. There’s no dopamine, no sensory experience, no body to sustain, no goals to pursue,” said Keith Hengen at Washington University in St. Louis, who did not participate in the study. But they could still be nudged toward solving a real control problem. “That tells us something important: The capacity for adaptive computation is intrinsic to cortical tissue itself, separate from all the scaffolding we usually assume is necessary.”
The post These Mini Brains Just Learned to Solve a Classic Engineering Problem appeared first on SingularityHub.
2026-03-24 05:15:01
Rebooting frozen brains is still science fiction, but advanced freezing techniques could preserve wiring and function.
Floating in a warm, nutritious bath, the slices of mouse brain buzzed with electrical activity. Researchers gave them a few zaps, and parts of the hippocampus strengthened their wiring.
This type of experiment is an extremely common way to decipher how the brain works. The slices, not so much. Preserved in a deep freeze for roughly a week, they restarted some basic processes after being thawed. Neurons lit up, boosted their metabolism, and adjusted connections in the same way our brains do when forming new memories and recalling old ones.
“While the brain is considered exceptionally sensitive, we show that the hippocampus can resume electrophysiological activity after being rendered completely immobile in a cryogenic glass,” wrote University of Erlangen-Nuremberg scientists in a paper describing the work.
In traditional freezing techniques, ice crystals shred delicate neurons and the connections between them. There would be no chance of recovering memories stored within. The new study used a method called vitrification, which rapidly cools tissue before crystals can form. An improved thawing process protected cells from toxic chemicals in their cryogenic bath.
Both pre-sliced and whole mouse brains recovered after warming, although some neural activity was slightly off-kilter. To be clear, brains can’t be completely revived like in the movies. But the approach pushes the known frontier of what brain tissue can tolerate, wrote the team.
Suspended animation is one of science fiction’s oldest tropes. Whether characters are traveling between the stars or awaiting future cures for untreatable diseases, cryogenics is the ultimate pause button they can use to speedrun decades, if not centuries and beyond.
The idea was popularized in the 1960s, when Robert Ettinger, “the father of cryonics,” argued that people could be frozen and revived in the future, with their memories, cognition, and physical capabilities intact. He took the fringe idea and turned it into a mainstream dream.
But cryosleep has earlier roots. In the late 1800s, scientists realized that certain cells and simple living creatures could survive freezing, suggesting it’s possible to temporarily suspend life.
Liquid nitrogen and other chemical preservatives are now used daily in labs to freeze individual cells—including brain cells—at extremely low temperatures. Many don’t survive, but those that do regain normal function upon thawing. Scientists use the technology to preserve different types of neurons to test theories and share with other labs.
Cryopreserving brain slices or whole brains is far more difficult. These contain the delicate neural branches brain cells use to communicate, which are easily destroyed during the freeze-thaw cycle. Ice is the main culprit. Even with protective chemicals, liquids in cells rapidly solidify into sharp crystals that jab cells inside and out like a thousand knives.
Still, scientists have kept frozen human fetal tissue intact, and cryopreserved rat cells have developed functional networks once thawed. Another effort kept a rodent’s heart structurally intact with a magnetic method that gradually brings the organ back to biological temperature. Techniques to preserve livers and kidneys can keep them in stasis for up to 100 days, and the organs are still healthy enough for transplantation after warming up.
“Progress in cryopreservation of rodent organs has moved the theme of suspending technologies closer to plausibility,” wrote the team.
Structure determines function for each organ. But the brain presents unique challenges. Hundreds of molecules zoom around neurons to build up or whittle down synapses. Others that dot the surfaces of these cells tweak electrical charges to strengthen or weaken activity. Even without tearing up the cell itself, damage to these processes renders neurons incapable of forming or retrieving memories.
Ice is only part of the revival equation. As liquids freeze, they change the pressure of the surrounding environment, causing cells to lose water and shrink. This can collapse internal structures and wreck synaptic connections. Cryoprotectants, such as a sugary liquid called glycerol, limit the damage but are toxic at high doses.
The authors of the new study turned to vitrification. Here, rapid cooling with cryoprotectants limits damage by freezing cells in a disorganized, glass-like state without forming ice crystals.
They first tested cryoprotectant recipes on brain slices that included the hippocampus, a brain region associated with the formation of memories. After soaking the slices in the chemical cocktails, the team bathed them in liquid nitrogen at a bone-chilling -196 degrees Celsius (−320.8 degrees Fahrenheit), which instantly froze the tissues. They then moved the slices to a −150 degrees Celsius (−238 degrees Fahrenheit) freezer and kept them there for up to a week.
The team could visually see whether each cocktail worked, they wrote. Vitrified slices had a glossy, transparent look; those that failed were dull and opaque.
After slow thawing, the slices sprang back to life.
The cells’ mitochondria ramped up energy production. Neuron membranes and synapses remained intact. And though there were some differences compared to fresh brain slices, the reawakened hippocampal cells mostly retained their usual patterns. Given a few electrical zaps, they strengthened their connections, a mechanism underlying learning and memory.
The team also tried the method on whole mouse brains. They had to repeatedly tweak the recipe to minimize toxicity from the cryoprotectants and ward off severe brain dehydration. But once thawed, slices from the whole preserved brains had intact neural wiring, including complex circuits in the hippocampus. Some brain cells languished and were harder to activate, whereas others perked right up.
It seems some types of neurons are more tolerant to vitrification than others, wrote the team.
Because the team only recorded activity in brain slices, it’s impossible to say whether the process would preserve memory and learning. And the slices naturally deteriorated after 10 to 15 hours, making it hard to say much about longer timescales. To get around this, the researchers could test the method on mini brains, or brain organoids, which better mimic whole brains and can be kept alive for years in culture.
The team is now expanding their work to include human brain slices and preservation of other organs, such as the heart. It’ll take plenty of trial and error. Human organs are far larger and could easily crack from mechanical stress during the cryopreservation process.
But the study shows “the brain is remarkably robust…to near-complete shutdown” into a glass-like state. “This reinforces the tenet of brain function being an emergent property of brain structure, and hints at the potential of life-suspending technologies,” wrote the team.
The post Reviving Brain Activity After ‘Cryosleep’ Inches Closer in Pioneering Study appeared first on SingularityHub.
2026-03-21 22:00:00
OpenAI Is Throwing Everything Into Building a Fully Automated Researcher
Will Douglas Heaven | MIT Technology Review ($)
“The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. OpenAI says that the new goal will be its ‘North Star’ for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability.”
Humanoid Robot Gets Surprisingly Good at Tennis
Loz Blain | New Atlas
“This ain’t teleoperation. Chinese researchers have tested a new, much quicker and easier method of teaching robots to play tennis, and the results look like a breakthrough in machine learning and real-world AI.”
This Is Not a Fly Uploaded to a Computer
Robert Hart | The Verge
“Aran Nayebi, a professor of machine learning at Carnegie Mellon University, said that the group was ‘not even close’ to capturing the full brain of the fly, showing connections between cells but not crucial details like neurotransmitters or how strong the connections between different nerve cells are. The motor system isn’t a ‘true upload’ either, he said. ‘We are not even faithfully simulating its brain in silico.'”
This May Be the World’s First Quantum Battery
Gayoung Lee | Gizmodo
“Researchers finally believe they’ve found the right blueprint for scalable quantum batteries, publishing their findings in a recent study in Light: Science & Applications. ‘My ultimate ambition is a future where we can charge electric cars much faster than [fueling] petrol cars or charge devices over long distances wirelessly,’ James Quach, the study’s senior author and a researcher at CSIRO, Australia’s national science agency, said in a statement.”
My Tesla Was Driving Itself Perfectly—Until It Crashed
Raffi Krikorian | The Atlantic ($)
“The problem is bigger than one company’s self-driving system. It’s about how we’re building every AI system, every algorithm, every tool that asks for our trust and trains us to give it. The pattern is everywhere: Condition people to rely on the system. Erode their vigilance. Then, when something breaks, point to the terms of service and blame them for not paying attention.”
A Private Space Company Has a Radical New Plan to Bag an Asteroid
Eric Berger | Ars Technica
“[TransAstra CEO Joel Sercel] envisions aggregating dozens, and then hundreds, of small asteroids at the ‘New Moon’ processing facility, which could potentially be located at the Earth-Sun L2 point, about 1.5 million km from Earth. Such asteroids could provide water for use as propellant and minerals for everything from solar panels to radiation shielding.”
Val Kilmer Set to Be Resurrected With AI for New Film
Owen Myers | The Guardian
“The film-maker is working in conjunction with the late actor’s estate and his daughter, Mercedes, to bring Kilmer back to life with state-of-the-art, generative AI. …The AI-generated version of Kilmer will appear in a ‘significant’ portion of the film, says Voorhees. The film will use images of the actor taken throughout his life to re-create Kilmer through the decades.”
Online Bot Traffic Will Exceed Human Traffic by 2027, Cloudflare CEO Says
Sarah Perez | TechCrunch
“‘If a human were doing a task—let’s say you were shopping for a digital camera—and you might go to five websites. Your agent or the bot that’s doing that will often go to 1,000 times the number of sites that an actual human would visit,’ Prince said. ‘So it might go to 5,000 sites. And that’s real traffic, and that’s real load, which everyone is having to deal with and take into account.’”
World ID Wants You to Put a Cryptographically Unique Human Identity Behind Your AI Agents
Kyle Orland | Ars Technica
“World now claims nearly 18 million unique humans have verified their identities on one of nearly 1,000 physical orbs around the world. Now, with Agent Kit, World wants to let those users tie their confirmed identity to any AI agent, letting it work on their behalf across the internet in a way other parties can trust.”
New NASA Chief Aiming for Moon Landings Every Month in 2027
Passant Rabie | Gizmodo
“The regular missions will be geared toward building a lunar base on the moon’s surface, which will act as a laboratory for astronauts to develop ways to live beyond Earth’s orbit. ‘If you’re building a moon base and you’re going there to stay, you’re gonna need lots of missions to and from the moon,’ Isaacman [told SpaceFlight Now in an interview].”
Jeff Bezos Wants to Save Earth With This Freaky-Looking Probe
Passant Rabie | Gizmodo
“The mission would be equipped with different techniques for mitigating the asteroid threat, including directing a powerful ion beam (a concentrated stream of charged particles) at the object to change its orbit. …[If that doesn’t work, then like the spacecraft in NASA’s DART mission], NEO Hunter can aim for a direct kinetic impact by ramming into the asteroid at high speed to redirect it from its Earth-bound trajectory.”
The post This Week’s Awesome Tech Stories From Around the Web (Through March 21) appeared first on SingularityHub.
2026-03-20 22:00:00
There’s plenty of hand-waving around AGI. DeepMind hopes to change that with a new, more rigorous approach.
Few terms are as closely associated with AI hype as artificial general intelligence, or AGI. But Google DeepMind researchers have now proposed a framework that could more concretely measure how close models are to this tech industry holy grail.
Artificial general intelligence refers to a mythical AI system that can match the general and highly adaptable form of intelligence found in humans. As the number of tasks that large language models can tackle has rocketed in recent years, there’s been a growing chorus of voices suggesting the technology is creeping ever closer to this threshold.
But so far, there’s been no clear way to assess progress toward AGI, leaving plenty of room for speculation and exaggeration. To address this gap, a team from Google DeepMind has introduced a new cognitively inspired framework that deconstructs general intelligence into 10 key faculties. More importantly, they propose a way to evaluate AI systems across these key capabilities and compare their performance to humans.
“Despite widespread discussion of AGI, there is no clear framework for measuring progress toward it. This ambiguity fuels subjective claims, makes it difficult to track progress, and risks hindering responsible governance,” the researchers write in a paper outlining their new approach. “We hope this framework will provide a practical roadmap and an initial step toward more rigorous, empirical evaluation of AGI.”
This isn’t DeepMind’s first attempt to clarify the term. In 2023, the company proposed separating AI systems into different levels of capability, in much the same way self-driving systems are categorized.
But the approach didn’t really propose a way to measure what level AI systems have reached. The new framework goes further by building a firmer conceptual footing for the key aspects underpinning model performance and a practical way to evaluate and compare systems.
Digging through decades of research in psychology, neuroscience, and cognitive science, the researchers identify eight basic cognitive building blocks that they say make up general intelligence.
These include the perception of sensory inputs and generation of outputs like text, speech, or actions. Add to those learning, memory, reasoning, and the ability to focus attention on specific information or tasks. Rounding out the list are metacognition—or the ability to reason about and control your own mental processes—and so-called executive functions, like planning and the inhibition of impulses.
The researchers also outline two “composite faculties” that require several building blocks to be applied together. These are problem solving and social cognition, which refers to the ability to understand and react appropriately to the social context.
To judge how well AI systems perform on each measure, the researchers suggest subjecting them to a broad suite of cognitive evaluations that target each specific ability. They also propose collecting human baselines for each task. This would involve asking a demographically representative sample of adults with at least a high school education to complete them under identical conditions.
The results of these tests can then be combined to create “cognitive profiles” that give a sense of a model’s strengths and weaknesses. And by comparing the results against the human baselines, it should be possible to determine when a system matches or surpasses the general intelligence of an average person.
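As a rough illustration of what assembling such a profile could look like, here is a short Python sketch. The faculty list follows the article, but the scoring scheme, baseline numbers, and pass threshold are invented assumptions, not part of DeepMind’s proposal.

```python
# Sketch: per-faculty scores normalized against human baselines to form
# a "cognitive profile." All numbers below are hypothetical.
FACULTIES = [
    "perception", "output generation", "learning", "memory",
    "reasoning", "attention", "metacognition", "executive function",
    "problem solving", "social cognition",  # the two composite faculties
]

def cognitive_profile(model_scores, human_baselines):
    """Return each faculty's score as a ratio of the human baseline.

    A ratio of 1.0 or more means the system matches or surpasses the
    average human on that faculty's evaluation suite.
    """
    return {f: model_scores[f] / human_baselines[f] for f in FACULTIES}

# Hypothetical system: strong at problem solving, weak at metacognition.
baselines = {f: 100.0 for f in FACULTIES}
scores = dict.fromkeys(FACULTIES, 90.0)
scores.update({"problem solving": 130.0, "metacognition": 40.0})

profile = cognitive_profile(scores, baselines)
matches_human_baseline = all(r >= 1.0 for r in profile.values())
print(profile["metacognition"], matches_human_baseline)  # 0.4 False
```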
Crucially, the framework focuses on what a system can do rather than how it does it, which means the evaluation is agnostic about the underlying technology. However, the researchers concede that there is currently no good way to measure many of the core cognitive capabilities identified.
While there are already well-established benchmarks for faculties like problem solving and perception, there are no reliable tests for things like metacognition, attention, learning, and social cognition. In addition, many of the best benchmarks are public, which means the testing criteria are easily accessible and may have already been included in model training data. So the authors say they’re working with academics to build more robust, non-public evaluations to fill the gaps.
How useful the new framework will be depends on several factors. First, it remains to be seen whether the criteria identified by the DeepMind team truly capture the essence of human general intelligence. Second, they need to prove that acing this test actually leads to better performance on practical problems compared to narrower, specialist AI systems.
But considering the hand-waving nature of the debate around AGI so far, any framework grounded in well-established cognitive theory and rigorous evaluation represents a significant step forward.
The post Google DeepMind Plans to Track AGI Progress With These 10 Traits of General Intelligence appeared first on SingularityHub.
2026-03-19 22:00:00
The prevailing narrative suggests AI is ready to replace humans, but the evidence is more nuanced.
In the past few months, a wave of tech corporations have announced significant staff cuts and attributed them to efficiency gains driven by artificial intelligence.
Companies such as Atlassian, Block, and Amazon have announced they would lay off thousands of employees due to increased reliance on AI.
The narrative these companies offer is consistent: AI is making human labor replaceable, and responsible management demands adjustment.
The evidence, however, tells a more nuanced story.
Genuine disruption is visible in specific corners of the labor market, though the scale of that disruption is commonly overstated. Research from Anthropic published earlier this month shows that although many work tasks are susceptible to automation, the vast majority are still performed primarily by humans rather than AI tools.
Moreover, some occupations are more exposed to displacement than others: Computer programmers sit at the top of the list, followed by customer service representatives and data entry workers. Yet even within the most exposed occupations, AI use is still limited.
The aggregate economic data reflects this reality. A 2025 Goldman Sachs report estimated that if AI were used across the economy for all the things it could currently do, roughly 2.5 percent of US employment would be at risk of job loss.
That’s not a trivial number. However, the report notes that workers in AI-exposed occupations are currently no more likely to lose their jobs, face reduced hours, or earn lower wages than anyone else.
The report does note early signs of strain in specific industries. Goldman Sachs identifies sectors where employment growth has slowed that align with AI-related efficiency gains. Examples include marketing consulting, graphic design, office administration, and call centers.
In the tech sector, US workers in their 20s in AI-exposed occupations saw unemployment rise by almost 3 percent in the first half of 2025. Anthropic’s research also found that job-finding rates (the chance of an unemployed person finding a job in a one-month period) for workers aged 22–25 entering AI-exposed occupations have fallen by around 14 percent since the launch of ChatGPT in 2022. This is a tentative but telling signal about where the pressure is being felt first.
These are meaningful signals, but they are sector-specific and concentrated—not the evidence of sweeping displacement that corporate announcements often imply. That gap between the evidence and the rhetoric raises an obvious question: What else might be driving these decisions?
The timing and framing of the layoffs attributed to AI warrant closer examination. Corporate restructuring, over-hiring during the post-pandemic boom as demand for online services soared, and pressure from investors to demonstrate improved profit margins are all forces operating at the same time as genuine advances in AI.
While these are not mutually exclusive explanations, they are rarely acknowledged alongside one another in corporate communications.
There is a powerful financial incentive for companies to be seen to be embracing AI aggressively. Since the launch of ChatGPT, AI-related stocks have accounted for about 75 percent of S&P 500 returns.
A workforce reduction framed around AI adoption sends a signal to investors that a straightforward cost-cutting announcement does not. A company making AI-related innovations looks a lot better than one sacking staff due to declining revenues or poor strategic decisions.
It is also worth distinguishing between two kinds of workforce reduction. In the first, AI genuinely increases productivity to the point where fewer workers are needed to produce the same output. In the second, staff reductions are not a consequence of AI, but a way to fund it.
Meta illustrates this distinction. The social media giant is reportedly planning to lay off as much as 20 percent of its workforce, while simultaneously committing $600 billion to build data centers and recruit top AI researchers.
In this case, the workers being let go are not being replaced by AI today; they are subsidizing the AI bet their employer is making on the future.
The big picture is likely one of transformation rather than elimination. According to a recent PwC report, employment is still growing in most industries exposed to AI, although growth tends to be slower than in less exposed sectors.
At the same time, wages in AI-exposed industries are rising roughly twice as fast as in those least touched by the technology. Workers with AI skills command an average wage premium of about 56 percent across the industries analyzed.
Together, the data points toward a flattening of the traditional workplace pyramid rather than mass displacement. Firms require fewer junior employees for routine analytical and administrative work, while experienced professionals who deploy AI tools effectively become more productive and command greater value.
AI is a consequential technology and will have a significant impact in the long term. What is in doubt is whether the dramatic, AI-attributed workforce reductions announced by individual companies accurately reflect that trajectory, or whether they conflate genuine technological change with decisions that would have been made regardless.
Making this distinction is not merely an academic exercise. It shapes how policymakers, educators, and workers themselves understand the nature of the disruption they are navigating.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Tech Companies Are Blaming Massive Layoffs on AI. What’s Really Going On? appeared first on SingularityHub.