2026-03-25 03:38:32
In a step toward biological computing, brain organoids rewired their networks as they learned to balance a digital pole on a cart.
Try balancing a ruler vertically on the palm of your hand while walking. It’s not easy. Your eyes constantly track its movement. Your arm and hand make tiny adjustments to prevent tilting. All the while, your brain sparks with activity with one clear goal: Keep the ruler upright.
Scientists have now trained mini brains, or brain organoids, to master the same problem, simulated in the digital realm, with electrical zaps alone.
Mini brains have grown popular with researchers since their invention over a decade ago. Commonly made from stem cells, organoids are jam-packed with neurons that form densely connected networks. Earlier versions loosely resembled the developing brains of preterm babies; now they can mimic the neural wiring of a kindergartener. As the blobs become more sophisticated, scientists are asking: Can they learn?
In the new study, researchers challenged the mini brains with a classic engineering task similar to balancing a ruler on your hand. Mastering the task takes practice, but our brains are wired to receive feedback, often in the form of a small jolt of electrical activity. Called reinforcement learning, the technique has already been adapted to train AI—and now, mini brains too.
The goal isn’t to replace silicon-based controllers with living tissue. It’s to test the organoids’ ability to listen and learn and reveal how they break down.
“We’re trying to understand the fundamentals of how neurons can be adaptively tuned to solve problems,” study author Ash Robbins at the University of California, Santa Cruz said in a press release. “If we can figure out what drives that in a dish, it gives us new ways to study how neurological disease can affect the brain’s ability to learn.”
Attaching living brain tissue to computers sounds like science fiction. But brain organoids have already made it reality.
These blobs of brain cells often start life as skin cells that have been turned back into stem cells. After bathing in a special cocktail of nutrients, they develop into various types of brain cells that self-organize into intricate three-dimensional structures similar to parts of the brain. Neurons form networks, ripple with electrical waves, and when connected to other tissues—such as an artificial spinal cord and lab-grown muscles—can control them.
Bioengineers have taken notice, envisioning organoids as potential living processors. Our brains use far less power and are more adaptable than the most advanced neuromorphic chips and brain-inspired AI. Brain organoids linked together into computers could theoretically enable computation in a dish at a fraction of the energy cost.
There are hints this blue-sky idea could work. Scientists have taught hundreds of thousands of isolated neurons to play the video games Pong and, more recently, Doom. Separately, researchers used cultured neurons to control the simple movements of a vehicle.
But mini brains are different. Unlike isolated neurons, organoids’ 3D structures and connections are harder to decipher. Yet predictable learning is essential to realizing “organoid intelligence.” Their electrical activity needs to rapidly adapt to inputs, strengthening or weakening circuits.
Reinforcement learning from trial and error is a perfect test. When we succeed at a new task, neurons in the brain’s reward center blast dopamine and rewire their connections. Failures don’t bring about similar activity. Over time, we learn not to touch a hot pan, take care when hammering a nail, and other life lessons.
But cortical organoids, which resemble the outermost part of the brain, lack neurons that communicate using dopamine. Can they still learn through experience?
The new study tackled the question with a hybrid organoid-computer system. The team grew cortical organoids from mouse stem cells. These then self-organized into neural networks and developed a layered structure within a month.
The researchers chose this type of brain organoid “due to the cortex’s well-established role in adaptive information processing and its ability to encode, decode, and modify responses to novel inputs,” they wrote.
The team embedded the brain blobs on a chip that captures their electrical pulses and interacts with a computer to “teach” the mini brains and process data. (The chip’s sensors don’t cover the entire organoid as more recent devices do.)
After recording spontaneous activity, the team figured out how best to stimulate the organoids and built a programmable system with a simple interface.
“From an engineering perspective, what makes this powerful is that we can record, stimulate, and adapt in the same system,” said study author Mircea Teodorescu.
Next, the team challenged the organoids with the cartpole problem, a classic engineering task that asks the player to balance an upright pole on a moving cart. If the pole tips over a certain angle, it’s a fail. The player has to constantly adjust the cart as its cargo wobbles.
To train the organoids, the scientists delivered electrical zaps after the pole tipped too far to either side and tracked the responses. In essence, the mini brains played a video game, with human coaches nudging them toward success. The team grouped performance—how long the system balanced the pole—into sets of five trials, each ending when the pole fell. If the most recent performance improved over the previous 20 trials, they considered it a success and delivered no zaps. If performance didn’t improve, the team gave the organoids a zap.
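The feedback rule described above can be sketched as a short loop. This is an illustrative reconstruction of the protocol as the article summarizes it, not the study's actual code: the function name, the way the five-trial group is compared against the preceding 20 trials, and the toy "organoid" whose skill drifts upward are all assumptions.

```python
import random

def should_zap(durations, window=20, group=5):
    """Decide whether to deliver a corrective zap, following the
    sliding-window rule the article describes (names and the use of
    means here are illustrative, not the study's exact criteria).

    `durations` is the history of balance times, one per trial.
    The most recent group of trials counts as a success -- no zap --
    only if its mean beats the mean of the preceding `window` trials.
    """
    if len(durations) < window + group:
        return False  # not enough history yet; withhold feedback
    recent = durations[-group:]
    baseline = durations[-(window + group):-group]
    improved = sum(recent) / group > sum(baseline) / window
    return not improved  # zap only when performance failed to improve

# Toy demonstration: a simulated "organoid" whose balance times drift
# upward when corrective feedback arrives, mimicking gradual learning.
random.seed(0)
durations, skill = [], 1.0
for trial in range(60):
    durations.append(random.uniform(0.5, 1.5) * skill)
    if should_zap(durations):
        skill *= 1.05  # corrective feedback nudges performance up
print(f"final skill factor: {skill:.2f}")
```

The key design point is that feedback is sparse and binary, matching the article's description of a coach that only intervenes when performance stalls.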
“You could think of it like an artificial coach that says, ‘you’re doing it wrong, tweak it a little bit in this way,’” said Robbins.
Compared to random or no zaps, the rewarding zaps boosted the success rate from 4.5 to 46.5 percent in continuous trials, suggesting the organoids learned from electrical cues alone—without dopamine. A closer look showed the cells released another chemical that strengthens neural connections, and blocking the process prevented them from learning.
“This demonstrates that biological neural networks can be systematically modified through precise electronic control,” wrote the team.
However, the learning didn’t last. After roughly 45 minutes without stimulation, the organoids’ performance reset to baseline. Their fleeting memory may reflect the lack of neural highways required for long-term memory. The team is now culturing multiple types of brain organoids together—each mimicking a different region—to potentially preserve learning and memory.
“These are incredibly minimal neural circuits. There’s no dopamine, no sensory experience, no body to sustain, no goals to pursue,” said Keith Hengen at Washington University in St. Louis, who did not participate in the study. But they could still be nudged toward solving a real control problem. “That tells us something important: The capacity for adaptive computation is intrinsic to cortical tissue itself, separate from all the scaffolding we usually assume is necessary.”
The post These Mini Brains Just Learned to Solve a Classic Engineering Problem appeared first on SingularityHub.
2026-03-24 05:15:01
Rebooting frozen brains is still science fiction, but advanced freezing techniques could preserve wiring and function.
Floating in a warm, nutritious bath, the slices of mouse brain buzzed with electrical activity. Researchers gave them a few zaps, and parts of the hippocampus strengthened their wiring.
This type of experiment is an extremely common way to decipher how the brain works. The slices, not so much. Preserved in a deep freeze for roughly a week, they restarted some basic processes after being thawed. Neurons lit up, boosted their metabolism, and adjusted connections in the same way our brains do when forming new memories and recalling old ones.
“While the brain is considered exceptionally sensitive, we show that the hippocampus can resume electrophysiological activity after being rendered completely immobile in a cryogenic glass,” wrote University of Erlangen‐Nuremberg scientists in a paper describing the work.
In traditional freezing techniques, ice crystals shred delicate neurons and the connections between them. There would be no chance of recovering memories stored within. The new study used a method called vitrification, which rapidly cools tissue before crystals can form. An improved thawing process protected cells from toxic chemicals in their cryogenic bath.
Both pre-sliced and whole mouse brains recovered after warming, although some neural activity was slightly off-kilter. To be clear, brains can’t be completely revived like in the movies. But the approach pushes the known frontier of what brain tissue can tolerate, wrote the team.
Suspended animation is one of science fiction’s oldest tropes. Whether characters are traveling between the stars or awaiting future cures for untreatable diseases, cryogenics is the ultimate pause button they can use to speedrun decades, if not centuries and beyond.
The idea was popularized in the 1960s, when Robert Ettinger, “the father of cryonics,” argued that people could be frozen and revived in the future, with their memories, cognition, and physical capabilities intact. He took the fringe idea and turned it into a mainstream dream.

But cryosleep has earlier roots. In the late 1800s, scientists realized that certain cells and simple living creatures could survive freezing, suggesting it’s possible to temporarily suspend life.
Liquid nitrogen and other chemical preservatives are now used daily in labs to freeze individual cells—including brain cells—at extremely low temperatures. Many don’t survive, but those that do regain normal function upon thawing. Scientists use the technology to preserve different types of neurons to test theories and share with other labs.
Cryopreserving brain slices or whole brains is far more difficult. These contain the delicate neural branches brain cells use to communicate, which are easily destroyed during the freeze-thaw cycle. Ice is the main culprit. Even with protective chemicals, liquids in cells rapidly solidify into sharp crystals that jab cells inside and out like a thousand knives.
Still, scientists have kept frozen human fetal tissue intact, and cryopreserved rat cells have developed functional networks once thawed. Another effort kept a rodent’s heart structurally intact with a magnetic method that gradually brings the organ back to biological temperature. Techniques to preserve livers and kidneys can keep them in stasis for up to 100 days, and the organs are still healthy enough for transplantation after warming up.
“Progress in cryopreservation of rodent organs has moved the theme of suspending technologies closer to plausibility,” wrote the team.
Structure determines function for each organ. But the brain presents unique challenges. Hundreds of molecules zoom around neurons to build up or whittle down synapses. Others that dot the surfaces of these cells tweak electrical charges to strengthen or weaken activity. Even without tearing up the cell itself, damage to these processes renders neurons incapable of forming or retrieving memories.
Ice is only part of the revival equation. As liquids freeze, they change the pressure of the surrounding environment, causing cells to lose water and shrink. This can collapse internal structures and wreck synaptic connections. Cryoprotectants, such as a sugary liquid called glycerol, limit the damage but are toxic at high doses.
The authors of the new study turned to vitrification. Here, rapid cooling with cryoprotectants limits damage by freezing cells in a disorganized, glass-like state without forming ice crystals.
They first tested cryoprotectant recipes on brain slices that included the hippocampus, a brain region associated with the formation of memories. After soaking the slices in the chemical cocktails, the team bathed them in liquid nitrogen at a bone-chilling -196 degrees Celsius (−320.8 degrees Fahrenheit), which instantly froze the tissues. They then moved the slices to a −150 degrees Celsius (−238 degrees Fahrenheit) freezer and kept them there for up to a week.
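As a quick sanity check, the two Fahrenheit figures quoted above follow from the standard conversion formula:

```python
def c_to_f(celsius):
    """Convert Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

print(c_to_f(-196))  # liquid nitrogen bath
print(c_to_f(-150))  # storage freezer
```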
The team could visually see whether each cocktail worked, they wrote. Vitrified slices had a glossy, transparent look; those that failed were dull and opaque.
After slow thawing, the slices sprang back to life.
The cells’ mitochondria ramped up energy production. Neuron membranes and synapses remained intact. And though there were some differences compared to fresh brain slices, the reawakened hippocampal cells mostly retained their usual patterns. Given a few electrical zaps, they strengthened their connections, a mechanism underlying learning and memory.
The team also tried the method on whole mouse brains. They had to repeatedly tweak the recipe to minimize toxicity from the cryoprotectants and ward off severe brain dehydration. But once thawed, slices from the whole preserved brains had intact neural wiring, including complex circuits in the hippocampus. Some brain cells languished and were harder to activate, whereas others perked right up.
It seems some types of neurons are more tolerant to vitrification than others, wrote the team.
Because they recorded activity in brain slices, it’s impossible to say whether the process would restore memory and learning. And the slices naturally deteriorated after 10 to 15 hours, making it hard to say much about longer timescales. To get around this, they could test the method on mini brains, or brain organoids, which better mimic whole brains and can be kept alive for years in culture.
The team is now expanding their work to include human brain slices and preservation of other organs, such as the heart. It’ll take plenty of trial and error. Human organs are far larger and could easily crack from mechanical stress during the cryopreservation process.
But the study shows “the brain is remarkably robust…to near-complete shutdown” into a glass-like state. “This reinforces the tenet of brain function being an emergent property of brain structure, and hints at the potential of life-suspending technologies,” wrote the team.
The post Reviving Brain Activity After ‘Cryosleep’ Inches Closer in Pioneering Study appeared first on SingularityHub.
2026-03-21 22:00:00
OpenAI Is Throwing Everything Into Building a Fully Automated Researcher
Will Douglas Heaven | MIT Technology Review ($)
“The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. OpenAI says that the new goal will be its ‘North Star’ for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability.”
Humanoid Robot Gets Surprisingly Good at Tennis
Loz Blain | New Atlas
“This ain’t teleoperation. Chinese researchers have tested a new, much quicker and easier method of teaching robots to play tennis, and the results look like a breakthrough in machine learning and real-world AI.”
This Is Not a Fly Uploaded to a Computer
Robert Hart | The Verge
“Aran Nayebi, a professor of machine learning at Carnegie Mellon University, said that the group was ‘not even close’ to capturing the full brain of the fly, showing connections between cells but not crucial details like neurotransmitters or how strong the connections between different nerve cells are. The motor system isn’t a ‘true upload’ either, he said. ‘We are not even faithfully simulating its brain in silico.'”
This May Be the World’s First Quantum Battery
Gayoung Lee | Gizmodo
“Researchers finally believe they’ve found the right blueprint for scalable quantum batteries, publishing their findings in a recent study in Light: Science & Applications. ‘My ultimate ambition is a future where we can charge electric cars much faster than [fueling] petrol cars or charge devices over long distances wirelessly,’ James Quach, the study’s senior author and a researcher at CSIRO, Australia’s national science agency, said in a statement.”
My Tesla Was Driving Itself Perfectly—Until It Crashed
Raffi Krikorian | The Atlantic ($)
“The problem is bigger than one company’s self-driving system. It’s about how we’re building every AI system, every algorithm, every tool that asks for our trust and trains us to give it. The pattern is everywhere: Condition people to rely on the system. Erode their vigilance. Then, when something breaks, point to the terms of service and blame them for not paying attention.”
A Private Space Company Has a Radical New Plan to Bag an Asteroid
Eric Berger | Ars Technica
“[TransAstra CEO Joel Sercel] envisions aggregating dozens, and then hundreds, of small asteroids at the ‘New Moon’ processing facility, which could potentially be located at the Earth-Sun L2 point, about 1.5 million km from Earth. Such asteroids could provide water for use as propellant and minerals for everything from solar panels to radiation shielding.”
Val Kilmer Set to Be Resurrected With AI for New Film
Owen Myers | The Guardian
“The film-maker is working in conjunction with the late actor’s estate and his daughter, Mercedes, to bring Kilmer back to life with state-of-the-art, generative AI. …The AI-generated version of Kilmer will appear in a ‘significant’ portion of the film, says Voorhees. The film will use images of the actor taken throughout his life to re-create Kilmer through the decades.”
Online Bot Traffic Will Exceed Human Traffic by 2027, Cloudflare CEO Says
Sarah Perez | TechCrunch
“‘If a human were doing a task—let’s say you were shopping for a digital camera—you might go to five websites. Your agent or the bot that’s doing that will often go to 1,000 times the number of sites that an actual human would visit,’ Prince said. ‘So it might go to 5,000 sites. And that’s real traffic, and that’s real load, which everyone is having to deal with and take into account.’”
World ID Wants You to Put a Cryptographically Unique Human Identity Behind Your AI Agents
Kyle Orland | Ars Technica
“World now claims nearly 18 million unique humans have verified their identities on one of nearly 1,000 physical orbs around the world. Now, with Agent Kit, World wants to let those users tie their confirmed identity to any AI agent, letting it work on their behalf across the internet in a way other parties can trust.”
New NASA Chief Aiming for Moon Landings Every Month in 2027
Passant Rabie | Gizmodo
“The regular missions will be geared toward building a lunar base on the moon’s surface, which will act as a laboratory for astronauts to develop ways to live beyond Earth’s orbit. ‘If you’re building a moon base and you’re going there to stay, you’re gonna need lots of missions to and from the moon,’ Isaacman [told SpaceFlight Now in an interview].”
Jeff Bezos Wants to Save Earth With This Freaky-Looking Probe
Passant Rabie | Gizmodo
“The mission would be equipped with different techniques for mitigating the asteroid threat, including directing a powerful ion beam (a concentrated stream of charged particles) at the object to change its orbit. …[If that doesn’t work, then like the spacecraft in NASA’s DART mission], NEO Hunter can aim for a direct kinetic impact by ramming into the asteroid at high speed to redirect it from its Earth-bound trajectory.”
The post This Week’s Awesome Tech Stories From Around the Web (Through March 21) appeared first on SingularityHub.
2026-03-20 22:00:00
There’s plenty of hand-waving around AGI. DeepMind hopes to change that with a new, more rigorous approach.
Few terms are as closely associated with AI hype as artificial general intelligence, or AGI. But Google DeepMind researchers have now proposed a framework that could more concretely measure how close models are to this tech industry holy grail.
Artificial general intelligence refers to a mythical AI system that can match the general and highly adaptable form of intelligence found in humans. As the number of tasks that large language models can tackle has rocketed in recent years, there’s been a growing chorus of voices suggesting the technology is creeping ever closer to this threshold.
But so far, there’s been no clear way to assess progress toward AGI, leaving plenty of room for speculation and exaggeration. To address this gap, a team from Google DeepMind has introduced a new cognitively inspired framework that deconstructs general intelligence into 10 key faculties. More importantly, they propose a way to evaluate AI systems across these key capabilities and compare their performance to humans.
“Despite widespread discussion of AGI, there is no clear framework for measuring progress toward it. This ambiguity fuels subjective claims, makes it difficult to track progress, and risks hindering responsible governance,” the researchers write in a paper outlining their new approach. “We hope this framework will provide a practical roadmap and an initial step toward more rigorous, empirical evaluation of AGI.”
This isn’t DeepMind’s first attempt to clarify the term. In 2023, the company proposed separating AI systems into different levels of capability, in much the same way self-driving systems are categorized.
But the approach didn’t really propose a way to measure what level AI systems have reached. The new framework goes further by building a firmer conceptual footing for the key aspects underpinning model performance and a practical way to evaluate and compare systems.
Digging through decades of research in psychology, neuroscience, and cognitive science, the researchers identify eight basic cognitive building blocks that they say make up general intelligence.
These include the perception of sensory inputs and generation of outputs like text, speech, or actions. Add to those learning, memory, reasoning, and the ability to focus attention on specific information or tasks. Rounding out the list are metacognition—or the ability to reason about and control your own mental processes—and so-called executive functions, like planning and the inhibition of impulses.
The researchers also outline two “composite faculties” that require several building blocks to be applied together. These are problem solving and social cognition, which refers to the ability to understand and react appropriately to the social context.
To judge how well AI systems perform on each measure, the researchers suggest subjecting them to a broad suite of cognitive evaluations that target each specific ability. They also propose collecting human baselines for each task. This would involve asking a demographically representative sample of adults with at least a high school education to complete them under identical conditions.
The results of these tests can then be combined to create “cognitive profiles” that give a sense of a model’s strengths and weaknesses. And by comparing the results against the human baselines, it should be possible to determine when a system matches or surpasses the general intelligence of an average person.
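One plausible way to build such a "cognitive profile" is to express a model's score on each faculty as a z-score against the human baseline sample. The paper does not specify this exact aggregation; the function below is an illustrative sketch, and the scores and faculty names in the demo are made up.

```python
from statistics import mean, stdev

def cognitive_profile(model_scores, human_baselines):
    """For each faculty, express the model's score in standard
    deviations above or below the human baseline mean.

    model_scores:    {faculty: score}
    human_baselines: {faculty: [score per human participant]}
    """
    profile = {}
    for faculty, score in model_scores.items():
        humans = human_baselines[faculty]
        mu, sigma = mean(humans), stdev(humans)
        profile[faculty] = (score - mu) / sigma
    return profile

# A hypothetical model that matches the human average on reasoning
# but lags well behind on metacognition.
baselines = {"reasoning": [60, 70, 80], "metacognition": [60, 70, 80]}
print(cognitive_profile({"reasoning": 70, "metacognition": 50},
                        baselines))
```

A profile of z-scores makes "matches or surpasses the average person" a concrete test: every faculty at or above zero.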
Crucially, the framework focuses on what a system can do rather than how it does it, which means the evaluation is agnostic about the underlying technology. However, the researchers concede that there is currently no good way to measure many of the core cognitive capabilities identified.
While there are already well-established benchmarks for faculties like problem solving and perception, there are no reliable tests for things like metacognition, attention, learning, and social cognition. In addition, many of the best benchmarks are public, which means the testing criteria are easily accessible and may have already been included in model training data. So the authors say they’re working with academics to build more robust, non-public evaluations to fill the gaps.
How useful the new framework will be depends on several factors. First, it remains to be seen whether the criteria identified by the DeepMind team truly capture the essence of human general intelligence. Second, they need to prove that acing this test actually leads to better performance on practical problems compared to narrower, specialist AI systems.
But considering the hand-waving nature of the debate around AGI so far, any framework grounded in well-established cognitive theory and rigorous evaluation represents a significant step forward.
The post Google DeepMind Plans to Track AGI Progress With These 10 Traits of General Intelligence appeared first on SingularityHub.
2026-03-19 22:00:00
The prevailing narrative suggests AI is ready to replace humans, but the evidence is more nuanced.
In the past few months, a wave of tech corporations have announced significant staff cuts and attributed them to efficiency gains driven by artificial intelligence.
Companies such as Atlassian, Block, and Amazon have announced they would lay off thousands of employees due to increased reliance on AI.
The narrative these companies offer is consistent: AI is making human labor replaceable, and responsible management demands adjustment.
The evidence, however, tells a more nuanced story.
Genuine disruption is visible in specific corners of the labor market, though the scale of that disruption is commonly overstated. Research from Anthropic published earlier this month shows that although many work tasks are susceptible to automation, the vast majority are still performed primarily by humans rather than AI tools.
Moreover, some occupations are more exposed to displacement than others: Computer programmers sit at the top of the list, followed by customer service representatives and data entry workers. Yet even within the most exposed occupations, AI use is still limited.
The aggregate economic data reflects this reality. A 2025 Goldman Sachs report estimated that if AI were used across the economy for all the things it could currently do, roughly 2.5 percent of US employment would be at risk of job loss.
That’s not a trivial number. However, the report notes that workers in AI-exposed occupations are currently no more likely to lose their jobs, face reduced hours, or earn lower wages than anyone else.
The report does note early signs of strain in specific industries. Goldman Sachs identifies sectors where employment growth has slowed that align with AI-related efficiency gains. Examples include marketing consulting, graphic design, office administration, and call centers.
In the tech sector, US workers in their 20s in AI-exposed occupations saw unemployment rise by almost 3 percent in the first half of 2025. Anthropic’s research also found that job-finding rates (the chance of an unemployed person finding a job in a one-month period) for workers aged 22–25 entering AI-exposed occupations have fallen by around 14 percent since the launch of ChatGPT in 2022. This is a tentative but telling signal about where the pressure is being felt first.
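The job-finding rate defined in the parentheses above is a simple ratio, which makes the reported ~14 percent decline easy to interpret. The numbers below are purely illustrative, not data from the research.

```python
def job_finding_rate(unemployed_at_start, found_jobs_in_month):
    """Share of people unemployed at the start of a month who have
    found a job by its end (hypothetical inputs for illustration)."""
    return found_jobs_in_month / unemployed_at_start

before = job_finding_rate(1000, 250)  # hypothetical pre-ChatGPT cohort
after = before * (1 - 0.14)           # the ~14 percent relative decline
print(f"monthly job-finding rate: {before:.3f} -> {after:.3f}")
```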
These are meaningful signals, but they are sector-specific and concentrated—not the evidence of sweeping displacement that corporate announcements often imply. That gap between the evidence and the rhetoric raises an obvious question: What else might be driving these decisions?
The timing and framing of the layoffs attributed to AI warrant closer examination. Corporate restructuring, over-hiring during the post-pandemic boom as demand for online services soared, and pressure from investors to demonstrate improved profit margins are all forces operating at the same time as genuine advances in AI.
While these are not mutually exclusive explanations, they are rarely acknowledged alongside one another in corporate communications.
There is a powerful financial incentive for companies to be seen to be embracing AI aggressively. Since the launch of ChatGPT, AI-related stocks have accounted for about 75 percent of S&P 500 returns.
A workforce reduction framed around AI adoption sends a signal to investors that a straightforward cost-cutting announcement does not. A company making AI-related innovations looks a lot better than one sacking staff due to declining revenues or poor strategic decisions.
It is also worth distinguishing between two kinds of workforce reduction. In the first, AI genuinely increases productivity to the point where fewer workers are needed to produce the same output. In the second, staff reductions are not a consequence of AI, but a way to fund it.
Meta illustrates this distinction. The social media giant is reportedly planning to lay off as much as 20 percent of its workforce, while simultaneously committing $600 billion to build data centers and recruit top AI researchers.
In this case, the workers being let go are not being replaced by AI today; they are subsidizing the AI bet their employer is making on the future.
The big picture is likely one of transformation rather than elimination. According to a recent PwC report, employment is still growing in most industries exposed to AI, although growth tends to be slower than in less exposed sectors.
At the same time, wages in AI-exposed industries are rising roughly twice as fast as in those least touched by the technology. Workers with AI skills command an average wage premium of about 56 percent across the industries analyzed.
Together, the data points toward a flattening of the traditional workplace pyramid rather than mass displacement. Firms require fewer junior employees for routine analytical and administrative work, while experienced professionals who deploy AI tools effectively become more productive and command greater value.
AI is a consequential technology and will have a significant impact in the long term. What is in doubt is whether the dramatic, AI-attributed workforce reductions announced by individual companies accurately reflect that trajectory, or whether they conflate genuine technological change with decisions that would have been made regardless.
Making this distinction is not merely an academic exercise. It shapes how policymakers, educators, and workers themselves understand the nature of the disruption they are navigating.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Tech Companies Are Blaming Massive Layoffs on AI. What’s Really Going On? appeared first on SingularityHub.
2026-03-18 05:54:35
As they imagine typing, implants translate brain signals into keystrokes on a standard digital keyboard.
It’s hard to picture a keyboard layout other than the one we know best. From laptops to smartphones, it’s an integral part of our digital lives.
Scientists at Massachusetts General Hospital have now restored the ability to communicate by keyboard to two people with paralysis—using their thoughts alone.
Both people already had brain implants that could record their minds’ electrical chatter. The new system translated brain signals in real time as each person imagined finger movements. The system then accurately predicted the character they were trying to type.
The system learned to translate brain activity to physical intent after just 30 sentences. Typing speeds reached 22 words per minute with few errors, nearly matching speeds of able-bodied smartphone users.
“To our knowledge, this system provides the fastest… [brain implant] communication method reported to date based on decoding from hand motor cortex,” wrote the team.
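At its core, decoding imagined finger movements into keystrokes is a classification problem: map a vector of neural features to one of the keys. The sketch below uses a nearest-centroid classifier as a minimal stand-in; the BrainGate2 decoder is far more sophisticated, and the features, keys, and function names here are all invented for illustration.

```python
import math

def fit_centroids(training_data):
    """training_data: {key: [feature_vector, ...]} -> {key: centroid}.
    Each centroid is the per-dimension mean of that key's trials."""
    centroids = {}
    for key, vectors in training_data.items():
        dim = len(vectors[0])
        centroids[key] = [sum(v[i] for v in vectors) / len(vectors)
                          for i in range(dim)]
    return centroids

def decode(centroids, features):
    """Return the key whose centroid is nearest to `features`."""
    return min(centroids,
               key=lambda k: math.dist(centroids[k], features))

# Two made-up keys with two-dimensional neural features per trial.
train = {"a": [[1.0, 0.1], [0.9, 0.0]],
         "s": [[0.0, 1.0], [0.1, 0.9]]}
model = fit_centroids(train)
print(decode(model, [0.95, 0.05]))  # features resembling "a" trials
```

The article's note that only 30 sentences of calibration were needed suggests the real decoder generalizes from far less data than a naive classifier like this would require.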
The participants are part of the BrainGate2 clinical trial, a pioneering effort to restore communication and movement by decoding neural signals in people who have lost the use of all four limbs and the torso. One of the participants previously used the implants to translate his inner thoughts into text, but with mixed success.
Controlling a digital keyboard is far more intuitive and familiar, which makes it easier to grasp. Once a person learns the system, they don’t have to look at the keyboard, giving their eyes a break as they type with their minds. It also gives users full control over when, or when not, to share their thoughts, preventing private musings from accidentally leaking onto a screen or being broadcast as AI-generated speech.
Parts of the brain hum with electrical activity before we speak. Over the past decade, brain implants—microelectrodes that listen in and decode signals—have translated these seemingly chaotic buzzes into text or speech, allowing paralyzed people to regain the ability to communicate.
Methods vary. Some hardware takes the form of wafer-thin disks sitting on top of the brain and gathering signals from vast regions; other devices are inserted into the brain for more targeted recordings.
These systems are life changing. In a recent example, an implant translated the neural activity controlling the vocal muscles of a man with ALS. With just a second’s delay, the system generated coherent sentences with intonation, allowing him to sing with an artificial voice. Another device turned a paralyzed woman’s thoughts into speech with nearly no delay, so she could hold a conversation without frustrating halts. People have also benefited from a method that uses the neural signals behind handwriting for brain-to-text communication.
Brain implants aren’t purely experimental anymore: China recently approved a setup allowing people with paralysis to control a robotic hand. It’s the first such device available outside of clinical trials.
Perhaps the most widely used clinical solution is eye-tracking. Here, patients move their eyes to focus on individual letters, one at a time, on a custom digital keyboard. But the pace is agonizingly slow and prone to error. And prolonged screen time strains the eyes, making extended conversations difficult.
“Those systems take far too long for many users,” said study author Daniel Rubin in a press release, leading many to abandon the technology.
For people who already know how to type, the standard keyboard layout—known as QWERTY—feels familiar and comfortable. Fingers stretch to hit letters in the upper row, tap directly down for ones in the middle, and curl into a loose claw to hit bottom letters and punctuation.
As fingers dance across the keyboard, parts of the motor cortex that control their motion spark with activity, precisely directing each placement. Mind-typing using a familiar keyboard, compared to a custom one, could feel more intuitive and relaxing.
Two people with tetraplegia gave the idea a shot. Participant T17 was diagnosed with ALS at 30, a disease that slowly destroys motor neurons, weakening muscles and eventually impairing breathing. Three years later, when he enrolled in the study, he’d lost control of his vocal muscles and relied on a ventilator. He could move only his eyes, but his mind was still sharp. The second participant, T18, was paralyzed by a spinal cord injury 18 months before enrollment. Both had multiple brain implants in different areas. These were connected to cables that shuttled recordings to a computer system for real-time processing.
The participants used a simplified QWERTY digital keyboard containing all 26 letters, a space key, and three types of punctuation—a question mark, comma, and period. To train the system, the volunteers imagined stretching, tapping, or curling their fingers to type text prompts, while implants captured and isolated neural signals for each finger. After training, a deep learning model predicted intended characters, and a language model continuously attempted to autocomplete the sentence.
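The two-stage pipeline described above—a decoder that maps neural activity during imagined finger movements to characters, followed by a language model that autocompletes the sentence—can be sketched in miniature. Everything below is invented for illustration (the real system uses deep learning on microelectrode recordings, not a nearest-centroid classifier, and a far richer language model):

```python
import math
import random

# Toy stand-in for the study's pipeline: (1) decode a character from
# "neural features," (2) let a tiny language model autocomplete the word.
# All data and names here are hypothetical.

KEYS = "abcdefghijklmnopqrstuvwxyz ,.?"  # 26 letters, space, 3 punctuation

def make_centroids(n_features=8, seed=0):
    """Pretend each key has a distinct neural-feature signature."""
    rng = random.Random(seed)
    return {k: [rng.gauss(0, 1) for _ in range(n_features)] for k in KEYS}

def decode(features, centroids):
    """Nearest-centroid classifier as a stand-in for the deep-learning decoder."""
    return min(centroids, key=lambda k: math.dist(features, centroids[k]))

def autocomplete(prefix, vocab=("hello", "help", "great", "games")):
    """Toy language model: complete the last word from a tiny vocabulary."""
    last = prefix.split(" ")[-1]
    for word in vocab:
        if last and word.startswith(last):
            return prefix + word[len(last):]
    return prefix

# Simulate one noisy imagined keystroke for 'h' and run both stages.
centroids = make_centroids()
rng = random.Random(1)
noisy = [c + rng.gauss(0, 0.05) for c in centroids["h"]]
char = decode(noisy, centroids)
sentence = autocomplete("he" if char == "h" else char)
```

In the real system the decoder is retrained per participant from their own imagined-typing data, and the language model runs continuously over the growing sentence rather than on a single word.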
After practicing just 30 sentences, both participants could copy on-screen text or type whatever they wanted. When asked “what was the best part of your job,” T18 cheekily replied “the best part of my job was the end [of] the day.” Meanwhile, T17, a fan of The Legend of Zelda video games, told the researchers “you should try oracle of ages and seasons…another is skyward sword…the music in those games is great.”
Their typing speeds broke records. T18 communicated at 110 characters, or roughly 22 words, per minute, which is 20 characters per minute faster than a previous state-of-the-art method based on handwriting, wrote the team. The rate is nearly on par with able-bodied smartphone users of a similar age. Typing errors were consistently rare, and accuracy neared perfection with practice.
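The words-per-minute figure follows from the standard convention in typing-speed measurement, which counts one "word" as five characters. A quick check of the arithmetic:

```python
# Typing-speed convention: 1 word = 5 characters.
CHARS_PER_WORD = 5

def wpm(chars_per_minute, chars_per_word=CHARS_PER_WORD):
    """Convert characters per minute to words per minute."""
    return chars_per_minute / chars_per_word

print(wpm(110))  # T18's rate: 110 chars/min -> 22.0 words/min
print(wpm(47))   # T17's rate: 47 chars/min -> 9.4 words/min
```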
T17, with incomplete locked-in syndrome due to ALS, typed 47 characters a minute at a higher error rate. He had full use of his vocabulary, unlike with previous systems that imposed word restrictions, and communicated much faster.
The performance differences could be due to where their implants are located. T18’s microarrays are on both sides of the brain, with some covering an area that controls all four limbs. T17’s implants are on only the left half of his brain, with less coverage of finger motor areas.
The team is now tweaking the system for longer-term, individualized use. As disease progresses, the link between brain signals and keyboard characters may drift and produce more errors. But updating the algorithm is easy. The system needs only a few sentences to learn, so users could start each day by mind-typing a few thoughts to keep things dialed in.
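One simple way to picture that daily tune-up: blend yesterday's learned neural signature for each key with the average of today's warm-up samples, so the decoder tracks slow drift without retraining from scratch. This is a hypothetical sketch—the study's actual update procedure retrains its deep learning model and is not described at this level of detail:

```python
# Hypothetical daily recalibration: nudge each key's feature centroid
# toward the mean of fresh calibration samples from warm-up sentences.

def recalibrate(old_centroids, samples, blend=0.5):
    """old_centroids: {key: feature_vector} from the previous session.
    samples: {key: [feature_vector, ...]} from today's warm-up sentences.
    blend: how far to move toward today's data (0 = ignore, 1 = replace)."""
    updated = {}
    for key, centroid in old_centroids.items():
        vecs = samples.get(key)
        if not vecs:  # key not typed during warm-up; keep the old estimate
            updated[key] = list(centroid)
            continue
        n = len(vecs)
        mean = [sum(v[i] for v in vecs) / n for i in range(len(centroid))]
        updated[key] = [(1 - blend) * c + blend * m
                        for c, m in zip(centroid, mean)]
    return updated

# With blend=0.5, a centroid at the origin moves halfway toward
# today's samples clustered at (1, 1).
new = recalibrate({"a": [0.0, 0.0]}, {"a": [[1.0, 1.0], [1.0, 1.0]]})
```

An exponential blend like this is a common trick for tracking non-stationary signals; a real system would tune the blend rate to the speed of the drift.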
Updates to the digital keyboard, like adding numbers or the return and delete keys, are in the works. Temporarily disabling the language model could also let participants type strong gibberish passwords, internet slang (ikr, btw, lol), and other non-standard words without being autocorrected.
The brain implant “is a great example of how modern neuroscience and artificial intelligence technology can combine to create something capable of restoring communication and independence for people with paralysis,” said study author Justin Jude.
The post Brain Implants Let Paralyzed People Type Nearly as Fast as Smartphone Users appeared first on SingularityHub.