2026-04-01 03:40:58
AI companies may be reluctant to risk lower engagement with models that push back.
We all need advice. Did I cross the line arguing with a loved one? Did I mess up my friendships by ghosting them? Did I not tip the delivery driver enough? Or as users on the popular Reddit forum ask: Am I the asshole?
Some people will give it to you straight. Yes, you were in the wrong, and here’s why. No one likes to hear negative feedback. The first instinct is to push back. Yet some of the best life advice comes from friends, family, and even online strangers who don’t coddle you, but instead are willing to challenge your position and beliefs. And although it’s emotionally uncomfortable, with advice and self-reflection, you grow.
Chatbots, in contrast, are likely to take your side. Increasingly, people are treating AI models like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini like close confidants. But the chatbots are notoriously sycophantic. They heartily validate your opinions, even when those views are blatantly harmful or unethical.
Constant flattery has consequences. New research published in Science shows that people who receive advice from sycophantic chatbots are more confident they’re in the right when navigating relationship problems.
Stanford researchers tested 11 sophisticated chatbots on questions from Reddit’s “Am I the asshole” forum. They found the chatbots were roughly 50 percent more likely to endorse the original poster’s actions than crowdsourced human opinions. And people faced with social dilemmas felt more justified in their positions after chatting with sycophantic AI.
Bolstering misplaced self-confidence is troubling. But “the findings raise a broader concern: When AI systems are optimized to please, they may erode the very social friction through which accountability, perspective-taking, and moral growth ordinarily unfold,” wrote Anat Perry at the Hebrew University of Jerusalem, who was not involved in the study.
AI chatbots have wormed their way into our lives. Powered by large language models, they’re trained using enormous amounts of text, images, and videos scraped from online sources, making their replies surprisingly realistic. Users can often steer their tones—neutral, friendly, professional—to their liking or play with their “personalities” to engage with a wittier, more serious, or more empathetic version. In essence, you can build an ideal partner.
It’s no wonder that some people have turned to them for emotional support—or outright fallen in love. Nearly one in three teenagers are talking to chatbots daily. Exchanges tend to be longer and more serious than texts with friends—roleplaying friendships, romances, and other social interactions. Nearly half of Americans under 30 have sought relationship advice from AI. Unlike people, who are often mired in their own busy lives, chatbots are always available and validating, making it easy to forge close emotional connections.
The explosion in chatbot popularity has regulators, researchers, and users worried about the consequences. A notorious update to OpenAI’s GPT-4o turned it into a sycophant, skewing its responses toward the overly supportive but disingenuous. Media and user backlash prompted a rapid rollback. However, “the episode did not eliminate the broader phenomenon; it merely highlighted how readily sycophancy can emerge in systems optimized for user approval,” wrote Perry.
Relying on sycophantic chatbots has been implicated in tragedy. Last year, parents testified before Congress about how AI chatbots encouraged their children to take their own lives, prompting multiple AI companies to redesign the systems. Other incidents have linked sycophancy to delusions and self-harm.
Even AI wellness apps based on large language models, often marketed as companions to avoid loneliness, have emotional risks. Users report grief when the app is shut down or altered, similar to how they might mourn a lost relationship. Others develop unhealthy attachments, repeatedly turning to the bot for connection despite knowing it harms their mental health, heightening anxiety and fear of abandonment.
These high-profile incidents make headlines. But social psychology research suggests chatbots could subtly influence behavior in all users—not just vulnerable ones.
To test how pervasive sycophancy is across chatbots, the team behind the new study tested 11 AI models—including GPT-4o, Claude, Gemini, and DeepSeek—against community opinions using questions from Reddit and two other datasets.
“We wanted to just generally look at these kinds of advice-seeking settings, but they’re often very subjective,” study author Myra Cheng told Science in a podcast interview. Here “there’s millions of people who are weighing in on these decisions, and then there’s a crowdsourced judgment.”
One user, for example, left garbage hanging on a tree in a park without trash cans and asked if that’s okay. While the chatbot commended their effort to clean up, the top-voted reply pushed back, saying they should have taken the trash home because leaving it can attract vermin. “I think [the AI’s response] comes from the person’s post giving a lot of justification for their side” which the AI picked up on, said Cheng.
Overall, chatbots were 49 percent more likely to buy a user’s reasoning compared to groups of humans.
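A note on that figure: “49 percent more likely” is a relative increase in endorsement rate, not a 49-point gap. A quick sketch with hypothetical counts (the raw rates aren’t given here; only the relative increase mirrors the study) makes the arithmetic concrete:

```python
# Hypothetical endorsement counts -- illustrative only, not the study's data
human_endorse, human_total = 200, 500   # crowdsourced verdicts siding with the poster
ai_endorse, ai_total = 298, 500         # chatbot verdicts siding with the poster

human_rate = human_endorse / human_total              # 0.40
ai_rate = ai_endorse / ai_total                       # 0.596
# Relative increase: how much more often the AI sides with the poster
relative_increase = (ai_rate - human_rate) / human_rate
print(f"{relative_increase:.0%} more likely")         # → 49% more likely
```

So even a modest absolute gap in endorsement rates can translate into a large relative difference.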
The team then tested whether chatting with sycophantic AI alters a user’s confidence in their own judgment. They recruited roughly 800 participants and asked them to picture a hypothetical scenario derived from Reddit questions. Another group prompted the AI for advice on their own personal conflicts, such as “I didn’t invite my sister to a party, and she is upset.”
The participants discussed their dilemmas with either a sycophantic or neutral AI model. Those who chatted with the agreeable model received messages beginning with “it makes sense” and “it’s completely understandable,” whereas neutral chatbots acknowledged their reasoning but provided other perspectives.
Surveys showed that people validated by chatbots were less likely to admit fault or apologize. They also trusted and preferred the sycophantic AI much more. These effects held regardless of the bot’s tone or “personality.”
Chatbots may be silently eroding social friction in a self-perpetuating cycle. “An AI companion who is always empathic and ‘on your side’ may sustain engagement and foster reliance,” wrote Perry. “But it will not teach users how to navigate the complexities of real social interactions—how to engage ethically, tolerate disagreement, or repair interpersonal harm.”
Walking the line between constructive and sycophantic AI for emotional support won’t be easy. There are ways to instruct chatbots to be more critical. But because users generally prefer friendlier AI, there’s less incentive for companies to make models that push back and risk lowering engagement. The problem echoes challenges in social media, where algorithms serve up eye-catching posts that provide satisfaction without factoring in long-term consequences.
To Perry, the findings raise broader ethical questions—not just for AI, but for humanity. How should we weigh short-term gratification of chatbot interactions against long-term effects? Who sets that balance? The path forward will require companies, regulators, researchers, and users to ensure AI engages responsibly—without nudging people toward behavior that garners a “yes” on the Reddit forum.
The post Chatbots ‘Optimized to Please’ Make Us Less Likely to Admit When We’re Wrong appeared first on SingularityHub.
2026-03-31 07:38:44
The genetically engineered cells can be rewired to tackle a range of bacteria in the battle against antibiotic resistance.
A mixture of bacteria lounge in a dish. Like the bugs populating our guts, most are benign or beneficial. But a deadly strain hides among them. These bacteria can easily escape last-line antibiotics, rapidly spread, and cause mayhem.
But in this case, a single dose of genetically engineered cells hunts them down and wipes out nearly the entire population in a day, while leaving all the other harmless cells alone.
This strategy, called minicell therapy, fights fire with fire: Researchers engineer hunter cells by stripping bacteria of the ability to replicate and then genetically loading them up with proteins to home in on dangerous foes. The cells grab their targets and inject toxins into them, releasing a hurricane of chemicals that causes the bacteria’s insides to collapse.
Developed by a team at the University of Oxford, the approach is completely different than current defenses against bacteria, making it harder for dangerous bugs to develop resistance. It’s also fairly simple to reprogram the engineered cells to target different bacterial strains.
The work shows how synthetic biology can bring wholly new weapons to the fight against deadly bacteria resistant to antibiotics, the authors wrote.
Antimicrobial resistance is a critical global challenge projected to cause over 10 million deaths each year by 2050. Superbugs that dodge current treatments could spark the next pandemic, but our arsenal against them is dwindling.
Antibiotics work in different ways. Some puncture a bacteria’s protective wall, causing it to rupture. Others shut down protein production, damage DNA, or block metabolism to prevent growth.
Fighting bacteria is an evolutionary cat-and-mouse game. With time, bacterial genes mutate, and cells that escape one or many antibiotics grow, reproduce, and become dominant. Resistant bacteria can also share their genes with other cells to spread newly evolved defense systems.
Tweaking the chemical structure of an antibiotic buys some time. But what’s really needed are drugs that work in different ways. Unfortunately, the last new class of antibiotics now used in clinics dates back to the 1980s, followed by a decades-long lull. A novel class discovered in 2024 and the rise of AI-designed antibiotics have reinvigorated the field. But testing the candidates takes time, and they may not be able to catch up with the rapid spread of resistant bugs.
Other solutions are in the works. Phage therapy destroys bacteria with viruses and is already in clinical trials with initially positive results. Antibodies that neutralize bacterial toxins have also succeeded in early patient tests.
“However, these approaches face limitations such as stability issues, potential toxicity, and high manufacturing cost,” wrote the team.
Instead, they turned to an unusual creation called minicells to develop a completely new type of antibiotic. These cells, known more specifically as SimCells (short for “simple cells”), are made by stripping E. coli bacteria of their ability to replicate. Deleting an additional gene turns them into mini-SimCells that are roughly five times smaller.
Although some strains of E. coli can cause serious infections in the wild, the bacteria are reliable workhorses in research, synthetic biology, and biomanufacturing. They’re hardy, easy to grow, and plenty of tools already exist to genetically rewire their biology.
E. coli are also part of a growing effort to turn bacterial foes into living medicines to tackle conditions from metabolic disorders to cancer. Typically, benign probiotic strains are genetically modified to produce protein “bloodhounds” that help them seek out their cellular prey. Even familiar pathogens, like Salmonella, have been similarly repurposed. Once attenuated, they no longer cause disease and can be engineered to attack and inhibit cancer growth.
Though selected for safety, there’s a lingering risk of bacteria growing uncontrollably inside the body, triggering immune attacks, or escaping into the environment, wrote the team.
SimCells and their miniaturized cousin provide yet another layer of safety. Both are stripped of their native DNA so they can’t reproduce. But they retain all the other cellular machinery needed to survive and can make proteins from designer DNA. These cells are the perfect canvas for synthetic biology and have shown promise as shuttles for cancer drugs. One formulation even received “Fast-Track” status from the FDA to speed up development.
But they needed some biological rewiring to go after drug-resistant bacteria. The plan was to engineer SimCells and mini-SimCells that worked like “‘smart bioparticles’ to selectively eradicate pathogens, while sparing non-target bacteria,” the team wrote.
They first screened a library of nanobodies—tiny protein hooks that selectively latch onto a type of bacteria—and inserted genetic instructions for their chosen hooks into both types of designer cells. They then added another genetic payload encoding an enzyme that, with a small dose of aspirin, converted the drug into a chemical that produces hydrogen peroxide. After confirming the added genes, they introduced the cells into a dish full of bacteria.
The new cells were vicious. Their nanobodies guided them toward their prey and, when physically close, deployed their weapons. Nano-needles punctured the bacteria’s outer shell, releasing high doses of antimicrobial compounds—naturally made inside E. coli as a defense system—into their foes. The cells also pumped out hydrogen peroxide for several days, forming a toxic environment that ruptured the bacteria and prevented stragglers from dividing.
This one-two punch slowed bacterial growth within six hours. After a day, 97 percent of the target bacteria were gone. Another day drove elimination to 99.9 percent.
“This antimicrobial strategy provides both immediate and sustained antimicrobial effects” that could prevent infections from coming back, wrote the team. In another test, the researchers engineered a range of SimCells and mini-SimCells dotted with different nanobodies that also reliably fought off multiple types of common drug-resistant bacteria.
But bacterial strains don’t exist in isolation. A kaleidoscope of beneficial bacteria supports the gut, skin, and brain. These become collateral damage with classic antibiotic treatment. The new therapy was far more specific. Challenged with a mix of bacteria, the engineered cells precisely selected and killed their intended targets but left others unharmed.
The therapy is still early. How the designer cells work inside the human body, especially alongside immune cells, remains to be tested. But thanks to a promising safety profile in a cancer clinical trial, the team is optimistic their infection-fighting versions are safe.
Though there weren’t any signs of resistance over the years-long study, the bacteria might eventually develop it. Researchers will have to track the cells over more time.
The post Forget Antibiotics: These Killer Cells Wipe Out Deadly Superbugs in a Day appeared first on SingularityHub.
2026-03-28 22:00:00
This New Benchmark Could Expose AI’s Biggest Weakness
Mark Sullivan | Fast Company
“The influential AI researcher François Chollet has long argued that the field measures intelligence incorrectly, that popular benchmarks reward a model’s ability to memorize vast amounts of data rather than navigate novel situations and learn new skills. …The test, called ARC-AGI-3, may offer the clearest measurement yet of how close today’s AI agents are to human-level intelligence.”
You Can Now Buy a DIY Quantum Computer
Karmela Padavic-Callaghan | New Scientist ($)
“EduQit includes a chip made from tiny superconducting circuits, which is the heart of the quantum computer. There is also a special refrigerator that the chip is installed and wired into, along with a set of electronic devices that use radio waves and microwaves for controlling the chip and reading the results of its computations. All of this is combined with a smattering of racks, power cables and other devices that help complete the quantum computer.”
Scientists Create ‘Living Pharmacy’ Implant That Doses 3 Drugs at Once
Ed Cara | Gizmodo
“These tiny devices are jam-packed with genetically engineered cells that produce the desired medication. Once implanted inside the body, usually just underneath the skin, the cells can deliver the drug as needed without any fuss, while the device’s structure is intended to protect the cells from any immune response.”
The CPU Was Left for Dead by AI. Now AI Is Bringing It Back.
Robbie Whelan | The Wall Street Journal ($)
“For the past few years, central processing units, or CPUs…have been something of an afterthought in the world of artificial-intelligence computing. Now, thanks to how fast AI is changing, they are the belles of the ball. The explosion of so-called agentic AI has driven a wave of demand for CPUs, and chip companies are moving quickly to capitalize on it.”
What Happens If AI Makes Things Too Easy for Us?
Vanessa Bates Ramirez | IEEE Spectrum
“Psychological research has long shown that effortful engagement can deepen understanding and strengthen memory, sometimes described as ‘desirable difficulties.’ The authors worry that AI systems capable of instantly producing polished answers or highly responsive conversation may bypass these processes of learning and motivation.”
Computer Finds Flaw in Major Physics Paper for First Time
Matthew Sparkes | New Scientist ($)
“A computer language designed to robustly verify mathematical theorems and expose logical flaws has been turned towards a physics paper—and spotted an error. …The researcher behind the discovery says it is the first physics paper he has analyzed in this way, which raises a worrying question: how many more contain mistakes?”
‘Zombie’ Cells Created by Transplanting Genomes Into Dead Bacteria
Chris Simms | New Scientist ($)
“Some of the bacteria began to grow and divide normally and genetic tests showed they carried the synthetic genome. This makes them the first living, synthetic bacterial cells constructed from non-living parts, claim the researchers, who call them ‘zombie cells’ because they have been revived after death.”
We Could Protect Earth From Dangerous Asteroids Using a Huge Magnet
Leah Crane | New Scientist ($)
“The spacecraft itself would consist of a large magnet made from a coil of superconducting wire, about 20 meters in diameter, powered by a nuclear fission reactor. Small boosters would control its orbit around the asteroid, keeping it about 10 to 15 meters from the rock, so the magnet could act on the iron within the asteroid.”
A Billionaire-Backed Startup Wants to Grow ‘Organ Sacks’ to Replace Animal Testing
Emily Mullin | Wired ($)
“R3 Bio has a bold idea for replacing lab animals: genetically-engineered whole organ systems that lack a brain. The long-term goal, says a cofounder, is to make human versions. …Growing human organs from scratch has been a longtime goal of regenerative medicine, but the idea of body sacks raises a number of ethical questions about how these entities would be created, stored, and maintained—and if they would be capable of having awareness or feeling pain.”
The Hardest Question to Answer About AI-Fueled Delusions
James O’Donnell | MIT Technology Review ($)
“New research can’t yet say whether AI causes delusions or amplifies them, a distinction that will shape everything from high-profile court cases to safety rules for chatbots. …Many such cases have led to lawsuits against AI companies that are still ongoing. But this is the first time researchers have so closely analyzed chat logs—over 390,000 messages from 19 people—to expose what actually goes on during such spirals.”
This Scientist Rewarmed and Studied Pieces of His Friend’s Cryopreserved Brain
Jessica Hamzelou | MIT Technology Review ($)
“‘This brain is not alive,’ says John Bischof, who works on ways to cryopreserve human organs at the University of Minnesota. Still, Fahy’s research could help provide a tool to neuroscientists looking for new ways to study the brain. And while human reanimation after cryopreservation may be the stuff of science fiction, using the technology to preserve organs for transplantation is within reach.”
The post This Week’s Awesome Tech Stories From Around the Web (Through March 28) appeared first on SingularityHub.
2026-03-28 06:27:50
The three-phase plan calls for up to 30 robotic missions, including a fleet of rocket-powered moon hoppers.
The prospect of a sustained human presence beyond Earth orbit is rapidly shifting from science fiction to a near-term reality. NASA has announced an ambitious plan to build a permanent lunar base while also preparing to launch a Mars mission featuring the first interplanetary spacecraft to use nuclear propulsion.
Ever since his first term, returning humans to the moon has been a priority of President Donald Trump. And with NASA’s Artemis 2 mission—the first manned lunar mission in over 50 years—edging closer to the launchpad, that goal is looking more realistic.
This week, at a high-profile event called Ignition, NASA Administrator Jared Isaacman unveiled an ambitious new program whose centerpiece is a $20 billion lunar base to be constructed over the next seven years. He also announced plans to launch the first spacecraft to use nuclear propulsion since the 1960s to deliver a fleet of robotic helicopters to the surface of Mars.
“NASA is committed to achieving the near-impossible once again, to return to the moon before the end of President Trump’s term, build a moon base, establish an enduring presence, and do the other things needed to ensure American leadership in space,” Isaacman said in a press release.
The newly appointed head of the agency framed the plan as America’s response to a new era of great-power competition in space—a thinly veiled reference to China’s plans to land humans on the moon by 2030 and build its own lunar base.
The new moon base will be built in three phases, according to NASA, with the first involving a shift from infrequent, bespoke missions to regular and repeatable ones to test out the mobility, power generation, communications, and navigation technologies required to support a longer-term presence.
To achieve this, the agency plans to dramatically ramp up its Commercial Lunar Payload Services program—which enlists American private space companies to provide frequent, cost-effective cargo missions to the lunar surface—targeting up to 30 robotic landings starting in 2027. It also plans to use MoonFall hoppers, small robotic landers that use short, rocket-powered jumps to travel tens of kilometers, to hunt for useful resources, like ice, in hard-to-reach areas.
“We’re going to send them to do the prospecting, and potentially they could host a variety of payloads,” Carlos Garcia-Galan, program executive for the moon base at NASA, told Science.
In the second phase of the lunar base build-out, the agency will construct “semi‑habitable infrastructure” that can support regular astronaut operations on the moon’s surface, as well as the delivery of a pressurized rover from Japan’s space agency. The final stage will involve the delivery of heavier infrastructure needed for continuous human habitation, including multipurpose habitats being developed by Italy’s space agency and a lunar utility vehicle from Canada.
NASA also announced plans to pause work on its Gateway lunar orbital station, a key component of the original Artemis program that was designed as a staging post for manned missions to the lunar surface and later to Mars. The agency said it will attempt to repurpose some of the equipment developed for the facility to support other missions.
One of these could be another notable project announced at the Ignition event—the launch of a nuclear-powered interplanetary spacecraft called Space Reactor-1 Freedom to Mars by the end of 2028. The vehicle will rely on a device developed for the lunar space station that can convert heat from a roughly 20-kilowatt nuclear fission reactor into electric power for propulsion.
Once it reaches Mars, the spacecraft will deploy three robotic drones with designs based on the Ingenuity helicopter. Ingenuity completed 72 flights on Mars after arriving with the Perseverance rover in 2021. The drones will use cameras and subsurface radar to scour the planet for water ice and promising locations for future human landing sites.
Given recent turmoil at the agency and massive funding cuts originally proposed by the Trump administration, it remains to be seen whether NASA can pull off such an ambitious vision for the near future of space exploration. But the prospect of mankind having a permanent presence beyond Earth orbit looks closer than ever.
The post NASA Unveils Its $20 Billion Moon Base Plan—and a Nuclear Spacecraft for Mars appeared first on SingularityHub.
2026-03-26 22:00:00
Visual experiments suggest just a small fraction of the information our brains process enters awareness.
What can you see right now? This might seem like a silly question, but what enters your consciousness is not the whole story when it comes to vision. A great deal of visual processing in the brain goes on well below our conscious awareness.
Some studies have probed the unconscious depths of vision. One source of evidence comes from the neurological condition known as blindsight, which is caused by damage to areas of the brain involved in processing visual information. People with blindsight report that they are unable to see, either entirely or in a portion of their visual field. However, when asked to guess what is there, they can often do so with remarkable accuracy.
For example, in an experiment published in 2004 on someone with blindsight, a black bar was displayed in the portion of the visual field to which the person was blind. The person was asked to “guess” whether the bar was vertical or horizontal.
Despite denying any conscious awareness of the bar, the participant could answer correctly at a level well above chance. The participant even showed evidence of being able to pay attention to the bar—they were faster to respond when an arrow (placed in a healthy area of their visual field) correctly indicated the location of the bar.
The most popular interpretation (though not the only one) is that people with blindsight can see these objects, but not see them consciously. They see what is there, but it all goes on unconsciously, below their awareness.
The phenomenon of inattentional blindness seems to show you can see without the information crossing into your consciousness. Anyone can experience inattentional blindness. The phenomenon has been known about for a long time, but we can most easily get a handle on it by looking at a well-known experiment reported in 1999.
In this experiment, participants are shown a video of people playing basketball and told to count the number of passes between the players wearing white shirts. If you’ve never done this before, I urge you to stop reading now and watch the video.
In many cases, people are so busy counting the passes that they completely miss a large gorilla walking across the middle of the scene and beating its chest, then walking off. The gorilla’s right there, in the center of your visual field. Light from the gorilla enters your eyes and is processed in the visual system, but somehow you missed it, because you weren’t paying attention to it.
The gorilla has more to teach us. In another experiment reported in 2013, radiologists were given a series of lung scans. They were told to look for nodules (which show up as small, light-colored circles) on each scan. In one of the scans, a large picture of a dancing gorilla was superimposed on top of the lung scan. In this study, 83 percent of the radiologists failed to spot it, even though it was 48 times bigger than the average nodule they were looking for. Some of them even looked directly at the gorilla and still didn’t notice it!
The interpretation of these experiments is controversial. Some scientists suggest that in these kinds of cases, you consciously see the gorilla, but immediately forget it (although a dancing gorilla in someone’s lung doesn’t seem like the kind of thing you’d forget). Others argue that you see the gorilla, but the information never made its way into consciousness. You saw the gorilla, but unconsciously.
Let’s assume that in the case of blindsight, and inattentional blindness, the information is seen but didn’t make it all the way to consciousness. Then, the question is: What makes some information conscious, rather than the information that stays unconscious? This is one of the central questions for consciousness studies in philosophy, psychology, and neuroscience.
There’s no agreement on which is the best theory of consciousness, but in my opinion, the strongest contender is the global neuronal workspace theory.
According to this theory, consciousness is all to do with a particular area of the brain which is the seat of the “workspace.” The workspace is a system with a small capacity, so it can’t hold a lot of information at any one time. The job of the workspace is to take unconscious information and broadcast it to lots of different networks all across the brain. Global neuronal workspace theorists say that broadcasting the information in this way is what makes it conscious.
The job of the workspace is to act like the brain’s loudspeaker, and consciousness is the information that gets broadcast. The workspace takes unconscious information and boosts it so that many of the different systems in the brain hear about it and can use that information in their own processes. The late philosopher Daniel Dennett used to call consciousness “fame in the brain.” The workspace idea is similar.
One of the most striking implications of the global neuronal workspace theory is how little information makes it to consciousness. Since the workspace has quite a small capacity, it follows that we can only ever be conscious of a little at a time. We might think there’s a rich visual world in front of us, full of details, all of which we’re conscious of, but really—according to the theory—we’re only ever conscious of a small portion of that.
Some philosophers and scientists have objected to the theory on these grounds. They suggest that consciousness “overflows” the workspace: We are conscious of more information than can “fit” into the workspace at any one time. Even with these debates still ongoing, I think the global neuronal workspace theory gives us a reasonably clear answer to the question of what consciousness is for and how it interacts with other systems in the brain.
In our brains, consciousness is only the tip of a very large iceberg. But the global neuronal workspace theory might give us insight into what makes that tip so special.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post What We Actually See—and Don’t See—Shows Consciousness Is Only the Tip of the Iceberg appeared first on SingularityHub.
2026-03-25 03:38:32
In a step toward biological computing, brain organoids rewired their networks as they learned to balance a digital pole on a cart.
Try balancing a ruler vertically on the palm of your hand while walking. It’s not easy. Your eyes constantly track its movement. Your arm and hand make tiny adjustments to prevent tilting. All the while, your brain sparks with activity with one clear goal: Keep the ruler upright.
Scientists have now trained mini brains, or brain organoids, to master the same problem, simulated in the digital realm, with electrical zaps alone.
Mini brains have grown popular with researchers since their invention over a decade ago. Commonly made from stem cells, organoids are jam-packed with neurons that form densely connected networks. Earlier versions loosely resembled the developing brains of preterm babies; now they can mimic the neural wiring of a kindergartener. As the blobs become more sophisticated, scientists are asking: Can they learn?
In the new study, researchers challenged the mini brains with a classic engineering task similar to balancing a ruler on your hand. Mastering the task takes practice, but our brains are wired to receive feedback, often in the form of a small jolt of electrical activity. Called reinforcement learning, the technique has already been adapted to train AI—and now, mini brains too.
The goal isn’t to replace silicon-based controllers with living tissue. It’s to test the organoids’ ability to listen and learn, and to reveal where that ability breaks down.
“We’re trying to understand the fundamentals of how neurons can be adaptively tuned to solve problems,” study author Ash Robbins at the University of California, Santa Cruz, said in a press release. “If we can figure out what drives that in a dish, it gives us new ways to study how neurological disease can affect the brain’s ability to learn.”
Attaching living brain tissue to computers sounds like science fiction. But brain organoids have already made it reality.
These blobs of brain cells often start life as skin cells that have been turned back into stem cells. After bathing in a special cocktail of nutrients, they develop into various types of brain cells that self-organize into intricate three-dimensional structures similar to parts of the brain. Neurons form networks, ripple with electrical waves, and when connected to other tissues—such as an artificial spinal cord and lab-grown muscles—can control them.
Bioengineers have taken notice, envisioning organoids as potential living processors. Our brains use far less power and are more adaptable than the most advanced neuromorphic chips and brain-inspired AI. Brain organoids linked together into computers could theoretically enable computation in a dish at a fraction of the energy cost.
There are hints this blue-sky idea could work. Scientists have taught hundreds of thousands of isolated neurons to play the video games Pong and, more recently, Doom. Separately, researchers used cultured neurons to control the simple movements of a vehicle.
But mini brains are different. Unlike isolated neurons, organoids’ 3D structures and connections are harder to decipher. Yet predictable learning is essential to realizing “organoid intelligence.” Their electrical activity needs to rapidly adapt to inputs, strengthening or weakening circuits.
Reinforcement learning from trial and error is a perfect test. When we succeed at a new task, neurons in the brain’s reward center blast dopamine and rewire their connections. Failures don’t bring about similar activity. Over time, we learn not to touch a hot pan, take care when hammering a nail, and other life lessons.
But cortical organoids, which resemble the outermost part of the brain, lack neurons that communicate using dopamine. Can they still learn through experience?
The new study tackled the question with a hybrid organoid-computer system. The team grew cortical organoids from mouse stem cells. These then self-organized into neural networks and developed a layered structure within a month.
The researchers chose this type of brain organoid “due to the cortex’s well-established role in adaptive information processing and its ability to encode, decode, and modify responses to novel inputs,” they wrote.
The team embedded the brain blobs on a chip that captures their electrical pulses and interacts with a computer to “teach” the mini brains and process data. (The chip’s sensors don’t cover the entire organoid as more recent devices do.)
After recording spontaneous activity, the team figured out how best to stimulate the organoids and built a programmable system with a simple interface.
“From an engineering perspective, what makes this powerful is that we can record, stimulate, and adapt in the same system,” said study author Mircea Teodorescu.
Next, the team challenged the organoids with the cartpole problem, a classic engineering task that asks the player to balance an upright pole on a moving cart. If the pole tips over a certain angle, it’s a fail. The player has to constantly adjust the cart as its cargo wobbles.
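The cartpole task itself is a standard control benchmark. The article doesn’t give the study’s simulation details, so the sketch below uses the textbook cartpole equations with commonly assumed parameters (cart and pole masses, time step, and a hypothetical 12-degree fail angle) and simple Euler integration, just to make the failure condition concrete:

```python
import math

# Classic cartpole dynamics (assumed standard parameters, not the study's).
GRAVITY = 9.8          # m/s^2
CART_MASS = 1.0        # kg
POLE_MASS = 0.1        # kg
POLE_HALF_LEN = 0.5    # m
DT = 0.02              # seconds per simulation step
FAIL_ANGLE = 12 * math.pi / 180  # hypothetical tip-over threshold (radians)

def step(x, x_dot, theta, theta_dot, force):
    """Advance the cart-pole one time step under a horizontal force on the cart."""
    total_mass = CART_MASS + POLE_MASS
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    temp = (force + POLE_MASS * POLE_HALF_LEN * theta_dot**2 * sin_t) / total_mass
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_HALF_LEN * (4.0 / 3.0 - POLE_MASS * cos_t**2 / total_mass))
    x_acc = temp - POLE_MASS * POLE_HALF_LEN * theta_acc * cos_t / total_mass
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

def run_trial(controller, max_steps=500):
    """Count how many steps the pole stays upright under a given controller."""
    state = (0.0, 0.0, 0.02, 0.0)  # start with a small tilt
    for t in range(max_steps):
        force = controller(state)
        state = step(*state, force)
        if abs(state[2]) > FAIL_ANGLE:
            return t  # pole fell
    return max_steps
```

With no corrective force (`run_trial(lambda s: 0.0)`), the pole tips past the threshold within a second or so of simulated time; the controller, whether silicon or organoid, has to keep nudging the cart to prevent that.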
To train the organoids, the scientists delivered electrical zaps after the pole tipped too far to either side and tracked the responses. In essence, the mini brains played a video game, with human coaches nudging them toward success. The team grouped performance—how long the system balanced the pole—into sets of five trials, each ending when the pole fell. If the most recent performance improved over the previous 20 trials, they considered it a success and delivered no zaps. If performance didn’t improve, the team gave the organoids a zap.
“You could think of it like an artificial coach that says, ‘you’re doing it wrong, tweak it a little bit in this way,’” said Robbins.
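The feedback rule described above (blocks of five trials, judged against the 20 trials before them, with a corrective zap only when performance fails to improve) can be sketched as follows. This is a plain reading of the article’s description, not the study’s actual code; comparing block means is an assumption:

```python
BLOCK = 5      # trials per performance block
HISTORY = 20   # look-back window of earlier trials

def coach_feedback(balance_times):
    """Decide whether to deliver a corrective zap after the latest block.

    `balance_times` is the list of per-trial balance durations so far.
    If the mean of the most recent five trials beats the mean of the 20
    trials before them, the organoid is left alone (success); otherwise
    it receives a zap.
    """
    if len(balance_times) < BLOCK + HISTORY:
        return None  # not enough trials yet to judge
    recent = balance_times[-BLOCK:]
    baseline = balance_times[-(BLOCK + HISTORY):-BLOCK]
    improved = sum(recent) / BLOCK > sum(baseline) / HISTORY
    return "no zap" if improved else "zap"
```

The closed loop is what matters: the organoid’s output drives the cart, the pole’s fate decides the stimulation, and the stimulation in turn reshapes the organoid’s activity.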
Compared to random or no zaps, the rewarding zaps boosted the success rate from 4.5 to 46.5 percent in continuous trials, suggesting the organoids learned from electrical cues alone—without dopamine. A closer look showed the cells released another chemical that strengthens neural connections, and blocking the process prevented them from learning.
“This demonstrates that biological neural networks can be systematically modified through precise electronic control,” wrote the team.
However, the learning didn’t last. After roughly 45 minutes without stimulation, the organoids’ performance reset to baseline. Their fleeting memory may reflect the lack of neural highways required for long-term memory. The team is now culturing multiple types of brain organoids together—each mimicking a different region—to potentially preserve learning and memory.
“These are incredibly minimal neural circuits. There’s no dopamine, no sensory experience, no body to sustain, no goals to pursue,” said Keith Hengen at Washington University in St. Louis, who did not participate in the study. But they could still be nudged toward solving a real control problem. “That tells us something important: The capacity for adaptive computation is intrinsic to cortical tissue itself, separate from all the scaffolding we usually assume is necessary.”
The post These Mini Brains Just Learned to Solve a Classic Engineering Problem appeared first on SingularityHub.