Singularity Hub offers daily news coverage, feature articles, analysis, and insights on key breakthroughs and future trends in science and technology.

This AI Found Signs Comatose Patients Were Waking Up Days Before Doctors Did

2025-09-17 05:47:23

The algorithm discovered early signals of consciousness in extremely subtle facial movements.

Imagine waking up in a hospital room. The last thing you remember is a terrible car crash. A doctor holds your hand and asks you to squeeze it. You try as hard as you can, but nothing happens—not even a twitch.

“I’m afraid he’s in a coma,” you hear the doctor say. But I’m conscious, you want to yell.

People with traumatic head injuries, often from car accidents, can seem completely unresponsive to the outside world. But many experience “covert consciousness”: their brains respond to commands, even though they can’t translate those responses into eye blinks, finger twitches, or other movements obvious enough for clinicians and nurses to detect.

Although brain imaging techniques can sometimes capture signs a person is internally awake and trying to follow instructions, these methods are expensive and difficult to use for everyday monitoring while doctors and the patients’ families wait for them to wake up.

“Some people with severe brain injuries may appear unconscious, but still have some awareness and the ability to move,” wrote Sima Mofakham and colleagues at Stony Brook University in a new study. It’s just that “these movements are often too small to be seen by doctors during routine exams.”

The study, led by Mofakham, used computer vision to track tiny facial movements in seemingly unconscious patients. The AI tool, dubbed SeeMe, zeroed in on extremely minute movements, down to the level of single pores in the skin.

The tool detected early signs of covert consciousness in roughly 90 percent of patients, about four days before physicians did. The study also found the number and strength of these tiny twitches corresponded to how well patients had recovered by the time they were discharged.

Early detection of consciousness could make recovery less distressing for a person who’s just waking up. Knowing a patient is aware could help doctors decide when to begin rehabilitation, which is associated with better health outcomes. The technology may also one day be used to monitor real-time treatments for brain damage from stroke and other injuries.

Stairway to Consciousness

We often think of consciousness as a light switch. Flip it on, and you’re aware of both the outside world and yourself; flip it off, and awareness goes dark.

But consciousness is more like a light dimmer. After a blow to the brain, people can fall into a minimally conscious state. Here, they experience intermittent awareness and can follow commands, such as when a doctor says “look left” or “squeeze my hand.” More severe is the vegetative state. Patients in this state can open or close their eyes in cycles, but they can no longer respond to outside stimulation.

In especially traumatic injuries, the patient goes into a coma, where they’re not aware of themselves and others, can’t move, and can’t be awakened.

Despite the odds, unresponsive people can recover mental awareness—often sooner than their observable behavior would suggest. In one study, a person in a vegetative state showed relevant brain activity when asked to imagine playing tennis or moving around her house, even though she couldn’t physically respond.

More recently, a landmark brain imaging study found that at least a quarter of 353 people with severe brain injuries, all of whom had been deemed unconscious, showed signs of awareness in their brain activity when given voice commands. Most did not react to a battery of standard clinical tests for responsiveness.

But brain imaging tests, while powerful, are expensive and impractical for everyday clinical use. Rather than looking into the brain, the team behind the new study took a page out of the clinician’s playbook by linking tiny facial movements to diagnostics and recovery.

Now You See Me

The face is a window on the brain. Its muscles are controlled by large areas across both of the brain’s hemispheres. Any early signs of recovery are likely to show up first in facial movements.

The team recruited 16 healthy volunteers and 37 people with brain injuries who, outwardly, appeared to be in a coma. They then analyzed video recordings of the participants being asked to do three tasks: “Stick out your tongue,” “open your eyes,” and “show me a smile.”

The tasks chosen involved multiple facial regions and muscles to better gauge brain activity, the authors wrote.

The new AI tool, SeeMe, then tracked facial movements—down to the level of individual pores—in response to the commands. A group of trained medical professionals also reviewed the videos and were asked for their expert opinions.
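
The paper’s exact pipeline isn’t published here, but pore-level motion tracking of this kind is conceptually similar to measuring dense optical flow over a facial region of interest around the moment a command is spoken. Below is a minimal sketch of that idea in Python with OpenCV; the function name, region handling, and scoring are illustrative assumptions, not the SeeMe code.

```python
import cv2
import numpy as np

def command_response_score(video_path, roi, cmd_frame, fps=30, window_s=5):
    """Rough covert-response score: compare facial motion in a region of
    interest (e.g., the mouth) after a spoken command against a pre-command
    baseline. Illustrative sketch only -- not the SeeMe pipeline."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    prev, motion = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]
        if prev is not None:
            # Dense optical flow picks up sub-pixel shifts of skin texture (pores).
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            motion.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray
    cap.release()
    motion = np.array(motion)
    win = int(window_s * fps)
    baseline = motion[max(0, cmd_frame - win):cmd_frame]
    response = motion[cmd_frame:cmd_frame + win]
    # Post-command motion relative to baseline noise; large values suggest a response.
    return (response.mean() - baseline.mean()) / (baseline.std() + 1e-9)
```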

The AI captured eye responses in 30 patients and nearly all of their mouth movements, with a success rate nearly double that of the physicians. SeeMe was especially sensitive to tiny twitches that evaded the human eye.

The tool also flagged earlier signs of consciousness. In one deeply comatose volunteer, an older man who had been in a car crash, the AI detected mouth movements on day 18 after admission; he finally responded to motor commands on day 37. SeeMe also found signs of eye and mouth movements 19 days after admission in another participant, who was in a coma after a traffic accident. He opened his eyes three days later and went on to gradually recover.

Across the board, SeeMe detected eye-opening responses roughly four days before standard tests picked them up and mouth-related reactions about eight days earlier. The AI’s detections also correlated with how well patients recovered at discharge and at six months—that is, they increasingly regained awareness and could do rehab.

SeeMe is intended to complement, not replace, long-term follow-up and care. Comatose patients are “an exceedingly challenging population to study,” wrote the team. Some people may have had fluctuations in awareness that weren’t captured in the study. Others may simply not have wanted to participate.

A lack of early detection of consciousness “should never be interpreted as the absence of potential” that the patient can regain awareness, the authors explained.

To further fine-tune the AI, the team hopes to gather information on people who regained consciousness but were initially missed by SeeMe. They also aim to incorporate other objective measures of movement, such as electrical signals in muscles. SeeMe could even help monitor people presumed unconscious for longer periods than those covered in the study.

For patients and families, further work could result in a “yes or no” communication system based on facial movements, one that might allow patients and their loved ones to “talk” to each other again.

The post This AI Found Signs Comatose Patients Were Waking Up Days Before Doctors Did appeared first on SingularityHub.

Scientists Hope 3D-Printed Skin Can Bring On-Demand Treatment for Serious Injuries

2025-09-16 00:24:10

New bioprinting techniques make it possible to 3D print skin with complex networks of blood vessels.

Bioprinting holds the promise of producing tissues and organs on demand, but efforts have been held back by our inability to create the networks of blood vessels required to sustain them. Two complementary new technologies could now solve the problem for advanced skin grafts.

The skin is probably one of the body’s most underappreciated organs. Not only does it provide a crucial barrier against germs, toxins, and radiation, but it also helps regulate temperature and water loss and acts as a vital sensory organ mediating our sense of touch and pain.

Serious injuries to the skin, in particular burns, are usually treated by transplanting a thin layer of epidermis, the top layer of skin, from elsewhere on the body. But many of the structures supporting the skin’s critical functions, such as blood vessels, nerves, and hair follicles, are actually found in the layer below, known as the dermis.

It’s usually impossible to transplant the dermis because it would leave behind a wound as severe as the one being treated. So, traditional skin grafts normally don’t restore full function and can lead to severe scarring.

Now, researchers from Linköping University in Sweden have developed two new bioprinting techniques—essentially 3D printing with biological materials—that could produce skin grafts perfused with blood vessels that replicate the complex structure of the dermis. The first approach involves injecting a cell-laden gel into a wound that can then grow into functional tissue. The second uses hydrogel threads to create channels that can become blood vessels.

“The dermis is so complicated that we can’t grow it in a lab. We don’t even know what all its components are. That’s why we, and many others, think that we could possibly transplant the building blocks and then let the body make the dermis itself,” Johan Junker at Linköping University, who led the study, said in a press release.

The researchers first developed a specially designed “bioink” containing cells known as fibroblasts. These are the most common cells in the dermis and produce important dermal ingredients such as collagen, elastin, and hyaluronic acid, according to Wired.

The researchers grew these cells on tiny beads of gelatin and then mixed them with hyaluronic acid to create a gel. Pressure turns the gel into a liquid that can be extruded through the nozzle of a 3D printer before becoming gel-like again.
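
That pressure-dependent switch from gel to liquid is characteristic of a shear-thinning material. As a rough illustration (the parameters below are made up, not measurements from the study), a power-law fluid model shows how apparent viscosity drops as the shear rate climbs inside a printer nozzle:

```python
def apparent_viscosity(shear_rate, k=50.0, n=0.3):
    """Power-law (Ostwald-de Waele) model: eta = k * gamma_dot**(n - 1).
    n < 1 means shear thinning; k and n here are illustrative, not measured."""
    return k * shear_rate ** (n - 1)

# Roughly at rest (low shear) vs. flowing through a nozzle (high shear).
for rate in [0.1, 1, 10, 100]:  # shear rate in 1/s
    print(f"shear rate {rate:>5} 1/s -> apparent viscosity {apparent_viscosity(rate):8.1f} Pa*s")
```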

The researchers used this “skin in a syringe,” as they’ve dubbed their invention, to create small disks that they then transplanted under the skin of mice. In results published in Advanced Healthcare Materials, the researchers reported the living cells produced various substances crucial for growing a new dermis, such as collagen, and new blood vessels even grew in the graft.

The ability to grow new blood vessels will be key if we’re to make usable skin grafts. Creating functional vascular networks has been a long-standing challenge in tissue engineering efforts, as without them it’s impossible to deliver nutrients and oxygen into larger, more complex structures.

The second technique the researchers developed could go even further towards solving this problem. In another paper, published in the same journal, they showed they could print threads of a water-based substance known as a hydrogel into tissues.

These threads can be arranged in complex patterns and then dissolved by the application of a simple enzyme, according to Wired, leaving a tube-like cavity in which new blood vessels could be grown. By combining the two technologies it should eventually be possible to create a fully functional artificial dermis.

As ever, getting these technologies out of the lab will be a long and uncertain journey. But they bring us one step closer to an on-demand treatment for the most serious skin injuries.

The post Scientists Hope 3D-Printed Skin Can Bring On-Demand Treatment for Serious Injuries appeared first on SingularityHub.

This Week’s Awesome Tech Stories From Around the Web (Through September 13)

2025-09-13 22:00:00

Future

AI Could Make the Smartphone Passé. What Comes Next? | Brian X. Chen and Tripp Mickle | The New York Times

“Every major tech company is thinking about this million-dollar question: What comes after the smartphone? Here is a list of predictions from current and former employees of some of the world’s largest tech companies, including Apple, Google, Samsung, Amazon, and Meta.”

Computing

Good Old IBM Is Leading the Way in the Race for ‘Quantum Advantage’ | Christopher Mims | The Wall Street Journal

“IBM hasn’t been associated with breakthrough innovation since its Watson AI won ‘Jeopardy!’ in 2011. But quantum computing, which could see a breakthrough to commercialization by 2030, gives the 114-year-old stalwart of business computing a chance to reclaim some of its past glory.”

Robotics

Reality Is Ruining the Humanoid Robot Hype | Evan Ackerman | IEEE Spectrum

“Future projections seem to be based on an extraordinarily broad interpretation of jobs that a capable, efficient, and safe humanoid robot—which does not currently exist—might conceivably be able to do. Can the current reality connect with the promised scale?”

Robotics

We Are Entering a Golden Age of Robotics Startups—and Not Just Because of AI | Rebecca Szkutak | TechCrunch

“Investors poured $6 billion into robotics startups in the first seven months of 2025 according to Crunchbase data. The data company predicts that this year’s funding totals will eclipse 2024, making it one of the only non-AI categories to experience a boost in funding.”

Future

New Pathway Engineered Into Plants Lets Them Suck Up More CO₂ | John Timmer | Ars Technica

“It would be nice to think that we could reforest our way out of the mess we’re creating, but recent studies have indicated there’s simply not enough productive land for this to work out. One alternative might be to get plants to take up carbon dioxide more efficiently.”

Space

Rendezvous Robotics Exits Stealth With $3M to Build Reconfigurable Space Infrastructure | Aria Alamalhodaei | TechCrunch

“Instead of astronauts and robotic arms, Rendezvous is betting on autonomous swarm assembly and electromagnetism. The company is commercializing a technology called ‘tesserae,’ flat-packed modular tiles that can launch in dense stacks and magnetically latch to form structures on orbit. With a software command, the tiles are designed to unlatch and rearrange themselves when the mission changes.”

Future

Will AI Choke Off the Supply of Knowledge? | Greg Ip | The Wall Street Journal

“When humans answer questions, such as whether Einstein should be energy secretary, they often pursue novel avenues of inquiry, creating new knowledge and insight as they go. They do this for a variety of reasons: salary, wealth, fame, tenure, ‘likes,’ clicks, curiosity. If LLMs come to dominate the business of answering questions, those incentives shrivel.”

Energy

Geothermal Is Too Expensive, but Dig Energy’s Impossibly Small Drill Rig Might Fix That | Tim De Chant | TechCrunch

“The startup, which has been operating in stealth for the last five years, developed the water-jet drilling rig in an effort to make geothermal heating and cooling so inexpensive that it will displace fossil fuel boilers and furnaces. The rig is central to that, promising to slash drilling costs by up to 80%.”

Science

A Single, ‘Naked’ Black Hole Rewrites the History of the Universe | Charlie Wood | Quanta Magazine

“A black hole unlike any seen before has been spotted in the early universe. It’s huge and appears to be essentially on its own, with few stars circling it. The object, which may represent a whole new class of enormous ‘naked’ black holes, upends the textbook understanding of the young universe.”

Future

Pay-Per-Output? AI Firms Blindsided by Beefed Up Robots.txt Instructions | Ashley Belanger | Ars Technica

“Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.”

Artificial Intelligence

The Software Engineers Paid to Fix Vibe Coded Messes | Emanuel Maiberg | 404 Media

“The alleged benefit of vibe coding, which refers to the practice of building software with AI-coding tools without much attention to the underlying code, is that it allows anyone to build a piece of software very quickly and easily. …[But] if the resulting software is so poor you need to hire a human specialist software engineer to come in and rewrite the vibe coded software, it defeats the entire purpose.”

Biotechnology

Scientists Infuse Cement With Bacteria to Create Living Energy Device | Gayoung Lee | Gizmodo

“‘We envision this technology being integrated into real buildings, in walls, foundations, or bridges, where it can support renewable energy sources like solar panels by providing local energy storage,’ Luo said. ‘Imagine a regular room built with bacteria-infused cement: Even at a modest energy density of 5 Wh/kg, the walls alone could store about 10 kWh—enough to keep a standard enterprise server running for a whole day.'”

The post This Week’s Awesome Tech Stories From Around the Web (Through September 13) appeared first on SingularityHub.

I Got an AI to Impersonate Me and Teach Me My Own Course—Here’s What I Learned About the Future of Education

2025-09-12 22:00:00

I asked an AI agent to play the role of me, an Oxford lecturer on media and AI, and teach me a personal master’s course, based entirely on my own work.

Imagine you had an unlimited budget for individual tutors offering hyper-personalized courses that maximized learners’ productivity and skills development. This summer I previewed this idea—with a ridiculous and solipsistic test.

I asked an AI tutor agent to play the role of me, an Oxford lecturer on media and AI, and teach me a personal master’s course, based entirely on my own work.

I set up the agent via an off-the-shelf ChatGPT tool hosted on the Azure-based Nebula One platform, with a prompt to research and impersonate me, then build personalized material based on what I already think. I didn’t tell the large language model (LLM) what to read or do anything else to enhance its capabilities, such as giving it access to learning materials that aren’t publicly available online.
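
The article doesn’t share the actual Nebula One configuration or prompt. As a rough sketch of the general setup, an impersonation tutor built on a plain chat-completion API might look like the following; the model name and prompt text are placeholders, not the author’s.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder system prompt, paraphrasing the setup described in the article.
SYSTEM_PROMPT = """You are an AI tutor impersonating an Oxford lecturer on media
and AI. Research the lecturer's publicly available books, articles, and talks,
then design and teach a term-long, six-module master's course built entirely on
their published ideas. Teach interactively: switch formats often, quiz the
student, and give instant feedback."""

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def ask_tutor(message: str) -> str:
    history.append({"role": "user", "content": message})
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the article doesn't say which model version ran the course
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask_tutor("Outline module one and set my first exercise."))
```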

The agent’s course in media and AI was well structured—a term-long, original six-module journey into my own collected works that I had never devised, but admit I would have liked to.

It was interactive and rapid-fire, demanding mental acuity via regular switches in formats. It was intellectually challenging, like good Oxford tutorials should be. The agent taught with rigor, giving instant responses to anything I asked. It had a powerful understanding of the fast-evolving landscape of AI and media through the same lens as me, but had done more homework.

This was apparently fed by my entire multimedia output—books, speeches, articles, press interviews, even university lectures I had no idea had been recorded, let alone used to train GPT-4 or GPT-5.


The course was a great learning experience, even though I supposedly knew it all already. So in the inevitable student survey, I gave the agentic version of me well-deserved, five-star feedback.

For instance, in a section discussing the ethics of non-player characters (NPCs) in computer games, it asked:

If NPCs are generated by AI, who decides their personalities, backgrounds, or morals? Could this lead to bias or stereotyping?

And:

If an AI NPC can learn and adapt, does it blur the line between character and “entity” [independent actor]?

These are great, philosophical questions, which will probably come to the fore when and if Grand Theft Auto 6 comes out next May. I’m psyched that the agentic me came up with them, even if the real me didn’t.

Agentic me also built on what real me does know. In film, it knew about bog-standard Adobe After Effects, which I had covered (it’s used for creating motion graphics and visual effects). But it added Nuke, a professional tool used to combine and manipulate visual effects in The Avengers, which (I’m embarrassed to say) I had never heard of.

The Course Reading List

So, where did the agent’s knowledge of me come from? My publisher Routledge did a training data deal with OpenAI, which I guess could cover my books on media, AI, and live experience.

Unlike some authors, I’m up for that. My books guide people through an amazing and fast-moving subject, and I want them in the global conversation, in every format and territory possible (Turkish already out, Korean this month).

That availability has to extend to what is now potentially the most discoverable “language” of all, the one spoken by AI models. The priority for any writer who agrees with this should be AI optimization: making their work easy for LLMs to find, process, and use—much like search engine optimization, but for AI.

To build on this, I further tested my idea by getting an agent powered by China’s DeepSeek to run a course on my materials. When I found myself less visible in its training corpus, it was hard not to take offense. There is no greater diss in the age of AI than a leading LLM deeming your book about AI irrelevant.

When I experimented with other AIs, they had issues getting their facts straight, which is very 2024. From Google’s Gemini 2.5 Pro, I learned hallucinatory biographical details about myself, such as a role running the media company The Runaway Collective.

When I asked Elon Musk’s Grok what my best quote was, it said: “Whatever your question, the answer is AI.” That’s a great line, but Google DeepMind’s Nobel-winning Demis Hassabis said it, not me.

Where We’re Heading

This whole, self-absorbed summer diversion was clearly absurd, though not entirely. Agentic self-learning projects are quite possibly what university teaching actually needs: interactive, analytical, insightful, and personalized. And there is some emerging research on their value. A German-led study found that AI-generated feedback helped motivate secondary school students and benefited their exam revision.

It won’t be long before we start to see this kind of real-time AI layer formally incorporated into school and university teaching. Anyone lecturing undergraduates will know that AI is already there. Students use AI transcription to take notes. Lecture content is ripped in seconds from these transcriptions and will have trained a dozen LLMs within the year. To assist with writing essays, ChatGPT, Claude, Gemini, and DeepSeek/Qwen are the sine qua non of Gen Z projects.

But here’s the kicker. As AI becomes ever more central to education, the human teacher becomes more important, not less. They will guide the learning experience, bringing published works to the conceptual framework of a course and driving in-person student engagement and encouragement. They can extend their value as personal AI tutors—via agents—for each student, based on individual learning needs.

And where do younger teachers, who don’t have a back catalog to train LLMs, fit in? Well, the younger the teacher, the more AI-native they are likely to be. They can use AI to flesh out their own conceptual vision for a course, widening the research beyond their own work by prompting the agent on what should be included.

In AI, two alternate positions are often simultaneously true. AI is both emotionally intelligent and tone deaf. It is both a glorified text predictor and a highly creative partner. It is costing jobs, yet creating them. It is dumbing us down, but also powering us up.

So too in teaching. AI threatens the learning space, yet can liberate powerful interaction. A prevailing wisdom is that it will make students dumber. But perhaps AI could actually be unlocking for students the next level of personalisation, challenge and motivation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post I Got an AI to Impersonate Me and Teach Me My Own Course—Here’s What I Learned About the Future of Education appeared first on SingularityHub.

In a First, Scientists Record Decision-Making as It Happens Across a Whole Mouse Brain

2025-09-12 03:43:51

An extensive brain map shows how regions collaborate during complex decision-making.

We’re constantly making decisions. If I get the pumpkin spice latte, would it make me happier than my usual black coffee? If I go the scenic route on a trip, would it be worth the extra time?

Past and current experiences affect each decision. From brain imaging, scientists have long known that multiple regions collaborate to pull in memories and integrate them with what we’re seeing, hearing, and thinking when weighing options. But because the resolution of that imaging is relatively low, we’ve only had a rough sketch of the intricate neural connections involved.

A global collaboration is now digging deeper. In a technological feat, the International Brain Laboratory released a large, dynamic brain map of mice navigating a difficult decision-making task.

Launched in 2017, the group seeks to link brain activity with behavior, one of the holy grails in neuroscience. It’s been an uphill struggle. Prior attempts could only measure small regions, and individual teams used their own behavioral tests, making it difficult to integrate data.

The new collaboration gathered neural electrical recordings in mice from multiple labs across the globe using a standardized procedure. Overall, the scientists used nearly 700 brain implants to record neural activity in 139 mice, capturing the activity of 620,000 neurons across the brain.

“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making. The scale is unprecedented … [which] together represent[s] 95 percent of the mouse brain volume,” said study author Alexandre Pouget at the University of Geneva in a press release.

Should I Stay or Should I Go?

Despite decades of research, scientists still don’t fully understand how we make up our minds.

Say you’re hiking and encounter a bear. The brain immediately goes into hyper mode: The visual cortex identifies the brown thing as a bear and transmits this to the brain’s emotion centers. The latter activate to produce a sense of fear and ask the memory regions what to do. These calculations direct a motor response—back away, make yourself big, or quick-draw that bear spray.

Multiple neural networks fire up to reach the decision, but scientists are divided on how the system works. One camp thinks the brain could combine memories—say, YouTube videos of how to avoid bears—with the fact you’re seeing a bear in high-level brain regions. This hypothesis predicts memories, or prior information, only inform actions at later stages.

Another camp believes the opposite. Rather than waiting until the last second, all regions of the brain—including early sensory systems—incorporate memories to decide the best response. This process could better spread communication throughout the brain.

Neural Gambling

Like spies tapping phone lines, the authors of the new study hoped to settle the debate by listening in on the chatter of hundreds of thousands of brain cells.

The effort piggy-backed on an International Brain Laboratory dataset that used 699 Neuropixels probes, an open-source brain implant, to record the electrical firing of individual neurons in mice. The team strategically placed the devices across nearly 280 brain regions in over a hundred mice. They tried to keep the recordings relatively uniform, with every collaborating lab running the same task.

“The scale is unprecedented as we recorded from over half a million neurons across mice…which together represent 95 percent of the mouse brain volume,” said Pouget.

Every lab taught the critters to perform the same difficult challenge. Each mouse entered an arcade of sorts and was shown a black-and-white grating—think zebra skin—on either the left or right side of a screen. They then had to use their front paws to turn a tiny wheel, moving the image to the center within a minute.

If they succeeded, they got a tasty reward. If they failed, they were blasted with a pop of white noise and a short time-out. Between trials, the mice tried to keep their paws on the wheel as they waited for the next test.

Here’s the crux: The game was rigged. There was an 80 percent chance the grating would appear on one side, teaching the mice that side was the best bet. As trials went on, the grating slowly faded to the point it was almost impossible to see. The mice then had to decide whether to move it left or right based on what they’d previously learned as a best guess.
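
To make the role of priors concrete, here is a toy simulation, not the study’s analysis code, of an idealized observer in a task like this one: when the grating is clear, the stimulus dominates the choice, and as the contrast fades, the decision falls back on the learned 80/20 prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(prior_right=0.8, contrast=1.0, noise=0.5, n_trials=10_000):
    """Toy Bayesian observer: combine a noisy stimulus reading with the block
    prior. Illustrative only -- not the International Brain Laboratory's code."""
    side = rng.random(n_trials) < prior_right           # True = grating on the right
    signal = np.where(side, contrast, -contrast)        # signed stimulus strength
    evidence = signal + rng.normal(0, noise, n_trials)  # what the senses report
    # Posterior log-odds = log likelihood ratio + log prior odds.
    log_lr = 2 * contrast * evidence / noise**2
    log_prior = np.log(prior_right / (1 - prior_right))
    choose_right = (log_lr + log_prior) > 0
    return (choose_right == side).mean()

for c in [1.0, 0.5, 0.1, 0.0]:
    print(f"contrast {c}: accuracy {simulate(contrast=c):.2f}")
```

At zero contrast the observer can only follow the prior, so accuracy settles near 80 percent, the same fallback the mice appear to rely on when the grating becomes invisible.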

Each lab recorded brain signals as the mice made their choices and sent the data to a central database. In all, the consortium isolated nearly 76,000 neuron activity patterns across the brain. The recording sites were then mapped onto anatomical brain regions using two-photon microscopy, lining up the electrical activity with the brain’s anatomy.

“We’d seen how successful large-scale collaborations in physics had been at tackling questions no single lab could answer, and we wanted to try that same approach in neuroscience,” said study author Tom Mrsic-Flogel at University College London. “The brain is the most complex structure we know of in the universe and understanding how it drives behavior requires international collaboration on a scale that matches that complexity.”

A Brainy Universe

Using data from the new brain map, the team realized that decision-making isn’t linear. Instead, multiple brain regions—including so-called “early” sensory ones—contribute to the final choice.

For example, brain regions in mice that process visual information sparked with activity upon seeing the grating. That activity then spread and ramped up in a wave-like pattern towards brain regions associated with emotion. These signals guided the mice to incorporate previous learning, called priors, into final decisions on what to do—move the wheel left or right.

Before, scientists thought priors were encoded in brain regions related to memory and higher cognition. But the new map suggests their signals also influence the early sensory processing regions that contribute to eventual responses.

“The efforts of our collaboration generated fundamental insights about the brain-wide circuits that support complex cognition,” said study author Anne Churchland at UCLA. “This is really exciting and a major step forward relative to the ‘piecemeal’ approach (1-2 brain areas at a time) that was previously the accepted method in the field.”

The International Brain Laboratory is releasing the entire database, with the goal of eventually understanding the brain’s computations within and across the regions behind decision-making. The dataset could also shed light on neurological disorders that impair decision-making, such as obsessive-compulsive disorder, Parkinson’s disease, and addiction.

The post In a First, Scientists Record Decision-Making as It Happens Across a Whole Mouse Brain appeared first on SingularityHub.

This Crawling Robot Is Made With Living Brain and Muscle Cells

2025-09-10 07:10:10

Scientists want to know if a biohybrid robot can form a long-lasting biological “mind” to direct movement.

It’s a bizarre sight: With a short burst of light, a sponge-shaped robot scoots across a tiled surface. Flipped on its back, it repeatedly twitches as if doing sit-ups. By tinkering with the light’s frequency, scientists can change how fast the strange critter moves—and how long it needs to “rest” after a long crawl.

Soft robots are nothing new, but the spongy bot stands out in that it blends living muscle and brain cells with a 3D-printed skeleton and wireless electronics. The neurons, genetically altered to respond to light, trigger neighboring muscles to contract or release.

Watching the robot crawl around is amusing, but the study’s main goal is to see if a biohybrid robot can form a sort of long-lasting biological “mind” that directs movement. Neurons are especially sensitive cells that rapidly stop working or even die outside of a carefully controlled environment. Using blob-like amalgamations of different types of neurons to direct muscles, the sponge-bots retained their crawling ability for over two weeks.

Scientists have built biohybrid bots that use electricity or light to control muscle cells. Some mimic swimming, walking, and grabbing motions. Adding neurons could further fine-tune their activity and flexibility and even bestow a sort of memory for repeated tasks.

These biohybrid bots offer a unique way to study motion, movement disorders, and drug development without lab animals. Because their components are often compatible with living bodies, they could be used for diagnostics, drug delivery, and other medical scenarios.

Squishy But Powerful

The word robot often conjures images of Terminator’s metal T-800. Soft robots have the potential to be far more flexible and agile. Being able to slightly deform lets them squeeze through tiny spaces, monitor fragile ecosystems like coral reefs, explore the deep sea, and potentially snake through the body with minimal damage to surrounding tissues.

In addition to synthetic materials and mechanisms, another way to build soft robots is inspired by nature. From blue whales to rodents to humans, animals rely on similar biological machinery to move. Motor neurons in muscles receive directions from the brain and spinal cord. They then release chemicals that trigger muscles to contract or relax.

The process is energy efficient and rapidly adapts to sudden changes in the environment—like stepping over an unexpected doorstep instead of tripping. Though today’s robots are getting more agile, they still struggle with unexpected obstacles and uneven terrain. Adding neuromuscular junctions could lead to more precise and efficient robots.

Last year, in a proof of concept, one team engineered a swimming “stingray” bot using stem cell-derived neurons, heart muscle cells, and an electronic “brain.” Scientists combined the cells and electronic brain with an artificial skeleton to make a soft robot that could flap its fins and roam a swimming pool.

There was a surprise too—the junctions between the two cell types developed electrical synapses. Usually, neurons release chemicals to direct muscle movements. These connections are called chemical synapses. While electrical networks are faster, they’re generally less adaptable.

Back to Basics

The new study aimed to create chemical synapses in robots.

The team first 3D printed a skeleton shaped roughly like a figure eight, but with a wider middle section. Each side formed a trough with one side slightly deeper than the other. The troughs were intended to function as legs. The researchers then embedded muscle cells from mice in a nutritious gel contained in each trough. After five days, the cells had formed slivers of muscle capable of contracting throughout the legs.

The robot’s “brain” sat in the middle part of the figure eight. The team made tiny blobs of neural tissue, called neurospheres, out of stem cells genetically engineered to activate with light. The blobs contained a mix of brain cells, including motor neurons to control muscles.

The neurospheres connected with muscle tissue days after transplantation. The cells formed neuromuscular junctions similar in form and function to those in our bodies, and the biohybrid robots began pumping out chemicals that control muscle function.

Then came an electronic touch. The team added a hub to wirelessly detect light pulses, harvest power, and drive five tiny micro-LED lights to change brain cell activity and translate it into movement.
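
The electronics aren’t specified in detail, but the control idea, pulsing light at a chosen frequency to pace the light-sensitive neurons and thus the muscle twitches, can be sketched abstractly. Everything below (class name, timings) is hypothetical, not the team’s firmware.

```python
from dataclasses import dataclass

@dataclass
class PulseTrain:
    frequency_hz: float    # optical stimulation frequency
    pulse_width_ms: float  # duration of each LED flash
    duration_s: float      # total stimulation time

    def schedule(self):
        """Return (on_time, off_time) pairs in seconds for one micro-LED."""
        period = 1.0 / self.frequency_hz
        events, t = [], 0.0
        while t < self.duration_s:
            events.append((t, t + self.pulse_width_ms / 1000.0))
            t += period
        return events

# Each flash excites the optogenetic neurons, which trigger a muscle twitch,
# so a higher pulse frequency should mean faster crawling -- until the
# tissue fatigues and the robot needs to "rest."
train = PulseTrain(frequency_hz=2.0, pulse_width_ms=20.0, duration_s=5.0)
print(train.schedule()[:3])
```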

The robot moved at turtle speed, roughly 0.8 millimeters per minute. However, the legs twitched in tandem throughout the trials, suggesting the neurons and muscles formed a sort of synchrony in their connections.

Surprisingly, some bots kept moving even after the light was turned off, while other “zombie” bots moved spontaneously on their own. The team is still digging into why this happens. But differences in performance were expected—living components are far less controllable than inorganic parts.

Like after a tough workout, the robots also needed breaks. And when flipped on their backs, their legs kept moving for roughly two weeks but then failed. This is likely due to metabolic toxins that gradually accumulate inside the robots, but the team is still looking for the root cause.

Despite their imperfections, the bots are essentially built from living mini neural networks and tissue connected to electronics—true cyborgs. They “provide a valuable platform for understanding…the emergent behaviors of neurons and neuromuscular junctions,” wrote the team.

The researchers are now planning to explore different skeletons and monitor behavior to fine-tune control. Adding more advanced features like sensory feedback and a range of muscle structures could help the bots further mimic the agility of our nervous system. And multiple neural “centers,” like in sea creatures, could control different muscles in robots that look nothing like us.

The post This Crawling Robot Is Made With Living Brain and Muscle Cells appeared first on SingularityHub.