2025-10-29 02:10:53
Patients regained the ability to read books, food labels, and subway signs.
Globally, more than five million people are affected by age-related macular degeneration, which can make reading, driving, and the recognition of faces impossible. A new wireless retinal implant has now restored functional sight to patients in advanced stages of the disease.
The condition gradually destroys the light-sensitive photoreceptors at the center of the retina, leaving people with only blurred peripheral vision. While researchers are investigating whether stem cell implants or gene therapy could help restore sight in these patients, those approaches remain experimental.
Now though, a system called PRIMA built by neurotechnology startup Science Corporation is helping patients regain the ability to read books, food labels, and subway signs. The system pairs a specially designed set of camera-equipped glasses with a tiny chip implanted in the retina: the glasses capture images and transmit them wirelessly to the chip, which then stimulates surviving neurons.

In a paper published in The New England Journal of Medicine, researchers showed that 27 out of 32 participants in a clinical trial of the technology had regained the ability to read a year after receiving the device.
“This study confirms that, for the first time, we can restore functional central vision in patients,” Frank Holz from the University Hospital of Bonn who was lead author on the paper said in a statement. “The implant represents a paradigm shift in treating late-stage AMD [age-related macular degeneration].”
The system works by converting images captured by the camera-equipped glasses into pulses of infrared light that are then transmitted through the patients’ pupils to a two-square-millimeter photovoltaic chip. The chip converts the light into electrical signals that are transmitted to the neurons at the back of the eye, allowing the patients to perceive the light patterns captured by the glasses. The PRIMA system also includes a zoom function that lets users magnify what they’re looking at.
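The signal chain described here, where a camera image is zoomed, reduced to the chip's pixel grid, and converted into on-or-off stimulation pulses, can be sketched in simplified form. Everything below is illustrative: the grid size, threshold, and function names are invented for the sketch and are not Science Corporation's actual implementation.

```python
import numpy as np

def prima_pipeline_sketch(image: np.ndarray, zoom: float = 1.0,
                          threshold: float = 0.5) -> np.ndarray:
    """Toy sketch of the described signal chain: a camera image is
    optionally zoomed, downsampled to the chip's pixel grid, and
    thresholded into on/off stimulation pulses (the current chip is
    black-and-white only). All constants are illustrative."""
    chip_pixels = 38  # illustrative grid size, not the real chip spec
    h, w = image.shape
    # Zoom: crop the central region and treat it as the full field of view.
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    cropped = image[top:top + ch, left:left + cw]
    # Downsample to the chip's resolution by sampling a regular grid.
    ys = np.linspace(0, cropped.shape[0] - 1, chip_pixels).astype(int)
    xs = np.linspace(0, cropped.shape[1] - 1, chip_pixels).astype(int)
    sampled = cropped[np.ix_(ys, xs)]
    # Each photovoltaic pixel either fires or stays silent (no grayscale yet).
    return (sampled > threshold).astype(np.uint8)

# A bright square on a dark background survives the pipeline as an "on" patch.
img = np.zeros((200, 200))
img[80:120, 80:120] = 1.0
pattern = prima_pipeline_sketch(img, zoom=1.0)
```

The binary output reflects the article's point that the current chip produces only black-and-white patterns; a grayscale upgrade would replace the threshold step with a multi-level mapping.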
Daniel Palanker at Stanford School of Medicine initially designed the technology, and a French startup called Pixium Vision was commercializing it. But facing bankruptcy, the company sold PRIMA to Science Corporation last year for €4 million ($4.7 million), according to MIT Technology Review.
Palanker said the idea for the product came 20 years ago when he realized that because the eye is transparent it’s possible to deliver information into it using light. Previous systems also relied on camera-equipped glasses to transmit signals to a retinal implant, but they were connected either by wires or radio transmitters.
In the recent study, 32 people with a form of macular degeneration that destroys photoreceptors in the center of the retina received implants in one eye. After several months of visual training, 80 percent of them had regained the ability to read text and recognize high-contrast objects.
Some participants achieved visual acuity equivalent to 20/42 when images were zoomed. And 26 of them could read at least two extra lines on a standard eye chart, with the average closer to five lines.
Because PRIMA uses infrared light to stimulate the chip, these signals don’t interfere with the remaining healthy photoreceptors surrounding it, allowing the brain to merge the restored vision in the central region with the patients’ residual peripheral vision.
The chip is currently only capable of producing black and white images with no shades in between, which limits patients’ ability to recognize more complex objects like faces. But Palanker says he is currently developing software that will allow users to see in grayscale. The researchers are also developing a second-generation implant that will have more than 10,000 pixels, which could support close to normal levels of visual acuity.
One of the participants told the BBC that using the device requires considerable concentration, and it’s not really practical on the move. But Science Corporation told MIT Technology Review it is also in the process of slimming down the bulky glasses and control box into a sleeker headset that would be only slightly larger than a standard pair of sunglasses.
Given the huge number of people affected by macular degeneration, the market for such a device could already be large, but the designers hope the approach could also help cure other vision disorders. The company has already applied for medical approval in Europe, so it may not be long before neuroprosthetic devices become a standard treatment for those with vision loss.
The post These High-Tech Glasses and an Eye Implant Restored Sight in People With Severe Vision Loss appeared first on SingularityHub.
2025-10-28 06:21:52
Boosting protective immune cells healed blood vessels and improved the skin’s ability to repair damage.
The first time I accepted that my grandpa was really aging was when I held his hand. His grip was strong as ever. But the skin on his hand was wafer thin and carried tell-tale signs of bruising across its back.
Plenty of “anti-aging” skin care products promise a younger look. But skin health isn’t just about vanity. The skin is the largest organ in the body and our first line of defense against pathogens and dangerous chemicals. It also keeps our bodies within normal operating temperatures—whether we’re in a Canadian snowstorm or the blistering heat of Death Valley.
The skin also has a remarkable ability to regenerate. After a sunburn, scraped knee, or knife cut while cooking, skin cells divide to repair damage and recruit immune cells to ward off infection. They also make hormones to coordinate fat storage, metabolism, and other bodily functions.
With age the skin deteriorates. It bruises more easily. Wound healing takes longer. And the risk of skin cancer rises. Many problems are connected to a dense web of blood vessels that becomes increasingly fragile as we age. Without a steady supply of nutrients, the skin weakens.
Now a team from New York University School of Medicine and collaborators has discovered a way to turn back the clock. In elderly mice and human skin cells, the researchers detected a steep decline in the numbers of a particular immune cell type. The cells they studied, a type of macrophage, hug blood vessels, help maintain their integrity, and control which molecules flow in or out.
A protein-based drug designed to revive the cells’ numbers gave elderly mice a skin glow up, improving blood flow and the skin’s ability to repair damage. Because loss of these cells happens before the skin declines notably, renewing their numbers may offer “an early strategy” for keeping our largest organ humming along as the years pass.
All organs in mammals have residential macrophages. Literally meaning “big eaters,” these immune cells monitor tissues for infections, cancers, and other dangers. Once activated, they recruit more immune cells to tackle diseases and repair damaged tissues.
There’s more than one type of macrophage. The cells belong to a large family where each member has a slightly different task. How they populate different organs is still mysterious, and scientists are just beginning to decode all the jobs they do. But there’s a general consensus: With age, many macrophage types decline in numbers and health and are linked to a variety of age-related diseases, such as atherosclerosis, cancer, and neurodegeneration.
This trend could also affect aging skin.
The skin’s layers are populated by different types of macrophages. Those in the outermost layer detect pathogens, while cells in the lower, fatty layer help maintain metabolism and regulate body temperature and inflammation. But it was capillary-associated macrophages (CAMs), in the middle layer, that caught the team’s interest. These cells wrap around intricate webs of blood vessels woven through our skin, helping maintain their ability to function and heal.
To better understand how the skin’s macrophages change with age, the team developed a technology to monitor their numbers and health in mice. The researchers genetically engineered the critters such that they produced glow-in-the-dark macrophages and observed these throughout their life.
With age, the skin’s middle layer lost macrophages—which the scientists identified as CAMs—far faster than other skin layers. In mice between 1 and 18 months of age—the human equivalent of pre-teens through people in their 70s—blood vessels that had lost these macrophages behaved as if they were “older” and struggled to support oxygen-rich blood flow to the skin.
The macrophages also dwindled in their coverage of capillaries during aging. Roughly a tenth of the width of a human hair, these dainty blood vessels shuttle nutrients to tissues and carry waste products back into the bloodstream for disposal. Macrophage losses eventually led to the death of capillaries in elderly mice. Similar results were found in human skin samples from people over 75.
All this reduced the skin’s ability to maintain capillary health and healing. For example, in one test, the scientists used targeted lasers to form small blood clots. In young mice, the macrophages traveled to the site and ate up damaged red blood cells in the clumps. In elderly mice, blood vessels with more macrophages better repaired injuries, but healing slowed overall.
The team next developed a protein-based therapy that directly boosts CAM levels and injected it into one hind paw of mice at the human equivalent of over 80 years of age. The other paw received a non-active control.
In a few days, the treated paw saw a jump in macrophage numbers and improved capillary flow nourishing the skin. The blood vessels also healed more rapidly after laser damage, resulting in less bruising. The injection seemingly rejuvenated old macrophages, rather than recruiting new ones from the bone marrow, suggesting even vintage cells can grow and regain their strength.
These early results are in mice, and they don’t measure the full spectrum of skin function after repairing blood vessels, which would require observing other cells. Fibroblasts, for example, generate collagen for skin elasticity and promote wound healing. Their numbers also shrink with age. The new treatment is based on a protein from these cells, and the team is planning to test how fibroblasts and CAMs interact with age, and if the shot can be further optimized.
Beyond skin health, blood vessel disease wreaks havoc in multiple organs, contributing to heart attacks, stroke, and other medical scourges of aging. A similar strategy could pave the way for new treatments. In future studies, the team hopes to optimize dosing, follow long-term effects and safety, and potentially mix-and-match the treatment with other regenerative therapies.
The post This Shot Gave Elderly Mice’s Skin a Glow Up. It Could Do the Same for Other Organs Too. appeared first on SingularityHub.
2025-10-25 22:00:00
Google’s Quantum Computer Makes a Big Technical Leap
Cade Metz | The New York Times ($)
“Leveraging the counterintuitive powers of quantum mechanics, Google’s machine ran this algorithm 13,000 times as fast as a top supercomputer executing similar code in the realm of classical physics, according to a paper written by the Google researchers in the scientific journal Nature.”
The Next Revolution in Biology Isn’t Reading Life’s Code—It’s Writing It
Andrew Hessel | Big Think
“Andrew Hessel, cofounder of the Human Genome Project–write, argues that genome writing is humanity’s next great moonshot, outlining how DNA synthesis could transform biology, medicine, and industry. He calls for global cooperation to ensure that humanity’s new power to create life is used wisely and for the common good.”
Amazon Hopes to Replace 600,000 US Workers With Robots, According to Leaked Documents
Jess Weatherbed | The Verge
“Citing interviews and internal strategy documents, The New York Times reports that Amazon is hoping its robots can replace more than 600,000 jobs it would otherwise have to hire in the United States by 2033, despite estimating it’ll sell about twice as many products over the period.”
Retina e-Paper Promises Screens ‘Visually Indistinguishable From Reality’
Michael Franco | New Atlas
“The team was able to create a screen that’s about the size of a human pupil packed with pixels measuring about 560 nanometers wide. The screen, which has been dubbed retinal e-paper, has a resolution beyond 25,000 pixels per inch. ‘This breakthrough paves the way for the creation of virtual worlds that are visually indistinguishable from reality,’ says a Chalmers news release about the breakthrough.”
Nike’s Robotic Shoe Gets Humans One Step Closer to Cyborg
Michael Calore | Wired ($)
“At the end of each step, the motor pulls up on the heel of the shoe. The device is calibrated so the movement of the motor can match the natural movement of each person’s ankle and lower leg. The result is that each step is powered, or given a little bit of a spring and an extra push by the robot mechanism.”
SpaceX Launches 10,000th Starlink Satellite, With No Sign of Slowing Down
Stephen Clark | Ars Technica
“Taking into account [decommissioned Starlink satellites, there are] 8,680 total Starlink satellites in orbit, 8,664 functioning Starlink satellites in orbit (including newly launched satellites not yet operational), [and] 7,448 Starlink satellites in operational orbit. …The European Space Agency estimates there are now roughly 12,500 functioning satellites in orbit. This means SpaceX owns and operates up to 70 percent of all the active satellites in orbit today.”
Amazon Unveils AI Smart Glasses for Its Delivery Drivers
Aisha Malik | TechCrunch
“The e-commerce giant says the glasses will allow delivery drivers to scan packages, follow turn-by-turn walking directions, and capture proof of delivery, all without using their phones. The glasses use AI-powered sensing capabilities and computer vision alongside cameras to create a display that includes things like hazards and delivery tasks.”
The Astonishing Embryo Models of Jacob Hanna
Antonio Regalado | MIT Technology Review ($)
“Clark and her colleagues are right that, for the foreseeable future, no one is going to decant a full-term baby out of a bottle. That’s still science fiction. But there’s a pressing issue that needs to be dealt with right now. And that’s what to do about synthetic embryo models that develop just part of the way—say for a few weeks, or months, as Hanna proposes. Because right now, hardly any laws or policies apply to synthetic embryos.”
OpenAI Readies Itself for Its Facebook Era
Kalley Huang, Erin Woo, and Stephanie Palazzolo | The Information ($)
“As the Meta alums have arrived, it’s become evident that some of OpenAI’s latest strategies and initiatives do resemble the tactics Meta used to grow into a corporate juggernaut, according to conversations with seven current and former employees. OpenAI itself is keenly interested in growing into a similar gigantic form, an effort to satisfy investors and justify the half-a-trillion-dollar valuation it received a few months ago.”
Sakana AI’s CTO Says He’s ‘Absolutely Sick’ of Transformers, the Tech That Powers Every Major AI Model
Michael Nuñez | VentureBeat
“Llion Jones, who co-authored the seminal 2017 paper ‘Attention Is All You Need’ and even coined the name ‘transformer,’ delivered an unusually candid assessment at the TED AI conference in San Francisco on Tuesday: Despite unprecedented investment and talent flooding into AI, the field has calcified around a single architectural approach, potentially blinding researchers to the next major breakthrough.”
The ChatGPT Atlas Browser Still Feels Like Googling With Extra Steps
Emma Roth | The Verge
“OpenAI’s new browser is great at providing AI-generated responses, but not so great at searches. …Given the options already out there, ChatGPT Atlas is a bit of an underwhelming start for a company that wants to build a series of interconnected apps that could eventually become an AI operating system.”
OpenAI Executive Explains the Insatiable Appetite for AI Chips
Sri Muppidi | The Information ($)
“Because training and running models are blurring together, given inference is using more compute than before and incorporating user feedback, OpenAI likely needs more and stronger chips to power every stage of building and deploying its models. So it makes sense why OpenAI is trying to get its hands on every Nvidia chip under the sun.”
The post This Week’s Awesome Tech Stories From Around the Web (Through October 25) appeared first on SingularityHub.
2025-10-24 22:00:00
As AI shopping goes mainstream, will people keep any real control over what they buy and why?
Your phone buzzes at 6 a.m. It’s ChatGPT: “I see you’re traveling to New York this week. Based on your preferences, I’ve found three restaurants near your hotel. Would you like me to make a reservation?”
You didn’t ask for this. The AI simply knew your plans from scanning your calendar and email and decided to help. Later, you mention to the chatbot needing flowers for your wife’s birthday. Within seconds, beautiful arrangements appear in the chat. You tap one: “Buy now.” Done. The flowers are ordered.
This isn’t science fiction. On Sept. 29, 2025, OpenAI and payment processor Stripe launched the Agentic Commerce Protocol. This technology lets you buy things instantly from Etsy within ChatGPT conversations. ChatGPT users are scheduled to gain access to over a million other Shopify merchants, from major household brands to small shops.
As marketing researchers who study how AI affects consumer behavior, we believe we’re seeing the beginning of the biggest shift in how people shop since smartphones arrived. Most people have no idea it’s happening.
For three decades, the internet has worked the same way: You want something, you Google it, you compare options, you decide, you buy. You’re in control.
That era is ending.
AI shopping assistants are evolving through three phases. First came “on-demand AI.” You ask ChatGPT a question, it answers. That’s where most people are today.
Now we’re entering “ambient AI,” where AI suggests things before you ask. ChatGPT monitors your calendar, reads your emails, and offers recommendations without being asked.
Soon comes “autopilot AI,” where AI makes purchases for you with minimal input from you. “Order flowers for my anniversary next week.” ChatGPT checks your calendar, remembers preferences, processes payment, and confirms delivery.
Each phase adds convenience but gives you less control.
AI’s responses create what researchers call an “advice illusion.” When ChatGPT suggests three hotels, you don’t see them as ads. They feel like recommendations from a knowledgeable friend. But you don’t know whether those hotels paid for placement or whether better options exist that ChatGPT didn’t show you.
Traditional advertising is something most people have learned to recognize and dismiss. But AI recommendations feel objective even when they’re not. With one-tap purchasing, the entire process happens so smoothly that you might not pause to compare options.
OpenAI isn’t alone in this race. In the same month, Google announced its competing protocol, AP2. Microsoft, Amazon, and Meta are building similar systems. Whoever wins will be in position to control how billions of people buy things, potentially capturing a percentage of trillions of dollars in annual transactions.
This convenience comes with costs most people haven’t thought about.
Privacy: For AI to suggest restaurants, it needs to read your calendar and emails. For it to buy flowers, it needs your purchase history. People will be accepting near-total surveillance in exchange for convenience.
Choice: Right now, you see multiple options when you search. With AI as the middleman, you might see only three options ChatGPT chooses. Entire businesses could become invisible if AI chooses to ignore them.
Power of comparing: When ChatGPT suggests products with one-tap checkout, the friction that made you pause and compare disappears.
ChatGPT reached 800 million weekly users by September 2025, growing four times faster than social media platforms did. Major retailers began using OpenAI’s Agentic Commerce Protocol within days of its launch.
History shows people consistently underestimate how quickly they adapt to convenient technologies. Not long ago, most people wouldn’t have considered getting into a stranger’s car. Uber now has 150 million users.
Convenience always wins. The question isn’t whether AI shopping will become mainstream. It’s whether people will keep any real control over what they buy and why.
The open internet gave people a world of information and choice at their fingertips. The AI revolution could take that away. Not by forcing people, but by making it so easy to let the algorithm decide that they forget what it’s like to truly choose for themselves. Buying things is becoming as thoughtless as sending a text.
In addition, a single company could become the gatekeeper for all digital shopping, with the potential for monopolization beyond even Amazon’s current dominance in e-commerce. We believe that it’s important to at least have a vigorous public conversation about whether this is the future people actually want.
Here are some steps you can take to resist the lure of convenience:
Question AI suggestions. When ChatGPT suggests products, recognize you’re seeing hand-picked choices, not all your options. Before one-tap purchases, pause and ask: Would I buy this if I had to visit five websites and compare prices?
Review your privacy settings carefully. Understand what you’re trading for convenience.
Talk about this with friends and family. The shift to AI shopping is happening without public awareness. The time to have conversations about acceptable limits is now, before one-tap purchasing becomes so normal that questioning it seems strange.
AI will learn what you want, maybe even before you want it. Every time you tap “Buy now” you’re training it—teaching it your patterns, your weaknesses, what time of day you impulse buy.
Our warning isn’t about rejecting technology. It’s about recognizing the trade-offs. Every convenience has a cost. Every tap is data. The companies building these systems are betting you won’t notice, and in most cases, they’re probably right.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post OpenAI Slipped Shopping Into 800 Million ChatGPT Users’ Chats—Here’s Why That Matters appeared first on SingularityHub.
2025-10-24 00:15:29
It sounds like science fiction, but the system could help people with brain or spinal cord injuries regain lost abilities.
In 2020, Keith Thomas dived into a pool and snapped his spine. The accident left him paralyzed from the chest down and unable to feel and move his arms and legs. Alone and isolated in a hospital room due to the pandemic, he jumped on a “first-of-its-kind” clinical trial that promised to restore some sense of feeling and muscle control using an innovative brain implant.
Researchers designed the implant to reconnect the brain, body, and spinal cord. An AI detects Thomas’s intent to move and activates his muscles with gentle electrical zaps. Sensors on his fingertips shuttle feelings back to his brain. Within a year, Thomas was able to lift and drink from a cup, wipe his face, and pet and feel the soft fur of his family’s dog, Bow.
The promising results left the team at Feinstein Institutes for Medical Research and the Donald and Barbara Zucker School of Medicine at Hofstra/Northwell wondering: If the implant can control muscles in one person, can that person also use it to control someone else’s muscles?
A preprint now suggests such “interhuman” connections are possible. With thoughts alone, Thomas controlled the hand of an able-bodied volunteer using precise electrical zaps to her muscles.
The multi-person neural bypass also helped Kathy Denapoli, a woman suffering from partial paralysis and struggling to move her hand. With the system, Thomas helped her successfully pour water with his brain signals. He even eventually felt the objects she touched in return.
It sounds like science fiction, but the system could boost collaborative rehabilitation, where groups of people with brain or spinal cord injuries work together. Because the system shows Denapoli how to move her hand rather than telling her, she’s nearly doubled her hand strength since starting the trial.
“Crucially, this approach not only restores aspects of sensorimotor function,” wrote the team. It “also fosters interpersonal connection, allowing individuals with paralysis to re-experience agency, touch, and collaborative action through another person.”
We move without a second thought: pouring a hot cup of coffee while half awake, grabbing a basketball versus a tennis ball, or balancing a cup of ice cream instead of a delicate snow cone.
Under the hood, these mundane tasks activate a highly sophisticated circuit. First, the intention to move is encoded in the brain’s motor regions and the areas surrounding them. These electrical signals then travel down the spinal cord, instructing muscles to contract or relax. The skin sends feedback on pressure, temperature, and other sensations back to the brain, which adjusts movement on the fly.
This circuit is broken in people with spinal cord injuries. But over the past decade, scientists have begun bridging the gap with the help of brain or spinal implants. These arrays of microelectrodes send electrical signals to tailored AI algorithms that can decode intent. The signals are then used to control robotic arms, drones, and other prosthetics. Other methods have focused on restoring sensation, a crucial aspect of detailed movement.
Connecting motor commands and sensation into a feedback loop—similar to what goes on in our brains naturally—is gaining steam. Thomas’s implant is one example. Unlike previous implants, the device simultaneously taps into the brain, spinal cord, and muscles.
The setup first records electrical activity from Thomas’s brain using sensors placed in its motor regions. The sensors send these signals to a computer where they’re decoded. The translated signals travel to flexible electrode patches, like Band-Aids, placed on his spine and forearm. The patches electrically stimulate his muscles to guide their movement. Tiny sensors on his fingertips and palm then transmit pressure and other sensations back to his brain.
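The loop just described, recording intent, decoding it, driving muscle stimulators, and routing fingertip sensation back to the brain, can be sketched as a single step of a closed-loop system. The decoder, thresholds, and signal names below are invented for illustration; the real system runs trained AI models on implanted microelectrode recordings.

```python
def decode_intent(neural_activity: list[float], threshold: float = 0.6) -> str:
    """Stand-in for the AI decoder: mean firing strength above a
    threshold is read as an intent to grasp. Purely illustrative."""
    mean = sum(neural_activity) / len(neural_activity)
    return "grasp" if mean > threshold else "rest"

def stimulate_muscles(intent: str) -> float:
    """Map decoded intent to a stimulation amplitude for the electrode
    patches on spine and forearm (arbitrary units)."""
    return 1.0 if intent == "grasp" else 0.0

def closed_loop_step(neural_activity: list[float],
                     fingertip_pressure: float) -> dict:
    """One cycle: decode intent, stimulate muscles, and return the
    fingertip pressure as sensory feedback bound for the brain."""
    intent = decode_intent(neural_activity)
    stim = stimulate_muscles(intent)
    # Sensory side of the bypass: pressure readings from the fingertip
    # sensors are scaled and routed back to the brain's sensory regions.
    feedback = min(fingertip_pressure, 1.0)
    return {"intent": intent, "muscle_stim": stim, "sensory_feedback": feedback}

step = closed_loop_step([0.8, 0.7, 0.9], fingertip_pressure=0.4)
```

The interhuman variant described later in the piece simply reroutes the `stimulate_muscles` output to patches on a second person's forearm while the feedback path still returns to the implanted participant.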
Over time, Thomas learned to move his arms and feel his hand for the first time in three years.
“There was a time that I didn’t know if I was even going to live, or if I wanted to, frankly. And now, I can feel the touch of someone holding my hand. It’s overwhelming,” he said at the time. “The only thing I want to do is to help others. That’s always been the thing I’m best at. If this can help someone even more than it’s helped me somewhere down the line, it’s all worth it.”
To help people regain their speech after injury or disease, scientists have created digital avatars that capture vocal pitch and emotion from brain recordings. Others have linked up people’s minds with non-invasive technologies for rudimentary human-to-human brain communication.
The new study incorporated Thomas’s brain implant with a human “avatar.” The volunteer wore electrical stimulation patches, wired to his brain, on her forearm.
In training, Thomas watched his able-bodied partner grasp an object, such as a baseball or soft foam ball. He received electrical stimulation to the sensory regions of his brain based on force feedback. Eventually, Thomas learned to discriminate between the objects while blindfolded with over 90 percent accuracy. Different objects produced stronger or lighter sensations, Thomas said.
The researchers wondered if Thomas could also help others with spinal cord injury. For this trial, he worked with Denapoli, a woman in her 60s with some residual ability to move her arms despite damage to her spinal cord.
Denapoli voiced how she wanted to move her hand—for example, close, open, or hold. Thomas imagined the movement, and his brain signals wirelessly activated the muscle stimulators on Denapoli’s arm to move her hand as intended.
The collaboration allowed her to pick up and pour a water bottle in roughly 20 seconds, with a success rate nearly triple that of when she tried the same task alone. In another test, Thomas’s neural commands helped her grasp, sip from, and set a can of soda down without spillage.
The connection went both ways. Gradually, Thomas began to feel the objects she touched based on feedback sent to his brain.
“This paradigm…allowed two participants with tetraplegia to engage in cooperative rehabilitation, demonstrating increased success in a motor task with a real-world object,” wrote the team.
The implant may have long-lasting benefits. Because it taps into the three main components of neurological sensation and movement, repeatedly activating the circuit could spur the body to repair the damage. With the implant, Thomas experienced improved sensation and movement in his hands, and Denapoli increased her grip strength.
The treatment could also help people who suffered a stroke and lost control of their arms, or those with amyotrophic lateral sclerosis (ALS), a neurological disease that gradually eats away at motor neurons. To be clear, the results haven’t yet been peer-reviewed and apply to a very limited group of people. More work is needed to see if this type of collaborative rehabilitation—or what the authors call “thought-driven therapy”—helps compared to existing approaches.
Still, both participants are happy. Thomas said the study gave him a sense of purpose. “I was more satisfied [because] I was helping somebody in real life…rather than just a computer,” he said.
“I couldn’t have done that without you,” Denapoli told Thomas.
The post One Mind, Two Bodies: Man With Brain Implant Controls Another Person’s Hand—and Feels What She Feels appeared first on SingularityHub.
2025-10-21 22:00:00
Bacterial nanowires and memristors combine in artificial neurons that can control living cells.
Most people wouldn’t give Geobacter sulfurreducens a second look. The bacterium was first discovered in a ditch in rural Oklahoma. But the lowly microbe has a superpower. It grows protein nanotubes that transmit electrical signals and uses them to communicate.
These bacterial wires are now the basis of a new artificial neuron that activates, learns, and responds to chemical signals like a real neuron.
Scientists have long wanted to mimic the brain’s computational efficiency. But despite years of engineering, artificial neurons still operate at much higher voltages than natural ones. Their frustratingly noisy signals require an extra step to boost fidelity, undercutting energy savings.
Because they don’t match biological neurons—imagine plugging a 110-volt device into a 220-volt wall socket—it’s difficult to integrate the devices with natural tissues.
But now a team at the University of Massachusetts Amherst has used bacterial protein nanowires to form conductive cables that capture the behaviors of biological neurons. When combined with an electrical module called a memristor—a resistor that “remembers” its past—the resulting artificial neuron operated at a voltage similar to its natural counterpart.
“Previous versions of artificial neurons used 10 times more voltage—and 100 times more power—than the one we have created,” said study author Jun Yao in a press release. “Ours register only 0.1 volts, which [is] about the same as the neurons in our bodies.”
The artificial neurons easily controlled the rhythm of living heart muscle cells in a dish. And adding an adrenaline-like molecule triggered the devices to up the muscle cells’ “heart rate.”
This level of integration between artificial neurons and biological tissue is “unprecedented,” Bozhi Tian at the University of Chicago, who was not involved in the work, told IEEE Spectrum.
The human brain is a computational wonder. It processes an enormous amount of data at very low power. Scientists have long wondered how it’s capable of such feats.
Massively parallel computing—with multiple neural networks humming along in sync—may be one factor. More efficient hardware design may be another. Computers have separate processing and memory modules that require time and energy to shuttle data back and forth. A neuron is both memory chip and processor in a single package. Recent studies have also uncovered previously unknown ways brain cells compute.
It’s no wonder researchers have long tried to mimic neural quirks. Some have used biocompatible organic materials that act like synapses. Others have incorporated light or quantum computing principles to drive toward brain-like computation.
Compared to traditional chips, these artificial neurons slashed energy use when faced with relatively simple tasks. Some even connected with biological neurons. In a cross-continental test, one artificial neuron controlled a living, biological neuron that then passed the commands on to a second artificial neuron.
But building mechanical neurons isn’t just for the “whoa” factor. These devices could make implants more compatible with the brain and other tissues. They may also give rise to a more powerful, lower-energy computing system than the status quo, an urgent need as energy-hogging AI models attract hundreds of millions of users.
Previous artificial neurons loosely mimicked the way biological neurons behave. The new study sought to recapitulate their electrical signaling.
Neurons aren’t like light switches. A small input, for example, isn’t enough to activate them. But as signals consistently build up, they trigger a voltage change, and the neuron fires. The electrical signal travels along its output branch and guides neighboring neurons to activate too. In the blink of an eye, the cells connect as a network, encoding memories, emotions, movement, and decisions.
Once activated, neurons enter a brief resting state, known as the refractory period, during which they can’t be activated again: a short reprieve before they tackle the next wave of electrical signals.
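The firing-and-reset cycle described above is what computational neuroscientists call a leaky integrate-and-fire model. Here is a minimal sketch in Python; all thresholds and rates are chosen purely for illustration and do not come from the paper:

```python
# Minimal leaky integrate-and-fire sketch: inputs accumulate, a threshold
# crossing fires the neuron, and a brief refractory period follows.
# All constants are illustrative, not measured values from the study.

def simulate_lif(inputs, threshold=1.0, leak=0.9, refractory_steps=3):
    """Return the time steps at which the model neuron fires."""
    v = 0.0            # membrane potential (arbitrary units)
    refractory = 0     # steps remaining in the resting state
    spikes = []
    for t, i in enumerate(inputs):
        if refractory > 0:       # resting: input is ignored
            refractory -= 1
            continue
        v = v * leak + i         # accumulate input, with leak
        if v >= threshold:       # threshold crossed: fire
            spikes.append(t)
            v = 0.0              # reset potential
            refractory = refractory_steps
    return spikes

print(simulate_lif([0.4] * 20))           # weak sustained drive fires periodically
print(simulate_lif([0.4] + [0.0] * 19))   # one small pulse decays away: no spike
```

A weak but sustained input fires the model at regular intervals, while a single small pulse never reaches threshold, mirroring the build-up behavior described above.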
These dynamics are hard to mimic. But the tiny protein cables G. sulfurreducens bacteria use to communicate may help. The cables can withstand extremely unpredictable conditions, such as Oklahoma winters. They’re also particularly adept at conducting ions—the charged particles involved in neural activity—with high efficiency, nixing the need to amplify signals.
Harvesting the nanocables was a bit like drying wild mushrooms. The team snipped them off collections of bacteria and developed a way to rid them of contaminants. They suspended the wispy proteins in liquid and poured the concoction onto an even surface for drying. After the water evaporated, they were left with an extremely thin film containing protein nanocables that retained their electrical capabilities.
The team integrated this film into a memristor. Like in neurons, changing voltages altered the artificial neuron’s behavior. Built-up voltage caused the protein nanowires to bridge a gap inside the memristor. With sufficient input voltage, the nanocables completed the circuit and electrical signals flowed—essentially activating the neuron. Once the voltage dropped, the nanocables dissolved, and the artificial neurons reset to a resting state like their biological counterparts.
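As a rough illustration of this threshold-switching behavior, here is a toy model of a volatile memristor: sustained voltage grows a conductive filament across a gap, the device conducts once the filament bridges it, and the filament dissolves when the voltage is removed. The rates and thresholds are invented for the sketch and are not the device’s actual physics:

```python
# Toy volatile ("threshold-switching") memristor model, loosely following the
# description in the text. All rates and thresholds are made up for illustration.

def step_filament(state, voltage, v_grow=0.05, grow_rate=0.25, decay_rate=0.5):
    """Advance filament state (0 = open gap, 1 = fully bridged) by one step."""
    if voltage >= v_grow:
        state = min(1.0, state + grow_rate)   # applied field drives filament growth
    else:
        state = max(0.0, state - decay_rate)  # filament dissolves at rest
    return state

state, trace = 0.0, []
for v in [0.1, 0.1, 0.1, 0.1, 0.0, 0.0]:      # apply ~0.1 V, then remove it
    state = step_filament(state, v)
    trace.append(state >= 1.0)                # True = device conducting
print(trace)
```

The device switches on only after enough sustained input, then resets on its own once the voltage drops, the volatile behavior that lets the artificial neuron return to rest without an external reset signal.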
Because the protein nanowires are extremely sensitive to voltage changes, they let the artificial neurons switch behavior at much lower voltages. This slashes total energy use to one percent of that of previous artificial neurons. The devices operate at a voltage similar to that of biological neurons, suggesting they could better integrate with the brain.
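The quoted figures hang together arithmetically: for resistive (Joule) dissipation, power scales with the square of voltage at fixed resistance, so a tenfold drop in operating voltage yields a hundredfold drop in power. A quick check, with resistance as an arbitrary placeholder:

```python
# P = V**2 / R: a 10x drop in voltage at fixed resistance is a 100x drop in
# power. R is an arbitrary placeholder; only the ratio matters.
R = 1.0                          # placeholder resistance (ohms)
p_new = 0.1**2 / R               # ~0.1 V device from the study
p_old = (10 * 0.1)**2 / R        # previous ~1 V artificial neurons
print(round(p_old / p_new))      # -> 100
```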
As proof of concept, the team connected their invention to heart muscle cells. These cells require specific electrical signals to keep their rhythm. Like biological neurons, the artificial neurons monitored the strength of heart cell contractions. Adding norepinephrine, an adrenaline-like molecule that rapidly increases heart rate, activated the artificial neurons the way it does natural ones, suggesting they can pick up chemical signals from their environment.
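The chemical-modulation result can be caricatured by reusing an integrate-and-fire loop in which a norepinephrine-like signal lowers the firing threshold, so the model paces faster. Every number here is illustrative, not taken from the experiment:

```python
# Caricature of chemical modulation: a norepinephrine-like signal is modeled
# as a lower firing threshold, which raises the pacing rate. Illustrative only.

def firing_rate(drive=0.4, threshold=1.0, leak=0.9, steps=200):
    """Fraction of time steps on which the model neuron fires."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = v * leak + drive
        if v >= threshold:
            spikes += 1
            v = 0.0
    return spikes / steps

baseline = firing_rate(threshold=1.0)
modulated = firing_rate(threshold=0.7)   # "norepinephrine" lowers the threshold
print(modulated > baseline)              # faster pacing under modulation
```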
Although it’s still early, the artificial neurons pave the way for uses that seamlessly bridge biology and electronics. Wearable devices and brain implants inspired by the devices could yield prosthetics that better “talk” to the brain.
Outside of biotech, artificial neurons could be a greener alternative to silicon-based chips if the technology scales up. Unlike older designs that require demanding manufacturing conditions, such as extreme temperatures, this new iteration can be printed with the same technology used to make run-of-the-mill silicon chips.
It won’t be an easy journey. Harvesting and processing the protein nanotubes remains time consuming. It’s still unclear how long the artificial neurons can remain fully functional. And as with any device incorporating biological components, more quality control will be needed to ensure consistent manufacturing.
Regardless, the team is hopeful the design can inspire more effective bioelectronic interfaces. “The work suggests a promising direction toward developing bioemulated electronics, which in turn can lead to closer interface with biosystems,” they wrote. Not too bad for bacteria discovered in a ditch.
The post ‘Unprecedented’ Artificial Neurons Are Part Biological, Part Electrical—Work More Like the Real Thing appeared first on SingularityHub.