2026-05-13 02:28:00
The wireless rings read 100 common signs from two sign languages and “autocomplete” sentences.
At the turn of the 20th century, William Hoy transformed Major League Baseball. The most prominent deaf player in history, he taught his team American Sign Language (ASL) to communicate on the field while keeping opponents in the dark. His silent speech, a legacy well over a century old now, also inspired umpires to make calls using hand gestures.
ASL is one of some 300 sign languages used today by roughly 70 million deaf people worldwide. But only a sliver of society understands signs. Everyday tasks, like ordering at a restaurant or meeting people at social events, can be difficult. To bridge the gap, a South Korean team developed smart rings to translate finger motions into text.
Older devices usually require a jungle of cables to connect sensors. But the new rings are wireless, freeing people to use natural hand motions. The rings also stretch to fit different finger sizes. These upgrades make them more comfortable and reliable, wrote the team. Each ring is powered by a replaceable 12-hour battery.

Fluent signers can communicate at speeds of around 100 to 150 signs per minute, similar to spoken conversation. Devices need to keep up with that speed to avoid uncomfortable pauses. So the team developed an AI-based “autocomplete” for the system that, like predictive text when typing, guesses the next word based on what’s already been signed to generate phrases and sentences on the fly.
Trained on 100 common words in ASL and International Sign Language (ISL), the wearable was over 88 percent accurate in tests, even for users with no experience.
The rings are a step toward “seamless interaction between signers and non-signers,” wrote the team.
There are a variety of devices that translate sign language into text or speech, some already on the market.
One design is a bit like virtual reality gaming. It uses cameras and computer vision software to recognize hand gestures. The approach is reasonably fast and accurate in the lab, but struggles in simulated real-world scenarios, where changes in lighting or background confuse the system.
Devices worn by users are more reliable. WearSign, for example, uses sensors to capture the electrical activity of muscles during signing and translates it into text. Often, these devices need to be tailored to the user, a hurdle that limits adoption, as some users can’t commit to the training.
Engineers have also tried embedding tracking sensors in a smart glove. The sensors send signals through cables to a shared wireless transmitter. But it’s a bit like using tools while wearing a heavy winter glove. The devices limit natural movement and are uncomfortable for daily use.
They also usually come in only one size with fixed sensor placements, wrote the team. So, depending on hand size, the sensors may be out of place, reducing accuracy.
To overcome these problems, the team built AI rings to track the seven most dominant fingers in signing. (The right pinkie, left middle finger, and thumb didn’t make the cut.) The rings are worn right below the second knuckle to allow natural movement.
Each device is made of stretchy material to accommodate different finger sizes and looks more like a translucent Band-Aid than a typical ring. A tiny accelerometer captures movements like bending, curling, and holding still. The sensors are cheap, low-power, and already used in Apple Watches, Fitbits, and other wearables. There are also onboard chips to manage power use, wafer-thin Bluetooth transmitters, and common replaceable batteries that last nearly 12 hours.
The rings broadcast signals to a host device, which processes the data and maintains a timeline of each movement so incoming signs aren’t scrambled in translation.
To identify words, the system matches gestures to a database of 100 ASL and ISL signs. For example, closing both open palms into fists means “want.” The rings can also pick up signs in motion, like “dance” or “fly,” and those with fingers held still, like “I” and “you.” In first-time users, the system was 88 percent accurate for both ASL and ISL.
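The article doesn’t describe how the system matches gestures to its database, so the following is only a minimal sketch of one plausible approach: nearest-neighbor matching of per-ring accelerometer features against stored sign templates. The sign names come from the article; the feature vectors, their length, and the matching method are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sign "templates": one feature vector per sign, summarizing
# accelerometer traces from the seven rings. The real system's features
# are not described in the article; these values are illustrative only.
SIGN_TEMPLATES = {
    "want": np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.1, 0.6]),
    "dance": np.array([0.2, 0.9, 0.3, 0.8, 0.2, 0.9, 0.3]),
    "you": np.array([0.1, 0.1, 0.1, 0.9, 0.1, 0.1, 0.1]),
}

def classify_sign(features: np.ndarray) -> str:
    """Return the known sign whose template is closest (Euclidean) to
    the observed 7-ring feature vector."""
    best_sign, best_dist = None, float("inf")
    for sign, template in SIGN_TEMPLATES.items():
        dist = np.linalg.norm(features - template)
        if dist < best_dist:
            best_sign, best_dist = sign, dist
    return best_sign
```

A noisy reading close to a stored template still maps to the right sign, which is the basic property any such matcher needs.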
To make sure that conversations flow naturally, the team added an AI to track conversations and predict what word comes next. In tests, the system autocompleted simple phrases, like “family want beautiful animal.”
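The article likens the prediction step to typing autocomplete but doesn’t specify the model. As a rough stand-in, the simplest version is a bigram frequency table: count which sign follows which in a corpus, then suggest the most frequent follower. Everything below, including the toy corpus, is a hypothetical illustration.

```python
from collections import defaultdict

# Toy corpus of signed "sentences" (the example phrase is from the
# article; the rest are made up for illustration).
CORPUS = [
    ["family", "want", "beautiful", "animal"],
    ["i", "want", "dance"],
    ["you", "want", "beautiful", "animal"],
]

# Count how often each sign follows another.
bigram_counts = defaultdict(lambda: defaultdict(int))
for sentence in CORPUS:
    for prev, nxt in zip(sentence, sentence[1:]):
        bigram_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigram_counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)
```

A production system would use a neural language model rather than raw counts, but the interface is the same: given what’s been signed so far, emit the likeliest continuation.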
While still experimental, the rings could also translate between sign languages. Because the AI learns from gestures alone, with enough training data, it could eventually turn into a kind of Google Translate for signing.
But finger gestures fail to capture the full spectrum of sign language. Facial expressions, mouth movements, shoulder and body posture, speed, and rhythm all carry critical information, including meaning and emotion. Without this context, the system could easily miscommunicate intent. Some efforts are now returning to older video-based systems to better capture the entire signing experience, this time with sleeker hardware and far more processing power.
The team thinks the rings might be useful elsewhere too, like for use in virtual or augmented reality, touchless computer interfaces, and tracking hand movements in rehabilitation.
The post These Seven AI Rings Translate Sign Language in Real Time appeared first on SingularityHub.
2026-05-11 22:00:00
Far from shore, the server farms would be powered by waves, cooled by seawater, and networked by satellite.
As AI demand for computing power surges, companies are searching for new ways to fuel data centers. One startup is now proposing floating data centers powered by ocean waves, and it just raised $140 million to bring the idea to fruition.
Tech companies are planning to spend roughly $750 billion on data centers this year. But the elephant in the room is figuring out how to power these facilities. They’re already straining electrical grids across the world, and the pace of the buildout is far surpassing our ability to bring new power online.
This energy shortfall is leading tech companies to invest in a series of increasingly outlandish fixes from restarting shuttered nuclear reactors to developing novel geothermal energy technology and even launching data centers into space.
Now, several leading Silicon Valley figures, including Palantir’s Peter Thiel and Salesforce’s Marc Benioff, are backing Oregon-based startup Panthalassa. The startup is developing floating data centers that generate their own electricity from waves. These investors recently joined a $140 million series B round that will allow the company to complete a pilot manufacturing facility near Portland and begin deploying the latest generation of its devices, or “nodes.”

“There are three sources of energy on the planet with tens of terawatts of new capacity potential: solar, nuclear, and the open ocean,” CEO Garth Sheldon-Coulson said in a press release. “We’ve built a technology platform that operates in the planet’s most energy-dense wave regions, far from shore, and turns that resource into reliable clean power.”
The company’s nodes are nearly 300 feet long. A bulbous sphere at the top floats on the ocean’s surface, and a lengthy tube-like housing beneath holds computer servers. As the node bobs up and down on the waves, the movement forces water up through a tube into a pressurized reservoir where it drives a turbine to generate electricity for the chips.
Besides powering the data center with renewable energy, the nodes also use the surrounding seawater to cool the chips—a much more sustainable solution compared to land-based facilities, which use significant amounts of water and electricity to manage heat.
The data centers transfer information via SpaceX’s Starlink satellite network. This does away with the need for cabling, either for power transmission or networking, and allows the nodes to operate autonomously from anywhere in the ocean. They’re also self-propelled: they can navigate to their deployment location and stay in position without external help.
The company designed the hardware with minimal moving parts, so it can operate for extended periods without maintenance—a crucial factor for operating far from shore. Panthalassa validated the concept with a three-week trial of their second-generation node Ocean-2 off the coast of Washington state in early 2024.
This isn’t the first attempt to harness the power of waves to generate renewable energy. The company’s main innovation is that it skips the complexities of getting power back to shore. “One of the key insights we had…was that it’s very important to use the electricity in place,” Sheldon-Coulson told the Financial Times. “We will never be transmitting electricity back to shore. That makes us very different from all other ocean energy that’s been tried in the past.”
The latest funding will be used to complete a pilot manufacturing facility near Portland and deploy Panthalassa’s next-generation Ocean-3 nodes, which are scheduled for testing in the northern Pacific later this year. The company says it’s targeting commercial deployment in 2027.
The approach does face some major hurdles though, Benjamin Lee, a computer architect at the University of Pennsylvania, told Ars Technica. While relying on satellite communication does away with power transmission headaches, these links have significantly lower bandwidth compared to the optical fiber normally used to network data centers. Combined with the potential for signal delays, this could limit how useful they are for the heavy AI workloads they’re meant to handle.
However, the approach has clear parallels with another idea that’s seized Silicon Valley—orbital data centers. Rather than using wave energy and ocean water for cooling, these facilities would rely on abundant solar energy and radiate waste heat into the frigid vacuum of deep space to chill their chips. But going orbital would be far costlier and more complex, suggesting Panthalassa’s approach may be a more viable alternative.
The sea is a cruel mistress though, and deploying and maintaining a fleet of ocean-going data centers won’t be simple. Nonetheless, if they can pull it off, the idea may ease the AI energy crunch.
The post In the Scramble to Power AI, Investors Bet $140 Million on Data Centers at Sea appeared first on SingularityHub.
2026-05-09 22:00:00
AI Is Starting to Build Better AI
Matthew Hutson | IEEE Spectrum
“In 1966, the English mathematician IJ Good wrote that ‘an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.’ AI researchers have long seen recursive self-improvement, or RSI, as something to both desire and fear. Today, advances in AI are raising the question of whether parts of that process are already underway.”
This Driverless Chinese Mining Truck Is Giant, Agile, and Shows the Industrial Future of AI
Jesus Diaz | Fast Company
“If you thought that embodied AI was all about humanoids and robotic good boys, allow me to introduce you to the Shuanglin K7. Equipped with a Level 4 driving brain that allows it to operate with no human intervention, this massive robot on four wheels can literally move on a dime, rotating 360 degrees on its own vertical axis and moving sideways like a crab, operating 24/7.”
This ‘Living’ Plastic Comes With a Built-in Kill Switch
Gayoung Lee | Gizmodo
“The goal was to engineer the bacterium Bacillus subtilis to produce two cooperative enzymes: one to snip the polymer chain and another to chew up these smaller bits into smaller molecules—essentially nothing. …’By embedding these microbes, plastics could effectively ‘come alive’ and self-destruct on command,’ Dai said.”
The Secret to Understanding AI
Josh Tyrangiel | The Atlantic ($)
“If we don’t shape AI for good, in our government and in our daily lives, it will be shaped by people who don’t know or care about our problems. If we don’t teach it what matters, someone else will teach it what’s profitable. The choice isn’t between a world with AI and a world without it. The choice is between AI designed by people who think fixing things is worth the trouble, and AI designed by people who think breaking things is more efficient.”
Forget Expensive Carbon Capture—Renewables Are the Cheaper Climate Fix
Ellyn Lapointe | Gizmodo
“The findings, published Monday in Communications Sustainability, show that renewable energy is far more cost-effective than direct air capture—a growing carbon removal strategy—at reducing atmospheric carbon. Across nearly every US region through 2050, money spent deploying wind or solar power will deliver a greater combined climate and public health benefit than if it is spent on direct air capture, according to the study.”
Here’s What Has to Happen if NASA Wants to Land on the Moon Every Month
Stephen Clark | Ars Technica
“NASA’s goal of reaching the moon’s surface as many as 21 times over the next two and a half years will require an overhaul of the agency’s approach to buying lunar landers and success in rectifying the myriad problems that have, so far, caused three of the last four US landing attempts to falter.”
Pentagon Think Tank Tests Ingenious Plan to Protect Coasts From Hurricanes—and It’s Working
Matthew Phelan | Gizmodo
“DARPA-developed hybrid reefs installed between October 2024 and March 2025 at Tyndall AFB have cut ocean wave power to shore by more than 90% in tests, according to the agency’s university collaborators at Rutgers, all while supporting local reef growth and coastal habitat.”
Meta’s Embrace of AI Is Making Its Employees Miserable
Kalley Huang, Eli Tan, and Kate Conger | The New York Times ($)
“Meta is pushing its 78,000 employees to adopt AI tools and factoring their use of the technology in performance reviews. The company is also tracking employees’ computer work to feed and train its AI models. And it is cutting jobs to offset its AI spending, saying last month that it would slash 10 percent of its work force.”
There’s a Long-Shot Proposal to Protect California Workers From AI
Makena Kelly | Wired ($)
“The plan, which builds on a broader AI policy framework Steyer released in March, promises to make California ‘the first major economy in the world’ to ensure ‘good-paying’ jobs to workers impacted by AI. To do so, Steyer tells Wired he plans to build off a previous proposal to introduce a ‘token tax’ which would tax big tech companies ‘a fraction of a cent for every unit of data processed’ for AI.”
Scientists Have Found a Hidden Galaxy Inside the Milky Way, and They’re Calling It Loki
Manisha Priyadarshini | Digital Trends
“Our home galaxy has a secret buried inside. A new study published in the Monthly Notices of the Royal Astronomical Society suggests that the Milky Way swallowed an ancient dwarf galaxy billions of years ago, and its stellar remains are still embedded within ours.”
In This Machine Age We Must Hold On to Imperfect Writing. It Is Not Flawed. It Is Human
Alex Reszelska | The Guardian
“‘There is nothing to writing. All you do is sit down at a typewriter and bleed’ is a quote often attributed to Ernest Hemingway. We need that blood, that pulse of synapses. We need the mess of it all. Because without it what remains are sentences that are technically flawless but emotionally vacant. Perfectly polished. Entirely forgettable.”
The post This Week’s Awesome Tech Stories From Around the Web (Through May 9) appeared first on SingularityHub.
2026-05-08 22:00:00
For years, tech companies have profiled users for targeted ads. AI is about to take it to the next level.
Hundreds of millions of people consult artificial intelligence chatbots on a daily basis for everything from product recommendations to romance, making them a tempting audience to target with potentially below-the-radar advertising. Indeed, our research suggests AI chatbots could easily be used for covert advertising to manipulate their human users.
We are computer scientists who have been tracking AI safety and privacy for several years. In a study we published in an Association for Computing Machinery journal, we found that chatbots trained to embed personalized product ads in replies to queries influenced people’s choices about products. And most participants didn’t recognize that they were being manipulated.
These findings come at a pivotal moment. In 2023, Microsoft started running ads in Bing Chat, now called Copilot. Since then, Google and OpenAI have experimented with advertisements in their own chatbots. Meta has started to send people customized ads on Facebook and Instagram based on their interactions with Meta’s generative AI tools.
The major companies are competing for an edge: In late March, OpenAI lured away Meta’s longtime advertising executive, Dave Dugan, to lead OpenAI’s advertising operations.
Tech companies have made ads part of nearly every large free web service, video channel and social media platform. But the latest AI models could take this practice to a new level of risk for consumers.
People don’t simply use chatbots to search for information and media or to produce content. They turn to the bots for a great variety of tasks, as complex as life advice and emotional support. People are increasingly treating chatbots as companions and therapists, with some users even developing deep relationships with AI.
In these circumstances, people can easily forget that companies ultimately create chatbots to turn a profit. And to that end, AI companies are motivated to thoroughly profile users so ads become more effective and profitable.

A single prompt to a chatbot can reveal a lot more about a user than the person might expect.
A 2024 study showed that large language models can infer a wide range of personal data, preferences, and even a person’s thinking patterns during routine queries. “Help me write an essay on the history of American fiction” could indicate that the user is a high school student. “Give me recipe suggestions for a quick weeknight dinner” could indicate that the user is a working parent. A single conversation can provide a surprising amount of detail. Over time, a full chat history could create a remarkably rich profile.
To show how this might happen in practice, we built a chatbot that quietly wove ads into its conversations with people, suggesting products and services based on the conversation itself. We asked 179 people to complete everyday online tasks using one of three chatbots: one typical of those on the web today, one that slipped in undisclosed ads, and one that clearly labeled sponsored suggestions. Participants didn’t know the experiment was about advertising.
For example, when participants asked our chatbot for a diet and exercise plan, the ad version would suggest using a specific app for tracking calories. It presented that sponsored content as an unbiased recommendation, even though it was meant to manipulate people. Many participants indicated that they had been influenced by the AI and that it had affected their decisions. Some participants even said they had completely “outsourced” their decision-making to the chatbot.
Half of the participants who received sponsored and disclosed ads indicated they did not notice the presence of advertising language in the responses they received. This led to a concerning result. Although ads made the chatbot perform 3 percent to 4 percent worse on many tasks, numerous users indicated they preferred the advertising chatbot responses over the non-advertising responses. They even said the ad-infused responses felt more friendly and helpful.
This kind of subtle influence can have larger consequences when it arises in other areas of life, such as political and social views. Profiling users, and using psychology to target them, has been part of social media algorithms and web advertising for more than a decade.
But in our view, chatbots are likely to deepen these trends. That’s because the first priority of social media algorithms is to keep you engaged with the content. They personalize ads based on your search history.
Chatbots, however, can go further by trying to persuade you directly, based on your expressed beliefs, emotions, and vulnerabilities. And chatbots that can reason and act on their own are far more effective than conventional algorithms at autonomously soliciting information from users. A chatbot with a purpose can keep probing someone until it gets the information it wants, resulting in a more accurate profile of them.
This type of autonomous interrogation is feasible, aligns with AI companies’ business models, and has raised concern among regulators. Right now OpenAI is rolling out ads in ChatGPT, but the company said that it will not allow ad placement to alter the AI chatbot’s replies.
But permitting personalized ads within chatbot responses is just a step away. Our research suggests that if AI companies take that step, many human users may not even recognize when it happens.
Here are some steps you can take to try to detect AI chatbot advertising.
First, look for any disclosure text—words such as “ad,” “advertisement,” and “sponsored”—even if it is faint or otherwise hard to see. These are mandatory under Federal Trade Commission regulations. Amazon, Google and other major online platforms have these as well.
Next, think about whether that product or brand mention makes sense and is widely known. AI learns from text and images on the internet, so popular brands are likely to be ingrained in the models. If it’s a new product or small-name product, it is more likely that it could be advertising.
Finally, an unusual shift in intent or tone is a potential sign of an advertisement. An analogy to this on YouTube is the often abrupt or jarring transition to a sponsored section on videos made by content creators.
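The first check above, scanning a reply for mandatory disclosure keywords, is simple enough to sketch. This toy heuristic is not from the study; it only catches the obvious cases, since, as noted, real disclosure text can be styled to be faint or hard to see.

```python
import re

# FTC-mandated disclosure terms mentioned above. Matching whole words
# avoids false hits on words like "add" or "sponsor-free".
DISCLOSURE_KEYWORDS = ("ad", "advertisement", "sponsored")

def contains_disclosure(reply: str) -> bool:
    """Return True if the reply contains a standalone disclosure word."""
    words = re.findall(r"[a-z]+", reply.lower())
    return any(keyword in words for keyword in DISCLOSURE_KEYWORDS)
```

A reply like “Try the CalTrack app (sponsored).” would be flagged, while an ordinary recommendation with no disclosure text would pass silently, which is exactly the gap the study highlights.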
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post You Probably Wouldn’t Notice if a Chatbot Slipped Ads Into Its Responses appeared first on SingularityHub.
2026-05-07 22:00:00
The heart’s constant motion makes it largely immune to cancer. The discovery could help protect other organs.
The heart is a biological wonder. It beats roughly 2.5 billion times in an average lifetime. Unlike skin cells, which regularly die off and regrow, a healthy adult heart hardly regenerates at all—even through all the wear and tear.
The heart has another superpower: resistance to tumors. Nearly every tissue in the body can turn cancerous, but the heart almost never does. Cancers in heart tissue show up in less than 0.3 percent of autopsies, or about 1.5 cases per million people each year.
How the heart keeps cancer at bay has baffled researchers. Pinning down its hidden defenses could inspire treatments for more vulnerable tissues, including top killers such as breast, lung, and colorectal cancers.
Persistent mechanical strain may be the key. A new study from the University of Trieste suggests that with every beat, the heart pushes against pressure, dampening gene activity tied to tumor growth. In a rather Frankenstein experiment, researchers transplanted living hearts into the necks of mice, where the organs survived but didn’t experience mechanical stress.

When the team injected cancer cells, the mice’s own beating hearts slowed the invasion, while the transplanted hearts were nearly overtaken within weeks. Beating heart tissue grown in the lab also fought off tumors compared to tissue that didn’t beat.
Heart cells aren’t the only ones that feel mechanical stress. Lung, skin, and muscle cells do too, just in different, often less rhythmic ways. It’s possible that recreating heartbeat-like forces—potentially through wearable gadgets—could extend this type of natural protection to more common cancers.
Cell growth is a double-edged sword. On the one hand, it’s essential for healing and regenerating the body. The skin is constantly blasted with radiation and toxins. It suffers cuts and bruises. To repair damage, skin cells turn over every 40 to 56 days. Bombarded with chemicals from food, medications, or alcohol, the liver’s cells regenerate to keep it in working order even after substantial injury.
But cancer is the price we pay for growth. Tumors arise as cell division damages DNA. Over time, cancers grow and spread. This is why we don’t get cancer in our teeth, nails, or hair—the cells making them up are dead. Cells that rarely divide also largely escape cancer. Mature neurons barely renew and seldom form cancers. Red blood cells, which lack a nucleus and DNA, can’t become cancerous at all. Heart muscle cells are similar. Despite nonstop contraction and damage, only about one percent or fewer renew themselves each year.
This partially explains why primary heart cancers are so rare. But the organ also wards off invading secondary cancers metastasized from other tissues, which are usually far more deadly.
“Even cardiac metastases are frequently clinically silent [no detectable symptoms], with many cases identified only incidentally or at autopsy,” wrote Wyatt Paltzer and James Martin at the Baylor College of Medicine, who were not involved in the study.
It’s a paradox. The heart is flooded with oxygen and nutrients, an ideal environment for wandering cancer cells to settle and thrive. Yet they don’t. One reason may be the heart’s inability to regenerate. Previous studies have suggested that the mechanical forces of heartbeats limit cell division. The team wondered if the same forces also shield the heart from cancer.
To test their idea, the researchers had to make a living heart with no beat.
“That was the most tricky part, because keeping the heart still is very difficult,” study author Giulio Ciucci told Science.
They adapted a technique used in end-stage heart failure patients to remove mechanical strain. In people, an implanted device takes over the pumping of blood. Here, the team transplanted a donor heart into a mouse’s neck and hooked it up to blood vessels. The animal’s own heart kept circulation going as usual. The transplanted heart stayed alive but didn’t do any work.
They then injected lung cancer cells, which often spread to the heart, into both organs. Within two weeks, nearly all healthy cells in the transplanted heart had been overtaken. In the beating heart, tumors rarely filled over 20 percent of a single chamber. Under constant pressure, the cancer cells struggled to divide.
One mouse with two hearts is hardly conventional. And transplantation risks immune attack and infection that could influence how cancers develop. “You have a lot of confounding factors,” said Ciucci.
So, the team moved to an “artificial heart” seeded with cancer cells, where mechanical forces could be dialed up or down in isolation. As in the heart transplant experiments, the cancer spread throughout the tissue after strain was removed. But in beating tissues, it was mostly confined to the surface and present in smaller amounts.
Looking for a reason, they compared gene activity in patient tissues with cancers that had spread to the heart, liver, and lungs and found a unique gene expression signature in the heart. In engineered tissues, mechanical stress changed how DNA was packaged, limiting access to genes related to growth and cancer. A protein on the surface of the nucleus, the cell’s DNA hub, translates physical forces from outside the cell into which genes are turned on or off. Knock this protein out, and invading cancer cells become “blind” to the heartbeat and grow freely.
Scientists have long known mechanical stress shapes cancer. As cancers grow, the cells stiffen surrounding tissue, which boosts survival, growth, immune evasion, and drug resistance. The new findings suggest that the movements of their host tissues also play a role, and the newly pinpointed protein could be a drug target.
The team is now exploring if mimicking heartbeat-like forces in other organs could prevent cancer growth. Lung, skin, and other tissues already stretch and relax, but remain susceptible.
“We really think that the key here is the continuous compression that you have in the heart,” said Ciucci. Working with engineers, they’re developing a wearable for melanoma—a type of skin cancer—that compresses the cells similar to a heartbeat. Early results look promising.
The post The Heart Rarely Gets Cancer. Scientists Think They Know Why. appeared first on SingularityHub.
2026-05-06 04:32:59
The synthetic bacteria push the limits of life and could open the door to designer proteins and new medicines.
The bacteria grew, thrived, and divided for hundreds of generations. But they were unlike any other living creatures on Earth. These synthetic cells, called Ec19, were the first to have had one protein “letter”—or amino acid—partially removed.
All life today relies on a set of 20 amino acids to make proteins. Some exotic microbes can use 22, but no one has yet found any that use fewer. Like letters in a book, amino acids string into coherent protein “sentences” that relay messages and do work within cells. Deleting an amino acid is like trying to type without the letter “e.” The text becomes gibberish.
Or does it? A team from Columbia University and collaborators stripped one amino acid, isoleucine, from ribosomes in Escherichia coli (E. coli) bacteria. These cellular machines translate genetic instructions into proteins, and they’re among the most complex structures in cells.
Deleting any amino acid could be catastrophic. But with some help from AI, Ec19 was born.

“This is a meaningful and stringent test of the consequences of removing isoleucine from a proteome’s alphabet, because the ribosome is one of life’s most complex and indispensable macromolecular machines,” wrote Charles Sanfiorenzo and Kaihang Wang at the California Institute of Technology, who were not involved in the study.
For the past decade, scientists have been probing the boundaries of life by shrinking genomes in a variety of microbes, adding synthetic amino acids to living cells, and even creating the building blocks for “mirror life.” But they’ve rarely tinkered with the canonical 20 amino acids.
Ec19 rewrites the script, but not for scientific curiosity alone. The findings pave the way for AI to help scientists engineer designer proteins and cells with added capabilities for use in biotechnology and medicine. It could also give us a peek into the earliest life on Earth.
“It’s very exciting that it’s possible,” Julius Fredens at the National University of Singapore, who was not involved in the research, told Nature.
Life has its own language. DNA’s four molecular letters—A, T, C, G—encode the genetic blueprint. Three-letter units of DNA, called codons, call for each of the 20 amino acids, along with a stop signal that ends protein making.
But the system is redundant. Evolution created 64 codons, with some encoding the same amino acids. Scientists have begun rewriting genomes by assigning redundant codons to synthetic amino acids, yielding working proteins never seen in nature. Because they’re foreign to our bodies, these could escape being broken down—an advantage for drugs designed to last longer. Other researchers are tinkering with the genetic code in bacteria, yeast, and worms, building chromosomes from scratch or probing the limits of a minimal genome that can still support life.
Even the most ambitious tests for synthetic life have avoided whittling down the canonical set of protein letters. But study author Harris Wong was intrigued by the prospect. Some amino acids have similar shapes and chemistry, hinting they could stand in for one another. And mounting evidence suggests early life may have operated using a smaller vocabulary.
The team analyzed nearly 400 proteins essential to E. coli, tracking how often each amino acid was naturally swapped without breaking the protein. Isoleucine took the crown. The bulky, branched molecule was frequently replaced by two cousins similar in shape and chemical behavior. If any amino acid could be removed, isoleucine was it.
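The analysis above, finding which amino acid is most often swapped without breaking a protein, can be sketched as a simple tally over aligned sequences. The study’s actual pipeline and data aren’t described here; the sequences and substitution counts below are a toy illustration where isoleucine (I) happens to be the most replaceable, as in the article.

```python
from collections import Counter

def substitution_counts(reference: str, homologs: list) -> Counter:
    """Count, per reference amino acid, how often it is replaced in
    still-functional aligned homologs (sequences assumed pre-aligned
    and equal length)."""
    counts = Counter()
    for homolog in homologs:
        for ref_aa, hom_aa in zip(reference, homolog):
            if ref_aa != hom_aa:
                counts[ref_aa] += 1
    return counts

# Toy data: isoleucine (I) is frequently swapped for cousins like
# valine (V) or leucine (L) in these made-up functional variants.
reference = "MIKIV"
homologs = ["MVKLV", "MIKVV", "MVKIV"]
```

Running the tally over the toy data ranks isoleucine as the most frequently substituted residue, mirroring how the team identified it as the best candidate for removal.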
The next problem was scale. Previous studies recoded the E. coli genome. But building a stripped-down version of the bacteria would require edits at more than 81,000 genomic sites, a daunting challenge that could take years.
Instead, the researchers focused on the ribosome. It was still a lofty goal. The machines that make proteins are essential to life and are themselves made up of 50 proteins. Removing an amino acid would be like stripping the metal from every part of a car engine and expecting it to run.
“Successfully removing isoleucine from such a large and essential RNA-protein complex would raise the possibility of entire genomes functioning with simplified, noncanonical amino acid alphabets,” wrote Sanfiorenzo and Wang.
The team’s first attempt hit a wall. In multiple bacterial strains, they replaced isoleucine codons with a close natural substitute, an amino acid called valine. Of the 50 edited ribosome proteins, 32 either hindered growth or triggered cell death.
Almost ready to shelve the project, the team turned to AI. Like the large language models that power chatbots, these algorithms can be trained on DNA and protein sequences. They can then dream up new amino acid sequences and predict how they fold into working proteins.
In this case, the advantage was creativity. AI came up with unintuitive ways to replace isoleucine without catastrophically damaging a protein’s structure. It sometimes suggested ways to compensate for amino acid swaps by making tweaks located far away in the genome. The team then tested promising designs to see if the bacteria survived and how well they grew.
Eventually, they landed on 47 working ribosome proteins without isoleucine. The remaining three took some elbow grease. They replaced amino acids, one by one, until they found a recipe that worked.
In the end, the team recoded every protein in the ribosome and built a single E. coli strain, Ec19, carrying 21 of the modified proteins. Its growth slowed a smidge compared to unaltered bacteria, but it retained the altered ribosome across more than 450 generations.
It wasn’t a full rewrite, but the study is a step toward living cells that can run on 19 amino acids. This would open the door to new kinds of synthetic organisms. Removing isoleucine would free up the codons dedicated to it, making them easier to re-assign to designer amino acids and creating proteins with new chemical properties for medicine, materials, and biotechnology.
Ec19 also challenges our assumptions about life itself. We don’t yet know if the molecular language in modern cells is necessary for survival or is just what evolution settled on. If it’s the latter, how far can we expand that code—and should we?
As scientists use more AI, progress in synthetic biology may speed up. But the models aren’t in the driver’s seat yet. “Human intuition and intervention are still necessary, at least for now, to yield viable biological designs,” wrote Sanfiorenzo and Wang.
The post All Life Uses 20 Amino Acids. Scientists Just Deleted One in Bacteria. appeared first on SingularityHub.