Singularity Hub offers daily news coverage, feature articles, analysis, and insights on key breakthroughs and future trends in science and technology.

One Shot Just Crushed Three Deadly Autoimmune Diseases

2026-04-15 02:41:54

A woman battling the conditions went from “two handfuls of pills” and daily blood transfusions to medication-free.

The 47-year-old woman was at the end of her rope.

In 2014, she was diagnosed with a rare form of anemia. Her body’s B cells, which normally produce antibodies to fight infections, had gone rogue, endlessly attacking oxygen-carrying red blood cells. Two other autoimmune disorders soon followed, one crippling her body’s ability to stop bleeding, the other increasing the risk of blood clots.

She had tried nine treatments. None helped. Her life was centered on blood transfusions, up to three daily, to keep the symptoms at bay. But constant fatigue made every day a struggle. The threat of deadly bleeding or blood clots loomed over her life.

Out of options, her care team tested an experimental treatment called CAR T cell therapy. They made a “living drug” out of the patient’s own T cells, editing the cells’ DNA so they would seek and destroy a specific biological enemy. Though CAR T is best known as a treatment for blood cancer, it’s also shown early promise in autoimmune disease. Trying to take on three conditions at the same time raised the bar, but it worked.

A single infusion of engineered cells rapidly killed off the misbehaving B cells. The woman was able to end blood transfusions within a week, and her red blood cell count was near normal in roughly a month. Her strength returned, and at the 11-month follow-up, she was free of medication and able to enjoy life again.

“It was an entirely uncontrolled disease. And now she’s off any therapy. That tells you that, at least for now, we did something very right,” study author Fabian Müller at University Hospital Erlangen in Germany told Nature.

Runaway Train

The body’s B cells are powerful defenders. They watch for infections or cancer, generate antibodies to take out threats, and rally other immune cells to join the fight.

But sometimes B cells break down. Genetic mutations can lead to blood cancer. Some B cells struggle to produce antibodies, rendering them powerless to counter infection. And in autoimmune disorders, the cells mistakenly attack healthy tissue—a kind of immune friendly fire that can damage organs if left untreated.

In the woman’s case, malfunctioning B cells relentlessly attacked red blood cells, stripping them of their ability to carry oxygen. They also destroyed platelets—tiny, disc-shaped fragments in the blood that stem bleeding. The cells also attacked a protein that helps prevent clot formation.

This triple whammy “can kill you very rapidly,” said CAR T pioneer Carl June at the University of Pennsylvania, who was not involved in the study.

Steroids to dampen the immune system didn’t work. Neither did antibodies that inhibit B cells or other classic autoimmune drugs. After attempting nine treatments and exhausting their options, the team offered CAR T cell therapy as a last resort.

CAR T drugs are usually made from a patient’s own T cells, genetically boosted to hunt down, grab onto, and destroy targets. Researchers originally developed CAR T for blood cancer, but efforts are underway to expand its use against solid cancers. In other studies, scientists have made these cancer-fighting soldiers directly inside the body to slash cost and time. Because CAR T cells can divide and replenish their numbers, a single dose could last over a decade.

The treatment is largely plug-and-play. The surfaces of all cells are dotted with protein beacons. Tumors have a unique protein signature. B cells have one too—a protein called CD19. Scientists have already had early success treating autoimmune diseases by designing CAR T cells that selectively hunt and destroy B cells.

A small CAR T trial in 2014 restored movement in patients with systemic sclerosis, a condition that causes tissue rigidity. Earlier this year, Müller helmed a clinical trial testing Zorpo-cel—T cells engineered to seek out CD19—in a variety of autoimmune conditions, with promising results. Six months after treatment, all patients had ended their use of steroids and other treatments.

“For the very first time in severe autoimmune diseases, you actually have a treatment-free period,” Müller told Medscape at the time. “That is really a new perspective that has never been achieved before.”

One for All

Simultaneously tackling three autoimmune diseases was uncharted territory. Too many CAR T cells could trigger a deadly runaway immune reaction, one that can even harm the brain.

The team turned to Zorpo-cel. They isolated the woman’s T cells and, in the lab, gene edited them to produce protein “hooks” targeting CD19. The patient then underwent standard chemotherapy to wipe out most of her immune system. This step is very tough on the body, but it’s needed to remove immune cells that would otherwise shut down the CAR T cells.

A week after infusion, the woman’s red blood cells had rebounded, ending the need for blood transfusions. A month later, most of her disease-related blood work had improved, and she “experienced a rapid and remarkable increase in physical strength and has been able to carry out normal everyday activity,” wrote the team.

Now, a year on, she no longer needs the “two handfuls of pills” she took to manage the conditions. Her liver struggled at several points during the trial, but she avoided major immune reactions and other severe side effects. It’s not clear if the liver trouble was due to CAR T or lingering damage from earlier treatments.

Battling three autoimmune disorders with CAR T is unprecedented. But there are limitations. It’s a single-case study, and researchers will need to keep an eye on the patient’s health over time. Also, CAR T cells can dwindle and allow target cells to return. At the end of the study, the team found signs of newly formed B cells. However, they were “naïve,” in that they hadn’t learned to target normal tissues yet—and they may never learn.

Hundreds of CAR T clinical trials targeting autoimmune diseases are in the works. Multiple commercial companies have joined the race. “I think, within a year or two, there’s going to be approvals in the US,” said June.

The post One Shot Just Crushed Three Deadly Autoimmune Diseases appeared first on SingularityHub.

Scientists Grow Electronics Inside the Brains of Living Mice

2026-04-14 06:33:30

The technology harnesses the brain’s own blood chemistry to assemble soft, light-controlled electrodes around neurons.

A single shot transforms the mice’s brains into biomanufacturing machines. Blood proteins churn the injected chemicals into a soft, flexible electrode mesh that seamlessly wraps around delicate neurons. Pulses of light aimed at the mesh quiet hyperactive cells. All the while, the mice go about their merry ways, with no inkling they’ve been turned into cyborgs.

This science fiction-like invention is the brainchild of Purdue University scientists seeking to reimagine brain implants.

These devices, often composed of rigid microelectrode chips, have already changed lives. They can collect electrical signals from the brain or spinal cord and translate these signals into speech or movement—returning lost abilities to people with paralysis or diseases of the brain. Implants can also jolt brain activity and pull people out of severe depression.

Yet most implants require extensive surgery and risk damaging the brain’s delicate tissue. The new technology would avoid these downsides by building electrodes directly at the target.

“Our work points to a future where doctors could ‘grow’ soft, wire-free electronic interfaces inside the brain using the patient’s own blood, then gently dial brain activity up or down from outside the head using harmless near-infrared light,” study author Krishna Jayant said in a press release.

Probes Galore

The brain produces every one of our sensations, movements, emotions, and decisions. Scientists have long sought to decode and manipulate its activity with a range of hardware.

Some devices use electrodes to monitor single neurons in a lab dish. Others are physically inserted into brain regions that encode cognition and emotion. Some designs sit atop the brain, without puncturing its delicate tissue, and capture dynamic brain waves like a wide-lens camera.

But brain tissue is soft and squishy; microelectrodes are not. The mismatch often leads to scarring, signal loss, and shortened device lifetimes. Replacing broken or infected implants is surgically complex and can further damage the brain. Some experts have even raised ethical concerns about long-term care.

A recent explosion of soft, biocompatible materials suggests alternatives are possible, and we’ve seen a wave of creative new probes. In one example, a silk-like mesh drapes over the brain’s surface, and a related version maps electrical activity in brain organoids. Another device is smaller than a cell and, after injection, hitches a ride on immune cells into the brain. These systems can record and alter brain activity. But prebuilt implants often require surgery and struggle to integrate with their hosts without damaging surrounding tissue.

So, why not grow an electrode directly inside the brain?

“The ability to synthesize [conductive] materials on demand at a target site could overcome the limitations of conventional synthetic implants,” wrote M.R. Antognazza and G. Lanzani at the Italian Institute of Technology, who were not involved in the study.

Under Construction

Our cells are natural manufacturers, constantly assembling things like proteins, genetic messengers, and membranes. Cells rely on two essential ingredients to construct the complex structures of life: biological building blocks and catalysts to bind them together. Synthetic materials work the same way. Monomers link like Lego blocks to form polymers with the help of a catalyst.

The discovery of electrically conductive polymers, meanwhile, has galvanized efforts to grow living bioelectronics directly inside the body. In a previous study, researchers genetically engineered cells to produce a protein catalyst that helps assemble conductive structures on the surfaces of living neurons. Another approach used hydrogen peroxide—a common first-aid staple—to link monomers into reliable electrodes that monitor nerves in leeches.

These quirky early successes showcased the promise of brain-built electronics, but hit hard limits. The chemistry often relied on catalysts toxic to neurons. Even when successfully formed, the electrodes mostly just listened. Changing brain activity required additional physical cables.

The Purdue team rewrote the recipe. They designed a monomer, called BDF, that, with the help of hemoglobin—a protein in red blood cells—becomes a soft, flexible, and electrically conductive mesh surrounding neurons at the site of injection. The willowy electrode hugs the brain’s anatomy and moves with it, minimizing physical damage. It’s responsive to near-infrared light and can translate light pulses from outside the skull into electrical signals that alter brain activity.

“Our key idea was to let the body’s own chemistry do the hard work,” said study author Sanket Samal.

The approach worked in several tests. Injecting BDF into store-bought beef and lamb steaks produced the electrode mesh within a day at human body temperature. In zebrafish embryos, a darling in neuroscience research, the reaction proceeded smoothly inside their yolks. Over 80 percent of the embryos survived, developed normally, and actively swam around—suggesting minimal harm.

But steak dinners and translucent fish are a far cry from our brains. Mice are closer. With the help of blood, BDF formed electrodes in mice’s motor cortexes after injection with minimal surgery. The mice’s brains maintained a normal balance of activity as they skittered around.

The team also coaxed dendrites, the tree-like input branches of a neuron, to produce the conductive mesh. Dendrites aren’t just passive cables; they’re “mini computers” that contribute to the brain’s computation and learning. Current methods struggle to precisely single out and control dendrite activity without messing with other parts of the neuron.

With near-infrared light, dendrite-built electrodes changed the way the neural branches behaved. The light temporarily lowered brain activity, and mice trained to press a lever were unable to perform the task. It didn’t wipe out their memory though: After turning off the light, the animals regained the skill. Their brains showed no signs of infection, inflammation, or overheating throughout the study.

Inhibiting brain signals has upsides. Hyperactive brain activity in epilepsy and Parkinson’s disease, for example, is currently dampened with medication or—in severe cases—brain implants. If validated, brain-grown electrodes could be a less invasive alternative. Though to be clear, the method still requires surgery to inject the materials. Adding biocompatible magnetic ingredients, which can also control brain activity, could further boost the system’s potential.

How long the materials stay put and if they’re safe over the long term remains unclear. But in theory, the strategy could also control spinal cord nerves or heart tissue. Researchers could also adapt the strategy to use other types of materials that regulate brain activity in different ways, like ramping it up.

With further improvement, the electrode wouldn’t “just coexist with brain cells for months or years; it becomes part of them, stable across lifetimes,” said Jayant.

The post Scientists Grow Electronics Inside the Brains of Living Mice appeared first on SingularityHub.

How to Build Better Digital Twins of the Human Brain

2026-04-11 22:00:00

Brain twins where regions are allowed to compete for resources behave more like the real thing.

The potential to create personalized digital twins of your brain and body is a hot topic in neuroscience and medicine today. These computer models are designed to simulate how parts of your brain interact and how the brain may respond to stimulation, disease, or medication.

The extraordinary complexity of the brain’s billions of neurons makes this a very difficult task, of course, even in the era of AI and big data. Until now, whole-brain models have struggled to capture what makes each brain unique.

People’s brains are all wired slightly differently, so everyone has a unique network of neural connections that represents a kind of “brain fingerprint.”

However, most so-called brain twins are currently more like distant cousins. Their performance is barely any closer to the real thing than if the model were using the wiring diagram of a random stranger.

This matters because digital twins are increasingly proposed as tools for testing treatments by computer simulation, before applying them to real people. If these models fail to capture fundamental principles of each patient’s unique brain organization, their predictions won’t be personalized—and in worst cases could be misleading.

In our latest study, published in Nature Neuroscience, we show that realistic digital brain twins require something that many existing models overlook: competition between the brain’s different systems.

Our findings suggest that without competition, digital twins risk being overly generic, missing out on what makes you “you.”

Excess of Cooperation

The human brain is never static. The ebb and flow of its activity can be mapped non-invasively using neuroimaging methods such as functional MRI. A computer model can be built from this, specific to that person and simulating how the regions of their brain interact. This is the idea of the digital twin.

The brain is often described as a highly cooperative system. Yet everyday experiences such as focusing attention or switching between tasks tell us intuitively that brain systems compete for limited resources. Our brains cannot do everything at once, and not all regions can be active together all the time.

Despite this, the vast majority of brain simulations over the past 20 years have not taken these competitive interactions between regions into account. Rather, they have “forced” neighboring regions to cooperate. This can push the simulated brain into overly synchronized states that are rarely seen in real brains.

In a large comparative study of humans, macaque monkeys, and mice, our international team of researchers used non-invasive brain activity recordings to show that the most realistic whole-brain models not only require cooperative interactions within specialized brain circuits, but long-range competitive interactions between different circuits.

To achieve this, we compared two types of brain model: one in which all interactions between brain regions were cooperative, and another in which regions could either excite or suppress each other’s activity. In humans, monkeys, and mice, the models that included competitive interactions consistently outperformed cooperative-only models.

Using a large-scale analysis of over 14,000 neuroimaging studies, we found that spontaneous activity in the competitive models more faithfully reflected known cognitive circuits, such as those involved in attention or memory. This suggests competition is crucial for enabling the brain to flexibly activate appropriate combinations of regions—a hallmark of intelligent behavior.
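The contrast between the two model types can be sketched with a toy two-region rate model in Python—a hypothetical, pared-down stand-in for the study’s whole-brain models, not the authors’ code. With purely excitatory coupling the regions’ noisy activity rises and falls in lockstep (the over-synchronized regime), while flipping the sign of the coupling lets the regions take turns:

```python
import math
import random

def simulate(w, steps=20000, dt=0.05, seed=1):
    """Two coupled rate units: dx_i = (-x_i + tanh(w * x_j)) * dt + noise.
    Positive w: the regions excite each other (cooperation).
    Negative w: the regions suppress each other (competition)."""
    rng = random.Random(seed)
    x0 = x1 = 0.0
    xs, ys = [], []
    for _ in range(steps):
        n0 = 0.3 * rng.gauss(0, 1) * math.sqrt(dt)
        n1 = 0.3 * rng.gauss(0, 1) * math.sqrt(dt)
        x0, x1 = (x0 + (-x0 + math.tanh(w * x1)) * dt + n0,
                  x1 + (-x1 + math.tanh(w * x0)) * dt + n1)
        xs.append(x0)
        ys.append(x1)
    return xs, ys

def pearson(a, b):
    """Correlation between the two regions' activity traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    var_a = sum((u - ma) ** 2 for u in a)
    var_b = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(var_a * var_b)

coop = pearson(*simulate(w=0.9))    # cooperative-only coupling
comp = pearson(*simulate(w=-0.9))   # competitive coupling
print(f"cooperative: r = {coop:+.2f}, competitive: r = {comp:+.2f}")
```

In the cooperative case the correlation sits near +1—a caricature of the overly synchronized states the researchers found unrealistic—while competitive coupling drives it strongly negative, letting each region shape activity in turn.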

Visual summary of our study: When whole-brain models of humans, macaques, and mice are allowed to treat interactions between some brain regions as competitive, they consistently do so—generating activity patterns that closely resemble those associated with real cognitive processes. Luppi et al/Nature Neuroscience, CC BY

We concluded that competitive interactions act as a stabilizing force, allowing different brain systems to take turns in shaping the direction of the brain’s ebbs and flows without interference or distraction. This ability to avoid runaway activity may also contribute to the remarkable energy-efficiency of the mammalian brain, which is many orders of magnitude more efficient than modern AI systems.

Crucially, models with competitive interactions were not only more accurate but also more individual-specific. This means they were better at capturing the unique brain fingerprint that distinguishes one person’s brain from another’s.

No Longer Lost in Translation?

The fact that our findings hold across humans and other mammals suggests they reflect fundamental principles of how intelligent systems work. In each case, we found models with competitive interactions generated brain activity patterns that closely resembled those associated with real cognitive processes.

This could have major implications for translational neuroscience. Animal models are routinely used to test treatments before human trials, yet differences between species often limit how well these results translate. Around 90 percent of treatments for neuropsychiatric disorders are “lost in translation,” failing in human clinical trials after showing promise in animal trials.

Combining brain imaging data from human patients with whole-brain modeling could radically change this. A framework that works across species would provide a powerful bridge between basic research and clinical application.

If someone needs intervention in the brain, for example due to epilepsy or a tumor, their digital twin could be used to explore how the patient’s brain activity would change when stimulated with different levels of drugs or electrical impulses. This might significantly improve on existing trial-and-error approaches with real patients, and thus provide better treatments.

The general principles of brain organization across species also offer a path for understanding how to shape the next generation of artificial intelligence. In the not-too-distant future, we may be able to construct digital twins that are more faithful in reproducing the salient features of the human brain—and potentially, AI models that are more faithful to the human mind.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post How to Build Better Digital Twins of the Human Brain appeared first on SingularityHub.

Anthropic’s Mythos AI Uncovered Serious Security Holes in Every Major OS and Browser

2026-04-11 02:33:01

It’s a step change in cybersecurity. Exploits that would take experts weeks to develop can now be generated in hours.

Concerns about AI’s ability to turbocharge cybersecurity threats have been building for years. Anthropic’s latest model could mark a turning point after the company claimed the model could identify and exploit zero-day vulnerabilities in every major operating system and web browser.

One of the standout use cases for large language models is analyzing and writing code. This has long raised worries that the technology could help automate much of the work of hackers, potentially lowering the barrier for cyberattacks.

Leading models have demonstrated steady progress on various cybersecurity-related benchmarks, and there has been evidence malicious actors are using the technology. But so far, the impact appears to have been modest, suggesting practical barriers remain that prevent the widespread use of the technology.

According to Anthropic, that’s about to change. The company says its latest model, Mythos, has hacking capabilities so potent the company will not make it publicly available. Instead, it’s releasing Mythos to a select group of major technology companies and open source developers as part of an initiative called Project Glasswing. Those participating can use the model to identify vulnerabilities in their code and patch them before hackers get access to similar capabilities.

“The vulnerabilities that Mythos Preview finds and then exploits are the kind of findings that were previously only achievable by expert professionals,” the company’s researchers write in a blog post. “We believe the capabilities that future language models bring will ultimately require a much broader, ground-up reimagining of computer security as a field.”

Fortune first reported news of Mythos last month, after a data leak at Anthropic revealed details about the new model. While the AI excels at cybersecurity tasks, it’s designed to be a general purpose model, and the company says its hacking capabilities are simply a result of vastly improved coding and reasoning skills.

In testing, Anthropic’s researchers discovered the model was able to find “zero-day” vulnerabilities—ones that were previously undiscovered—in every major operating system and web browser. Many were decades old, an indicator of how hard they were to detect.

But the model isn’t just good at finding vulnerabilities. The company’s red team—security researchers who simulate hacking attacks to identify security weaknesses—showed the model could chain together multiple vulnerabilities to create complex attacks capable of sidestepping defenses.

Its capabilities are a step change from the previous best models. Given the challenge of attacking the Firefox web browser’s JavaScript engine, Anthropic’s previous most powerful model Opus 4.6 succeeded just twice, compared to 181 times for Mythos. Most worryingly, the team found that engineers with no security background could use it to develop successful attacks overnight.

Key to the new capabilities is the model’s ability to operate autonomously for long stretches. To find bugs, the researchers used Anthropic’s coding agent Claude Code to call the model and give it a simple prompt to scan for vulnerabilities in a particular codebase. The model then read the code, came up with hypotheses about potential bugs, and ran tests to validate them without any human involvement.

The Anthropic team says Mythos fundamentally reshapes the cybersecurity landscape as exploits that would take experts weeks to develop can now be generated in hours. In particular, they note that so-called “defense-in-depth” measures that make it time-consuming and costly to attack a system may prove ineffective against models like Mythos.

“When run at large scale, language models grind through these tedious steps quickly,” they write. “Mitigations whose security value comes primarily from friction rather than hard barriers may become considerably weaker against model-assisted adversaries.”

The head of Anthropic’s frontier red team, Logan Graham, told Axios that they expect other companies to produce models with similar capabilities in the coming six to 18 months. Sources familiar with the matter told Axios that OpenAI is already finalizing a model with similar capabilities to Mythos, which will have a similarly limited release.

In its blog post, the company’s researchers note that new security technology has historically benefited defenders more than attackers. If frontier labs are careful about model releases, they think the same could be true here too, but the transitional period is likely to be disruptive.

“We need to prepare now for a world where these capabilities are broadly available in 6, 12, 24 months,” Graham told Wired. “Many things would be different about security. Many of the assumptions that we’ve built the modern security paradigms on might break.”

Whether AI developers can keep a lid on these capabilities long enough for the rest of the world to come to grips with this new reality remains to be seen. But either way, cybersecurity is likely to be even higher up the list of priorities in most boardrooms going forward.

The post Anthropic’s Mythos AI Uncovered Serious Security Holes in Every Major OS and Browser appeared first on SingularityHub.

MIT Mined Bacteria for the Next CRISPR—and Found Hundreds of Potential New Tools

2026-04-07 22:00:00

An AI system unearthed a trove of CRISPR-like proteins in minutes instead of weeks or months.

CRISPR is a breakthrough technology with humble origins. Scientists first discovered the powerful gene editor in bacteria that were using it as a weapon against invading viruses called phages. Phages can wipe out up to a quarter of a bacterial population in a day. Under assault, bacteria have evolved a hefty arsenal of defenses in a relentless arms race.

These bacterial immune systems often chop up the DNA or RNA of invading viruses and are relatively easy to manufacture, making them alluring targets for scientists developing genetic engineering tools. CRISPR is just one example. There are many more. But traditional methods of searching for them are slow and labor-intensive, leaving most CRISPR-like proteins unexplored.

Now, MIT scientists have released an AI called DefensePredictor that can root out new bacterial defense systems in five minutes, instead of weeks or months. As proof of concept, DefensePredictor churned through hundreds of thousands of proteins in multiple strains of Escherichia coli (E. coli). Over 600 proteins not previously linked to immune defense popped up. Added to a vulnerable strain of bacteria, a subset of these protected them against attack.

“E. coli harbors a much broader landscape of antiphage defense than previously realized, expanding the likely number of systems by multiple orders of magnitude,” wrote the team.

These systems might hold secrets about how immunity evolved. And because the proteins may work in different ways, they could be a goldmine for next-generation precision molecular tools.

Unrivaled Success

Around three decades ago, Japanese scientists discovered a curious, repetitive DNA sequence in E. coli. Other researchers soon realized it was widespread across bacterial species and matched viral DNA sequences—suggesting it could be part of the bacteria’s immunity against phages.

The system now known as CRISPR stores snippets of DNA from past infections and uses protein “scissors” to cut apart matching viral DNA during reinfection. Intrigued by its precision, scientists repurposed CRISPR into a variety of gene editing tools and launched a gene therapy revolution.

CRISPR is the most famous, but a range of bacterial defense systems have transformed genetic engineering. One, containing an enzyme that cuts specific sequences of foreign DNA, is widely used to add genetic material into cells. Another encodes a balance of toxins and antitoxins that can trigger bacterial death after phage infection. This one has been adapted into a kill switch to prevent engineered microbes or genetically modified crops from spreading uncontrollably.

Researchers are also exploring the use of newly discovered systems—with video game-like names like Zorya and Thoeris—as molecular sensors and programmable signaling in synthetic biology.

There are likely more undiscovered tools in the universe of bacterial defense, and scientists have ways of hunting them down. Some defense genes are grouped close to one another, so a known gene could guide the discovery of others. Researchers have also found genes by screening libraries of free-floating circular genome fragments across bacterial populations.

Over 250 systems have been painstakingly validated. But plenty more could escape current detection methods if, for example, their components are spread across the genome.

“The full repertoire of antiphage defense systems in bacteria remains unknown,” wrote the team. “We currently lack the tools to systematically identify systems with high speed, sensitivity, and specificity.”

AI Discoverer

The new DefensePredictor algorithm bridges that gap.

At its core is a protein language model called ESM-2. Proteins are made of 20 molecular “letters” that combine into strings and fold into complex 3D shapes. Similar to large language models, algorithms like ESM-2 learn the language of proteins and can predict their structure and purpose based on sequence alone.

ESM-2 and other similar algorithms have already helped scientists decipher mysterious proteins in bacteria, viruses, and other microorganisms previously unknown to science. Researchers hope their unique shapes could inspire antibiotics, biofuels, or even be used to build synthetic organisms.

To build their AI, the team first established a training ground. With a previous model, DefenseFinder, they screened roughly 17,000 microbial genomes for genes related—and unrelated—to defense systems. They translated these genes into corresponding proteins and built up a database with some 15,000 antiphage proteins and 186,000 proteins unrelated to defense.

These numbers are far too staggering for a human to tackle, but the AI took the work in stride. Alongside ESM-2, the model used several algorithms to distinguish between defense and non-defense proteins. Eventually DefensePredictor learned some general characteristics that make a protein more likely to be part of the immune system. (As with other language models, it’s hard to fully understand the system’s reasoning, which the team is still trying to unpack.)
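The overall recipe—turn each protein sequence into numbers, then learn to separate defense from non-defense examples—can be illustrated with a deliberately crude Python sketch. The real pipeline uses ESM-2 embeddings and trained classifiers; the amino-acid-composition features, nearest-centroid rule, and toy sequences below are all made-up stand-ins:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def featurize(seq):
    """Amino-acid composition vector: a crude stand-in for the
    learned ESM-2 embeddings DefensePredictor actually uses."""
    counts = Counter(seq)
    total = len(seq)
    return [counts.get(aa, 0) / total for aa in AMINO_ACIDS]

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(seq, defense_centroid, other_centroid):
    """Label a sequence by whichever class centroid is closer."""
    v = featurize(seq)
    d_def = sum((a - b) ** 2 for a, b in zip(v, defense_centroid))
    d_oth = sum((a - b) ** 2 for a, b in zip(v, other_centroid))
    return "defense" if d_def < d_oth else "non-defense"

# Tiny invented training set: the "defense" sequences here are
# arbitrarily lysine/arginine-rich; in the real study the labels
# came from DefenseFinder screens of ~17,000 microbial genomes.
defense_seqs = ["MKKRKLAKRIKEKRK", "MRKKARKLKEKRRKA", "MKRKLKKARKEKKRA"]
other_seqs = ["MLLVALLGGASTVLA", "MAVGLLSTLAVGGLA", "MLGVASLLTAGVLSA"]

def_c = centroid([featurize(s) for s in defense_seqs])
oth_c = centroid([featurize(s) for s in other_seqs])

print(classify("MKRKKAKELKRKRKA", def_c, oth_c))  # basic-residue-rich query
print(classify("MALLGVSTAGLVALA", def_c, oth_c))  # hydrophobic query
```

The real model, of course, learns far subtler signals than raw composition, which is precisely why its predictions needed experimental validation in phage-challenged E. coli.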

When tested on 69 strains of E. coli, DefensePredictor surfaced a treasure trove of over 600 new defense-related proteins, including more than 100 unlike any yet discovered. Although some were encoded near one another or in circular DNA—like previous findings—nearly half weren’t. They were instead scattered across the genome yet may still work together.

To test the results, the team engineered a highly vulnerable E. coli strain to express candidate defense proteins—predicted to work either alone or as part of a system—and exposed them to two dozen aggressive phages. Nearly 45 percent of the proteins offered protection against at least one phage.

Beyond E. coli, the scientists expanded their search to 1,000 more microorganisms and found thousands of potential defense proteins unlike anything seen before. “New immune mechanisms remain to be found,” wrote the team.

The race is on. In a study also published this week, a Pasteur Institute team combined multiple AI models to look for antiphage systems in protein sequences. Across more than 32,000 bacterial genomes, their model predicted nearly 2.4 million antiphage proteins, most previously unknown. They released an atlas of AI-predicted bacterial immunity proteins for others to explore.

“The diversity of antiphage defense systems is vast and largely untapped,” they wrote.

Microorganisms harbor a colossal repertoire of biological tools we’re only just beginning to uncover at scale. More species are constantly found thriving in diverse environments, from pond scum to boiling sulfuric springs to the crushing pressure of the Mariana Trench. Every new genome scientists discover and pick apart, now with AI’s help, could be hiding the next CRISPR.    

The post MIT Mined Bacteria for the Next CRISPR—and Found Hundreds of Potential New Tools appeared first on SingularityHub.

US Issues Grand Challenge: The First Fault-Tolerant Quantum Computer by 2028

2026-04-06 22:00:00

Today’s error-prone quantum computers are still far from practical. But a bold deadline could galvanize the field.

As the race to harness quantum computing accelerates, governments are throwing their hats in the ring. The US Department of Energy is now aiming to build a fully functional, fault-tolerant quantum computer within the next three years.

Despite plenty of breathless headlines about the coming quantum revolution, today’s machines remain a long way from being practically useful. It’s widely expected that we will need much larger, more reliable quantum computers before they can tackle real-world problems.

That’s largely because qubits are incredibly error-prone, which means future machines will need to run algorithms that detect and correct those errors faster than they occur. It’s estimated that the overhead for these algorithms could be as high as 1,000 physical qubits to create a single, error-corrected “logical” qubit that can actually take part in calculations.
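To put that overhead in perspective, a quick back-of-envelope calculation using the article's own figures (the 1,000:1 ratio is an upper estimate, and real overheads depend on the error-correcting code and target error rate):

```python
# Article's upper estimate: 1,000 physical qubits per logical qubit.
PHYSICAL_PER_LOGICAL = 1_000

def physical_qubits_needed(logical_qubits, overhead=PHYSICAL_PER_LOGICAL):
    return logical_qubits * overhead

# A modest machine with 100 logical qubits at this overhead:
needed = physical_qubits_needed(100)  # 100,000 physical qubits

# Article: current devices feature "at best a few hundred" physical qubits.
current_device = 300
scale_up = needed // current_device   # roughly a 300x scale-up
```

Even under rosier overhead assumptions, the gap between today's devices and a scientifically useful fault-tolerant machine spans orders of magnitude, which is why the three-year deadline reads as so aggressive.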

Given that most current devices feature at best a few hundred physical qubits, more sober heads in the industry have suggested that we may be waiting well into the next decade to see a practical fault-tolerant quantum computer. But last week, Darío Gil, the Department of Energy’s undersecretary for science, announced the agency thinks it can hit that milestone in three years.

“By 2028 we will deliver the first generation of fault-tolerant quantum computers capable of scientifically relevant quantum calculations,” he told the Office of Science Advisory Committee, according to Science.

The agency doesn’t actually plan to build the system itself; it wants quantum computing companies to provide a ready-made solution. It has set out performance criteria it expects the future device to meet but is leaving the details up to providers. In particular, the agency has not picked a favorite between leading quantum computing designs, such as superconducting qubits, trapped ions, or neutral atoms.

“You can build it however you want, so long as you meet that objective and demonstrate scientific relevance,” Gil explained.

The proposed system would likely be housed at one of the department’s national laboratories where researchers can apply to use it for free, with projects selected based on scientific merit.

The announcement is the latest example of the agency’s growing focus on quantum technology. In November 2025, it announced $625 million to renew its National Quantum Information Science Research Centers, which are designed to accelerate research in quantum computing, simulation, networking, and sensing.

The goal is undeniably ambitious, though. There has been significant progress in error-correction technology in recent years, which has renewed optimism in the industry. In particular, Google’s December 2024 demonstration of its Willow chip showed that quantum error correction can work in practice, not just in theory. But massive technical hurdles remain, primarily in scaling up the hardware.

“It’s a very optimistic but worthy goal,” Yale physicist Steven Girvin told Science. Researchers are making “tremendous progress” in error correction, he said, but they’re still far from true fault-tolerance.

Solving that challenge has become an urgent priority for the industry, according to a recent report from quantum computing company Riverlane, but a severe talent shortage may limit how fast the field can move. There are only an estimated 600 to 700 professionals specializing in quantum error correction worldwide, but the industry will need up to 16,000 by the turn of the decade. And training error-correction experts can take up to 10 years.

It’s possible that the kind of grand challenge laid out by the DOE can help galvanize both the attention and funding needed to move the needle. But it’s an open question whether the industry will be able to deliver on the incredibly bold timeline outlined this week.

The post US Issues Grand Challenge: The First Fault-Tolerant Quantum Computer by 2028 appeared first on SingularityHub.