Singularity Hub has offered daily news coverage, feature articles, analysis, and insights on key breakthroughs and future trends in science and technology.

Data Centers in Space: Will 2027 Really Be the Year AI Goes to Orbit?

2025-12-19 23:00:00

Google plans to take a tangible first step in a new AI and computing moonshot by launching two prototype satellites to orbit in early 2027.

Google recently unveiled Project Suncatcher, a research “moonshot” aiming to build a data center in space. The tech giant plans to use a constellation of solar-powered satellites which would run on its own TPU chips and transmit data to one another via lasers.

Google’s TPU chips (tensor processing units), which are specially designed for machine learning, are already powering Google’s latest AI model, Gemini 3. Project Suncatcher will explore whether they can be adapted to survive radiation and temperature extremes and operate reliably in orbit. It aims to deploy two prototype satellites into low Earth orbit, some 400 miles above the Earth, in early 2027.

Google’s rivals are also exploring space-based computing. Elon Musk has said that SpaceX “will be doing data centers in space,” suggesting that the next generation of Starlink satellites could be scaled up to host such processing. Several smaller firms, including a US startup called Starcloud, have also announced plans to launch satellites equipped with the GPU chips (graphics processing units) that are used in most AI systems.

The logic of data centers in space is that they avoid many of the issues with their Earth-based equivalents, particularly around power and cooling. Space systems have a much lower environmental footprint, and it’s potentially easier to make them bigger.

As Google CEO Sundar Pichai has said: “We will send tiny, tiny racks of machines and have them in satellites, test them out, and then start scaling from there … There is no doubt to me that, a decade or so away, we will be viewing it as a more normal way to build data centers.”

Assuming Google does manage to launch a prototype in 2027, will it simply be a high-stakes technical experiment—or the dawning of a new era?

The Scale of the Challenge

I wrote an article for The Conversation at the start of 2025 laying out the challenges of putting data centers into space, in which I was cautious about them happening soon.

Now, of course, Project Suncatcher represents a concrete program rather than just an idea. This clarity, with a defined goal, launch date, and hardware, marks a significant shift.

The satellites’ orbits will be “sun synchronous,” meaning they’ll always be flying over places at sunset or sunrise so that they can capture sunlight nearly continuously. According to Google, solar arrays in such orbits can generate significantly more energy per panel than typical installations on Earth because they avoid losing sunlight due to clouds and the atmosphere, as well as at night.

The TPU tests will be fascinating. Whereas hardware designed for space normally needs to be heavily shielded against radiation and extreme temperatures, Google is using the same chips used in its Earth data centers.

The company has already done laboratory tests exposing the chips to radiation from a proton beam that suggest they can tolerate almost three times the dose they’ll receive in space. This is very promising, but maintaining reliable performance for years amid solar storms, debris, and temperature swings is a far harder test.

Another challenge lies in thermal management. On Earth, servers are cooled with air or water. In space, there is no air and no straightforward way to dissipate heat. All heat must be removed through radiators, which often become among the largest and heaviest parts of a spacecraft.

NASA studies show that radiators can account for more than 40 percent of total power system mass at high power levels. Designing a compact system that can keep dense AI hardware within safe temperatures is one of the most difficult aspects of the Suncatcher concept.

A space-based data center must also replicate the high bandwidth, low latency network fabric of terrestrial data centers. If Google’s proposed laser communication system (optical networking) is going to work at the multi-terabit capacity required, there are major engineering hurdles involved.

These include maintaining the necessary alignment between fast-moving satellites and coping with orbital drift, where satellites move out of their intended orbit. The satellites will also have to sustain reliable ground links back on Earth and overcome weather disruptions. If a space data center is to be viable for the long term, it will be vital that it avoids early failures.

Maintenance is another unresolved issue. Terrestrial data centers rely on continual hardware servicing and upgrades. In orbit, repairs would require robotic servicing or additional missions, both of which are costly and complex.

Then there is the uncertainty around economics. Space-based computing becomes viable only at scale, and only if launch costs fall significantly. Google’s Project Suncatcher paper suggests that launch costs could drop below $200 (£151) per kilogram by the mid-2030s, roughly a seventh or an eighth of today’s prices. That would put construction costs on par with some equivalent facilities on Earth. But if satellites require early replacement or if radiation shortens their lifespan, the numbers could look quite different.
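That cost argument can be sketched as back-of-envelope arithmetic. The cost figures below are the article’s projections, and the per-satellite mass is a purely hypothetical assumption for illustration:

```python
# Launch-cost arithmetic implied by the article's figures.
# TARGET_COST_PER_KG is the Project Suncatcher projection;
# REDUCTION_FACTOR is the midpoint of "a seventh or an eighth of today's prices";
# SATELLITE_MASS_KG is a purely hypothetical assumption.
TARGET_COST_PER_KG = 200        # projected $/kg, mid-2030s
REDUCTION_FACTOR = 7.5          # midpoint of "seven or eight times"
SATELLITE_MASS_KG = 1_000       # hypothetical mass of one satellite

current_cost_per_kg = TARGET_COST_PER_KG * REDUCTION_FACTOR  # ~$1,500/kg

launch_cost_today = SATELLITE_MASS_KG * current_cost_per_kg
launch_cost_2030s = SATELLITE_MASS_KG * TARGET_COST_PER_KG

print(f"Implied launch cost per kg today: ${current_cost_per_kg:,.0f}")
print(f"Launching one satellite today:    ${launch_cost_today:,.0f}")
print(f"Launching one satellite, 2030s:   ${launch_cost_2030s:,.0f}")
```

Under these assumptions, per-satellite launch costs fall from roughly $1.5 million to $200,000, which is the kind of gap that separates a demonstration from a business case.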

In short, a two-satellite test mission by 2027 sounds plausible. It could validate whether TPUs survive radiation and thermal stress, whether solar power is stable, and whether the laser communication system performs as expected.

However, even a successful demonstration would only be the first step. It would not show that large-scale orbital data centers are feasible. Full-scale systems would require solving all the challenges outlined above. If adoption occurs at all, it is likely to unfold over decades.

For now, space-based computing remains what Google itself calls it, a moonshot: ambitious and technically demanding, but one that could reshape the future of AI infrastructure, not to mention our relationship with the cosmos around us.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Data Centers in Space: Will 2027 Really Be the Year AI Goes to Orbit? appeared first on SingularityHub.

New Gene Drive Stops the Spread of Malaria—Without Killing Any Mosquitoes

2025-12-19 01:09:46

A Tanzania-led study pitted gene-edited mosquitoes against a wide range of malaria parasites found locally.

Mosquitoes are an uncomfortable, itchy nuisance. But for people in sub-Saharan Africa, a bite could mean death. The pests are living incubators for the parasite that causes malaria. Roughly 600,000 people are killed by the disease each year, with most being children under five years of age.

Insecticides, malaria drugs, and mosquito nets saved a million lives globally in 2024 alone. But their efficacy is waning. Mosquitoes and the malaria parasite are becoming resistant to chemical inhibitors. And consistent, perfect use of physical barriers is hard to manage for years on end, especially for children.

Realizing this, scientists have turned to a drastic solution: gene drives, a technology that skews the rules of inheritance. Rather than nature’s fifty-fifty chance of an offspring inheriting a gene from either parent, gene drives raise the probability of a gene’s inheritance to 90 percent or higher.

The tweak allows a gene to spread rapidly across entire populations. In lab tests, gene drives carrying genes that reduce female mosquito fertility have caused populations to collapse. Other experimental drives carrying genes that block parasite reproduction suggest a natural population could be replaced with one unable to carry malaria in just a few generations.

But these studies mostly used a specific type of lab-grown mosquito and older generations of the malaria parasite. Whether gene drives could keep naturally circulating malaria parasites in check, especially in countries where they’re most prevalent, was unknown.

This month, a research team from Tanzania and the UK found that engineered mosquitoes suppressed a wide variety of malaria parasites in blood samples collected from children in the area. Genetically altered in a new state-of-the-art biosecurity facility in Tanzania, the mosquitoes passed on parasite-inhibiting genes with breakneck speed and efficiency.

The promising findings are the latest from Transmission Zero, a Tanzania-led and internationally supported project to develop genetically based mosquito suppression.

“Gene-drive mosquitoes…offer unprecedented hope,” wrote study authors Alphaxard Manjurano at the National Institute for Medical Research Mwanza Center and Dickson Lwetoijera at the Ifakara Health Institute, both based in Tanzania.

Moving South

Gene drives bend the rules of inheritance. Rather than the usual fifty-percent chance of passing a given gene to offspring, gene drives transmit their cargo through generations with near-certainty.

Scientists engineer gene drives by first adding genetic instructions for making the gene editing tool CRISPR. These instructions are inserted into one chromosome of a chromosome pair, whose members are inherited one from each parent. The drive hijacks the bug’s protein-making machinery to pump out Cas9 “scissors” that cut the partner chromosome.

Rather than stitching the broken ends together, the cell uses the gene-drive-containing chromosome as a template for repair. Now both chromosomes contain the drive, ensuring it’ll be passed down to future generations.
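The speed of that takeover can be sketched with a toy allele-frequency model. This is an illustration only, not the study’s model: it assumes random mating, no fitness cost, and a fixed transmission probability for drive carriers:

```python
def next_freq(p, d):
    """Drive-allele frequency after one generation of random mating.
    Drive homozygotes (fraction p^2) always transmit the drive;
    heterozygotes (fraction 2p(1-p)) transmit it with probability d.
    d = 0.5 recovers ordinary Mendelian inheritance."""
    return p * p + 2 * p * (1 - p) * d

def simulate(d, p0=0.01, generations=15):
    """Iterate the model from a 1 percent starting frequency."""
    p = p0
    for _ in range(generations):
        p = next_freq(p, d)
    return p

mendelian = simulate(d=0.5)    # no drive: frequency stays at 1 percent
gene_drive = simulate(d=0.95)  # biased inheritance, as in a gene drive

print(f"Mendelian after 15 generations:  {mendelian:.3f}")
print(f"Gene drive after 15 generations: {gene_drive:.3f}")
```

With Mendelian odds the rare allele never gains ground, while with 95 percent transmission it sweeps to near fixation within about a dozen generations, which is the dynamic behind gene drives’ rapid spread.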

Gene drive design is extremely versatile. Some drives target genes involved in female fertility, making mosquitoes sterile and quickly lowering their numbers. Others produce malaria antibodies in female mosquitoes when they drink blood, neutralizing the parasite and preventing it from spreading. Yet others propagate a protective gene that naturally wards off malaria in mosquitoes.

The latter strategies are gaining steam. Not everyone is keen on eliminating entire species. Mosquitoes may play diverse roles in ecosystems that we haven’t yet discovered. Kneecapping malaria parasites as they grow in mosquitoes seems like the safer bet.

But previous gene-drive mosquitoes were designed and tested using old, frozen malaria samples—a far cry from the genetic diversity and rapid evolution that make the parasite formidable in natural environments. Bringing the technology to regions heavily affected by the disease could help local communities better battle the disease.

Hidden Medicine

The new gene drive relied on previous efforts from George Christophides at Imperial College London, who was also an author of the new study. Malaria parasites take roughly 10 days to incubate and develop inside mosquitoes. Once mature, they spread into the bug’s saliva, and the next bite can infect people. Because the mosquito carriers don’t survive long past this period—but can do lots of damage in the meantime—delaying parasite development could crash the entire transmission cycle.

The team took inspiration from two small proteins that naturally cripple parasite development. One was discovered in the African clawed frog; the other in honeybees. Parasites in lab-grown mosquitoes, engineered to carry gene drives loaded with the genes for these proteins, took a few days longer to mature—precious time during which some of the bugs naturally died off.

Collaborators in Tanzania recreated these gene drive mosquitoes and tested them in a near real-world setting. After feeding on blood samples from local children infected by various strains of the parasite, the edited mosquitoes struggled to produce more of the pathogen.

“This is the first time a genetically modified, gene drive-compatible mosquito strain has been developed in Africa, by African scientists, targeting malaria parasites circulating in local communities,” said Lwetoijera in a press release. The treatment presents a new way to slash malaria risks in plagued communities, though long-term monitoring will be essential to make sure the parasite doesn’t develop resistance against the gene drive.

The project didn’t just rely on scientific insights. In a country with relatively low resources, little infrastructure, and hazy regulations, building the research program from the ground up was a top priority to ensure biocontainment safety. The study was conducted in a state-of-the-art facility specifically designed for this research, allowing local scientists to spearhead future genetic engineering efforts and field testing.

A daring trial to release the edited mosquitoes on an island in Lake Victoria is planned for the next phase. Throughout the project, Transmission Zero has worked with local communities to build trust in a bewildering technology. Plenty of protocols and planning need to be in place before a real-world test takes place. These include ecological risk assessment, regulatory oversight, and continued development of skills and expertise in staff leading the effort.

Both Manjurano and Lwetoijera stressed the importance of African leadership as the project moves along, ensuring that as the technology is developed and implemented it meets local priorities and ethical standards.

International collaborators agree. “Now, we want to move at the right speed. It is important that we’re not too fast and that we make sure people are supportive of this new technology, but we should also move with urgency and treat malaria as the emergency that it is,” said study author Nikolai Windbichler at Transmission Zero and Imperial College London.


These Robots Are the Size of Single Cells and Cost Just a Penny Apiece

2025-12-16 23:00:00

The microbots have tiny computers, sensors, and actuators. They can sense temperature and swim autonomously.

The robots, each the size of a single cell, casually turn circles in a bath of water. Suddenly, their sensors detect a change: Parts of the bath are heating up. The microrobots halt their twirls and head for warmer waters, where they once again settle into lounge-mode—all without human interference.

For 40 years, scientists have tried to engineer ‘smart’ microrobots. But building microscopic machines that sense, learn, and act based on their programming has eluded researchers. Today’s most sophisticated robots, such as Boston Dynamics’ Atlas, already embody these functions using computer chips, algorithms, and actuators. The obvious solution would seem to be simply shrinking down larger systems, and voila, mission accomplished.

It’s not so easy. The physical laws governing semiconductors and other aspects of robotics go sideways at the microscopic scale. “Fundamentally different approaches are required for truly microscopic robots,” wrote Marc Miskin and team at the University of Pennsylvania.

Their study, published last week in Science Robotics, packed the autonomous abilities of full-sized robots into microrobots 10,000 times smaller—each one roughly the size of a single-celled paramecium. Costing just a penny per unit to manufacture, the bots are loaded with sensors, processors, communications modules, and actuators to propel them.

In tests, the microrobots responded to a variety of instructions transmitted from a computer workstation helmed by a person. After receiving the code, however, the bots functioned autonomously with energy consumption near that of single cells.

While just prototypes, similar designs could one day roam the body to deposit medications, monitor the environment, or make nanomanufacturing more adjustable.

Spooky Physics

Intelligent living “microrobots” surround us. Despite their miniature size and lack of a central brain, single-celled creatures are quick to sense, learn, and adapt to shifting surroundings. If evolution can craft these resilient microorganisms, why can’t we?

So far, the smallest robots that can sense, be reprogrammed, and move on command are at least a millimeter across, roughly the size of a grain of sand. Further shrinking runs into roadblocks based on fundamental physical principles.

Just as quantum computing departs from everyday physics—with one computational quirk famously called “spooky action at a distance” by Albert Einstein—the rules that guide computer chip and robotic performance also begin to behave differently at the microscopic scale.

For example, forces on a robot’s surface become disproportionately large, so the devices stick to everything, including themselves. This means motors have to ramp up their power, which swiftly exhausts scarce energy resources. Drag also limits mobility, like trying to move with a parachute in strong winds. Processors suffer too—shrinking down computer chips causes noise to skyrocket—while sensors rapidly lose sensitivity.

You can get around all this by controlling a bot’s movement externally with light or magnets, which offloads multiple hardware components. But this sacrifices “programmability, sensing, and/or autonomy in the process,” wrote the team. Such microrobots struggle in changing environments and can only switch between a limited number of coded behaviors.

Alternatively, you can weave functions directly into the materials so microrobots change their properties as the environment shifts. This also switches their computation. Most examples are soft and biocompatible, but they’re harder to manufacture at scale and often require expensive hardware to control, crippling real-world practicality.

Honey, I Shrank the Chips

Many of the essential, miniaturized components needed for “smart” microbots already exist. These include tiny sensors, information processing systems, and actuators to convert electrical signals into motion. The trick is wiring them all together. For example, given a “limited power budget,” it’s difficult to accommodate both propulsion and computation, wrote the team.

The team optimized each component for efficiency, and the design relied on tradeoffs. Increasing the microbot’s memory took more energy, for example, but could support complex behaviors. In the end, they were limited to just a few hundred bits of onboard data. But this was sufficient to store the microbot’s current state, or the memory of its actions and past commands. The team wrote a library of simple instructions—like “sense the environment”—which could be sent to the bots.

The final design has mini solar panels to soak up beams of light for power, temperature sensors, and a processing unit. A communications module, also using light, receives new commands and translates sensor readings into specific movements.

The team made the bots in bulk using a standard chipmaking process.

In one test, they asked the microbots to measure nearby temperature, digitize the number, and transmit it to the base station for evaluation. Instead of infrared beams or other wireless technologies, the system relied on specific movements to encode temperature measurements in bits. To save energy, the entire process used only two programming commands, one for sensing and another to encode and transmit data.

The microrobots beat state-of-the-art digital thermometers, capturing temperature differences of 0.3 degrees Celsius in a tiny space. The technology could be used to probe temperature changes in microfluidic chambers or tiny blood vessels, wrote the team.
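The digitization step described above can be sketched in Python. Only the 0.3-degree resolution comes from the article; the measurable temperature range, bit width, and function names are hypothetical assumptions:

```python
import math

RESOLUTION_C = 0.3           # resolvable temperature step (from the article)
MIN_C, MAX_C = 20.0, 40.0    # hypothetical measurable range

# Bits needed to resolve 0.3 C steps over the assumed range.
BITS = math.ceil(math.log2((MAX_C - MIN_C) / RESOLUTION_C))

def digitize(temp_c):
    """Quantize a temperature reading to an integer code."""
    levels = 2 ** BITS
    code = round((temp_c - MIN_C) / (MAX_C - MIN_C) * (levels - 1))
    return max(0, min(levels - 1, code))

def to_bits(code):
    """The bit pattern a bot could signal through its movements."""
    return format(code, f"0{BITS}b")

code = digitize(25.3)
print(BITS, code, to_bits(code))
```

Under these assumptions a reading fits in 7 bits, a reminder of why a budget of a few hundred bits of onboard memory can still carry useful measurements.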

The bots can also move along heat gradients like living organisms. At rest, they stay in place and turn in circles. But when they detect a temperature change, they automatically move toward warmer areas until the temperature is steady. They then switch back into relaxed mode. Beaming a different set of commands asking them to move to colder regions reverses their trajectory. The microrobots faithfully adapt to the new instructions and settle in cooler waters.

The team also built in passcodes. These pulses of light activate the microrobots and allow the researchers to send commands to the entire fleet or only to select groups. They could potentially use this to program more sophisticated robotic swarm behaviors.

Although still prototypes, the microrobots have a reprogrammable digital brain that senses, remembers, and acts. This means the scientists can assign them a wide range of tasks on demand. Up next, they aim to add communication between the microrobots for coordination and upgrade their motors for faster, more agile movement.


Hugging Face Says AI Models With Reasoning Use 30x More Energy on Average

2025-12-15 23:00:00

Models that “think” through problems step by step before providing an answer use considerably more power than older models.

It’s not news to anyone that there are concerns about AI’s rising energy bill. But a new analysis shows the latest reasoning models are substantially more energy intensive than previous generations, raising the prospect that AI’s energy requirements and carbon footprint could grow faster than expected.

As AI tools become an ever more common fixture in our lives, concerns are growing about the amount of electricity required to run them. While worries first focused on the huge costs of training large models, today much of the sector’s energy demand is from responding to users’ queries.

And a new analysis from researchers at Hugging Face and Salesforce suggests that the latest generation of models, which “think” through problems step by step before providing an answer, use considerably more power than older models. They found that some models used 700 times more energy when their “reasoning” modes were activated.

“We should be smarter about the way that we use AI,” Hugging Face research scientist and project co-lead Sasha Luccioni told Bloomberg. “Choosing the right model for the right task is important.”

The new study is part of the AI Energy Score project, which aims to provide a standardized way to measure AI energy efficiency. Each model is subjected to 10 tasks using custom datasets and the latest generation of GPUs. The researchers then measure the number of watt-hours the models use to answer 1,000 queries.

The group assigns each model a star rating out of five, much like the energy efficiency ratings found on consumer goods in many countries. But the benchmark can only be applied to open or partially open models, so leading closed models from major AI labs can’t be tested.

In this latest update to the project’s leaderboard, the researchers studied reasoning models for the first time. They found these models use, on average, 30 times more energy than models without reasoning capabilities or with their reasoning modes turned off, but the worst offenders used hundreds of times more.

The researchers say that this is largely due to the way AI reasoning works. These models are fundamentally text generators, and each chunk of text they output requires energy to produce. Rather than just providing an answer, reasoning models essentially “think aloud,” generating text that is supposed to correspond to some kind of inner monologue as they work through a problem.

This can boost the number of words they generate by hundreds of times, leading to a commensurate increase in their energy use. But the researchers found it can be tricky to work out which models are the most prone to this problem.

Traditionally, the size of a model was the best predictor of how much energy it would use. But with reasoning models, how verbose their reasoning chains are is often a bigger predictor, and this typically comes down to subtle quirks of the model rather than its size. The researchers say this is a key reason why benchmarks like this are important.
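Because energy tracks the number of tokens generated, the benchmark’s watt-hours-per-1,000-queries metric can be sketched with a toy linear model. The per-token energy and token counts below are hypothetical placeholders, chosen only to reproduce the reported 30x average gap:

```python
# Hypothetical per-token energy cost; real values vary by model and GPU.
ENERGY_PER_TOKEN_WH = 0.002

def wh_per_1000_queries(tokens_per_answer):
    """Watt-hours to answer 1,000 queries, assuming energy scales
    linearly with the number of output tokens."""
    return 1000 * tokens_per_answer * ENERGY_PER_TOKEN_WH

concise = wh_per_1000_queries(200)     # direct answer, no reasoning trace
reasoning = wh_per_1000_queries(6000)  # long step-by-step "inner monologue"

print(f"Concise model:   {concise:,.0f} Wh per 1,000 queries")
print(f"Reasoning model: {reasoning:,.0f} Wh per 1,000 queries")
print(f"Ratio: {reasoning / concise:.0f}x")
```

In this toy model, model size drops out entirely and only verbosity matters, which mirrors the researchers’ observation that reasoning-chain length is now often a better predictor of energy use than parameter count.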

It’s not the first time researchers have attempted to assess the efficiency of reasoning models. A June study in Frontiers in Communication found that reasoning models can generate up to 50 times more CO₂ than models designed to provide a more concise response. The challenge, however, is that while reasoning models are less efficient, they are also much more powerful.

“Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences in Germany who led the study, said in a press release. “None of the models that kept emissions below 500 grams of CO₂ equivalent [total greenhouse gases released] achieved higher than 80 percent accuracy on answering the 1,000 questions correctly.”

So, while we may be getting a clearer picture of the energy impacts of the latest reasoning models, it may be hard to convince people not to use them.


This Week’s Awesome Tech Stories From Around the Web (Through December 13)

2025-12-13 23:00:00

Artificial Intelligence

OpenAI Releases GPT-5.2 After ‘Code Red’ Google Threat Alert | Benj Edwards | Ars Technica

“OpenAI says GPT-5.2 Thinking beats or ties ‘human professionals’ on 70.9 percent of tasks in the GDPval benchmark (compared to 53.3 percent for Gemini 3 Pro). The company also claims the model completes these tasks at more than 11 times the speed and less than 1 percent of the cost of human experts.”

Robotics

1X Struck a Deal to Send Its ‘Home’ Humanoids to Factories and Warehouses | Rebecca Szkutak | TechCrunch

“The company announced a strategic partnership to make thousands of its humanoid robots available for [its backer] EQT’s portfolio companies on Thursday. …This deal involves shipping up to 10,000 1X Neo humanoid robots between 2026 and 2030 to EQT’s more than 300 portfolio companies with a concentration on manufacturing, warehousing, logistics, and other industrial use cases.”

Computing

China Launches 34,175-Mile AI Network That Acts Like One Massive Supercomputer | Gayoung Lee | Gizmodo

“Last week, state-run Science and Technology Daily reported the launch of the Future Network Test Facility (FNTF), a giant distributed AI computing pool capable of connecting distant computing centers. The high-speed optical network spans across 40 cities in China, measuring at about 34,175 miles (55,000 kilometers)—enough to circle the equator 1.5 times, according to the South China Morning Post.”

Robotics

Aurora Will Have ‘Hundreds’ of Driverless Trucks on the Road by the End of 2026, CEO Says | Andrew J. Hawkins | The Verge

“Urmson says he expects ‘thousands’ of trucks on the road within the next two years. ‘It’ll be a little less visceral, because it’s not a consumer-facing product,’ he says. ‘But in terms of the expansion, I think we’ll start to see that happen pretty quickly.'”

Future

This Incredible Map Shows the World’s 2.75 Billion Buildings | Jesus Diaz | Fast Company

“From the latest skyscraper in a Chinese megalopolis to a six‑foot‑tall yurt in Inner Mongolia, researchers at the Technical University of Munich claim they have created a map of all buildings worldwide: 2.75 billion building models set in high‑resolution 3D with a level of precision never before recorded.”

Computing

AI Hackers Are Coming Dangerously Close to Beating Humans | Robert McMillan | The Wall Street Journal ($)

“Artemis found bugs at lightning speed and it was cheap: It cost just under $60 an hour to run. Ragan says that human pen testers typically charge between $2,000 and $2,500 a day. But Artemis wasn’t perfect. About 18% of its bug reports were false positives. It also completely missed an obvious bug that most of the human testers spotted in a webpage.”

Energy

Overview Energy Wants to Beam Energy From Space to Existing Solar Farms | Tim De Chant | TechCrunch

“The startup plans to use large solar arrays in geosynchronous orbit, about 22,000 miles above Earth where satellites match the planet’s rotation, to harvest sunlight. It will then use infrared lasers to transmit that power to utility-scale solar farms on Earth, allowing them to send power to the grid nearly round the clock.”

Tech

Why the AI Boom Is Unlike the Dot-Com Boom | David Streitfeld | The New York Times ($)

“Much of the rhetoric about a glorious world to come is the same [as the dot-com boom]. Fortunes are again being made, sometimes by the same tech people who made fortunes the first time around. Extravagant valuations are being given to companies that didn’t exist yesterday. For all the similarities, however, there are many differences that could lead to a distinctly different outcome.”

Computing

A First Look at Google’s Project Aura Glasses Built With Xreal | Victoria Song | The Verge

“Is it a headset? Smart glasses? Both? Those were the questions running through my head as I held Project Aura in my hands in a recent demo. It looked like a pair of chunky sunglasses, except for the cord dangling off the left side, leading down to a battery pack that also served as a trackpad. When I asked, Google’s reps told me they consider it a headset masquerading as glasses. They have a term for it, too: wired XR glasses.”

Space

Bezos and Musk Race to Bring Data Centers to Space | Micah Maidenberg and Becky Peterson | The Wall Street Journal ($)

“Bezos’ Blue Origin has had a team working for more than a year on technology needed for orbital AI data centers, a person familiar with the matter said. Musk’s SpaceX plans to use an upgraded version of its Starlink satellites to host AI computing payloads, pitching the technology as part of a share sale that could value the company at $800 billion, according to people involved in the discussions.”

Biotechnology

Scientists Thought Parkinson’s Was in Our Genes. It Might Be in the Water | David Ferry | Wired ($)

“Despite the avalanche of funding, the latest research suggests that only 10 to 15 percent of Parkinson’s cases can be fully explained by genetics. The other three-quarters are, functionally, a mystery. ‘More than two-thirds of people with PD don’t have any clear genetic link,’ says Briana De Miranda, a researcher at the University of Alabama at Birmingham. ‘So, we’re moving to a new question: What else could it be?'”


New Immune Treatment May Suppress HIV—No Daily Pills Required

2025-12-12 23:00:00

An immune tag-team promises to hold the virus in check for years—even without medication.

HIV was once a death sentence. Thanks to antiretroviral therapy, it’s now a chronic disease. But the daily treatment is for life. Without the drug, the virus rapidly rebounds.

Scientists have long hunted for a more permanent solution. One option they’ve explored is a stem cell transplant using donor cells from people who are naturally resistant to the virus. A handful of patients have been “cured” this way, in that they could go off antiretroviral therapy without a resurgence in the virus for years. But the therapy is difficult, costly, and hardly scalable.

Other methods are in the works. These include using the gene editor CRISPR to damage HIV’s genetic material in cells and mRNA vaccines that hunt down a range of mutated HIV viruses. While promising, they’re still early in development.

A small group of people may hold the key to a simpler, long-lasting treatment. In experimental trials of a therapy called broadly neutralizing anti-HIV antibodies, or bNAbs, some people with HIV were able to contain the virus for months to years even after they stopped taking drugs. But not everyone did.

Two studies this month reveal why: Combining a special type of immune T cell with immunotherapy “supercharges” the body’s ability to hunt down and destroy cells harboring HIV. These cellular reservoirs normally escape the immune system.

One trial led by the University of California, San Francisco (UCSF) combined T cell activation with bNAb treatment. In 7 of 10 participants, viral levels remained low for months after they stopped taking antiretroviral drugs.

Another study analyzed blood samples from 12 participants who had received bNAbs, comparing those who were functionally cured to those who still relied on antiretroviral therapy. The researchers zeroed in on an immune reaction that bolsters long-term remission, with the same T cells at its center.

“I do believe we are finally making real progress towards developing a therapy that may allow people to live a healthy life without the need of life-long medications,” said study author Steven Deeks in a press release.

A Long and Winding Road

HIV is a frustrating foe. The virus rapidly mutates, making it difficult to target with a vaccine. It also forms silent reservoirs inside cells. This means that while viral counts circulating in the blood may seem low, the virus rapidly rebounds if a patient ends treatment. Finally, HIV infects and kneecaps immune cells, especially those that hunt it down.

According to the World Health Organization, roughly 41 million people live with the virus globally, and over a million acquire the infection each year. Preventative measures such as a daily PrEP pill, or pre-exposure prophylaxis, guard people who don’t have the virus but are at high risk of infection. More recently, a newer, injectable PrEP formulation fully protected HIV-negative women from acquiring the virus in low- to middle-income countries.

Once infected, however, options are few. Antiretroviral therapy is the standard of care. But “lifelong ART is accompanied by numerous challenges, such as social stigma and fatigue associated with the need to take pills daily,” wrote Jonathan Li at the Brigham and Women’s Hospital, who was not involved in either study.

Curing HIV once seemed impossible. But in 2009, Timothy Ray Brown, also known as the Berlin patient, galvanized the field. He received a full blood-stem-cell transplant for leukemia, but the treatment also fought off his HIV infection, keeping the virus undetectable without drugs. Other successes soon followed, mostly using donor cells from people genetically immune to the virus. Earlier this month, researchers said a man receiving a non-HIV-resistant stem cell transplant had remained virus-free for over six years after stopping antiretroviral therapy.

While these cases prove that HIV can be controlled—or even eradicated—by the body, stem cell transplants are hardly scalable. Instead, the new studies turned to an emerging immunotherapy employing broadly neutralizing anti-HIV antibodies (bNAbs).

From Theory to Trial

Compared to normal antibodies, bNAbs are extremely rare and powerful. They can neutralize a wide range of HIV strains. Clinical trials using bNAbs in people with HIV have found that some groups maintained low viral levels long after the antibodies left their system.

To understand why, one study examined blood samples from 12 people across four clinical trials. Each participant had received bNAbs treatment and subsequently ended antiretroviral therapy. Comparing those who controlled their HIV infection to those who didn’t, researchers found that a specific type of T cell was a major contributor to long-term remission.

Remarkably, even before receiving the antibody therapy, people with less HIV in their systems had higher levels of these T cells circulating in their bodies. Although the virus attacks immune cells, this population was especially resilient to HIV and almost resembled stem cells. The cells rapidly expanded and flooded the body with healthy HIV-hunting T cells. Adding bNAbs boosted both their numbers and their efficiency at destroying the cells that give HIV safe harbor. Without a host, the virus can’t replicate or spread and withers away.

“Control [of viral load] wasn’t uniquely linked to the development of new types of [immune] responses; it was the quality of existing CD8+ T cell responses that appeared to make the difference,” said study author David Collins at Mass General Brigham in a press release.

If these T cells are key to long-term viral control, what if we artificially activated them?

A small clinical trial at UCSF tested the theory in 10 people with HIV. The participants first received a previously validated vaccine that boosts HIV-hunting T cell activity. This was followed by a drug that activates overall immune responses and then two long-lasting bNAb treatments. The patients were then taken off antiretroviral therapy.

After the one-time treatment, seven participants maintained low levels of the virus over the following months. One had undetectable circulating virus for more than a year and a half. Echoing Collins’s results, bloodwork showed the strongest marker of viral control was a high level of those stem cell-like T cells. People whose levels of these T cells rapidly expanded, then transformed into “killer” versions targeting HIV-infected cells, better controlled the infection.

“It’s like…[the cells] were hanging out waiting for their target, kind of like a cat getting ready to pounce on a mouse,” said study author Rachel Rutishauser in a press release.

Findings from both studies converge on a similar message: Long-term HIV management without antiretroviral therapy depends, at least in part, on a synergy between T cells and immunotherapy. Methods amping up stem cell-like T cells before administering bNAbs could give the immune system a head start in the HIV battle and achieve longer-lasting effects.

But these T cells are likely only part of the picture. Other immune molecules, such as a patient’s naturally occurring antibodies against the virus, may also play a role. Going forward, the combination treatment will need to be simplified and tested on a larger population. For now, antiretroviral therapy remains the best treatment option.

“This is not the end game,” said study author Michael Peluso at UCSF. “But it proves we can push progress on a challenge we often frame as unsolvable.”

The post New Immune Treatment May Suppress HIV—No Daily Pills Required appeared first on SingularityHub.