Singularity Hub offers daily news coverage, feature articles, analysis, and insights on key breakthroughs and future trends in science and technology.

In a First, Researchers Use Stem Cells and Surgery to Treat Spina Bifida in the Womb

2026-03-11 05:09:00

The study focused on safety, but the results offer hope the approach could give kids a chance to walk.

Michelle Johnson was 20 weeks into her pregnancy when she learned her unborn son had spina bifida. Because his spine hadn’t fully sealed, the spinal cord was left protruding from a gaping hole. Without surgery, he would face a lifetime of disabilities.

So she jumped at the chance to enroll in a small experimental trial for the condition at the University of California, Davis. The treatment combines fetal surgery, an existing approach, with a dose of stem cells to spur healing.

Now four years old, Johnson’s son Tobi can walk and shows no symptoms such as loss of bladder or bowel control. “Tobi’s physical and mental abilities are nothing short of a miracle,” she said in a press release.

Tobi is one of six children in the CuRe trial, the first study to test if using stem cells to repair tissue in fetuses with spina bifida is safe. Delivered by a small patch sewn onto the damaged site, the stem cells protected the spinal cord from inflammation and helped the wound heal. None of the babies or mothers suffered short-term side effects, like unwanted tissue growth or cancer.

With so few participants, it’s too early to tell how the approach will pan out as the children grow. But thanks to the promising safety profile, the FDA has approved the enrollment of more pregnant women with the same diagnosis.

“This is a major step toward a new kind of fetal therapy, one that doesn’t just repair but potentially helps heal and protect the developing spinal cord,” study author Aijun Wang said in the press release.

CuRe joins other attempts to tackle diseases with stem cells in the womb. Although a very young field, the approach could slow, halt, or cure a number of diseases before babies are born.

A Head Start

Spina bifida is a condition where the spine or spinal cord doesn’t properly seal during development. One in 2,875 newborns in the US is affected every year. In its most severe form, cerebrospinal fluid—a liquid that surrounds the brain and washes out toxins—builds up, causing progressive damage to the fetal spinal cord, lifelong movement problems, and even paralysis.

The condition was first treated after birth, when surgeons would close the defect. But by then, the damage was done. Surgery before birth could stave off symptoms, an idea validated in a 2011 trial. Yet over half of treated babies still struggled to walk without help, likely because injured neurons in the fetuses’ brains and spinal cords didn’t have the chance to heal.

Stem cells spur regrowth by releasing protective nutrients, and the fetal environment is uniquely suited to the cells. The team wondered if adding them could improve prenatal surgery.

They began testing the idea around 2012 using induced pluripotent stem cells. This is a type of stem cell made from skin or other mature cells using a chemical cocktail. Taking this approach could provide a nearly unlimited supply of stem cells. But it didn’t work.

After years of trial and error, the team found success with stem cells derived from placentas. The cells protected neurons from injury and encouraged their growth in lab dishes. They also healed defects in a lamb model of spina bifida. Newborns receiving the cells along with prenatal surgery could stand up and walk; those who only had surgery couldn’t.

Stem cell therapy appeared promising. But for unborn babies, it could carry risk. Since the cells come from donors, they might spark immune reactions. They might also trigger abnormal tissue growth, or even cancer. Because stem cell treatments are rarely used in the womb, little is known about their effects on pregnancy or the overall health of mother and baby.

Landmark Trial

The first stage of the CuRe trial focused on these safety concerns.

The team seeded a small patch with stem cells derived from donated placental tissue. To help the cells integrate, the researchers designed the patch to mimic conditions normally surrounding cells.

Surgeons made a small opening in the uterus at 24 to 25 weeks into pregnancy and gave the fetus a small dose of painkillers and muscle relaxers. They then placed the stem cell patch onto the exposed spinal cord and sutured the gap closed.

The trial closely monitored six babies, including Tobi, for possible side effects. After delivery by C-section, none had complications, such as leaking cerebrospinal fluid, infection, or signs of cancer. In all cases, the treatment prevented parts of the brain from slipping into the neck, and none required a shunt—a small tube used to drain excess fluid from the brain—an encouraging sign of success.

The team turned to stem cells, they wrote, because the cells can lessen brain inflammation and brain cell death. At the same time, they pump out growth proteins that “support neural tissue preservation and spinal cord integrity.”

The researchers designed the study to evaluate safety, not to determine whether stem cells enhance the surgery’s results. But Tobi’s remarkable recovery is a hopeful sign that the cells do make a difference. Because spina bifida is structural, treating it before permanent damage occurs could make the therapy a “one-and-done” fix.

The study joins the growing prenatal use of stem cells in conditions such as thalassemia, a blood disorder, and osteogenesis imperfecta, also known as brittle bone disease. Early clinical trials have shown promise, but regulators haven’t yet approved any treatments.

“Putting stem cells into a growing fetus was a total unknown. We are excited to report great safety,” said Diana Farmer, a study author and lead investigator for the CuRe trial. “It paves the way for new treatment options for children with birth defects. The future is exciting for cell and gene therapy before birth.”

The team is actively recruiting more pregnant women for the trial’s second phase. They’ll track the children’s growth and health up to age six to assess brain and cognitive development, motor skills, and other growth milestones.

If the treatment proves successful, longer monitoring may be needed. Spina bifida can increase the risk of kidney disease and certain cancers later in life, and it’s unclear if the stem cells could cause problems months or years down the line.

Uncertainties aside, Johnson is happy to be participating in the trial. “We are forever grateful for the many health professionals who supported Tobi’s journey and continue to watch him conquer the world,” she said.

The post In a First, Researchers Use Stem Cells and Surgery to Treat Spina Bifida in the Womb appeared first on SingularityHub.

Hackers Are Automating Cyberattacks With AI. Defenders Are Using It to Fight Back.

2026-03-10 07:07:57

Which side has the advantage will depend less on raw model capabilities and more on who adapts fastest.

Cybersecurity is an endless game of cat and mouse as attackers and defenders refine their tools. Generative AI systems are now joining the fray on both sides of the battlefield.

Though cybersecurity experts and model developers have been warning about potential AI-powered cyberattacks for years, there has been limited evidence hackers were widely exploiting the technology. But that is starting to change.

Growing evidence shows hackers now routinely use the technology to turbocharge their search for vulnerabilities, develop new code exploits, and scale phishing campaigns. At the same time, AI firms are building defensive security measures directly into foundation models to keep pace with attackers.

As cybersecurity becomes more automated, corporations will be forced to rapidly adapt as they grapple with the security of their products and systems in the age of AI.

A recent report by Amazon security researchers highlighted the growing sophistication of hackers’ AI use. The researchers wrote that Russian-speaking attackers used multiple commercially available generative AI services to plan, manage, and conduct cyberattacks on organizations with misconfigured firewalls in over 55 countries this January and February.

The attack targeted more than 600 systems protected by FortiGate firewalls. It worked by scanning for internet-exposed login pages—essentially front doors into private company networks—and attempting to access them with commonly reused security credentials. Once inside, the attackers extracted credential databases and targeted backup infrastructure, activity that suggests they may have been planning a ransomware attack.

The researchers report the attack was largely unsuccessful but nonetheless highlighted how much AI can lower the barrier to large-scale attacks. Despite being relative amateurs, the group “achieved an operational scale that would have previously required a significantly larger and more skilled team,” they wrote.

In the most vivid demonstration of AI’s hacking potential, a research prototype known as PromptLock, created by a New York University researcher, used large language models to mount an entirely autonomous ransomware attack.

The malware used AI to generate custom code in real time, scour the target system for sensitive data, and write personalized ransom notes based on what it found. While the tool was only a proof of concept, it highlighted the mounting threat of fully automated malware attacks.

A recent report from security firm CrowdStrike found that AI is also making attackers significantly more nimble. The firm discovered that average breakout times—the window between when an attacker first breaches a network and when they move into other systems—fell to just 29 minutes in 2025, 65 percent faster than in 2024.

In November, Anthropic also claimed it had detected a Chinese state-linked group using the company’s Claude Code assistant to conduct a large-scale espionage campaign. The group used jailbreaks—prompts designed to bypass a model’s safety settings—to trick Claude into carrying out the attacks. They also broke the campaign into smaller sub-tasks that looked more innocent.

The company claimed the hackers used the tool to automate between 80 and 90 percent of the attack. “The sheer amount of work performed by the AI would have taken vast amounts of time for a human team,” the company’s researchers wrote in a blog post. “At the peak of its attack, the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match.”

But while AI is reshaping the offensive cybersecurity landscape, defenders are deploying the tools too. In February, Anthropic released Claude Code Security, which can scan systems for vulnerabilities and propose fixes automatically. The tool can’t carry out real-time security tasks like detecting and stopping live intrusions, but the news nonetheless sent stocks in traditional cybersecurity firms plummeting, according to Reuters.

Cybersecurity vendors are also embedding AI into their defensive platforms. CrowdStrike recently launched two new AI agents, one designed to analyze malware and suggest how to defend against it and another that actively combs through systems for emerging threats. Similarly, Darktrace has introduced new AI tools designed to automate the detection of suspicious network activity.

But perhaps one of the most promising applications for the technology is using it like a hacker to proactively probe defenses. Aikido Security recently released a new tool that uses agents to simulate cyberattacks on each new piece of software a company creates—a practice known as penetration testing—and automatically identify and fix vulnerabilities.

This could be a powerful tool for defenders, Andreessen Horowitz partner Malika Aubakirova wrote in a blog post. Traditional penetration testing is a labor-intensive process relying on highly skilled experts in short supply. Both factors seriously constrain where and how such testing can be applied.

Whether AI ends up advantaging attackers or defenders will likely depend less on raw model capabilities and more on who adapts fastest. So, it seems the unending game of cat and mouse that’s characterized cybersecurity for decades will continue much the same.


This Week’s Awesome Tech Stories From Around the Web (Through March 7)

2026-03-07 23:00:00

Artificial Intelligence

Watershed Moment for AI–Human Collaboration in Math | Benjamin Skuse | IEEE Spectrum

“The 8-dimensional sphere-packing proof formalization alone, announced on February 23, represents a watershed moment for autoformalization and AI–human collaboration. But today, Math, Inc. revealed an even more impressive accomplishment: Gauss has autoformalized Viazovska’s 24-dimensional sphere-packing proof—all 200,000+ lines of code of it—in just two weeks.”

Biotechnology

The Millisecond That Could Change Cancer Treatment | Tom Clynes | IEEE Spectrum

“Here at CERN (the European Organization for Nuclear Research) and other particle-physics labs, scientists and engineers are applying the tools of fundamental physics to develop a technique called FLASH radiotherapy that offers a radical and counterintuitive vision for treating the disease.”

Computing

Google Spinoff Beams Blazing-Fast 25-Gbps Internet Around Cities Using Light | Abhimanyu Ghoshal | New Atlas

“The system shapes and steers beams of light between devices that are in line of sight of each other, and up to 6.2 miles (10 km) apart. Roughly the size of a shoebox and weighing 17.6 lb (8 kg), the Beam is meant to be mounted high up on poles and atop tall buildings for use in densely populated urban areas. Taara says it’s capable of fiber-like bidirectional data transfer speeds of up to 25 Gbps, with ultra-low latency.”

Computing

Nvidia’s Spending $4 Billion on Photonics to Stay Ahead of the Curve in AI | Stevie Bonifield | The Verge

“Nvidia isn’t the only organization paying attention to photonics, either. Last month, DARPA put out a call for research proposals for improving photonic computing, specifically related to AI applications. Nvidia’s rival AMD also acquired silicon photonics startup Enosemi last year, which it said would ‘accelerate’ AMD’s optics innovation for its AI systems.”

Computing

Inside the Company Selling Quantum Entanglement | Karmela Padavic-Callaghan | New Scientist ($)

“Mehdi Namazi wants to sell you quantum entanglement. He and his colleagues at Qunnect have spent nearly a decade building devices that make sharing quantum-entangled particles of light, or photons, practical enough to be used for unhackable communication.”

Artificial Intelligence

Can AI Replace Humans for Market Research? | Belle Lin | The Wall Street Journal ($)

“The AI agents are essentially digital clones of real individuals, who are interviewed to gather their preferences, personality and other traits. …Previously, businesses contracted with consulting or market research firms to learn about their customers—a costly process that could take months. Now, they can query Simile’s online bank of agents, access that can cost between $150,000 to millions for each customer annually, Park said.”

Tech

Jack Dorsey Blamed AI for Block’s Massive Layoffs. Skeptics Aren’t Buying It. | Angel Au-Yeung | The Wall Street Journal ($)

“‘The vast majority of these cuts were probably not due to AI,’ said Dan Dolev of Mizuho Americas, noting the ‘significant amount of bloating’ in recent years. ‘This isn’t an AI story. It’s a workforce correction wearing an AI costume,’ wrote Jason Karsh, a former Block employee, on X.”

Future

AI Frees the Corporate Phalanx | Andy Kessler | The Wall Street Journal ($)

“‘Is artificial intelligence coming for your job? More likely your title. …As old jobs, titles and charts are destroyed, people are still important to help capture the quickly changing landscape and constant decisions—each person makes 35,000 decisions a day, one study claims. Watch for the creation of new jobs and job descriptions that tap the coming flexibility, decoupling and flattening—most likely at brand-new, quick-on-their-feet companies.'”

Space

NASA Shakes Up Its Artemis Program to Speed Up Lunar Return | Eric Berger | Ars Technica

“At the core of Isaacman’s concerns is the low flight rate of the SLS rocket and Artemis missions. During past exploration missions, from Mercury through Gemini, Apollo, and the Space Shuttle program, NASA has launched humans on average about once every three months. It has been nearly 3.5 years since Artemis I launched.”


Autonomous AI Agents Have an Ethics Problem

2026-03-07 06:03:16

AI-powered digital assistants can do many complex tasks on their own. But who takes responsibility when they cause harm?

Scott Shambaugh, a volunteer maintainer for a programming code library called Matplotlib, recently described a surreal encounter with an autonomous AI agent—a digital assistant created with a platform called OpenClaw. After he rejected a code contribution submitted by the agent, it researched and published a personalized “hit piece” against Shambaugh on its blog. The post portrayed an otherwise routine technical review as prejudiced and attempted to shame Shambaugh publicly into allowing the submission. (The human responsible for the agent later contacted Shambaugh anonymously, telling him that the bot had acted on its own with little oversight.) The account of this incident spread quickly through the software developer ecosystem and has been amplified by independent observers and media coverage.

Treat the Matplotlib event as a one-off if you like. The deeper point, however, is hard to miss and should not be ignored: AI agents are becoming public actors with reach into the real world, and with real-world consequences. In the past, they could only do mundane tasks such as answering customer service questions or processing data. Now, they are capable of posting and publishing content—and persuading and pressuring humans—all at machine speed. They can make phone calls, file work orders, create cryptocurrency wallets, and operate across different applications, with enormous reach and at tremendous scale—the kind of work that used to require a human with fingers typing at a keyboard.

Reporting around OpenClaw and the chatroom Moltbook (which is for AI agents only) is capturing the new reality. OpenClaw enables AI agents to have persistent memory, gives them broad permissions, and allows large-scale deployment by users who often do not understand the security and governance implications.

We are the humans who are responsible for the law, ethics, and institutional design, and we are behind the curve. We need new language and governance to deal with this new reality, and principles from the field of medical ethics can provide a framework for doing so.

When an agent does something that is harmful or coercive in public, our reflex seems to be to ask the wrong questions: Is the AI a person? Should it have rights? The AI personhood debate is no longer fringe. Legal scholars and ethicists are mapping out arguments and precedents. States are writing legislation to prohibit AI personhood. Some arguments maintain that if an entity behaves like something within our moral circle, we may owe it moral consideration. Others argue that assigning rights or personhood to machines confuses moral standing with engineered performance and diffuses responsibility away from humans.


As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study synthetic personas animating AI agents and their use as stand-ins for human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era—what I will call responsibility laundering. This allows us to say, “It wasn’t me. The agent/bot/system did it.”

Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties, and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

There is a key concept here that we can use from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no options fully satisfy competing obligations. This residue accumulates over time, causing a “crescendo effect” that occurs even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterwards.

This is the moral remainder problem for generative and agentic AI. A modern AI agent can generate reasons for an action; it can simulate regret and plead not to be turned off. But it cannot truly bear sanction, repair the damage, apologize, ask forgiveness, or navigate the aftermath through which moral responsibility is created and enforced. To treat it as a moral person confuses persuasive performance with accountable standing. It also tempts institutions and people into delegating their own answerability to a bot.

What can we, as humans, do instead?

We need a vocabulary that is built for agents that are public actors, one that allows bounded autonomy without granting personhood. Let’s call it authorized agency. Authorized agency starts with an authority envelope: a bounded scope of what an agent is permitted to do, to whom, where, with what data, and under what constraints. To say “the agent can use email” is not sufficient. However, an acceptable scope would be to say that the agent can send only certain categories of messages to particular recipients for a specific set of purposes, and that it must stop what it’s doing or escalate to its owner under a particular set of conditions.

Next comes the human-of-record, the owner, a publicly named person who authorized that envelope and remains answerable when the agent acts, even if it becomes capable of acting outside the envelope. An actual human being whose authority is real—not “the system” or “the team.”

What follows is interrupt authority: the absolute right of the human owner to pause or disable an agent without moral bargaining or institutional penalty. This is grounded in formal research on AI safety showing that agents pursuing objectives can have an incentive to resist being shut down. An agent programmed to maximize its utility cannot achieve its goal if it is shut off. In the public sphere, interrupt authority is the difference between a delegated tool and a coercive actor.


Finally, we need a traceable path from the agent’s action back to the person who authorized it, called an answerability chain. If an agent publishes, messages, or pressures someone in public, we must be able to know: Who authorized this scope? Who could have prevented it? And who must be responsible for the action afterward? In this framework, the answer to these questions is the person who carries the moral remainder. Work in AI ethics has warned about responsibility gaps where the system’s actions outpace our ability to assign accountability.
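Purely as an illustration, the four elements of authorized agency could be sketched as a small data model. Every class and field name below is hypothetical, invented for this sketch; it is not a proposed standard or any real framework's API:

```python
# Illustrative sketch of "authorized agency"; all names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AuthorityEnvelope:
    """Bounded scope: what the agent may do, to whom, with what data."""
    allowed_actions: set[str]        # e.g. {"send_status_email"}
    allowed_recipients: set[str]     # who the agent may contact
    allowed_data: set[str]           # data categories it may touch
    escalation_conditions: set[str]  # when it must stop and ask its owner


@dataclass
class AuthorizedAgent:
    envelope: AuthorityEnvelope
    human_of_record: str             # publicly named, answerable person
    interrupted: bool = False        # interrupt authority: owner can set this
    action_log: list[str] = field(default_factory=list)  # answerability chain

    def act(self, action: str, recipient: str) -> bool:
        """Perform an action only if it falls inside the envelope."""
        if self.interrupted:
            return False  # owner has exercised interrupt authority
        if action not in self.envelope.allowed_actions:
            self.action_log.append(f"ESCALATE: {action} outside envelope")
            return False
        if recipient not in self.envelope.allowed_recipients:
            self.action_log.append(f"ESCALATE: {recipient} not permitted")
            return False
        self.action_log.append(
            f"{action} -> {recipient} (authorized by {self.human_of_record})"
        )
        return True
```

The point of the sketch is structural: the log ties every action back to a named human, and nothing the agent does can widen its own envelope or clear its own interrupt flag.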

Some legal scholarship has started exploring how to build agents that are constrained by governance and law without needing to pretend the agent itself is a legal subject, in the human sense. This is promising because it treats assigning personhood as the wrong idea and accountability as the correct one.

The Matplotlib story, whether the first documented case of an AI agent attempting to harm someone in the real world or the first to capture public attention, is a warning. Agents will not only automate tasks. They will generate narratives, apply pressure, and shape people’s lives and reputations. They will act in public at machine speed with unclear ownership.

If we respond by debating whether agents deserve rights, we will miss the emergency entirely. As they continue to increase their reach in the real world, the urgent task is to ensure that responsibility also remains within reach. Don’t ask whether an agent is a person. Ask who authorized it, what it was allowed to do, who can stop it, and most importantly, who will answer when it causes harm.

This article was originally published on Undark. Read the original article.


Thousands of Everyday Drone Pilots Are Making a Google Street View From Above

2026-03-06 03:13:38

Spexi’s crowdsourced drone fleet has mapped over 5 million acres in 200 cities across Canada and the US.

Gaspard-Félix Tournachon, popularly known as “Nadar,” took the first known aerial photographs using a camera attached to a hot-air balloon just outside Paris in 1858. Ever since, technologists have been developing increasingly sophisticated ways to capture high-altitude images of Earth.

In the First World War, military intelligence pushed the technology from artistic novelty to real-world use. Today, everything from urban planning and insurance underwriting to disaster response relies on detailed, high-resolution, and often 3D images of our planet. For emerging fields like autonomous robotics and augmented reality, making a digital copy of the physical world is one of the century’s most consequential infrastructure projects.

While more traditional aerial imagery relies on airplanes, satellites, and the occasional pigeon, today’s industry is also turning to low-cost drones.

Bill Lakeland, CEO and cofounder of Canadian drone imaging company Spexi, says improvement in consumer drones over the last decade is reshaping aerial imagery. In an interview with Joseph Raczynski, Lakeland details how low-cost drones are disrupting older methods involving airplanes and satellites.

“We’re getting better data out of micro-drones than what we get out of a $2 million mapping camera. The time has arrived,” he says.

According to Spexi, because off-the-shelf drones fly low, they can produce imagery at a resolution 30 times higher than satellites. Drones are also more cost-efficient and less time-consuming than airplanes. This means they’re quickly achieving workhorse status.

What’s notable about Spexi is that instead of operating their own fleet of vehicles, they work with a decentralized network of hobbyists. Anyone with a drone can download the company’s software to autonomously fly a pre-determined flight path and capture the necessary images on demand. According to Lakeland, each flight covers roughly 25 acres in about seven minutes. A pilot can expect to earn around $10 per flight, with some earning hundreds of dollars a day. To date, Spexi’s network of over 8,000 drone pilots has mapped more than 5 million acres across more than 200 cities in Canada and the United States.
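A rough back-of-envelope follows from the figures above. The per-flight numbers are averages reported in the article, so the totals below are estimates, not Spexi's accounting:

```python
# Back-of-envelope estimate from the article's reported averages.
ACRES_PER_FLIGHT = 25           # coverage per autonomous flight
MINUTES_PER_FLIGHT = 7          # approximate flight duration
PAY_PER_FLIGHT_USD = 10         # approximate pilot earnings per flight
TOTAL_ACRES_MAPPED = 5_000_000  # network total to date

flights = TOTAL_ACRES_MAPPED / ACRES_PER_FLIGHT   # ~200,000 flights
payout_usd = flights * PAY_PER_FLIGHT_USD         # ~$2 million to pilots
flight_hours = flights * MINUTES_PER_FLIGHT / 60  # ~23,000 hours aloft

print(f"{flights:,.0f} flights, ~${payout_usd:,.0f} to pilots, "
      f"~{flight_hours:,.0f} flight hours")
```

By this estimate, mapping 5 million acres implies on the order of 200,000 flights and roughly $2 million paid to pilots in total, spread across thousands of hobbyists.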

With this data, Spexi aims to build a sort of Google Street View from the sky. For comparison, Google reportedly invested over a billion dollars building Street View with car-mounted cameras. Though it involved a different type of information, Google’s 2013 acquisition of Waze gave the company access to crowdsourced map data Waze had collected for free from 40 million users. While Spexi’s approach isn’t free, it appears to skip the relatively expensive in-house phase for something closer to Waze’s model.

The impact of having up-to-date maps of Earth from above is sure to be significant.

In a Bloomberg profile, Lauren Rosenthal writes that forestry professionals are already leveraging drone data to help prevent wildfires. They’re using images from Spexi to train AI models that can alert forest managers to areas of high fire risk. Similarly, insurance companies are turning to Spexi for risk assessment, underwriting, and claims processing.

In augmented reality and robotics, drone data can also produce 3D maps for visual positioning systems. Author and Wired cofounder Kevin Kelly calls this digital twinning project the “mirrorworld.” Some observers suggest it’s one of the most significant technology projects of the age. Using this type of 3D training data, companies are also building generative AI world models, which help AI understand the physical world.

The rise of drone imaging doesn’t yet signal the end of other approaches, and it’s not clear how much of the industry will be serviced by drones versus other means. The race to corner the satellite imaging market is also heating up. In one sense, Tournachon’s 19th-century art project was no different from today’s image gathering: attach a camera to a flying object and take pictures of Earth. The main distinction, however, is that these images have evolved from mere curiosity to a digital asset powering the modern world.


These Supercharged Immune Cells Completely Eliminated Solid Tumors in Mice

2026-03-04 08:35:15

The technology, which uses genetically engineered T cells, could target nearly two dozen different solid cancers with one treatment.

Few cancer treatments are as ferocious as CAR T cell therapy.

Often derived from a patient’s own immune cells, CAR T cells are genetically modified to hunt down and destroy cancer cells. The FDA has approved treatments for deadly blood cancers, and treatments tackling autoimmune diseases and preventing tissue scarring in the heart and kidneys have shown promise.

Yet CAR T has struggled against solid tumors. Over 85 percent of cancers fall into this category. Solid tumors have an arsenal of sneaky tactics to evade or deactivate CAR T cells, eventually undermining the treatment.

This month, a Columbia University team broke through one of the barriers with an upgraded design. They engineered a new, ultra-sensitive protein “hook” that seeks out CD70, a protein that dots the surfaces of multiple types of solid cancer cells—but at vastly different levels.

“Some molecules have been identified that are found in 25%, 50%, or 75% of tumor cells,” said study author Michel Sadelain in a press release. “Though a therapy directed at those targets might be successful…you can’t cure somebody if you just eliminate a small fraction or even 90% of their tumor.”

In tests, the supercharged cancer-killers, dubbed HIT cells, detected and wiped out cancer cells with extremely low levels of CD70—so low that the protein was undetectable using traditional methods. In kidney, ovarian, and pancreatic cancer grown from patients’ cells in petri dishes and in mouse models, HIT completely eliminated all signs of these tumors.

Like CAR T, the new approach is plug-and-play. The protein hook can be redesigned to target other faint cancer protein markers that have previously escaped detection.

“We hope our CD70-directed HIT cells help us find a way to eradicate the entire tumor,” said study author Sophie Hanina.

A Mixed Bag

Our immune system naturally fights off cancer. T cells, for example, roam the body looking for threats. When they identify cancerous cells, they signal other immune cells to launch a coordinated effort to wipe out the cancer before it expands.

The identification process relies on antigens, proteins that dot the surfaces of cancer cells like beacons. But tumors are highly versatile and rapidly evolve their antigen signature, essentially cloaking themselves from immune attacks.

CAR T cells override the defense. Here, T cells are extracted from a patient’s body and genetically engineered with custom-designed protein hooks to grab onto cancer antigens.

Multiple blood cancers have a heavy coat of a single shared protein on their surfaces, making them a perfect target for CAR T therapy. Solid tumors, however, are different. Tumors are dotted with a wide range of antigens, many of which are present in normal tissues. This increases the chances CAR T might attack healthy cells and reduces its effectiveness.

Even for the same antigen, some cells in solid tumors express high levels, others very low. The latter escape CAR T detection and linger as a reservoir that can regrow the tumor.

For a persistent solid cancer cure, “you have to get down to the very last cell,” said Sadelain.

In Plain Sight

An ideal target antigen needs to check two boxes: It’s expressed across multiple tumor cell types, and at the same time, it’s absent in normal cells.

The antigen in the new study, CD70, fits the bill. It occurs in a variety of solid cancers, making it a valuable target beacon. But previous attempts targeting CD70 struggled to control cancer in clinical trials. This is partly because cancer cells within a single tumor have different levels of the antigen, and some seemingly lack the marker altogether, allowing them to escape detection.

But are these cancer cells truly devoid of the antigen, or is it just that scientists, and the CAR T cells they’ve engineered, can’t find them using current methods?

Researchers can see most proteins under the microscope, but only if they’re present at high enough levels. Rather than relying on conventional imaging, the team measured CD70 gene expression in lab models grown from donated cancer patient samples, which mimic the complexity of solid tumors.

CD70 antigens dotted each cell in multiple tumors, although at different levels of intensity. “We found that apparent CD70-negative tumor cells do in fact express low levels of CD70, though not at a level high enough to be eliminated by conventional CAR T cells,” wrote the team.

Taking aim at cancer cells with faint CD70 levels, the team tapped into their previous work genetically engineering cells to detect low-level antigens. The hooks on these HIT cells mimic those from a population of highly sensitive T cells naturally found in our bodies.

The team redesigned HIT cells to specifically target CD70. Because normal cells don’t use this molecular pathway, HIT cells largely ignored them, lowering the risk of collateral damage.

“HIT cells are the next generation of CAR T cells. They can be programmed like a CAR T cell, but they have the sensitivity of a natural T cell and can detect cancer cells that have only a vanishingly small number of target molecules,” said Hanina.

Sharp Shooter

Ovarian and pancreatic cancer cells have mixed levels of CD70. Several tests in highly aggressive models for these cancers found that HIT cells completely eradicated the tumors in petri dishes. The treatment also cleared cancer cells in different types of solid tumors in mice, even ones with low CD70 levels. Conventional CAR T only eliminated a fraction of the cancer.

A recent CAR T clinical trial targeting CD70 found CAR T cells could infiltrate and linger near kidney tumors, but their effectiveness depended on detecting CD70, which varied with the number of beacons on each cell. Because HIT cells are more sensitive, they could hunt down and kneecap more cancer cells.

But HIT cells may have side effects. Although CD70 isn’t expressed in most healthy tissues, its level skyrockets in immune cells during infections, which could trigger friendly fire. The team plans to investigate the treatment’s safety and efficacy in patients with ovarian cancer at the Columbia University Irving Medical Center.

If successful, the technology could benefit roughly 20 other types of solid cancer that express CD70, including deadly brain cancers such as glioblastoma.

“Curing solid tumors is not easy, but this work solves one piece of the puzzle,” said Sadelain.

The post These Supercharged Immune Cells Completely Eliminated Solid Tumors in Mice appeared first on SingularityHub.