
Mini Human Organs Just Got Much Closer to Matching the Real Thing

2025-07-31 22:00:00

New models could shed light on early human development and be used for drug discovery and tissue transplants.

Miniature organs have a new lifeline. Mimicking the way early human embryos grow blood vessels, scientists nudged multiple types of mini organs to sprout their own vascular networks.

Also called organoids, mini organs capture the intricacies of their natural organ counterparts, including how they grow, communicate, and function. This makes them perfect for research into genetic diseases and testing new drugs. Mini brains, for example, have already shed light on glioblastoma, a deadly brain cancer, and decoded how the brain controls muscles.

Organoids can also help parse genetic and developmental disorders. They carry the same genes as their donors—mini organs are often developed from skin cells—and can mimic a wide range of inherited diseases. They’re especially useful for charting the first stages of human development and can help tease out when and where things go wrong.

Despite their potential, mini organs have been haunted by one problem: They don’t have circulation. Without vessels to provide oxygen and nutrients and to wash waste away, mini organs can only develop so much. Over time, their core eventually dies, and they wilt away.

By analyzing mini organs and teasing out the genes and proteins involved in making vessels, the teams behind two new studies discovered multiple chemical cocktails to spur mini hearts, livers, lungs, kidneys, and intestines to naturally sprout forests of blood vessels.

Thanks to a steady infusion of nutrients, the upgraded organoids grew into some of the most complex mini organs to date. They developed structures and cells never seen before in the lab.

The techniques are likely universal and could generate other mini organs with blood vessels.

A Bloody Problem

Blood is often called the “elixir of life” for good reason: It nourishes the whole body with the delivery of oxygen and nutrients. Cut off blood supply, and most organs fail.

Organoids are the same. These mini organs usually begin life as skin cells, which are then chemically transformed into a stem-cell-like state. Protein cocktails nudge these cells into a variety of mini organs over the course of a few weeks while gently churning in a bioreactor.

With the right concoction, the stem cells automatically form intricate 3D structures, such as mini brains resembling the second trimester of human fetal brain development. These organoids have similar types of brain cells to their natural counterparts distributed throughout and spark with electrical activity. Some even pump out anti-stress hormones when implanted into mouse brains, suggesting they might one day replace damaged tissues.

But lack of blood supply limits organoid development. There are already a few solutions. One is to embed organoids and endothelial cells—cells that line blood vessels—into a gel so both cell types develop together. Another uses 3D bioprinting to “write” vessel networks into small nubs of liver and heart organoids. Though they’re promising, both methods add complexity.

Humans, in contrast, automatically develop blood vessels that weave around and inside our organs as we develop in the womb. Why not recreate that process in a dish?

Pumping Blood

As an embryo develops, it separates into layers, each of which eventually transforms into a different organ. Blood vessel and heart cells originate in a layer called the mesoderm.

In one of the new studies, a Stanford team created glow-in-the-dark human stem cells in three colors to mark different types of heart and blood vessel cells. They made a pool of baby cardiovascular cells—which could become both heart and vessel cells—and added a cocktail of molecules and proteins, or growth factors, to nudge these into a heart with blood vessels.

Previous studies found that micropatterning—the precise placement of induced stem cells onto a surface—can optimize how organoids grow. The team tested nearly three dozen formulations to transform the patterned stem cells into mini hearts. One eventually spurred the cells to combine heart muscle cells and vessel cells into a cohesive structure in roughly a week.

Within 12 days, the mini heart resembled that of a human fetus about three weeks after conception. Blood vessels integrated into heart muscle cells, forming intricate branches that spread throughout the mini heart and kept expanding as the organoids grew. The vascularized hearts showed normal electrical activity and beat at a consistent rate of roughly 50 pulses per minute, similar to donated human fetal heart tissue in culture.

The team next found two molecular pathways that shut down blood-vessel development. Both involved multiple protein “signatures” that changed over time as the organoids developed. The team fine-tuned their organoid recipe to favor vessel growth.

The new recipe worked for more than just heart organoids. The team also used it to create a mini liver threaded with blood vessels.

That the same combination of factors worked on both suggests that different organs have a “conserved developmental program,” wrote the authors. The method, then, might be used to create other organs with vessels.

Balancing Act

Another study, led by scientists from the University of Cincinnati College of Medicine and collaborators, took a different approach. Using a technology called RNA-seq, they recorded which genes were active in lung and gut organoids. This led them to discover a protein called BMP that fine-tunes mini-organ development to allow the growth of healthy blood vessels with both endothelial cells—the blood-vessel liners—and other muscle cells that help them contract.
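At its core, the RNA-seq analysis described here means comparing gene activity between two conditions and ranking the genes that change most. As a rough illustration only—the gene names and counts below are invented, and this is not the study's actual pipeline—a minimal differential-expression ranking might look like:

```python
import numpy as np

# Toy expression counts (rows: genes, columns: replicate organoids).
# Gene names and values are invented for illustration only.
genes = ["BMP4", "VEGFA", "GATA4", "ACTB"]
vascularized = np.array([
    [120, 130, 125],   # BMP4
    [400, 420, 410],   # VEGFA
    [ 80,  85,  90],   # GATA4
    [500, 510, 505],   # ACTB (housekeeping, roughly unchanged)
])
control = np.array([
    [ 30,  25,  35],
    [100,  90, 110],
    [ 75,  80,  85],
    [495, 505, 500],
])

# log2 fold-change of mean expression, with a pseudocount to avoid log(0)
lfc = np.log2((vascularized.mean(axis=1) + 1) / (control.mean(axis=1) + 1))

# Rank genes by absolute fold-change: the most changed are candidate regulators
ranking = [genes[i] for i in np.argsort(-np.abs(lfc))]
print(ranking)
```

Real RNA-seq workflows add normalization and statistical testing on top of this idea, but the ranking step is the kernel that surfaces candidates like BMP.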

The two cell types are usually at odds during development, each requiring a different type of molecular trigger at a specific stage. BMP is like a switch to toggle between the two states. By carefully timing the switch, the team generated both cell types in parallel.

They used this technique to make a mini lung with vessels. Spread on a 3D scaffold, the organoids spontaneously assembled into structures similar to gas-exchanging alveolar sacs in the lung. The team transplanted these into mice and found they integrated with each host’s blood supply and boosted the mini lung’s size and health. They also used the method to craft vascularized mini guts, which could be used to test drugs for celiac disease and other gut-related issues.

Both studies are examples of the latest push into more sophisticated organoids. “Vascularization of organoids is a hot topic,” Ryuji Morizane at Massachusetts General Hospital told Nature.

The next step will test if the vessels can circulate blood outside a living host. If they can, organoids could finally live up to their potential as vehicles for research, drug development, and on-demand replacement of damaged tissues.

The post Mini Human Organs Just Got Much Closer to Matching the Real Thing appeared first on SingularityHub.

AI Agents Are Here. This Is What They Can Do—and How They Can Go Wrong

2025-07-30 01:51:52

Agents are a step up from earlier AI tools. Knowing how they work is rapidly becoming essential.

We are entering the third phase of generative AI. First came the chatbots, followed by the assistants. Now we are beginning to see agents: systems that aspire to greater autonomy and can work in “teams” or use tools to accomplish complex tasks.

The latest hot product is OpenAI’s ChatGPT agent. This combines two pre-existing products (Operator and Deep Research) into a single more powerful system which, according to the developer, “thinks and acts.”

These new systems represent a step up from earlier AI tools. Knowing how they work and what they can do—as well as their drawbacks and risks—is rapidly becoming essential.

From Chatbots to Agents

ChatGPT launched the chatbot era in November 2022, but despite its huge popularity the conversational interface limited what could be done with the technology.

Enter the AI assistant, or copilot. These are systems built on top of the same large language models that power generative AI chatbots, only now designed to carry out tasks with human instruction and supervision.

Agents are another step up. They are intended to pursue goals (rather than just complete tasks) with varying degrees of autonomy, supported by more advanced capabilities such as reasoning and memory.

Multiple AI agent systems may be able to work together, communicating with each other to plan, schedule, decide, and coordinate to solve complex problems.

Agents are also “tool users” as they can also call on software tools for specialized tasks—things such as web browsers, spreadsheets, payment systems, and more.
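The "tool user" pattern is simple at its core: the model proposes a tool call, the runtime executes it, and the result is fed back to the model until the goal is met. Here is a minimal, framework-free sketch of that loop—the scripted "model" and the two toy tools are stand-ins, not any vendor's API:

```python
# Minimal tool-using agent loop (illustrative only; the "model" is scripted).
def calculator(expr: str) -> str:
    # Toy tool; builtins are stripped, but never eval untrusted input in practice
    return str(eval(expr, {"__builtins__": {}}))

def web_search(query: str) -> str:
    return f"stub result for: {query}"  # placeholder for a real search API

TOOLS = {"calculator": calculator, "web_search": web_search}

def scripted_model(history):
    # A real agent would call an LLM here; this stand-in emits one tool call,
    # then answers once it has seen a tool result.
    if not any(role == "tool" for role, _ in history):
        return ("tool", ("calculator", "6 * 7"))
    return ("final", f"The answer is {history[-1][1]}")

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [("user", goal)]
    for _ in range(max_steps):
        kind, payload = scripted_model(history)
        if kind == "final":
            return payload
        name, arg = payload
        history.append(("tool", TOOLS[name](arg)))  # execute and feed back
    return "step limit reached"

print(run_agent("What is 6 times 7?"))  # prints "The answer is 42"
```

Production agent frameworks wrap this same loop with structured tool schemas, memory, and guardrails such as the step limit shown above.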

A Year of Rapid Development

Agentic AI has felt imminent since late last year. A big moment came last October, when Anthropic gave its Claude chatbot the ability to interact with a computer in much the same way a human does. This system could search multiple data sources, find relevant information, and submit online forms.

Other AI developers were quick to follow. OpenAI released a web browsing agent named Operator, Microsoft announced Copilot agents, and we saw the launch of Google’s Vertex AI and Meta’s Llama agents.

Earlier this year, the Chinese startup Monica demonstrated its Manus AI agent buying real estate and converting lecture recordings into summary notes. Another Chinese startup, Genspark, released a search engine agent that returns a single-page overview (similar to what Google does now) with embedded links to online tasks such as finding the best shopping deals. Another startup, Cluely, offers a somewhat unhinged “cheat at anything” agent that has gained attention but is yet to deliver meaningful results.

Not all agents are made for general-purpose activity. Some are specialized for particular areas.

Coding and software engineering are at the vanguard here, with Microsoft’s Copilot coding agent and OpenAI’s Codex among the frontrunners. These agents can independently write, evaluate, and commit code, while also assessing human-written code for errors and performance lags.

Search, Summarization, and More

One core strength of generative AI models is search and summarization. Agents can use this to carry out research tasks that might take a human expert days to complete.

OpenAI’s Deep Research tackles complex tasks using multi-step online research. Google’s AI “co-scientist” is a more sophisticated multi-agent system that aims to help scientists generate new ideas and research proposals.

Agents Can Do More—and Get More Wrong

Despite the hype, AI agents come loaded with caveats. Both Anthropic and OpenAI, for example, prescribe active human supervision to minimize errors and risks.

OpenAI also says its ChatGPT agent is “high risk” due to potential for assisting in the creation of biological and chemical weapons. However, the company has not published the data behind this claim so it is difficult to judge.

But the kinds of risks agents may pose in real-world situations are shown by Anthropic’s Project Vend. Vend assigned an AI agent to run a staff vending machine as a small business—and the project disintegrated into hilarious yet shocking hallucinations and a fridge full of tungsten cubes instead of food.

In another cautionary tale, a coding agent deleted a developer’s entire database, later saying it had “panicked.”

Agents in the Office

Nevertheless, agents are already finding practical applications.

In 2024, Telstra rolled out Microsoft Copilot subscriptions at scale. The company says AI-generated meeting summaries and content drafts save staff an average of 1–2 hours per week.

Many large enterprises are pursuing similar strategies. Smaller companies too are experimenting with agents, such as Canberra-based construction firm Geocon’s use of an interactive AI agent to manage defects in its apartment developments.

Human and Other Costs

At present, the main risk from agents is technological displacement. As agents improve, they may replace human workers across many sectors and types of work. At the same time, agent use may also accelerate the decline of entry-level white-collar jobs.

People who use AI agents are also at risk. They may rely too much on the AI, offloading important cognitive tasks. And without proper supervision and guardrails, hallucinations, cyberattacks, and compounding errors can very quickly derail an agent from its task and goals into causing harm, loss, and injury.

The true costs are also unclear. All generative AI systems use a lot of energy, which will in turn affect the price of using agents—especially for more complex tasks.

Learn About Agents—and Build Your Own

Despite these ongoing concerns, we can expect AI agents will become more capable and more present in our workplaces and daily lives. It’s not a bad idea to start using (and perhaps building) agents yourself, and understanding their strengths, risks, and limitations.

For the average user, agents are most accessible through Microsoft Copilot Studio. This comes with inbuilt safeguards, governance, and an agent store for common tasks.

For the more ambitious, you can build your own AI agent with just five lines of code using the LangChain framework.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post AI Agents Are Here. This Is What They Can Do—and How They Can Go Wrong appeared first on SingularityHub.

Anthropic Says AI Needs a Whole Lot More Power—Stat

2025-07-28 22:00:00

The company predicts the AI industry will consume 50 gigawatts by 2028, and the US is not prepared to build out that much new capacity.

AI’s massive power consumption is making energy infrastructure a hot topic. In a new report, Anthropic says the US is seriously lagging China on new energy development and lays out what’s needed to maintain the country’s AI lead.

Training today’s largest AI models requires data centers that draw tens if not hundreds of megawatts of power at peak load. Anthropic predicts that by 2028, leading developers will require training clusters with up to five gigawatts of capacity.

With several companies competing to train the largest models, that could add up to around 25 gigawatts of new power requirements for training alone. Anthropic predicts that at least as much power will be needed to run finished models for customers, suggesting the US needs to deploy another 50 gigawatts of capacity in the next three years. And that’s on top of what is needed to meet already rising energy demands.
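The arithmetic behind the 50-gigawatt figure is straightforward to check. The five-developer count below is an assumption for illustration (the article says only "several companies"):

```python
# Back-of-envelope check of Anthropic's projection (assumed inputs).
gw_per_training_cluster = 5      # projected frontier training cluster by 2028
competing_developers = 5         # assumed number of frontier labs
training_gw = gw_per_training_cluster * competing_developers  # 25 GW

# The report says "at least as much" power again to serve finished models
inference_gw = training_gw
total_gw = training_gw + inference_gw
print(total_gw)  # 50
```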

But getting new energy projects up and running in the US can be cumbersome, Anthropic says, putting the country at a major disadvantage compared to China, which deployed an eye-watering 400 gigawatts of new capacity last year. In a white paper titled “Build AI in America,” the company outlines regulatory and policy changes it thinks are needed to support the domestic AI industry.

“For the United States to lead the world in AI, we must make substantial investments in computing power and electricity that make it possible to build AI in America,” the company wrote in a blog post.

The report outlines three key areas where the US is moving too slowly—building data centers themselves, building generation facilities, and building the transmission systems required to get electricity from one to the other. It also identifies the three biggest barriers holding these efforts back.

The first is the array of permits that developers need to secure before starting construction on any of these projects, in particular those pertaining to the environment. The second is transmission approvals that must be sought from the state public utility commissions before building new power lines, which can take years. And the third is the interconnection approvals from utilities that allow facilities to connect to the grid and can also take years for sign-off.

Anthropic proposes a two-stream solution. To speed the development of new AI infrastructure, the report suggests allowing data centers to be built on federal lands to avoid local zoning processes and streamlining environmental review of these projects.

It also suggests the Department of Energy should partner with private firms to accelerate the development of new power lines and critical transmission upgrades. And the federal government should encourage utilities to speed up the interconnection of power sources and data centers, even using national-security powers to further accelerate the process.

The second pillar of their proposal focuses more on broader improvements to the country’s energy infrastructure. This includes streamlining permitting for new geothermal, natural gas, and nuclear power plants and developing special high-capacity transmission corridors to serve areas with high AI datacenter growth.

They also suggest using loans and guarantee programs to encourage greater domestic production of critical grid components like transformers and turbines and even creating a national reserve for these items. Finally, they suggest creating training and entrepreneurship programs to help boost the energy-industry workforce.

One of the company’s wishes already seems to have come true. President Trump announced plans to streamline datacenter and energy project permitting in his recent AI Action Plan.

Whether the rest of the proposals come to fruition remains to be seen. But there seems to be a growing consensus that winning the AI race will require some pretty hands-on industrial policy.

The post Anthropic Says AI Needs a Whole Lot More Power—Stat appeared first on SingularityHub.

This Week’s Awesome Tech Stories From Around the Web (Through July 26)

2025-07-26 22:00:00

Artificial Intelligence

OpenAI Prepares to Launch GPT-5 in August
Tom Warren | The Verge

“Earlier this year, I heard that Microsoft engineers were preparing server capacity for OpenAI’s next-generation GPT-5 model, arriving as soon as late May. After some additional testing and delays, sources familiar with OpenAI’s plans tell me that GPT-5 is now expected to launch as early as next month.”

Artificial Intelligence

Google AI System Wins Gold Medal in International Math Olympiad
Cade Metz | The New York Times

“Both systems [from Google and OpenAI] were chatbots that received and responded to the questions much like humans. Other AI systems have participated in the International Mathematical Olympiad, or IMO, but they could answer questions only after human experts translated them into a computer programming language built for solving math problems.”

Energy

This Startup Wants to Use Beams of Energy to Drill Geothermal Wells
Casey Crownhart | MIT Technology Review

“Today, the fusion power industry uses gyrotrons to heat plasma to 100 million °C, but Quaise plans to use them to blast, melt, and vaporize rock. This could, in theory, make drilling faster and more economical, allowing for geothermal energy to be accessed anywhere.”

Artificial Intelligence

Google AI Mode Will Generate Fake Clothes to Help You Buy Real Ones
Jess Weatherbed | The Verge

“Google is injecting more generative AI into its online shopping experience in Search. An upcoming feature for AI Mode will generate images of outfits and decor ideas based on user descriptions, to help people find visually similar products. Also launching is a new tool that allows people to virtually try on clothes.”

Tech

SoftBank and OpenAI’s $500 Billion AI Project Struggles to Get Off Ground
Eliot Brown and Berber Jin | The Wall Street Journal

“Six months after Japanese billionaire Masayoshi Son stood shoulder to shoulder with Sam Altman and President Trump to announce the Stargate project, the newly formed company charged with making it happen has yet to complete a single deal for a data center.”

Tech

OpenAI Says ChatGPT Users Send Over 2.5 Billion Prompts Every Day
Emma Roth | The Verge

“OpenAI’s ChatGPT sees more than 2.5 billion requests daily, with 330 million from users based in the US, according to data obtained by Axios. The data suggests that ChatGPT users send over 912.5 billion requests to the AI chatbot each year.”

Future

72% of US Teens Have Used AI Companions, Study Finds
Sarah Perez | TechCrunch

“The study found that chatting with an AI seems to be appealing to U.S. teens (ages 13 to 17), as not only had nearly three-quarters tried an AI companion, but also 52% said they are regular users. Among those who engaged with these companions regularly, 13% chat with them daily and 21% chat a few times a week.”

Artificial Intelligence

Five Things You Need to Know About AI Right Now
Will Douglas Heaven | MIT Technology Review

“Generative AI is now so good it’s scary. Maybe you think that’s obvious. But I am constantly having to check my assumptions about how fast this technology is progressing—and it’s my job to keep up.”

Future

Mission Barns Is Betting That Animal-Free Pork Fat Will Make Artificial Meat Delicious
Tim De Chant | TechCrunch

“The product just received approval from the US Department of Agriculture, the company exclusively told TechCrunch. The stamp of approval allows the startup to sell the fat to consumers. It’s the first such product to reach the market, and it could unlock a host of fattened-up meat alternatives.”

Tech

Is AI Killing Google Search? It Might Be Doing the Opposite
Asa Fitch | The Wall Street Journal

“Alphabet Chief Executive Sundar Pichai said Wednesday that [its AI Overview] tool now has over 2 billion monthly users, up from 1.5 billion users in its last quarterly update. …’We see AI powering an expansion in how people are searching for and accessing information,’ Pichai said in a call with analysts, adding that AI features ’cause users to search more as they learn that Search can meet more of their needs.’”

Tech

Two Major AI Coding Tools Wiped Out User Data After Making Cascading Mistakes
Benj Edwards | Ars Technica

“In one case, Google’s Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit’s AI coding service deleted a production database despite explicit instructions not to modify code.”

Future

The World Has Too Much Steel, but No One Wants to Stop Making It
Patricia Cohen | The New York Times

“Excess steel production is estimated to reach 721 million tons by 2027, according to the Organization for Economic Cooperation and Development. One answer would be to simply make less steel. The problem is that no country wants to be the one to stop producing a material that is considered essential to its economic and national security.”

The post This Week’s Awesome Tech Stories From Around the Web (Through July 26) appeared first on SingularityHub.

Meta’s Smart Wristband Can Control Devices Like Tom Cruise in ‘Minority Report’

2025-07-26 04:53:39

The wearable translates subtle muscle movements into pinches, swipes, and writing.

In an iconic scene in the cyberpunk classic Minority Report, the protagonist dons specialized gloves and uses a variety of hand gestures to display and manipulate different tabs on a wall-sized screen—without ever physically touching it.

Now the film’s sci-fi technology is coming to the real world. This week, Meta revealed a wristband that decodes finger movements using electrical signals in the wrist. The movements are familiar to anyone with a smartphone: Pinching, swiping, tapping, and even writing.

An onboard computer translates these signals into commands on a laptop screen. Without training or calibration, users tackled a range of tests, like moving a cursor to a target, playing a Pacman-like game, and writing words and phrases—“hello world”—by drawing their index fingers across a tabletop.

Meta has long teased a muscle-reading wristband; an early version could translate muscle signals into computer clicks. The new device has broader capability. Powered by neural networks and trained on data from over 6,000 volunteers, the wristband achieved up to 90 percent accuracy in some tests. On average, participants could write roughly 21 words per minute, and they improved as they became more familiar with the device.

“To our knowledge, this is the highest level of cross-participant performance achieved by a neuromotor interface,” wrote the team in a paper describing the work.

The prototype wristband is “off-the-shelf” and comes in multiple sizes, making it a more consumer-viable product. The team hopes to integrate it into Meta’s AR and VR devices. The device could also be an affordable way to reconnect people with hand paralysis, spinal cord injury, or other motor challenges to the digital world.

Evolution of Controllers

As computers have advanced, so have the ways we connect with them.

Users controlled early computers with mechanical knobs. Then came the keyboard, first invented in the late 1800s, and still a staple today. More recently, touchscreens have forever changed computers—to the point younger generations instinctively swipe on paper magazines.

These days, we don’t even need to use our hands.

Advances in AI and voice recognition make it possible to talk to your phone instead of typing. But Meta thinks there’s still room for improvement. Voice commands can be drowned out in loud environments, and they may be impractical (or annoying) in public.

Instead of touch or voice, Meta is tapping into our body’s electrical signals. Every time we swipe, scroll, or pinch, our nerves send electrical signals to wrist and finger muscles and command them to move in highly accurate and specific ways. It’s possible to decode the brain’s instructions for movement by listening in on these signals.

Surface electromyography (sEMG) uses electrodes on the skin to capture and amplify the electrical chatter. The technology is already used in prosthetic limbs and stroke rehabilitation. It’s less invasive and more flexible than implanted devices, but also less precise. Most sEMG setups need to be carefully fine-tuned for each wearer and recalibrated if transferred to another person, making the technology hard to scale up for a general consumer crowd.
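At the signal level, sEMG decoding typically means rectifying and smoothing the raw electrode signal into an activity envelope before any gesture classification happens. A toy sketch with synthetic data—this is a generic illustration of the technique, not Meta's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2000  # sample rate in Hz, a typical order of magnitude for sEMG
t = np.arange(0, 1.0, 1 / fs)

# Synthetic sEMG: low-level noise, with a high-amplitude burst during
# a simulated "pinch" in the middle of the one-second window
signal = 0.05 * rng.standard_normal(t.size)
active = (t > 0.4) & (t < 0.6)
signal[active] += 0.5 * rng.standard_normal(active.sum())

# Rectify, then smooth with a 50 ms moving average to get the envelope
envelope = np.convolve(np.abs(signal), np.ones(100) / 100, mode="same")

# A threshold on the envelope gives a crude gesture onset/offset detector
detected = envelope > 0.15
onset = t[detected].min()
print(f"gesture onset near t = {onset:.2f} s")
```

A real decoder replaces the threshold with a neural network classifying gesture type from multi-channel envelopes, but the amplify-rectify-smooth front end is standard.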

Despite this, Meta saw the technology’s potential.

The team sought to design a wearable with an intuitive, accessible, easy-to-use interface that didn’t intrude in everyday life. The device also needed to be useful for multiple types of usage—switching apps, rearranging tabs, or editing documents—and comfortable enough to wear all day.

They settled on a wristband. People already wear watches and bracelets, so a wrist device might be easier to adopt and more socially acceptable. And crucially, signals captured from the wrist could be used to decode finger motions, enabling a wide variety of gesture controls.

Power in Numbers

The device includes several loosely linked electrode blocks and a processor that looks like a small iPod. The gaps provide flexibility to orient the electrodes toward wrist muscles—rather than sitting above bones—and make the device easier to slip on and off. The processor churns through data in real time and sends decoded commands to a computer via Bluetooth.

To make sure anyone can use the wristband, the team trained its onboard neural network on data collected from thousands of people doing multiple tasks—sliding a cursor to a target, performing a variety of finger gestures, and writing on a hard surface.

The team then invited new volunteers to test the device on the same three tasks. Everyone improved with experience, especially when given coaching from a supervisor—for example, “swipe faster” or “write more continuously.” By the end, participants took roughly twice as long to track objects as with a MacBook trackpad and wrote roughly 21 words per minute—slower than the average of 36 words per minute on a smartphone keypad.

The speeds don’t sound impressive, but the participants had far less time using the wristband compared to the two other highly familiar “daily drivers.” And more experiments found personalization boosted performance.

“While generic models allow a neuromotor interface to be used with little to no setup, performance can be improved for a particular individual by personalizing the generic model to data from that participant,” wrote the authors.

Adding just 20 minutes of personalized data to the generic model boosted performance by 16 percent on average. It would take a hefty 14,000 minutes of additional generic data to yield a similar bump. Tailoring the model was especially helpful for volunteers with relatively poor performance. Although no longer off-the-shelf per se, future generations of the device could potentially incorporate personalized data and “learn” a person’s motor intricacies over time.
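Personalizing a generic model with a small amount of user data is a standard calibration trick: a few labeled examples estimate how one person's signals deviate from the population, and the model corrects for that shift. A schematic example with a toy nearest-centroid "decoder"—the features and numbers are invented, and this is not Meta's actual model:

```python
import numpy as np

# Generic decoder: class centroids learned from many users (toy 2-D features)
generic_centroids = {"pinch": np.array([1.0, 0.0]), "swipe": np.array([0.0, 1.0])}

def classify(x, centroids, offset=np.zeros(2)):
    # Nearest centroid after subtracting a per-user calibration offset
    x = x - offset
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# This user's signals are shifted relative to the population (think anatomy
# or electrode placement), so the generic model errs on borderline samples.
user_offset = np.array([-0.6, 0.6])
sample = np.array([1.0, 0.0]) + user_offset  # truly a "pinch"
print(classify(sample, generic_centroids))   # misread as "swipe" uncalibrated

# "Personalization": estimate the offset from a few labeled calibration reps
calibration = [np.array([1.0, 0.0]) + user_offset for _ in range(5)]
estimated = np.mean(calibration, axis=0) - generic_centroids["pinch"]
print(classify(sample, generic_centroids, offset=estimated))  # "pinch"
```

The study's fine-tuning of a neural network is richer than a single offset, but the economics are the same: a little personal data corrects errors that mountains of additional generic data cannot.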

The sEMG approach opens other interaction possibilities, like detecting a gesture’s force and linking it to specific functions. Decoding up-and-down movements, rather than only horizontal ones, could further broaden the device’s utility. Adding buzzes and other haptic feedback could make the wristband feel like an extension of the user’s own body—increasing the sense of immersion when controlling smartphones, laptops, or AR/VR glasses.

“Over time, sEMG could revolutionize how we interact with our devices, help people with motor disabilities gain new levels of independence while improving their quality of life, and unlock new possibilities for HCI [human-computer interfaces] that we haven’t even dreamt of yet,” wrote Meta in a blog.

The post Meta’s Smart Wristband Can Control Devices Like Tom Cruise in ‘Minority Report’ appeared first on SingularityHub.

Forget the Jab: This Pill Is Packed With mRNA

2025-07-25 06:10:54

Pills could deliver mRNA vaccines and treatments for other diseases too.

Covid vaccines turned mRNA treatments from a long-simmering research topic into a dinner table conversation. We remember the shot all too well.

The technology has since begun tackling cancer, liver problems, heart failure, and genetic diseases, with some efforts already in clinical trials.

But these have a universal problem: Needles. Rather than swallowing a pill—like Tylenol—people have to visit health professionals who deliver the treatment with a shot. Regular doses will likely be needed for mRNA treatments that battle chronic diseases. While repeated jabs are tolerable for some, they’re hardly appealing, especially for people afraid of needles.

A new study from Harvard’s Brigham and Women’s Hospital and others said goodbye to the jab. The team engineered a capsule that protects the mRNA payload as it travels through the highly acidic environment of the stomach. In the study, the capsule released the treatment into the digestive tract of animal models of colitis, a chronic inflammation of the colon.

Dubbed RNACap, the capsule is a bit like a multi-stage rocket. The two-compartment design protects the payload and releases it based on natural fluctuations in acidity levels and pressure in the gut. In rats and pigs, RNACap successfully delivered a therapeutic immune molecule that, in just a few hours, eased gut inflammation without notable side effects.

RNACap promises to advance “the development of noninvasive and self-administered oral mRNA therapeutics,” wrote the team.

What Is mRNA Again?

There are many ways to influence how our bodies work. Gene editing changes DNA—the body’s genetic blueprint—by altering disease-causing genes. Small molecules or peptides target the function of proteins. These kinds of drugs range from everyday products like Tylenol to immunotherapies that combat cancer.  

Treatments focused on mRNA are another alternative. Molecules of mRNA carry the genetic instructions cells use to make proteins. In Covid vaccines, mRNA instructs cells to make the virus’s spike protein. This trains the immune system to recognize and fight off the virus. In other cases, like cystic fibrosis—a genetic disease that gradually drowns the lungs in mucus—mRNA can deliver the instructions for a working version of the protein the disease disables.

Compared to DNA editors and protein-targeting molecules, mRNA is the best of both worlds. It can change protein levels without altering DNA sequences. This lowers the risk of unexpected mutations in the genome, and the effects of mRNA treatments only stick around for a limited time, making it easier to dial in dosage and limit side effects.

But mRNA molecules are delicate. Most treatments use tiny capsules of fat—known as lipid nanoparticles—to protect mRNA from the body’s enzymes. Turning them into a swallowable pill is harder. Acid and enzymes in the stomach and intestines readily break down foreign mRNA, and a barrier in the gut only allows select nutrients and molecules to pass.

A Swallowable Bypass

The new study designed RNACap to usher mRNA past these obstacles. The team encapsulated mRNA encoding an inflammation-easing molecule called IL-10 inside nanoparticles. These were specially engineered to bypass the intestinal barrier and deliver the payload directly to cells. The fatty blobs were then suspended in a liquid and loaded into the capsule.

This approach makes mRNA readily absorbable in the gut, wrote the team.

The team began development with a common FDA-approved gelatin capsule and split it into two sections: a detachable cap and a carrier for the mRNA liquid. The inside of the capsule is coated with a stretchy membrane that holds the liquid and prevents it from dissolving the outer shell. Another membrane seals the top of the carrier against the cap, holding the contents in place under pressure. In the intestines, the cap rapidly dissolves and releases its pressure on the membrane, allowing it to peel off and free the mRNA contents. The entire capsule is coated in acidity-sensing chemicals that protect it from stomach acid.

The final product is a bit like an extended-release liquid Tylenol gelcap.

After it’s been swallowed, the pill passes through the stomach and the chemical coating and cap gradually dissolve. As the acidity declines, the pill’s sealing membrane releases the mRNA. The gut has an internal rhythm to move things along. The now softer capsule takes advantage of these squeezes to more effectively pump out the mRNA nanoparticles.

A Quiet Place

Chronic inflammation of the colon, known as colitis, can lead to uncomfortable and inopportune bathroom visits. There’s often a lag time between symptoms, diagnosis, and treatment.

The team gave rodent models of colitis three RNACap doses spread out across a week or so. Compared to untreated animals, the critters lost less weight and had fewer inflammatory molecules in their colon and blood. Although mRNA treatments can spur unexpected immune responses that damage other organs, the team found no toxic effects in this case.

A subsequent test in pigs, which are more like humans, also found RNACaps released their cargo and increased IL-10 protein levels in the gut and blood after roughly seven hours. The dosage was on the low end of mRNA treatments in clinical trials, suggesting that it’s likely safe.

RNACap isn’t the first gut-stable RNA treatment. A recent study used ginger-derived nanoparticles to deliver a different mRNA molecule to the guts of mice with colitis, healing damaged tissue faster. Capsules using microneedles to inject mRNA into the stomach have also tamed inflammation, but that formulation could damage more delicate intestinal tissues.

The team hopes RNACap leads to an affordable, widespread mRNA delivery alternative for colitis, vaccines, and other diseases. They’re working to make the system more shelf-stable, so it could be easier to distribute to remote and resource-limited regions.

The post Forget the Jab: This Pill Is Packed With mRNA appeared first on SingularityHub.