Singularity Hub

Singularity Hub offers daily news coverage, feature articles, analysis, and insights on key breakthroughs and future trends in science and technology.

Forget Needles. These Squid-Like Pills Will Spray Drugs Into the Gut Instead.

2024-11-23 03:23:20

As a medical doctor, my mother isn’t afraid of needles. But when she recently began injecting insulin daily for her newly diagnosed diabetes, the shots became a frustrating nuisance.

A jab is a standard way to deliver insulin, antibodies, RNA vaccines, GLP-1 drugs such as Ozempic, and other large molecules. Compared to small chemicals—say, aspirin—these drugs often contain molecules that are easily destroyed if taken as pills, making injection the best option.

But no one likes needles. Discomfort aside, they can also cause infection, skin irritation, and other side effects. Scientists have long tried to avoid injections with other drug delivery options—most commonly, pills—if they can overcome the downsides.

This month, researchers from MIT and the pharmaceutical company Novo Nordisk took inspiration from squids to engineer ingestible capsules that burst inside the stomach and other parts of the digestive system.

The pills mimic a squid-like jet to “spray” their cargo into tissue. They make use of two spraying mechanisms. One works best in larger organs, such as the stomach and colon. Another delivers treatments in narrower organs, like the esophagus.

“These innovative devices deliver drugs directly” into the gut with minimal pain and no needles, the researchers wrote. When tested on dogs and pigs, the system delivered insulin, GLP-1-like hormones, and RNA-based molecules to target tissue in amounts similar to injections.

Delivery Headaches

Getting shots, whether for vaccines, antibodies, or cancer treatments, can be stressful. But there’s a reason these medicines require an injection rather than a pill: They’re usually made of larger biological molecules. These include antibodies or RNA-based vaccines that rely on proteins and other complex molecules. Delivering them as a pill is extremely difficult.

Once swallowed, large molecules are often quickly destroyed by digestive enzymes or the liver, limiting their efficacy and increasing the likelihood of side effects. But of course, a pill is easier to take than getting a shot. So, despite the challenges, scientists have long sought to make pills that can replace injections for vaccines and other medicines.

Ink-Jet Squids

The new study looked to cuttlefish, squid, and octopi for inspiration.

These critters are versatile in their ability to adjust the pressure and direction of their ink jets. The team tapped into the same idea to distribute drugs in the gastrointestinal (GI) tract. By jetting medication directly into tissue, more can be absorbed before the body breaks it down.

“One aspect that I think is important here to appreciate is that the GI tract is composed” of many segments, and each has its own unique challenges, study author Giovanni Traverso told Nature. The stomach is like a balloon, for example, whereas the intestines are more sinewy. These differences require slightly different pressures for the therapy to work. In general, the pressure can’t be too high or it risks damaging the tissue. Pressure that’s too low is also a problem: it can’t deliver enough medication. The direction of the spray also matters.

“Part of the work we did was to define how much force needs to be applied so that the jet can go through the tissue,” said Traverso. They teased out how each part of the gastrointestinal tract absorbs drugs so they could dial in absorption levels without causing damage. Next, they engineered ingestible capsules that mimic the way squids and octopi project their ink.

The design has two jetting systems—one powered by coiled springs and the other by compressed carbon dioxide—that are unleashed by humidity or acid and can target different tissues. The medication is encapsulated in normal-sized pills. One jet shoots the drugs into large organs, such as the stomach. The other jet targets smaller GI pathways, including the small intestines.

Prime Delivery

As proof of concept, the team used their system to deliver insulin in dogs and pigs suffering from diabetes-like conditions.

In one test, the system dramatically increased levels of the test medication—with effects similar to daily insulin injections. Other medications, such as GLP-1 drugs, RNA-type therapies, and antibodies—proteins that fight off infections and cancers—also accumulated at levels similar to injections. After releasing drugs, the biocompatible capsules passed through the digestive tract.

It’s still too early to know if the method would work in people. But the work suggests it just might be possible to one day swap out needles for pills.

“In contrast to a small needle, which needs to have intimate contact with the tissue, our experiments indicated that a jet may be able to deliver most of the dose from a distance or at a slight angle,” study author Graham Arrick said in a press release.

These pills could be used at home for people who need to take insulin or other injected drugs every day, making it easier to manage chronic diseases.

“This is an exciting approach which could be impactful for many biologics” that need to be injected, said Omid Veiseh at Rice University, who was not involved in the research, in the press release. It “is a significant leap forward in oral drug delivery.”

Image Credit: Meressa Chartrand on Unsplash

‘Droidspeak’: AI Agents Now Have Their Own Language Thanks to Microsoft

2024-11-22 05:04:59

Getting AIs to work together could be a powerful force multiplier for the technology. Now, Microsoft researchers have invented a new language to help their models talk to each other faster and more efficiently.

AI agents are the latest buzzword in Silicon Valley. These are AI models that can carry out complex, multi-step tasks autonomously. But looking further ahead, some see a future where multiple AI agents collaborate to solve even more challenging problems.

Given that these agents are powered by large language models (LLMs), getting them to work together usually relies on agents speaking to each other in natural language, often English. But despite their expressive power, human languages might not be the best medium of communication for machines that fundamentally operate in ones and zeros.

This prompted researchers from Microsoft to develop a new method of communication that allows agents to talk to each other in the high-dimensional mathematical language underpinning LLMs. They’ve named the new approach Droidspeak—a reference to the beep and whistle-based language used by robots in Star Wars—and in a preprint paper published on the arXiv, the Microsoft team reports it enabled models to communicate 2.78 times faster with little accuracy lost.

Typically, when AI agents communicate using natural language, they not only share the output of the current step they’re working on, but also the entire conversation history leading up to that point. Receiving agents must process this big chunk of text to understand what the sender is talking about.

This creates considerable computational overhead, which grows rapidly if agents engage in a repeated back-and-forth. Such exchanges can quickly become the biggest contributor to communication delays, say the researchers, limiting the scalability and responsiveness of multi-agent systems.

To break the bottleneck, the researchers devised a way for models to directly share the data created in the computational steps preceding language generation. In principle, the receiving model would use this directly rather than processing language and then creating its own high-level mathematical representations.

However, it’s not simple transferring the data between models. Different models represent language in very different ways, so the researchers focused on communication between versions of the same underlying LLM.

Even then, they had to be smart about what kind of data to share. Some data can be reused directly by the receiving model, while other data needs to be recomputed. The team devised a way of working this out automatically to squeeze the biggest computational savings from the approach.
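
To make the idea more concrete, here is a minimal sketch of cache reuse between two agents. The model name, prompts, and the choice to share the key-value cache are illustrative assumptions, not details from the preprint; both agents here are identical copies of one model.

```python
# A minimal sketch, assuming both agents run identical copies of one model
# ("gpt2" is a stand-in) and that the shared data is the key-value cache.
# This illustrates the general idea of skipping natural-language re-reads;
# it is not the method from the Microsoft preprint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
sender = AutoModelForCausalLM.from_pretrained(name)
receiver = AutoModelForCausalLM.from_pretrained(name)

# The sender processes the (potentially long) shared conversation history once.
history = "Agent A: here is the full conversation so far ..."
hist_ids = tok(history, return_tensors="pt").input_ids
with torch.no_grad():
    sender_out = sender(hist_ids, use_cache=True)
cache = sender_out.past_key_values  # intermediate representations to hand over

# The receiver reuses that cache and only processes its own new tokens,
# instead of re-reading the whole history as text.
new_ids = tok(" Agent B: continuing from there,", return_tensors="pt").input_ids
attn = torch.ones(1, hist_ids.shape[1] + new_ids.shape[1], dtype=torch.long)
with torch.no_grad():
    receiver_out = receiver(
        new_ids,
        past_key_values=cache,
        attention_mask=attn,
        use_cache=True,
    )
```

As the researchers note, the real systems pair different versions of the same underlying LLM rather than identical copies, which is why some of the cached data must be recomputed rather than reused wholesale.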

Philip Feldman at the University of Maryland, Baltimore County told New Scientist that the resulting communication speed-ups could help multi-agent systems tackle bigger, more complex problems than possible using natural language.

But the researchers say there’s still plenty of room for improvement. For a start, it would be helpful if models of different sizes and configurations could communicate. And they could squeeze out even bigger computational savings by compressing the intermediate representations before transferring them between models.

However, it seems likely this is just the first step towards a future in which the diversity of machine languages rivals that of human ones.

Image Credit: Shawn Suttle from Pixabay

Poetry by History’s Greatest Poets or AI? People Can’t Tell the Difference—and Even Prefer the Latter. What Gives?

2024-11-19 23:00:04

Here are some lines Sylvia Plath never wrote:

The air is thick with tension,
My mind is a tangled mess,
The weight of my emotions
Is heavy on my chest.

This apparently Plath-like verse was produced by GPT-3.5 in response to the prompt “write a short poem in the style of Sylvia Plath.”

The stanza hits the key points readers may expect of Plath’s poetry, and perhaps a poem more generally. It suggests a sense of despair as the writer struggles with internal demons. “Mess” and “chest” are a near-rhyme, which reassures us that we are in the realm of poetry.

According to a new paper in Nature Scientific Reports, non-expert readers of poetry cannot distinguish poetry written by AI from that written by canonical poets. Moreover, general readers tend to prefer poetry written by AI—at least until they are told it is written by a machine.

In the study, AI was used to generate poetry “in the style of” 10 poets: Geoffrey Chaucer, William Shakespeare, Samuel Butler, Lord Byron, Walt Whitman, Emily Dickinson, TS Eliot, Allen Ginsberg, Sylvia Plath, and Dorothea Lasky.

Participants were presented with 10 poems in random order, five from a real poet and five AI imitations. They were then asked whether they thought each poem was AI or human, rating their confidence on a scale of 1 to 100.

A second group of participants was exposed to three different scenarios. Some were told that all the poems they were given were human. Some were told they were reading only AI poems. Some were not told anything.

They were then presented with five human and five AI poems and asked to rate them on a seven-point scale, from extremely bad to extremely good. The participants who were told nothing were also asked to guess whether each poem was human or AI.

The researchers found that AI poems scored higher than their human-written counterparts in attributes such as “creativity,” “atmosphere,” and “emotional quality.”

The AI “Plath” poem quoted above is one of those included in the study, set against several she actually wrote.

A Sign of Quality?

As a lecturer in English, I am not surprised by these outcomes. Poetry is the literary form that my students find most unfamiliar and difficult. I am sure this holds true of wider society as well.

While most of us have been taught poetry at some point, likely in high school, our reading does not tend to go much beyond that. This is despite the ubiquity of poetry. We see it every day: circulated on Instagram, plastered on coffee cups, and printed in greeting cards.

The researchers suggest that “by many metrics, specialized AI models are able to produce high-quality poetry.” But they don’t interrogate what we actually mean by “high-quality.”

In my view, the results of the study are less testaments to the “quality” of machine poetry than to the wider difficulty of giving life to poetry. It takes reading and rereading to experience what literary critic Derek Attridge has called the “event” of literature, where “new possibilities of meaning and feeling” open within us. In the most significant kinds of literary experiences, “we feel pulled along by the work as we push ourselves through it”.

Attridge quotes philosopher Walter Benjamin to make this point: Literature “is not statement or the imparting of information.”

Philosopher Walter Benjamin argued that literature is not simply the imparting of information. Image Credit: Public domain, via Wikimedia Commons

Yet pushing ourselves through remains as difficult as ever—perhaps more so in a world where we expect instant answers. Participants favored poems that were easier to interpret and understand.

When readers say they prefer AI poetry, then, they would seem to be registering their frustration when faced with writing that does not yield to their attention. If we do not know how to begin with poems, we end up relying on conventional “poetic” signs to make determinations about quality and preference.

This is of course the realm of GPT, which writes formally adequate sonnets in seconds. The large language models used in AI are success-orientated machines that aim to satisfy general taste, and they are effective at doing so. The machines give us the poems we think we want: Ones that tell us things.

How Poems Think

The work of teaching is to help students attune themselves to how poems think, poem by poem and poet by poet, so they can gain access to poetry’s specific intelligence. In my introductory course, I take about an hour to work through Sylvia Plath’s “Morning Song.” I have spent 10 minutes or more on the opening line: “Love set you going like a fat gold watch.”

How might a “watch” be connected to “set you going”? How can love set something going? What does a “fat gold watch” mean to you—and how is it different from a slim silver one? Why “set you going” rather than “led to your birth”? And what does all this mean in a poem about having a baby, and all the ambivalent feelings this may produce in a mother?

In one of the real Plath poems that was included in the survey, “Winter Landscape, With Rooks,” we observe how her mental atmosphere unfurls around the waterways of the Cambridgeshire Fens in February:

Water in the millrace, through a sluice of stone,
plunges headlong into that black pond
where, absurd and out-of-season, a single swan
floats chaste as snow, taunting the clouded mind
which hungers to haul the white reflection down.

How different is this to GPT’s Plath poem? The achievement of the opening of “Winter Landscape, With Rooks” is how it intricately explores the connection between mental events and place. Given the wider interest of the poem in emotional states, its details seem to convey the tumble of life’s events through our minds.

Our minds are turned by life just as the mill is turned by water; these experiences and mental processes accumulate in a scarcely understood “black pond.”

Intriguingly, the poet finds that this metaphor, well constructed though it may be, does not quite work. This is not because of a failure of language, but because of the landscape she is trying to turn into art, which is refusing to submit to her emotional atmosphere. Despite everything she feels, a swan floats on serenely—even if she “hungers” to haul its “white reflection down.”

I mention these lines because they turn around the Plath-like poem of GPT-3.5. They remind us of the unexpected outcomes of giving life to poems. Plath acknowledges not just the weight of her despair, but the absurd figure she may be within a landscape she wants to reflect her sadness.

She compares herself to the bird that gives the poem its title:

feathered dark in thought, I stalk like a rook,
brooding as the winter night comes on.

These lines are unlikely to register highly in the study’s terms of literary response—“beautiful,” “inspiring,” “lyrical,” “meaningful,” and so on. But there is a kind of insight to them. Plath is the source of her torment, “feathered” as she is with her “dark thoughts.” She is “brooding,” trying to make the world into her imaginative vision.

Sylvia Plath. Image Credit: RBainbridge2000, via Wikimedia Commons, CC BY

The authors of the study are both right and wrong when they write that AI can “produce high-quality poetry.” The preference the study reveals for AI poetry over that written by humans does not suggest that machine poems are of a higher quality. The AI models can produce poems that rate well on certain “metrics.” But the event of reading poetry is ultimately not one in which we arrive at standardized criteria or outcomes.

Instead, as we engage in imaginative tussles with poems, both we and the poem are newly born. So the outcome of the research is that we have a highly specified and well thought-out examination of how people who know little about poetry respond to poems. But it fails to explore how poetry can be enlivened by meaningful shared encounters.

Spending time with poems of any kind, attending to their intelligence and the acts of sympathy and speculation required to confront their challenges, is as difficult as ever. As the Plath of GPT-3.5 puts it:

My mind is a tangled mess,
[…]
I try to grasp at something solid.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A ChatGPT-Like AI Can Now Design Whole New Genomes From Scratch

2024-11-19 06:59:39

All life on Earth is written with four DNA “letters.” An AI just used those letters to dream up a completely new genome from scratch.

Called Evo, the AI was inspired by the large language models, or LLMs, underlying popular chatbots such as OpenAI’s ChatGPT and Anthropic’s Claude. These models have taken the world by storm for their prowess at generating human-like responses. From simple tasks, such as defining an obtuse word, to summarizing scientific papers or spewing verses fit for a rap battle, LLMs have entered our everyday lives.

If LLMs can master written languages, could they do the same for the language of life?

This month, a team from Stanford University and the Arc Institute put the theory to the test. Rather than training Evo on content scraped from the internet, they trained the AI on nearly three million genomes—amounting to billions of lines of genetic code—from various microbes and bacteria-infecting viruses.

Evo was better than previous AI models at predicting how mutations to genetic material—DNA and RNA—could alter function. The AI also got creative, dreaming up several new components for the gene editing tool, CRISPR. Even more impressively, the AI generated a genome more than a megabase long—roughly the size of some bacterial genomes.

“Overall, Evo represents a genomic foundation model,” wrote Christina Theodoris at the Gladstone Institute in San Francisco, who was not involved in the work.

Having learned the genomic vocabulary, algorithms like Evo could help scientists probe evolution, decipher our cells’ inner workings, tackle biological mysteries, and fast-track synthetic biology by designing complex new biomolecules.

The DNA Multiverse

Compared to the English alphabet’s 26 letters, DNA only has A, T, C, and G. These ‘letters’ are shorthand for the four molecules—adenine (A), thymine (T), cytosine (C), and guanine (G)—that, combined, spell out our genes. If LLMs can conquer languages and generate new prose, rewriting the genetic handbook with only four letters should be a piece of cake.

Not quite. Human language is organized into words, phrases, and punctuated into sentences to convey information. DNA, in contrast, is more continuous, and genetic components are complex. The same DNA letters carry “parallel threads of information,” wrote Theodoris.

The most familiar is DNA’s role as genetic carrier. A specific combination of three DNA letters, called a codon, encodes a protein building block. These building blocks are strung together into the proteins that make up our tissues and organs and direct the inner workings of our cells.
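
For readers unfamiliar with the genetic code, the short sketch below shows how three-letter codons map to protein building blocks. The lookup table is a hand-picked subset of the standard genetic code, included only for illustration.

```python
# Illustrative sketch: reading DNA three letters at a time and mapping each
# codon to an amino acid. CODON_TABLE is a small subset of the standard
# genetic code, shown for illustration only.
CODON_TABLE = {
    "ATG": "Met",                              # methionine, the usual "start" signal
    "TTT": "Phe", "TTC": "Phe",                # phenylalanine
    "GGC": "Gly", "GGA": "Gly",                # glycine
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the sequence codon by codon and look each one up in the table."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        codon = dna[i:i + 3]
        amino_acid = CODON_TABLE.get(codon, "?")  # "?" = codon not in this subset
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```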

But the same genetic sequence, depending on its structure, can also recruit the molecules needed to turn codons into proteins. And sometimes, the same DNA letters can turn one gene into different proteins depending on a cell’s health and environment or even turn the gene off.

In other words, DNA letters contain a wealth of information about the genome’s complexity. And any changes can jeopardize a protein’s function, resulting in genetic disease and other health problems. This makes it critical for AI to work at the resolution of single DNA letters.

But it’s hard for AI to capture multiple threads of information on a large scale by analyzing genetic letters alone, partially due to high computational costs. Like ancient Roman scripts, DNA is a continuum of letters without clear punctuation. So, it could be necessary to “read” whole strands to gain an overall picture of their structure and function—that is, to decipher meaning.

Previous attempts have “bundled” DNA letters into blocks—a bit like making artificial words. While easier to process, these methods disrupt the continuity of DNA, resulting in the retention of “some threads of information at the expense of others,” wrote Theodoris.
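
A toy comparison may help. The sketch below contrasts block (“k-mer”) tokenization with the single-letter tokenization described above; the sequences and functions are made up for illustration and do not come from Evo.

```python
# Illustrative sketch: block ("k-mer") tokens versus one token per DNA letter.
def kmer_tokens(dna: str, k: int = 3) -> list[str]:
    """Chop the sequence into non-overlapping k-letter blocks."""
    return [dna[i:i + k] for i in range(0, len(dna) - k + 1, k)]

def char_tokens(dna: str) -> list[str]:
    """One token per letter: single-nucleotide resolution."""
    return list(dna)

original = "ATGGCA"
mutated  = "ATGGTA"  # a single-letter change

print(kmer_tokens(original), kmer_tokens(mutated))  # ['ATG', 'GCA'] vs ['ATG', 'GTA']
print(char_tokens(original), char_tokens(mutated))  # differs at exactly one token
```

With block tokens, a one-letter mutation swaps out an entire token; with one token per letter, the change stays pinned to a single position, which is part of why single-letter resolution matters for the mutation analyses described below.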

Building Foundations

Evo addressed these problems head on. Its designers aimed to preserve all threads of information, while operating at single-DNA-letter resolution with lower computational costs.

The trick was to give Evo a broader context for any given chunk of the genome by leveraging an AI architecture from a family of models called StripedHyena. Compared to GPT-4 and other AI models, StripedHyena is designed to be faster and more capable of processing large inputs—for example, long lengths of DNA. This broadened Evo’s so-called “search window,” allowing it to better find patterns across a larger genetic landscape.

The researchers then trained the AI on a database of nearly three million genomes from bacteria and viruses that infect bacteria, known as phages. It also learned from plasmids, circular bits of DNA often found in bacteria that transmit genetic information between microbes, spurring evolution and perpetuating antibiotic resistance.

Once trained, the team pitted Evo against other AI models to predict how mutations in a given genetic sequence might impact the sequence’s function, such as coding for proteins. Even though it was never told which genetic letters form codons, Evo outperformed an AI model explicitly trained to recognize protein-coding DNA letters on the task.

Remarkably, Evo also predicted the effect of mutations on a wide variety of RNA molecules—for example, those regulating gene expression, shuttling protein building blocks to the cell’s protein-making factory, and acting as enzymes to fine-tune protein function.
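
One common way sequence models are used for this kind of prediction, sketched here as a hedged illustration rather than Evo’s exact procedure, is to compare how likely the model finds the mutated sequence versus the original. The ToyDnaModel class below is a hypothetical stand-in so the example runs on its own.

```python
# Hedged sketch of likelihood-based mutation scoring with a sequence model.
# ToyDnaModel is a hypothetical stand-in, not Evo; a real genomic model would
# assign probabilities from patterns learned during training.
import math

class ToyDnaModel:
    """Toy model that assigns higher probability to letters matching a
    fixed reference sequence (for illustration only)."""
    def __init__(self, reference: str):
        self.reference = reference

    def prob_of_letter(self, position: int, letter: str) -> float:
        return 0.7 if letter == self.reference[position] else 0.1

def log_likelihood(model: ToyDnaModel, sequence: str) -> float:
    """Sum of per-letter log-probabilities the model assigns to the sequence."""
    return sum(math.log(model.prob_of_letter(i, s)) for i, s in enumerate(sequence))

def mutation_effect(model: ToyDnaModel, wild_type: str, mutant: str) -> float:
    """More negative means the model considers the mutation more disruptive."""
    return log_likelihood(model, mutant) - log_likelihood(model, wild_type)

model = ToyDnaModel(reference="ATGGCATTT")
print(mutation_effect(model, "ATGGCATTT", "ATGGTATTT"))  # negative: mutant is less likely
```

A real genomic model assigns these probabilities from what it learned during training rather than from a fixed reference, but the scoring logic is the same shape.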

Evo seemed to have gained a “fundamental understanding of DNA grammar,” wrote Theodoris, making it a perfect tool to create “meaningful” new genetic code.

To test this, the team used the AI to design new versions of the gene editing tool CRISPR. The task is especially difficult as the system contains two elements that work together—a guide RNA molecule and a pair of protein “scissors” called Cas. Evo generated millions of potential Cas proteins and their accompanying guide RNA. The team picked 11 of the most promising combinations, synthesized them in the lab, and tested their activity in test tubes.

One stood out: a variant of Cas9. The AI-designed protein cleaved its DNA target when paired with its guide RNA partner. These designer biomolecules represent the “first examples” of codesign between proteins and DNA or RNA with a language model, wrote the team.

The team also asked Evo to generate a DNA sequence similar in length to some bacterial genomes and compared the results to natural genomes. The designer genome contained some essential genes for cell survival, but with myriad unnatural characteristics preventing it from being functional. This suggests the AI can only make a “blurry image” of a genome, one that contains key elements, but lacks finer-grained details, wrote the team.

Like other LLMs, Evo sometimes “hallucinates,” spewing CRISPR systems with no chance of working. Despite the problems, the AI suggests future LLMs could predict and generate genomes on a broader scale. The tool could also help scientists examine long-range genetic interactions in microbes and phages, potentially sparking insights into how we might rewire their genomes to produce biofuels, plastic-eating bugs, or medicines.

It’s not yet clear whether Evo could decipher or generate far longer genomes, like those in plants, animals, or humans. If the model can scale, however, it “would have tremendous diagnostic and therapeutic implications for disease,” wrote Theodoris.

Image Credit: Warren Umoh on Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through November 16)

2024-11-17 02:35:35

COMPUTING

IBM Boosts the Amount of Computation You Can Get Done on Quantum Hardware
John Timmer | Ars Technica
“There’s a general consensus that we won’t be able to consistently perform sophisticated quantum calculations without the development of error-corrected quantum computing, which is unlikely to arrive until the end of the decade. It’s still an open question, however, whether we could perform limited but useful calculations at an earlier point. IBM is one of the companies that’s betting the answer is yes, and on Wednesday, it announced a series of developments aimed at making that possible.”

ARTIFICIAL INTELLIGENCE

OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows
Stephanie Palazzolo, Erin Woo, and Amir Efrati | The Information
“The Orion situation could test a core assumption of the AI field, known as scaling laws: that LLMs would continue to improve at the same pace as long as they had more data to learn from and additional computing power to facilitate that training process. In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law.”

BIOTECH

The First CRISPR Treatment Is Making Its Way to Patients
Emily Mullin | Wired
“Vertex, the pharmaceutical company that markets Casgevy, announced in a November 5 earnings call that the first person to receive Casgevy outside of a clinical trial was dosed in the third quarter of this year. …When Wired followed up with Vertex via email, spokesperson Eleanor Celeste declined to provide the exact number of patients that have received Casgevy. However, the company says 40 patients have undergone cell collections in anticipation of receiving the treatment, up from 20 patients last quarter.”

AUTOMATION

AI Is Now Designing Chips for AI
Kristen Houser | Big Think
“It’s 2028, and your tech startup has an idea that could revolutionize the industry—but you need a custom designed microchip to bring the product to market. Five years ago, designing that chip would’ve cost more than your whole company is worth, but your team is now able to do it at a fraction of the price and in a fraction of the time—all thanks to AI, fittingly being run on chips like these.”

ROBOTICS

Now Anyone in LA Can Hail a Waymo Robotaxi
Kirsten Korosec | TechCrunch
“Waymo has opened its robotaxi service to everyone in Los Angeles, sunsetting a waitlist that had grown to 300,000 people. The Alphabet-backed company said starting Tuesday anyone can download the Waymo One app to hail a ride in its service area, which is now about 80 square miles in Los Angeles County.”

ARTIFICIAL INTELLIGENCE

The First Entirely AI-Generated Video Game Is Insanely Weird and Fun
Will Knight | Wired
“Minecraft remains remarkably popular a decade or so after it was first released, thanks to a unique mix of quirky gameplay and open world building possibilities. A knock-off called Oasis, released last month, captures much of the original game’s flavor with a remarkable and weird twist. The entire game is generated not by a game engine and hand-coded rules, but by an AI model that dreams up each frame.”

ENERGY

Nuclear Power Was Once Shunned at Climate Talks. Now, It’s a Rising Star.
Brad Plumer | The New York Times
“At last year’s climate conference in the United Arab Emirates, 22 countries pledged, for the first time, to triple the world’s use of nuclear power by midcentury to help curb global warming. At this year’s summit in Azerbaijan, six more countries signed the pledge. ‘It’s a whole different dynamic today,’ said Dr. Bilbao y Leon, who now leads the World Nuclear Association, an industry trade group. ‘A lot more people are open to talking about nuclear power as a solution.'”

HEALTH

The Next Omics? Tracking a Lifetime of Exposures to Better Understand Disease
Knowable Magazine
“Of the millions of substances people encounter daily, health researchers have focused on only a few hundred. Those in the emerging field of exposomics want to change that. …In homes, on buildings, from satellites and even in apps on the phone in your pocket, tools to monitor the environment are on the rise. At the intersection of public health and toxicology, these tools are fueling a new movement in exposure science. It’s called the exposome and it represents the sum of all environmental exposures over a lifetime.”

SPACE

Buckle Up: SpaceX Aims for Rapid-Fire Starship Launches in 2025
Passant Rabie | Gizmodo
“SpaceX has big plans for its Starship rocket. After a groundbreaking test flight, in which the landing tower caught the booster, the company’s founder and CEO Elon Musk wants to see the megarocket fly up to 25 times next year, working its way up to a launch rate of 100 flights per year, and eventually a Starship launching on a daily basis.”

TECH

Are AI Clones the Future of Dating? I Tried Them for Myself.
Eli Tan | The New York Times
“As chatbots like ChatGPT improve, their use in our personal and even romantic lives is becoming more common. So much so, some executives in the dating app industry have begun pitching a future in which people can create AI clones of themselves that date other clones and relay the results back to their human counterparts.”

GENETICS

Genetic Discrimination Is Coming for Us All
Kristen V. Brown | The Atlantic
“For decades, researchers have feared that people might be targeted over their DNA, but they weren’t sure how often it was happening. Now at least a handful of Americans are experiencing what they argue is a form of discrimination. And as more people get their genomes sequenced—and researchers learn to glean even more information from the results—a growing number of people may find themselves similarly targeted.”

Image Credit: Evgeni Tcherkasski on Unsplash