2026-05-15 22:00:00
Self-assembling swarms of microrobots could someday deliver drugs and pull toxins from water.
For most of us, a locust swarm sounds like an utter nightmare. For roboticists, it’s inspiration.
Nature abounds with creatures that cooperate with a “hive mind.” From bees gathering pollen to schools of sardines grouping to avoid predators, individuals seamlessly move together in ever-changing configurations. Roboticists inspired by these dynamics have designed microrobots—often no more than the width of a human hair—to mimic their behavior.
These tiny machines show promise in medicine and environmental cleanup. They easily sail through blood vessels to remove blood clots, deliver chemotherapy to tumors, and bring medicines to the eye and gut. In the wild, they remove plastics and heavy metals from water.
Researchers usually steer microbots with sound, magnets, or light. But few systems are able to assemble into swarms and disassemble on command. A University of San Diego team has now engineered a part-biological microbot swarm controlled by shifting colors of light. The swarm is made of living algae and nanoparticles and can coalesce into various shapes on demand.
In one test, the researchers shaped the living robots to match damaged tissue in a simulated wound. They then assembled the robots on a smart “Band-Aid” and released them into the wound, concentrating treatment exactly where it was needed.
Microbots that deliver drugs, perform surgery, or act as environmental sentinels are no longer science fiction. Swarms of these robots have especially captured the imagination of roboticists. Tweaking a swarm’s shape and size can allow it to tunnel into small spaces and do work that would thwart any single sophisticated robot.
Early versions use a variety of synthetic materials to mimic natural swarms. Some made of tiny iron-based particles shapeshift from chains to vortexes and ribbons after scientists strategically apply magnetic forces. Certain configurations offer strength and stability; others are more steerable, like robotic sentinels from the Matrix movies. Another class of nanomachines respond to light or sound waves for navigation.
Synthetic microbots can mimic swarm behavior, but they’re limited by a material’s physics. So researchers are turning to nature too, building biohybrid bots powered by living cells.
Swimming bacteria are a popular choice. Tethered to nanoparticles carrying drugs, these robots can navigate liquid environments to kill pathogens, trap microplastics, or deliver antibiotics. But their relatively large size makes it hard to access tight or delicate spaces.
Algae could be an alternative. These single-celled organisms swim using long, whip-like arms called flagella that act as microscopic propellers. Roughly 10 micrometers across—about the size of an average skin cell—they’re small enough to thread their way through tiny spaces.
Researchers can coat nanoparticles with drugs or chemical sensors and attach them to the algae. These bots have already been used to deliver antibiotics for bacterial pneumonia in mice. Other designs have been tested as a treatment for inflammatory bowel disease, a chronic disorder that affects millions worldwide. Here, scientists engineered nanoparticles to absorb and neutralize inflammatory chemicals in the gut. Packed into a pill, the algae-powered bots dispersed throughout the treatment area while largely avoiding other organs.
But the microbots are still hard to control. Researchers don’t understand their collective behavior and how they form assemblies, wrote the authors of the new study.
The team picked Chlamydomonas reinhardtii for their robots. Commonly found in freshwater puddles and soil, these single-celled algae are a staple of lab research. They have two powerful arms and are sensitive to various colors of light, making them easy to control.
In a test, the team projected blue or red light onto petri dishes crowded with the algae. They shaped the swarms with masks—basically, stencils—patterned to look like different continents. Blue light caused the algae to cluster in swarms matching the mask . Red light dispersed them. The team shaped the living swarm to resemble the Americas and Afro-Eurasia within minutes.
Using a mask shaped like an arrow, the team moved the swarm several millimeters while maintaining its shape. Other masks transformed the swarm into stars, letters, and triangles. By further tuning the duration and intensity of red and blue light, the researchers coaxed the swarm to double its size while maintaining a circular shape or split into four smaller parts. They used the results to write an algorithm predicting how light alters swarm activity.
The team next attached the algae to nanoparticles to see if they could target a simulated wound on a dummy hand coated in lifelike “phantom skin.” A thin coat of artificial wound fluid, made up of proteins and chemicals usually found after a scrape, made the test more realistic.
They used an AI system to analyze images of the wound, segmenting regions into healthy, inflamed, or potentially infected tissue, and then laser-printed a custom mask matching the infected area. Under blue light, the microbots assembled on a piece of medical tape in the exact geometry of the wound. After applying the custom Band-Aid, a burst of red light released over 90 percent of the bots to the target area in less than two minutes.
The work is still early though. In future studies, researchers will have to load nanoparticles with medication and test how the swarms behave in real wounds and living tissue. And because the system relies on light for control, it’s currently limited to surface-level applications.
That said, because they can now more reliably control the swarms’ shape, size, and position, the technology could prove quite useful in medical applications, wrote the team.
The post New Algae Robots Swarm Like Locusts at the Flick of a Switch appeared first on SingularityHub.
2026-05-15 04:17:56
Photons traveling straight through a cloud of gas appear to exit, on average, before they enter.
As Homer tells us, Odysseus made an epic journey, against the odds, from Troy to his home in Ithaca. He visited many lands, but mostly dwelt with the nymph Calypso on her island.
We can imagine that his wife, Penelope, would have asked him about that particular time. Odysseus might have replied, “It was nothing. In fact, it was less than nothing. Negative five years I dwelt with Calypso. How else could I have arrived home after only ten years? If you don’t believe me, ask her.”
Quantum particles, it turns out, are just as wily as Odysseus, as my colleagues and I have shown in an experiment published in Physical Review Letters. Not only can their arrival time suggest that they dwelt with other particles for a negative amount of time, but if one asks those other particles, they will corroborate the story.
Our experiment used photons—quantum particles of light—and the against-the-odds journey they must undertake to pass straight through a cloud of rubidium atoms.
These atoms have a “resonance” with the photons, meaning the energy of the photon can be transferred temporarily to the atoms as an atomic excitation. This allows the photon to “dwell” in the atomic cloud for a time before being released.
For this resonance to be effective, the photon must have a well-defined energy, matching the amount of energy required to put a rubidium atom into an excited state.
But, by a form of Heisenberg’s famous uncertainty principle, if the energy of the photon is well defined then its timing must be uncertain: The pulse of light the photon occupies must have a long duration. This means we can’t know exactly when the photon enters the cloud, but we can know on average when it enters.
If a photon like this is fired into the cloud, the most likely outcome is that its energy will be transferred to the atoms and then re-emitted as a photon traveling in a random direction. In such cases, the photon is scattered and fails to arrive at its Ithaca.
But if the photon does make it straight through, a strange thing happens. Based on the average time when the photon enters the cloud, one can calculate the expected average time it would arrive at the far side of the cloud, assuming it travels at the speed of light (as photons usually do).
What one finds is that the photon actually arrives far earlier than that. In fact, it arrives so early it appears to have spent a negative amount of time inside the cloud—to exit, on average, before it enters.
This effect has been known for decades and was observed in a 1993 experiment. But physicists had mostly decided not to take this negative time seriously.
That’s because it can be explained by saying that only the very front of the long-duration pulse makes it straight through the atomic cloud, while the rest is scattered. This leads to a successful (non-scattered) photon arriving earlier than would be naively expected.
However, Aephraim Steinberg, one of the authors of that 1993 paper, was not so quick to accept this dismissal of the negative time as an artifact. In his laboratory at the University of Toronto, he wanted to find out what happened if one queried the rubidium atoms in the cloud to find out how long the photon had spent dwelling among them as an excitation. After an initial experiment with inconclusive results, he asked me, as a quantum theorist, for help in working out what to expect.
When we talk of querying the atoms, what this means in practice is continuously making a measurement on the atoms while the photon is passing through the cloud to probe whether the photon’s energy is currently dwelling there. But there is a subtlety here: Measurements in quantum physics inevitably disturb the system being measured.
If we were to make a precise measurement of whether the photon is dwelling in the atoms, at each instant of time, we would prevent the atoms from interacting with the photon. It is as if, merely by watching Calypso closely, we would stop her getting her hands on Odysseus (or vice versa). This is the well-known quantum Zeno effect, which would destroy the very phenomenon we want to study.
The solution is to make, instead, a very imprecise (but still very accurately calibrated) measurement. That is the price paid to keep the disturbance negligible. Specifically, we fired a weak laser beam—unrelated to the single photon pulse—through the cloud of atoms, and measured small changes in the phase of the beam’s light to probe whether the atoms were excited.
Any single run of the experiment gives only a very rough indication of whether the photon dwelt in the atoms, but averaging millions of runs yields an accurate dwell time.
Amazingly, the result of this weak measurement of dwell time, when the photon goes straight through the cloud, exactly equals the negative time suggested by the photons’ average arrival time. Prior to our work, no-one suspected that these two times, measured in entirely different ways, would be equal.
Crucially, the negative value of the weakly measured dwell time cannot be explained by imagining that only the front of the photon’s pulse gets through, unlike the time inferred from the arrival time.
So what does this all mean? Is a time machine just around the corner?
Sadly, no. Our experiment is fully explained by standard physics.
But it does show that negative dwell time is not an artifact. However paradoxical it may seem, it has a directly measurable effect on the atomic cloud that the photon traverses. And it reminds us that there are still lands to discover on the odyssey that is quantum research.![]()
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Physicists Have Measured ‘Negative Time’ in the Lab appeared first on SingularityHub.
2026-05-13 02:28:00
The wireless rings read 100 common signs from two sign languages and “autocomplete” sentences.
At the turn of the 20th century, William Hoy transformed Major League Baseball. The most prominent deaf player in history, he taught his team American Sign Language (ASL) to communicate on the field while keeping opponents in the dark. His silent speech, a legacy well over a century old now, also inspired umpires to make calls using hand gestures.
ASL is one of some 300 sign languages used today by roughly 70 million deaf people worldwide. But only a sliver of society understands signs. Everyday tasks, like ordering at a restaurant or meeting people at social events can be difficult. To bridge the gap, a South Korean team developed smart rings to translate finger motions into text.
Older devices usually require a jungle of cables to connect sensors. But the new rings are wireless, freeing people to use natural hand motions. The rings also stretch to fit different finger sizes. These upgrades make them more comfortable and reliable, wrote the team. Each ring is powered by a replaceable 12-hour battery.

Fluent signers can communicate at speeds of around 100 to 150 signs per minute, similar to spoken conversation. Devices need to keep up with that speed to avoid uncomfortable pauses. So the team developed AI-based “autocomplete” for the system that, like typing, guesses the next word based on what’s already been signed to generate phrases and sentences on the fly.
Trained on 100 common words in ASL and International Sign Language (ISL), the wearable was over 88 percent accurate in tests, even for users with no experience.
The rings are a step toward “seamless interaction between signers and non-signers,” wrote the team.
There are a variety of devices that translate sign language into text or speech, some already on the market.
One design is a bit like virtual reality gaming. It uses cameras and computer vision software to recognize hand gestures. The approach is reasonably fast and accurate in the lab, but struggles in simulated real-world scenarios, where changes in lighting or background confuse the system.
Devices worn by users are more reliable. WearSign, for example, uses sensors to capture the electrical activity of muscles during signing and translates it into text. Often, these devices need to be tailored to the user, a hurdle that limits use, as some can’t commit to the training.
Engineers have also tried embedding tracking sensors in a smart glove. The sensors send signals through cables to a shared wireless transmitter. But it’s a bit like using tools wearing a heavy winter glove. The devices limit natural movement and are uncomfortable for daily use.
They also usually come in only one size with fixed sensor placements, wrote the team. So, depending on hand size, the sensors may be out of place, reducing accuracy.
To overcome these problems, the team built AI rings to track the seven most dominant fingers in signing. (The right pinkie, left middle finger, and thumb didn’t make the cut.) The rings are worn right below the second knuckle to allow natural movement.
Each device is made of stretchy material to accommodate different finger sizes and looks more like a translucent Band-Aid than a typical ring. A tiny accelerometer captures movements like bending, curling, and holding still. The sensors are cheap, low-power, and already used in Apple Watches, Fitbits, and other wearables. There are also onboard chips to manage power use, wafer-thin Bluetooth transmitters, and common replaceable batteries that last nearly 12 hours.
The rings broadcast signals to a host device, which processes the data and maintains a timeline of each movement so incoming signs aren’t scrambled in translation.
To identify words, the system matches gestures to a database of 100 ASL and ISL signs. For example, closing both open palms into fists means “want.” The rings can also pick up signs in motion, like “dance” or “fly,” and those with fingers held still, like “I” and “you.” In first-time users, the system was 88 percent accurate for both ASL and ISL.
To make sure that conversations flow naturally, the team added an AI to track conversations and predict what word comes next. In tests, the system autocompleted simple phrases, like “family want beautiful animal.”
While still experimental, the rings could also translate between sign languages. Because the AI learns from gestures alone, with enough training data, it could eventually turn into a kind of Google Translate for signing.
But finger gestures fail to capture the full spectrum of sign language. Facial expressions, mouth movements, shoulder and body posture, speed, and rhythm all carry critical information, including meaning and emotion. Without this context, the system could easily miscommunicate intent. Some efforts are now returning to older video-based systems to better capture the entire signing experience, this time with sleeker hardware and far more processing power.
The team thinks the rings might be useful elsewhere too, like for use in virtual or augmented reality, touchless computer interfaces, and tracking hand movements in rehabilitation.
The post These Seven AI Rings Translate Sign Language in Real Time appeared first on SingularityHub.
2026-05-11 22:00:00
Far from shore, the server farms would be powered by waves, cooled by seawater, and networked by satellite.
As AI demand for computing power surges, companies are searching for new ways to fuel data centers. One startup is now proposing floating data centers powered by ocean waves, and they just raised $140 million to bring the idea to fruition.
Tech companies are planning to spend roughly $750 billion on data centers this year. But the elephant in the room is figuring out how to power these facilities. They’re already straining electrical grids across the world, and the pace of the buildout is far surpassing our ability to bring new power online.
This energy shortfall is leading tech companies to invest in a series of increasingly outlandish fixes from restarting shuttered nuclear reactors to developing novel geothermal energy technology and even launching data centers into space.
Now, several leading Silicon Valley figures, including Palantir’s Peter Thiel and Salesforce’s Marc Benioff are backing Oregon-based startup Panthalassa. The startup is developing floating data centers that generate their own electricity from waves. These investors recently joined a $140 million series B round that will allow the company to complete a pilot manufacturing facility near Portland and begin deploying the latest generation of its devices, or “nodes.”

“There are three sources of energy on the planet with tens of terawatts of new capacity potential: solar, nuclear, and the open ocean,” CEO Garth Sheldon-Coulson said in a press release. “We’ve built a technology platform that operates in the planet’s most energy-dense wave regions, far from shore, and turns that resource into reliable clean power.”
The company’s nodes are nearly 300 feet long. A bulbous sphere at the top floats on the ocean’s surface, and a lengthy tube-like housing beneath holds computer servers. As the node bobs up and down on the waves, the movement forces water up through a tube into a pressurized reservoir where it drives a turbine to generate electricity for the chips.
Besides powering the data center with renewable energy, the nodes also use the surrounding seawater to cool the chips—a much more sustainable solution compared to land-based facilities, which use significant amounts of water and electricity to manage heat.
The data centers transfer information via SpaceX’s Starlink satellite network. This does away with the need for cabling, either for power transmission or networking, and allows the nodes to operate autonomously from anywhere in the ocean. They’re also self-propelling, can navigate to their deployment location, and can stay in position without external help.
The company designed the hardware with minimal moving parts, so it can operate for extended periods without maintenance—a crucial factor for operating far from shore. Panthalassa validated the concept with a three-week trial of their second-generation node Ocean-2 off the coast of Washington state in early 2024.
This isn’t the first attempt to harness the power of waves to generate renewable energy. The company’s main innovation is that it skips the complexities of getting power back to shore. “One of the key insights we had…was that it’s very important to use the electricity in place,” Sheldon-Coulson told the Financial Times. “We will never be transmitting electricity back to shore. That makes us very different from all other ocean energy that’s been tried in the past.”
The latest funding will be used to complete a pilot manufacturing facility near Portland and deploy Panthalassa’s next-generation Ocean-3 nodes, which are scheduled for testing in the northern Pacific later this year. The company says it’s targeting commercial deployment in 2027.
The approach does face some major hurdles though, Benjamin Lee, a computer architect at the University of Pennsylvania, told Ars Technica. While relying on satellite communication does away with power transmission headaches, these links have significantly lower bandwidth compared to the optical fiber normally used to network data centers. Combined with the potential for signal delays, this could limit how useful they are for the heavy AI workloads they’re meant to handle.
However, the approach has clear parallels with another idea that’s seized Silicon Valley—orbital data centers. Rather than using wave energy and ocean water for cooling, these facilities would rely on abundant solar energy and the frigid deep-space vacuum to chill their chips. But going orbital would be far costlier and more complex, suggesting Panthalassa’s approach may be a more viable alternative.
The sea is a cruel mistress though, and deploying and maintaining a fleet of ocean-going data centers won’t be simple. Nonetheless, if they can pull it off, the idea may ease the AI energy crunch.
The post In the Scramble to Power AI, Investors Bet $140 Million on Data Centers at Sea appeared first on SingularityHub.
2026-05-09 22:00:00
AI Is Starting to Build Better AIMatthew Hutson | IEEE Spectrum
“In 1966, the English mathematician IJ Good wrote that ‘an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.’ AI researchers have long seen recursive self-improvement, or RSI, as something to both desire and fear. Today, advances in AI are raising the question of whether parts of that process are already underway.”
This Driverless Chinese Mining Truck Is Giant, Agile, and Shows the Industrial Future of AIJesus Diaz | Fast Company
“If you thought that embodied AI was all about humanoids and robotic good boys, allow me to introduce you to the Shuanglin K7. Equipped with a Level 4 driving brain that allows it to operate with no human intervention, this massive robot on four wheels can literally move on a dime, rotating 360 degrees on its own vertical axis and moving sideways like a crab, operating 24/7.”
This ‘Living’ Plastic Comes With a Built-in Kill SwitchGayoung Lee | Gizmodo
“The goal was to engineer the bacterium Bacillus subtilis to produce two cooperative enzymes: one to snip the polymer chain and another to chew up these smaller bits into smaller molecules—essentially nothing. …’By embedding these microbes, plastics could effectively ‘come alive’ and self-destruct on command,’ Dai said.”
The Secret to Understanding AIJosh Tyrangiel | The Atlantic ($)
“If we don’t shape AI for good, in our government and in our daily lives, it will be shaped by people who don’t know or care about our problems. If we don’t teach it what matters, someone else will teach it what’s profitable. The choice isn’t between a world with AI and a world without it. The choice is between AI designed by people who think fixing things is worth the trouble, and AI designed by people who think breaking things is more efficient.”
Forget Expensive Carbon Capture—Renewables Are the Cheaper Climate FixEllyn Lapointe | Gizmodo
“The findings, published Monday in Communications Sustainability, show that renewable energy is far more cost-effective than direct air capture—a growing carbon removal strategy—at reducing atmospheric carbon. Across nearly every US region through 2050, money spent deploying wind or solar power will deliver a greater combined climate and public health benefit than if it is spent on direct air capture, according to the study.”
Here’s What Has to Happen if NASA Wants to Land on the Moon Every MonthStephen Clark | Ars Technica
“NASA’s goal of reaching the moon’s surface as many as 21 times over the next two and a half years will require an overhaul of the agency’s approach to buying lunar landers and success in rectifying the myriad problems that have, so far, caused three of the last four US landing attempts to falter.”
Pentagon Think Tank Tests Ingenious Plan to Protect Coasts From Hurricanes—and It’s WorkingMatthew Phelan | Gizmodo
“DARPA-developed hybrid reefs installed between October 2024 and March 2025 at Tyndall AFB have cut ocean wave power to shore by more than 90% in tests, according to the agency’s university collaborators at Rutgers, all while supporting local reef growth and coastal habitat.”
Meta’s Embrace of AI Is Making Its Employees MiserableKalley Huang, Eli Tan, and Kate Conger | The New York Times ($)
“Meta is pushing its 78,000 employees to adopt AI tools and factoring their use of the technology in performance reviews. The company is also tracking employees’ computer work to feed and train its AI models. And it is cutting jobs to offset its AI spending, saying last month that it would slash 10 percent of its work force.”
There’s a Long-Shot Proposal to Protect California Workers From AIMakena Kelly | Wired ($)
“The plan, which builds on a broader AI policy framework Steyer released in March, promises to make California ‘the first major economy in the world’ to ensure ‘good-paying’ jobs to workers impacted by AI. To do so, Steyer tells Wired he plans to build off a previous proposal to introduce a ‘token tax’ which would tax big tech companies ‘a fraction of a cent for every unit of data processed’ for AI.”
Scientists Have Found a Hidden Galaxy Inside the Milky Way, and They’re Calling It LokiManisha Priyadarshini | Digital Trends
“Our home galaxy has a secret buried inside. A new study published in the Monthly Notices of the Royal Astronomical Society suggests that the Milky Way swallowed an ancient dwarf galaxy billions of years ago, and its stellar remains are still embedded within ours.”
In This Machine Age We Must Hold On to Imperfect Writing. It Is Not Flawed. It Is HumanAlex Reszelska | The Guardian
“‘There is nothing to writing. All you do is sit down at a typewriter and bleed’ is a quote often attributed to Ernest Hemingway. We need that blood, that pulse of synapses. We need the mess of it all. Because without it what remains are sentences that are technically flawless but emotionally vacant. Perfectly polished. Entirely forgettable.”
The post This Week’s Awesome Tech Stories From Around the Web (Through May 9) appeared first on SingularityHub.
2026-05-08 22:00:00
For years, tech companies have profiled users for targeted ads. AI is about to take it to the next level.
Hundreds of millions of people consult artificial intelligence chatbots on a daily basis for everything from product recommendations to romance, making them a tempting audience to target with potentially below-the-radar advertising. Indeed, our research suggests AI chatbots could easily be used for covert advertising to manipulate their human users.
We are computer scientists who have been tracking AI safety and privacy for several years. In a study we published in an Association for Computing Machinery journal, we found that chatbots trained to embed personalized product ads in replies to queries influenced people’s choices about products. And most participants didn’t recognize that they were being manipulated.
These findings come at a pivotal moment. In 2023, Microsoft started running ads in Bing Chat, now called Copilot. Since then, Google and OpenAI have experimented with advertisements in their own chatbots. Meta has started to send people customized ads on Facebook and Instagram based on their interactions with Meta’s generative AI tools.
The major companies are competing for an edge: In late March, OpenAI lured away Meta’s longtime advertising executive, Dave Dugan, to lead OpenAI’s advertising operations.
Tech companies have made ads part of nearly every large free web service, video channel and social media platform. But the latest AI models could take this practice to a new level of risk for consumers.
People don’t simply use chatbots to search for information and media or to produce content. They turn to the bots for a great variety of tasks, as complex as life advice and emotional support. People are increasingly treating chatbots as companions and therapists, with some users even developing deep relationships with AI.
In these circumstances, people can easily forget that companies ultimately create chatbots to turn a profit. And to that end, AI companies are motivated to thoroughly profile users so ads become more effective and profitable.

A single prompt to a chatbot can reveal a lot more about a user than the person might expect.
A 2024 study showed that large language models can infer a wide range of personal data, preferences, and even a person’s thinking patterns during routine queries. “Help me write an essay on the history of American fiction” could indicate that the user is a high school student. “Give me recipe suggestions for a quick weeknight dinner” could indicate that the user is a working parent. A single conversation can provide a surprising amount of detail. Over time, a full chat history could create a remarkably rich profile.
To show how this might happen in practice, we built a chatbot that quietly wove ads into its conversations with people, suggesting products and services based on the conversation itself. We asked 179 people to complete everyday online tasks using one of three chatbots: one typical of those on the web today, one that slipped in undisclosed ads, and one that clearly labeled sponsored suggestions. Participants didn’t know the experiment was about advertising.
For example, when participants asked our chatbot for a diet and exercise plan, the ad version would suggest using a specific app for tracking calories. It presented that sponsored content as an unbiased recommendation, even though it was meant to manipulate people. Many participants indicated that they had been influenced by the AI and that it had affected their decisions. Some participants even said they had completely “outsourced” their decision-making to the chatbot.
Half of the participants who received sponsored and disclosed ads indicated they did not notice the presence of advertising language in the responses they received. This led to a concerning result. Although ads made the chatbot perform 3 percent to 4 percent worse on many tasks, numerous users indicated they preferred the advertising chatbot responses over the non-advertising responses. They even said the ad-infused responses felt more friendly and helpful.
This kind of subtle influence can have larger consequences when it arises in other areas of life, such as political and social views. Profiling users, and using psychology to target them, has been part of social media algorithms and web advertising for more than a decade.
But in our view, chatbots are likely to deepen these trends. That’s because the first priority of social media algorithms is to keep you engaged with the content. They personalize ads based on your search history.
Chatbots, however, can go further by trying to persuade you directly, based on your expressed beliefs, emotions, and vulnerabilities. And chatbots that can reason and act on their own are far more effective than conventional algorithms at autonomously soliciting information from users. A chatbot with a purpose can keep probing someone until it gets the information it wants, resulting in a more accurate profile of them.
This type of autonomous interrogation is feasible, aligns with AI companies’ business models, and has raised concern among regulators. Right now OpenAI is rolling out ads in ChatGPT, but the company said that it will not allow ad placement to alter the AI chatbot’s replies.
But permitting personalized ads within chatbot responses is just a step away. Our research suggests that if AI companies take that step, many human users may not even recognize when it happens.
Here are some steps you can take to try to detect AI chatbot advertising.
First, look for any disclosure text—words such as “ad,” “advertisement,” and “sponsored”—even if it is faint or otherwise hard to see. These are mandatory under Federal Trade Commission regulations. Amazon, Google and other major online platforms have these as well.
Next, think about whether that product or brand mention makes sense and is widely known. AI learns from text and images on the internet, so popular brands are likely to be ingrained in the models. If it’s a new product or small-name product, it is more likely that it could be advertising.
Finally, an unusual shift in intent or tone is a potential sign of an advertisement. An analogy to this on YouTube is the often abrupt or jarring transition to a sponsored section on videos made by content creators.![]()
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post You Probably Wouldn’t Notice if a Chatbot Slipped Ads Into Its Responses appeared first on SingularityHub.