2025-12-16 23:00:00
The microbots have tiny computers, sensors, and actuators. They can sense temperature and swim autonomously.
The robots, each the size of a single cell, casually turn circles in a bath of water. Suddenly, their sensors detect a change: Parts of the bath are heating up. The microrobots halt their twirls and head for warmer waters, where they once again settle into lounge-mode—all without human interference.
For 40 years, scientists have tried to engineer ‘smart’ microrobots. But building microscopic machines that sense, learn, and act based on their programming has eluded researchers. Today’s most sophisticated robots, such as Boston Dynamics’ Atlas, already embody these functions using computer chips, algorithms, and actuators. The seemingly simple solution would be to shrink down larger systems, and voila, mission accomplished.
It’s not so easy. The physical laws governing semiconductors and other aspects of robotics go sideways at the microscopic scale. “Fundamentally different approaches are required for truly microscopic robots,” wrote Marc Miskin and team at the University of Pennsylvania.
Their study, published last week in Science Robotics, packed the autonomous abilities of full-sized robots into microrobots 10,000 times smaller—each one roughly the size of a single-celled paramecium. Costing just a penny per unit to manufacture, the bots are loaded with sensors, processors, communications modules, and actuators to propel them.
In tests, the microrobots responded to a variety of instructions transmitted from a computer workstation helmed by a person. After receiving the code, however, the bots functioned autonomously with energy consumption near that of single cells.
While just prototypes, similar designs could one day roam the body to deposit medications, monitor the environment, or make nanomanufacturing more adjustable.
Intelligent living “microrobots” surround us. Despite their miniature size and lack of a central brain, single-celled creatures are quick to sense, learn, and adapt to shifting surroundings. If evolution can craft these resilient microorganisms, why can’t we?
So far, the smallest robots that can sense, be reprogrammed, and move on command are at least a millimeter across, roughly the size of a grain of sand. Further shrinking runs into roadblocks based on fundamental physical principles.
Just as quantum computing departs from everyday physics—with one computational quirk famously called “spooky action at a distance” by Albert Einstein—the rules that guide computer chip and robotic performance also begin to behave differently at the microscopic scale.
For example, forces on a robot’s surface become disproportionately large, so the devices stick to everything, including themselves. This means motors have to ramp up their power, which swiftly exhausts scarce energy resources. Drag also limits mobility, like trying to move with a parachute in strong winds. Processors suffer too—shrinking down computer chips causes noise to skyrocket—while sensors rapidly lose sensitivity.
You can get around all this by controlling a bot’s movement externally with light or magnets, which offloads multiple hardware components. But this sacrifices “programmability, sensing, and/or autonomy in the process,” wrote the team. Such microrobots struggle in changing environments and can only switch between a limited number of coded behaviors.
Alternatively, you can weave functions directly into the materials so microrobots change their properties as the environment shifts; in effect, the material itself does the computing. Most examples are soft and biocompatible, but they’re harder to manufacture at scale and often require expensive hardware to control, crippling real-world practicality.
Many of the essential, miniaturized components needed for “smart” microbots already exist. These include tiny sensors, information processing systems, and actuators to convert electrical signals into motion. The trick is wiring them all together. For example, given a “limited power budget,” it’s difficult to accommodate both propulsion and computation, wrote the team.
The team optimized each component for efficiency, and the design relied on tradeoffs. Increasing the microbot’s memory took more energy, for example, but could support complex behaviors. In the end, they were limited to just a few hundred bits of onboard data. But this was sufficient to store the microbot’s current state, or the memory of its actions and past commands. The team wrote a library of simple instructions—like “sense the environment”—which could be sent to the bots.
The final design has mini solar panels to soak up beams of light for power, temperature sensors, and a processing unit. A communications module, also using light, receives new commands and translates sensor readings into specific movements.
The team made the bots in bulk using a standard chipmaking process.
In one test, they asked the microbots to measure nearby temperature, digitize the number, and transmit it to the base station for evaluation. Instead of infrared beams or other wireless technologies, the system relied on specific movements to encode temperature measurements in bits. To save energy, the entire process used only two programming commands, one for sensing and another to encode and transmit data.
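The two-command loop described above (sense, then encode and transmit through movement) can be sketched in Python. The bit depth, temperature range, and the specific movements used as bit symbols are assumptions for illustration; the paper defines its own protocol.

```python
# Hypothetical sketch of the sense-and-transmit loop. All constants
# (8-bit code, 20-40 °C range, "spin"/"pause" as bit symbols) are
# illustrative assumptions, not the paper's actual protocol.

def sense_temperature(reading, t_min=20.0, t_max=40.0, bits=8):
    """Digitize a temperature reading into a fixed-width integer code."""
    code = round((reading - t_min) / (t_max - t_min) * (2**bits - 1))
    return max(0, min(2**bits - 1, code))  # clamp to the valid range

def encode_as_movements(code, bits=8):
    """Map each bit (MSB first) to a movement the base station can observe."""
    return ["spin" if (code >> i) & 1 else "pause"
            for i in reversed(range(bits))]

moves = encode_as_movements(sense_temperature(25.0))
```

The base station then reads the sequence of movements back into bits, so no radio or infrared hardware is needed on the bot itself.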
The microrobots beat state-of-the-art digital thermometers, capturing temperature differences of 0.3 degrees Celsius in a tiny space. The technology could be used to probe temperature changes in microfluidic chambers or tiny blood vessels, wrote the team.
The bots can also move along heat gradients like living organisms. At rest, they stay in place and turn in circles. But when they detect a temperature change, they automatically move toward more heated areas until the temperature is steady. They then switch back into relaxed mode. Beaming a different set of commands asking them to move to colder regions reverses their trajectory. The microrobots faithfully adapt to the new instructions and settle in cooler waters.
The team also built in passcodes. These pulses of light activate the microrobots and allow the researchers to send commands to the entire fleet or only to select groups. They could potentially use this to program more sophisticated robotic swarm behaviors.
Although still prototypes, the microrobots have a reprogrammable digital brain that senses, remembers, and acts. This means the scientists can assign them a wide range of tasks on demand. Up next, they aim to add communication between the microrobots for coordination and upgrade their motors for faster, more agile movement.
The post These Robots Are the Size of Single Cells and Cost Just a Penny Apiece appeared first on SingularityHub.
2025-12-15 23:00:00
Models that “think” through problems step by step before providing an answer use considerably more power than older models.
It’s not news to anyone that there are concerns about AI’s rising energy bill. But a new analysis shows the latest reasoning models are substantially more energy intensive than previous generations, raising the prospect that AI’s energy requirements and carbon footprint could grow faster than expected.
As AI tools become an ever more common fixture in our lives, concerns are growing about the amount of electricity required to run them. While worries first focused on the huge costs of training large models, today much of the sector’s energy demand is from responding to users’ queries.
And a new analysis from researchers at Hugging Face and Salesforce suggests that the latest generation of models, which “think” through problems step by step before providing an answer, use considerably more power than older models. They found that some models used 700 times more energy when their “reasoning” modes were activated.
“We should be smarter about the way that we use AI,” Hugging Face research scientist and project co-lead Sasha Luccioni told Bloomberg. “Choosing the right model for the right task is important.”
The new study is part of the AI Energy Score project, which aims to provide a standardized way to measure AI energy efficiency. Each model is subjected to 10 tasks using custom datasets and the latest generation of GPUs. The researchers then measure the number of watt-hours the models use to answer 1,000 queries.
The group assigns each model a star rating out of five, much like the energy efficiency ratings found on consumer goods in many countries. But the benchmark can only be applied to open or partially open models, so leading closed models from major AI labs can’t be tested.
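The benchmark’s bookkeeping amounts to normalizing measured energy per 1,000 queries and binning the result into stars. A minimal sketch, with made-up cutoff values (the AI Energy Score project sets its own per-task thresholds):

```python
# Illustrative leaderboard bookkeeping. The star cutoffs below are
# invented for this sketch, not the project's real thresholds.

def watt_hours_per_1k(total_wh, n_queries):
    """Normalize measured energy to watt-hours per 1,000 queries."""
    return total_wh / n_queries * 1000

def star_rating(wh_per_1k, cutoffs=(1.0, 5.0, 25.0, 125.0)):
    """5 stars is most efficient; each exceeded cutoff costs one star."""
    stars = 5
    for c in cutoffs:
        if wh_per_1k > c:
            stars -= 1
    return stars
```

With these hypothetical cutoffs, a model drawing 10 watt-hours per 1,000 queries would land at three stars, while one drawing 700 times more would bottom out at one.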
In this latest update to the project’s leaderboard, the researchers studied reasoning models for the first time. They found these models use, on average, 30 times more energy than models without reasoning capabilities or with their reasoning modes turned off, but the worst offenders used hundreds of times more.
The researchers say that this is largely due to the way AI reasoning works. These models are fundamentally text generators, and each chunk of text they output requires energy to produce. Rather than just providing an answer, reasoning models essentially “think aloud,” generating text that is supposed to correspond to some kind of inner monologue as they work through a problem.
This can boost the number of words they generate by hundreds of times, leading to a commensurate increase in their energy use. But the researchers found it can be tricky to work out which models are the most prone to this problem.
Traditionally, the size of a model was the best predictor of how much energy it would use. But with reasoning models, how verbose their reasoning chains are is often a bigger predictor, and this typically comes down to subtle quirks of the model rather than its size. The researchers say this is a key reason why benchmarks like this are important.
It’s not the first time researchers have attempted to assess the efficiency of reasoning models. A June study in Frontiers in Communication found that reasoning models can generate up to 50 times more CO₂ than models designed to provide a more concise response. The challenge, however, is that while reasoning models are less efficient, they are also much more powerful.
“Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences in Germany who led the study, said in a press release. “None of the models that kept emissions below 500 grams of CO₂ equivalent [total greenhouse gases released] achieved higher than 80 percent accuracy on answering the 1,000 questions correctly.”
So, while we may be getting a clearer picture of the energy impacts of the latest reasoning models, it may be hard to convince people not to use them.
The post Hugging Face Says AI Models With Reasoning Use 30x More Energy on Average appeared first on SingularityHub.
2025-12-13 23:00:00
OpenAI Releases GPT-5.2 After ‘Code Red’ Google Threat Alert
Benj Edwards | Ars Technica
“OpenAI says GPT-5.2 Thinking beats or ties ‘human professionals’ on 70.9 percent of tasks in the GDPval benchmark (compared to 53.3 percent for Gemini 3 Pro). The company also claims the model completes these tasks at more than 11 times the speed and less than 1 percent of the cost of human experts.”
1X Struck a Deal to Send Its ‘Home’ Humanoids to Factories and Warehouses
Rebecca Szkutak | TechCrunch
“The company announced a strategic partnership to make thousands of its humanoid robots available for [its backer] EQT’s portfolio companies on Thursday. …This deal involves shipping up to 10,000 1X Neo humanoid robots between 2026 and 2030 to EQT’s more than 300 portfolio companies with a concentration on manufacturing, warehousing, logistics, and other industrial use cases.”
China Launches 34,175-Mile AI Network That Acts Like One Massive Supercomputer
Gayoung Lee | Gizmodo
“Last week, state-run Science and Technology Daily reported the launch of the Future Network Test Facility (FNTF), a giant distributed AI computing pool capable of connecting distant computing centers. The high-speed optical network spans across 40 cities in China, measuring at about 34,175 miles (55,000 kilometers)—enough to circle the equator 1.5 times, according to the South China Morning Post.”
Aurora Will Have ‘Hundreds’ of Driverless Trucks on the Road by the End of 2026, CEO Says
Andrew J. Hawkins | The Verge
“Urmson says he expects ‘thousands’ of trucks on the road within the next two years. ‘It’ll be a little less visceral, because it’s not a consumer-facing product,’ he says. ‘But in terms of the expansion, I think we’ll start to see that happen pretty quickly.'”
This Incredible Map Shows the World’s 2.75 Billion Buildings
Jesus Diaz | Fast Company
“From the latest skyscraper in a Chinese megalopolis to a six‑foot‑tall yurt in Inner Mongolia, researchers at the Technical University of Munich claim they have created a map of all buildings worldwide: 2.75 billion building models set in high‑resolution 3D with a level of precision never before recorded.”
AI Hackers Are Coming Dangerously Close to Beating Humans
Robert McMillan | The Wall Street Journal ($)
“Artemis found bugs at lightning speed and it was cheap: It cost just under $60 an hour to run. Ragan says that human pen testers typically charge between $2,000 and $2,500 a day. But Artemis wasn’t perfect. About 18% of its bug reports were false positives. It also completely missed an obvious bug that most of the human testers spotted in a webpage.”
Overview Energy Wants to Beam Energy From Space to Existing Solar Farms
Tim De Chant | TechCrunch
“The startup plans to use large solar arrays in geosynchronous orbit—about 22,000 miles above Earth, where satellites match the planet’s rotation—to harvest sunlight. It will then use infrared lasers to transmit that power to utility-scale solar farms on Earth, allowing them to send power to the grid nearly round the clock.”
Why the AI Boom Is Unlike the Dot-Com Boom
David Streitfeld | The New York Times ($)
“Much of the rhetoric about a glorious world to come is the same [as the dot-com boom]. Fortunes are again being made, sometimes by the same tech people who made fortunes the first time around. Extravagant valuations are being given to companies that didn’t exist yesterday. For all the similarities, however, there are many differences that could lead to a distinctly different outcome.”
A First Look at Google’s Project Aura Glasses Built With Xreal
Victoria Song | The Verge
“Is it a headset? Smart glasses? Both? Those were the questions running through my head as I held Project Aura in my hands in a recent demo. It looked like a pair of chunky sunglasses, except for the cord dangling off the left side, leading down to a battery pack that also served as a trackpad. When I asked, Google’s reps told me they consider it a headset masquerading as glasses. They have a term for it, too: wired XR glasses.”
Bezos and Musk Race to Bring Data Centers to Space
Micah Maidenberg and Becky Peterson | The Wall Street Journal ($)
“Bezos’ Blue Origin has had a team working for more than a year on technology needed for orbital AI data centers, a person familiar with the matter said. Musk’s SpaceX plans to use an upgraded version of its Starlink satellites to host AI computing payloads, pitching the technology as part of a share sale that could value the company at $800 billion, according to people involved in the discussions.”
Scientists Thought Parkinson’s Was in Our Genes. It Might Be in the Water
David Ferry | Wired ($)
“Despite the avalanche of funding, the latest research suggests that only 10 to 15 percent of Parkinson’s cases can be fully explained by genetics. The other three-quarters are, functionally, a mystery. ‘More than two-thirds of people with PD don’t have any clear genetic link,’ says Briana De Miranda, a researcher at the University of Alabama at Birmingham. ‘So, we’re moving to a new question: What else could it be?'”
The post This Week’s Awesome Tech Stories From Around the Web (Through December 13) appeared first on SingularityHub.
2025-12-12 23:00:00
An immune tag-team promises to hold the virus in check for years—even without medication.
HIV was once a death sentence. Thanks to antiretroviral therapy, it’s now a chronic disease. But the daily treatment is for life. Without the drug, the virus rapidly rebounds.
Scientists have long hunted for a more permanent solution. One option they’ve explored is a stem cell transplant using donor cells from people who are naturally resistant to the virus. A handful of patients have been “cured” this way, in that they could go off antiretroviral therapy without a resurgence in the virus for years. But the therapy is difficult, costly, and hardly scalable.
Other methods are in the works. These include using the gene editor CRISPR to damage HIV’s genetic material in cells and mRNA vaccines that hunt down a range of mutated HIV viruses. While promising, they’re still early in development.
A small group of people may hold the key to a simpler, long-lasting treatment. In experimental trials of a therapy called broadly neutralizing anti-HIV antibodies, or bNAbs, some people with HIV were able to contain the virus for months to years even after they stopped taking drugs. But not everyone did.
Two studies this month reveal why: Combining a special type of immune T cell with immunotherapy “supercharges” the body’s ability to hunt down and destroy cells harboring HIV. These cellular reservoirs normally escape the immune system.
One trial led by the University of California, San Francisco (UCSF) merged T cell activation and bNAb treatment. In 7 of 10 participants, viral levels remained low for months after they stopped taking antiretroviral drugs.
Another study analyzed blood samples from 12 participants receiving bNAbs and compared those who were functionally cured to those who still relied on antiretroviral therapy. They zeroed in on an immune reaction bolstering long-term remission with the same T cells at its center.
“I do believe we are finally making real progress towards developing a therapy that may allow people to live a healthy life without the need of life-long medications,” said study author Steven Deeks in a press release.
HIV is a frustrating foe. The virus rapidly mutates, making it difficult to target with a vaccine. It also forms silent reservoirs inside cells. This means that while viral counts circulating in the blood may seem low, the virus rapidly rebounds if a patient ends treatment. Finally, HIV infects and kneecaps immune cells, especially those that hunt it down.
According to the World Health Organization, roughly 41 million people live with the virus globally, and over a million acquire the infection each year. Preventative measures such as a daily PrEP pill, or pre-exposure prophylaxis, guard people who don’t have the virus but are at high risk of infection. More recently, a newer, injectable PrEP formulation fully protected HIV-negative women from acquiring the virus in low- to middle-income countries.
Once infected, however, options are few. Antiretroviral therapy is the standard of care. But “lifelong ART is accompanied by numerous challenges, such as social stigma and fatigue associated with the need to take pills daily,” wrote Jonathan Li at the Brigham and Women’s Hospital, who was not involved in either study.
Curing HIV once seemed impossible. But in 2009, Timothy Ray Brown, also known as the Berlin patient, galvanized the field. He received a full blood-stem-cell transplant for leukemia, but the treatment also fought off his HIV infection, keeping the virus undetectable without drugs. Other successes soon followed, mostly using donor cells from people genetically immune to the virus. Earlier this month, researchers said a man receiving a non-HIV-resistant stem cell transplant had remained virus-free for over six years after stopping antiretroviral therapy.
While these cases prove that HIV can be controlled—or even eradicated—by the body, stem cell transplants are hardly scalable. Instead, the new studies turned to an emerging immunotherapy employing broadly neutralizing anti-HIV antibodies (bNAbs).
Compared to normal antibodies, bNAbs are extremely rare and powerful. They can neutralize a wide range of HIV strains. Clinical trials using bNAbs in people with HIV have found that some groups maintained low viral levels long after the antibodies left their system.
To understand why, one study examined blood samples from 12 people across four clinical trials. Each participant had received bNAbs treatment and subsequently ended antiretroviral therapy. Comparing those who controlled their HIV infection to those who didn’t, researchers found that a specific type of T cell was a major contributor to long-term remission.
Remarkably, even before receiving the antibody therapy, people with less HIV in their systems had higher levels of these T cells circulating in their bodies. Although the virus attacks immune cells, this population was especially resilient to HIV and almost resembled stem cells. They rapidly expanded and flooded the body with healthy HIV-hunting T cells. Adding bNAbs boosted both the number of these T cells and their efficiency at destroying the cells harboring HIV. Without a host, the virus can’t replicate or spread and withers away.
“Control [of viral load] wasn’t uniquely linked to the development of new types of [immune] responses; it was the quality of existing CD8+ T cell responses that appeared to make the difference,” said study author David Collins at Mass General Brigham in a press release.
If these T cells are key to long-term viral control, what if we artificially activated them?
A small clinical trial at UCSF tested the theory in 10 people with HIV. The participants first received a previously validated vaccine that boosts HIV-hunting T cell activity. This was followed by a drug that activates overall immune responses and then two long-lasting bNAb treatments. The patients were then taken off antiretroviral therapy.
After the one-time treatment, seven participants maintained low levels of the virus over the following months. One had undetectable circulating virus for more than a year and a half. Like Collins’s results, bloodwork found the strongest marker for viral control was a high level of those stem cell-like T cells. People with rapidly expanding levels of these T cells, which then transformed into “killer” versions targeting HIV-infected cells, better controlled the infection.
“It’s like…[the cells] were hanging out waiting for their target, kind of like a cat getting ready to pounce on a mouse,” said study author Rachel Rutishauser in a press release.
Findings from both studies converge on a similar message: Long-term HIV management without antiretroviral therapy depends, at least in part, on a synergy between T cells and immunotherapy. Methods amping up stem cell-like T cells before administering bNAbs could give the immune system a head start in the HIV battle and achieve longer-lasting effects.
But these T cells are likely only part of the picture. Other immune molecules, such as a patient’s naturally occurring antibodies against the virus, may also play a role. Going forward, the combination treatment will need to be simplified and tested on a larger population. For now, antiretroviral therapy remains the best treatment option.
“This is not the end game,” said study author Michael Peluso at UCSF. “But it proves we can push progress on a challenge we often frame as unsolvable.”
The post New Immune Treatment May Suppress HIV—No Daily Pills Required appeared first on SingularityHub.
2025-12-11 23:00:00
The technology is still in its infancy. But its trajectory suggests that ethical conversations may become pressing far sooner than expected.
As prominent artificial intelligence researchers eye limits to the current phase of the technology, a different approach is gaining attention: using living human brain cells as computational hardware.
These “biocomputers” are still in their early days. They can play simple games such as Pong and perform basic speech recognition.
But the excitement is fueled by three converging trends.
First, venture capital is flowing into anything adjacent to AI, making speculative ideas suddenly fundable. Second, techniques for growing brain tissue outside the body have matured, with the pharmaceutical industry jumping on board. Third, rapid advances in brain–computer interfaces have fostered growing acceptance of technologies that blur the line between biology and machines.
But plenty of questions remain. Are we witnessing genuine breakthroughs, or another round of tech-driven hype? And what ethical questions arise when human brain tissue becomes a computational component?
For almost 50 years, neuroscientists have grown neurons on arrays of tiny electrodes to study how they fire under controlled conditions.

By the early 2000s, researchers attempted rudimentary two-way communication between neurons and electrodes, planting the first seeds of a bio-hybrid computer. But progress stalled until another strand of research took off: brain organoids.
In 2013, scientists demonstrated that stem cells could self-organize into three-dimensional brain-like structures. These organoids spread rapidly through biomedical research, increasingly aided by “organ-on-a-chip” devices designed to mimic aspects of human physiology outside the body.
Today, using stem cell-derived neural tissue is commonplace—from drug testing to developmental research. Yet the neural activity in these models remains primitive, far from the organized firing patterns that underpin cognition or consciousness in a real brain.
While complex network behavior is beginning to emerge even without much external stimulation, experts generally agree that current organoids are not conscious, nor close to it.
The field entered a new phase in 2022, when Melbourne-based Cortical Labs published a high-profile study showing cultured neurons learning to play Pong in a closed-loop system.
The paper drew intense media attention—less for the experiment itself than for its use of the phrase “embodied sentience.” Many neuroscientists said the language overstated the system’s capabilities, arguing it was misleading or ethically careless.
A year later, a consortium of researchers introduced the broader term “organoid intelligence.” This is catchy and media-friendly, but it risks implying parity with artificial intelligence systems, despite the vast gap between them.
Ethical debates have also lagged behind the technology. Most bioethics frameworks focus on brain organoids as biomedical tools—not as components of biohybrid computing systems.
Leading organoid researchers have called for urgent updates to ethics guidelines, noting that rapid research development, and even commercialization, is outpacing governance.
Meanwhile, despite front-page news in Nature, many people remain unclear about what a “living computer” actually is.
Companies and academic groups in the United States, Switzerland, China, and Australia are racing to build biohybrid computing platforms.
Swiss company FinalSpark already offers remote access to its neural organoids. Cortical Labs is preparing to ship a desktop biocomputer called CL1. Both expect customers well beyond the pharmaceutical industry—including AI researchers looking for new kinds of computing systems.
Academic aspirations are rising too. A team at UC San Diego has ambitiously proposed using organoid-based systems to predict oil spill trajectories in the Amazon by 2028.
The coming years will determine whether organoid intelligence transforms computing or becomes a short-lived curiosity. At present, claims of intelligence or consciousness are unsupported. Today’s systems display only simple capacity to respond and adapt, not anything resembling higher cognition.
More immediate work focuses on consistently reproducing prototype systems, scaling them up, and finding practical uses for the technology.
Several teams are exploring organoids as an alternative to animal models in neuroscience and toxicology.
One group has proposed a framework for testing how chemicals affect early brain development. Other studies show improved prediction of epilepsy-related brain activity using neurons and electronic systems. These applications are incremental, but plausible.
Much of what makes the field compelling—and unsettling—is the broader context.
As billionaires such as Elon Musk pursue neural implants and transhumanist visions, organoid intelligence prompts deep questions.
What counts as intelligence? When, if ever, might a network of human cells deserve moral consideration? And how should society regulate biological systems that behave, in limited ways, like tiny computers?
The technology is still in its infancy. But its trajectory suggests that conversations about consciousness, personhood, and the ethics of mixing living tissue with machines may become pressing far sooner than expected.
Disclosure statement: Bram Servais formerly worked for Cortical Labs but holds no shared patents or stock and has severed all financial ties.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post How Scientists Are Growing Computers From Human Brain Cells—and Why They Want to Keep Doing It appeared first on SingularityHub.
2025-12-10 01:35:49
GPT-4, Claude, and Llama sought out popular peers, connected with others via existing friends, and gravitated towards those similar to them.
As AI wheedles its way into our lives, how it behaves socially is becoming a pressing question. A new study suggests AI models build social networks in much the same way as humans.
Tech companies are enamored with the idea that agents—autonomous bots powered by large language models—will soon work alongside humans as digital assistants in everyday life. But for that to happen, these agents will need to navigate humanity’s complex social structures.
This prospect prompted researchers at Arizona State University to investigate how AI systems might approach the delicate task of social networking. In a recent paper in PNAS Nexus, the team reports that models such as GPT-4, Claude, and Llama seem to behave like humans by seeking out already popular peers, connecting with others via existing friends, and gravitating towards those similar to them.
“We find that [large language models] not only mimic these principles but do so with a degree of sophistication that closely aligns with human behaviors,” the authors write.
To investigate how AI might form social structures, the researchers assigned AI models a series of controlled tasks where they were given information about a network of hypothetical individuals and asked to decide who to connect to. The team designed the experiments to investigate the extent to which models would replicate three key tendencies in human networking behavior.
The first tendency is known as preferential attachment, where individuals link up with already well-connected people, creating a kind of “rich get richer” dynamic. The second is triadic closure, in which individuals are more likely to connect with friends of friends. And the final behavior is homophily, or the tendency to connect with others who share similar attributes.
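The three tendencies can be expressed as a simple scoring rule over candidate connections. The toy graph, node attributes, and equal weights below are invented for illustration and are not taken from the paper.

```python
# Toy sketch: score each candidate by the three networking tendencies.
# The graph, attributes, and weights are made up for illustration.

def score_candidate(me, cand, adj, attrs,
                    w_pop=1.0, w_mutual=1.0, w_similar=1.0):
    popularity = len(adj[cand])                        # preferential attachment
    mutual = len(adj[me] & adj[cand])                  # triadic closure
    similar = 1.0 if attrs[me] == attrs[cand] else 0.0 # homophily
    return w_pop * popularity + w_mutual * mutual + w_similar * similar

# Hypothetical network: who is already connected to whom, plus one attribute.
adj = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}
attrs = {"a": "cs", "b": "cs", "c": "bio", "d": "cs"}

# "a" picks the candidate with the highest combined score.
best = max(("b", "c", "d"), key=lambda n: score_candidate("a", n, adj, attrs))
```

Here node “b” wins on all three counts: it is the most connected, shares a mutual friend with “a,” and has the same attribute.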
The team found the models mirrored all of these very human tendencies in their experiments, so they decided to test the algorithms on more realistic problems.
They borrowed datasets that captured three different kinds of real-world social networks—groups of friends at college, nationwide phone-call data, and internal company data that mapped out communication history between different employees. They then fed the models various details about individuals within these networks and got them to reconstruct the connections step by step.
Across all three networks, the models replicated the kind of decision making seen in humans. The most dominant effect tended to be homophily, though the researchers reported that in the company communication settings they saw what they called “career-advancement dynamics”—with lower-level employees consistently preferring to connect to higher-status managers.
Finally, the team decided to compare AI’s decisions to humans directly, enlisting more than 200 participants and giving them the same task as the machines. Both had to pick which individuals to connect to in a network under two different contexts—forming friendships at college and making professional connections at work. They found both humans and AI prioritized connecting with people similar to them in the friendship setting and more popular people in the professional setting.
The researchers say the high level of consistency between AI and human decision making could make these models useful for simulating human social dynamics. This could be helpful in social science research but also, more practically, for things like testing how people might respond to new regulations or how changes to moderation rules might reshape social networks.
However, they also note this means agents could reinforce some less desirable human tendencies as well, such as the inclination to create echo chambers, information silos, and rigid social hierarchies.
In fact, they found that while there were some outliers in the human groups, the models were more consistent in their decision making. That suggests that introducing them to real social networks could reduce the overall diversity of behavior, reinforcing any structural biases in those networks.
Nonetheless, it seems future human-machine social networks may end up looking more familiar than one might expect.
The post Study: AI Chatbots Choose Friends Just Like Humans Do appeared first on SingularityHub.