2025-12-12 23:00:00
An immune tag-team promises to hold the virus in check for years—even without medication.
HIV was once a death sentence. Thanks to antiretroviral therapy, it’s now a chronic disease. But the daily treatment is for life. Without the drug, the virus rapidly rebounds.
Scientists have long hunted for a more permanent solution. One option they’ve explored is a stem cell transplant using donor cells from people who are naturally resistant to the virus. A handful of patients have been “cured” this way, in that they could go off antiretroviral therapy without a resurgence in the virus for years. But the therapy is difficult, costly, and hardly scalable.
Other methods are in the works. These include using the gene editor CRISPR to damage HIV’s genetic material inside cells and mRNA vaccines that train the immune system to hunt down a range of mutated HIV strains. While promising, they’re still early in development.
A small group of people may hold the key to a simpler, long-lasting treatment. In experimental trials of a therapy called broadly neutralizing anti-HIV antibodies, or bNAbs, some people with HIV were able to contain the virus for months to years even after they stopped taking drugs. But not everyone did.
Two studies this month reveal why: Combining a special type of immune T cell with immunotherapy “supercharges” the body’s ability to hunt down and destroy cells harboring HIV. These cellular reservoirs normally escape the immune system.
One trial led by the University of California, San Francisco (UCSF) merged T cell activation and bNAb treatment. In 7 of 10 participants, viral levels remained low for months after they stopped taking antiretroviral drugs.
Another study analyzed blood samples from 12 participants receiving bNAbs and compared those who were functionally cured to those who still relied on antiretroviral therapy. The researchers zeroed in on an immune reaction that bolsters long-term remission, with the same T cells at its center.
“I do believe we are finally making real progress towards developing a therapy that may allow people to live a healthy life without the need of life-long medications,” said study author Steven Deeks in a press release.
HIV is a frustrating foe. The virus rapidly mutates, making it difficult to target with a vaccine. It also forms silent reservoirs inside cells. This means that while viral counts circulating in the blood may seem low, the virus rapidly rebounds if a patient ends treatment. Finally, HIV infects and kneecaps immune cells, especially those that hunt it down.
According to the World Health Organization, roughly 41 million people live with the virus globally, and over a million acquire the infection each year. Preventative measures such as a daily PrEP pill, or pre-exposure prophylaxis, guard people who don’t have the virus but are at high risk of infection. More recently, an injectable PrEP formulation fully protected HIV-negative women from acquiring the virus in low- to middle-income countries.
Once infected, however, options are few. Antiretroviral therapy is the standard of care. But “lifelong ART is accompanied by numerous challenges, such as social stigma and fatigue associated with the need to take pills daily,” wrote Jonathan Li at the Brigham and Women’s Hospital, who was not involved in either study.
Curing HIV once seemed impossible. But in 2009, Timothy Ray Brown, also known as the Berlin patient, galvanized the field. He received a full blood-stem-cell transplant for leukemia, but the treatment also fought off his HIV infection, keeping the virus undetectable without drugs. Other successes soon followed, mostly using donor cells from people genetically immune to the virus. Earlier this month, researchers said a man who received a stem cell transplant from a donor without natural HIV resistance had remained virus-free for over six years after stopping antiretroviral therapy.
While these cases prove that HIV can be controlled—or even eradicated—by the body, stem cell transplants are hardly scalable. Instead, the new studies turned to an emerging immunotherapy employing broadly neutralizing anti-HIV antibodies (bNAbs).
Compared to normal antibodies, bNAbs are extremely rare and powerful. They can neutralize a wide range of HIV strains. Clinical trials using bNAbs in people with HIV have found that some groups maintained low viral levels long after the antibodies left their system.
To understand why, one study examined blood samples from 12 people across four clinical trials. Each participant had received bNAbs treatment and subsequently ended antiretroviral therapy. Comparing those who controlled their HIV infection to those who didn’t, researchers found that a specific type of T cell was a major contributor to long-term remission.
Remarkably, even before receiving the antibody therapy, people with less HIV in their systems had higher levels of these T cells circulating in their bodies. Although the virus attacks immune cells, this population was especially resilient to HIV and almost resembled stem cells. They rapidly expanded and flooded the body with healthy HIV-hunting T cells. Adding bNAbs boosted both the number of these T cells and their efficiency at destroying the cells that give HIV safe harbor. Without host cells, the virus can’t replicate or spread and withers away.
“Control [of viral load] wasn’t uniquely linked to the development of new types of [immune] responses; it was the quality of existing CD8+ T cell responses that appeared to make the difference,” said study author David Collins at Mass General Brigham in a press release.
If these T cells are key to long-term viral control, what if we artificially activated them?
A small clinical trial at UCSF tested the theory in 10 people with HIV. The participants first received a previously validated vaccine that boosts HIV-hunting T cell activity. This was followed by a drug that activates overall immune responses and then two long-lasting bNAb treatments. The patients were then taken off antiretroviral therapy.
After the one-time treatment, seven participants maintained low levels of the virus over the following months. One had undetectable circulating virus for more than a year and a half. Echoing Collins’s results, bloodwork showed that the strongest marker for viral control was a high level of those stem cell-like T cells. People whose T cells rapidly expanded and then transformed into “killer” versions targeting HIV-infected cells controlled the infection better.
“It’s like…[the cells] were hanging out waiting for their target, kind of like a cat getting ready to pounce on a mouse,” said study author Rachel Rutishauser in a press release.
Findings from both studies converge on a similar message: Long-term HIV management without antiretroviral therapy depends, at least in part, on a synergy between T cells and immunotherapy. Methods amping up stem cell-like T cells before administering bNAbs could give the immune system a head start in the HIV battle and achieve longer-lasting effects.
But these T cells are likely only part of the picture. Other immune molecules, such as a patient’s naturally occurring antibodies against the virus, may also play a role. Going forward, the combination treatment will need to be simplified and tested in a larger population. For now, antiretroviral therapy remains the best treatment option.
“This is not the end game,” said study author Michael Peluso at UCSF. “But it proves we can push progress on a challenge we often frame as unsolvable.”
The post New Immune Treatment May Suppress HIV—No Daily Pills Required appeared first on SingularityHub.
2025-12-11 23:00:00
The technology is still in its infancy. But its trajectory suggests that ethical conversations may become pressing far sooner than expected.
As prominent artificial intelligence researchers eye limits to the current phase of the technology, a different approach is gaining attention: using living human brain cells as computational hardware.
These “biocomputers” are still in their early days. They can play simple games such as Pong and perform basic speech recognition.
But the excitement is fueled by three converging trends.
First, venture capital is flowing into anything adjacent to AI, making speculative ideas suddenly fundable. Second, techniques for growing brain tissue outside the body have matured, with the pharmaceutical industry jumping on board. Third, rapid advances in brain–computer interfaces have driven growing acceptance of technologies that blur the line between biology and machines.
But plenty of questions remain. Are we witnessing genuine breakthroughs, or another round of tech-driven hype? And what ethical questions arise when human brain tissue becomes a computational component?
For almost 50 years, neuroscientists have grown neurons on arrays of tiny electrodes to study how they fire under controlled conditions.
By the early 2000s, researchers attempted rudimentary two-way communication between neurons and electrodes, planting the first seeds of a bio-hybrid computer. But progress stalled until another strand of research took off: brain organoids.
In 2013, scientists demonstrated that stem cells could self-organize into three-dimensional brain-like structures. These organoids spread rapidly through biomedical research, increasingly aided by “organ-on-a-chip” devices designed to mimic aspects of human physiology outside the body.
Today, using stem cell-derived neural tissue is commonplace—from drug testing to developmental research. Yet the neural activity in these models remains primitive, far from the organized firing patterns that underpin cognition or consciousness in a real brain.
While complex network behavior is beginning to emerge even without much external stimulation, experts generally agree that current organoids are not conscious, nor close to it.
The field entered a new phase in 2022, when Melbourne-based Cortical Labs published a high-profile study showing cultured neurons learning to play Pong in a closed-loop system.
The paper drew intense media attention—less for the experiment itself than for its use of the phrase “embodied sentience.” Many neuroscientists said the language overstated the system’s capabilities, arguing it was misleading or ethically careless.
A year later, a consortium of researchers introduced the broader term “organoid intelligence.” This is catchy and media-friendly, but it risks implying parity with artificial intelligence systems, despite the vast gap between them.
Ethical debates have also lagged behind the technology. Most bioethics frameworks focus on brain organoids as biomedical tools—not as components of biohybrid computing systems.
Leading organoid researchers have called for urgent updates to ethics guidelines, noting that rapid research development, and even commercialization, is outpacing governance.
Meanwhile, despite front-page news in Nature, many people remain unclear about what a “living computer” actually is.
Companies and academic groups in the United States, Switzerland, China, and Australia are racing to build biohybrid computing platforms.
Swiss company FinalSpark already offers remote access to its neural organoids. Cortical Labs is preparing to ship a desktop biocomputer called CL1. Both expect customers well beyond the pharmaceutical industry—including AI researchers looking for new kinds of computing systems.
Academic aspirations are rising too. A team at UC San Diego has ambitiously proposed using organoid-based systems to predict oil spill trajectories in the Amazon by 2028.
The coming years will determine whether organoid intelligence transforms computing or becomes a short-lived curiosity. At present, claims of intelligence or consciousness are unsupported. Today’s systems display only a simple capacity to respond and adapt, not anything resembling higher cognition.
More immediate work focuses on consistently reproducing prototype systems, scaling them up, and finding practical uses for the technology.
Several teams are exploring organoids as an alternative to animal models in neuroscience and toxicology.
One group has proposed a framework for testing how chemicals affect early brain development. Other studies show improved prediction of epilepsy-related brain activity using neurons and electronic systems. These applications are incremental, but plausible.
Much of what makes the field compelling—and unsettling—is the broader context.
As billionaires such as Elon Musk pursue neural implants and transhumanist visions, organoid intelligence prompts deep questions.
What counts as intelligence? When, if ever, might a network of human cells deserve moral consideration? And how should society regulate biological systems that behave, in limited ways, like tiny computers?
The technology is still in its infancy. But its trajectory suggests that conversations about consciousness, personhood, and the ethics of mixing living tissue with machines may become pressing far sooner than expected.
Disclosure statement: Bram Servais formerly worked for Cortical Labs but holds no shared patents or stock and has severed all financial ties.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post How Scientists Are Growing Computers From Human Brain Cells—and Why They Want to Keep Doing It appeared first on SingularityHub.
2025-12-10 01:35:49
GPT-4, Claude, and Llama sought out popular peers, connected with others via existing friends, and gravitated towards those similar to them.
As AI wheedles its way into our lives, how it behaves socially is becoming a pressing question. A new study suggests AI models build social networks in much the same way as humans.
Tech companies are enamored with the idea that agents—autonomous bots powered by large language models—will soon work alongside humans as digital assistants in everyday life. But for that to happen, these agents will need to navigate humanity’s complex social structures.
This prospect prompted researchers at Arizona State University to investigate how AI systems might approach the delicate task of social networking. In a recent paper in PNAS Nexus, the team reports that models such as GPT-4, Claude, and Llama seem to behave like humans by seeking out already popular peers, connecting with others via existing friends, and gravitating towards those similar to them.
“We find that [large language models] not only mimic these principles but do so with a degree of sophistication that closely aligns with human behaviors,” the authors write.
To investigate how AI might form social structures, the researchers assigned AI models a series of controlled tasks where they were given information about a network of hypothetical individuals and asked to decide who to connect to. The team designed the experiments to investigate the extent to which models would replicate three key tendencies in human networking behavior.
The first tendency is known as preferential attachment, where individuals link up with already well-connected people, creating a kind of “rich get richer” dynamic. The second is triadic closure, in which individuals are more likely to connect with friends of friends. And the final behavior is homophily, or the tendency to connect to others who share similar attributes.
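To make these tendencies concrete, here is a minimal Python sketch of a single networking step that scores candidates on all three at once. The toy network, node attributes, and weights are invented for illustration; this is not the study’s actual methodology, which inferred the tendencies from the choices the models made.

```python
import random

# Toy network: adjacency sets plus one attribute per node.
# All values here are invented for illustration.
network = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
attributes = {0: "red", 1: "blue", 2: "red", 3: "blue"}

def connection_score(me, candidate):
    """Score a potential tie using the three tendencies described above."""
    degree = len(network[candidate])                   # preferential attachment
    mutual = len(network[me] & network[candidate])     # triadic closure
    similar = attributes[me] == attributes[candidate]  # homophily
    return 1.0 * degree + 2.0 * mutual + (3.0 if similar else 0.0)  # arbitrary weights

def choose_connection(me):
    candidates = [n for n in network if n != me and n not in network[me]]
    weights = [connection_score(me, c) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

new_friend = choose_connection(3)
network[3].add(new_friend)
network[new_friend].add(3)
print(f"Node 3 connects to node {new_friend}")
```

In this toy version each tendency simply adds to a candidate’s weight; the interesting empirical question, which the researchers probed, is how strongly each one shows up in the connections a model actually chooses.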
The team found the models mirrored all of these very human tendencies in their experiments, so they decided to test the algorithms on more realistic problems.
They borrowed datasets that captured three different kinds of real-world social networks—groups of friends at college, nationwide phone-call data, and internal company data that mapped out communication history between different employees. They then fed the models various details about individuals within these networks and got them to reconstruct the connections step by step.
Across all three networks, the models replicated the kind of decision making seen in humans. The most dominant effect tended to be homophily, though the researchers reported that in the company communication settings they saw what they called “career-advancement dynamics”—with lower-level employees consistently preferring to connect to higher-status managers.
Finally, the team decided to compare AI’s decisions to humans directly, enlisting more than 200 participants and giving them the same task as the machines. Both had to pick which individuals to connect to in a network under two different contexts—forming friendships at college and making professional connections at work. They found both humans and AI prioritized connecting with people similar to them in the friendship setting and more popular people in the professional setting.
The researchers say the high level of consistency between AI and human decision making could make these models useful for simulating human social dynamics. This could be helpful in social science research but also, more practically, for things like testing how people might respond to new regulations or how changes to moderation rules might reshape social networks.
However, they also note this means agents could reinforce some less desirable human tendencies as well, such as the inclination to create echo chambers, information silos, and rigid social hierarchies.
In fact, they found that while there were some outliers in the human groups, the models were more consistent in their decision making. That suggests that introducing them to real social networks could reduce the overall diversity of behavior, reinforcing any structural biases in those networks.
Nonetheless, it seems future human-machine social networks may end up looking more familiar than one might expect.
The post Study: AI Chatbots Choose Friends Just Like Humans Do appeared first on SingularityHub.
2025-12-09 07:55:01
The vaccine stopped runaway allergic reactions for a year in mice. It could evolve into a blanket therapy for food allergies, from peanuts to shellfish.
‘Tis the season for overindulgence. But for people with allergies, holiday feasting can be strewn with landmines.
Over three million people worldwide tiptoe around a food allergy. Even more experience watery eyes, runny noses, and uncontrollable sneezing from dust, pollen, or cuddling with a fluffy pet. Over-the-counter medications can control symptoms. But in some people, allergic responses turn deadly.
In anaphylaxis, an overactive immune system releases a flood of inflammatory chemicals that closes up the throat. This chemical storm stresses out the heart and blood vessels and limits oxygen to the brain and other organs.
Early diagnosis, especially of shellfish or nut allergies, helps people avoid these foods. And in an emergency, EpiPens loaded with epinephrine can relax airways and save lives. But the pens must be carried at all times, and patients—especially young children—struggle with this.
An alternative is to train the immune system to neutralize its over-zealous response. This month, a team from the University of Toulouse in France presented a long-lasting treatment that fights off anaphylactic shock in mice. Using a vaccine, they rewired part of the immune system to battle Immunoglobulin E (IgE), a protein that’s involved in severe allergic reactions.
A single injection into mice launched a tsunami of antibodies against IgE, and levels of those antibodies remained high for at least 12 months—which is over half of a mouse’s life. Despite triggering an immune civil war, the mice’s defenses were still able to fight a parasitic infection. The vaccine is, in theory, a blanket therapy for most food allergies, from peanuts to shellfish.
Although it needs more testing before clinical trials, the treatment is a “very enticing therapeutic candidate that fills an important need,” wrote Danielle Libera at McMaster University and colleagues, who were not involved in the study.
An army of immune cells roams our bodies to surveil and fight off invaders. When the system detects danger—pathogens, cancer cells, or foreign organs—it springs into action.
Some cells locate the threat and act as a beacon to other immune troops. T cells activate and physically lock onto a target, releasing toxic chemicals that punch holes in the invader’s protective membrane. B cells send in tailored antibodies to further neutralize the enemy.
But sometimes the well-oiled immune machine goes awry. Allergies are caused by friendly fire from B cells as they churn out antibodies to suit the body’s needs. Immunoglobulin G (IgG) provides overall immune support. Immunoglobulin A (IgA) protects the lining of the gut and lungs. IgE fights off parasites—and also triggers severe allergic reactions.
In food allergies, for example, allergens in the gut trigger B cells to switch antibody production from IgG to allergen-specific IgE. In the bloodstream, IgE meets up with mast cells, sensitizes them to the allergen, and keeps them on high alert.
If the person eats food containing the same allergen again, the allergen grabs onto these sensitized cells and prompts them to release a deluge of chemicals, such as histamines.
Cue immediate symptoms: Blood vessels dilate and leak, causing flushing, swelling, and a sudden drop in blood pressure. Smooth muscles contract and restrict airways. Mast cells recruit more immune fighters, and mucus and inflammation in the lungs skyrocket.
EpiPens immediately counteract some of these responses and provide valuable time for more intensive treatment. But patients must have one nearby, and the pens aren’t preventative. In 2024, the US FDA approved an antibody therapy that lowers IgE in the body as a preventative measure against reactions from accidental allergen exposure. But the treatment requires an injection every two to four weeks, is costly, and ironically, can inadvertently trigger anaphylaxis in some people.
Instead of injecting an antibody against IgE, why not coax the body to make its own?
The idea was first pitched in the early 1990s. But there were roadblocks, most notably side effects. Earlier attempts at an IgE vaccine unexpectedly activated mast cells and triggered runaway immune reactions. The immune system also rapidly adapted. Newly formed IgE antibodies can be tagged as invaders, resulting in a counterattack that depletes antibody levels over time.
However, the authors of the latest study had access to a wealth of new information. Atomic-level scans revealed that IgE toggles between two states. In an “open” state, IgE grabs onto mast cells and allergens, forming a bridge that triggers allergic responses. But some antibodies can lock IgE into a “closed” state where it no longer connects with mast cells, severing the anaphylactic cascade.
The team engineered a vaccine using these antibodies to keep IgE in its closed state. The vaccine also stimulates the immune system to produce high levels of the antibodies.
Called IgE-K, the vaccine protected mice from multiple allergic reactions, including to peanuts, and completely prevented anaphylaxis. Two vaccine doses produced persistent antibodies that lasted for a year at levels high enough to ward off additional allergic reactions.
The results indicate that IgE-K may overcome depletion and establish a long-term antibody reservoir, wrote Libera and colleagues. It’s an especially promising strategy for food allergies that are lifelong in more than 80 percent of affected people.
Although the vaccine dampened IgE activity, it didn’t interfere with the antibody’s ability to clear parasites. Vaccinated mice knocked out a worm infection similarly to their non-treated peers. However, the experimental model relied on mast cells to fight off the infection as opposed to IgE per se. The team is now exploring the vaccine’s impact on other parts of the immune system, especially the B cells in charge of making antibodies.
The study is a first step. But if all goes well, kids with severe allergies could have their PB&J and eat it too.
The post Scientists Just Developed a Lasting Vaccine to Prevent Deadly Allergic Reactions appeared first on SingularityHub.
2025-12-06 23:00:00
After Neuralink, Max Hodak Is Building Something Even Wilder
Connie Loizos | TechCrunch
“What makes this conversation remarkable is how concrete everything sounds. Hodak isn’t hand-waving about ‘someday.’ He’s got timelines, patient numbers, and regulatory pathways. ‘By 2035, [biohybrid neural interfaces] will be basically available for patients in need,’ he says. ‘And that will start to really deform the world in interesting ways.'”
OpenAI Co-Founder Sutskever Joins the Skeptics
Stephanie Palazzolo | The Information ($)
“There’s rising skepticism among researchers, including OpenAI co-founder Ilya Sutskever, about the effectiveness of RL [reinforcement learning] and whether it can advance AI to the level of artificial general intelligence, on par with human experts in scientific research, healthcare, and other domains. …[In a rare interview, Sutskever] said researchers use RL to help the models ace the evaluations, but that doesn’t improve the way the models generalize, or handle a wide variety of tasks.”
AR Ski Goggles Show You the Hazards That Your Eyes Alone Can’t See
Maryna Holovnova | New Atlas
“Since there was no product on the market for improving visibility in bad weather, he and his colleagues invented one. …[The goggles] capture landscape details and textures that the human eye cannot see, and the enhanced 3D video is shown to you instantly through the augmented-reality displays with a latency of less than 30 milliseconds—way below human perception (anything under 50 ms is essentially imperceptible).”
Cold Metal Fusion Makes It Easy to 3D Print Titanium
Drew Robb | IEEE Spectrum
“CADmore Metal has introduced a fresh take on 3D printing metal components to the North American market known as cold metal fusion (CMF). John Carrington, the company’s CEO, claims CMF produces stronger 3D printed metal parts that are cheaper and faster to make. That includes titanium components, which have historically caused trouble for 3D printers.”
AI Chatbots Can Sway Voters Better Than Political Advertisements
Michelle Kim | MIT Technology Review ($)
“A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things.”
Varda Says It Has Proven Space Manufacturing Works—Now It Wants to Make It Boring
Connie Loizos | TechCrunch
“The Varda Space Industries CEO predicts that within 10 years, someone could stand at a landing site and watch multiple specialized spacecraft per night zooming toward Earth like shooting stars, each carrying pharmaceuticals manufactured in space. Within 15 to 20 years, he says, it will be cheaper to send a working-class employee to orbit for a month than to keep them on Earth.”
A Startup Says It Has Found a Hidden Source of Geothermal Energy
Molly Taft | Wired ($)
“Zanskar, which uses AI to find hidden geothermal resources deep underground, says that it has identified a new commercially viable site for a potential power plant. The discovery, the company claims, is the first of its kind made by the industry in decades. The find is the culmination of years of research on how to find these resources—and points to the growing promise of geothermal energy.”
One Day, AI Might Be Better Than You at Surfing the Web. That Day Isn’t Today.
Victoria Song | The Verge
“The pitch is to reorient how we browse, to move us away from the search engines that have reigned for the past three decades. The central idea is the same as we’ve heard from all the other agents-all-the-way-down companies: AI will be just as good as you are at surfing the web. Possibly better. Big, if true.”
California’s Ban on Self-Driving Trucks Could Soon Be Over
Kirsten Korosec | TechCrunch
“California regulators have released revised rules that would allow companies to test and eventually deploy self-driving trucks on public highways. …While robotaxis have become commonplace in the San Francisco Bay Area and parts of Los Angeles, autonomous trucks are absent because regulations ban any driverless vehicles weighing over 10,000 pounds from testing on public roads.”
Meta Could Ax Up to One-Third of Its ‘Metaverse’ Budget Next Year
Emma Roth | The Verge
“Meta, which changed its name from Facebook to align itself with the metaverse, has poured billions into building out its vision for virtual worlds over the past few years. But CEO Mark Zuckerberg has since shifted the company’s focus to developing AI superintelligence with a series of high-profile hires.”
Astronomers Have Found 6,000 Exoplanets—but This Could Be the First Known Exomoon
Gayoung Lee | Gizmodo
“The object appears to be around 0.4 Jupiter masses, which is more than seven Neptune masses, and is still much smaller than HD 206893 B at 28 Jupiter masses. So it’s an absolutely gigantic exomoon orbiting an absolutely gigantic exoplanet. Well, if true. As the researchers themselves admit, the alleged exomoon will now have to face scrutiny from the wider astronomical community.”
The post This Week’s Awesome Tech Stories From Around the Web (Through December 6) appeared first on SingularityHub.
2025-12-05 23:00:00
History says we can’t be too sure.
OpenAI chief executive Sam Altman—perhaps the most prominent face of the artificial intelligence boom that accelerated with the launch of ChatGPT in 2022—loves scaling laws.
These widely admired rules of thumb linking the size of an AI model with its capabilities inform much of the AI industry’s headlong rush to buy up powerful computer chips, build unimaginably large data centers, and reopen shuttered nuclear plants.
As Altman argued in a blog post earlier this year, the thinking is that the “intelligence” of an AI model “roughly equals the log of the resources used to train and run it”—meaning you can steadily produce better performance by exponentially increasing the scale of data and computing power involved.
First observed in 2020 and further refined in 2022, the scaling laws for large language models (LLMs) come from drawing lines on charts of experimental data. For engineers, they give a simple formula that tells you how big to build the next model and what performance increase to expect.
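To make “drawing lines on charts” concrete, here is a minimal sketch of how such a curve is fitted. The compute budgets and losses below are invented, but published scaling studies fit power laws of a similar shape to many real training runs.

```python
import numpy as np

# Hypothetical training runs: compute budgets (FLOPs) and measured losses.
# These numbers are made up for illustration.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
loss = np.array([3.10, 2.72, 2.41, 2.16, 1.95])

# A power law, loss = a * C^(-b), is a straight line in log-log space,
# so fit one there: log(loss) = log(a) - b * log(C).
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope
print(f"fitted law: loss = {a:.2f} * C^(-{b:.3f})")

# Extrapolating an order of magnitude beyond the data is the step
# that assumes the curve keeps holding.
print(f"predicted loss at 1e23 FLOPs: {a * 1e23 ** (-b):.2f}")
```

The fit itself is easy; the expensive bet is the last line, which assumes the straight line keeps extending beyond the data it was drawn from.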
Will the scaling laws keep on scaling as AI models get bigger and bigger? AI companies are betting hundreds of billions of dollars that they will—but history suggests it is not always so simple.
Scaling laws can be wonderful. Modern aerodynamics is built on them, for example.
Using an elegant piece of mathematics called the Buckingham π theorem, engineers discovered how to compare small models in wind tunnels or test basins with full-scale planes and ships by making sure some key numbers matched up.
Those scaling ideas inform the design of almost everything that flies or floats, as well as industrial fans and pumps.
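As a back-of-the-envelope illustration of “making key numbers match,” here is a sketch that checks Reynolds-number similarity between a full-scale aircraft and a 1:10 wind-tunnel model. The speeds and sizes are invented for the example.

```python
def reynolds(speed_m_s: float, length_m: float, nu_m2_s: float) -> float:
    """Reynolds number: Re = v * L / nu (speed times length over viscosity)."""
    return speed_m_s * length_m / nu_m2_s

NU_AIR = 1.5e-5  # kinematic viscosity of air at about 20 C, in m^2/s

# Full-scale aircraft: 30 m long, cruising at 70 m/s.
full_scale = reynolds(speed_m_s=70.0, length_m=30.0, nu_m2_s=NU_AIR)

# A 1:10 model in the same air needs 10x the speed to match Re.
model = reynolds(speed_m_s=700.0, length_m=3.0, nu_m2_s=NU_AIR)

print(f"full scale Re = {full_scale:.2e}, model Re = {model:.2e}")
```

In practice a 10x speed-up would break other similarity conditions, such as the Mach number, which is one reason real tunnels raise the Reynolds number with pressurized or cryogenic air instead of sheer speed.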
Another famous scaling idea underpinned the boom decades of the silicon chip revolution. Moore’s law—the idea that the number of the tiny switches called transistors on a microchip would double every two years or so—helped designers create the small, powerful computing technology we have today.
But there’s a catch: not all “scaling laws” are laws of nature. Some are purely mathematical and can hold indefinitely. Others are just lines fitted to data that work beautifully until you stray too far from the circumstances where they were measured or designed.
History is littered with painful reminders of scaling laws that broke. A classic example is the collapse of the Tacoma Narrows Bridge in 1940.
The bridge was designed by scaling up what had worked for smaller bridges to something longer and slimmer. Engineers assumed the same scaling arguments would hold: If a certain ratio of stiffness to bridge length worked before, it should work again.
Instead, moderate winds set off an unexpected instability called aeroelastic flutter. The bridge deck tore itself apart, collapsing just four months after opening.
Likewise, even the “laws” of microchip manufacturing had an expiry date. For decades, Moore’s law (transistor counts doubling every couple of years) and Dennard scaling (a larger number of smaller transistors running faster while using the same amount of power) were astonishingly reliable guides for chip design and industry roadmaps.
As transistors became small enough to be measured in nanometers, however, those neat scaling rules began to collide with hard physical limits.
When transistor gates shrank to just a few atoms thick, they started leaking current and behaving unpredictably. The operating voltages could also no longer be reduced without being lost in background noise.
Eventually, shrinking was no longer the way forward. Chips have still grown more powerful, but now through new designs rather than just scaling down.
The language-model scaling curves that Altman celebrates are real, and so far they’ve been extraordinarily useful.
They told researchers that models would keep getting better if you fed them enough data and computing power. They also showed earlier systems were not fundamentally limited—they just hadn’t had enough resources thrown at them.
But these are undoubtedly curves that have been fit to data. They are less like the derived mathematical scaling laws used in aerodynamics and more like the useful rules of thumb used in microchip design—and that means they likely won’t work forever.
The language model scaling rules don’t necessarily encode real-world problems such as limits to the availability of high-quality data for training or the difficulty of getting AI to deal with novel tasks—let alone safety constraints or the economic difficulties of building data centers and power grids. There is no law of nature or theorem guaranteeing that “intelligence scales” forever.
So far, the scaling curves for AI look pretty smooth—but the financial curves are a different story.
Deutsche Bank recently warned of an AI “funding gap” based on Bain Capital estimates of an $800 billion mismatch between projected AI revenues and the investment in chips, data centers, and power that would be needed to keep current growth going.
JP Morgan, for its part, has estimated that the broader AI sector might need around $650 billion in annual revenue just to earn a modest 10 percent return on the planned build-out of AI infrastructure.
We’re still finding out which kind of law governs frontier LLMs. Reality may keep playing along with the current scaling rules, or new bottlenecks—data, energy, users’ willingness to pay—may bend the curve.
Altman’s bet is that the LLM scaling laws will continue. If that’s so, it may be worth building enormous amounts of computing power because the gains are predictable. On the other hand, the banks’ growing unease is a reminder that some scaling stories can turn out to be Tacoma Narrows: beautiful curves in one context, hiding a nasty surprise in the next.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post AI Companies Are Betting Billions on AI Scaling Laws. Will Their Wager Pay Off? appeared first on SingularityHub.