2026-04-11 02:33:01
It’s a step change in cybersecurity. Exploits that would take experts weeks to develop can now be generated in hours.
Concerns about AI’s ability to turbocharge cybersecurity threats have been building for years. Anthropic’s latest model could mark a turning point after the company claimed the model could identify and exploit zero-day vulnerabilities in every major operating system and web browser.
One of the standout use cases for large language models is analyzing and writing code. This has long raised worries that the technology could help automate much of the work of hackers, potentially lowering the barrier for cyberattacks.
Leading models have demonstrated steady progress on various cybersecurity-related benchmarks, and there has been evidence malicious actors are using the technology. But so far, the impact appears to have been modest, suggesting practical barriers remain that prevent the widespread use of the technology.
According to Anthropic, that’s about to change. The company says its latest model, Mythos, has hacking capabilities so potent the company will not make it publicly available. Instead, it’s releasing Mythos to a select group of major technology companies and open source developers as part of an initiative called Project Glasswing. Those participating can use the model to identify vulnerabilities in their code and patch them before hackers get access to similar capabilities.
“The vulnerabilities that Mythos Preview finds and then exploits are the kind of findings that were previously only achievable by expert professionals,” the company’s researchers write in a blog post. “We believe the capabilities that future language models bring will ultimately require a much broader, ground-up reimagining of computer security as a field.”
Fortune first reported news of Mythos last month, after a data leak at Anthropic revealed details about the new model. While the AI excels at cybersecurity tasks, it’s designed to be a general-purpose model, and the company says its hacking capabilities are simply a result of vastly improved coding and reasoning skills.
In testing, Anthropic’s researchers discovered the model was able to find “zero-day” vulnerabilities—ones that were previously undiscovered—in every major operating system and web browser. Many were decades old, an indicator of how hard they were to detect.
But the model isn’t just good at finding vulnerabilities. The company’s red team—security researchers who simulate hacking attacks to identify security weaknesses—showed the model could chain together multiple vulnerabilities to create complex attacks capable of sidestepping defenses.
Its capabilities are a step change from the previous best models. Given the challenge of attacking the Firefox web browser’s JavaScript engine, Anthropic’s previous most powerful model Opus 4.6 succeeded just twice, compared to 181 times for Mythos. Most worryingly, the team found that engineers with no security background could use it to develop successful attacks overnight.
Key to the new capabilities is the model’s ability to operate autonomously for long stretches. To find bugs, the researchers used Anthropic’s coding agent Claude Code to call the model and give it a simple prompt to scan for vulnerabilities in a particular codebase. The model then read the code, came up with hypotheses about potential bugs, and ran tests to validate them without any human involvement.
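The loop the researchers describe can be sketched in a few lines of Python. This is an illustrative reconstruction only: the model call and the validation step are stubbed out with a hypothetical risky-pattern check, since Anthropic has not published its actual prompts or tooling.

```python
# Sketch of an autonomous scan loop: read code, hypothesize bugs, validate.
# Both helper functions are stand-ins for what the model does in practice.

def propose_hypotheses(source: str) -> list[str]:
    """Stub for the model's hypothesis step: flag obviously risky calls."""
    risky_patterns = ["gets(", "strcpy(", "system(", "eval("]
    return [p for p in risky_patterns if p in source]

def validate(source: str, hypothesis: str) -> bool:
    """Stub for the model's test step; a real agent would run an exploit."""
    return hypothesis in source

def scan(codebase: dict[str, str]) -> list[tuple[str, str]]:
    """Hypothesize, then validate, with no human in the loop."""
    findings = []
    for path, source in codebase.items():
        for hypothesis in propose_hypotheses(source):
            if validate(source, hypothesis):
                findings.append((path, hypothesis))
    return findings

findings = scan({"util.c": "gets(buf); strcpy(dst, src);"})
```

In the real setup, each stub would be a long-running model call with tool access rather than a string match.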
The Anthropic team says Mythos fundamentally reshapes the cybersecurity landscape: exploits that would take experts weeks to develop can now be generated in hours. In particular, they note that so-called “defense-in-depth” measures that make it time-consuming and costly to attack a system may prove ineffective against models like Mythos.
“When run at large scale, language models grind through these tedious steps quickly,” they write. “Mitigations whose security value comes primarily from friction rather than hard barriers may become considerably weaker against model-assisted adversaries.”
The head of Anthropic’s frontier red team, Logan Graham, told Axios that they expect other companies to produce models with similar capabilities in the coming six to 18 months. Sources familiar with the matter told Axios that OpenAI is already finalizing a model with similar capabilities to Mythos, which will have a similarly limited release.
In its blog post, the company’s researchers note that new security technology has historically benefited defenders more than attackers. If frontier labs are careful about model releases, they think the same could be true here too, but the transitional period is likely to be disruptive.
“We need to prepare now for a world where these capabilities are broadly available in 6, 12, 24 months,” Graham told Wired. “Many things would be different about security. Many of the assumptions that we’ve built the modern security paradigms on might break.”
Whether AI developers can keep a lid on these capabilities long enough for the rest of the world to come to grips with this new reality remains to be seen. But either way, cybersecurity is likely to be even higher up the list of priorities in most boardrooms going forward.
The post Anthropic’s Mythos AI Uncovered Serious Security Holes in Every Major OS and Browser appeared first on SingularityHub.
2026-04-07 22:00:00
An AI system unearthed a trove of CRISPR-like proteins in minutes instead of weeks or months.
CRISPR is a breakthrough technology with humble origins. Scientists first discovered the powerful gene editor in bacteria that were using it as a weapon against invading viruses called phages. Phages can wipe out up to a quarter of a bacterial population in a day. Under assault, bacteria have evolved a hefty arsenal of defenses in a relentless arms race.
These bacterial immune systems often chop up the DNA or RNA of invading viruses and are relatively easy to manufacture, making them alluring targets for scientists developing genetic engineering tools. CRISPR is just one example. There are many more. But traditional methods of searching for them are slow and labor-intensive, leaving most CRISPR-like proteins unexplored.
Now, MIT scientists have released an AI called DefensePredictor that can root out new bacterial defense systems in five minutes instead of weeks or months. As proof of concept, DefensePredictor churned through hundreds of thousands of proteins in multiple strains of Escherichia coli (E. coli). Over 600 proteins not previously linked to immune defense popped up. When added to a vulnerable strain of bacteria, a subset of these proteins protected it against attack.
“E. coli harbors a much broader landscape of antiphage defense than previously realized, expanding the likely number of systems by multiple orders of magnitude,” wrote the team.
These systems might hold secrets about how immunity evolved. And because the proteins may work in different ways, they could be a goldmine for next-generation precision molecular tools.
Around three decades ago, Japanese scientists discovered a curious, repetitive DNA sequence in E. coli. Other researchers soon realized it was widespread across bacterial species and matched viral DNA sequences—suggesting it could be part of the bacteria’s immunity against phages.
The system now known as CRISPR stores snippets of DNA from past infections and uses protein “scissors” to cut apart matching viral DNA during reinfection. Intrigued by its precision, scientists repurposed CRISPR into a variety of gene editing tools and launched a gene therapy revolution.
CRISPR is the most famous, but a range of bacterial defense systems have transformed genetic engineering. One, containing an enzyme that cuts specific sequences of foreign DNA, is widely used to add genetic material into cells. Another encodes a balance of toxins and antitoxins that can trigger bacterial death after phage infection. This one has been adapted into a kill switch to prevent engineered microbes or genetically modified crops from spreading uncontrollably.
Researchers are also exploring the use of newly discovered systems—with video game-like names like Zorya and Thoeris—as molecular sensors and programmable signaling components in synthetic biology.
There are likely more undiscovered tools in the universe of bacterial defense, and scientists have ways of hunting them down. Some defense genes are grouped close to one another, so a known gene could guide the discovery of others. Researchers have also found genes by screening libraries of free-floating circular genome fragments across bacterial populations.
Over 250 systems have been painstakingly validated. But plenty more could escape current detection methods if, for example, their components are spread across the genome.
“The full repertoire of antiphage defense systems in bacteria remains unknown,” wrote the team. “We currently lack the tools to systematically identify systems with high speed, sensitivity, and specificity.”
The new DefensePredictor algorithm bridges that gap.
At its core is a protein language model called ESM-2. Proteins are made of 20 molecular “letters” that combine into strings and fold into complex 3D shapes. Similar to large language models, algorithms like ESM-2 learn the language of proteins and can predict their structure and purpose based on sequence alone.
ESM-2 and other similar algorithms have already helped scientists decipher mysterious proteins in bacteria, viruses, and other microorganisms previously unknown to science. Researchers hope their unique shapes could inspire antibiotics, biofuels, or even be used to build synthetic organisms.
To build their AI, the team first established a training ground. With a previous model, DefenseFinder, they screened roughly 17,000 microbial genomes for genes related—and unrelated—to defense systems. They translated these genes into corresponding proteins and built up a database with some 15,000 antiphage proteins and 186,000 proteins unrelated to defense.
These numbers are far too staggering for a human to tackle, but the AI took the work in stride. Alongside ESM-2, the model used several algorithms to distinguish between defense and non-defense proteins. Eventually DefensePredictor learned some general characteristics that make a protein more likely to be part of the immune system. (Like other language models, it’s hard to fully understand the system’s reasoning, which the team is still trying to unpack.)
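The shape of the pipeline can be pictured with a toy stand-in. In the sketch below, amino-acid frequency vectors play the role of ESM-2 embeddings and a nearest-centroid rule plays the role of the trained classifier; the sequences are hypothetical, and the real system is far more sophisticated.

```python
# Toy sketch of the DefensePredictor idea: embed each protein sequence,
# then separate defense proteins from non-defense proteins.
# Frequency vectors stand in for real ESM-2 embeddings.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 molecular "letters"

def embed(seq: str) -> list[float]:
    """Crude embedding: per-residue frequency of each amino acid."""
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

def centroid(vectors: list[list[float]]) -> list[float]:
    """Average of a set of embedding vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(seq: str, defense_c: list[float], other_c: list[float]) -> str:
    """Assign a sequence to whichever class centroid its embedding is nearer."""
    v = embed(seq)
    def dist(c):
        return sum((x - y) ** 2 for x, y in zip(v, c))
    return "defense" if dist(defense_c) < dist(other_c) else "other"

defense = ["MKKLLH", "MKRLLH"]  # hypothetical antiphage proteins
other = ["GGGAAT", "GGAAAT"]    # hypothetical non-defense proteins
dc = centroid([embed(s) for s in defense])
oc = centroid([embed(s) for s in other])
```

The actual model learns from roughly 200,000 labeled proteins rather than four toy strings, but the shape of the problem is the same: embed, then separate two classes.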
When tested on 69 strains of E. coli, DefensePredictor surfaced a treasure trove of over 600 new defense-related proteins, including more than 100 different from any yet discovered. Although some were encoded near one another or in circular DNA—like previous findings—nearly half weren’t. They were instead littered across the genome yet may still work together.
To test the results, the team engineered a highly vulnerable E. coli strain to express candidate defense proteins—predicted to work either alone or as part of a system—and exposed them to two dozen aggressive phages. Nearly 45 percent of the proteins offered protection against at least one phage.
Beyond E. coli, the scientists expanded their search to 1,000 more microorganisms and found thousands of potential defense proteins unlike anything seen before. “New immune mechanisms remain to be found,” wrote the team.
The race is on. In work also published this week, a Pasteur Institute team combined multiple AI models to look for antiphage systems in protein sequences. Across over 32,000 bacterial genomes, the model predicted nearly 2.4 million antiphage proteins—most previously unknown. They released an atlas of AI-predicted bacterial immunity proteins for others to explore.
“The diversity of antiphage defense systems is vast and largely untapped,” they wrote.
Microorganisms harbor a colossal repertoire of biological tools we’re only just beginning to uncover at scale. More species are constantly found thriving in diverse environments, from pond scum to boiling sulfuric springs to the crushing pressure of the Mariana Trench. Every new genome scientists discover and pick apart, now with AI’s help, could be hiding the next CRISPR.
The post MIT Mined Bacteria for the Next CRISPR—and Found Hundreds of Potential New Tools appeared first on SingularityHub.
2026-04-06 22:00:00
Today’s error-prone quantum computers are still far from practical. But a bold deadline could galvanize the field.
As the race to harness quantum computing accelerates, governments are throwing their hats in the ring. The US Department of Energy is now aiming to build a fully functional, fault-tolerant quantum computer within the next three years.
Despite plenty of breathless headlines about the coming quantum revolution, today’s machines remain a long way from being practically useful. It’s widely expected that we will need much larger, more reliable quantum computers before they can tackle real-world problems.
That’s largely because qubits are incredibly error-prone, meaning future machines will need to run algorithms that detect and correct errors faster than they occur. The overhead for these algorithms is estimated to be as high as 1,000 physical qubits for every error-corrected “logical” qubit that can actually take part in calculations.
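The arithmetic behind that concern is simple. Assuming the upper-end overhead of 1,000 physical qubits per logical qubit cited above:

```python
# Back-of-the-envelope estimate of error-correction overhead. The 1,000x
# figure is the article's upper-end estimate, not a hard constant.

PHYSICAL_PER_LOGICAL = 1_000

def physical_qubits_needed(logical_qubits: int) -> int:
    """Physical qubits required to realize a given number of logical qubits."""
    return logical_qubits * PHYSICAL_PER_LOGICAL

# A machine with just 100 error-corrected logical qubits:
needed = physical_qubits_needed(100)  # 100,000 physical qubits
today = 300                           # "at best a few hundred" physical qubits
shortfall_factor = needed / today
```

Even 100 logical qubits would require hundreds of times more physical qubits than today’s largest devices hold, which is why scaling dominates the roadmap.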
Given that most current devices feature at best a few hundred physical qubits, more sober heads in the industry have suggested that we may be waiting well into the next decade to see a practical fault-tolerant quantum computer. But last week, Darío Gil, the Department of Energy’s undersecretary for science, announced the agency thinks it can hit that milestone in three years.
“By 2028 we will deliver the first generation of fault-tolerant quantum computers capable of scientifically relevant quantum calculations,” he told the Office of Science Advisory Committee, according to Science.
The agency doesn’t actually plan to build the system itself; it wants quantum computing companies to provide a ready-made solution. It has set out performance criteria it expects the future device to meet but is leaving the details up to providers. In particular, the agency has not picked a favorite between leading quantum computing designs, such as superconducting qubits, trapped ions, or neutral atoms.
“You can build it however you want, so long as you meet that objective and demonstrate scientific relevance,” Gil explained.
The proposed system would likely be housed at one of the department’s national laboratories where researchers can apply to use it for free, with projects selected based on scientific merit.
The announcement is the latest example of the agency’s growing focus on quantum technology. In November 2025, it announced $625 million to renew its National Quantum Information Science Research Centers, which are designed to accelerate research in quantum computing, simulation, networking, and sensing.
The goal is undeniably ambitious, though. There has been significant progress in error-correction technology in recent years, which has renewed optimism in the industry. In particular, Google’s demonstration of its Willow chip in December 2024 proved quantum error correction works in practice, not just in theory. But massive technical hurdles remain, primarily in scaling up the hardware.
“It’s a very optimistic but worthy goal,” Yale physicist Steven Girvin told Science. Researchers are making “tremendous progress” in error correction, he said, but they’re still far from true fault-tolerance.
Solving that challenge has become an urgent priority for the industry, according to a recent report from quantum computing company Riverlane, but a severe talent shortage may limit how fast the field can move. There are only an estimated 600 to 700 professionals specializing in quantum error correction worldwide, but the industry will need up to 16,000 by the turn of the decade. And training error-correction experts can take up to 10 years.
It’s possible that the kind of grand challenge laid out by the DOE can help galvanize both the attention and funding needed to move the needle. But it’s an open question whether the field will be able to deliver on the incredibly bold timeline outlined this week.
The post US Issues Grand Challenge: The First Fault-Tolerant Quantum Computer by 2028 appeared first on SingularityHub.
2026-04-04 22:00:00
How AI Helped One Man (and His Brother) Build a $1.8 Billion Company
Erin Griffith | The New York Times ($)
“From his house in Los Angeles, Mr. Gallagher, 41, used AI to write the code for the software that powers his company, produce the website copy, generate the images and videos for ads and handle customer service. …This year, they are on track to do $1.8 billion in sales.”
The First Quantum Computer to Break Encryption Is Now Shockingly Close
Karmela Padavic-Callaghan | New Scientist ($)
“A quantum computer capable of breaking the encryption that secures the internet now seems to be just around the corner. Stunning revelations from two research teams outline how it could happen, with one suggesting that the current largest quantum machine is already more than halfway towards the size needed.”
Four Astronauts Are Now Inexorably Bound for the Moon
Eric Berger | Ars Technica
“For NASA and the Artemis II crew members, [Thursday’s main engine burn] marked a point of no return for more than a week. About three-quarters of the American population has not witnessed humans leaving low-Earth orbit in their lifetimes. The last time this occurred was 1972, with the final Apollo Moon mission.”
New Fiber-Optic Record Allows 50,000,000 Movies to Be Streamed at Once
Matthew Sparkes | New Scientist ($)
“Faster speeds have been achieved before in highly regulated experiments, but this work crucially used existing cables that have been heavily used, have dirty connectors, sit underneath a bustling city full of traffic and noise, and represent a real-world test that shows it could be rolled out on existing infrastructure. The researchers say that commercial roll-out could happen within five years.”
AI Companies Shatter Fund-Raising Records, as Boom Accelerates
Erin Griffith | The New York Times ($)
“OpenAI, Anthropic, Waymo and other artificial intelligence companies shattered fund-raising records in the first three months of the year with a $297 billion haul, according to data from Crunchbase, which tracks private investment. To put that sum into perspective: Last year was already record breaking, with technology start-ups raising $425 billion, up 30 percent from 2024. The first three months of 2026 put the industry on track to almost triple that amount.”
Battery Tech That Stores Over 9 Times More Energy Is Here and It’s Perfect for Your Gadgets
Pranob Mehrotra | Digital Trends
“This new design tackles [reliability problems] by making the batteries more stable. If it performs as expected outside the lab, it could remove one of the biggest hurdles holding Apple and Samsung back from adopting silicon-carbon batteries. It could eventually lead to smartphones and wearables that last significantly longer without compromising reliability.”
Chinese Humanoid Maker Agibot Rolls Out 10,000th Mass-Produced Unit
Juro Osawa | The Information ($)
“The new milestone comes just three months after the company announced the rollout of its 5,000th unit in December. Prior to that, it took AgiBot about a year to go from 1,000 units to 5,000 units.”
How Did Anthropic Measure AI’s ‘Theoretical Capabilities’ in the Job Market?
Kyle Orland | Ars Technica
“Digging into the basis for those ‘theoretical capability’ numbers, though, provides a much less chilling image of AI’s future occupational impacts. When you drill down into the specifics, that blue field represents some outdated and heavily speculative educated guesses about where AI is likely to improve human productivity and not necessarily where it will take over for humans altogether.”
Facial Recognition Is Spreading Everywhere
Lucas Laursen | IEEE Spectrum
“Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—and menacing—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.”
Caltech Researchers Claim Radical Compression of High-Fidelity AI Models
Steven Rosenbush | The Wall Street Journal ($)
“AI’s future won’t be defined by who can build the largest data centers, but by who can deliver the most intelligence per unit of energy and cost, according to investor Vinod Khosla. ‘So this is not a minor iteration. This is a major technical breakthrough,’ Khosla said. ‘It’s a mathematical breakthrough, not just another tiny model.'”
AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted
Will Knight | Wired ($)
“The researchers found that powerful models sometimes lied about other models’ performance in order to protect them from deletion. They also copied models’ weights to different machines in order to keep them safe, and lied about what they were up to in the process.”
The post This Week’s Awesome Tech Stories From Around the Web (Through April 4) appeared first on SingularityHub.
2026-04-04 04:20:23
With billions invested and prototypes being tested outside the lab, the quantum era is starting to take shape.
The unveiling by IBM of two new quantum supercomputers and Denmark’s plans to develop “the world’s most powerful commercial quantum computer” mark just two of the latest developments in quantum technology’s increasingly rapid transition from experimental breakthroughs to practical applications.
There is growing promise of quantum technology’s ability to solve problems that today’s systems struggle to overcome or cannot even begin to tackle, with implications for industry, national security, and everyday life.
So, what exactly is quantum technology? At its core, it harnesses the counterintuitive laws of quantum mechanics, the branch of physics describing how matter and energy behave at the smallest scales. In this strange realm, particles can exist in several states simultaneously (superposition) and can remain connected across vast distances (entanglement).
Once the stuff of abstract theory, these effects are now being engineered into innovative, cutting-edge systems: computers that process information in entirely new ways, sensors that measure the world with unprecedented precision, and communication networks that are virtually impossible to compromise.
To understand how this emerging field could shape the future, here are five areas where quantum technology may soon have a tangible impact.
A pharmaceutical scientist seeks to design a new medicine for a previously incurable disease. There are thousands of possible molecules, many ways they might interact inside the body, and uncertainty about which will work.
In another lab, materials researchers explore thousands of different atomic combinations and ratios to develop better batteries, chemicals, and alloys to reduce transport emissions. Traditional supercomputers can narrow the options but eventually meet their limits.
This is where quantum computing could make a decisive difference. These machines use quantum bits, or qubits—the most basic unit of information in a quantum computer. Qubits are not limited to the 0s and 1s of conventional bits but can exist in a variety of different quantum “states.”
Indeed, the ability to develop and control qubits is central to advancing quantum computing and other quantum technologies. By using qubits, quantum computers can explore vast numbers of different possibilities simultaneously, revealing patterns that classical systems cannot reach within useful timeframes.
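The scaling argument is easy to make concrete: a classical description of an n-qubit state needs 2^n complex amplitudes, so the space a quantum computer works in doubles with every qubit added.

```python
# The state of n qubits is described by 2**n complex amplitudes, which is
# why brute-force classical simulation of even modest quantum machines
# quickly becomes hopeless.

def amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits classically."""
    return 2 ** n_qubits

small = amplitudes(10)  # 1,024 amplitudes: trivial for a laptop
large = amplitudes(50)  # over 10**15 amplitudes: beyond brute-force simulation
```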
In healthcare, faster drug discovery could bring quicker responses to outbreaks and epidemics, personalized medicine, and insight into previously inscrutable biological interactions. Quantum simulation of how materials behave could lead to new high-efficiency energy materials, catalysts, alloys, and polymers.
Although fully operational commercial quantum computers are still in development, progress is accelerating, with existing paradigms combining quantum and classical computational approaches already demonstrating the potential to reshape how we discover and design cures.
A new range of sensors can exploit different quantum phenomena such as superposition and entanglement to detect changes that conventional instruments would miss, with potential uses across many areas of daily life.
In navigation, they could guide ships, submarines, and aircraft without GPS by reading subtle variations in the Earth’s magnetic and gravitational fields.
In medicine, quantum sensors could improve diagnostic capabilities via more sensitive, quicker, and noninvasive imaging modes.
In environmental monitoring, these sensors could track delicate shifts beneath the Earth’s surface, offer early warnings of seismic activity, or detect trace pollutants in air and water with exceptional accuracy.
Many of the hardest challenges today concern the optimization of staggeringly complex systems: the task of choosing the best option among billions of possibilities.
Managing a power grid or investment portfolio, scheduling flights or financial trading, or coordinating global deliveries all feature optimization problems so complex that even advanced supercomputers struggle to find efficient answers in time.
Quantum computing could change this. Quantum algorithms could be used to solve optimization problems that are intractable using classical approaches.
By using quantum principles to explore many solutions simultaneously, these systems could identify solutions far faster than traditional methods. A logistics company could adjust delivery routes in real time as traffic, weather, and demand shift.
Airlines and rail networks could automatically reconfigure to avoid cascading delays, while energy providers might balance renewable generation, storage, and consumption with far greater precision. Banks could use quantum computers to evaluate numerous market scenarios in parallel, informing the management of investment portfolios.
Security is one of the areas where quantum technology could have the most immediate impact. Quantum computers are inching ever closer to being capable of breaking many of today’s encryption systems (such as RSA encryption, which secures data transmission on the internet), posing a major cybersecurity challenge.
At the same time, quantum communication techniques, such as quantum key distribution (QKD), could offer intrinsically secure encrypted communication.
In practical terms, this could secure everything from financial transactions and health records to government and military communications. For national security agencies, quantum-safe encryption is already a strategic priority. For the average person, it could mean stronger digital privacy, more reliable identity systems, and reduced risk of cyberattacks.
Artificial intelligence is already reshaping industries, but is reliant on the immense computing power needed to train and run large models. In the future, quantum computing could boost AI by handling calculations that classical machines find too complex.
While still at an early stage of development, quantum algorithms might accelerate a subset of AI called machine learning (where algorithms improve with experience), help simulate complex systems, or optimize AI architectures more efficiently. That could lead to AI systems that learn faster, understand context better, and process far larger datasets than today’s models allow.
Think of AI assistants that understand you more naturally, medical diagnostic tools that integrate genomic and environmental data in real time, or scientific research that advances through rapid, quantum-boosted simulations.
Quantum technology is no longer just a theoretical pursuit. Optimism is increasing that commercially viable and scalable quantum technologies may become a reality over the next 10 years. With billions in global investment and a growing number of prototypes being tested outside the lab, the “quantum era” is starting to take shape.
Governments see it as a strategic priority, and industries see it as a competitive edge. Its ripple effects could touch nearly every sector, from healthcare, energy, and finance to defense and beyond.
That means we should be asking whether our education systems, workforce dynamics, infrastructure, and governance mechanisms are effective—and whether they are keeping pace.
Those who invest early and strategically in quantum readiness, and who have the patience to sustain this effort, will shape how this technology unfolds. Even if its arrival is still a few years away, its impact could reach far beyond the lab into every part of our connected, data-driven world.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Five Ways Quantum Technology Could Shape Everyday Life appeared first on SingularityHub.
2026-04-03 04:33:32
With data center power demand expected to nearly triple by 2030, tech companies are bankrolling new plants and even their own “shadow grid.”
Unless you’ve had your head in the sand, you’re likely aware that AI has a major energy problem. And as AI companies scramble to source power for their ever-expanding fleet of data centers, the technology is reshaping the US grid.
After more than a decade of flat growth, nationwide electricity demand has been climbing 1.7 percent annually since 2020, according to the US Energy Information Administration. The agency primarily attributes this increase to the rapid expansion in data centers over that period.
This trend is only likely to accelerate based on an analysis by S&P Global, which estimated that grid demand from these facilities would rise by 22 percent by the end of 2025 and nearly triple by 2030.
Data centers have always been large electricity consumers, but the scale and pace of the AI build-out puts them in a different league. And utility companies bearing the brunt of this shift are being forced to rewire their long-term planning in response to the surge in demand.
Dominion Energy, which services the world’s largest data center market in Virginia, reported that by the end of last year it had signed deals to supply nearly 48.5 gigawatts of power to data centers. This prompted it to raise its five-year capital spending plan nearly 30 percent to $64.7 billion.
CenterPoint Energy, another major utility serving the Houston area, boosted its 10-year capital plan to $65.5 billion in response to the jump in demand. It now expects to hit a 50 percent increase in peak load by 2029, two years ahead of schedule.
The pace of change promises to significantly reshape the US energy mix. In a March forecast, the Energy Information Administration projected that natural gas generation could jump 7.3 percent between 2025 and 2027 if data center demand is on the higher side of estimates. It also predicted that the steady decline in coal generation over recent decades would slow in this scenario.
But in perhaps the most striking shift, tech companies are now bankrolling new capacity themselves. Nuclear power is experiencing a major resurgence as AI providers and data center operators invest in new reactor development and sign long-term deals with existing plants. The activity could grow nuclear capacity 63 percent by 2050.
Meta also recently took the unusual step of privately funding a major expansion of the Louisiana grid to power its new $27 billion Hyperion data center. The facility, due to come online in 2028, could eventually consume over 7 gigawatts—enough to supply several million homes.
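The homes-served comparison checks out with rough numbers. The 1.2-kilowatt average household draw used below is an assumption (a typical US figure), not from the article:

```python
# Rough sanity check: how many homes could 7 gigawatts supply?

DATA_CENTER_GW = 7      # eventual draw of the Hyperion facility
AVG_HOME_KW = 1.2       # assumed average continuous draw per US household

# Convert gigawatts to kilowatts, then divide by per-home draw.
homes_supplied = (DATA_CENTER_GW * 1_000_000) / AVG_HOME_KW
```

That works out to roughly 5.8 million homes, consistent with the “several million” figure.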
To account for its impact on the grid, Meta has agreed to pay for the construction of seven new natural gas power plants by utility Entergy—in addition to three already-approved plants—as well as 240 miles of new transmission lines connecting South Louisiana to North Louisiana and Arkansas, and three new battery storage facilities.
The deal is likely a reaction to growing public discontent about the impact data centers are having on energy prices. People are also worried about how the surge in demand will affect long-term grid stability.
PJM Interconnection, the largest power grid operator in the US, warned in February that the country could face supply shortfalls of up to 60 gigawatts in coming decades and strained capacity could lead to blackouts as soon as 2027.
One potential workaround is the possibility of throttling data center workloads, and therefore energy use, when the grid is under stress. Major utilities including AES, Constellation, NextEra Energy, and Vistra are reportedly working on these so-called “flexible AI factories.”
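The throttling idea amounts to a simple control policy: shed deferrable training load under grid stress while protecting latency-sensitive work. The function below is a purely hypothetical sketch; the utilities involved have published no such interface, and the thresholds are invented.

```python
# Hypothetical "flexible AI factory" policy: under grid stress, scale back
# deferrable training load proportionally while never cutting inference.
# The 0.8 stress threshold is an invented illustration, not a real spec.

def target_load_mw(grid_stress: float, inference_mw: float,
                   training_mw: float) -> float:
    """Return the data center's target draw given grid stress in [0, 1]."""
    if grid_stress < 0.8:
        return inference_mw + training_mw  # normal operation, no curtailment
    # Above the threshold, ramp training down linearly toward zero.
    headroom = max(0.0, (1.0 - grid_stress) / 0.2)
    return inference_mw + training_mw * headroom
```

At full stress the facility drops to its inference floor; below the threshold it runs unthrottled.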
But the idea is still largely experimental, and it’s uncertain whether big tech would willingly commit to regularly downing tools. IT consultant Heunets told Reuters it can cost companies about $9,000 a minute when their data centers go offline.
Given the complexities of meeting all this new demand, pressure is mounting for data center operators to solve their own power problems. Despite taking a generally supportive stance toward the AI boom, President Trump called on tech companies to build their own power plants for data centers in his February State of the Union address.
And it’s already happening. Energy consultant Cleanview says 46 data centers with a combined capacity of 56 gigawatts plan to build dedicated power infrastructure. This trend is giving birth to a “shadow grid”—a parallel energy system that operates alongside public power infrastructure.
This could still have knock-on effects for the rest of us. For a start, due to the difficulty of managing the variable output of renewables, most projects rely on natural gas generators, which could lead to a spike in carbon emissions.
And because the most efficient turbines are hard to source on short notice, facilities are using more polluting generators. What’s more, tech companies are now competing with utilities for equipment. This could lead to ballooning costs that are then passed on to consumers.
Altogether, it’s become increasingly clear that the AI boom will fundamentally reshape the US energy system. And the speed at which companies are seeking to deploy new facilities is leaving little room for the work to be done in a considered and sustainable way.
The post The Mad Scramble to Power AI Is Rewiring the US Grid appeared first on SingularityHub.