2026-03-04 22:00:00
The transformational potential of AI is already well established. Enterprise use cases are building momentum and organizations are transitioning from pilot projects to AI in production. Companies are no longer just talking about AI; they are redirecting budgets and resources to make it happen. Many are already experimenting with agentic AI, which promises new levels of automation. Yet the road to full operational success is still uncertain for many. And while AI experimentation is everywhere, enterprise-wide adoption remains elusive.
Without integrated data and systems, stable automated workflows, and governance models, AI initiatives can get stuck in pilots and struggle to move into production. The rise of agentic AI and increasing model autonomy make a holistic approach to integrating data, applications, and systems more important than ever. Without it, enterprise AI initiatives may fail. Gartner predicts over 40% of agentic AI projects will be cancelled by 2027 due to cost, inaccuracy, and governance challenges. The real issue is not the AI itself, but the missing operational foundation.

To understand how organizations are structuring their AI operations and how they are deploying successful AI projects, MIT Technology Review Insights surveyed 500 senior IT leaders at mid- to large-size companies in the US, all of which are pursuing AI in some way.
The results of the survey, along with a series of expert interviews, all conducted in December 2025, show that a strong integration foundation aligns with more advanced AI implementations and supports enterprise-wide initiatives. As AI technologies and applications evolve and proliferate, an integration platform can help organizations avoid duplication and silos and maintain clear oversight as they navigate the growing autonomy of workflows.

Key findings from the report include the following:
Some organizations are making progress with AI. In recent years, study after study has exposed a lack of tangible AI success. Yet, our research finds three in four (76%) surveyed companies have at least one department with an AI workflow fully in production.
AI succeeds most frequently with well-defined, established processes. More than four in 10 (43%) organizations are finding success with AI implementations applied to well-defined and automated processes. A quarter are succeeding with new processes. And one-third (32%) are applying AI to various processes.
Two-thirds of organizations lack dedicated AI teams. Only one in three (34%) organizations have a team specifically for maintaining AI workflows. One in five (21%) say central IT is responsible for ongoing AI maintenance, and 25% say the responsibility lies with departmental operations. For 19% of organizations, the responsibility is spread out.
Enterprise-wide integration platforms lead to more robust implementation of AI. Companies with enterprise-wide integration platforms are five times more likely to use more diverse data sources in AI workflows. Six in 10 (59%) employ five or more data sources, compared with just 11% of organizations that use integration only for specific workflows and none (0%) of those not using an integration platform. Organizations using integration platforms also have more multi-departmental implementation of AI, more autonomy in AI workflows, and more confidence in assigning autonomy in the future.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
2026-03-04 21:12:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
The boom of a calving glacier. The crackling rumble of a wildfire. The roar of a surging storm front. They’re the noises of the living Earth, but as loud as all these things are, they emit even more acoustic energy below the threshold of human hearing, at frequencies of 20 hertz or lower.
These “infrasounds” have such long wavelengths that they can travel around the globe as churning emanations of distant events. But humans have never been able to hear them. Until now. Read our story and check the sounds out for yourself.
—Monique Brouillette
This story is from the latest March/April issue of our print magazine, all about crime. Subscribe today to get full access. You’ll also receive an in-depth digital AI report and an exclusive e-book on how to understand AI’s reckoning.
A new wave of theft is rocking the luxury car industry—mixing high tech with old-school chop-shop techniques to snag vehicles while they’re in transport.
It’s remained under the radar, even as it’s rocked the industry over the past two years. MIT Technology Review identified more than a dozen cases involving high-end vehicles, obtained court records, and spoke to law enforcement, brokers, drivers, and victims in multiple states to reveal how transport fraud is wreaking havoc across the country.
This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 How Anthropic’s AI tool Claude is being used for US strikes on Iran
It’s helping to identify targets and prioritize them—for now. (WP $)
+ We should all be alarmed by the White House turning on Anthropic. (The Atlantic $)
+ OpenAI is pursuing a contract with NATO. (Reuters)
2 Iran’s Shahed drones give it a major advantage
They’re cheap and easy to manufacture, but very expensive to intercept. (CNBC)
+ The US is manufacturing copies of the drone to use against Iran. (New Scientist $)
+ Israel’s plot to kill Ayatollah Ali Khamenei was years in the making. (FT $)
3 Data center politics are getting an early test in North Carolina
One of the candidates is calling for a 10-year national moratorium on building them. (The Guardian)
+ But it’s not just data centers that are driving people’s electricity bills up. (Inside Climate News)
+ Data centers are amazing. Everyone hates them. (MIT Technology Review)
+ Never mind space—why not just build them into floating offshore wind turbines? (IEEE Spectrum)
4 LLMs can unmask pseudonymous users
At a speed and scale far beyond what even skilled human investigators can manage. (Ars Technica)
+ It’s also very easy to persuade them to fabricate scientific papers. (Nature $)
5 TikTok has ruled out end-to-end encryption, citing user safety
It’s a stance that sets it apart from almost all rival social media services. (BBC)
+ The strategy will please parents, police—and hackers. (Cybernews)
+ TikTok is experiencing Oracle-related server issues, again. (Gizmodo)
6 Why is SpaceX going public?
One thing seems certain: it’s not for the reasons Musk’s claiming. (The Verge $)
+ Two companies have just unveiled plans to build lunar harvesters. (Ars Technica)
7 NASA’s scheduled its next attempt to launch the Artemis II moon rocket
On April Fool’s Day, of all days. Good luck! (Space)
8 What it’s like to live with a brain implant for years 
For 65-year-old Rodney Gorham, who can no longer walk, talk, or move his hands, it’s been a real lifeline. (Wired $)
+ This patient’s Neuralink brain implant is getting a boost from generative AI. (MIT Technology Review)
9 Pokémon Pokopia is getting rave reviews
It apparently mixes Animal Crossing and Stardew Valley, with a hint of Minecraft-style building. (BBC)
10 Hollywood is scouring YouTube for its next horror hits 
Movie studios want to bring the threat from the platform in-house. (The New Yorker $)
+ One YouTuber’s self-financed horror flick opened at 4,000 theatres. (Variety)
Quote of the day
—OpenAI CEO Sam Altman comments on X about his decision to rush in to work with the US Department of War after its talks with Anthropic fell apart.
One More Thing

Crypto millionaires are pouring money into Central America to build their own cities
El Salvador’s Conchagua Volcano, home to a lush ecotourism retreat amid its sun-dappled forest, is set to host a glittering new Bitcoin City, according to the country’s president.
While some politicians and residents believe in crypto’s potential to jump-start the economy, others see history repeating itself. They also question who these projects are really for, and whether the countries serving as test beds will truly benefit. Read the full story.
—Laurie Clarke
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Art is everywhere in Los Angeles: you just need to know what you’re looking for.
+ Survivor has been running for 50 seasons. How is that even possible?!
+ MP3 players are cool again. I don’t make the rules.
+ Be careful out there—you never know when you’re going to come across a Homer Simpson AI cover song.
2026-03-03 21:30:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
This startup claims it can stop lightning and prevent catastrophic wildfires
Startup Skyward Wildfire says it can prevent catastrophic fires by stopping the lightning strikes that ignite them. So far, it hasn’t publicly revealed how it does so, but online documents suggest the company is relying on an approach the US government began evaluating in the early 1960s: seeding clouds with metallic chaff, or narrow fiberglass strands coated with aluminum.
It just raised millions of dollars to accelerate its product development and expand its operations. But researchers and environmental observers say uncertainties remain, including how well the seeding may work under varying conditions, how much material would need to be released, how frequently it would have to be done, and what sorts of secondary environmental impacts might result. Read the full story.
—James Temple
OpenAI’s “compromise” with the Pentagon is what Anthropic feared
OpenAI has reached a deal that will allow the US military to use its technologies in classified settings. CEO Sam Altman said the negotiations, which the company began pursuing only after the Pentagon’s public reprimand of Anthropic, were “definitely rushed.”
OpenAI has taken great pains to say that it has not caved to allow the Pentagon to do whatever it wants with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused.
But it’s not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. Walking that tightrope will be tricky. Read the full story.
—James O’Donnell
The story is from The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Gulf states are racing against time to intercept Iran’s drone attacks
They could run out of interceptors very soon. (WSJ $)
2 Apple is considering using Google’s Gemini AI to power Siri
It’s also set to deepen its reliance on Google’s cloud infrastructure. (The Information $)
3 A database shows which topics fall foul of the Trump administration
National parks are being forced to erase any exhibits that display “partisan ideology.” (WP $)
4 Can AI actually enhance jobs, not just destroy them?
Three economists take the optimistic view. (New Yorker)
5 Are “bossware” apps tracking you?
Tools to watch what workers are doing are getting more and more sophisticated. (NYT)
6 RFK Jr says he is about to unleash 14 banned peptides
By reversing a Biden-era FDA ban on their production. (Gizmodo)
7 Meta is testing an AI shopping research tool
It hopes to rival Gemini and ChatGPT. (Bloomberg)
8 Maybe data centers in space aren’t as crazy as they sound?
They could be cheaper, with the right tech. (Economist)
9 Why climate change is making turbulence worse
Buckle up, people. (New Yorker)
10 6G is on its way!
And the hype cycle is doing its thing again. (The Verge $)
Quote of the day
“We don’t list markets directly tied to death. When there are markets where potential outcomes involve death, we design the rules to prevent people from profiting from death.”
—Tarek Mansour, CEO and founder of prediction market company Kalshi, tries to justify the $54 million bet on “Ali Khamenei out as Supreme Leader?” on his platform, 404 Media reports.
One More Thing

South Africa’s private surveillance machine is fueling a digital apartheid
Johannesburg is birthing a uniquely South African surveillance model. Over the past decade, the city has become host to a centralized, coordinated, entirely privatized mass surveillance operation. These tools have been enthusiastically adopted by the local security industry, grappling with the pressures of a high-crime environment.
Civil rights activists worry the new surveillance is fueling a digital apartheid and unraveling people’s democratic liberties, but a growing chorus of experts say the stakes are even higher.
They argue that the impact of artificial intelligence is repeating the patterns of colonial history, and here in South Africa, where colonial legacies abound, the unfettered deployment of AI surveillance offers just one case study in how a technology that promised to bring societies into the future is threatening to send them back to the past. Read the full story.
—Karen Hao and Heidi Swart
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ These influencers are on a mission to save the UK’s pubs.
+ Here’s what a map of America solely made up of its rivers would look like.
+ The winner of the Underwater Photographer of the Year awards is incredibly cute.
+ Pokémon may have turned 30 years old, but the franchise is more popular than ever.
2026-03-03 18:00:00
On June 1, 2023, as a sweltering heat wave baked Quebec, thousands of lightning strikes flashed across the province, setting off more than 120 wildfires.
The blazes ripped through parched forests and withered grasslands, burned for weeks, and compounded what was rapidly turning into Canada’s worst fire year on record. In the end, nearly 7,000 fires scorched tens of millions of acres across the country, generated nearly 500 million tons of carbon emissions, and forced hundreds of thousands of people to flee their homes.
Lightning sparked almost 60% of the wildfires—and those blazes accounted for 93% of the total area burned.
Now a Vancouver-based weather modification startup, Skyward Wildfire, says it can prevent such catastrophic fires in the future—by stopping the lightning strikes that ignite them. It just raised millions of dollars in a funding round that it plans to use to accelerate its product development and expand its operations.
Until last week the company, which highlights the role lightning played in the 2023 infernos, stated on its website that it had demonstrated technology capable of preventing “up to 100% of lightning strikes.”
It was an eye-catching claim that went well beyond the confidence level of researchers who have studied the potential for humans to suppress lightning—and the company took it down following inquiries from MIT Technology Review.
“While the statement reflected an observed result under specific conditions, it was not intended to suggest uniform outcomes and has been removed,” Nicholas Harterre, who oversees government partnerships at Skyward, said in an email. “In complex atmospheric systems, consistent 100% outcomes are not realistic, as the experts you spoke to rightly pointed out.”
The company now states it demonstrated that it “can prevent the majority of cloud-to-ground lightning strikes in targeted storm cells.” So far, Skyward hasn’t publicly revealed how it does so, and in response to our questions Harterre said only that the materials are “inert and selected in accordance with regulatory standards.”
But online documents suggest the company is relying on an approach that US government agencies began evaluating in the early 1960s: seeding clouds with metallic chaff, or narrow fiberglass strands coated with aluminum.
The military uses the material to disrupt radar signals; fighter jets, for example, deploy it during dogfights to throw off guided missile systems. Field trials conducted decades ago by US agencies suggest it could help reduce lightning strikes, at least to some degree and under certain conditions.
If Skyward could employ it reliably on significant scales, it might offer a powerful tool for countering rising fire risks as climate change drives up temperatures, dries out forests, and likely increases the frequency of lightning strikes.
“Preventing lightning on high-risk days saves lives, billions in wildfire costs, and is one of the highest-leverage and most immediate climate solutions available,” Sam Goldman, Skyward’s founder and chief executive, said in a statement posted on LinkedIn last year.
But researchers and environmental observers say there are plenty of remaining uncertainties, including how well the seeding may work under varying weather and climate conditions, how much material would need to be released, how frequently it would have to be done, and what sorts of secondary environmental impacts might result from lightning suppression on commercial scales.
Some observers are also concerned that the company appears to have moved ahead with weather modification field trials in parts of Canada without providing wide public notice or openly discussing what materials it’s putting into the clouds.
Given the escalating fire dangers, it’s “reasonable” to evaluate the potential for new technologies to mitigate them, says Keith Brooks, programs director at Environmental Defence, a Canadian advocacy organization.
“But we should be doing so cautiously and really transparently, with a robust scientific methodology that’s open to scrutiny,” he says.
Skyward’s website offers few technical details, but the company says it worked with Canadian wildfire agencies in 2024 and 2025 to demonstrate its technology. The company also says it has developed AI tools to predict lightning strikes that could set off fires.
Skyward announced last month that it raised $7.9 million in Canadian dollars ($5.7 million), in an extension of a seed round initially closed early last year. Investors included Climate Innovation Capital, Active Impact Investments, and Diagram Ventures.
“Our first season demonstrated that prevention is possible at scale,” Goldman said in a statement. “This funding allows us to expand into new regions and support partners who need reliable, operational tools to reduce wildfire risk before emergencies begin.”
The company doesn’t use the term “cloud seeding” on its site or in its recent announcements. But a press release highlighting its selection as a finalist last year in a conservation group’s Fire Grand Challenge states that it suppresses lightning “by cloud seeding with safe, non-toxic materials to neutralize storm charges,” as The Narwhal previously reported.
In addition, Unorthodox Philanthropy, a foundation that provided a grant to support Skyward’s efforts “to test and deploy” the technology, offered more detail in an awardee write-up about Goldman.
It states: “The Skyward team … settled on an inert substance consisting of aluminum covered glass fibers, which is regularly used in military operations to intercept and confuse enemy radar and can also discharge clouds.”
Additional details were disclosed in a document marked “Proprietary and Confidential,” which the World Bank nonetheless released within a package of materials from companies developing means of addressing fire risks.
Skyward’s diagrams show planes dropping particles into clouds to prevent cloud-to-ground lightning strikes in “high risk areas.” The company also notes in the document that it uses artificial intelligence for a number of purposes, including forecasting lightning storms, prioritizing treatments, targeting storm cells, and optimizing flight paths.
Harterre stressed that the company would deploy the technology judiciously and reserve it for storm events with elevated wildfire risk, adding that such storms account for less than 0.1% of lightning activity in a given area.
“Our objective is to reduce the probability of ignition on the limited number of extreme-risk days when fires threaten lives, critical infrastructure, and ecosystems, and when suppression costs and impacts can escalate rapidly,” he said.
The document posted by the World Bank states that Skyward partnered with Alberta Wildfire in August of 2024 to “prove suppression by plane and drone,” and that its process produced a “60-100% reduction” in lightning compared with “control cells” (which likely means storm cells that weren’t seeded).
The document added that the company would be carrying out additional field trials in the summer of 2025 with the wildfire agencies in British Columbia and Alberta to “provide landscape level solutions with more advanced aircraft, sensors and forecasting.”
“BC Wildfire Service is aware that Skyward is developing technology that aims to reduce instances of lightning in targeted situations,” the British Columbia agency acknowledged in a statement provided to MIT Technology Review. “Last year, preliminary trials were conducted by Skyward to gain a better understand [sic] of the technology and its applicability in B.C. Should a project/technology like this move forward in B.C., we would engage with the project team in an effort to learn and ensure we’re using every tool available to us to respond to wildfire in B.C.”
The BC agency declined to make anyone available for an interview and didn’t respond to questions about what materials were used, where the tests were carried out, or whether it provided public disclosures or required the company to. Alberta Wildfire didn’t respond to similar questions from MIT Technology Review.
Clouds are just water in various forms—vapor, droplets, and ice crystals, condensed enough to form the floating Rorschach tests we see in the sky. Within them, snowflakes and tiny ice pellets known as graupel rub together, causing atoms to trade electrons. This process creates highly reactive ions with negative and positive charges.
Updrafts separate the light snowflakes from the graupel, building up larger differences in the charges across the electrical field until … crack! An electrostatic discharge occurs in the form of a lightning strike.
The 2023 fire season wasn’t a particularly big year for lightning strikes in Canada—but then it didn’t have to be. It was so hot and dry that every bolt that struck the surface had a better than usual chance of igniting a fire, says Piyush Jain, a research scientist at the Canadian Forest Service and lead author of a study published in Nature Communications that analyzed the year’s fires.

Climate change is, however, likely to produce more lightning strikes, if it hasn’t started to already. Warmer air holds more moisture and adds more convective energy to the atmosphere, which drives the vertical movement of air that forms clouds and stirs up lightning storms.
“So the conditions are there, and the conditions are likely to increase,” Jain says.
Different models arrive at different lightning forecasts for some regions of the world. But a clearer trend is already emerging in the northernmost latitudes, where the planet is warming fastest. Studies show that lightning-ignited fires have substantially increased in the Arctic boreal region, and predict that they will continue to rise.
This combines with other growing risks like longer fire seasons, warmer temperatures, and drier vegetation, together raising the odds of more severe fires and more greenhouse-gas emissions, says Brendan Rogers, a senior scientist at the Woodwell Climate Research Center who studies the effect of fires on permafrost thaw.
In fact, Canada’s emissions from the 2023 fires were more than four times its emissions from fossil fuels.
Scientists have conducted a variety of experiments exploring the possibility of preventing lightning, but most of that work took place in the latter half of the last century.
Amid the cultural optimism and booming economy of the postwar period, US research agencies and corporations went on a tear of cloud seeding experiments aimed at conquering nature—or at least moderating its dangers. Research teams launched or dropped materials like dry ice and silver iodide into clouds in attempts to boost rainfall, reduce hail, dissipate fog, and redirect hurricanes.
“Cloud seeding activity was so intensive that at its peak in the early 1950s, approximately 10% of the US land area was under some kind of weather modification program,” wrote MIT’s Phillip Stepanian and Earle Williams in a 2024 history of lightning suppression efforts in the Bulletin of the American Meteorological Society. (MIT Technology Review is owned by MIT but is editorially independent.)
Harry Gisborne, then chief of the division of fire research at the US Forest Service, wondered if the technique could be used to trigger downpours that might extinguish hard-to-reach wildfires on public lands. But when he put the question to Vincent Schaefer of General Electric, who had done pioneering research in cloud seeding, Schaefer thought they could perhaps do one better: prevent the lightning that sparked the fires in the first place.
The conversations kicked off what would become Project Skyfire, a multiagency private-public research program that carried out a series of experiments through the 1950s and 1960s. Research teams seeded clouds over the San Francisco Peaks of Arizona, the Bitterroot Mountains at the edge of Idaho, and the Deerlodge National Forest in Montana, among other places.
After comparing treated and untreated storm clouds, the researchers concluded that seeding decreased cloud-to-ground lightning by more than half. But as MIT’s Stepanian and Williams noted, the sample sizes were small, and questions remained about the statistical significance of the findings.
(Soviet scientists also carried out some field experiments on lightning suppression in the 1950s, as well as some related research that involved using rockets to launch lead iodide into thunderstorms in the 1970s, but it’s difficult to find further details about those programs.)
A near tragedy reignited US government interest in the possibility of lightning suppression in 1969, when lightning struck the Apollo 12 spacecraft twice within seconds of launch. The astronauts were able to reset their systems and successfully complete their mission to the moon, but it was a very close call.
In the aftermath, NASA and NOAA teamed up on what became known as Project Thunderbolt, which relied on the metallic chaff normally used in military countermeasures.
Researchers at the US Army Electronics Laboratory had previously proposed the possibility of suppressing lightning by deploying this material, which a handful of defense contractors manufacture. The idea is that chaff acts as a conductor in a forming electrical field, stripping electrons from some oxygen and nitrogen molecules and adding them to others. The mismatched electrons already collecting in cloud water molecules, thanks to all that rubbing between snowflakes and graupel, can then leap over to those newly charged atoms. That, in turn, should reduce the buildup of static electricity that otherwise results in lightning.
“By continuously redistributing—and thereby neutralizing—charges within the storm in a weak electric field, the strong electric fields required to produce lightning would never develop,” Stepanian and Williams wrote.
NASA and NOAA carried out a series of experiments seeding clouds with chaff from the early to mid 1970s, over Boulder, Colorado, and later at the Kennedy Space Center. Here, too, the experiments showed “generally promising field results.” But NASA eventually grew concerned about the possibility that chaff could affect radio communications and shuttered the program.
“Lightning suppression research was once again abandoned, and the responsibility for mitigating lightning hazards reverted to weather forecasters,” Stepanian and Williams concluded.
So what does all this tell us about our ability to prevent lightning?
“In my opinion, it’s unambiguously true that this technique can be used to reduce lightning strikes in a storm,” says Stepanian, a technical staff member at MIT Lincoln Laboratory’s air traffic control and weather systems group. “With some major caveats.”
For example, it’s not clear how much material you would need to release, how long it would persist, and how the effectiveness might change under different climate and weather conditions.
(Stepanian consulted with Skyward in its early stages, and he declined to discuss the startup.)
His coauthor on the history of lightning suppression seems a tad more skeptical. Williams, a research scientist at MIT who studies physical meteorology and atmospheric electricity, said in email responses that there is unmistakable evidence that chaff “has an impact on the electrification of thunderstorms.” But he said its effectiveness in reducing or eliminating lightning activity “remains controversial” and requires further testing. (Williams says he did not consult for Skyward.)
In his own written reviews, he’s highlighted a number of potential shortcomings with earlier research, including unaccounted-for differences in cloud heights between treated and untreated storms. In addition, he’s noted that some studies used detection systems that pick up only cloud-to-ground strikes, not intracloud lightning, which is far more common.
He also points to the results of a more recent study that he and Stepanian collaborated on with researchers at New Mexico Tech. They relied upon data from weather radars in Tampa and Melbourne, Florida, located on opposite sides of the state, to detect the presence of chaff released over the central part of the state during military training and testing exercises.
They compared 35 storms during which chaff was clearly detected in clouds with 35 instances when it wasn’t.
According to an abstract of the paper—which hasn’t been peer-reviewed or published but was presented at the American Geophysical Union conference in December—storms that occurred when chaff was present were generally “smaller and shorter-lived.”
But the number of total flashes—which includes ground strikes as well as lightning within and between clouds and the air—was actually significantly higher in clouds carrying chaff: 62,250 versus 24,492.
“In summary, so far, it is hard to draw any conclusion about lightning suppression using chaff,” the authors wrote.
Williams says their results and other studies suggest that large chaff concentrations may be needed to suppress lightning. That could be because there’s a strong tendency for the ions released from the chaff fibers to be captured by cloud droplets before they reach the charged particles that would need to be neutralized.
But that may also present a significant deployment challenge, since chaff quickly becomes dilute once it’s released into the midst of turbulent storm clouds, Williams adds.
Skyward’s Harterre said he couldn’t comment on the results of the Florida study but noted that storms in the state are very different from those that occur in the Canadian provinces where his company operates.
“Our work to date has focused on regions where operational feasibility has been evaluated and wildfire risk is highest,” he wrote.
The possibility of releasing more chaff into the air also raises questions about what else it could do in the atmosphere, and what will happen once it lands.
The US military has produced a number of studies exploring the environmental and health effects of chaff and found that it disperses widely, breaks down in the environment, and is “generally nontoxic.”
For instance, a Naval Health Research Center report assessing environmental impacts from decades of training exercises near Chesapeake Bay concluded that “current and estimated use of aluminized chaff by American forces worldwide” will not raise total aluminum levels above the Environmental Protection Agency’s established limits.
But a US Government Accountability Office report in 1998 raised a few other flags, noting that chaff can also affect civilian air traffic control radar and weather forecasts. It also highlighted a “potential but remote chance of collecting in reservoirs and causing chemical changes that may affect water and the species that use it.”
Stepanian says that if lightning suppression efforts require more chaff than the military currently releases, further studies may be needed to properly evaluate the environmental effects.
Brooks of Environmental Defence Canada says he wants to know more about what materials Skyward is using, where they’re sourced from, what the effort leaves behind in the environment, and what the impacts on animals could be. He is also wary of the possible secondary effects of intervening in storms.
“I just think there’s the potential for unintended consequences if we start to mess with a complex system, like weather,” Brooks says, adding: “It makes me nervous to think there are pilots going on without people knowing about them.”
Harterre said that the company abides by any applicable regulations, and that it conducts its field activities “in coordination with relevant authorities and with appropriate authorization.”
He added that it releases seeding materials at lower volumes and concentrations than those associated with defense use and that deployments “are limited to defined high-wildfire-risk storm conditions.”
It’s not clear whether or to what degree Skyward has meaningfully advanced the science of lightning suppression or cleared up the questions that have lingered since the studies from the last century.
The company hasn’t released data from its field trials, published any papers in peer-reviewed literature, or disclosed how its tests were performed, as far as MIT Technology Review was able to determine.
Without such information it’s impossible to assess its claims, Williams says. He and two of his New Mexico Tech coauthors—associate professor Adonis Leal and master’s student Jhonys Moura—had all expressed skepticism about the company’s previous claim of “up to 100%” lightning prevention.
Harterre said Skyward intends to release more technical information as its programs mature.
“We look forward to the opportunity to share more detailed information,” he wrote.
In the meantime, Skyward’s investors have high hopes for the company and see “tremendous opportunity” in its potential ability to counteract fire dangers.
“Mitigating the exponentially increasing risk of wildfires can only happen if we shift from reactive suppression to proactive prevention,” Kevin Kimsa, managing partner of Climate Innovation Capital, said in a statement when the company’s recent funding was announced.
Rogers, of the Woodwell Climate Research Center, has spoken with Skyward several times but hasn’t worked with them. He also stressed that it’s crucial to understand potential environmental impacts from lightning suppression and to consult with citizens in affected areas, including Indigenous communities.
But he says he's "optimistic" about the role that lightning suppression could play, if it works effectively and without major downsides.
That's because preventing wildfires is far cheaper than putting them out, and it avoids risks to firefighters, ecosystems, infrastructure, and local communities.
“If you’re able to go after fires before they’ve even ignited, you remove a lot of that from the equation,” he says.
2026-03-03 01:29:42
On February 28, OpenAI announced it had reached a deal that will allow the US military to use its technologies in classified settings. CEO Sam Altman said the negotiations, which the company began pursuing only after the Pentagon’s public reprimand of Anthropic, were “definitely rushed.”
In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused.
You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon.
It’s not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. Walking that tightrope will be tricky. (OpenAI did not immediately respond to requests for additional information about its agreement.)
But the devil is also in the details. The reason OpenAI was able to make a deal when Anthropic could not, Altman said, had less to do with boundaries than with approach. "Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with," he wrote.
OpenAI says one basis for its willingness to work with the Pentagon is simply an assumption that the government won’t break the law. The company, which has shared a limited excerpt of its contract, cites a number of laws and policies related to autonomous weapons and surveillance. They are as specific as a 2023 directive from the Pentagon on autonomous weapons (which does not prohibit them but issues guidelines for their design and testing) and as broad as the Fourth Amendment, which has supported protections for Americans against mass surveillance.
However, the published excerpt “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use,” wrote Jessica Tillipman, associate dean for government procurement law studies at George Washington University’s law school. It simply states that the Pentagon can’t use OpenAI’s tech to break any of those laws and policies as they’re stated today.
The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use.
OpenAI could say, as its head of national security partnerships wrote yesterday, that if you believe the government won’t follow the law, then you should also not be confident it would honor the red lines that Anthropic was proposing. But that’s not an argument against setting them. Imperfect enforcement doesn’t make constraints meaningless, and contract terms still shape behavior, oversight, and political consequences.
OpenAI claims a second line of defense. The company says it maintains control over the safety rules governing its models and will not give the military a version of its AI stripped of those safety controls. "We can embed our red lines—no mass surveillance and no directing weapons systems without human involvement—directly into model behavior," wrote Boaz Barak, an OpenAI employee Altman deputized to speak on the issue on X.
But the company doesn’t specify how its safety rules for the military differ from its rules for normal users. Enforcement is also never perfect, and it is especially unlikely to be when OpenAI is rolling out these protections in a classified setting for the first time and is expected to do so in just six months.
There’s another question beneath all this: Should it be down to tech companies to prohibit things that are legal but that they find morally objectionable? The government certainly viewed Anthropic’s willingness to play this role as unacceptable. On Friday evening, eight hours before the US launched strikes in Tehran, Defense Secretary Pete Hegseth issued harsh remarks on X. “Anthropic delivered a master class in arrogance and betrayal,” he wrote, and echoed President Trump’s order for the government to cease working with the AI company after Anthropic sought to keep its model Claude from being used for autonomous weapons or mass domestic surveillance. “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose,” Hegseth wrote.
But unless OpenAI's full contract reveals more, it's hard not to see the company as sitting on an ideological seesaw: promising that it has leverage it will proudly use to do what it sees as the right thing, while deferring to the law as the main backstop for what the Pentagon can do with its tech.
There are three things to watch here. One is whether this position will be good enough for OpenAI's most critical employees. With AI companies spending so heavily on talent, it's possible that some at OpenAI see in Altman's justification an unforgivable compromise.
Second, there is the scorched-earth campaign that Hegseth has promised to wage against Anthropic. Going far beyond simply canceling the government’s contract with the company, he announced that it would be classified as a supply chain risk, and that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” There is significant debate about whether this death blow is legally possible, and Anthropic has said it will sue if the threat is pursued. OpenAI has also come out against the move.
Lastly, how will the Pentagon swap out Claude—the only AI model it actively uses in classified operations, including some in Venezuela—while it escalates strikes against Iran? Hegseth granted the agency six months to do so, during which the military will phase in OpenAI’s models as well as those from Elon Musk’s xAI.
But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground.
If you have information to share about how this is unfolding, reach out to me via Signal (username: jamesodonnell.22).