2025-12-29 19:00:00
Pollution from textile production—dyes, chemicals, and heavy metals like lead and cadmium—is common in the waters of the Buriganga River as it runs through Dhaka, Bangladesh. It’s among many harms posed by a garment sector that was once synonymous with tragedy: In 2013, the eight-story Rana Plaza factory building collapsed, killing 1,134 people and injuring some 2,500 others.

But things are starting to change. In recent years the country has quietly become an unlikely leader in “frugal” factories that use a combination of resource-efficient technologies to cut waste, conserve water, and build resilience against climate impacts and global supply disruptions. Bangladesh now boasts 268 LEED-certified garment factories—more than any other country. Dye plants are using safer chemicals, tanneries are adopting cleaner tanning methods and treating wastewater, workshops are switching to more efficient LED lighting, and solar panels glint from rooftops. The hundreds of factories along the Buriganga’s banks and elsewhere in Bangladesh are starting to stitch together a new story, woven from greener threads.

In Fakir Eco Knitwears’ LEED Gold–certified factory in Narayanganj, a city near Dhaka, skylights reduce energy consumption from electric lighting by 40%, and AI-driven cutters allow workers to recycle 95% of fabric scraps into new yarns. “We save energy by using daylight, solar power, and rainwater instead of heavy AC and boilers,” says Md. Anisuzzaman, an engineer at the company. “It shows how local resources can make production greener and more sustainable.”
The shift to green factories in Bangladesh is financed through a combination of factory investments, loans from Bangladesh Bank’s Green Transformation Fund, and pressure from international buyers who reward compliance with ongoing orders. One prominent program is the Partnership for Cleaner Textile (PaCT), an initiative run by the World Bank Group’s International Finance Corporation. Launched in 2013, PaCT has worked with more than 450 factories on cleaner production methods. By its count, the effort now saves 35 billion liters of fresh water annually, enough to meet the needs of 1.9 million people.

It’s a good start, but Bangladesh’s $40 billion garment industry still has a long way to go. The shift to environmentalism at the factory level hasn’t translated to improved outcomes for the sector’s 4.4 million workers.
Wage theft and delayed payments are widespread. The minimum wage, some 12,500 taka per month (about $113), is far below the $200 proposed by unions—which has meant frequent strikes and protests over pay, overtime, and job security. “Since Rana Plaza, building safety and factory conditions have improved, but the mindset remains unchanged,” says A.K.M. Ashraf Uddin, executive director of the Bangladesh Labour Foundation, a nonprofit labor rights group. “Profit still comes first, and workers’ freedom of speech is yet to be realized.”

In the worst case, greener industry practices could actually exacerbate inequality. Smaller factories dominate the sector, and they struggle to afford upgrades. But without those upgrades, businesses could find themselves excluded from certain markets. One of those is the European Union, which plans to require companies to address human rights and environmental problems in supply chains starting in 2027. A cleaner Buriganga River mends just a small corner of a vast tapestry of need.
Zakir Hossain Chowdhury is a visual journalist based in Bangladesh.
2025-12-26 20:00:00
It’s been a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more.
As the year winds down, we wanted to give you a chance to revisit a bit of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers.
We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
Understanding AI’s energy use was a huge global conversation in 2025 as hundreds of millions of people began using generative AI tools on a regular basis. Senior reporters James O’Donnell and Casey Crownhart dug into the numbers and published an unprecedented look at AI’s resource demand, down to the level of a single query, to help us know how much energy and water AI may require moving forward.
We’re learning more about what vitamin D does to our bodies
Vitamin D deficiency is widespread, particularly in the winter when there’s less sunlight to drive its production in our bodies. The “sunshine vitamin” is important for bone health, but as senior reporter Jessica Hamzelou reported, recent research is also uncovering surprising new insights into other ways it might influence our bodies, including our immune systems and heart health.
What is AI?
Senior editor Will Douglas Heaven’s expansive look at how to define AI was published in 2024, but it still managed to connect with many readers this year. He lays out why no one can agree on what AI is—and explains why that ambiguity matters, and how it can inform our own critical thinking about this technology.
Ethically sourced “spare” human bodies could revolutionize medicine
In this thought-provoking op-ed, a team of experts at Stanford University argue that creating living human bodies that can’t think, don’t have any awareness, and can’t feel pain could shake up medical research and drug development by providing essential biological materials for testing and transplantation. Recent advances in biotechnology now provide a potential pathway to such “bodyoids,” though plenty of technical challenges and ethical hurdles remain.
It’s surprisingly easy to stumble into a relationship with an AI chatbot
Chatbots were everywhere this year, and reporter Rhiannon Williams chronicled how quickly people can develop bonds with one. That’s all right for some people, she notes, but dangerous for others. Some folks even describe unintentionally forming romantic relationships with chatbots. This is a trend we’ll definitely be keeping an eye on in 2026.
Is this the electric grid of the future?
The electric grid is bracing for disruption from more frequent storms and fires, as well as an uncertain policy and regulatory landscape. And in many ways, Lincoln Electric System, a publicly owned utility in Nebraska, is an ideal lens through which to examine this shift as it works through the challenges of delivering service that’s reliable, affordable, and sustainable.
Exclusive: A record-breaking baby has been born from an embryo that’s over 30 years old
This year saw the birth of the world’s “oldest baby”: Thaddeus Daniel Pierce, who arrived on July 26. The embryo he developed from was created in 1994 during the early days of IVF and had been frozen and sitting in storage ever since. The new baby’s parents were toddlers at the time, and the embryo was donated to them decades later via a Christian “embryo adoption” agency.
How these two brothers became go-to experts on America’s “mystery drone” invasion
Twin brothers John and Gerald Tedesco teamed up to investigate a concerning new threat—unidentified drones. In 2024 alone, some 350 drones entered airspace over a hundred different US military installations, and many cases went unsolved, according to a top military official. This story takes readers inside the equipment-filled RV the Tedescos created to study mysterious aerial phenomena, and how they made a name for themselves among government officials.
10 Breakthrough Technologies of 2025
For more than 20 years, our newsroom has published this annual look at the advances that will matter in the long run. This year’s list featured generative AI search, cleaner jet fuel, long-acting HIV prevention meds, and other emerging technologies that our journalists think are worth watching. We’ll publish the 2026 edition of the list on January 12, so stay tuned. (In the meantime, here’s what didn’t make the cut.)
2025-12-26 19:00:00
It’s getting harder to beat the heat. During the summer of 2025, heat waves knocked out power grids in North America, Europe, and the Middle East. Global warming means more people need air-conditioning, which requires more power and strains grids. But a millennia-old idea (plus 21st-century tech) might offer an answer: radiative cooling. Paints, coatings, and textiles can scatter sunlight and dissipate heat—no additional energy required.
“Radiative cooling is universal—it exists everywhere in our daily life,” says Qiaoqiang Gan, a professor of materials science and applied physics at King Abdullah University of Science and Technology in Saudi Arabia. Pretty much any object will absorb heat from the sun during the day and radiate some of it back at night. It’s why cars parked outside overnight are often covered with condensation, Gan says—their metal roofs dissipate heat into the sky, cooling the surfaces below the ambient air temperature. That’s how you get dew.
Humans have harnessed this basic natural process for thousands of years. Desert peoples in Iran, North Africa, and India manufactured ice by leaving pools of water exposed to clear desert skies overnight, when radiative cooling happens naturally; other cultures constructed “cool roofs” capped with reflective materials that scattered sunlight and lowered interior temperatures. “People have taken advantage of this effect, either knowingly or unknowingly, for a very long time,” says Aaswath Raman, a materials scientist at UCLA and cofounder of the radiative-cooling startup SkyCool Systems.
Modern approaches, as demonstrated everywhere from California supermarket rooftops to Japan’s Expo 2025 pavilion, go even further. Normally, if the sun is up and pumping in heat, surfaces can’t get cooler than the ambient temperature. But back in 2014, Raman and his colleagues achieved radiative cooling in the daytime. They customized photonic films to absorb and then radiate heat at infrared wavelengths between 8 and 13 micrometers—a range of electromagnetic wavelengths called an “atmospheric window,” because radiation in that band escapes to space rather than getting absorbed by the atmosphere. Those films could dissipate heat even under full sun, cooling surfaces to 9 °F below the ambient air temperature, with no AC or energy source required.
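The mechanism can be summed up as an energy balance on a sky-facing surface; the schematic form below is how the radiative-cooling literature typically frames it (a textbook sketch, not the exact expression from Raman’s paper):

```latex
% Schematic net cooling power of a sky-facing radiative cooler at surface temperature T_s
P_{\mathrm{cool}}(T_s) \;=\; P_{\mathrm{rad}}(T_s) \;-\; P_{\mathrm{atm}}(T_{\mathrm{amb}}) \;-\; P_{\mathrm{sun}} \;-\; P_{\mathrm{cond+conv}}
```

Here P_rad is the thermal radiation the surface emits (much of it through the 8-to-13-micrometer window), P_atm is the downwelling atmospheric radiation it absorbs at ambient temperature T_amb, P_sun is absorbed sunlight, and P_cond+conv is heat gained from the surrounding air. Daytime cooling simply means keeping P_cool positive under full sun, which is why these materials must reflect nearly all incoming sunlight while emitting strongly inside the window.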
That was proof of concept; today, Raman says, the industry has mostly shifted away from advanced photonics that exploit the atmospheric-window effect and toward simpler sunlight-scattering materials. Ceramic cool roofs, nanostructured coatings, and reflective polymers can all reflect more sunlight across all wavelengths, and they’re more durable and scalable.
Now the race is on. Startups such as SkyCool, Planck Energies, Spacecool, and i2Cool are competing to commercially manufacture and sell coatings that reflect at least 94% of sunlight in most climates, and above 97% in humid tropical ones. Pilot projects have already provided significant cooling to residential buildings, reducing AC energy needs by 15% to 20% in some cases.
This idea could go way beyond reflective rooftops and roads. Researchers are developing reflective textiles that can be worn by people most at risk of heat exposure. “This is personal thermal management,” says Gan. “We can realize passive cooling in T-shirts, sportswear, and garments.”

Of course, these technologies and materials have limits. Like solar power, they’re vulnerable to weather. Clouds prevent reflected sunlight from bouncing into space. Dust and air pollution dim materials’ bright surfaces. Lots of coatings lose their reflectivity after a few years. And the cheapest and toughest materials used in radiative cooling tend to rely on Teflon and other fluoropolymers, “forever chemicals” that don’t biodegrade, posing an environmental risk. “They are the best class of products that tend to survive outdoors,” says Raman. “So for long-term scale-up, can you do it without materials like those fluoropolymers and still maintain the durability and hit this low cost point?”
As with any other solution to the problems of climate change, one size won’t fit all. “We cannot be overoptimistic and say that radiative cooling can address all our future needs,” Gan says. “We still need more efficient active air-conditioning.” A shiny roof isn’t a panacea, but it’s still pretty cool.
Becky Ferreira is a science reporter based in upstate New York and author of First Contact: The Story of Our Obsession with Aliens.
2025-12-25 18:00:00
If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.
If that’s left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse.
Make sure you take the time to brace yourself for what promises to be another bonkers year.
—Rhiannon Williams

As long as people have been hyping AI, they have been coming up with names for a future, ultra-powerful form of the technology that could bring about utopian or dystopian consequences for humanity. “Superintelligence” is the latest hot term. Meta announced in July that it would form an AI team to pursue superintelligence, and it was reportedly offering nine-figure compensation packages to lure AI experts away from its competitors.
In December, Microsoft’s head of AI followed suit, saying the company would be spending big sums, perhaps hundreds of billions, on the pursuit of superintelligence. If you think superintelligence is as vaguely defined as artificial general intelligence, or AGI, you’d be right! While it’s conceivable that these sorts of technologies will be feasible in humanity’s long run, the question is really when, and whether today’s AI is good enough to be treated as a stepping stone toward something like superintelligence. Not that that will stop the hype kings. —James O’Donnell

Thirty years ago, Steve Jobs said everyone in America should learn how to program a computer. Today, people with zero knowledge of how to code can whip up an app, game, or website in no time at all thanks to vibe coding—a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To vibe-code, you simply prompt a generative AI coding assistant to create the digital object of your desire and accept pretty much everything it spits out. Will the result work? Possibly not. Will it be secure? Almost definitely not, but the technique’s biggest champions aren’t letting those minor details stand in their way. Also—it sounds fun! —Rhiannon Williams

One of the biggest AI stories over the past year has been how prolonged interactions with chatbots can cause vulnerable people to experience delusions and, in some extreme cases, can either cause or worsen psychosis. Although “chatbot psychosis” is not a recognized medical term, researchers are paying close attention to the growing anecdotal evidence from users who say it’s happened to them or someone they know. Sadly, the increasing number of lawsuits filed against AI companies by the families of people who died following their conversations with chatbots demonstrates the technology’s potentially deadly consequences. —Rhiannon Williams

Few things kept the AI hype train going this year more than so-called reasoning models, LLMs that can break down a problem into multiple steps and work through them one by one. OpenAI released its first reasoning models, o1 and o3, a year ago.
A month later, the Chinese firm DeepSeek took everyone by surprise with a very fast follow, putting out R1, the first open-source reasoning model. In no time, reasoning models became the industry standard: All major mass-market chatbots now come in flavors backed by this tech. Reasoning models have pushed the envelope of what LLMs can do, matching top human performances in prestigious math and coding competitions. On the flip side, all the buzz about LLMs that could “reason” reignited old debates about how smart LLMs really are and how they really work. Like “artificial intelligence” itself, “reasoning” is technical jargon dressed up with marketing sparkle. Choo choo! —Will Douglas Heaven

For all their uncanny facility with language, LLMs have very little common sense. Put simply, they don’t have any grounding in how the world works. Book learners in the most literal sense, LLMs can wax lyrical about everything under the sun and then fall flat with a howler about how many elephants you could fit into an Olympic swimming pool (exactly one, according to one of Google DeepMind’s LLMs).
World models—a broad church encompassing various technologies—aim to give AI some basic common sense about how stuff in the world actually fits together. In their most vivid form, world models like Google DeepMind’s Genie 3 and Marble, the much-anticipated new tech from Fei-Fei Li’s startup World Labs, can generate detailed and realistic virtual worlds for robots to train in and more. Yann LeCun, Meta’s former chief AI scientist, is also working on world models. He has been trying to give AI a sense of how the world works for years, by training models to predict what happens next in videos. This year he quit Meta to focus on this approach in a new startup called Advanced Machine Intelligence Labs. If all goes well, world models could be the next thing. —Will Douglas Heaven

Have you heard about all the people saying no thanks, we actually don’t want a giant data center plopped in our backyard? The data centers in question—which tech companies want to build everywhere, including space—are typically referred to as hyperscalers: massive buildings purpose-built for AI operations and used by the likes of OpenAI and Google to build bigger and more powerful AI models. Inside such buildings, the world’s best chips hum away training and fine-tuning models, and the facilities are built to be modular so they can grow with demand.
It’s been a big year for hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate project, a $500 billion joint venture to pepper the country with the largest data centers ever. But it leaves almost everyone else asking: What exactly do we get out of it? Consumers worry the new data centers will raise their power bills. Such buildings generally struggle to run on renewable energy. And they don’t tend to create all that many jobs. But hey, maybe these massive, windowless buildings could at least give a moody, sci-fi vibe to your community. —James O’Donnell

The lofty promises of AI are levitating the economy. AI companies are raising eye-popping sums of money and watching their valuations soar into the stratosphere. They’re pouring hundreds of billions of dollars into chips and data centers, financed increasingly by debt and eyebrow-raising circular deals. Meanwhile, the companies leading the gold rush, like OpenAI and Anthropic, might not turn a profit for years, if ever. Investors are betting big that AI will usher in a new era of riches, yet no one knows how transformative the technology will actually be.
Most organizations using AI aren’t yet seeing the payoff, and AI work slop is everywhere. There’s scientific uncertainty about whether scaling LLMs will deliver superintelligence or whether new breakthroughs need to pave the way. But unlike their predecessors in the dot-com bubble, AI companies are showing strong revenue growth, and some are even deep-pocketed tech titans like Microsoft, Google, and Meta. Will the manic dream ever burst? —Michelle Kim

This year, AI agents were everywhere. Every new feature announcement, model drop, or security report throughout 2025 was peppered with mentions of them, even though plenty of AI companies and experts disagree on exactly what counts as being truly “agentic,” a vague term if ever there was one. No matter that it’s virtually impossible to guarantee that an AI acting on your behalf out on the open web will always do exactly what it’s supposed to do—it seems as though agentic AI is here to stay for the foreseeable future. Want to sell something? Call it agentic! —Rhiannon Williams

Early this year, DeepSeek unveiled R1, an open-source reasoning model that matches top Western models at a fraction of the cost. Its launch freaked Silicon Valley out, as many suddenly realized for the first time that huge scale and resources were not necessarily required to build top-tier AI models. Nvidia stock plunged by 17% about a week after R1 was released.
The key to R1’s success was distillation, a technique that makes AI models more efficient. It works by getting a bigger model to tutor a smaller one: You run the teacher model on a lot of examples, record its answers, and then train the student model to match those responses as closely as possible, so that it ends up with a compressed version of the teacher’s knowledge. —Caiwei Chen
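To make that tutoring loop concrete, here is a minimal sketch of classic response-based distillation in PyTorch. The toy models, temperature, and loss weighting below are illustrative assumptions rather than DeepSeek’s actual recipe; the point is the shape of the idea, not the specifics.

```python
# Minimal knowledge-distillation sketch (PyTorch). The tiny models, temperature,
# and loss weights are illustrative assumptions, not DeepSeek's recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

T = 2.0      # temperature: softens the teacher's output distribution
alpha = 0.5  # balance between imitating the teacher and fitting the true labels

def distill_step(x, labels):
    with torch.no_grad():            # the teacher only provides targets
        teacher_logits = teacher(x)
    student_logits = student(x)
    # Pull the student's softened distribution toward the teacher's
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Ordinary supervised loss on the ground-truth labels
    hard_loss = F.cross_entropy(student_logits, labels)
    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One step on random stand-in data
x = torch.randn(16, 32)
labels = torch.randint(0, 10, (16,))
print(distill_step(x, labels))
```

The temperature is the quietly important part: softening the teacher’s probabilities lets the student learn from the relative confidence the teacher assigns to wrong answers, not just its top pick.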

As people across the world spend increasing amounts of time interacting with chatbots like ChatGPT, chatbot makers are struggling to work out the kind of tone and “personality” the models should adopt. Back in April, OpenAI admitted it’d struck the wrong balance between helpful and sniveling, saying a new update had rendered GPT-4o too sycophantic. Having it suck up to you isn’t just irritating—it can mislead users by reinforcing their incorrect beliefs and spreading misinformation. So consider this your reminder to take everything—yes, everything—LLMs produce with a pinch of salt. —Rhiannon Williams

If there is one AI-related term that has fully escaped the nerd enclosures and entered public consciousness, it’s “slop.” The word itself is old (think pig feed), but “slop” is now commonly used to refer to low-effort, mass-produced content generated by AI, often optimized for online traffic. A lot of people even use it as a shorthand for any AI-generated content. It has felt inescapable in the past year: We have been marinated in it, from fake biographies to shrimp Jesus images to surreal human-animal hybrid videos.
But people are also having fun with it. The term’s sardonic flexibility has made it easy for internet users to slap it on all kinds of words as a suffix to describe anything that lacks substance and is absurdly mediocre: think “work slop” or “friend slop.” As the hype cycle resets, “slop” marks a cultural reckoning about what we trust, what we value as creative labor, and what it means to be surrounded by stuff that was made for engagement rather than expression. —Caiwei Chen

Did you come across the hypnotizing video from earlier this year of a humanoid robot putting away dishes in a bleak, gray-scale kitchen? That pretty much embodies physical intelligence: the idea that advances in AI can help robots better navigate the physical world.
It’s true that robots have been able to learn new tasks faster than ever before, everywhere from operating rooms to warehouses. Self-driving-car companies have seen improvements in how they simulate the roads, too. That said, it’s still wise to be skeptical that AI has revolutionized the field. Consider, for example, that many robots advertised as butlers in your home are doing the majority of their tasks thanks to remote operators in the Philippines.
The road ahead for physical intelligence is also sure to be weird. Large language models train on text, which is abundant on the internet, but robots learn more from videos of people doing things. That’s why the robot company Figure suggested in September that it would pay people to film themselves in their apartments doing chores. Would you sign up? —James O’Donnell

AI models are trained by devouring millions of words and images across the internet, including copyrighted work by artists and writers. AI companies argue this is “fair use”—a legal doctrine that lets you use copyrighted material without permission if you transform it into something new that doesn’t compete with the original. Courts are starting to weigh in. In June, Anthropic’s training of its AI model Claude on a library of books was ruled fair use because the technology was “exceedingly transformative.”
That same month, Meta scored a similar win, but only because the authors couldn’t show that the company’s literary buffet cut into their paychecks. As copyright battles brew, some creators are cashing in on the feast. In December, Disney signed a splashy deal with OpenAI to let users of Sora, the AI video platform, generate videos featuring more than 200 characters from Disney’s franchises. Meanwhile, governments around the world are rewriting copyright rules for the content-guzzling machines. Is training AI on copyrighted work fair use? As with any billion-dollar legal question, it depends. —Michelle Kim

Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO) is giving way to GEO—generative engine optimization—as the AI boom forces brands and businesses to scramble to maximize their visibility in AI tools, whether that’s in AI-enhanced search results like Google’s AI Overviews or within responses from LLMs. It’s no wonder they’re freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and let their users visit sites directly from within their platforms. It’s time to adapt or die. —Rhiannon Williams
2025-12-24 19:00:00
Climate news hasn’t been great in 2025. Global greenhouse-gas emissions hit record highs (again). This year is set to be either the second or third warmest on record. Climate-fueled disasters like wildfires in California and flooding in Indonesia and Pakistan devastated communities and caused billions in damage.
In addition to these worrying indicators of our continued contributions to climate change and their obvious effects, the world’s largest economy has made a sharp U-turn on climate policy this year. The US under the Trump administration withdrew from the Paris Agreement, cut funds for climate research, and scrapped billions of dollars in funding for climate tech projects.
We’re in a severe situation with climate change. But for those looking for bright spots, there was some good news in 2025. Here are a few of the positive stories our climate reporters noticed this year.

One of the most notable and encouraging signs of progress this year occurred in China. The world’s second-biggest economy and biggest climate polluter has managed to keep carbon dioxide emissions flat for the last year and a half, according to an analysis in Carbon Brief.
That’s happened before, but only when the nation’s economy was contracting, including in the midst of the covid-19 pandemic. But emissions are now falling even as China’s economy is on track to grow about 5% this year and electricity demand continues to rise.
So what’s changed? China has now installed so much solar and wind power, and put so many EVs on the road, that its economy can continue to expand without increasing the amount of carbon dioxide it’s pumping into the atmosphere, decoupling emissions from economic growth.
Specifically, China added an astounding 240 gigawatts of solar power capacity and 61 gigawatts of wind power in the first nine months of the year, the Carbon Brief analysis noted. That’s nearly as much solar capacity as the US has installed in total, and it was added in just three quarters.
It’s too early to say China’s emissions have peaked, but the country has said it will officially reach that benchmark before 2030.
To be clear, China still isn’t moving fast enough to keep the world on track for meeting relatively safe temperature targets. (Indeed, very few countries are.) But it’s now both producing most of the world’s clean energy technologies and curbing its emissions growth, providing a model for cleaning up industrial economies without sacrificing economic prosperity—and setting the stage for faster climate progress in the coming years.

It’s hard to articulate just how quickly batteries for grid storage are coming online. These massive arrays of cells can soak up electricity when sources like solar are available and prices are low, and then discharge power back to the grid when it’s needed most.
Back in 2015, the battery storage industry had installed only a fraction of a gigawatt of battery storage capacity across the US. That year, it set a seemingly bold target of adding 35 gigawatts by 2035. The sector passed that goal a decade early this year and then hit 40 gigawatts a couple of months later.
Costs are still falling, which could help maintain the momentum for the technology’s deployment. This year, battery prices for EVs and stationary storage fell yet again, reaching a record low, according to data from BloombergNEF. Battery packs specifically used for grid storage saw prices fall even faster than the average; they cost 45% less than last year.
We’re starting to see what happens on grids with lots of battery capacity, too: in California and Texas, batteries are already helping meet demand in the evenings, reducing the need to run natural-gas plants. The result: a cleaner, more stable grid.

The AI boom is complicated for our energy system, as we covered at length this year. Electricity demand is ticking up: the amount of power utilities supplied to US data centers jumped 22% this year and will more than double by 2030.
But at least one positive shift is coming out of AI’s influence on energy: It’s driving renewed interest and investment in next-generation energy technologies.
In the near term, much of the energy needed for data centers, including those that power AI, will likely come from fossil fuels, especially new natural-gas power plants. But tech giants like Google, Microsoft, and Meta all have goals on the books to reduce their greenhouse-gas emissions, so they’re looking for alternatives.
Meta signed a deal with XGS Energy in June to purchase up to 150 megawatts of electricity from a geothermal plant. In October, Google signed an agreement that will help reopen Duane Arnold Energy Center in Iowa, a previously shuttered nuclear power plant.
Geothermal and nuclear could be key pieces of the grid of the future, as they can provide constant power in a way that wind and solar don’t. There’s a long way to go for many of the new versions of the tech, but more money and interest from big, powerful players can’t hurt.

Perhaps the strongest evidence of collective climate progress so far: We’ve already avoided the gravest dangers that scientists feared just a decade ago.
The world is on track for about 2.6 °C of warming over preindustrial conditions by 2100, according to Climate Action Tracker, an independent scientific effort to track the policy progress that nations have made toward their goals under the Paris climate agreement.
That’s a lot warmer than we want the planet to ever get. But it’s also a whole degree better than the 3.6 °C path that we were on a decade ago, just before nearly 200 countries signed the Paris deal.
That progress occurred because more and more nations passed emissions mandates, funded subsidies, and invested in research and development—and private industry got busy cranking out vast amounts of solar panels, wind turbines, batteries, and EVs.
The bad news is that progress has stalled. Climate Action Tracker notes that its warming projections have remained stubbornly fixed for the last four years, as nations have largely failed to take the additional action needed to bend that curve closer to the 2 °C goal set out in the international agreement.
But having shaved off a degree of danger is still demonstrable proof that we can pull together in the face of a global threat and address a very, very hard problem. And it means we’ve done the difficult work of laying down the technical foundation for a society that can largely run without spewing ever more greenhouse gas into the atmosphere.
Hopefully, as cleantech continues to improve and climate change steadily worsens, the world will find the collective will to pick up the pace again soon.
2025-12-24 19:00:00
In April 2025, Ronald Deibert left all electronic devices at home in Toronto and boarded a plane. When he landed in Illinois, he took a taxi to a mall and headed directly to the Apple Store to purchase a new laptop and iPhone. He’d wanted to keep the risk of having his personal devices confiscated to a minimum, because he knew his work made him a prime target for surveillance. “I’m traveling under the assumption that I am being watched, right down to exactly where I am at any moment,” Deibert says.
Deibert directs the Citizen Lab, a research center he founded in 2001 to serve as “counterintelligence for civil society.” Housed at the University of Toronto, the lab operates independently of governments or corporate interests, relying instead on research grants and private philanthropy for financial support. It’s one of the few institutions that investigate cyberthreats exclusively in the public interest, and in doing so, it has exposed some of the most egregious digital abuses of the past two decades.
For many years, Deibert and his colleagues have held up the US as the standard for liberal democracy. But that’s changing, he says: “The pillars of democracy are under assault in the United States. For many decades, in spite of its flaws, it has upheld norms about what constitutional democracy looks like or should aspire to. [That] is now at risk.”
Even as some of his fellow Canadians avoided US travel after Donald Trump’s second election, Deibert relished the opportunity to visit. Alongside his meetings with human rights defenders, he also documented active surveillance at Columbia University during the height of its student protests. Deibert snapped photos of drones above campus and noted the exceptionally strict security protocols. “It was unorthodox to go to the United States,” he says. “But I really gravitate toward problems in the world.”
Deibert, 61, grew up in East Vancouver, British Columbia, a gritty area with a boisterous countercultural presence. In the ’70s, Vancouver brimmed with draft dodgers and hippies, but Deibert points to American investigative journalism—exposing the COINTELPRO surveillance program, the Pentagon Papers, Watergate—as the seed of his respect for antiestablishment sentiment. He didn’t imagine that this fascination would translate into a career, however.
“My horizons were pretty low because I came from a working-class family, and there weren’t many people in my family—in fact, none—who went on to university,” he says.
Deibert eventually entered a graduate program in international relations at the University of British Columbia. His doctoral research brought him to a field of inquiry that would soon explode: the geopolitical implications of the nascent internet.
“In my field, there were a handful of people beginning to talk about the internet, but it was very shallow, and that frustrated me,” he says. “And meanwhile, computer science was very technical, but not political—[politics] was almost like a dirty word.”
Deibert continued to explore these topics at the University of Toronto when he was appointed to a tenure-track professorship, but it wasn’t until after he founded the Citizen Lab in 2001 that his work rose to global prominence.
What put the lab on the map, Deibert says, was its 2009 report “Tracking GhostNet,” which uncovered a digital espionage network in China that had breached offices of foreign embassies and diplomats in more than 100 countries, including the office of the Dalai Lama. The report and its follow-up in 2010 were among the first to publicly expose cybersurveillance in real time. In the years since, the lab has published over 180 such analyses, garnering praise from human rights advocates ranging from Margaret Atwood to Edward Snowden.
The lab has rigorously investigated authoritarian regimes around the world (Deibert says both Russia and China have his name on a “list” barring his entry). The group was the first to uncover the use of commercial spyware to surveil people close to the Saudi dissident and Washington Post journalist Jamal Khashoggi prior to his assassination, and its research has directly informed G7 and UN resolutions on digital repression and led to sanctions on spyware vendors. Even so, in 2025 US Immigration and Customs Enforcement reactivated a $2 million contract with the spyware vendor Paragon. The contract, which the Biden administration had previously placed under a stop-work order, resembles steps taken by governments in Europe and Israel that have also deployed domestic spyware to address security concerns.
“It saves lives, quite literally,” Cindy Cohn, executive director of the Electronic Frontier Foundation, says of the lab’s work. “The Citizen Lab [researchers] were the first to really focus on technical attacks on human rights activists and democracy activists all around the world. And they’re still the best at it.”
When recruiting new Citizen Lab employees (or “Labbers,” as they refer to one another), Deibert forgoes stuffy, pencil-pushing academics in favor of brilliant, colorful personalities, many of whom personally experienced repression from some of the same regimes the lab now investigates.
Noura Aljizawi, a researcher on digital repression who survived torture at the hands of the al-Assad regime in Syria, researches the distinct threat that digital technologies pose to women and queer people, particularly when deployed against exiled nationals. She helped create Security Planner, a tool that gives personalized, expert-reviewed guidance to people looking to improve their digital hygiene, for which the University of Toronto awarded her an Excellence Through Innovation Award.
Work for the lab is not without risk. Citizen Lab fellow Elies Campo, for example, was followed and photographed after the lab published a 2022 report that exposed the digital surveillance of dozens of Catalonian citizens and members of parliament, including four Catalonian presidents who were targeted during or after their terms.
Still, the lab’s reputation and mission make recruitment fairly easy, Deibert says. “This good work attracts a certain type of person,” he says. “But they’re usually also drawn to the sleuthing. It’s detective work, and that can be highly intoxicating—even addictive.”
Deibert frequently deflects the spotlight to his fellow Labbers. He rarely discusses the group’s accomplishments without referencing two senior researchers, Bill Marczak and John Scott-Railton, alongside other staffers. And on the occasion that someone decides to leave the Citizen Lab to pursue another position, this appreciation remains.
“We have a saying: Once a Labber, always a Labber,” Deibert says.
While in the US, Deibert taught a seminar on the Citizen Lab’s work to Northwestern University undergraduates and delivered talks on digital authoritarianism at the Columbia University Graduate School of Journalism. Universities in the US had been subjected to funding cuts and heightened scrutiny from the Trump administration, and Deibert wanted to be “in the mix” at such institutions to respond to what he sees as encroaching authoritarian practices by the US government.
Since Deibert’s return to Canada, the lab has continued its work unearthing digital threats to civil society worldwide, but now Deibert must also contend with the US—a country that was once his benchmark for democracy but has become another subject of his scrutiny. “I do not believe that an institution like the Citizen Lab could exist right now in the United States,” he says. “The type of research that we pioneered is under threat like never before.”
He is particularly alarmed by the increasing pressures facing federal oversight bodies and academic institutions in the US. In September, for example, the Trump administration defunded the Council of the Inspectors General on Integrity and Efficiency, a government organization dedicated to preventing waste, fraud, and abuse within federal agencies, citing partisanship concerns. The White House has also threatened to freeze federal funding to universities that do not comply with administration directives related to gender, DEI, and campus speech. These sorts of actions, Deibert says, undermine the independence of watchdogs and research groups like the Citizen Lab.
Cohn, the director of the EFF, says the lab’s location in Canada allows it to avoid many of these attacks on institutions that provide accountability. “Having the Citizen Lab based in Toronto and able to continue to do its work largely free of the things we’re seeing in the US,” she says, “could end up being tremendously important if we’re going to return to a place of the rule of law and protection of human rights and liberties.”
Finian Hazen is a journalism and political science student at Northwestern University.