Noah Smith

Economics and other interesting stuff, by Noah Smith: an economics PhD from the University of Michigan and a former economics columnist for Bloomberg Opinion.

Does anyone know why we're still doing tariffs?

2026-02-22 17:46:47

Cartoon by Udo J. Keppler, 1908

In case you haven’t heard, the Supreme Court just ruled many of Donald Trump’s tariffs illegal:

[T]he Supreme Court ruled that the unilaterally imposed [tariffs] were illegal…No longer does Trump have a tariff “on/off” switch…Future tariffs will need to be imposed by lengthy, more technical trade authorities — or through Congress…

In a 6-3 ruling, the Supreme Court said that affirming Trump's use of the International Emergency Economic Powers Act (IEEPA) would "represent a transformative expansion of the President's authority over tariff policy."…Chief Justice John Roberts said that IEEPA does not authorize the president to impose tariffs because the Constitution grants Congress — and only Congress — the power to levy taxes and duties.

This doesn’t mean that Trump’s tariffs are going to suddenly vanish. More are on the way. There are older laws passed by Congress in the 1960s and 1970s that authorize the President to raise tariffs under certain circumstances. Here’s a summary by the Yale Budget Lab:

[T]he president has other sources of legal authority to enact tariffs without further congressional action. These authorities generally fall into two groups: those that require investigations by federal agencies but have few if any restrictions on the eventual tariffs imposed (Sections 201, 232, and 301) and Section 122, which provides a temporary authority to impose tariffs without an investigation, but is limited to a 15 percent rate for only 150 days. There is another authority, under Section 338 of the Tariff Act of 1930 (otherwise known as Smoot-Hawley) that would allow the President to impose a 50 percent tariff with no investigation or time limitations, but no President has used this authority before, raising again concerns about future legal challenges.

For now, all those other laws still stand, and Trump is going to use at least some of them. He immediately invoked one of the other laws, called Section 122, to put a 10% tariff on all imports from all countries, and then raised that to 15% a day later. This means the overall statutory tariff rate on U.S. imports (or at least, on the mix of imports from 2024), which would have fallen to around 9% after the SCOTUS ruling, will actually fall only a tiny bit:

But tariffs are very complex, and there are a ton of exemptions. Because these tariffs are more blanket than the ones SCOTUS just struck down, and because they interact with other tariffs that are still on the books, the new regime could raise effective tariff rates to even higher levels than before the SCOTUS decision.
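As a back-of-the-envelope illustration of how those headline rates work, the overall statutory rate is just an import-weighted average across product and country categories, which is why exemptions pull the number down and blanket tariffs push it up. A minimal sketch (the import shares and category labels below are invented for illustration, not real trade data):

```python
# Hypothetical illustration of an overall statutory tariff rate as an
# import-weighted average over product/country categories.
# Shares and rates are invented for illustration, not actual trade data.

imports = {
    # category: (share of total import value, statutory tariff rate)
    "exempt goods (e.g., AI data-center computers)": (0.30, 0.00),
    "Section 232 metals":                            (0.10, 0.50),
    "Section 301 China goods":                       (0.20, 0.25),
    "everything else (Section 122 blanket)":         (0.40, 0.15),
}

# Weighted average: each category's rate scaled by its import share.
overall = sum(share * rate for share, rate in imports.values())
print(f"Overall statutory tariff rate: {overall:.1%}")  # -> 16.0%
```

With actual trade weights, this is essentially how trade-weighted estimates like the Yale Budget Lab's are constructed; the point is only that a few broad categories dominate the average.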

That Section 122 tariff is supposed to be temporary — it only lasts 5 months — but Trump can presumably just renew it for another 5 months when it ends, until he gets sued again and it goes back to the Supreme Court. Then if that doesn’t work, he can use the various other laws, getting sued each time. In other words, Trump will be able to keep imposing large tariffs for the rest of his term in office.

So the fun continues. Whee!!

What was the point of these tariffs? It has never really been clear. Trump’s official justification was that they were about reducing America’s chronic trade deficit. In fact, the initial “Liberation Day” tariffs were set according to a formula based on America’s bilateral trade deficits with various countries.1 But trade deficits are not so easy to banish, and although America’s trade deficit bounced around a lot and shifted somewhat from China to other countries, it stayed more or less the same overall:

Economists don’t actually have a good handle on what causes trade deficits, but whatever it is, it’s clear that tariffs have a hard time getting rid of them without causing severe damage to the economy. Trump seemed to sense this when stock markets fell and money started fleeing America, which is why he backed off on much of his tariff agenda.

Trump also seemed to believe that tariffs would lead to a renaissance in American manufacturing. Economists did know something about that — namely, they recognized that tariffs are taxes on intermediate goods, and would therefore hurt American manufacturing more than they helped. The car industry and the construction industry and other industries all use steel, so if you put taxes on imported steel, you protect the domestic market for American steel manufacturers, but you hurt all those other industries by making their inputs more expensive.

And guess what? The economists were right. Under Trump’s tariffs, the U.S. manufacturing sector has suffered. Here’s the WSJ:

The manufacturing boom President Trump promised would usher in a golden age for America is going in reverse…Manufacturers shed workers in each of the eight months after Trump unveiled “Liberation Day” tariffs, according to federal figures…An index of factory activity tracked by the Institute for Supply Management shrunk in 26 straight months through December…[M]anufacturing construction spending, which surged with Biden-era funding for chips and renewable energy, fell in each of Trump’s first nine months in office.

And here’s a handy chart, via Joey Politano:

Trump didn’t cause all of the slowdown — it began a few months before he took office — but manufacturers consistently report that tariffs are making things worse. Tariff cheerleaders like Oren Cass, who goes around shouting that economists don’t know anything and that economics isn’t a science, have gone strangely silent in the face of this clear victory for textbook economics.

On some level, Trump — unlike pundits like Cass — seems to realize the basic economics of how tariffs hurt American industry. Recognizing the AI boom’s importance to the current economic expansion, he has granted huge exemptions for the computers that are being used to build AI data centers:

Macroeconomically, the tariffs haven’t been as big a deal as initially feared. Growth came in slightly weak in the final quarter of 2025, but that was mostly due to the government shutdown, and will rebound next quarter. Inflation keeps bumping along at a little bit above the official target, distressing the American consumer but failing to either explode or collapse. The President’s cronies have taken to holding up this lack of catastrophe as a great victory, but this sets the bar too low. If you back off of most of your tariffs and the economy fails to crash, you don’t get to celebrate — after all, the tariffs were ostensibly supposed to fix something in our economy, and they have fixed absolutely nothing.

Instead, the tariffs have mostly just caused inconvenience for American consumers, who have been cut off from being able to buy many imported goods. The Kiel Institute studied what happened to traded products after Trump put tariffs on their country of origin, and found out that they mostly just stopped coming:

The 2025 US tariffs are an own goal: American importers and consumers bear nearly the entire cost. Foreign exporters absorb only about 4% of the tariff burden—the remaining 96% is passed through to US buyers…Using shipment-level data covering over 25 million transactions…we find near-complete pass-through of tariffs to US import prices……Event studies around discrete tariff shocks on Brazil (50%) and India (25–50%) confirm: export prices did not decline. Trade volumes collapsed instead…Indian export customs data validates our findings: when facing US tariffs, Indian exporters maintained their prices and reduced shipments. They did not “eat” the tariff. [emphasis mine]

So it’s no surprise that the most recent polls show that Americans despise the tariffs:

Source: ABC

A Fox News poll found the same, and Trump’s approval rating on both trade and the economy is underwater by over 16 points despite a solid labor market. Consumer sentiment, meanwhile, has crashed:

Trump has belatedly begun to realize the hardship he’s inflicting on voters. But instead of simply abandoning the tariff strategy, he’s issuing yet more exemptions and carve-outs in an attempt to placate consumers:

Donald Trump is planning to scale back some tariffs on steel and aluminium goods as he battles an affordability crisis that has sapped his approval ratings…The US president hit steel and aluminium imports with tariffs of up to 50 per cent last summer, and has expanded the taxes to a range of goods made from those metals including washing machines and ovens…But his administration is now reviewing the list of products affected by the levies and plans to exempt some items, halt the expansion of the lists and instead launch more targeted national security probes into specific goods, according to three people familiar with the matter.

Tariffs — or at least, broad, blanket tariffs on many products from many different countries — are simply a bad policy that accomplishes nothing while causing varying degrees of economic harm. But despite all his chicken-outs and walk-backs and exemptions, Trump is still deeply wedded to the idea. When news of the Supreme Court ruling reached him, he flew into a rage and accused the Justices of serving foreign interests:

He called the liberals a “disgrace to our nation.” But he heaped particular vitriol on the three conservatives [who ruled against him]. They “think they’re being ‘politically correct,’ which has happened before, far too often, with certain members of this Court,” Mr. Trump said. “When, in fact, they’re just being fools and lapdogs for the RINOs and the radical left Democrats—and . . . they’re very unpatriotic and disloyal to our Constitution. It’s my opinion that the Court has been swayed by foreign interests.”

JD Vance, rather ridiculously, called the decision “lawless”:

Why are the President and his loyalists so incensed over the SCOTUS decision? The tariffs are a millstone weighing down Trump’s presidency, and his various walk-backs confirm that he realizes this. It would have been smarter, from a purely political standpoint, to just let SCOTUS do the administration a favor and cancel the tariffs. Instead, Trump is going to the mat for the policy. Why?

One possibility is simply that Trump hates having his authority challenged by anyone. Tariffs were his signature economic policy — something he probably decided on after hearing people like Lou Dobbs complain about trade deficits back in the 1990s. To give up and admit that tariffs aren’t a good solution to trade imbalances would mean a huge loss of face for Trump.

Another possibility is that Trump ideologically hates the idea of trade with other nations, viewing it as an unacceptable form of dependency on foreigners. Perhaps by using ever-shifting uncertainty about who would be hit by tariffs next, he hoped to prod other countries into simply giving up and not selling much to the United States.

A third possibility is that tariffs offer Trump a golden opportunity for corruption and personal enrichment. Trump issues blanket tariffs, and then offers carve-outs and exemptions to various companies and/or their products. This means companies line up to curry favor with Trump and his family, in the hopes that Trump will grant them a reprieve.

But the explanation I find most convincing is power. If all Trump wanted was to strike a blow against global trade, the Section 122 tariffs and all the other alternatives would surely suffice. Instead, he was very specifically attached to the IEEPA tariffs that SCOTUS struck down. IEEPA let him impose tariffs on specific countries, at rates of his own choosing, and grant specific exemptions. That gave Trump an enormous amount of negotiating leverage over countries that value America’s big market.

This is the kind of personal power that no President had before Trump. It allowed him to conduct foreign policy entirely on his own. It allowed him to enrich himself and his family. It allowed him to gain influence domestically, by holding out the promise of tariff exemptions for businesses that toe his political line. And it allowed him to act as a sort of haphazard economic central planner, using tariffs like a scalpel to discourage the kinds of trade and production that he didn’t personally like.

In other words, I think that although the tariffs had their origin in 1990s-era worries about trade deficits, they ended up as a way to make the Presidency more like a dictatorship. That is almost certainly why the Supreme Court struck the IEEPA tariffs down, citing concerns over presidential overreach instead of more technical considerations.2

For much of the modern GOP, I think, autocracy has become its own justification. To many Republicans, tariffs were good because they made the President powerful, and SCOTUS’ ruling is anathema because it pushes back on the imperial Presidency.

In this case, America’s democratic institutions held the line. But there will be a next case.



1. The formula, which was probably AI-generated, involved lots of bad assumptions.

2. For example, SCOTUS could have ruled that IEEPA was fine in general, but that trade deficits don’t constitute the kind of “national emergency” that would justify IEEPA’s use.

Democratic economic policy in the age of AI

2026-02-20 18:49:56

Photo by Billy Hathorn via Wikimedia Commons

Let’s continue with the AI theme. We’ve done the dire warnings of doom, so now let’s be a little more pragmatic and optimistic.

A friend called me up the other day and asked me what I thought Democrats could offer Americans in terms of economic policy in this day and age. We discussed the limitations of the progressive economic program that coalesced in the late 2010s and was implemented during the Biden years. We then talked about possibilities for how AI might affect the economy, and what Democrats could offer in various scenarios. I promised my friend I would write a post outlining my thoughts, so here you go.

My basic argument is that the next Democratic policy offering should be robust to uncertainty. AI technology is changing very fast, and it will probably end up changing other technologies very quickly as well — robotics, energy, software, and so on. That rapid technological progress creates great uncertainty. Looking even just 10 years into the future, we basically don’t know:

  1. What kind of jobs humans will be doing (and which will pay well)

  2. What the macroeconomy — inflation, growth, and employment — is going to look like

  3. How the distributions of income and wealth will change

Those are essentially all of the biggest questions in economics, and we can’t really answer any of them. So what do you do when you can’t predict the future? You come up with ideas that are likely to work no matter what the future ends up looking like. In other words, you try to be robust. AI is like a storm that’s buffeting the whole economy; Democrats need to be the rock in that storm.

In fact, I have several ideas for how Democrats can create a robust economic offering even in the face of radical uncertainty. My three basic principles are:

  1. Abundance

  2. Government taking an ownership stake in the corporate system

  3. Policies to promote human work

Before I talk about those, however, I want to briefly go over what the 2010s progressive program looked like, and why Democrats can’t just keep pushing that.

The failures of the 2010s progressive economic program

The progressive economic ideas of the 2010s were, in large part, a reaction to the Great Recession and to the rise of inequality since the 1970s. But they also had their roots in political considerations — progressive activism, and the need to manage the emerging Democratic political coalition.

In a nutshell, the progressive program was:

  1. Have the government spend money to sustain full employment

  2. Spend money on subsidizing health care, education, and child care

  3. Spend money on cash benefits for people with children

  4. Spend money on mitigating climate change

  5. Fund this spending by taxing billionaires

  6. Attack corporate power in order to reduce political opposition to the progressive agenda

This was a pretty cohesive program — there was at least a bit of real research to back up most of these ideas,1 and these proposals seemed like they would both satisfy the core Democratic interest groups while also potentially having broad appeal.

But it turned out there were lots of problems with this approach. The first, as I wrote back in 2024, was that it basically assumed we were still in the macroeconomic environment of 2009 or 2016:

In 2009 or maybe even 2016, the economy still had a shortage of aggregate demand — spending more money had the potential to create jobs while also bringing inflation back to target. In fact, a high-pressure economy, along with higher local minimum wage laws, did raise low-end wages disproportionately in the 2010s.

But by the time progressives actually got into power in 2021, progressives’ diagnosis was no longer correct. In the years after the pandemic, America’s main macroeconomic problem was no longer underemployment — it was inflation. And by pushing aggregate demand even higher with massive deficit spending, the Biden administration probably exacerbated that inflation.

The warnings were there in advance. In 2021, the macroeconomist Olivier Blanchard used a standard, simple Keynesian economic analysis — the same kind of back-of-the-envelope exercise that would have been screaming “spend more money!” back in 2009 — to predict that Biden’s American Rescue Plan would raise inflation. Progressives ignored these warnings and charged ahead anyway. The result — exacerbated inflation — probably contributed marginally to Kamala Harris’ election loss in 2024.

The second problem was the natural tension between providing jobs in care industries and actually providing care services cheaply to the American populace. If you subsidize health care, education, and child care, the prices of these things will go up. This is exactly what happened with artificially cheap student loans, which drove up the cost of college. I pointed this out in 2021:

The progressive retort was that it doesn’t matter if the price that care providers charge goes up, as long as the price consumers pay goes down. The subsidy makes up the difference — it makes care services feel cheaper to the public, but also creates jobs in those industries. This is expensive, of course, but progressives planned to square the circle by taxing billionaires.

The problem was that the billionaire taxes never happened. Biden and Harris and progressives in Congress made a lot of noise about taxing the super-duper-rich, but it turns out that it’s very hard to get super-duper-rich people’s money. It’s a lot easier to get money from millionaires instead of billionaires, but millionaires had become the Democrats’ base, so they were reluctant to do this. And so instead, subsidies for care industries became giant deficit-funded make-work programs.2

Meanwhile, core parts of the progressive agenda turned out not to be as popular as their creators had hoped. Cash benefits failed to garner broad support, and Americans ended up not caring that much about climate change.

This is not to say that the progressive economic program failed completely, either in an economic or in a political sense. A high-pressure economy really did lift wages at the bottom and reduce wage inequality. Biden’s cash benefits and climate subsidies did some good for a while. And the program probably did help Biden get elected in 2020.

Overall, though, the progressive program was pretty underwhelming. And more importantly, the economic situation has changed so much that the program is no longer relevant. Democrats can’t just cruise on autopilot on the promise of more health care subsidies, more child tax credits, more green jobs, and more promises of billionaire taxes. In the late 2020s and early 2030s, this will be neither a winning program nor an effective one.

Which brings me to my own ideas about what the Democrats should do next.

When work is uncertain, abundance becomes more important


China is killing the fish

2026-02-18 07:49:53

Photo by Asc1733 via Wikimedia Commons

Unfortunately, I have another thing for you to worry about.

There are three types of environmental harm. The first kind is local — think air pollution and water pollution. This kind of harm hits people who are geographically close by — when factories dump crap in the water, it’s local communities who get cancer, and so on. This kind of local pollution is typically solved by a local or national government, using tools like regulation, pollution markets, and so on.

In fact, humanity has a pretty good track record when it comes to problems like this. The Environmental Kuznets Curve — the theory that countries pollute less as they get richer — seems to hold true for air and water pollution. As people escape poverty, they demand a cleaner local environment. For example, China used to be known for its toxic, unbreathable air, but in the 2010s it launched a successful cleanup policy:

Source: EPIC

The second kind of environmental harm — global harm — is a lot harder to deal with. These are things that mostly hurt people in other countries — global warming being the primary example. It’s very hard to solve global warming, because the worldwide nature of the harm means there’s a free rider problem (or, if you prefer, a coordination problem) — no country wants to pay the full cost of decarbonization, because most of the benefit goes to people in other countries. You can try international agreements, but everyone has an incentive to cheat.

Often, the best solution to these problems is technological — you simply invent something better and cheaper that doesn’t pollute as much, and then every country has an incentive to switch. Essentially, you use the positive externality of technology to fight the negative externality of pollution. This is what we did with HFC refrigerants, which replaced the CFCs that were destroying the ozone layer. It’s how we’re now fighting climate change with solar, batteries, and other green energy technologies.

But there’s a third kind of environmental harm, which is harm to the natural world. When pollution or logging or mining destroys natural habitats, it often doesn’t cause much harm to human beings — at least, not to those who are alive today. When coral reefs get bleached and die from industrial runoff, it might hurt tourism revenue a tiny bit, but overall humans don’t really get hurt. Animals and plants get hurt, but they have no voice in human politics. Future generations might regret not having coral reefs around, but they don’t exist yet, so they can’t complain.

Solving these harms seems like it probably requires some degree of altruism — either people caring about conservation for its own sake, or people who care a great deal about leaving a healthy natural world for their unborn descendants.

Altruism sounds like it won’t go far when matched against brute economic self-interest. But in recent years, I’ve become more optimistic that humans will care more intrinsically about preserving the natural world as they get richer. For example, people in North America, Europe, and East Asia all seem to care a lot about having forests:

This suggests that we won’t see a “race to the bottom” in terms of biodiversity loss, because the most powerful countries don’t seem to be the ones that chop down all their forests. Even Brazil, the worst offender in terms of sheer amount of forest cut down,1 has decreased the rate of Amazon deforestation by quite a lot since the early 2000s.

And that in turn hints at an even more important idea — that societies don’t trend toward greater rapaciousness as they become richer and more powerful. In his book The Better Angels of Our Nature, Steven Pinker theorized that people become more altruistic as they become more comfortable and secure; increasing global commitment to biodiversity seems to fit that theory. That might even be good news for the future of superintelligent AI — if rich nations stopped chopping down their forests, then maybe AI won’t kill the human race to use our resources for data centers.

Encouragingly, note the progress in China on the chart above. Some of this reforestation is motivated by the self-interested need to stop soil erosion and desertification, but China’s government has also increased its commitment to biodiversity. As another example of this, China banned fishing in the Yangtze River in 2021, in order to save fish stocks.

But there appear to be limits to China’s altruism here. Even as it took measures to prevent overfishing within its borders, China has continued to overfish much of the world’s oceans.

China’s fishing fleet just keeps getting bigger and bigger. This is from a 2025 report from the environmental group Oceana:

Oceana released an analysis of China’s global fishing activity worldwide between 2022 and 2024. The analysis shows China’s global fishing footprint, in which 57,000 of their industrial fishing vessels dominated 44% of the world’s visible fishing activity during this period…Chinese vessels accounted for 30% of all fishing activity on the high seas, appearing to fish for more than 8.3 million hours.

In terms of catching wild fish, it’s basically China and Latin America dominating everyone else:

Much of this fishing activity is either outright illegal — meaning Chinese vessels fish in other countries’ waters in violation of their local laws or regional agreements — or unreported. In addition to simply violating laws with impunity, Chinese fleets use a large variety of tricks to get around regulations meant to keep them from overfishing — turning off their transponders, falsifying records, using foreign front companies, and so on. A lot of this fishing activity isn’t just to fuel China’s own increasing fish consumption — it’s an export industry. Here’s a detailed report from the Outlaw Ocean Project. Some key excerpts:

The size and behavior of the Chinese fishing fleet raises concerns…The Chinese government and western seafood companies often dismiss illegality in the fishing industry as an isolated problem. But [our] investigation revealed a wide pattern: Almost half of the Chinese squid fleet, 357 of the 751 ships studied, were tied to human-rights or environmental violations…

More than 100 Chinese squid ships were found to have fished illegally, including by targeting protected species, operating without a license, and dumping excess fish into the sea. The investigation revealed other environmental or fishing-specific crimes and risk indicators, including Chinese ships illegally entering the waters of other countries, disabling locational transponders in violation of Chinese law…transmitting dual identities (or “spoofing”)…fishing without a license, and using prohibited fishing gear. But the most common environmental violation involved Chinese ships poaching fish from other countries’ waters…

About 80 percent of seafood consumed in the U.S. is caught or processed abroad, with China as its biggest supplier.

Poor countries in Latin America and Africa don’t have the state capacity or economic leverage to enforce their laws. As a result, their waters are crammed with vast fleets of Chinese fishing boats:

Why is Chinese overfishing bad? Obviously it hurts fishermen in poor countries by taking away their fish. But in addition, it hurts biodiversity and robs future generations of fish. Here’s a good primer from Our World in Data that shows what you would do if you cared mainly about biodiversity, versus what you would do if you cared mainly about sustainability:

The key fact here is that whether you care more about the natural world or whether you care more about future humans being able to eat fish, the world is catching too many fish. An increasing share of the world’s fisheries is now overexploited:

China’s lack of concern for sustainability plays a large part in this. Chinese fishing vessels are more likely to use various techniques that make them catch more juvenile fish. One of these is bottom-trawling, which drags nets along the seabed. Japan and the U.S. have largely given up on this practice; China has long been the world’s worst offender.

In previous decades, environmental organizations like Greenpeace sounded the alarm over Chinese overfishing. In recent years, with a few commendable exceptions like Sea Shepherd, they have mostly gone quiet. This is unfortunately consistent with the idea that legacy environmental groups are generally drifting from universal values of environmental protection toward a more explicitly leftist stance that focuses exclusively on critiquing the West and ignores environmental abuses by non-Western countries. (You can also see this in climate groups’ stubborn refusal to criticize China, which is by far the world’s worst climate polluter.)

In other words, geopolitics is starting to intrude into environmental debates. Most of the alarms now being sounded about Chinese overfishing come from “China hawks” rather than from environmentalists. And geopolitics is probably a big part of the reason China hasn’t cracked down on its global overfishing practices.

Traditionally, a lot of China’s overfishing has been due to massive subsidies that the Chinese government gives to the industry, mostly in the form of cheap fuel and other support. In the late 2010s, China began curbing those subsidies a bit. But as Ian Urbina reported back in 2020, these efforts have been pretty slow and minor when it comes to international waters, and geopolitics is probably a big reason:

[M]ore than seafood is at stake in the present size and ambition of China’s fishing fleet. Against the backdrop of China’s larger geo-political aspirations, the country’s commercial fishermen often serve as de-facto paramilitary personnel whose activities the Chinese government can frame as private actions. Under a civilian guise, this ostensibly private armada helps assert territorial domination, especially pushing back fishermen or governments that challenge China’s sovereignty claims that encompass nearly all of the South China Sea.

“What China is doing is putting both hands behind its back and using its big belly to push you out, to dare you to hit first,” said Huang Jing, former director of the Center on Asia and Globalization at the Lee Kuan Yew School of Public Policy in Singapore.

Chinese fishing boats are notoriously aggressive and often shadowed, even on the high seas or in other countries’ national waters, by armed Chinese Coast Guard vessels…From the waters of North Korea to Mexico to Indonesia, incursions by Chinese fishing ships are becoming more frequent, brazen and aggressive.

In other words, China’s government is becoming increasingly concerned about biodiversity and sustainability for its own sake, and this has resulted in more sustainable fishing practices in China’s own waters. But at the same time, China is using its vast international fishing fleet as a sort of naval militia to press its claims on other countries’ waters. And this is having collateral damage on the natural world — China’s quasi-military subsidies for its fishing fleet are resulting in too much actual fishing taking place.

In one sense, this is actually kind of optimistic. The fact that China is overfishing international waters for military and geopolitical reasons, rather than out of pure economic rapacity, suggests that the Chinese are not an exception to the rule that richer societies start to care more about sustainability — and, perhaps, about the intrinsic value of the natural world as well.

But in the meantime, the bad news is that China’s decision to maintain its fishing fleet as a naval militia means that the world’s oceans are being despoiled and drained of wildlife. That’s not good, and I wish that more environmentalists would pay attention to the problem. As power and wealth shift away from the West, the environmental movement risks making itself irrelevant if it continues its recent practice of letting countries like China off the hook.



1

Mostly to make room for cattle ranches.

Updated thoughts on AI risk

2026-02-16 10:14:52

So the other day I wrote a post about how humanity is inevitably going to be disempowered by the existence of AI.

A bunch of people wrote to ask me: “What made you change your mind?” Three years ago, shortly after the release of the original ChatGPT, I wrote a post arguing that LLMs were not going to destroy the human race.

And just a couple of months ago, I wrote a post arguing that ASI (artificial superintelligence) is likely to peacefully coexist with humanity rather than kill us off.

People wanted to know why my tone had shifted from optimistic to pessimistic.

Well, the simple answer to that is “I was in a worse mood.” My rabbit was sick,1 so I was kind of grumpy, and so in my post a few days ago I painted the eventual disempowerment of humanity as more of a negative thing than I usually do. In fact, I’ve always believed that at some point, humanity would be replaced with something posthuman — godlike AIs, a hive mind, modified humans, or whatever. I grew up reading science fiction about that kind of thing — Vernor Vinge, Charles Stross, Arthur C. Clarke, Iain M. Banks, and so on — and it just always seemed impossible that humanity had already attained the theoretical pinnacle of intelligence.2 I had always simply envisioned that whatever came after us would be in the general human family, and would be more likely to be on our side than against us.

That’s what my post the other day was about. I painted a more glum picture of humanity’s eventual supersession because I was in a bad mood. But even in that post, at the end, I offered optimism that ASI will save us from things like low fertility, fascist overlords, and the end of human-driven scientific discovery. That optimistic future would be like the Culture novels, by Iain M. Banks, in which AIs take the reins of civilization but in which they respect and help and protect a now-mostly-useless humanity — basically a much nicer, more enlightened version of the way the United States of America treats Native Americans nowadays. It’s a wistful future, and in some ways a sad one, but not particularly terrifying.

BUT, at the same time, I have gotten a lot more worried about existential, catastrophic AI risk — the kind of thing that would kill us instead of just rendering us comfortably impotent — than I was three years ago. And so the people who wrote to ask me why my tone had shifted deserve a longer explanation about why I’m more worried.

What I got wrong three years ago

In my post three years ago, I argued that LLMs were not yet the kind of AI that could threaten the human race. I think I was probably right regarding the type of LLMs that existed in early 2023, for the reasons I laid out in that post. In a nutshell, I argued that since all LLMs could do was talk to people, the only way they could destroy the human race was by convincing us to destroy ourselves (unlikely) or by teaching us how to destroy ourselves (for example, by educating bioterrorists about how to make bioweapons).

In my defense, this is not too different from the scenario that Eliezer Yudkowsky — who literally wrote the book on existential AI risk — envisioned in 2022. He wrote:

My lower-bound model of “how a sufficiently powerful intelligence would kill everyone, if it didn’t want to not do that” is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they’re dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.

This is about AI teaching people how to make self-replicating nanomachinery instead of a doomsday virus. But honestly I feel like the doomsday virus would be easier to make. So I don’t think my scenario was too far behind the thinking of the most vocal and panicky AI safety people back in 2023.

Anyway, if I had said “chatbots” instead of “LLMs” in my 2023 post, I think I still would have been correct, because a chatbot is a type of user interface, while an LLM is an underlying technology that can be used to do much more than make a chatbot. What I missed was that LLMs can do a lot more than just talk to people — they can write code, because code is just a language, and it’s not too hard to get them to do this in an automated, end-to-end, agentic fashion.

In other words, I didn’t envision the advent of vibe-coding. And I probably should have. To be fair, the advent of vibe-coding required some big technological advances3 that didn’t exist in early 2023. But missing the fact that computer code is just a language that can be learned like any other — and that in fact it’s easier to learn, since you can verify when it works and when it doesn’t work — was a big miss for me. And it opens up the door to a LOT of other scary scenarios, beyond “A chatbot helps humans to do something bad”.

So anyway, let’s talk about what I’m scared about now. But first, let’s talk about what I’m less scared about, at least for the moment.

The rise of the robots is still a ways away

The scenario that everyone tends to think about is one in which a fully autonomous ASI decides that human civilization is an impediment to its use of natural resources, and that we need to be exterminated, enslaved, or otherwise radically disempowered in order to turn the world into data centers. This is basically the plot of the Terminator movies,4 the Matrix movies, and various other “rise of the robots” stories.

Conceptually speaking it’s easy to envision an AI that’s advanced enough to carry this out. It would have full control over an entirely automated chain of AI production, including:

  • Mining, refining, and processing of minerals

  • Fabrication of chips and construction of data centers

  • Manufacturing of robots

Controlling this entire chain would give AI control over its own reproduction — the way humans have always had control over our own reproduction. It could then safely dispense with humanity without endangering its own future.

This is basically a very direct analogy to what European settlers did to Native American civilization, and to what various other waves of human conquerors and colonizers have done to other groups of humans.

I think this scenario is worth worrying about, but it’s not immediate. Right now, robotics is still fairly rudimentary — things are advancing, but AI will need humans as its agents in the physical world for years to come. Furthermore, AI will need some algorithmic changes before it can permanently “survive” on its own without humans — long memory, for one. I’m not saying these won’t happen, but at least we have some time to think about how to prevent the “rise of the robots” scenario. I do think we should have some people (and AI) thinking about how to harden our society against that sort of attack.

It seems likely that AI will eventually get smart enough to think its way around whatever physical safeguards we put in place against the rise of the robots. But as I wrote two months ago, I think an AI advanced enough to fully control the physical world would have already reached the stage where it understands that peaceful coexistence and positive-sum interaction is a better long-term bet than genocide. Smarter humans and richer human societies both tend to be more peaceful, and I sort of expect the same from smarter AI.

So I think there are other worries to prioritize here.

What if the Machine stops?

In my post three years ago, I tried to list the ways that LLMs might eventually destroy us:

Here’s a list of ways the human race could die within a relatively short time frame:

  • Nuclear war

  • Bioweapons

  • Other catastrophic WMDs (asteroid strike, etc.)

  • Mass suicide

  • Extermination by robots

  • Major environmental catastrophe

The advent of vibe-coding has made me think of another way our civilization could be destroyed, which I probably should have thought of at the time: starvation.

Every piece of agricultural machinery in the developed world, more or less, runs on software now — every tractor, every harvester, every piece of food processing machinery. That software was mostly written by human hands, but in a fairly short period of time, it will all be vibe-coded by AI.

At that point, AI would, in principle, have the ability to bring down human civilization simply by making agricultural software stop working. It could push malicious updates, or hack in and take over, or wipe the software, etc. Agricultural machines would stop working, and in a few weeks the entire human population would begin to starve. Civilization would fall soon afterwards.

I really should have thought of this scenario when I wrote my post in 2023, because it’s the plot of a very famous science fiction story from 1909: “The Machine Stops”, by E.M. Forster. In this story, humanity lives in separate rooms, communicating with each other only electronically,5 cared for entirely by a vast AI; when the AI stops working, most of humanity starves.

This could happen to us soon. Now that vibe-coding is many times as productive as human coding, it’s very possible that a lot fewer people will get good at coding. Even the tools that exist right now might be eroding humans’ skills at working with code. This is from a recent Anthropic study:

AI creates a potential tension: as coding grows more automated and speeds up work, humans will still need the skills to catch errors, guide output, and ultimately provide oversight for AI deployed in high-stakes environments. Does AI provide a shortcut to both skill development and increased efficiency? Or do productivity increases from AI assistance undermine skill development?

In a randomized controlled trial, we examined 1) how quickly software developers picked up a new skill (in this case, a Python library) with and without AI assistance; and 2) whether using AI made them less likely to understand the code they’d just written.

We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades.

Meanwhile, Harry Law wrote a good post called “The Last Temptation of Claude”, about how the ease of vibe-coding is making him mentally lazier. There are many other such posts going around.

As vibe-coding becomes even better and eliminates humans entirely from the loop, the need for human software skills will presumably atrophy further. Ten years from now, if the software that runs our agricultural machinery just stops working for some reason, there’s a good chance there will not be enough human coders around to get it working again.

This would simply be a special case of a well-known problem — overoptimization creating fragility. When Covid hit in 2020, we found out that our just-in-time supply chains had been so over-engineered for efficiency that they lacked robustness. Vibe-coding could lead to a much worse version of the same problem.

That said, AI going on a catastrophic strike isn’t at the top of my list of fears. The reason is that I expect AI to be very fragmented; so far, no AI company seems to have any kind of monopoly, even for a short period of time. If the AI that writes the code for harvesters and tractors suddenly goes rogue, it seems like there’s a good chance that humans can call in another AI to fix it.

I guess it’s possible that all the AIs will collude so that none of them will help humans survive, or that the rogue AI(s) will be able to maliciously lock humans out from applying non-rogue AI to fix the problem. So people should be thinking about how to harden our agricultural system against software disruption. But it’s also not at the top of my list of doomsday worries.

Vibe-coding the apocalypse

OK, so what is at the top of my list of doomsday worries? It’s still AI bioterrorism.

Hunting down and killing humans with an army of robots would be fairly hard. Depriving humans of food so that we starve to death would be easier, but still a little bit hard. But slaughtering humans with a suite of genetically engineered viruses would not actually be very hard. As we saw in 2020, humans are very vulnerable to novel viruses.

Imagine the following scenario. In the near future, virology research is basically automated. Labs are robotic, and AI designs viruses in simulation before they’re created in labs. For whatever personal reasons, a human terrorist wants to destroy the human race. Using techniques he reads about on the internet, he jailbreaks a near-cutting-edge AI in order to remove all safeguards. He then prompts this AI to vibe-code a simulation that can design 100 superviruses. Each supervirus is 10x as contagious as Covid, has a 90% fatality rate, and has a long initial asymptomatic period so it’ll spread far and wide before it starts killing its victims. He then prompts his AI to vibe-code a program to hack into every virology lab on the planet and produce these 100 viruses, then release them into the human population.

If successful, this would quickly lead to the end of human civilization, and quite possibly to the extinction of the entire human species.

Is it possible? I don’t know. But developments seem to be moving in the direction of making it possible: bio labs are becoming more automated all the time, and AI algorithms are rapidly getting better at simulating things like proteins.

“Virtual labs” powered by “AI scientists” are becoming commonplace in the world of bio. And there is plenty of fear about how AI-powered laboratories might be used to create superviruses. Here’s a story that ran in Time magazine almost a year ago:

A new study claims that AI models like ChatGPT and Claude now outperform PhD-level virologists in problem-solving in wet labs, where scientists analyze chemicals and biological material. This discovery is a double-edged sword, experts say. Ultra-smart AI models could help researchers prevent the spread of infectious diseases. But non-experts could also weaponize the models to create deadly bioweapons.

The study, shared exclusively with TIME, was conducted by researchers at the Center for AI Safety, MIT’s Media Lab, the Brazilian university UFABC, and the pandemic prevention nonprofit SecureBio. The authors consulted virologists to create an extremely difficult practical test which measured the ability to troubleshoot complex lab procedures and protocols. While PhD-level virologists scored an average of 22.1% in their declared areas of expertise, OpenAI’s o3 reached 43.8% accuracy. Google’s Gemini 2.5 Pro scored 37.6%.

I am not a biology expert, and I plan to go ask more of them about this worry (as well as having AI educate me more). I asked GPT-5.2 what it thought about this risk, and here are some excerpts from what it wrote:6

[A]utomation can increase throughput and reduce expertise needed, which is directionally risk-increasing. But it doesn’t magically eliminate the underlying biological constraints…

[AI safety] guardrails can be bypassed sometimes. Also, you don’t necessarily need a frontier model to be dangerous if you have access to domain tools, leaked data, or insider capability…

A more realistic worry is a small number (1–a few) of engineered or selected agents that are “good enough” (highly transmissible and significantly more lethal than typical pandemics)…

AI accelerates, but it doesn’t replace the need for experimental validation [of new viruses] —yet…

If an attacker can truly create one pathogen that is (a) highly transmissible, (b) substantially more lethal than typical pandemics, and (c) hard to contain early, then you already have global-catastrophe potential…A single “good enough” pathogen, combined with poor detection and slow countermeasures, can be catastrophic.

Probability of “one compromised lab enables a catastrophic engineered outbreak”: still low, but not negligible, and plausibly higher than many other X-risk stories because it has fewer required miracles.

Probability of “human extinction via this route”: lower than “catastrophe/collapse,” but not zero; it remains deep tail risk.

GPT’s recommendations all included maintaining humans in the loop of biology research. But after what we’ve seen with vibe-coding over the past few months, how confident can we be that labs all across the world — including in China — will insist on maintaining humans in the loop, when full automation would speed up productivity and improve competitiveness? I can’t say I’m incredibly optimistic here.

So the advent of vibe-coding has significantly increased my own worries about truly catastrophic AI risk. It seems clear now that brute economic forces will push humanity in the direction of taking humans out of the loop anywhere they can be taken out. And in any domain where data is plentiful, outputs can be verified, and there are no physical bottlenecks, it seems likely that keeping humans in the loop will eventually prove un-economical.

Really, this boils down to another example of overoptimization creating fragility. But it’s an especially extreme and catastrophic one. I don’t think humanity is doomed, but I don’t see many signs that our governments and other systems are yet taking the threat of vibe-coded superviruses as seriously as they ought to be. Not even close.

So if you ask me if my worries about AI risk have shifted materially in recent months, the answer is “Yes.” I still think Skynet or Agent Smith is highly unlikely to appear and exterminate humanity with an army of robots in the near future. But I will admit that the thought of vibe-coded superviruses is now keeping me up at night.



1

He’s better now!

2

In fact, if we had been the smartest possible creatures in the Universe, that itself would be a pretty glum future.

3

From what I can tell, the most important such advance was verifier-based reinforcement learning that enabled test-time compute scaling…

4

Well, sort of. In the Terminator movies, Skynet is a military AI who sees humans as a military threat.

5

It’s pretty wild that a contemporary of H.G. Wells could have envisioned both AI and modern social media.

6

Encouragingly, it stopped answering my questions pretty quickly, because this topic hit the guardrails.

How technology has already changed the world in my lifetime

2026-02-15 15:12:13

Photo by Nataev via Wikimedia Commons

AI is changing the world very quickly right now, having radically altered the entire software industry in just the last few months. It’s a time of dizzying technological change, and it’s easy to feel a lot of future shock.

So I thought I’d repost something I wrote back in 2023, when LLMs were just starting to have a big effect on the world. Reflecting on the changes in my lifetime, I realized that the internet, social media, smartphones, and other digital technologies had already altered the world of my childhood into something almost totally unrecognizable. AI is changing how we think, learn, and work, but the internet had already wrought deep, lasting, confusing changes in how we socialize with each other and how we present ourselves to the world. Humans are fundamentally social creatures, so to be honest I’m not sure which has been the more wrenching change (though of course AI is just getting started).

Anyway, here’s the original post.


In 1970, Alvin Toffler published Future Shock, a book claiming that modern people feel overwhelmed by the pace of technological change and the social changes that result. I’m starting to think that we ward off future shock by minimizing the scale and extent of the changes we experience in our life. I tend to barely notice the differences in my world from year to year, and when I do notice them they generally seem small enough to be fun and exciting rather than vast and overwhelming. Only when I look back on the long sweep of decades does it stun me just how much my world fails to resemble the one I grew up in.

Back in March, Tyler Cowen wrote a widely read (and very good) piece about the rapid progress in generative AI. I agree that AI will change the world, usually in ways we’ve barely thought of yet. And I love Tyler’s conclusion that we should embrace the change and ride the wave instead of fearing it and trying to hold it back. But I do disagree when Tyler says we haven’t already been living in a world of radical change:

For my entire life, and a bit more, there have been two essential features of the basic landscape:

1. American hegemony over much of the world, and relative physical safety for Americans.

2. An absence of truly radical technological change…

In other words, virtually all of us have been living in a bubble “outside of history.”…Hardly anyone you know, including yourself, is prepared to live in actual “moving” history.

Paul Krugman made a similar case back in 2011, using the example of how few appliances in his kitchen had changed in recent decades:

The 1957 owners [of my kitchen] didn’t have a microwave, and we have gone from black and white broadcasts of Sid Caesar to off-color humor on The Comedy Channel, but basically they lived pretty much the way we do. Now turn the clock back another 39 years, to 1918 — and you are in a world in which a horse-drawn wagon delivered blocks of ice to your icebox, a world not only without TV but without mass media of any kind (regularly scheduled radio entertainment began only in 1920). And of course back in 1918 nearly half of Americans still lived on farms, most without electricity and many without running water. By any reasonable standard, the change in how America lived between 1918 and 1957 was immensely greater than the change between 1957 and the present.

But when I look back on the world I lived in when I was a kid in 1990, it absolutely stuns me how different things are now. The technological changes I’ve already lived through may not have changed what my kitchen looks like, but they have radically altered both my life and the society around me. Almost all of these changes came from information technology — computers, the internet, social media, and smartphones.

Here are a few examples.

Screen time has eaten human life

If you had gone back to 1973 and made a cheesy low-budget sci-fi film about a future where humans sit around looking at little glowing hand-held screens, it might have become a cult classic among hippie Baby Boomers. Fast forward half a century, and this is the reality I live in. When I go out to dinner with friends or hang out at their houses, they are often looking at their phones instead of interacting with anything in the physical world around them.

Nor is this just the people I hang out with. Just between 2008 and 2018, American adults’ daily time spent on social media more than doubled, to over 6 hours a day.

About a third of the populace is online “almost constantly”.

Source: Pew

All this screen time doesn’t necessarily show up in the productivity statistics — in fact, it might lower measured productivity, by inducing people to goof off more on their phones during work hours. But the reorientation of human life away from the physical world and toward a digital world of our own creation represents a real and massive change in the world nonetheless. To some extent, we already live in virtual reality.

The shift of human life from offline to online has profound implications for how we interact with each other. One example is how couples meet in the modern day. Dating apps have taken over from friends and work as the main ways that people meet romantic partners:

Source: Statista

The reorientation of social relationships to the online world is what makes the digital revolution different than the advent of television. TV involves staring at a screen for long periods of time, but it doesn’t let people talk to each other and form social bonds. Arguably, talking to each other and forming social bonds is the most important thing we do in our entire lives — personal relationships are the single biggest determinant of happiness. For almost all of human history, even in the age of telephones, our relationships were governed by physical proximity — who lived near us, worked with us, or could meet us in real life. That is suddenly no longer true.

What broader effects this will have on our society, of course, remains to be seen. One of my hypotheses is that online interaction will encourage people to identify with “vertical” communities — physically distant people who share their identities and interests — rather than the physical communities around them. This could obviously have profoundly disruptive impacts on cities and even nations, which are organized around contiguous physical territory. Perhaps this is already happening, and some of our modern political and social strife is due to the fact that we no longer have to get along with our neighbors in order to have a rich social life.

I once was lost, but now am found

Three decades ago, not getting lost was one of the most important goals of human labor and human social organization. Over the millennia we developed whole systems of landmarks, maps, directions, road names, and even social relations in order to make sure that we always knew how to find our way back to security and shelter. The possibility of getting lost was an ever-present worry for anyone who drove their car, or walked in the woods, or took a vacation to a strange place.

And now that foundational human experience is just…gone. Unless your phone runs out of battery or you’re in a very remote wilderness area, GPS and Google Maps will always be able to guide you home. Much of our physical and social wayfinding infrastructure, built up over so many centuries, has instantly been obviated — you don’t have to plan your route or ask for directions, you don’t have to keep close track of landmarks as you drive or walk, you don’t even have to remember the names of roads. The forest has lost its terror.

Of course, there was another kind of fake “getting lost” that could be quite fun, and that’s gone too. It was exhilarating to wander around in a foreign city not knowing what was around you, hunting for restaurants and shops and new neighborhoods to explore. You can still do that if you want, but it’s just imposing unnecessary difficulty for fun — you can just open Google Maps and find the nearest cafe or clothing store or museum or historical monument.

And there’s also a big and important flip side to the fact that no one gets lost anymore — as long as you have your smartphone with you, someone, somewhere, is always able to know where you are. Your location is tracked, wherever you go, even if Apple or Google is nice and respectful enough not to let humans look at that data. China, of course, has far less concern for privacy. But even where privacy rights are respected, governments and corporations are still capable of easily tracking your every move if they feel like it.

Mystery has gone out of the world

I still remember a moment in 2003 when I idly wondered what the Matterhorn looked like. In 1990, answering my curiosity would have required opening an encyclopedia, or — if my encyclopedia didn’t have a photo of that particular mountain — going through the library searching books for a picture. But in 2003, I just typed “Matterhorn” into Google image search, and the picture appeared.

Over the last two decades, there has been a massive proliferation of tools that convey enormous amounts of knowledge, on demand, to anyone who wants it. Wikipedia can teach you the basics of anything from math to history to geography. YouTube tutorials can show you how to fix things in your house, ride a jet ski, or cook a restaurant-quality meal. Google can tell you what anything looks like, or point you to any scientific paper on any topic. And they can give you all this knowledge on demand, from the little screen that you carry with you everywhere, whenever you want it.

This has changed the nature of human life. Just a few decades ago, the knowledge contained in human heads was of utmost importance. If your cabinet was loose or your drain was clogged, you had to know a human being who could fix it. If you wanted to know interesting facts about the world, you had to either ask a human being who knew those facts, or go on an exhaustive search. Wisdom and know-how were profoundly valuable personal attributes. Now they’re much less of a reason for distinction.

Now it’s important to note that understanding and practiced skill are still scarce commodities. YouTube can’t teach you how to be a great violinist (at least, not in 30 minutes), and Wikipedia can’t give you the ability to do difficult math proofs. And the knowledge that can be gained from direct experience is often superior in quality to the knowledge gained from a Google search — for example, if I actually go to the Matterhorn, I can see it from a variety of angles and in a variety of lighting. But overall, humans have taken much of the knowledge that they used to have to carry around in their heads and uploaded it to what is, in effect, a single unified exocortex.

But if ignorance (or at least, the accidental kind, rather than the willful kind) has diminished, mystery has also shrunk. Exploring remote locations, rare objects, and esoteric knowledge is no longer difficult enough to generate quite the same sense of adventure and wonder it once did. Just as GPS has taken some of the adventure out of visiting strange places, the vast sea of internet knowledge has made many other forms of exploration quotidian.

And there’s another kind of mystery that the internet has either eliminated or vastly reduced — the mystery of not knowing other people. In 1990, if I wanted to know what Indians thought about American politics, I’d just have to wonder. Now, I can just open Twitter and ask, and a bunch of Indian people will be happy to inform me of their views. In 1990, talking to someone from another country was a rare and exotic treat; now, it’s just something that we do every day without even really thinking of it. That’s the first time that has ever happened in the history of humankind.

The Universe has memory now

The internet doesn’t just find things for us or allow us to communicate; it also stores information in much larger volumes than books, TV shows, or any other medium that came before. Practically everything you’ve ever typed on the internet is still on the internet. As recently as a few decades ago, most of the things you said and did would be forgotten and misremembered after a fairly short time; now they’re frozen in silicon and magnets.

This has some obvious major consequences for the shape of human life. When you can be “canceled” by an online mob as a 35-year-old for something you said as a teenager, that requires people to be more on guard about what they say for their entire life, even as kids. When any prospective business partner or lover can Google you and find your background (unless you erase it, in which case the prospective partner will be justifiably suspicious), the ability to craft a new persona for yourself, and move beyond the baggage of the past, is limited. On the other hand, remembering what you were like at a younger age, or an argument that you made in a debate a few years ago, is now quite easy. And it’s a lot easier to keep in touch with old friends now.

Technology’s memory involves images and video as well, thanks to the explosion of digital cameras and the increasing capacity of hard drives. Many of the memories we want to preserve in life — our interactions with our offline friends, the places we traveled, the places we lived — now get stored on a hard drive.

Technology weirds the world

A lot of economists tend to think of technological change as being embodied in total factor productivity growth, which has slowed down since 1973 or so. But first of all, there are plenty of things that go into TFP growth besides what we typically think of as technology — there’s education, geographic mobility, a demographic dividend, and so on. As the economist Dietz Vollrath has shown, these factors can explain the entire productivity slowdown, with no need to appeal to a slowing rate of technological progress.

But even more fundamentally, technology changes the world in ways not directly captured by the monetary value of goods and services sold in the market. If our daily activities are redirected toward different sorts of relationships and interactions, that isn’t necessarily something that we’d pay a lot of money for — and yet it means human life is now an entirely different sort of endeavor. If we’re constantly surveilled by corporations and the government, that’s probably something we don’t want, and thus will not pay extra money for, even if rebelling against it would be too much of a hassle for most. And so on.

Sometimes technology grows the economy, but more fundamentally, it always weirds the world. By that I mean that technology changes the nature of what humans do and how we live, so that people living decades ago would think our modern lives bizarre, even if we find them perfectly normal. The information technology boom may not have goosed the productivity numbers as much as many hoped, but it has nevertheless left a deep and transformative impact on the shape of human life.

I’m excited to see if AI brings us a world of radical technology-driven change. But you and I have already been living in a world of radical change for decades now. Maybe the tendency to believe progress has slowed — to focus on the stagnation in our kitchens and not the fact that the world has suddenly become far more transparent, comprehensible, and recorded — is a way of avoiding future shock. But man, when I think of 1990, I can’t help but feel a little overwhelmed by how far we’ve come.



You are no longer the smartest type of thing on Earth

2026-02-13 17:15:24

“He comes like a day that has passed, and night enters our future with him.” — Charlo

Yesterday my pet rabbit bit my finger. It was an accident; he was trying to bite a towel to move it out of his way, and I accidentally stuck my hand in his mouth. He is a gentle beast, and would never bite a human intentionally. Anyway, the bite punctured and lacerated my left index finger near the front knuckle. I washed it out, put some ointment and a band-aid on it, and that was that.

It occurs to me that if my pet rabbit had instead been a tiger, I would now be dead. There is a reason most people don’t keep tigers as pets; they may be fluffy and cute, but they’re big and strong and can easily kill you. Instead, we generally keep pets who are smaller and weaker than us, allowing us to train them, to physically restrain them if necessary, and to minimize the danger to our own health.

Until now, we haven’t had to think about this principle in the context of intelligence. As long as you or I or anyone we know has been alive — for all of recorded history, and in fact for much, much longer than that — humankind has been the most intelligent thing on this planet.

At some point in the next couple of years, that will no longer be true. It arguably is no longer true right now. There is no single unarguable measure of intelligence — it’s not like distance or time. AI doesn’t think in the same way humans do. But it can get gold medals on the International Math Olympiad, solve difficult outstanding math problems all on its own, and get A’s in graduate school classes. Most human beings can’t do any of that.

Intelligence is as intelligence does. If it helps you feel unique and special to sit there and tell yourself “AI can’t think!”, then go ahead. And sure, AI doesn’t think exactly the way you do. It probably never will, in the same sense that a submarine will never paddle its fins and an airplane will never flap its wings. But a submarine can go faster than any fish, and an airplane can fly higher and faster than any bird, so it doesn’t matter. You can value your own unique human way of thinking all you like — and I agree, it’s pretty special and cool — but that doesn’t make it more effective than AI.

Right now, there are some cognitive things that humans still do better than AI, but that will probably not last. The entire might of the world’s technological innovation system is now being thrown into making AI better, and there is no sign of a slowdown in progress. One of the main things AI couldn’t do until recently was to work on a task for a long period of time. That’s changing fast. AI models are flying up the METR curve,1 which tries to measure the length of time a human would require to complete a software engineering task that AIs can do:

Source: Noam Brown

This is what’s behind all the “vibe coding” you’re hearing about. AI agents — basically, a program that keeps applying AI over and over until a task is complete — are now taking over much of software engineering. People just tell the AI what kind of software they want, and the AI pops it out. Human software engineers are still checking the code for problems, but as the technology improves, that oversight is likely to become uneconomical; AI-written software will never be perfect, but it’ll be consistently much better than anything humans could do, and at a tiny fraction of the price.
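For readers curious what “keeps applying AI over and over” looks like in practice, here is a minimal sketch of that agent loop in Python. The `call_model` function is a toy stand-in for a real model API call (it simply finishes after three steps); everything about it is a hypothetical illustration, not any vendor’s actual interface.

```python
# A minimal sketch of an agent loop: repeatedly call a model, apply its
# suggested action, and stop when it reports the task is done.
# `call_model` is a hypothetical stand-in for a real LLM API call.

def call_model(task, history):
    # Toy stand-in: pretend the model finishes after three work steps.
    step = len(history)
    if step < 3:
        return {"action": f"work on '{task}', step {step + 1}", "done": False}
    return {"action": "finish", "done": True}

def run_agent(task, max_steps=10):
    """Keep applying the model to the task until it says it is done."""
    history = []
    for _ in range(max_steps):
        reply = call_model(task, history)
        history.append(reply["action"])
        if reply["done"]:
            return history  # task complete
    return history  # gave up after max_steps

steps = run_agent("build a to-do app")
print(steps)  # three work steps, then "finish"
```

The `max_steps` cap is the only thing keeping a real agent like this from running forever — which is exactly why longer task horizons (the METR measurements above) are the metric to watch.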

Vibe coding is taking over fast. Spotify’s co-CEO recently revealed that the company’s best developers don’t write code anymore. Some journalists from CNBC, with no coding experience, vibe-coded a clone of the app Monday, and the company’s stock price promptly crashed. Meanwhile, AI is increasingly writing the next version of itself, and humans may not be in the loop for very much longer.

And all of this — ending software engineering as we know it, acing the hardest math tests, solving unsolved math problems, creating infinite apps at the touch of a button — is just the beginning. The amount of resources that the world is preparing to deploy to improve AI, this year and in the following few years, utterly dwarfs anything that it has deployed so far:

Source: Bloomberg

AI’s abilities scale with the amount of compute applied.2 The amount of compute available this year will be much greater than the amount that’s producing all the miracles you see now. And then next year’s compute will be far greater than that. All the while, AI itself will be searching for ways to improve AI algorithms to better take advantage of increased compute.

Other weaknesses of AI — in particular, its lack of long-term memory and its inability to learn on the fly — will eventually be solved.3 AI will be able to act on its own for longer and longer, with less and less human decision-making in the loop. Meanwhile, massive investment in robotics will give AI more and more direct contact with, understanding of, and control of the physical world.

More and more people are waking up to this reality. An article by Matt Shumer called “Something Big is Happening” recently went viral. It’s very simplified and hand-wavey, and Shumer himself is a bit of a huckster, but it gets the point across. If anything it understates the pace and magnitude of the changes taking place. I recommend giving it a read, if you haven’t already.

But there’s a bigger reality out there that people outside the tech industry — and even many people within it — don’t seem to have grasped yet. It isn’t just that AI could take your job, or put millions of people on welfare, or give us infinite free software, or whatever. It’s that for the first time in all of recorded history, humans no longer are — or soon no longer will be — the most intelligent beings on this planet, in any meaningful functional sense of the word.

For the rest of our lives, we’ll all be sleeping next to a tiger.
