Not Boring

by Packy McCormick. Tech strategy and analysis, but not boring.

Costless Sacrifice

2026-03-05 23:14:24

Welcome to the 727 newly Not Boring people who have joined us since our last essay! Join 259,712 smart, curious folks by subscribing here:

Subscribe now


Hi friends 👋,

Happy Thursday! It’s late in the week and late in the morning, so…

Let’s get to it.


Today’s Not Boring is brought to you by… Deel

Every week, at the bottom of the Weekly Dose, I thank my teammates, Aman and Sehaj, for their contributions. Thanks isn’t enough though. They do great work, for which they deserve to get paid. Challenge is: they’re based in India. Or, challenge was. Now, I pay them with Deel.

Deel is payroll for global teams, like not boring and many of our portfolio companies. I grabbed coffee with a founder in our portfolio the other day, and unprompted, he brought up how easy Deel made it to expand his team globally. He’d tried the alternatives and switched, and he was gushing, which is rare for a payroll provider.

Hiring talent outside your home country can open up new growth, but it also comes with unfamiliar rules, local labor laws, and compliance risks. That’s where an Employer of Record (EOR) comes in. If you’re a startup exploring global hiring for the first time, Deel’s free guide walks you through the basics. Whether you’re hiring your first international employee or planning your next market expansion, Deel can help.

Download the Guide


Costless Sacrifice

The Old Testament’s 2 Samuel tells the tale of King David as he unifies the tribes of Israel and establishes Jerusalem as the nation’s capital. The world was slower then, there were only like 50 to 75 million people around at the time, and so God could be more actively involved in the day-to-day management of human affairs.

For example, when King David ordered a census of Israel to put a number on its military strength, he angered God, who expected His servant to rely not on soldiers but on Him. God sent down a plague that killed 70,000 men, then instructed David, through the prophet Gad, to build an altar on the threshing floor of Araunah the Jebusite in order to stop it.

When David arrived, Araunah offered to just give him the land, the oxen, and the wood with which to sacrifice them for free as a gift to the king.

“But the king replied to Araunah, ‘No, I insist on paying you for it. I will not sacrifice to the Lord my God burnt offerings that cost me nothing.’ So David bought the threshing floor and the oxen and paid fifty shekels of silver for them.”

Costless sacrifice is not sacrifice. And we, like the Gods, demand sacrifice.


I read an essay by an economist in The Argument yesterday: The Tinder-ization of the Job Market.


The job market is stuck, Darling argues. It’s not bad: unemployment was relatively low in January at 4.3%; Prime Age (25-54) Employment at 80.9% is higher than it was at any point in the Obama or first Trump presidencies. Just stuck. The hiring rate averaged 3.3% in H2 2025, a level only touched during COVID and the Global Financial Crisis.

BLS Hires Data

The weird part is… we’re not in the middle of a crisis, financial or biological. Employment is high, remember! Unemployment is low! “Generally, a high employment rate and a ‘tight’ labor market are associated with high hiring rates,” Darling writes, “not low ones.”

Darling’s theory is that LLMs have made it easier for people to apply for a ton of jobs, with custom-written cover letters and everything.

He cites Greenhouse data which “showed recruiting workload rose 26% in the third quarter of 2024 and that 38% of job seekers reported ‘mass applying,’ flooding firms with far more resumes than before.” Per Business Insider, “the applications-to-recruiter ratio is now about 500-1, four times what it was just four years ago.” He links to a Kelsey Piper essay from last year that provides more data to back this up.

Because it’s easier to apply, the volume of applications is up. Because AI is writing cover letters, their quality is no longer a useful way to pull signal from all that noise. “A recent paper showed that after Freelancer.com introduced an AI-generated cover letter tool, the correlation between cover letter customization and offers dropped 79%.”

You used to get rewarded for customizing your cover letter, when it cost you something. Now it doesn’t, so you don’t, but you still gotta customize, because everyone else is.

We find ourselves running an incredibly stupid Red Queen’s Race.


“Here’s the hard thing about easy things: if everyone can do something, there’s no advantage to doing it, but you still have to do it anyway just to keep up.” I wrote that way back in August 2020, about DTC brands and Shopify. “When every rebel is armed, none really is. It’s like when you played GoldenEye 007 as a kid. Getting the Golden Gun the hard way was dope. Everyone getting the Golden Gun with a cheat code made the game suck.”

This is an idea I’ve been obsessed with for a long time: the asymmetric ability of the laggards to make the leaders’ lives a little bit worse.

You might produce the very best widget by far, but your potential customers will undoubtedly be bombarded by inferior alternatives claiming that they make the very best widget. At worst, your potential customer will fall for it; at best, your cost to acquire that customer goes up a little bit.

You spend hour after quiet hour handcrafting your widgets, smoothing their edges, polishing their faces, giving each one a little kiss before sending it out. You sacrifice. Your competitors do none of that. They yeet millions of these suckers out in Yiwu. Then they stamp HANDCRAFTED WITH LOVE IN USA on their website and who’s to say?

Some people really want a particular job. It’s their dream job. They spend hour after quiet hour poring over resume details, crafting a heartfelt cover letter, and saying a little prayer before hitting “Submit.” They sacrifice. Their sacrifice gets lost among the 1,732 less-lovingly-crafted (but who’s to say? who has the time to read them all) applications that came in that same day.

The burnt offerings that cost nothing and the burnt offerings that cost everything smell the same, because we make our offerings not to an omniscient God, but to fallible, overwhelmed humans.


This is happening more and more with writing, too. Claude in particular has gotten much better at writing, which means that more people are publishing essays.

There will be times when there’s an interesting topic to write about, and by the time I’ve thought about what path I might take through the argument, there are dozens of versions of the essay on X. Some of them are even pretty good!

But I’ve noticed something happening in my brain that’s worth sharing, because I would imagine a lot of people are thinking similar things about whatever it is that they do.

I’ll be interested in a topic, start chewing on it, start thinking through unique ways to present it, to shape the essay, the research I might need to do, the people I might need to talk to, all this stuff I’d need to do to make something great. Then I see a handful of pretty good versions on similar-ish topics, and I think, “Well, I guess they beat me to it. On to the next one.”

And who knows if the version I would have written would have ended up being better than what these AI-human teams whipped up in a couple hours. Who knows if it would have been great, or even good.

What I do know is that if I’d written it, I would have put in a lot more effort, agonized over it, sweat the details, tried to present the ideas in unexpected ways. I would have sacrificed something that cost me something, hours and hours, sometimes weeks and weeks of my life.

And half of the value of the post would have been the result of that effort, but half would have been the effort itself, my pointing to an idea and saying, “THIS IS AN IDEA THAT I WAS WILLING TO SPEND FIFTY HOURS WRESTLING WITH.”

And maybe I’ll still spend the time to write it, and maybe many someone elses will spend the time making the full and beautiful version of the things they want to create despite the easy-come existence of their bloodless simulacra, but a lot of the time, we won’t, and the world will be left with many hollow versions of a thing filling up the place where the one full one should have gone.

The pretty good is the enemy of the potentially great.

Right, like someone will somehow, eventually, once the recruiters have gathered themselves up and faced the never-ending application pile, basically just win the lottery and get offered the job, and they’ll be happy that they got it, because a paycheck is a paycheck, but two things will probably be true: 1) it wasn’t their dream job; they applied to 347 of them, this was the one they got, and 2) the person whose dream job it actually was, who would have been overjoyed to get to work in this specific job, who actually did write the cover letter the old-fashioned way for this one because they reallllllllly wanted it… no human ever even saw that person’s application. They’re still looking. Dream crushed, they’ve now applied to 123 jobs themselves, none of which they really care about, but all of which someone really wants and now is a little bit less likely to get.


I don’t know code as well as I know words, but it looks like the same story is playing out there. SemiAnalysis has started publishing this chart of the number of Claude Code GitHub Commits Over Time. The number, let me tell you, is going up. It’s going up big time.

SemiAnalysis

“4% of GitHub public commits are being authored by Claude Code right now. At the current trajectory, we believe that Claude Code will be 20%+ of all daily commits by the end of 2026. While you blinked, AI consumed all of software development.” “Claude Code Is The Inflection Point” is what this means.

And again, I am not a coding expert, but like…

Of course the machines that can happily print tokens at high speed 24/7 are going to produce more code than us meatbags. I can’t imagine what a similar graph of “% of Words Committed to X” would look like. We have to be approaching 50%; we might be way past it. That does not mean the Singularity is Near.

I think if you asked me how I’d position myself on an AI trade, it would be something like: short ASI, long tokens.

As so perfectly captured in Tool Shaped Objects:

The market for feeling productive is orders of magnitude larger than the market for being productive. Most people, most of the time, want to click and watch the number go up. They do not want to be told the number is fake. They will pay— in time, in attention, in actual money— to keep the number going up.

But his is a “come for the tool” view of token demand, and it misses the “stay for the network” part. It’s not just that we want to watch our own number go up. It’s that if everyone else’s numbers are going up, we need ours to go up just to keep up.

More words, more applications, more code, MORE, in the world’s stupidest Red Queen’s Race.

I don’t blame the tools. We do this all the time. Venkatesh Rao wrote about Premium Mediocre just two months after Attention is All You Need and years before the transformer’s significance became apparent. Kylie Jenner made her face look a certain way because she could afford it, then everyone started doing it, and it got more affordable, and now Instagram Face has become middle class, a costless sacrifice to the gods of vanity, signifying nothing.

Like, what does the number of Claude Code GitHub Commits signify? What does the number of words written by an LLM signify? What do thousands of applications for every single low-wage job signify?

If they cost nothing, they signify nothing.

I recently started reading The Control Revolution by James R. Beniger, written in 1986 and, from what I’ve read so far, criminally underread today. I’m still early in it, so I’ll save a deeper analysis for a future essay, but Beniger has an idea that is relevant to our conversation.

He believes that modern information technology was built in response to the Industrial Revolution, a direct response necessitated by industrial scale and complexity:

Until the Industrial Revolution, even the largest and most developed economies ran literally at a human pace, with processing speeds enhanced only slightly by draft animals and by wind and water power, and with system control increased correspondingly by modest bureaucratic structures. By far the greatest effect of industrialization, from this perspective, was to speed up a society’s entire material processing system, thereby precipitating what I call a crisis of control, a period in which innovations in information-processing and communication technologies lagged behind those of energy and its application to manufacturing and transportation.

Modern bureaucracy, the telegraph and telephone, mass media, computing, sensors… all of this, Beniger says, fell out of the crisis of control precipitated by the Industrial Revolution. The Industrial Revolution massively accelerated production. We could make things faster than ever. But then we had a distribution crisis: how do we move all this stuff? Railroads and logistics solved that. But then we had a communication crisis: how do all of those far flung hubs coordinate with each other? Enter telegraphs, telephones, train schedules, standardized clocks, and the bureaucracy to manage it all. And a consumption crisis: how do we get people to want and buy all this stuff we’re producing, all over the world? That’s where mass marketing, advertising, and eventually consumer culture come from.

All of which I bring up because if the Crisis of Control was that we did not have enough information technology to deal with the material abundance we were creating, we now face the opposite crisis: the information has outstripped that which it was born to control.

Like going off the gold standard, but for information. We can mindlessly and costlessly print information with no reference to the underlying stuff about which it was meant to inform.

This is the fast takeoff, the runaway scenario: information has reached escape velocity. Claude Code GitHub commits are going stratospheric. But what is the relationship between GitHub Commits and Total Factor Productivity? Between GitHub Commits and actual economic output?

A sacrifice needs to cost something.

The bureaucracy was established on the backs of the hard, provable work of making things. Today, the bureaucracy is the thing, the simulacrum of productivity that has become more real than the thing itself, and it lets society make things occasionally, if slowly.

An essay is valuable insofar as it costs the writer something. They may have paid the costs in the years of experience that seasoned a piece that took hours to write, like the Picasso story in which he asks for $100,000 for a picture that took him 30 seconds to draw. They may pay the costs in hours and hours, weeks and weeks of researching and writing and editing and re-writing and crumpling drafts into a wastebasket overflowing with previous drafts and re-writing again, and again, until it’s good enough.

Even if a machine wrote the same essay, word for word, it wouldn’t have paid the same cost. You can read the costlessness of it, feel it, even as the prose improves. We demand a cost.

This is what King David realized. Fifty shekels wasn’t much money to him, and God, being God, could have divined KD’s sincerity with or without the coin. But a sacrifice has got to cost something.

The bad news is: it seems that we’re in a downward spiral. Systemically, I don’t know how we pull out of this. There will be many, many more Claude Code GitHub Commits and Claude-written words in 2026 than there were in 2025, many more in 2027 than 2026, etc.

Anything can look like the Singularity with a dumb enough y-axis.

The good news is: you don’t need to play this game. You can make yourself pay a cost and you will be rewarded: externally, maybe. Internally, for sure. Eternally, I think so.

There is a school of thought that believes humans were brought into this universe to create. To live and struggle and love and sin and pay all of the costs required to create new things. The reason we are here, this belief holds, is precisely to experience the limitations and frictions that God cannot. We are here to pay the cost.

There is a joy that comes from conviction, and meaning in doing something only you can do, well. And somehow, that soul shines through. People can feel its presence as strongly as they can feel its absence.

I find that whenever I go comically over the top in the amount of work I do on something, spending a month to write an essay that AI could write ~75% as well in minutes, for example, that sacrifice is rewarded. People want to see that you care, that the work cost you something, that you had to make a choice, in order for them to pay a cost to you.

Pay the shekels.


That’s all for today. If reading this made you want to hire real humans, check out Deel.

We’ll be back in your inbox tomorrow with a Weekly Dose.

Thanks for reading,

Packy

Weekly Dose of Optimism #182

2026-02-27 21:37:12

Hi friends 👋,

Happy Friday! It’s quasi-warm in New York City, America won two golds in ice hockey (sorry, Sean, Dan didn’t have anything funny to say but we’re pumped), and the good guys just keep on doing things that make us optimistic.

Let’s get to it.


Subscribe now


(1) Form Gets $1 Billion Google Order for 30 Gigawatt-Hour Battery System

Steve Levine for The Information

The Decade of the Battery is charging ahead, now with American batteries.

Earlier this week, Google announced a deal with Xcel Energy to provide 1.9 GW of wind, solar, and battery power for a planned data center in Pine Island, Minnesota, part of a push for hyperscalers to Bring Your Own Electricity (BYOE). Wind and solar are clean, and cheap when the sun shines and the wind blows, but the sun doesn’t always shine, and the wind doesn’t always blow, so the deal includes $1 billion for Form Energy, a nine-year-old startup, to provide backup.

Form will be providing two kinds of batteries: standard lithium-ion batteries for instant surges of power, and iron-air batteries for longer-term backup.

Iron-air batteries work through a process that’s like reverse rusting. To discharge, the battery breathes in oxygen from the air. Iron metal at the anode reacts with that oxygen and water-based electrolyte to form iron rust (iron hydroxide/iron oxide). This oxidation reaction releases electrons, which flow through an external circuit as electricity. To charge and store electricity, you apply electricity (say, from solar or wind), and the process reverses. The rust is electrochemically reduced back into metallic iron, and oxygen is released back into the air. The iron is “de-rusted” and ready to discharge again.
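For readers who want the chemistry, the textbook alkaline half-reactions for an iron-air cell during discharge look roughly like this (a simplified sketch; Form’s exact cell chemistry may differ in the details):

```latex
% Discharge (alkaline electrolyte); charging runs these reactions in reverse
\text{Anode:}\qquad   \mathrm{Fe} + 2\,\mathrm{OH^-} \;\rightarrow\; \mathrm{Fe(OH)_2} + 2e^- \\
\text{Cathode:}\qquad \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^- \;\rightarrow\; 4\,\mathrm{OH^-} \\
\text{Overall:}\qquad 2\,\mathrm{Fe} + \mathrm{O_2} + 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Fe(OH)_2}
```

Charging applies current to run the reactions right to left: the rust is reduced back to metallic iron, and oxygen is released to the air.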

Iron Air Battery Diagram
Iron Air Battery Diagram | Rachel McKerracher

Iron-air batteries have low power density and slow response times, but they are extraordinarily cheap on a per-kWh basis, roughly 10% of the cost of lithium-ion. They’re perfect for longer-term storage, up to 100 hours, covering multi-day lulls in sun and wind. These lulls are whimsically named “Dunkelflaute” (“dark lull”) events.

Form will provide a 300 MW iron-air battery system for the project, and the batteries can store 100 hours of power, making it a 30 GWh system. It will be the biggest battery system in the world by energy capacity when delivered.
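The headline capacity figure follows from simple arithmetic on the numbers above (a quick sanity-check sketch, not anything from Form or Google):

```python
# Energy capacity = power rating x storage duration.
power_mw = 300      # iron-air system power rating (MW), from the article
duration_h = 100    # hours of storage, from the article

energy_mwh = power_mw * duration_h  # 30,000 MWh
energy_gwh = energy_mwh / 1_000     # convert MWh to GWh

print(energy_gwh)  # -> 30.0
```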

This is good news because we love batteries here at Not Boring, and because it’s a rare win for a western battery company in a category that’s been dominated by China. That also means good US jobs; the batteries for this project will be manufactured at Form Factory 1 in Weirton, West Virginia, on the site of a historic former steel mill.

So yeah, maybe AI is going to take all of our white-collar jobs (kidding, we’re on Team Thompson), but powering all that AI is going to create new ones, too.

(2) Proxima Fusion Signs MoU to Build Stellarator Fusion Power Plant

Proxima Fusion, a German startup out of the Max Planck Institute, signed an MoU with the Free State of Bavaria, RWE, and Max Planck Institute for Plasma Physics (IPP) to put the world’s first commercial stellarator fusion power plant on the grid in Europe.

It will need €2 billion to build a demonstration facility, Alpha, prior to the commercial plant, Stellaris; €400 million of that will come from Bavaria, and up to €1.2 billion may come from the German government. It’s an expensive way to say “Es tut mir leid” (“I’m sorry”) for shutting down all of the country’s nuclear power in favor of coal, but a very cool way to get German energy back on track.

Of course, it’s just an MoU. The plant still needs to be built, and then there’s the tricky matter of achieving Q>1 and, eventually, Q> whatever it needs to get to to be economically viable, but we love the stellarator.

Stellarators are a type of fusion reactor. In The Fusion Race, we wrote:

Designed by Lyman Spitzer at the Princeton Plasma Physics Laboratory, Stellarators are devices designed to confine hot plasma within magnetic fields in a twisted, torus-shaped configuration to sustain nuclear fusion reactions.

Stellarators, while promising, were difficult to design and build due to their complex magnetic field configurations.

But that was the 1950s! We have better technology today. On Age of Miracles, Julia and I interviewed Proxima CEO Francesco Sciortino, who told us something that’s stuck with us: we made Tokamaks first not because they were best, but because they were the easiest to design and manufacture. Stellarators, with their weird twists, were harder to design and manufacture, but closer to the platonic ideal of a fusion generator. Now, with better software and manufacturing capabilities, we can build reactors closer to that platonic ideal.

Proxima has a long way to go before it’s putting electrons on the grid, but we’re happy to see progress towards that goal. Gut gemacht, Germany.

(3) Magical Month for Psilocybin

Psilocybin, the psychoactive compound in magic mushrooms, has been classified as a Schedule I drug under the Controlled Substances Act since 1970. The classification means that it has “no accepted medical use and a high potential for abuse.”

That “no accepted medical use” part is coming under intense pressure from reality. Last week, Compass Pathways announced that it had achieved its primary endpoint in a second Phase 3 trial evaluating COMP360 psilocybin for treatment-resistant depression. Two doses of COMP360 25mg demonstrated a highly statistically significant reduction in depression symptoms versus control (p<0.001, -3.8 point MADRS difference). This makes Compass 3-for-3 on trials. The company plans to meet with the FDA to discuss a rolling approval submission between October and December 2026, which would make COMP360 the first “classic” psychedelic cleared in the U.S.

The week prior, a Johns Hopkins pilot study of 20 adults with well-documented post-treatment Lyme disease found that psilocybin, given with psychological support, produced significant and lasting reductions in multi-system symptom burden, including improved mood, fatigue, sleep, pain, and quality of life, that persisted for up to six months. The numbers are striking: general symptom scores dropped roughly 40% and mental/physical quality of life scores rose about 13% from baseline, with benefits maintained through the six-month follow-up. The durability is noteworthy, because a placebo effect would typically fade over six months.

The Lyme study is especially interesting because it extends the psilocybin evidence base beyond the depression/anxiety lane into a chronic neuroimmunological condition where there are essentially no accepted treatments. If psilocybin is doing something meaningful for post-treatment Lyme, it starts to suggest mechanisms (anti-inflammatory, neural connectivity remodeling) that go well beyond the “it helps you process emotions in therapy” framing.

From a pharma business perspective, one of the biggest challenges with psilocybin is that it just works. There’s no opportunity to keep patients on (and paying for) meds for the rest of their lives like there is with SSRIs. That’s all the more reason we should be taking psilocybin very seriously. If this same logic applies to other conditions, like Lyme, it could be a major win for humanity.

A miracle drug like GLP-1 whose side effects include euphoria and spiritual experiences instead of muscle loss and upset stomach would be magical indeed.

(4) Stripe’s 2025 Annual Letter

Patrick and John Collison for Stripe

Stripe published its annual letter this week. It’s one of the best pieces of economic writing you’ll read all year, because Stripe has some of the world’s best data on the internet economy, and because those Irishmen can write.

Last week, in Power in the Age of Intelligence, I made the case that tech-native category leaders like Stripe are more valuable because they’re going to eat more of their categories than second-best. Stripe is eating. They shared that they did $1.9 trillion in total payment volume in 2025, up 34% year-over-year, and that roughly 1.6% of global GDP is flowing through the company's pipes.

More interesting is the data they have on companies using Stripe. Their 2025 cohort of new businesses is growing 50% faster than the 2024 cohort. The number of companies hitting $10 million ARR within three months of launch doubled. iOS app releases jumped 60% year-over-year in December. GitHub pushes surged 41% between Q3 2024 and Q3 2025. All of their indicators suggest that the pace is accelerating, and the Collisons’ best guess is that it isn’t an anomaly.

The whole letter is worth a read, but three sections stand out.

First, their framework for “agentic commerce” lays out five levels, from agents that just fill out checkout forms for you all the way up to agents that anticipate what you need and buy it before you ask. They’re honest that we’re hovering between levels 1 and 2, but the comparison to the mid-90s — when HTTP, HTML, and DNS were being hashed out — feels apt.

Second, crypto is happening, even if crypto prices aren’t. Bitcoin is down 50% from its October peak, but stablecoin payments volume doubled to around $400 billion last year, with 60% estimated to be B2B payments. Bridge, the Not Boring Capital portfolio company acquired by Stripe, quadrupled its volume. Stripe is betting on stablecoin payments infrastructure hard with Tempo, a built-for-payments blockchain they’re building with Paradigm. Visa, Nubank, Shopify, and even Klarna (whose CEO was once a self-proclaimed crypto skeptic) are already testing it. Mainnet is launching soon.

Third, the letter closes with what might be its most important idea: we live in a “Republic of Permissions.” Technologies succeed or fail not just on their merits but on whether the web of regulators, committees, and courts lets them through. The Collisons cite Joel Mokyr’s Nobel-winning work on how culture, not just capital or technology, drives progress. They argue that AI could transform drug discovery, nuclear could deliver energy abundance, and drones could slash logistics costs, but only if we don’t let “a slurry of local ordinances harden into a blockade.” Hear hear.

It’s a letter about payments that’s really a letter about whether civilization can keep up with its own tools, co-written by a guy whose website has the canonical list of projects from a time when we were able to build fast. They think we can.

“We’re reminded of the phenomenon of falling into a large black hole… We write this letter at what may well turn out to be the advent of a different and hopefully much more beneficent singularity. While much around us in 2026 feels similar to prior years, it is also clear that the next decade will look very different to those just gone by.”

(5) Vitamin B2 and B3 nutrigenomics reveals a therapy for NAXD disease

Arc Institute in Cell

Pulling one of the most interesting entries from Ulkar Aghayeva’s Scientific Breakthroughs this week above the paywall. Here’s Ulkar on a fresh finding out of Arc Institute:

It may seem that vitamin biology has been figured out decades ago (for reference, all 13 classical vitamins were discovered by 1948). But this paper shows that there’s still a lot of unexplored territory in nutritional genomics.

Instead of starting with a target disease and trying to find a drug that treats it, they chose the well-known vitamins B2 and B3 and did a genome-wide CRISPR screen in K562 cancer cells to identify genetic diseases responsive to vitamin supplementation. NAXD, a repair enzyme essential in redox biology, emerged as the top hit for vitamin B3. Mutations in NAXD are known to be lethal in early childhood. The team generated knockout mice that showed a very similar disease profile to humans, and adding vitamin B3 to their food from birth increased their lifespan more than 40-fold.

How many other diseases are out there that could be so easily cured with various vitamin supplementations?

Additional sources: Arc Institute blogpost; twitter thread

EXTRA DOSE:

  • Scientific Breakthroughs from Ulkar

  • Jeremy Stern on Shyam Shankar

  • Alex Konrad on William Hockey

  • Mario joins Hummingbird

Read more

Weekly Dose of Optimism #181

2026-02-20 21:55:19

Hi friends 👋,

Happy Friday and welcome back to our 181st Weekly Dose of Optimism!

This week has it all, from transformers to aliens.

We are also introducing a new segment for not boring world members: Scientific Breakthroughs from Ulkar Aghayeva. In December, we shared Frontier of the Year 2025, in which Ulkar, Gavin Leech, and Lauren Gilbert reviewed 202 pieces of scientific news from the year and assigned each a score based on the probability it generalizes and how big it would be if it did. We loved it, so we asked Ulkar to do a roundup of the most important stories in science, which she did this week.

It’s our attempt to bring you even closer to the frontier. I hope you enjoy it.

Let’s get to it.


Today’s Weekly Dose is brought to you by… not boring world

The Dose is for everyone, but we’re adding more and more great stuff behind the paywall for not boring world members. We also have a deep slate of co-written essays in flight and going out to members over the next couple months. If you want to support not boring, help us make it as great as it can be, and stay at the cutting edge of technology, science, and business:

Subscribe now


(1) Heron Power Raises $140M to Build 40GW Solid State Transformer Factory

Everything that can go electric economically will, including the grid itself.

Heron Power, founded by former Tesla SVP of Powertrain and Energy Drew Baglino, raised $140 million in a Series B led by a16z American Dynamism to build solid state transformers. This is pure not boring catnip: power electronics from the guy in charge of a lot of the Tesla power electronics stuff we wrote about in The Electric Slide, a16z putting its new funds to work, and fixing the grid by attacking one of its key constraints.

The constraint is that transformers, which take electricity at one voltage and convert it to a different voltage (stepping up or down) using coiled wires, are outdated (the fundamental design hasn’t really changed since the late 1800s and assumes a one-way grid), passive (electricity goes in, gets stepped up or down, and comes out), massive (built with 10 tons of grain-oriented electrical steel and copper submerged in oil), and, worst of all, in desperately short supply. Lead times stretch up to 24 months, U.S. manufacturing meets less than 20% of demand, production has been moving to China, prices have spiked 60-80% since 2020, and demand is projected to double in the next decade.

So Heron is building Heron Link, a modular solid-state transformer that uses wide-bandgap semiconductors instead of scarce electrical steel and copper. It’s starting by selling to solar and battery farm operators and data centers, who represent huge and growing demand and for whom the solid state transformer eliminates the need for inverters. Traditional transformers can’t handle direct current, so an inverter turns DC from a solar panel or battery into AC. Heron Link can do that, so customers can skip the inverter. Heron Link is also software-defined, meaning it can use software to regulate voltage and frequency to actively manage grid stability. And it’s modular, meaning that if one of the device’s tens of power conversion modules fails, it can be swapped out in ten minutes. The units also contain lithium-ion batteries that can discharge quickly to provide 30 seconds of power to smooth the transition to backup power sources.

The funding will go to building a factory that can produce 40GW worth of Heron Links per year. At 5MW per Link, that’s 8,000 transformers.

Heron isn’t the only company making solid state transformers. I hope they all do great. But I do want to call out one competitor: Raleigh-based DG Matrix, whose co-founder and CTO is Subhashish Bhattacharya, a professor at NC State University who is a long-time collaborator of… B. Jayant Baliga, the inventor of the IGBT, The GATEway of India, the hero of the Power Electronics section of The Electric Slide!

Anyway, if we have a lot of smart people using wide bandgap semiconductors to fix the grid, the future of American energy is going to be SiC.

(2) DoW Transports a Valar Atomics Nuclear Reactor to Utah on a C-17

Isaiah Taylor and Valar Atomics

The title pretty much says it all here: the Department of War transported a Valar Atomics Ward nuclear reactor from California to Utah on a C-17. It’s another sign that the government is serious about turning on reactors by the United States’ 250th birthday on July 4, 2026. Not much more for me to add here other than: watch the video and the others in Valar’s Operation Windlord series.

In other nuclear news, Lockheed Martin is investing in Radiant Nuclear, which we wrote about in 2024, as it prepares to turn on its first reactor at the DOME this summer. Defense is starting to play offense on nuclear.

(3) Single vaccine could protect against all coughs, colds and flus, researchers say

James Gallagher for the BBC h/t Simon Taylor for the find

Researchers at Stanford have built a vaccine that protects against… everything. Like, everything they’ve tested, it works against. The nasal spray works by putting immune cells in the lungs on permanent “amber alert,” ready to fight whatever shows up. In animal tests, it reduced viral breakthrough by 100-to-1,000-fold and worked against flu, Covid, common cold viruses, two species of bacteria, and even house dust mite allergens. Prof Bali Pulendran calls it “a radical departure from the principle by which all vaccines have worked” since Edward Jenner figured out the originals in the 1790s.

It’s early (animal studies, not human trials) but the team is planning deliberate infection studies in people next. The big questions are whether the effect translates to human lungs, how long the protection lasts (about three months in mice), and whether keeping the immune system dialed up causes friendly fire. The researchers don’t think it should be permanent; they envision a seasonal spray at the start of winter or a rapid-deployment tool at the start of a pandemic to buy time while a targeted vaccine is developed.

That pandemic use case alone is worth getting excited about. One of the hardest lessons of early 2020 was how long it took to develop, test, and distribute a vaccine while people died waiting. A universal nasal spray sitting on the shelf, ready to deploy on day one, could change the math on the next pandemic entirely.

And if the seasonal version works, imagine a world where “cold and flu season” just no longer exists. As a parent of little kids: please, for the love of God, work in humans.

(4) DeepMind Veteran David Silver Raises $1B for Ineffable Intelligence

George Hammond for Financial Times

When I interviewed Richard Sutton last year, the father of Reinforcement Learning (RL) and author of The Bitter Lesson said, basically, that LLMs are a dead end and that the real Bitter Lesson-pilled approach would be to just let AI learn from experience. That resonated with me. Even with all of the recent coding and agent advances, these things are still missing something ineffably intelligent to me.

David Silver wants to fix that. Silver, who joined DeepMind in 2010, was one of the team’s star researchers. He worked on AlphaGo, AlphaStar, and the Gemini family of models. And last year, with Sutton, he wrote a paper titled Welcome to the Era of Experience.

Silver and Sutton argue that we’re moving from an era where AI mainly learns by imitating static human data (like text on the internet) to an “Era of Experience,” where the most powerful systems will learn predominantly from their own ongoing interaction with environments. Agents will improve by generating vast streams of experiential data, acting over long time horizons, grounded in real environments and rewards, which will unlock truly superhuman capabilities beyond the limits of human-written data. Which makes sense intuitively!

Sequoia is leading a $1 billion investment at a $4 billion valuation. I’m excited for this one. And it couldn’t come at a better time because…

(5) The US Government is Going to Release the ET, UAP, and UFO Files

Maybe the greatest proof that we’re in some other beings’ simulation is the fact that just as we are about to figure out how to create intelligence ourselves, we finally begin to get the truth about other intelligent life in the universe.

This is just one Truth post, and the government has a spotty (redacted) record releasing the important details in important files recently, but it seems that disclosure might finally be upon us.

It’s too early to know or speculate when we’ll get more information, what’s in the files, and how ontologically shocking it will be; it could be anything from “we’ve been holding a few craft that we don’t quite understand” to “we’ve reverse-engineered the craft and understand antigravity” to “humanity is just an experiment run by far more intelligent beings” to … anything, really.

But it’s funny and kind of beautiful that just as so many people are worrying about humanity’s place in a world with AI, we may get proof that we’re part of something much larger, weirder, and more wondrous than we could ever imagine.

EXTRA DOSE: Scientific Breakthroughs, Two Podcasts, and Research Revival Fund

Subscribe to learn about 8 scientific breakthroughs this week, including gene drive-like systems to fight antibacterial resistance, potential early cancer diagnosis method from Arc Institute, DMT for major depressive disorder, and using laser writing in glass for long-term storage. Let us know what you think!

Read more

Power in the Age of Intelligence

2026-02-18 21:33:05

Welcome to the 737 newly Not Boring people who have joined us since our last essay! Join 258,985 smart, curious folks by subscribing here:

Subscribe now


Hi friends 👋,

Happy Wednesday!

Émile Borel once said that given enough time, a bunch of monkeys banging on typewriters would come up with Shakespeare. And yet, despite the innumerable X Articles on software moats in the face of AI, I haven’t read a single one that’s satisfying.

I think it’s because thinking about software moats, how a software company might protect itself from abundant code, is the wrong frame altogether, and that the more interesting and relevant frame is which companies, SaaS, hardware, or otherwise, stand to benefit the most from newly abundant inputs.

Those companies, not vibe coders, are the ones that point solutions should be worried about. They will win enormous market shares and fortunes. They will come to dominate large industries by using new technology to compete, capturing the High Ground, and expanding further outward than companies with lesser technological tools could have ever dreamed of. They will be the Standard Oils of this era.

This essay is about those companies.

Let’s get to it.


Today’s Not Boring is brought to you by… Silicon Valley Bank

In 2025, crypto returned to the financial mainstream. It is, as they say, so back.

What’s ahead for 2026? I’m glad you asked. Silicon Valley Bank is out with their annual crypto outlook, featuring proprietary insights and data from 500+ crypto clients. We love a bank that banks crypto.

Silicon Valley Bank makes five predictions for the year ahead, including:

  1. Institutional capital goes vertical with increased VC investment and corporate adoption.

  2. M&A posts another banner year after the highest-ever deal count in 2025.

  3. Real-World Asset (RWA) tokenization goes mainstream on prediction market strength.

Last year, Silicon Valley Bank predicted that stablecoins would be the big breakout use case. That was correct. They think that will continue this year, too. Read their take on what comes next, free:

Get SVB's 2026 Crypto Predictions Free


Power in the Age of Intelligence

One of the more head-scratching anomalies in the market is the valuation gap between Stripe and Adyen. The two payments companies handle similar amounts of Total Payment Volume. Stripe is growing faster. Adyen reports exceptional margins and cash conversion. Stripe is reportedly doing a tender offer at a $140 billion valuation in the private markets. Adyen is valued at $34 billion in the public markets. There are a number of theories for why this is the case, most of which boil down to: VCs are idiots, as they’ll find out if Stripe ever goes public.

Chart from Claude based on market data

Ramp versus Brex is another example of the same idea. Ramp was most recently valued at $32 billion in the private markets. Brex, which had been valued at $12 billion in the private markets, sold to Capital One for $5.15 billion. Ramp is doing more revenue and growing faster, but not 6x more revenue or 6x faster. Once again, the actual market disagrees with the VCs.

Or does it?

I have a different theory, one that neatly fits those two cases, the SaaSpocalypse, SpaceX’s $1.25 trillion valuation, and even the evolving structure of venture capital itself: winner takes more.

The history of business is basically the history of increasing concentration of value, accelerated in spurts by technological change. Centuries ago, firms operated within cart-hauling distance of their customers, creating a system of local monopolies. A brewer in 1800 served a single town, if that. Canning, railroads, telegraphs, mass production, electrification, containerization, planes, and the internet, among other technologies, expanded companies’ available market, and winners captured a greater and greater share of value.

Economic data backs this up.

In a 2020 paper, Jan De Loecker and Jan Eeckhout found that aggregate markups rose from 21% above marginal cost to 61% between 1980 and the late 2010s, and this increase was driven almost entirely by the upper tail of the distribution; median firm markups barely changed, while the 90th-percentile markups surged.

De Loecker and Eeckhout, The Rise of Market Power and the Macroeconomic Implications

In The Fall of the Labor Share and the Rise of Superstar Firms, Autor et al. find that industry sales concentration trends up over time across measures, and it rises more in sales than in employment, what Brynjolfsson et al. call “scale without mass.” In their account, this shift reflects reallocation toward superstar firms with high markups and profits and low labor shares.

A 2023 paper by Spencer Y. Kwon, Yueran Ma, and Kaspar Zimmermann at UChicago, 100 Years of Rising Corporate Concentration, uses IRS Statistics of Income to show that the top 1% of U.S. corporations by assets accounted for about 72% of total corporate assets in the 1930s and about 97% in the 2010s. The top 0.1% of U.S. corporations by assets increased its share of total corporate assets from 47% to 88% over the same period. Power laws within power laws.

Kwon, Ma, and Zimmermann, 100 Years of Rising Corporate Concentration

Today, the Magnificent 7 accounts for one-third of the market cap of the S&P 500, and Apollo showed that those seven companies are driving the vast majority of equity returns.

Apollo, Mag 7 vs. Everyone Else

Venture capital is responding to this information as you would expect. Per Pitchbook, as of August last year, 41% of all VC dollars deployed in the US in 2025 went to just ten companies. Per Axios, more recent Pitchbook data shows that “The estimated aggregate valuation of unicorns hasn’t actually changed too much — $4.4 trillion vs. $4.7 trillion at the end of 2025 — because the top 10 companies account for around 52% of value (up from only 18.5% in 2022 and the highest such figure in a decade).”

Venture capital firms themselves are concentrating. As I wrote in a16z: The Power Brokers, “a16z accounted for over 18% of all US VC funds raised in 2025.” Just yesterday, Thrive announced that it raised $10 billion: $1 billion for early-stage investments, and $9 billion for late-stage investments, which is the kind of split you put in place if you believe your winners are going to keep winning.

The theory for investing in very large funds like a16z, Founders Fund, Thrive, General Catalyst, and Greenoaks is that they are best positioned to win large allocations in the handful of companies that matter, and that those companies will capture most of the value in this vintage. Notably, all five of the firms I just mentioned are investors in Stripe, and three of the five (Founders Fund, Thrive, and General Catalyst) are investors in Ramp.

All of that data suggests increasing concentration. Through this lens, the SaaSpocalypse (the violent sell-off in software stocks) is less about software writ large dying, and more about point solution software finally facing economic gravity. They are no longer getting a free pass simply for having a good business model.

The past few decades of software exceptionalism have been an exception based on a business model so sweet and capabilities so universally useful that the rules of strategy, while not wholly unimportant, were less consequential than normal. A venture capitalist could look at a standard set of SaaS metrics (ARR, growth rate, net retention rate, gross margin, LTV:CAC, Rule of 40, etc…) and underwrite whatever new business they encountered to them. This is why you hear things like “Here’s how much ARR you need to raise a Series A.” The companies are basically all different flavors of the same thing.

There are, of course, idiosyncrasies between selling software to professional services firms and selling software to energy companies, for example, but the basic model is the same. Invest upfront to hire engineers, write software to make someone’s job easier, and sell the software to as many customers as possible at high margins. Different industries may need software to do different things, have different buyers, be larger or smaller, be more or less willing to pay, and be more or less expensive to acquire. A bigger, less crowded market with a strong need and a high willingness to pay is better than the opposite. But figuring that out is relatively straightforward. Since software operates at the edge of value in most industries, and does not attempt to strike at its core or compete with it, it doesn’t require thorough competitive analysis.

Since the SaaSpocalypse, people have gotten AI to write tens of thousands of words on which types of software companies have moats given AI and posted the resulting essays to X. They’ve gotten a little more specific than “SaaS is good.” Data is a moat, or a particular type of data at least. Or it isn’t, maybe, because The Agent Will Eat Your System of Record. Certainly, dealing with regulatory hair earns you a moat. No?

Most of the takes I’ve seen miss what matters.

On paper, Stripe and Adyen have basically the same moats, as do Ramp and Brex. I love a good hardware moat more than the next guy, as I’ve been writing since The Good Thing About Hard Things in July 2022, before ChatGPT or Claude Code but when it was clear that good software alone would offer no moat. I was too unspecific in that piece, too. Some hardware businesses will get very large, and others will fail. Hardware itself isn’t a moat. Good luck making LFP cells in America.

No, what matters is becoming the leader in your industry in a way that is incredibly specific to that industry and in such a way that your business benefits from, instead of being threatened by, abundant improvements in general purpose technologies like AI and batteries.

What matters now is the same stuff that has always mattered but that software forgave for a while: own the scarce, defensible asset in an industry and use it as the High Ground from which to dominate. Ricardo said this.

If you’re a startup, and you don’t already own the scarce asset, then you need to identify the constraint holding the industry back, focus everything on breaking it, and expand from there.

History’s most influential military strategist, Carl von Clausewitz, said this. He called it Schwerpunkt, the center of gravity. “The first task, then, in planning for a war is to identify the enemy’s center of gravity, and if possible trace it back to a single one,” he wrote in On War. “The second task is to ensure that the forces to be used against that point are concentrated for a main offensive.”

For our purposes, the Schwerpunkt is the constraint you attack. The High Ground is the scarce and valuable position you win by breaking it. Moats are what keep others from taking it.

Even while forces must be concentrated against the Schwerpunkt, our attackers must plan for victory before it is won. The company that breaks the constraint needs to build the complementary assets (the distribution, the manufacturing, the customer relationships) to capture the value from its own innovation. Otherwise, its competitors will.

David Teece argued this in 1986, in Profiting from Technological Innovation. The paper, he wrote, “Demonstrates that when imitation is easy, markets don’t work well, and the profits from innovation may accrue to the owners of certain complementary assets, rather than to the developers of the intellectual property. This speaks to the need, in certain cases, for the innovating firm to establish a prior position in these complementary assets.” Which is the point I am making: innovation alone, software or hardware, isn’t enough.

Figuring out which companies might capture the Schwerpunkt and use it as a High Ground from which to expand is an entirely different kind of underwriting, impossible to do in a spreadsheet alone, even with Claude in Excel.

Companies that don’t own the High Ground face existential risk from technological progress. If you’re just selling point solution software, then software abundance is a threat. Hardware isn’t necessarily the moat people think it is, either, even if it’s less susceptible to AI, because AI isn’t the only technology improving rapidly. “Hardware is a moat” is the same kind of lazy thinking that “SaaS is the greatest business of all time” is. If you’re selling better mouse traps, you’re at risk every time someone builds a slightly better mouse trap.

Similarly, incumbents that currently own the High Ground but can’t wield modern technology face existential risk from those who can. This is why there is such a large opportunity for startups today. New technologies mean old constraints are finally attackable, and it’s likely to be newer companies doing the attacking.

Companies that do own the High Ground, on the other hand, and are tech-native, benefit from technological progress, just as land owners captured the gains from more farming labor and better farming tools, but more pronounced, because these modern landowners will corner the best talent, the most capital, and the richest veins of distribution. A glib way to put it is that a Ramp engineer with an AI will build something better than a CFO with an AI, no matter how good the AI gets. It’s not the vibe coders you should be worried about.

Newly abundant resources can have opposite effects on your business depending on your position, and it is likely to be the company with the High Ground wielding those resources that dooms the companies in weaker positions. Ask Slack how it felt to compete with Microsoft Teams; companies like Microsoft can now build a lot more “Teams.”

The game on the field is all about understanding who can own the High Ground in a given industry.

The moats are the same as they’ve always been. Study 7 Powers. When you’re starting out, you need to understand what your moats might be, but in order for moats to matter, you need to have something worth protecting. You need to own the High Ground.

If we really are living through the most consequential technology revolution in history, why are you spending so much time hand-wringing about protecting small, old castles when you could be thinking about how to build history’s most magnificent businesses?

The abundant inputs keep getting cheaper. The scarce asset keeps getting more valuable. The companies that own the latter and leverage the former will become larger than ever before.

These businesses themselves are scarce assets, valued on their strategic importance and industry size, because the opportunity is no longer to sell software into industries in order to marginally improve them, but to win those industries and capture their economics.

If your aim is to build or invest in these companies, old heuristics will do you no good. You need a brain of your own, some sweat on your brow, and some good ol’ fashioned strategy frameworks to help you reason about the opportunity at hand.

This essay is a guide to thinking through where power might concentrate, for those willing to think. If winner takes more, it’s about what it takes to build, or invest in, the companies that have a shot at winning large industries. And it’s about how to position yourself to gain strength from technological progress instead of running from it while throwing weak “moats” in your wake.

It is about Power in the Age of Intelligence.

A Tale of Two Industrial Revolutions

Or, it’s about Power in any age of rapid technological change, really.

While the advances we are experiencing today feel unprecedented, my thesis has been that we are going through a modern version of the Industrial Revolution.

Then, machines did what only human muscles could previously. Now, machines are doing what only human brains could previously, in new bodies built to house those brains. This is The Techno-Industrial Revolution.

So it is useful to study Rockefeller, Carnegie, Swift, and Ford. None created an industry from scratch. They all fit this pattern: identify the Schwerpunkt in an existing industry, break it, seize High Ground, integrate outward, dominate.

Standard Oil

When John D. Rockefeller met the oil industry, it was young, valuable, and incredibly volatile. Oil itself was abundant. There were a lot of refineries – roughly 30 in his hometown of Cleveland alone when he got to work – but their quality was inconsistent, and their processes were inefficient. Per Austin Vernon, “Refining methods were so inefficient in the mid-1860s that a barrel of oil (42 gallons) sold for almost the same price as a gallon of refined kerosene. Today, the price ratio of refined products to crude oil is ~1.25x instead of 42x.”

There was one big constraint to the profitable growth of the oil industry – the volatility – which could only be broken through scale and control. To get to scale and control, Rockefeller needed to drive down costs to capture the market. Refining was the place to get scale, given its inefficiency and the fact that, per Vernon, “A typical rule of thumb in chemical engineering is that capital costs increase sublinearly with capacity, usually by (capacity ratio)^0.6. A plant with double the output is only 50% more expensive to build, and operating costs tend to follow similar trends.”
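Vernon’s rule of thumb is known in chemical engineering as the six-tenths rule. Here is a minimal sketch of the math, with an illustrative function name of my own (the 0.6 exponent is an industry heuristic that varies by process, not a physical law):

```python
def scaled_capex(base_capex: float, capacity_ratio: float, exponent: float = 0.6) -> float:
    """Six-tenths rule: capital cost scales sublinearly with plant capacity.

    cost_new ≈ cost_old * (capacity_new / capacity_old) ** 0.6
    """
    return base_capex * capacity_ratio ** exponent

# Doubling capacity raises total capital cost by only ~52%, not 100%...
doubled = scaled_capex(1.0, 2.0)   # ≈ 1.52
# ...so capital cost per unit of output falls ~24%.
per_unit = doubled / 2.0           # ≈ 0.76
print(f"{doubled:.2f}x total capex, {per_unit:.2f}x capex per unit")
```

For what it’s worth, 20^0.6 ≈ 6, so at Standard Oil’s roughly 20x scale-up, per-unit capital cost alone would fall about 70%; operating costs, which Vernon notes follow similar trends, would push the total lower still.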

Standard Oil Refinery, 1889

So in partnership with chemist Samuel Andrews, one of the first people to distill kerosene from oil, Rockefeller continued to improve the kerosene yield. At the same time, he aggressively grew revenue and lowered costs by eating the whole cow, so to speak. He sold the non-kerosene byproducts others threw out (paraffin wax, naphtha, and gasoline) and used some of the fuel oil to power his own plants. He also integrated into barrels (by buying an oak tree forest and a barrel-making shop).

As it lowered costs and delivered a more consistent product, Rockefeller’s refinery (the predecessor to Standard Oil) grew, and as it grew, it lowered costs. Vernon again:

Standard Oil and its predecessor firms increased production ~20x between 1865 and the end of 1872, meaning their costs could have fallen more than 85%. At that point, they were the largest refiner in the world with a double-digit share of capacity, and it was their game to lose. If we understand this short period, then we know how the company eventually won.

The company that would become Standard Oil won the High Ground by breaking the constraint. Then it integrated outward, horizontally and vertically.

By 1870, Standard Oil was a joint stock company capitalized with $1 million that owned 10% of the oil trade in the United States. Rockefeller got busy acquiring struggling refineries or putting them out of business, increasing scale and efficiency in the process. Rockefeller also did favorable deals with the railroads, which Vernon argues actually had less to do with Standard Oil’s success than did its growing scale and efficiency. He kept growing, acquiring refiners in Pennsylvania, New York, and New England. He vertically integrated into pipelines (which replaced railroads), into distribution, into retail (ExxonMobil and Chevron are Standard successors), and into production itself.

By the late 1880s, Standard Oil controlled 90% of American refining, a share it held until it was broken up in 1911, when its $1.1 billion market cap represented 6.6% of the entire US stock market. To hear Vernon explain it, the outcome was a fait accompli by the time he’d attacked the Schwerpunkt and gained the High Ground in the 1860s.

Carnegie Steel

Andrew Carnegie’s story is so similar it’s almost suspicious. Like Rockefeller, Carnegie didn’t invent his product (steel); Bessemer did. Like Rockefeller, Carnegie realized the constraint to steel’s growth was inefficiency and inconsistency. Like Rockefeller, Carnegie hired a chemist (in his case, to measure what was happening inside the furnaces) and obsessed over cost, which he knew was the only thing he could control:

Show me your cost sheets. It is more interesting to know how well and how cheaply you have done this thing than how much money you have made, because the one is a temporary result, due possibly to special conditions of trade, but the other means a permanency that will go on with the works as long as they last.

His chemical knowledge allowed Carnegie to run his furnaces hotter and longer than anyone else, producing more steel at lower cost. His cost obsession lowered costs further. Low cost, high quality steel was the High Ground.

Carnegie Steel Mill, HBR

And from there, he integrated outward. Backward into coke (Frick) and iron ore (the Mesabi Range), into railroads to transport raw materials, and forward into finished products. He certainly didn’t sell his services and know-how to incumbents; he used them to destroy competitors on price until US Steel bought him out for $480 million in 1901 (roughly $18 billion today) to create the first billion-dollar corporation in history.

“Congratulations, Mr. Carnegie,” JP Morgan told him upon closing, “you are now the richest man in the world.”

Swift Meats

Gustavus Swift, like Rockefeller, would also “eat the whole cow.” He just did it later in his arc, and more literally.

The constraint was this: only about 60% of a live animal’s mass is edible, and meat goes bad. Which meant that, prior to the 1870s, the meat industry shipped 1,000-pound live cattle by rail from wherever they were raised to wherever they were going to be eaten. They paid freight by the pound (including the inedible 40%), had to feed the animals to keep them alive and healthy, and lost some to death in transit anyway.

So Swift, building on early experiments by Detroiter George Hammond, hired an engineer to design him a refrigerated railcar. Then, he could slaughter the beef in Chicago and ship the cuts to their final table much more efficiently. Railroads, not wanting to lose their livestock-shipping cash cow, refused to pull his cars, so Swift leased his own and partnered with smaller lines to move them. Then, he built icing stations along the routes, and replenished them with ice he contracted directly with ice harvesters in Wisconsin and other cold midwestern states. By necessity, he built the whole cold chain from scratch.

This combination of centralized slaughter in Chicago and cold chain to the coast was his High Ground. He was forced into vertical integration because none of the pieces made sense on their own, but once he had it, he used it to drive down costs.

Like Rockefeller, Swift was appalled by waste, and because he controlled his own slaughterhouses, he could do something about it: he turned cow byproducts into soap, glue, fertilizer, sundries, even medical products, which allowed him to increase revenue and lower prices. He also maximized his refrigerated cars by stacking butter, eggs, and cheese beneath the swinging carcasses of dressed beef heading East.

By 1884, after only six years in operation as a slaughterer, Swift had become the second largest meatpacking firm in the US. By 1900, the meatpacking industry, unconstrained, had grown to become the second largest in the country, behind only iron and steel.

Ford

It was a visit to a Chicago slaughterhouse that inspired Henry Ford’s assembly line. “Along about April 1, 1913, we first tried the experiment of an assembly line,” Ford writes in My Life and Work. “We tried it on assembling the flywheel magneto. I believe that this was the first moving line ever installed. The idea came in a general way from the overhead trolley that the Chicago packers use in dressing beef.”

Ford didn’t invent the automobile. By 1908, there were hundreds of American car companies selling expensive, hand-built machines to the wealthy. A typical car cost $2,000-$3,000, or roughly two and a half years’ wages for an average worker. Manufacturing costs were the constraint to the nascent automobile industry’s growth, so manufacturing costs were Ford’s Schwerpunkt.

Ford broke the constraint with the moving assembly line. Before it, a single worker assembled a complete flywheel magneto in about 20 minutes. Ford split the work across 29 operations, cutting the time to 13 minutes. Then he raised the line eight inches and cut it to seven minutes. Then he adjusted the speed of the line and cut it to five. The same progression played out across the whole car: total assembly time fell from over 12 hours per chassis to 93 minutes.

That manufacturing capability was the High Ground. The Model T launched in 1908 at $850, already half the price of the competition. As the assembly line improved, Ford kept cutting: $550 by 1913, $360 by 1916, below $300 by 1924. What had been 18 months’ wages for an average worker became four months’.

From that High Ground, Ford integrated ferociously. Rubber plantations in Brazil. Iron mines and timberland in Michigan. A glass plant, a railroad, a steel mill, even soybean farms for plastic components. All of it flowed into the Rouge River complex, where raw materials entered one end and half of the finished cars on the world’s roads rolled out the other.

The result was that Ford’s sales went from 12,000 in 1909 to half a million in 1916 to over two million in 1923. At its peak, more than half the cars in the world were Fords.

Across the Industrial Revolution’s most successful entrepreneurs, there was a clear pattern, one that looks almost nothing like how you’d think about scaling a SaaS business: identify the Schwerpunkt in an existing industry, break it, seize the High Ground, integrate outward, dominate.

Pause for a second. Think about how people are telling you to analyze businesses today. Would those AI-generated moat lists, or the equivalent for their time, have given you any advantage whatsoever in identifying Rockefeller, Carnegie, Swift, or Ford, let alone becoming one of them? It is never that easy, and it always takes work.

I want you to feel those examples, because what’s old is new again. The biggest companies in the world today are executing against the same framework, in ways that are specific to their industry.

SpaceX Goes Vertical

The funny thing about today’s biggest software companies is just how much they spend on hardware. This year, the world’s four largest companies that started as software companies plan to spend an estimated $600-700 billion on data center buildouts, equivalent to roughly 2% of US GDP, a level of infrastructure buildout comparable to laying America’s railroads in the 1850s.

Amazon, an online bookseller, will spend $200 billion. Google, the search engine giant, will spend $175-185 billion. Meta, the social network for college students, will spend $115-135 billion. And Microsoft, which makes operating systems and office applications, will spend $100-150 billion.

Except, of course, that’s not what those businesses are. They are technology conglomerates that used the early internet to break the Schwerpunkt in their respective industries, gain their respective High Grounds, and integrate outward so far that they’re all running into each other at this new frontier. And despite their best efforts and hundreds of billions of dollars spent on terrestrial data centers, Elon Musk still thinks we’re going to need to put them in space.

SpaceX

Before SpaceX, the constraint in the space industry was cost to orbit. SpaceX broke the constraint with reusable rockets, drove costs down an order of magnitude, and quite literally gained the High Ground. From there, it integrated outward into Starlink communications satellites, which it can launch more cheaply than competitors because it owns the rockets and which fund the development of even bigger Starship rockets, which bring the cost per kg to launch things into orbit down even further. SpaceX used vertical integration the same way Rockefeller did: it is simultaneously its own largest customer and its own cheapest supplier. Casey Handmer’s The SpaceX Starship is a very big deal is an excellent read on the topic.

In 2023, Elon Musk founded xAI to build maximally truth-seeking AI. He then merged it with X (née Twitter). xAI got a late start, and it doesn’t have the best models yet, but what it is best in the world at is building data centers very fast. So the world took note when Elon said that we’d never be able to build enough data centers on earth to meet demand for AI, and that we will need to start building them in space.

So on February 2nd, 2026, SpaceX announced that “SpaceX has acquired xAI to form the most ambitious, vertically-integrated innovation engine on (and off) Earth, with AI, rockets, space-based internet, direct-to-mobile device communications and the world’s foremost real-time information and free speech platform.” According to Musk, SpaceX will get the data centers it needs in space via ~10,000 Starship launches per year, or roughly one per hour, every hour. Simultaneously, it will build a self-growing Moon city, from which it plans to build a mass driver in order to make a terawatt or more per year of AI satellites, far more energy than Rockefeller could have conceived of, en route to eventually colonizing Mars and fulfilling SpaceX’s mission to “extend consciousness and life as we know it to the stars.”

It remains to be seen whether the High Ground will also give SpaceX a decisive advantage in the AI race, but it certainly demonstrates that the stakes have grown since the Industrial Revolution, even as the strategy has remained the same.

But no matter how that plays out, SpaceX (and Google, Microsoft, Amazon, Meta, Apple, Tesla, NVIDIA, OpenAI, and Anthropic) aren’t going to eat everything, or else I wouldn’t be investing in startups.

The Hunt for the High Ground

Boulton and Watt did not capture the entire value of the Industrial Revolution they steam-powered, although they did vertically integrate: Boulton into the Soho Manufactory, the steam engine-based Gigafactory of its day, and Watt, via his son, into steamships. Nor did Rockefeller eat everything in the internal combustion era of the Revolution, despite owning the oil on which it all ran.

In addition to Rockefeller (oil), Boulton and Watt (steam engine), Carnegie (steel), Swift (meatpacking), and Ford (automobiles), the Industrial Revolution gushed multi-generational wealth for the Vanderbilts (railroads), Morgans (finance), Sears and Roebucks (retail), Havemeyers (sugar), McCormicks (agricultural equipment, sadly unrelated), Westinghouses (power), Otises (elevators), Pullmans (luxury rail cars), Bells and Vails (telecommunications), Pulitzers and Hearsts (publishing), Eastmans (photography), Kelloggs (processed food), Pillsburys (milling), Singers (sewing machines), Nobels (dynamite), DuPonts (chemicals), and Dukes (tobacco). This list is incomplete.

What’s notable is the diversity of industries that produced these fortunes. Machines made “labor” more abundant, and the companies that seized upon the technological innovation to break the Schwerpunkt in their specific industry, gain the High Ground, and expand were all wildly successful. Far from simply defending against mechanization, they seized the complementary assets to which value flowed as key inputs became abundant.

There are clear differences between AI, developed by huge labs and distributed at the speed of bits, and Industrial Era machine-filled factories, but I expect the Techno-Industrial Era to play out similarly. Each industry has unique constraints and resulting High Grounds, very few of which can be cracked and captured with digital intelligence alone.

The diversity that creates unique opportunities in each industry, however, makes underwriting those opportunities a different and more difficult beast than underwriting SaaS companies, which are more homogenous. There is no list, no spreadsheet, no agreed-upon metrics that will tell you which companies will become today’s Standard Oils. There is only the evaluation of constraints and the hunt for High Grounds.

Instead of a list, then, let me give you my favorite example: Base Power Company.

Base Power Company

Base Power Company doesn’t just make batteries. It buys cells (the commoditized piece of the value chain), manufactures battery packs, installs them on homes (starting in Texas), writes software to coordinate them, trades in the power market, and partners with utilities to help balance the grid. Base is built on the type of logic companies (and their investors) will need to exercise if they want to compete in the modern era, and it goes something like this.

We want to fix power. What’s the bottleneck? The grid. Companies are competing to build power generation and the electric machines that consume that power, and the better they do, the more strain there will be on the grid. The grid is the chokepoint. So how do you fix the grid? Laying new transmission and distribution is slow and expensive, and the grid we have is already structurally underutilized because it’s built to serve peak demand, so to smooth it out, you need batteries. Where should you put the batteries? Centralized battery farms are helpful, but they need to wait in interconnect queues, which makes them slower to turn on, and those batteries still need to distribute power to end users when demand peaks, which means they don’t fully solve the bottleneck. So you need to put the batteries right next to demand. Fill them up when the grid has capacity, and use them to smooth demand when demand is high. And if you want to put batteries next to demand (homes, to start), where is the best place in the country to do that? Texas, which operates its own deregulated grid, ERCOT, is volatile (which means potential for higher trading profits and greater need on the part of customers and utilities), and is regulatorily friendly. So you start by putting batteries on the homes of early adopters within Texas. Those slots are scarce - it would take a lot for a customer to rip and replace their batteries, and no one is installing two companies’ batteries. Then, connect them with software, improve the grid and each customer’s experience with more batteries on the network, and use the richest source of demand available in the country to begin to scale. Bring manufacturing in-house, continue to improve the batteries, decrease their costs, get more efficient at installing them, connect more of them, sign more early adopter utilities, get more scale. At which point, it’s hard to imagine a viable way to beat Base at its own game. Then, expand. 
Integrate upstream into grid hardware and generation and downstream into electronic devices to sell into customers with whom you’ve built trust. Expand geographically, leveraging scale, experience, and software to offer a better product than a potential competitor attempting to grab a foothold by starting in the next-best market. Keep expanding. Dominate. Expand some more.

There are a couple things I want you to take away from that paragraph.

First, it is a very long paragraph. This is not simple or easy. I think investors bemoaning the Death of SaaS are in part sad that the era of underwriting software businesses on known, straightforward metrics is over. Underwriting the biggest companies of this generation will be a much more bespoke process. The time has come to move from simple analysis to strategy. It is not a coincidence that my first Deep Dive on Base was structured as a walk through the evolution of the strategy memos that Zach and Justin wrote before touching a single atom.

Second, as technology improves – from AI to the Electric Stack – the vast majority of the returns will accrue to the companies that figure out the right place to attack and execute violently against their conviction. A simple way to think about this is that better software is more valuable to Base than it is to a smaller competitor, to a battery farm operator, or to a power generation company, as is better hardware. Better robots for manufacturing and logistics would make Base faster and more profitable, and making the game more CapEx intensive would give it an advantage over would-be competitors.

The lesson from Base is not that hardware is a moat, or that you should put your product next to Texans’ homes.

It’s that you need to deeply understand the problem you’re trying to solve, the constraint that’s bottlenecking it, how you’re going to unblock it with technology (and why now?), and how you might expand to capture the market once you do. It applies differently in every industry.

For airlines, the constraint is the engine: today’s turbofan engines carry planes as fast and as efficiently as they can. Everything bad in air travel is downstream of that. So Astro Mechanica is building a new engine that is faster and more efficient at every speed. But certifying a commercial airliner is a long and expensive process, so Astro plans to sell into Defense first, then build private supersonic planes (which are cheaper to certify and can be cost-competitive with first class tickets immediately), then build larger supersonic planes that are cost-competitive with commercial air travel, and use the advantage in speed and cost to build its own full-stack airline, from booking to flight.

For internet, the constraint is the architecture: incumbent telcos froze their architectures around early-2000s assumptions about what was expensive, locked themselves into passive optical networks and vendor dependence, and now spend billions every few years on upgrades that still deliver shared, degraded bandwidth with no redundancy. They do zero R&D. So Somos Internet is rebuilding the full stack from scratch: an Active Ethernet architecture borrowed from data centers, physically simple with complexity pushed to software, that delivers dedicated bandwidth to every home at a fraction of the CapEx. As it grows, Somos eats more of its supply chain: “It’s been this never-ending game of doing something janky, getting credibility, doing crazier stuff, getting more resources, getting smarter people so that we can fix the things that were messed up in the janky past iteration,” Forrest explained. “Then gaining credibility to get more resources to get cooler people to do crazier stuff. It’s like this self-sustaining fission process.” Somos is expanding geographically, into new markets, vertically, by making its own hardware and laying its own fiber, and horizontally, building hydro-powered data centers. From the position of delivering one of the few home utilities everyone pays for, better, faster, and cheaper than incumbents, it plans to expand what it offers the customers with whom it’s built trust and loyalty. Maybe one day, it will offer batteries and power. Maybe one day, it will use its growing cash position to enter the United States.

There are a lot of similarities between Base and Somos: both own a core home utility and deliver it better than incumbents, which earns them the right to expand. But there are differences, too. Base is starting in the very best market for its technology, because that’s where the need is greatest and the regulatory environment is friendliest. If Somos started somewhere like New York City, it would be caught up in red tape and slow, expensive telco lawsuits for years; so it’s starting in a high-need, regulatorily friendly environment and building up cash for bigger battles. And Astro’s approach is almost entirely different from both Base’s and Somos’, apart from using better technology, now feasible thanks to Curve Convergence, to break a constraint and capture the High Ground. For one thing, people go to planes, so Astro can’t capture their real estate in the same way that Base or Somos can.

There I go with the long paragraphs again. Fine. There is endless nuance to this.

I am talking my own book here, not because I think my portfolio companies are the only businesses that will succeed in the Age of Intelligence, but because I understand their strategies much more thoroughly. Very smart people will disagree with me on each industry’s Schwerpunkts and potential High Grounds. And even once you’ve done all this work on paper, so much comes down to execution. Will the team that identified the right strategy be the same one that can build against it to capture the opportunity? Only time will tell. That’s what makes this so much fun - it’s not obvious!

What is obvious, and I hope clear at this point, is that there is no one answer, no handy guide that will tell you how to win in the Age of Intelligence. Which means that there is also not one business model.

A Note on Business Models

While the “Death of SaaS” is overblown, what I hope this freak out does is to end the default investor assumption that every business should try to be a SaaS business.

A week before the sell-off, I met with three separate founders who told me that investors didn’t like their businesses because they weren’t SaaS. In two cases, the founders were building services businesses – traditionally a huge venture red flag. In all three, they believed the technology they were building was so good that they could use it to compete directly with incumbents instead of selling them software that made them marginally more productive.

During the sell-off, Flexport’s Ryan Petersen tweeted that everyone “smart” had told him to just build SaaS. The idea being that it would be easier to sell to freight forwarders instead of actually becoming a freight forwarder and competing.

Other founders quote tweeted him saying they’d been given, and ignored, the same advice, including Cover’s Alexis Rivas, whose company builds houses. I’m not even sure what selling software would look like here.

This is not because investors are dumb. Selling software has real advantages, and those advantages are legible more quickly. Two of those three founders I spoke to said that they had competitors who were selling software and generating a lot of revenue quickly, which is why investors thought they should be doing the same.

In the past, that was a logical discussion to have: should you try to sell software to generate lots of high gross margin revenue in the near-term in a way that’s legible to downstream capital so that you can continue to raise and hopefully give yourself time to develop moats, or should you try to compete directly, with better technology and a chance at better economics within whatever your industry’s business model is, even if those economics are worse than SaaS economics, in pursuit of the larger and ultimately more impactful shot at winning and reshaping your industry?

In most cases, that is no longer a debate. AI squeezes it from both sides. From one side, SaaS is a more competitive, less defensible business; there will be enough competitive noise that it’s harder to establish traditional moats like network effects and switching costs, and customers big and sophisticated enough to actually pay a lot and make use of your tool may opt to build something custom themselves. From the other, the technology is so powerful in the right hands that it should provide a stronger force with which to attack the Schwerpunkt than deterministic software could have. In other words, it is more likely than ever that a technology-native new entrant can defeat incumbents, assuming their technology actually addresses the industry’s key constraint.

What this means is that investors need to get comfortable with a wider range of business models to accommodate whichever is the right one for the industry in which a company operates. This does not mean that they should treat all business models equally now. Instead, they need to stop blind pattern matching altogether.

Services might still be a terrible model for most companies, but exactly the right one for some. Stripe clearly shouldn’t be a services business; maybe an AI-native law firm should. Selling hardware to incumbents might be a bad business model, not simply because hardware is hard, but because buyers have all the power in a particular industry, or because existing suppliers have locked buyers into whole sticky ecosystems. What might be better is to use that better hardware as the High Ground from which to integrate and compete.

A question I would like to see more investors asking instead of “Why not sell SaaS” is:

If your technology is so good, why aren’t you using it to compete?

Some companies find out that selling software to incumbents is the wrong model only through trial-and-error. My favorite example here is Earth AI.

Earth AI developed AI models to identify drilling targets for mining explorers back before AI was a thing, and sold them to explorers for a very good price at high margins. The challenge was: they never heard back from their customers. Many went bust - exploration is a notoriously binary business - which meant they stopped paying; retention was hard. Many others just had no incentive to report back, which meant that Earth AI wasn’t learning which of its targets were good and bad, which meant that it couldn’t improve its models. So it bought its own rig and went to customer sites to find out for itself, and then it realized that it could build better rigs, and combine them with better models, and just compete directly. As I wrote in my Deep Dive:

The same thing that makes exploration customers bad customers – slowness, unwillingness to adopt technology – makes them very attractive competitors, if your tech actually works as well as you say it does. If you’re willing to vertically integrate – to do exploration, and drilling, and maybe even extraction – you might be able to build the most efficient explorer out there.

To be clear, Earth AI’s current business model is much more confusing than selling software. It has to invest in rigs up front, stake deposits, put teams on site to prove them out, and keep proving feasibility until a downstream miner wants to buy a stake in the deposit or buy the whole deposit outright and pay Earth AI a royalty, at which point, it becomes one of the most beautiful business models there is. Mining royalty & streaming companies have some of the highest market caps per employee in the world. Franco-Nevada is worth $48 billion with just 41 employees, good for $1.2 billion per employee! Earth AI has the potential to build up a similar portfolio at a much lower cost basis because it is willing to dig.

The point is, maybe you drill mineral deposits in Australia to build a portfolio of mines, maybe you buy cells, manufacture battery packs, install them on homes, and make money by becoming a Retail Electric Provider, trading power, and selling ancillary services, maybe you hire expensive humans, make them much more efficient, and sell their time, maybe you even sell software!

Whatever you need to do to break the constraint, gain the High Ground, and win your industry is what dictates the business model you should pursue.

You Can Even Sell Software, As Long as You Win

Sometimes, the Gods smile on an essay. As I was writing this, Stripe co-founder John Collison released the latest episode of his podcast, Cheeky Pint. His guest: Eric Glyman, the CEO of Ramp.

Responding to John’s first question, Eric described Ramp’s evolution in terms that should now sound familiar. A few years ago, Ramp’s gross profit was over 90% card interchange. Today, the non-card businesses, including bill payments, treasury, procurement, travel, and software, will comprise the majority of Ramp’s contribution profit.

Ramp used a card, software, and counter-positioning to attack what it viewed as the Schwerpunkt in corporate spend (the fact that everyone was selling money, and no one was selling time) and win the transaction layer, the High Ground from which it is now expanding to eat every point solution a finance team touches.

Throughout the conversation, he lays out from his inside view exactly how and why this is happening. Everything flows from earning the High Ground. Ramp has data that no one else has and that no new entrant can accumulate more quickly. As it adds more intelligence, it gets more data. It’s built the things that are too expensive to replicate with more tokens – “I think the fitness function for companies becomes can you actually do things in such a way where even if you could spend tokens on it, it would take more tokens to create the thing or do that work than the system that you’ve built to drive that outcome.” – and is happily spending tokens to build everything else. And as it adds more features, it grows. The company now “powers more than 2% of all corporate and small business card transactions in the United States.” The larger its share, the more it learns, the more it makes, and the more tokens it can throw at eating adjacent opportunities, which keep feeding the machine.

This is why Ramp is valued at $32 billion while Brex sold for $5.15 billion. It is why Stripe is worth multiples of Adyen. It is why Base, only two years old, was valued at $4 billion.

The ownership of the scarce position in an industry is itself a scarce asset. The market, whether it uses this language or not, is including in its valuation the belief that from that position, you can eat an industry.

In doing so, it is leaning on history and economic data. Once Rockefeller smoothed refining’s volatility and began to get scale advantages in the 1860s, it was a fait accompli. Once SpaceX drove down the cost of putting mass in orbit, and used that advantage to build a telecommunications cash cow that it could use to reinvest in cheaper launch, it became the leading candidate to win whatever economically valuable use cases required putting a lot of mass in orbit. Before Elon realized space data centers were going to be a thing, he’d won the right to win space data centers.

If you are confident in your analysis of the constraint and High Ground in an industry, and of which company is best positioned to break the former to win the latter, you can pay a premium under the assumption that more of that industry’s economic value will flow to the leader. That trend - increasing concentration of economic value - is a long and stable one, accelerated by new technologies.

Today, our new technologies are more powerful and general purpose than ever before, which means that the advantage accruing to category leaders who are able to wield those technologies is greater than ever before. They are levered to the pace of technological progress.

If AI gets smarter, Stripe and Ramp can eat more adjacencies, faster. If battery cells get more efficient, Base can offer a better service to its retail and utility customers. As power electronics continue to improve, Astro Mechanica can build faster, more efficient engines.

Whether SaaS is dead is one of the least interesting questions in the world. SaaS as a wellspring of valuable businesses, almost regardless of those businesses’ actual power, was a historical anomaly.

That doesn’t mean software is dead. We will see software businesses become some of the largest businesses in history, just as we will see hardware and even services businesses that dwarf Standard Oil’s size. Economic inputs are becoming more abundant, which means that more value will flow to the scarce complementary assets. This will continue as long as the abundance does.

The question that matters now is how you plan to win your industry. Everything else follows.

Power in the Age of Intelligence flows to the winners. Winners take more.


That’s all for today. We’ll be back in your inbox Friday with a Weekly Dose.

Thanks for reading,

Packy

Weekly Dose of Optimism #180

2026-02-13 21:07:33

Hi friends 👋 ,

Happy Friday from sunny Cape Town, South Africa! Not sure if it’s escaping frozen New York for warmer weather, spending time with family, or the fact that this was another one of the wildest weeks in Dose history, but I am feeling a little extra optimistic this week. By the end of this one, I hope you are too.

When Dan and I started writing this over three years ago, our goal was to make the world more optimistic by sharing all of the incredible progress happening in science and technology each week. That is still the case, and it’s still necessary. People are still pessimistic, and uncertain about what lies on the other side of progress.

Since we started writing, what’s changed is that things are simply moving much faster. There is more to cover each week. We have 7 Extra Doses in this one; each could be one of the top 5, and there are still things we didn’t cover.

So now, there’s an additional goal with the Dose: to keep you up-to-speed with the most important things happening in science and technology in the time it takes you to finish two morning coffees. Don’t doomscroll to keep up; just read the Dose.

Let’s get to it.


Today’s Weekly Dose is brought to you by… the Abundance Institute

My friends at the Abundance Institute are launching “Everyday Abundance,” a new podcast hosted by bestselling authors Virginia Postrel and Charles Mann, this spring. I had a fascinating conversation about tissue paper, sneezing, and germs with Virginia and Charles at the Progress Conference in October, and I’m pretty excited to listen to the show.

If you join Abundance’s Foundry now, you’ll get access to a salon Zoom with Virginia, early access to the podcast, and 3 months of not boring world free, on top of all the other benefits of supporting this amazing organization.

Check out the Foundry membership here: Join the Foundry


(1) Isomorphic Labs Drug Design Engine unlocks a new frontier beyond AlphaFold

Isomorphic Labs

AlphaFold won Demis Hassabis a Nobel Prize for predicting the structure of proteins, which felt like a technological miracle at the time, as captured in The Thinking Game.

This week, Hassabis’ Isomorphic Labs, the Google spinout he CEOs on Tuesdays while also running Google DeepMind, showed that they can now predict how to drug them in a technical report on IsoDDE, its AI drug design engine.

On the hardest protein-ligand structures (the ones most unlike anything in its training data, where AlphaFold 3 struggled), IsoDDE more than doubles AlphaFold 3’s accuracy. It outperforms AlphaFold 3 by 2.3x on antibody-antigen modeling and Boltz-2 by nearly 20x. And it predicts how strongly a drug will bind to its target better than FEP+, the gold-standard physics simulation that typically costs orders of magnitude more in compute time.

It’s quickly finding things that have taken researchers over a decade. Cereblon is a protein that researchers spent 15 years believing had one druggable pocket. A 2026 paper experimentally discovered a second, hidden one. IsoDDE found both from the amino acid sequence alone, with no hints about what ligand to look for.

The big question from here is whether and how IsoDDE and other computational breakthroughs translate into actual drugs. As of early 2026, no AI-discovered drug has received FDA approval. AI-designed compounds are progressing to clinical trials at roughly the same success rates as traditionally discovered ones. Biology remains brutally unpredictable once you move from a screen to a human body.

Isomorphic Labs itself has pushed back its clinical trial timeline, now targeting end of 2026 for its first AI-designed drugs to enter human trials. So we’re still in the “proof of concept” phase for the whole field.

But to date, drug discovery’s biggest bottleneck has been the staggering cost and time of search. It can take a decade and billions of dollars per drug. Last year, Hassabis told 60 Minutes: “We can maybe reduce that down from years to maybe months or maybe even weeks.”

IsoDDE compresses the search phase from months of lab work to minutes of computation. If it can reliably surface the right targets and the right molecules faster, even if clinical trial timelines stay the same, you’re running dramatically more shots on goal for the same cost, and taking shots in weirder, harder-to-find pockets that humans would never think to (or at least have the time and resources to) try.

IsoDDE and other tools like it turn the front end of drug discovery from a slow, artisanal hunt into a fast, systematic search. One more bottleneck down. They’ll flood the clinical pipeline with better, more novel drug candidates, which creates another one. We are going to need to do something to accelerate clinical trials and FDA approvals to handle the flood.

(2) Gemini 3 Deep Think Crushes Benchmarks, Does Materials Science and Math

Google DeepMind

Look, I’m a simple man. If you include a video of a Duke lab in the announcement of your new model that “mogs” state-of-the-art models on ARC-AGI-2 (a test designed to be incredibly hard for AI), assists in cutting-edge materials science research, and helps mathematicians solve Erdős problems, I’m going to include it in the Dose. Go Duke.

Deep Think is GDM’s specialized reasoning mode within Gemini 3, designed to spend minutes (or longer) chewing on a single problem, exploring solution paths, backtracking when they don’t work, and building up multi-step chains of reasoning before committing to an answer. Google calls it “System 2” thinking, borrowing the Kahneman framing: where standard Gemini is fast and intuitive, Deep Think is slow and deliberate.

That deliberate approach pays off on benchmarks. Deep Think hit 84.6% on ARC-AGI-2 (the frontier reasoning benchmark, verified by ARC Prize), where the next closest model scored 68.8%. It achieved a 3455 Elo on Codeforces: for context, that puts it in the top tier of competitive programmers on Earth; it would rank 8th in the world. It set a new standard of 48.4% on Humanity's Last Exam, a benchmark designed to be the hardest collection of problems across math, science, and engineering. And it earned gold medal-level results on the written portions of the 2025 International Physics and Chemistry Olympiads.

It’s always hard to know what the benchmarks mean, though. Every time a big lab drops a new model, they beat some benchmarks.

Which is why the video with Duke University's Wang Lab is cool. In it, a researcher uses Deep Think to optimize the fabrication of MoS₂ monolayer thin films, a class of semiconductor materials that's notoriously difficult to grow at precise scales. The researcher prompts Deep Think with synthesis parameters, the model reasons through an optimized growth recipe, and then the system pipes those parameters directly into lab automation software that controls the furnace, gas flows, and temperature profiles. Deep Think designed a recipe for growing thin films larger than 100 μm, a precise target that previous methods had struggled to hit. The era of self-driving labs is upon us.

Meanwhile, collaborating with experts on 18 open research problems, Deep Think helped break long-standing deadlocks across computer science, information theory, and economics. It cracked classic algorithmic challenges like Max-Cut and Steiner Tree by pulling in mathematical tools from entirely unrelated fields, the kind of cross-domain intuition leap that's supposed to be uniquely human but which is basically what I expect a thinking machine with access to all human knowledge to do. Every time a new model drops, I ask it to tell me connections that humans have missed given its view across disciplines, and normally, it’s pretty weak. I’m excited to give Deep Think the test.

In another case, it caught a subtle logical flaw in a proof that had survived human peer review. In research-level mathematics, it autonomously generated a paper on structure constants in arithmetic geometry and collaborated with humans to prove bounds on interacting particle systems. And when DeepMind ran it against 700 open problems from Bloom's Erdős Conjectures database, a collection of unsolved problems posed by Paul Erdős, one of the most prolific mathematicians in history, it autonomously solved several of them.

The coding stuff that gets twitter buzzing just doesn’t excite me that much. I didn’t buy a Mac Mini. The writing is still bad. But this stuff… helping humans solve hard problems and make new discoveries… this stuff I’m here for.

It’s a great time to be a researcher, and a bad time to be a problem.

(3) Introducing: Liberty Class

Blue Water

Speaking of problems that have seemed almost impossible for Americans to solve…

American shipbuilding numbers are almost comical. China’s shipbuilding capacity is 232 times greater than America’s. In 2024, Chinese yards built over 1,000 commercial vessels. The US built eight. China’s navy has over 370 battle force ships and is projected to hit 435 by 2030. The US Navy has 296 and is projected to shrink to 283 by 2027 as retirements outpace new construction. 37 of the 45 ships currently under construction face significant delays. America’s four public shipyards average 76 years old, with dry docks averaging over 107. As the Secretary of the Navy put it, one Chinese shipyard has more capacity than all American shipyards combined. You’ve seen the chart.

Good news. This week, Blue Water Autonomy unveiled the Liberty Class: a 190-foot autonomous steel ship with a range of over 10,000 nautical miles and 150+ metric tons of payload capacity. The name is a deliberate nod to the Liberty Ships of World War II, which were built rapidly and at scale to meet wartime demand. Blue Water is making a similar bet: take a proven hull design (Damen's Stan Patrol 6009, battle-tested in demanding conditions worldwide), re-engineer it from the inside out for autonomous operation, and start building at Conrad Shipyard in Louisiana next month. The first vessel is expected to be delivered to the US Navy later this year.

Blue Water developed Liberty entirely with private capital, which is unprecedented for a full-sized Navy ship, but standard in commercial markets. Working with over 100 suppliers, they went from founding in 2024 to construction start in 2026, and they're targeting serial production of 10-20 vessels per year. Conrad's five yards and 1,100-person workforce already produce 30+ ships annually, so the production capacity exists; now, it’s being put to more productive use.

It’s a good start, but we’re going to need like 1,000 of those eventually to catch up.

More good news on the autonomous boats front, then: Saronic was selected for DARPA's Pulling Guard program, which is developing semi-autonomous escort systems to protect logistics vessels at sea. Over 75% of global trade moves by water, and the Navy has historically protected those routes by deploying billion-dollar destroyers and carrier strike groups. Pulling Guard is exploring whether low-cost, modular autonomous platforms can provide distributed maritime protection, “protection as a service” that works in peacetime and conflict. Saronic, which has been building autonomous surface vessels and scaling manufacturing at speed, will design a modular, autonomy-enabled vessel under the program.

America's traditional shipbuilding apparatus is a cautionary tale in institutional sclerosis. But we love sclerosis here at not boring. Every sclerotic incumbent is an opportunity for a startup to build something better, faster, and cheaper. Ships ahoy.

(4) A small polymerase ribozyme that can synthesize itself and its complementary strand

Giannini, Kwok, Wan, Goeij, Clifton, Colizzi, Attwater, and Holliger in Science

Stanford Medical Assistant Professor Jason Sheltzer wrote a better lead-in than I could: “AI is cool and all... but a new paper in Science Magazine kind of figured out the origin of life?”

Here's the backstory. The leading theory for how life began is the “RNA World” hypothesis: before DNA, before proteins, before cells, RNA molecules on early Earth stored genetic information and catalyzed chemical reactions. At some point, one of these RNA molecules figured out how to copy itself, and from that moment, evolution (descent with modification) could begin. The rest, over 4 billion years, is history.

The problem is that scientists have never been able to demonstrate this convincingly in the lab. Previous RNA enzymes (called ribozymes) that could copy other RNA strands were huge, 165 to 189 nucleotides long, and far too complex to have plausibly popped into existence in a primordial soup. And crucially, none of them could copy themselves. They could copy other, simpler RNAs, but their own folded structures blocked self-replication. It was a fundamental paradox: a ribozyme needs to fold to work, but when folded, it can't be copied.

Researchers at the MRC Laboratory of Molecular Biology in Cambridge (the same lab where Watson and Crick figured out DNA's structure) appear to have cracked it. They discovered QT45: a 45-nucleotide ribozyme, less than a quarter the size of previous RNA polymerases, that can synthesize both its complementary strand and a copy of itself. It does this by stitching together three-letter RNA building blocks (trinucleotides) rather than adding one letter at a time. Those triplets bind strongly enough to unravel folded RNA structures, solving the self-replication paradox that has stumped the field for decades.

The "45" matters enormously. Previous self-replicating ribozyme candidates were so large and complex that their spontaneous emergence on early Earth seemed implausible, like lightning striking a junkyard and assembling a 747. At 45 nucleotides, QT45 is small enough that the researchers argue polymerase ribozymes may be far more abundant in random RNA sequence space than anyone thought, meaning self-replication might not have required an astronomically unlikely accident. It might have been, in a sense, easy.

The coolest part is that the triplet building blocks QT45 uses, three-letter RNA chunks, are the same triplet code that all life on Earth still uses today to make proteins, the ones whose structures AlphaFold predicted and that IsoDDE targets. The genetic code is like a still-operational fossil of the very first replication system.

We spend a lot of time in the Dose on people solving hard problems. This one is the hardest problem: how did something come from nothing? How did chemistry become biology? The answer, it turns out, might be astonishingly simple, just 45 letters long. Way shorter than anything I’ve written.

(5) Texas Parents Rush for School Choice

The Wall Street Journal Editorial Board

There was a viral slop essay on X this week, one I won’t link to but that you’ve probably seen, about how screwed humans are, including our kids, except for maybe those of us who pay for the good models and the analysts who ask AI to do research in one hour that would have taken three days. I, for one, think the kids are going to be alright, especially the ones who learn how to think instead of asking the machines to do it for them.

One thing is clear, though: we’re going to need to educate our kids in a way that’s different from the Prussian Model, which uncharitably optimized us to think like machines so that we would be good factory workers. We need to teach our kids to love learning, to ask questions, and to be curious. Basically, we need to teach our kids in a way that’s the opposite of the way most schools do it now.

That’s why I’ve been a big fan of school choice: states giving parents the money to choose better schools for their kids. School choice is not without its critics, who argue that it takes money away from public schools and hurts public school students. But public schools have had a monopoly on the education of the vast majority of kids who can’t afford private school, and the results have largely been what you’d expect from a state-protected monopoly. School choice encourages competition and can help direct funds to new schools rethinking education.

This week was a big one for school choice. Texas opened applications for its new Education Freedom Accounts on February 4th, and 42,000 families applied on day one, a nationwide record for any new school choice program, surpassing Tennessee's 33,000 first-day applications last year. By the next morning, the number had crossed 47,000. The latest reports are at 91,000. The application window runs through March 17th.

This was a long time coming. For more than 20 years, Texas's Republican-controlled House blocked school choice legislation, even as the Senate passed ESA bills session after session. The tide turned in 2024 when Governor Abbott campaigned for 16 House candidates who challenged the incumbents blocking his school choice bill. The new House Speaker, Dustin Burrows, pledged the bill would pass. It did, last April. Senate Bill 2 allocated $1 billion for the 2026-27 school year, with room to grow to $4.5 billion by 2030.

The program gives eligible families roughly $10,474 per student per year to use toward private school tuition, homeschooling costs, tutoring, career and technical education, and other approved educational expenses. Students with disabilities can receive up to $30,000. Eligibility is prioritized by economic need, not first-come-first-served, with disabled and low-income students at the top.

I’m personally excited about this one because the Certified Educational Assistance Organization running the day-to-day operations of the program (application portal, payment processing, e-commerce marketplace where families shop for approved educational services) is Odyssey, a not boring capital portfolio company. Odyssey already manages ESA programs in Iowa, Georgia, Louisiana, Utah, and Wyoming, but Texas is a different animal. This is the biggest state school choice program ever launched, and Odyssey is the infrastructure making it work, providing each family with a secure digital wallet, real-time balances, and access to a marketplace of vetted schools and providers. They’ve handled the biggest launch ever seamlessly.

The numbers show that parents want this. I’m excited to see how K-12 education evolves as parents get to choose where to allocate dollars to get the education they think is best for their kids.

EXTRA DOSE: Will Manidis, Anthropic, Simile, 3D printed boats, Zero

Read more

Weekly Dose of Optimism #179

2026-02-06 21:57:45

Hi friends 👋,

Happy Friday and welcome back to the 179th Weekly Dose of Optimism!

We started writing the Weekly Dose during the 2022 bear market because there was a disconnect between the incredible things we saw being built and the (largely market-driven) pessimism. So this week is great. We were born in the darkness.

Even as the markets have vomited, the innovation has continued apace. Zoom out.

We have another jam-packed week of optimism, including four Extra Doses below the fold for not boring world members.

Let’s get to it.


Today’s Weekly Dose is brought to you by… Guru

Your team is probably already using AI for everything: research, customer support, product decisions. Just one problem… AI is confidently wrong about your company knowledge 40% of the time.

While everyone races to deploy more AI tools, they’re building on a foundation of outdated wikis, scattered documents, and tribal knowledge that was never meant to power automated decisions.

Guru solved this for companies like Spotify and Brex. They built the only AI verification system that automatically validates company knowledge before your AI agents use it. Think of it as quality control for your AI’s brain.

The companies that figure this out first will have AI that actually works. The ones that don’t waste valuable human time cleaning up expensive mistakes.

Try Guru Today


(1) Introducing Claude Opus 4.6 and Introducing GPT-5.3-Codex

Anthropic and OpenAI, respectively

The race between Anthropic and OpenAI to build the smartest, most useful thinking machines is heating up, and it’s riveting. The day after Anthropic released its Super Bowl commercials, which make fun of OpenAI for planning to introduce ads into its product (which many people, including Jordi Hays, think are a bit deceptive, but which are super entertaining)…

… both companies dropped their newest, smartest models. Anthropic released Opus 4.6 and OpenAI released GPT-5.3-Codex (Codex is its coding model/app).

Anthropic’s Opus 4.6 is for everyone: better at coding, plans longer, runs financial analyses, does research, etc… I’ve been playing with it and it’s definitely smarter (although thankfully it’s still a shitty writer).

OpenAI’s very-OpenAI-named GPT-5.3-Codex is for coding. It slots right into the Codex app they released this week. I had 5.2 build a website for not boring, and it was very cool that it could build it, but no matter how hard I prompted, the design was trash. I told 5.3 to throw out that trash and make me something that looked better, and it actually did a decent job in one shot. It can also do things like make models and presentations and docs, although it’s not available in Chat yet.

In both cases, researchers at the labs used their own agents to help research and build the new models. “Taken together,” OpenAI writes, “we found that these new capabilities resulted in powerful acceleration of our research, engineering, and product teams.” This is the mechanism that fast takeoff believers believe in: models so smart that they make the next models smarter, and so on.

I don’t know what to say other than have fun playing with your new geniuses this weekend.

(2) As Rocks May Think

Eric Jang

Whenever logical processes of thought are employed — that is, whenever thought for a time runs along an acceptive groove — there is an opportunity for the machine.

— Dr. Vannevar Bush, As We May Think, 1945

How’d we get here?

Eric Jang is VP of AI at 1X Technologies, the humanoid robotics company, and before that spent six years at Google Brain robotics where he co-led the team behind SayCan. He’s one of the people building the robots we covered in my robotics cossay with Evan Beard a few weeks ago.

His new essay, As Rocks May Think, is a riff on Vannevar Bush’s 1945 classic, As We May Think, and the title is the thesis: we taught rocks to think, and they’re getting really smart.

The piece is part technical history, part practical manual, and it is pretty technical, but it’s the most concise overview of how we got to where we are today and where we might be going from here that I’ve come across. Jang walks through the intellectual lineage of machine reasoning, from symbolic logic systems that collapsed when a single premise was wrong, through Bayesian belief nets that got tripped up in compounding uncertainty, to AlphaGo’s breakthrough combination of deductive search and learned intuition, and finally to today’s reasoning models, like Opus 4.6 and GPT 5.3.

For the practical manual piece, Jang walks through building his own AlphaGo and how he uses AI today: “Instead of leaving training jobs running overnight before I go to bed, I now leave "research jobs" with a Claude session working on something in the background. I wake up and read the experimental reports, write down a remark or two, and then ask for 5 new parallel investigations.”

He suspects we’ll all have access to today’s researcher-level of compute soon, and that when we do, we are going to need a shit-ton of compute. He compares thinking machines to air conditioning, a technology that Lee Kuan Yew credited with changing the nature of civilization by making the tropics productive. Air conditioning currently consumes 10% of global electricity. Data centers consume less than 1%. If automated thinking creates even a fraction of the productivity gains that climate control did, the demand for inference compute is going to be enormous.

Maybe that’s why Google anticipates $185 billion in 2026 CapEx spend and Amazon anticipates an even more whopping $200 billion, which sent its stock tumbling after hours.

The sell-off is ugly, but if Jang is right, all of that buildout and much more is going to be put to use. I asked my thinking rock (Claude Opus 4.6) what it thinks about the selloff. It told me: “if the bottleneck is inference compute, build the data centers. Vertical integration, baby.”

(3) Drone Controlled by Cultured Mouse Brain Cells Enters Anduril AI Grand Prix

Palmer Luckey

Don’t count thinking cells out yet, though!

Anduril’s AI Grand Prix, a drone racing competition, has strict rules: identical drones, no hardware mods, AI software flies. Over 1,000 teams signed up in the first 24 hours to compete for $500,000 and a job at Anduril.

Then one team showed up planning to use a biological computer built from cultured mouse brain cells to fly their drone.

Mouse brain cells. Australian company Cortical Labs commercially launched the CL1 last year: a $35,000 device that fuses lab-grown neurons with silicon chips. The neurons are grown on electrode arrays, kept alive in a life-support housing, and learn tasks through electrical stimulation. In 2022, the team placed 800,000 human and mouse brain cells on a chip and taught the network to play Pong in five minutes. The neurons run on a few watts and learn from far less data than conventional AI.

So: is a mouse brain “software”? Who cares.

“At first look, this seems against the spirit of the software-only rules. On second thought, hell yeah.”

(4) Waymo Raises $16 Billion, Now Does 400,000 Rides a Week

Waymo

Speaking of autonomous vehicles… Alphabet’s self-driving car company has way mo’ money at its disposal to save lives.

Nearly 40,000 Americans died in traffic crashes last year. The leading causes, things like distraction, impairment, fatigue, are all fundamentally human problems. Waymo doesn’t have those problems. It’s safer than human drivers, and the faster we get more of them (and other self-driving cars) on the road, the better.

Luckily, the company just raised $16 billion, which is basically a seed round in AI and is like 10% of what any serious hyperscaler is planning to spend on CapEx this year, but which will mean a lot more self-driving cars on the road. The round values Waymo at $126 billion and brings total funding to ~$27 billion. The investor list suggests that if they keep doing their job, there’s plenty more where that came from: Sequoia, a16z, DST Global, Dragoneer, Silver Lake, Tiger Global, Fidelity, T. Rowe Price, Kleiner Perkins, and Temasek, alongside majority investor Alphabet. This is the largest private investment ever in an autonomous vehicle company.

We’re talking a lot about fast takeoffs this week, and Waymo is a case study in gradually, then suddenly.

Waymo started in 2009 as a secret Google project, with a handful of engineers modifying a Toyota Prius to drive itself on the Golden Gate Bridge. For years, the punchline was that self-driving cars were always five years away. Google spent $1.1 billion between 2009 and 2015 and had essentially nothing to show for it. The pessimists were winning. The five years away joke kept landing.

And then it started working. 127 million fully autonomous miles driven. A 90% reduction in serious injury crashes versus human drivers. 15 million rides in 2025 alone (3x 2024). Over 400,000 rides per week across six US metro areas.

They’re in Phoenix, San Francisco, LA, Austin, Atlanta, and Miami. If you’ve ridden one in any of those cities, the thing that strikes you is how fast it goes from feeling sci-fi to feeling normal. Now, they’re planning to launch in 20+ additional cities in 2026, including Tokyo and London. Saving lives around the globe.

My kids are never going to get their drivers’ licenses, are they?

(5) Contrary Tech Trends Report

Contrary Capital

My friends at Contrary just dropped their annual Tech Trends Report, full of charts, data, and insights across a wide range of technological frontiers. It’s one of the most optimistic documents I’ve read in a while.

A few things jumped out. AI tools are reaching adoption speeds that make the internet’s growth curve look leisurely. OpenEvidence, an AI tool for doctors, hit 300,000 active prescribers in 11 months, a milestone that took Doximity, the previous standard-bearer, 11 years. ChatGPT is at 800 million weekly active users with retention rates approaching Google Search. And coding AI tools like GitHub Copilot, Cursor, and Claude Code are each approaching or at $1 billion in ARR. AI companies are reaching revenue milestones 37% faster than traditional SaaS companies did.

On energy, the numbers are staggering. Welcome to the ELECTRONAISSANCE. Total US electricity generation is projected to grow 35-50% by 2040, driven by data centers, EVs, and manufacturing. The country is investing $1.3 trillion in AI-related capital expenditure alone by 2027, and $3-5 trillion in global data center spending by 2030. Meanwhile, wind and solar are the fastest-growing energy sources globally, and US fab capacity is projected to grow 203% from 2022 to 2032, more than double the global average. America is building again.

And then there’s the frontier stuff. Lonestar Data Holdings sent a data storage unit to the moon in 2025. The report lays out how lunar bases could unlock helium-3 for clean fusion energy (which For All Mankind predicted), rare earth metals for EVs and batteries, and platinum group metals for hydrogen fuel cells. Artemis II, a crewed lunar flyby, is scheduled for April 2026. The US Space Force wants a 100kW nuclear reactor on the moon by decade’s end. Microsoft sank a data center underwater and saw 8x fewer hardware failures. 90% of US factories still operate without robots, which means we have a lot of productivity gains ahead.

There are challenges too, of course: aging grid infrastructure, water stress around data centers, the fact that 60% of CEOs say AI projects haven’t delivered positive ROI yet. But the overwhelming takeaway is that the buildout is happening, the adoption curves are real, and the scale of investment is unlike anything we’ve seen.

We are living in a sci-fi novel. What a time to be alive.

EXTRA DOSE (for not boring world subscribers) BELOW THE FOLD

Skyryse, Machina Labs, OpenAI x Gingko, General Matter x Mario

Read more