Not Boring

by Packy McCormick, Tech strategy and analysis, but not boring.

Power in the Age of Intelligence

2026-02-18 21:33:05

Welcome to the 737 newly Not Boring people who have joined us since our last essay! Join 258,985 smart, curious folks by subscribing here:

Subscribe now


Hi friends 👋,

Happy Wednesday!

Émile Borel once said that given enough time, a bunch of monkeys banging on typewriters would come up with Shakespeare. And yet, despite the innumerable X Articles on software moats in the face of AI, I haven’t read a single one that’s satisfying.

I think it’s because thinking about software moats, how a software company might protect itself from abundant code, is the wrong frame altogether, and that the more interesting and relevant frame is which companies, SaaS, hardware, or otherwise, stand to benefit the most from newly abundant inputs.

Those companies, not vibe coders, are the ones that point solutions should be worried about. They will win enormous market shares and fortunes. They will come to dominate large industries by using new technology to compete, capturing the High Ground, and expanding further outward than companies with lesser technological tools could have ever dreamed of. They will be the Standard Oils of this era.

This essay is about those companies.

Let’s get to it.


Today’s Not Boring is brought to you by… Silicon Valley Bank

In 2025, crypto returned to the financial mainstream. It is, as they say, so back.

What’s ahead for 2026? I’m glad you asked. Silicon Valley Bank is out with their annual crypto outlook, featuring proprietary insights and data from 500+ crypto clients. We love a bank that banks crypto.

Silicon Valley Bank makes five predictions for the year ahead, including:

  1. Institutional capital goes vertical with increased VC investment and corporate adoption.

  2. M&A posts another banner year after the highest-ever deal count in 2025.

  3. Real-World Asset (RWA) tokenization goes mainstream on prediction market strength.

Last year, Silicon Valley Bank predicted that stablecoins would be the big breakout use case. That was correct. They think that will continue this year, too. Read their take on what comes next, free:

Get SVB's 2026 Crypto Predictions Free


Power in the Age of Intelligence

One of the more head-scratching anomalies in the market is the valuation gap between Stripe and Adyen. The two payments companies handle similar amounts of Total Payment Volume. Stripe is growing faster. Adyen reports exceptional margins and cash conversion. Stripe is reportedly doing a tender offer at a $140 billion valuation in the private markets. Adyen is valued at $34 billion in the public markets. There are a number of theories for why this is the case, most of which boil down to: VCs are idiots, as they’ll find out if Stripe ever goes public.

Chart from Claude based on market data

Ramp versus Brex is another example of the same idea. Ramp was most recently valued at $32 billion in the private markets. Brex, which had been valued at $12 billion in the private markets, sold to Capital One for $5.15 billion. Ramp is doing more revenue and growing faster, but not 6x more revenue or 6x faster. Once again, the actual market disagrees with the VCs.

Or does it?

I have a different theory, one that neatly fits those two cases, the SaaSpocalypse, SpaceX’s $1.25 trillion valuation, and even the evolving structure of venture capital itself: winner takes more.

The history of business is basically the history of increasing concentration of value, accelerated in spurts by technological change. Centuries ago, firms operated within cart-hauling distance of their customers, creating a system of local monopolies. A brewer in 1800 served a single town, if that. Canning, railroads, telegraphs, mass production, electrification, containerization, planes, and the internet, among other technologies, expanded companies’ available market, and winners captured a greater and greater share of value.

Economic data backs this up.

In a 2020 paper, Jan De Loecker and Jan Eeckhout found that aggregate markups rose from 21% above marginal cost to 61% between 1980 and the late 2010s, and this increase was driven almost entirely by the upper tail of the distribution; median firm markups barely changed, while the 90th-percentile markups surged.

De Loecker and Eeckhout, The Rise of Market Power and the Macroeconomic Implications

In The Fall of the Labor Share and the Rise of Superstar Firms, Autor et al. find that industry sales concentration trends up over time across measures, and it rises more in sales than in employment, what Brynjolfsson et al. call “scale without mass.” In their account, this shift reflects reallocation toward superstar firms with high markups and profits and low labor shares.

A 2023 paper by Spencer Y. Kwon, Yueran Ma, and Kaspar Zimmermann at UChicago, 100 Years of Rising Corporate Concentration, uses IRS Statistics of Income to show that the top 1% of U.S. corporations by assets accounted for about 72% of total corporate assets in the 1930s and about 97% in the 2010s. The top 0.1% of U.S. corporations by assets increased its share of total corporate assets from 47% to 88% over the same period. Power laws within power laws.

Kwon, Ma, and Zimmermann, 100 Years of Rising Corporate Concentration

Today, the Magnificent 7 accounts for one-third of the market cap of the S&P 500, and Apollo showed that those seven companies are driving the vast majority of equity returns.

Apollo, Mag 7 vs. Everyone Else

Venture capital is responding to this information as you would expect. Per Pitchbook, as of August last year, 41% of all VC dollars deployed in the US in 2025 went to just ten companies. Per Axios, more recent Pitchbook data shows that “The estimated aggregate valuation of unicorns hasn’t actually changed too much — $4.4 trillion vs. $4.7 trillion at the end of 2025 — because the top 10 companies account for around 52% of value (up from only 18.5% in 2022 and the highest such figure in a decade).”

Venture capital firms themselves are concentrating. As I wrote in a16z: The Power Brokers, “a16z accounted for over 18% of all US VC funds raised in 2025.” Just yesterday, Thrive announced that it raised $10 billion: $1 billion for early-stage investments, and $9 billion for late-stage investments, which is the kind of split you put in place if you believe your winners are going to keep winning.

The theory for investing in very large funds like a16z, Founders Fund, Thrive, General Catalyst, and Greenoaks is that they are best positioned to win large allocations in the handful of companies that matter, and that those companies will capture most of the value in this vintage. Notably, all five of the firms I just mentioned are investors in Stripe, and three of the five (Founders Fund, Thrive, and General Catalyst) are investors in Ramp.

All of that data suggests increasing concentration. Through this lens, the SaaSpocalypse (the violent sell-off in software stocks) is less about software writ large dying, and more about point solution software finally facing economic gravity. Point solutions are no longer getting a free pass simply for having a good business model.

The past few decades of software exceptionalism have been an exception based on a business model so sweet and capabilities so universally useful that the rules of strategy, while not wholly unimportant, were less consequential than normal. A venture capitalist could look at a standard set of SaaS metrics (ARR, growth rate, net retention rate, gross margin, LTV:CAC, Rule of 40, etc…) and underwrite whatever new business they encountered against them. This is why you hear things like “Here’s how much ARR you need to raise a Series A.” The companies are basically all the different flavors of the same thing.
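To see just how mechanical that underwriting is, here is a minimal sketch of two of the standard checks. The formulas are the widely used textbook versions; the numbers and function names are hypothetical, not any firm's actual model:

```python
def rule_of_40(revenue_growth_pct: float, fcf_margin_pct: float) -> bool:
    """Rule of 40: revenue growth rate plus free-cash-flow margin should sum to 40+."""
    return revenue_growth_pct + fcf_margin_pct >= 40

def ltv_to_cac(monthly_revenue_per_customer: float, gross_margin: float,
               monthly_churn: float, cac: float) -> float:
    """Classic LTV:CAC ratio; a common benchmark is 3x or better."""
    lifetime_value = (monthly_revenue_per_customer * gross_margin) / monthly_churn
    return lifetime_value / cac

# A hypothetical SaaS company: 60% growth at a -10% FCF margin, $500/mo per
# customer at 80% gross margin, 2% monthly churn, $15,000 cost to acquire.
print(rule_of_40(60, -10))                           # 60 - 10 = 50 >= 40, passes
print(round(ltv_to_cac(500, 0.8, 0.02, 15_000), 2))  # $20,000 LTV / $15,000 CAC
```

The point is that these checks are industry-agnostic, which is exactly why they worked for two decades and exactly why they tell you nothing about who wins the High Ground.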

There are, of course, idiosyncrasies between selling software to professional services firms and selling software to energy companies, for example, but the basic model is the same. Invest upfront to hire engineers, write software to make someone’s job easier, and sell the software to as many customers as possible at high margins. Different industries may need software to do different things, have different buyers, be larger or smaller, be more or less willing to pay, and be more or less expensive to acquire. A bigger, less crowded market with a strong need and a high willingness to pay is better than the opposite. But figuring that out is relatively straightforward. Since software operates at the edge of value in most industries, and does not attempt to strike at its core or compete with it, it doesn’t require thorough competitive analysis.

Since the SaaSpocalypse, people have gotten AI to write tens of thousands of words on which types of software companies have moats given AI and posted the resulting essays to X. They’ve gotten a little more specific than “SaaS is good.” Data is a moat, or a particular type of data at least. Or it isn’t, maybe, because The Agent Will Eat Your System of Record. Certainly, dealing with regulatory hair earns you a moat. No?

Most of the takes I’ve seen miss what matters.

On paper, Stripe and Adyen have basically the same moats, as do Ramp and Brex. I love a good hardware moat more than the next guy, as I’ve been writing since The Good Thing About Hard Things in July 2022, before ChatGPT or Claude Code but when it was clear that good software alone would offer no moat. I was too unspecific in that piece, too. Some hardware businesses will get very large, and others will fail. Hardware itself isn’t a moat. Good luck making LFP cells in America.

No, what matters is becoming the leader in your industry in a way that is incredibly specific to that industry and in such a way that your business benefits from, instead of being threatened by, abundant improvements in general purpose technologies like AI and batteries.

What matters now is the same stuff that has always mattered but that software forgave for a while: own the scarce, defensible asset in an industry and use it as the High Ground from which to dominate. Ricardo said this.

If you’re a startup, and you don’t already own the scarce asset, then you need to identify the constraint holding the industry back, focus everything on breaking it, and expand from there.

History’s most influential military strategist, Carl von Clausewitz, said this. He called it Schwerpunkt, the center of gravity. “The first task, then, in planning for a war is to identify the enemy’s center of gravity, and if possible trace it back to a single one,” he wrote in On War. “The second task is to ensure that the forces to be used against that point are concentrated for a main offensive.”

For our purposes, the Schwerpunkt is the constraint you attack. The High Ground is the scarce and valuable position you win by breaking it. Moats are what keep others from taking it.

Even while forces must be concentrated against the Schwerpunkt, our attackers must plan for victory before it is won. The company that breaks the constraint needs to build the complementary assets (the distribution, the manufacturing, the customer relationships) to capture the value from its own innovation. Otherwise, its competitors will.

David Teece argued this in 1986, in Profiting from Technological Innovation. The paper, he wrote, “Demonstrates that when imitation is easy, markets don’t work well, and the profits from innovation may accrue to the owners of certain complementary assets, rather than to the developers of the intellectual property. This speaks to the need, in certain cases, for the innovating firm to establish a prior position in these complementary assets.” Which is the point I am making: innovation alone, software or hardware, isn’t enough.

Figuring out which companies might capture the Schwerpunkt and use it as a High Ground from which to expand is an entirely different kind of underwriting, impossible to do in a spreadsheet alone, even with Claude in Excel.

Companies that don’t own the High Ground face existential risk from technological progress. If you’re just selling point solution software, then software abundance is a threat. Hardware isn’t necessarily the moat people think it is, either, even if it’s less susceptible to AI, because AI isn’t the only technology improving rapidly. “Hardware is a moat” is the same kind of lazy thinking that “SaaS is the greatest business of all time” is. If you’re selling better mouse traps, you’re at risk every time someone builds a slightly better mouse trap.

Similarly, incumbents that currently own the High Ground but can’t wield modern technology face existential risk from those who can. This is why there is such a large opportunity for startups today. New technologies mean old constraints are finally attackable, and it’s likely to be newer companies doing the attacking.

Companies that do own the High Ground, on the other hand, and are tech-native, benefit from technological progress, just as land owners captured the gains from more farming labor and better farming tools, but more pronounced, because these modern landowners will corner the best talent, the most capital, and the richest veins of distribution. A glib way to put it is that a Ramp engineer with an AI will build something better than a CFO with an AI, no matter how good the AI gets. It’s not the vibe coders you should be worried about.

Newly abundant resources can have opposite effects on your business depending on your position, and it is likely to be the company with the High Ground wielding those resources that dooms the companies in weaker positions. Ask Slack how it felt to compete with Microsoft Teams; companies like Microsoft can now build a lot more “Teams.”

The game on the field is all about understanding who can own the High Ground in a given industry.

The moats are the same as they’ve always been. Study 7 Powers. When you’re starting out, you need to understand what your moats might be, but in order for moats to matter, you need to have something worth protecting. You need to own the High Ground.

If we really are living through the most consequential technology revolution in history, why are you spending so much time hand-wringing about protecting small, old castles when you could be thinking about how to build history’s most magnificent businesses?

The abundant inputs keep getting cheaper. The scarce asset keeps getting more valuable. The companies that own the latter and leverage the former will become larger than ever before.

These businesses themselves are scarce assets, valued on their strategic importance and industry size, because the opportunity is no longer to sell software into industries in order to marginally improve them, but to win those industries and capture their economics.

If your aim is to build or invest in these companies, old heuristics will do you no good. You need a brain of your own, some sweat on your brow, and some good ol’ fashioned strategy frameworks to help you reason about the opportunity at hand.

This essay is a guide to thinking through where power might concentrate, for those willing to think. If winner takes more, it’s about what it takes to build, or invest in, the companies that have a shot at winning large industries. And it’s about how to position yourself to gain strength from technological progress instead of running from it while throwing weak “moats” in your wake.

It is about Power in the Age of Intelligence.

A Tale of Two Industrial Revolutions

Or, it’s about Power in any age of rapid technological change, really.

While the advances we are experiencing today feel unprecedented, my thesis has been that we are going through a modern version of the Industrial Revolution.

Then, machines did what only human muscles could previously. Now, machines are doing what only human brains could previously, in new bodies built to house those brains. This is The Techno-Industrial Revolution.

So it is useful to study Rockefeller, Carnegie, Swift, and Ford. None created an industry from scratch. They all fit this pattern: identify the Schwerpunkt in an existing industry, break it, seize High Ground, integrate outward, dominate.

Standard Oil

When John D. Rockefeller met the oil industry, it was young, valuable, and incredibly volatile. Oil itself was abundant. There were a lot of refineries – roughly 30 in his hometown of Cleveland alone when he got to work – but their quality was inconsistent, and their processes were inefficient. Per Austin Vernon, “Refining methods were so inefficient in the mid-1860s that a barrel of oil (42 gallons) sold for almost the same price as a gallon of refined kerosene. Today, the price ratio of refined products to crude oil is ~1.25x instead of 42x.”

There was one big constraint to the profitable growth of the oil industry – the volatility – which could only be broken through scale and control. To get to scale and control, Rockefeller needed to drive down costs to capture the market. Refining was the place to get scale, given its inefficiency and the fact that, per Vernon, “A typical rule of thumb in chemical engineering is that capital costs increase sublinearly with capacity, usually by (capacity ratio)^0.6. A plant with double the output is only 50% more expensive to build, and operating costs tend to follow similar trends.”
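Vernon's rule of thumb is easy to check numerically. A quick sketch (the 0.6 exponent is his cited rule of thumb for chemical plants, not a physical law):

```python
def plant_capital_cost(base_cost: float, capacity_ratio: float,
                       exponent: float = 0.6) -> float:
    """Capital cost scales sublinearly: cost ~ base_cost * (capacity ratio)^0.6."""
    return base_cost * capacity_ratio ** exponent

# Doubling output costs only ~52% more to build...
print(round(2 ** 0.6, 2))    # 1.52
# ...so a plant with 20x the output costs only ~6x as much...
print(round(20 ** 0.6, 2))   # 6.03
# ...which means per-unit capital cost falls to ~30% of the original.
print(round(20 ** -0.4, 2))  # 0.3
```

Capacity scaling alone gets you roughly a 70% per-unit capital cost reduction at 20x output; Vernon's "more than 85%" figure for Standard Oil presumably layers operating and yield improvements on top.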

Standard Oil Refinery, 1889

So in partnership with chemist Samuel Andrews, one of the first people to distill kerosene from oil, Rockefeller continued to improve the kerosene yield. At the same time, he aggressively grew revenue and lowered costs by eating the whole cow, so to speak. He sold the non-kerosene byproducts others threw out (paraffin wax, naphtha, and gasoline) and used some of the fuel oil to power his own plants. He also integrated into barrels (by buying an oak tree forest and a barrel-making shop).

As it lowered costs and delivered a more consistent product, Rockefeller’s refinery (the predecessor to Standard Oil) grew, and as it grew, it lowered costs. Vernon again:

Standard Oil and its predecessor firms increased production ~20x between 1865 and the end of 1872, meaning their costs could have fallen more than 85%. At that point, they were the largest refiner in the world with a double-digit share of capacity, and it was their game to lose. If we understand this short period, then we know how the company eventually won.

The company that would become Standard Oil won the High Ground by breaking the constraint. Then it integrated outward, horizontally and vertically.

By 1870, Standard Oil was a joint stock company capitalized with $1 million that owned 10% of the oil trade in the United States. Rockefeller got busy acquiring struggling refineries or putting them out of business, increasing scale and efficiency in the process. Rockefeller also did favorable deals with the railroads, which Vernon argues actually had less to do with Standard Oil’s success than did its growing scale and efficiency. He kept growing, acquiring refiners in Pennsylvania, New York, and New England. He vertically integrated into pipelines (which replaced railroads), into distribution, into retail (ExxonMobil and Chevron are Standard successors), and into production itself.

By the late 1880s, Standard Oil controlled 90% of American refining, a share it held until it was broken up in 1911, when its $1.1 billion market cap represented 6.6% of the entire US stock market. To hear Vernon explain it, the outcome was a fait accompli by the time he’d attacked the Schwerpunkt and gained the High Ground in the 1860s.

Carnegie Steel

Andrew Carnegie’s story is so similar it’s almost suspicious. Like Rockefeller, Carnegie didn’t invent his product (steel); Bessemer did. Like Rockefeller, Carnegie realized the constraint to steel’s growth was inefficiency and inconsistency. Like Rockefeller, Carnegie hired a chemist (in his case, to measure what was happening inside the furnaces) and obsessed over cost, which he knew was the only thing he could control:

Show me your cost sheets. It is more interesting to know how well and how cheaply you have done this thing than how much money you have made, because the one is a temporary result, due possibly to special conditions of trade, but the other means a permanency that will go on with the works as long as they last.

His chemical knowledge allowed Carnegie to run his furnaces hotter and longer than anyone else, producing more steel at lower cost. His cost obsession lowered costs further. Low cost, high quality steel was the High Ground.

Carnegie Steel Mill, HBR

And from there, he integrated outward. Backward into coke (Frick) and iron ore (the Mesabi Range), into railroads to transport raw materials, and forward into finished products. He certainly didn’t sell his services and know-how to incumbents; he used them to destroy competitors on price until US Steel bought him out for $480 million in 1901 (roughly $18 billion today) to create the first billion-dollar corporation in history.

“Congratulations, Mr. Carnegie,” JP Morgan told him upon closing, “you are now the richest man in the world.”

Swift Meats

Gustavus Swift, like Rockefeller, would also “eat the whole cow.” He just did it later in his arc, and more literally.

The constraint was this: only about 60% of a live animal’s mass is edible, and meat goes bad. Which meant that, prior to the 1870s, the meat industry shipped 1,000-pound live cattle by rail from wherever they were raised to wherever they were going to be eaten. Shippers paid by the pound (an extra 40% for the inedible mass), had to feed the animals to keep them alive and healthy, and lost some to death in transit anyway.

So Swift, building on early experiments by Detroiter George Hammond, hired an engineer to design him a refrigerated railcar. Then, he could slaughter the beef in Chicago and ship the cuts to their final table much more efficiently. Railroads, not wanting to lose their livestock-shipping cash cow, refused to pull his cars, so Swift leased his own and partnered with smaller lines to move them. Then, he built icing stations along the routes, and replenished them with ice he contracted for directly with ice harvesters in Wisconsin and other cold midwestern states. By necessity, he built the whole cold chain from scratch.

This combination of centralized slaughter in Chicago and cold chain to the coast was his High Ground. He was forced into vertical integration because none of the pieces made sense on their own, but once he had it, he used it to drive down costs.

Like Rockefeller, Swift was appalled by waste, and because he controlled his own slaughterhouses, he could do something about it: he turned cow byproducts into soap, glue, fertilizer, sundries, even medical products, which allowed him to increase revenue and lower prices. He also maximized his refrigerated cars by stacking butter, eggs, and cheese beneath the swinging carcasses of dressed beef heading East.

By 1884, after only six years in operation as a slaughterer, Swift had become the second largest meatpacking firm in the US. By 1900, the meatpacking industry, unconstrained, had grown to become the second largest industry in the country, behind only iron and steel.

Ford

It was a visit to a Chicago slaughterhouse that inspired Henry Ford’s assembly line. “Along about April 1, 1913, we first tried the experiment of an assembly line,” Ford writes in My Life and Work. “We tried it on assembling the flywheel magneto. I believe that this was the first moving line ever installed. The idea came in a general way from the overhead trolley that the Chicago packers use in dressing beef.”

Ford didn’t invent the automobile. By 1908, there were hundreds of American car companies selling expensive, hand-built machines to the wealthy. A typical car cost $2,000-$3,000, or roughly two and a half years’ wages for an average worker. Manufacturing cost was the constraint to the nascent automobile industry’s growth, so manufacturing cost was Ford’s Schwerpunkt.

Ford broke the constraint with the moving assembly line. Before it, a single worker assembled a complete flywheel magneto in about 20 minutes. Ford split the work across 29 operations, cutting the time to 13 minutes. Then he raised the line eight inches and cut it to seven minutes. Then he adjusted the speed of the line and cut it to five. The same progression played out across the whole car: total assembly time fell from over 12 hours per chassis to 93 minutes.

That manufacturing capability was the High Ground. The Model T launched in 1908 at $850, already half the price of the competition. As the assembly line improved, Ford kept cutting: $550 by 1913, $360 by 1916, below $300 by 1924. What had been 18 months’ wages for an average worker became four months’.

From that High Ground, Ford integrated ferociously. Rubber plantations in Brazil. Iron mines and timberland in Michigan. A glass plant, a railroad, a steel mill, even soybean farms for plastic components. All of it flowed into the Rouge River complex, where raw materials entered one end and finished cars rolled out the other.

The result was that Ford’s sales went from 12,000 in 1909 to half a million in 1916 to over two million in 1923. At its peak, more than half the cars in the world were Fords.

Across the Industrial Revolution’s most successful entrepreneurs, there was a clear pattern that looks almost nothing like how you’d think about scaling a SaaS business: identify the Schwerpunkt in an existing industry, break it, seize High Ground, integrate outward, dominate.

Pause for a second. Think about how people are telling you to analyze businesses today. Would those AI-generated moat lists, or the equivalent for their time, have given you any advantage whatsoever in identifying Rockefeller, Carnegie, Swift, or Ford, let alone becoming one of them? It is never that easy, and it always takes work.

I want you to feel those examples, because what’s old is new again. The biggest companies in the world today are executing against the same framework, in ways that are specific to their industry.

SpaceX Goes Vertical

The funny thing about today’s biggest software companies is just how much they spend on hardware. This year, the world’s four largest companies that started as software companies plan to spend an estimated $600-700 billion on data center buildouts, equivalent to roughly 2% of US GDP, a level of infrastructure buildout comparable to laying America’s railroads in the 1850s.

Amazon, an online bookseller, will spend $200 billion. Google, the search engine giant, will spend $175-185 billion. Meta, the social network for college students, will spend $115-135 billion. And Microsoft, which makes operating systems and office applications, will spend $100-150 billion.

Except, of course, that’s not what those businesses are. They are technology conglomerates that used the early internet to break the Schwerpunkt in their respective industries, gain their respective High Grounds, and integrate outward so far that they’re all running into each other at this new frontier. And despite their best efforts and hundreds of billions of dollars spent on terrestrial data centers, Elon Musk still thinks we’re going to need to put them in space.

SpaceX

Before SpaceX, the constraint in the space industry was cost to orbit. SpaceX broke the constraint with reusable rockets, drove costs down an order of magnitude, and quite literally gained the High Ground. From there, it integrated outward into Starlink communications satellites, which it can launch more cheaply than competitors because it owns the rockets and which fund the development of even bigger Starship rockets, which bring the cost per kg to launch things into orbit down even further. SpaceX used vertical integration the same way Rockefeller did: it is simultaneously its own largest customer and its own cheapest supplier. Casey Handmer’s The SpaceX Starship is a very big deal is an excellent read on the topic.

In 2023, Elon Musk founded xAI to build maximally truth-seeking AI. He then merged it with X (née Twitter). xAI got a late start, and it doesn’t have the best models yet, but what it is best in the world at is building data centers very fast. So the world took note when Elon said that we’d never be able to build enough data centers on earth to meet demand for AI, and that we will need to start building them in space.

So on February 2nd, 2026, SpaceX announced that “SpaceX has acquired xAI to form the most ambitious, vertically-integrated innovation engine on (and off) Earth, with AI, rockets, space-based internet, direct-to-mobile device communications and the world’s foremost real-time information and free speech platform.” According to Musk, SpaceX will get the data centers it needs in space via ~10,000 Starship launches per year, or roughly one per hour, every hour. Simultaneously, it will also build a self-growing Moon city, from which it plans to build a mass driver in order to manufacture a terawatt per year or more of AI satellites, far more energy than Rockefeller could have conceived of, en route to eventually colonizing Mars and fulfilling SpaceX’s mission to “extend consciousness and life as we know it to the stars.”

It remains to be seen whether the High Ground will also give SpaceX a decisive advantage in the AI race, but it certainly demonstrates that the stakes have grown since the Industrial Revolution, even as the strategy has remained the same.

But no matter how that plays out, SpaceX (and Google, Microsoft, Amazon, Meta, Apple, Tesla, NVIDIA, OpenAI, and Anthropic) aren’t going to eat everything, or else I wouldn’t be investing in startups.

The Hunt for the High Ground

Boulton and Watt did not capture the entire value of the Industrial Revolution they steam powered, although they did vertically integrate, Boulton into the Soho Manufactory, the steam engine-based Gigafactory of its day, and Watt, via his son, into steamships. Nor did Rockefeller eat everything in the internal combustion era of the Revolution despite owning the oil on which it all ran.

In addition to Rockefeller (oil), Boulton and Watt (steam engines), Carnegie (steel), Swift (meatpacking), and Ford (automobiles), the Industrial Revolution gushed multi-generational wealth for the Vanderbilts (railroads), Morgans (finance), Sears and Roebucks (retail), Havemeyers (sugar), McCormicks (agricultural equipment, sadly unrelated), Westinghouses (power), Otises (elevators), Pullmans (luxury rail cars), Bells and Vails (telecommunications), Pulitzers and Hearsts (publishing), Eastmans (photography), Kelloggs (processed food), Pillsburys (milling), Singers (sewing machines), Nobels (dynamite), DuPonts (chemicals), and Dukes (tobacco). This list is incomplete.

What’s notable is the diversity of industries that produced these fortunes. Machines made “labor” more abundant, and the companies that seized upon the technological innovation to break the Schwerpunkt in their specific industry, gain the High Ground, and expand were all wildly successful. Far from simply defending against mechanization, they seized the complementary assets to which value flowed as key inputs became abundant.

There are clear differences between AI, developed by huge labs and distributed at the speed of bits, and Industrial Era machine-filled factories, but I expect the Techno-Industrial Era to play out similarly. Each industry has unique constraints and resulting High Grounds, very few of which can be cracked and captured with digital intelligence alone.

The diversity that creates unique opportunities in each industry, however, makes underwriting those opportunities a different and more difficult beast than underwriting SaaS companies, which are more homogeneous. There is no list, no spreadsheet, no agreed-upon metrics that will tell you which companies will become today’s Standard Oils. There is only the evaluation of constraints and the hunt for High Grounds.

Instead of a list, then, let me give you my favorite example: Base Power Company.

Base Power Company

Base Power Company doesn’t just make batteries. It buys cells (the commoditized piece of the value chain), manufactures battery packs, installs them on homes (starting in Texas), writes software to coordinate them, trades in the power market, and partners with utilities to help balance the grid. Base is built on the type of logic companies (and their investors) will need to exercise if they want to compete in the modern era, and it goes something like this.

We want to fix power. What’s the bottleneck? The grid. Companies are competing to build power generation and the electric machines that consume that power, and the better they do, the more strain there will be on the grid. The grid is the chokepoint. So how do you fix the grid? Laying new transmission and distribution is slow and expensive, and the grid we have is already structurally underutilized because it’s built to serve peak demand, so to smooth it out, you need batteries. Where should you put the batteries? Centralized battery farms are helpful, but they need to wait in interconnect queues, which makes them slower to turn on, and those batteries still need to distribute power to end users when demand peaks, which means they don’t fully solve the bottleneck. So you need to put the batteries right next to demand. Fill them up when the grid has capacity, and use them to smooth demand when demand is high. And if you want to put batteries next to demand (homes, to start), where is the best place in the country to do that? Texas, which operates its own deregulated grid, ERCOT, is volatile (which means potential for higher trading profits and greater need on the part of customers and utilities), and is regulatorily friendly. So you start by putting batteries on the homes of early adopters within Texas. Those slots are scarce - it would take a lot for a customer to rip and replace their batteries, and no one is installing two companies’ batteries. Then, connect them with software, improve the grid and each customer’s experience with more batteries on the network, and use the richest source of demand available in the country to begin to scale. Bring manufacturing in-house, continue to improve the batteries, decrease their costs, get more efficient at installing them, connect more of them, sign more early adopter utilities, get more scale. At which point, it’s hard to imagine a viable way to beat Base at its own game. Then, expand. 
Integrate upstream into grid hardware and generation and downstream into electronic devices to sell into customers with whom you’ve built trust. Expand geographically, leveraging scale, experience, and software to offer a better product than a potential competitor attempting to grab a foothold by starting in the next-best market. Keep expanding. Dominate. Expand some more.

There are a couple things I want you to take away from that paragraph.

First, it is a very long paragraph. This is not simple or easy. I think investors bemoaning the Death of SaaS are in part sad that the era of underwriting software businesses on known, straightforward metrics is over. Underwriting the biggest companies of this generation will be a much more bespoke process. The time has come to move from simple analysis to strategy. It is not a coincidence that my first Deep Dive on Base was structured as a walk through the evolution of the strategy memos that Zach and Justin wrote before touching a single atom.

Second, as technology improves – from AI to the Electric Stack – the vast majority of the returns will accrue to the companies that figure out the right place to attack and execute violently against their conviction. A simple way to think about this is that better software is more valuable to Base than it is to a smaller competitor, to a battery farm operator, or to a power generation company, as is better hardware. Better robots for manufacturing and logistics would make Base faster and more profitable, and making the game more CapEx intensive would give it an advantage over would-be competitors.

The lesson from Base is not that hardware is a moat, or that you should put your product next to Texans’ homes.

It’s that you need to deeply understand the problem you’re trying to solve, the constraint that’s bottlenecking it, how you’re going to unblock it with technology (and why now?), and how you might expand to capture the market once you do. It applies differently in every industry.

For airlines, the constraint is the engine: today’s turbofan engines carry planes as fast and efficiently as they can. Everything bad in air travel is downstream of that. So Astro Mechanica is building a new engine that is faster and more efficient at every speed. But certifying a commercial aircraft is a long and expensive process, so Astro plans to sell into Defense first, then build private supersonic planes (which are cheaper to certify and can be cost-competitive with first class tickets immediately), then build larger supersonic planes that are cost-competitive with commercial air travel, and use the advantage in speed and cost to build its own full-stack airline, from booking to flight.

For internet, the constraint is the architecture: incumbent telcos froze their architectures around early-2000s assumptions about what was expensive, locked themselves into passive optical networks and vendor dependence, and now spend billions every few years on upgrades that still deliver shared, degraded bandwidth with no redundancy. They do zero R&D. So Somos Internet is rebuilding the full stack from scratch: an Active Ethernet architecture borrowed from data centers, physically simple with complexity pushed to software, that delivers dedicated bandwidth to every home at a fraction of the CapEx. As it grows, Somos eats more of its supply chain: “It’s been this never-ending game of doing something janky, getting credibility, doing crazier stuff, getting more resources, getting smarter people so that we can fix the things that were messed up in the janky past iteration,” Forrest explained. “Then gaining credibility to get more resources to get cooler people to do crazier stuff. It’s like this self-sustaining fission process.” Somos is expanding geographically, into new markets, vertically, by making its own hardware and laying its own fiber, and horizontally, building hydro-powered data centers. From the position of delivering one of the few home utilities everyone pays for, better, faster, and cheaper than incumbents, it plans to expand what it offers the customers with whom it’s built trust and loyalty. Maybe one day, it will offer batteries and power. Maybe one day, it will use its growing cash position to enter the United States.

There are a lot of similarities between Base and Somos: both own a core home utility and deliver it better than incumbents, which earns them the right to expand. But there are differences, too. Base is starting in the very best market for its technology, because that’s where the need is greatest and the regulatory environment is friendliest. If Somos started somewhere like New York City, it would be caught up in red tape and slow, expensive telco lawsuits for years; so it’s starting in a high-need, regulatorily friendly environment and building up cash for bigger battles. And Astro’s approach is almost entirely different from both Base’s and Somos’, apart from using better technology, now feasible thanks to Curve Convergence, to break a constraint and capture the High Ground. For one thing, people go to planes, so Astro can’t capture their real estate in the same way that Base or Somos can.

There I go with the long paragraphs again. Fine. There is endless nuance to this.

I am talking my own book here, not because I think my portfolio companies are the only businesses that will succeed in the Age of Intelligence, but because I understand their strategies much more thoroughly. Very smart people will disagree with me on each industry’s Schwerpunkts and potential High Grounds. And even once you’ve done all this work on paper, so much comes down to execution. Will the team that identified the right strategy be the same one that can build against it to capture the opportunity? Only time will tell. That’s what makes this so much fun - it’s not obvious!

What is obvious, and I hope clear at this point, is that there is no one answer, no handy guide that will tell you how to win in the Age of Intelligence. Which means that there is also not one business model.

A Note on Business Models

While the “Death of SaaS” is overblown, what I hope this freak-out does is end the default investor assumption that every business should try to be a SaaS business.

A week before the sell-off, I met with three separate founders who told me that investors didn’t like their businesses because they weren’t SaaS. In two cases, the founders were building services businesses – traditionally a huge venture red flag. In all three, they believed the technology they were building was so good that they could use it to compete directly with incumbents instead of selling them software that made them marginally more productive.

During the sell-off, Flexport’s Ryan Petersen tweeted that everyone “smart” had told him to just build SaaS. The idea being that it would be easier to sell to freight forwarders instead of actually becoming a freight forwarder and competing.

Other founders quote tweeted him saying they’d been given, and ignored, the same advice, including Cover’s Alexis Rivas, whose company builds houses. I’m not even sure what selling software would look like here.

This is not because investors are dumb. Selling software has real advantages, and those advantages are legible more quickly. Two of those three founders I spoke to said that they had competitors who were selling software and generating a lot of revenue quickly, which is why investors thought they should be doing the same.

In the past, that was a logical discussion to have: should you try to sell software to generate lots of high gross margin revenue in the near-term in a way that’s legible to downstream capital so that you can continue to raise and hopefully give yourself time to develop moats, or should you try to compete directly, with better technology and a chance at better economics within whatever your industry’s business model is, even if those economics are worse than SaaS economics, in pursuit of the larger and ultimately more impactful shot at winning and reshaping your industry?

In most cases, that is no longer a debate. AI squeezes it from both sides. From one side, SaaS is a more competitive, less defensible business; there will be enough competitive noise that it’s harder to establish traditional moats like network effects and switching costs, and customers big and sophisticated enough to actually pay a lot and make use of your tool may opt to build something custom themselves. From the other, the technology is so powerful in the right hands that it should provide a stronger force with which to attack the Schwerpunkt than deterministic software could have. In other words, it is more likely than ever that a technology-native new entrant can defeat incumbents, assuming their technology actually addresses the industry’s key constraint.

What this means is that investors need to get comfortable with a wider range of business models to accommodate whichever is the right one for the industry in which a company operates. This does not mean that they should treat all business models equally now. Instead, they need to stop blind pattern matching altogether.

Services businesses might still be terrible for most companies, but exactly the right model for some. Stripe clearly shouldn’t be a services business; maybe an AI-native law firm should. Selling hardware to incumbents might be a bad business model, not simply because hardware is hard, but because buyers have all the power in a particular industry, or because existing suppliers have locked buyers into whole sticky ecosystems. What might be better is to use that better hardware as the High Ground from which to integrate and compete.

A question I would like to see more investors asking instead of “Why not sell SaaS?” is:

If your technology is so good, why aren’t you using it to compete?

Some companies find out that selling software to incumbents is the wrong model only through trial and error. My favorite example here is Earth AI.

Earth AI developed AI models to identify drilling targets for mining explorers back before AI was a thing, and sold them to explorers at a very good price and high margins. The challenge was: they never heard back from their customers. Many went bust - exploration is a notoriously binary business - which meant they stopped paying; retention was hard. Many others just had no incentive to report back, which meant that Earth AI wasn’t learning which of its targets were good and which were bad, which meant that it couldn’t improve its models. So it bought its own rig and went to customer sites to find out for itself, and then it realized that it could build better rigs, combine them with better models, and just compete directly. As I wrote in my Deep Dive:

The same thing that makes exploration customers bad customers – slowness, unwillingness to adopt technology – makes them very attractive competitors, if your tech actually works as well as you say it does. If you’re willing to vertically integrate – to do exploration, and drilling, and maybe even extraction – you might be able to build the most efficient explorer out there.

To be clear, Earth AI’s current business model is much more confusing than selling software. It has to invest in rigs up front, stake deposits, put teams on site to prove them out, and keep proving feasibility until a downstream miner wants to buy a stake in the deposit or buy the whole deposit outright and pay Earth AI a royalty, at which point, it becomes one of the most beautiful business models there is. Mining royalty & streaming companies have some of the highest market caps per employee in the world. Franco-Nevada is worth $48 billion with just 41 employees, good for $1.2 billion per employee! Earth AI has the potential to build up a similar portfolio at a much lower cost basis because it is willing to dig.

The point is, maybe you drill mineral deposits in Australia to build a portfolio of mines, maybe you buy cells, manufacture battery packs, install them on homes, and make money by becoming a Retail Electric Provider, trading power, and selling ancillary services, maybe you hire expensive humans, make them much more efficient, and sell their time, maybe you even sell software!

Whatever you need to do to break the constraint, gain the High Ground, and win your industry is what dictates the business model you should pursue.

You Can Even Sell Software, As Long as You Win

Sometimes, the Gods smile on an essay. As I was writing this, Stripe co-founder John Collison released the latest episode of his podcast, Cheeky Pint. His guest: Eric Glyman, the CEO of Ramp.

Responding to John’s first question, Eric described Ramp’s evolution in terms that should now sound familiar. A few years ago, Ramp’s gross profit was over 90% card interchange. Today, the non-card businesses, including bill payments, treasury, procurement, travel, and software, will comprise the majority of Ramp’s contribution profit.

Ramp used a card, software, and counter-positioning to attack what it viewed as the Schwerpunkt in corporate spend (the fact that everyone was selling money, and no one was selling time) and win the transaction layer, the High Ground from which it is now expanding to eat every point solution a finance team touches.

Throughout the conversation, he lays out from his inside view exactly how and why this is happening. Everything flows from earning the High Ground. Ramp has data that no one else has and that no new entrant can accumulate more quickly. As it adds more intelligence, it gets more data. It’s built the things that are too expensive to replicate with more tokens – “I think the fitness function for companies becomes can you actually do things in such a way where even if you could spend tokens on it, it would take more tokens to create the thing or do that work than the system that you’ve built to drive that outcome.” – and is happily spending tokens to build everything else. And as it adds more features, it grows. The company now “powers more than 2% of all corporate and small business card transactions in the United States.” The larger its share, the more it learns, the more it makes, and the more tokens it can throw at eating adjacent opportunities, which keep feeding the machine.

This is why Ramp is valued at $32 billion while Brex sold for $5.15 billion. It is why Stripe is worth multiples of Adyen. It is why Base, only two years old, was valued at $4 billion.

The ownership of the scarce position in an industry is itself a scarce asset. The market, whether it uses this language or not, is including in its valuation the belief that from that position, you can eat an industry.

In doing so, they are leaning on history and economic data. Once Rockefeller smoothed refining’s volatility and began to get scale advantages in the 1860s, it was a fait accompli. Once SpaceX drove down the cost of putting mass in orbit, and used that advantage to build a telecommunications cash cow that it could use to reinvest in cheaper launch, it became the leading candidate to win whatever economically valuable use cases required putting a lot of mass in orbit. Before Elon realized space data centers were going to be a thing, he’d won the right to win space data centers.

If you are confident in your analysis of the constraint and High Ground in an industry, and of which company is best positioned to break the former to win the latter, you can pay a premium under the assumption that more of that industry’s economic value will flow to the leader. That trend - increasing concentration of economic value - is a long and stable one, accelerated by new technologies.

Today, our new technologies are more powerful and general purpose than ever before, which means that the advantage of the category leaders able to wield those technologies is greater than ever before. They are levered to the pace of technological progress.

If AI gets smarter, Stripe and Ramp can eat more adjacencies, faster. If battery cells get more efficient, Base can offer a better service to its retail and utility customers. As power electronics continue to improve, Astro Mechanica can build faster, more efficient engines.

Whether SaaS is dead is one of the least interesting questions in the world. SaaS as a wellspring of valuable businesses, almost regardless of those businesses’ actual power, was a historical anomaly.

That doesn’t mean software is dead. We will see software businesses become some of the largest businesses in history, just as we will see hardware and even services businesses that dwarf Standard Oil’s size. Economic inputs are becoming more abundant, which means that more value will flow to the scarce complementary assets. This will continue as long as the abundance does.

The question that matters now is how you plan to win your industry. Everything else follows.

Power in the Age of Intelligence flows to the winners. Winners take more.


That’s all for today. We’ll be back in your inbox Friday with a Weekly Dose.

Thanks for reading,

Packy

Weekly Dose of Optimism #180

2026-02-13 21:07:33

Hi friends 👋 ,

Happy Friday from sunny Cape Town, South Africa! Not sure if it’s escaping frozen New York for warmer weather, spending time with family, or the fact that this was another one of the wildest weeks in Dose history, but I am feeling a little extra optimistic this week. By the end of this one, I hope you are too.

When Dan and I started writing this over three years ago, our goal was to make the world more optimistic by sharing all of the incredible progress happening in science and technology each week. That is still the case, and it’s still necessary. People are still pessimistic, and uncertain about what lies on the other side of progress.

Since we started writing, what’s changed is that things are simply moving much faster. There is more to cover each week. We have 7 Extra Doses in this one; each could be one of the top 5, and there are still things we didn’t cover.

So now, there’s an additional goal with the Dose: to keep you up to speed with the most important things happening in science and technology in the time it takes you to finish two morning coffees. Don’t doomscroll to keep up; just read the Dose.

Let’s get to it.


Today’s Weekly Dose is brought to you by… the Abundance Institute

My friends at the Abundance Institute are launching “Everyday Abundance,” a new podcast hosted by bestselling authors Virginia Postrel and Charles Mann, this spring. I had a fascinating conversation about tissue paper, sneezing, and germs with Virginia and Charles at the Progress Conference in October, and I’m pretty excited to listen to the show.

If you join Abundance’s Foundry now, you’ll get access to a salon Zoom with Virginia, early access to the podcast, and 3 months of not boring world free1, on top of all the other benefits of supporting this amazing organization.

Check out the Foundry membership here: Join the Foundry


(1) Isomorphic Labs Drug Design Engine unlocks a new frontier beyond AlphaFold

Isomorphic Labs

AlphaFold won Demis Hassabis a Nobel Prize for predicting the structure of proteins, which felt like a technological miracle at the time, as captured in The Thinking Game.

This week, Hassabis’ Isomorphic Labs, the Google spinout he CEOs on Tuesdays while also running Google DeepMind, showed that they can now predict how to drug them in a technical report on IsoDDE, its AI drug design engine.

On the hardest protein-ligand structures (the ones most unlike anything in its training data, where AlphaFold 3 struggled), IsoDDE more than doubles AlphaFold 3’s accuracy. It outperforms AlphaFold 3 by 2.3x on antibody-antigen modeling and Boltz-2 by nearly 20x. And it predicts how strongly a drug will bind to its target better than FEP+, the gold-standard physics simulation that typically costs orders of magnitude more in compute time.

It’s quickly finding things that have taken researchers over a decade. Cereblon is a protein that researchers spent 15 years believing had one druggable pocket. A 2026 paper experimentally discovered a second, hidden one. IsoDDE found both from the amino acid sequence alone, with no hints about what ligand to look for.

The big question from here is whether and how IsoDDE and other computational breakthroughs translate into actual drugs. As of early 2026, no AI-discovered drug has received FDA approval. AI-designed compounds are progressing to clinical trials at roughly the same success rates as traditionally discovered ones. Biology remains brutally unpredictable once you move from a screen to a human body.

Isomorphic Labs itself has pushed back its clinical trial timeline, now targeting end of 2026 for its first AI-designed drugs to enter human trials. So we’re still in the “proof of concept” phase for the whole field.

But to date, drug discovery’s biggest bottleneck has been the staggering cost and time of search. It can take a decade and billions of dollars per drug. Last year, Hassabis told 60 Minutes: “We can maybe reduce that down from years to maybe months or maybe even weeks.”

IsoDDE compresses the search phase from months of lab work to minutes of computation. If it can reliably surface the right targets and the right molecules faster, even if clinical trial timelines stay the same, you’re running dramatically more shots on goal for the same cost, and taking shots in weirder, harder-to-find pockets that humans would never think to (or at least have the time and resources to) try.

IsoDDE and other tools like it turn the front end of drug discovery from a slow, artisanal hunt into a fast, systematic search. One more bottleneck down. They’ll flood the clinical pipeline with better, more novel drug candidates, which creates another one. We are going to need to do something to accelerate clinical trials and FDA approvals to handle the flood.

(2) Gemini 3 Deep Think Crushes Benchmarks, Does Materials Science and Math

Google DeepMind

Look, I’m a simple man. If you include a video of a Duke lab in the announcement of your new model that “mogs” state-of-the-art models on ARC-AGI-2 (a test designed to be incredibly hard for AI), assists in cutting-edge materials science research, and helps mathematicians solve Erdős problems, I’m going to include it in the Dose. Go Duke.

Deep Think is GDM’s specialized reasoning mode within Gemini 3, designed to spend minutes (or longer) chewing on a single problem, exploring solution paths, backtracking when they don’t work, and building up multi-step chains of reasoning before committing to an answer. Google calls it “System 2” thinking, borrowing the Kahneman framing: where standard Gemini is fast and intuitive, Deep Think is slow and deliberate.

That deliberate approach pays off on benchmarks. Deep Think hit 84.6% on ARC-AGI-2 (the frontier reasoning benchmark, verified by ARC Prize), where the next closest model scored 68.8%. It achieved a 3455 Elo on Codeforces: for context, that puts it in the top tier of competitive programmers on Earth; it would rank 8th in the world. It set a new standard of 48.4% on Humanity's Last Exam, a benchmark designed to be the hardest collection of problems across math, science, and engineering. And it earned gold medal-level results on the written portions of the 2025 International Physics and Chemistry Olympiads.

It’s always hard to know what the benchmarks mean, though. Every time a big lab drops a new model, they beat some benchmarks.

Which is why the video with Duke University's Wang Lab is cool. In it, a researcher uses Deep Think to optimize the fabrication of MoS₂ monolayer thin films, a class of semiconductor materials that's notoriously difficult to grow at precise scales. The researcher prompts Deep Think with synthesis parameters, the model reasons through an optimized growth recipe, and then the system pipes those parameters directly into lab automation software that controls the furnace, gas flows, and temperature profiles. Deep Think designed a recipe for growing thin films larger than 100 μm, a precise target that previous methods had struggled to hit. The era of self-driving labs is upon us.

Meanwhile, collaborating with experts on 18 open research problems, Deep Think helped break long-standing deadlocks across computer science, information theory, and economics. It cracked classic algorithmic challenges like Max-Cut and Steiner Tree by pulling in mathematical tools from entirely unrelated fields, the kind of cross-domain intuition leap that's supposed to be uniquely human but which is basically what I expect a thinking machine with access to all human knowledge to do. Every time a new model drops, I ask it to tell me connections that humans have missed given its view across disciplines, and normally, it’s pretty weak. I’m excited to give Deep Think the test.

In another case, it caught a subtle logical flaw in a proof that had survived human peer review. In research-level mathematics, it autonomously generated a paper on structure constants in arithmetic geometry and collaborated with humans to prove bounds on interacting particle systems. And DeepMind ran it against 700 open problems from Bloom's Erdős Conjectures database, a collection of unsolved problems posed by Paul Erdős, one of the most prolific mathematicians in history, and autonomously solved several of them.

The coding stuff that gets twitter buzzing just doesn’t excite me that much. I didn’t buy a Mac Mini. The writing is still bad. But this stuff… helping humans solve hard problems and make new discoveries… this stuff I’m here for.

It’s a great time to be a researcher, and a bad time to be a problem.

(3) Introducing: Liberty Class

Blue Water

Speaking of problems that have seemed almost impossible for Americans to solve…

American shipbuilding numbers are almost comical. China’s shipbuilding capacity is 232 times greater than America’s. In 2024, Chinese yards built over 1,000 commercial vessels. The US built eight. China’s navy has over 370 battle force ships and is projected to hit 435 by 2030. The US Navy has 296 and is projected to shrink to 283 by 2027 as retirements outpace new construction. 37 of the 45 ships currently under construction face significant delays. America’s four public shipyards average 76 years old, with dry docks averaging over 107. As the Secretary of the Navy put it, one Chinese shipyard has more capacity than all American shipyards combined. You’ve seen the chart.

Good news. This week, Blue Water Autonomy unveiled the Liberty Class: a 190-foot autonomous steel ship with a range of over 10,000 nautical miles and 150+ metric tons of payload capacity. The name is a deliberate nod to the Liberty Ships of World War II, which were built rapidly and at scale to meet wartime demand. Blue Water is making a similar bet: take a proven hull design (Damen's Stan Patrol 6009, battle-tested in demanding conditions worldwide), re-engineer it from the inside out for autonomous operation, and start building at Conrad Shipyard in Louisiana next month. The first vessel is expected to be delivered to the US Navy later this year.

Blue Water developed Liberty entirely with private capital, which is unprecedented for a full-sized Navy ship, but standard in commercial markets. Working with over 100 suppliers, they went from founding in 2024 to construction start in 2026, and they're targeting serial production of 10-20 vessels per year. Conrad's five yards and 1,100-person workforce already produce 30+ ships annually, so the production capacity exists; now, it’s being put to more productive use.

It’s a good start, but we’re going to need like 1,000 of those eventually to catch up.

More good news on the autonomous boats front, then: Saronic was selected for DARPA's Pulling Guard program, which is developing semi-autonomous escort systems to protect logistics vessels at sea. Over 75% of global trade moves by water, and the Navy has historically protected those routes by deploying billion-dollar destroyers and carrier strike groups. Pulling Guard is exploring whether low-cost, modular autonomous platforms can provide distributed maritime protection, “protection as a service” that works in peacetime and conflict. Saronic, which has been building autonomous surface vessels and scaling manufacturing at speed, will design a modular, autonomy-enabled vessel under the program.

America's traditional shipbuilding apparatus is a cautionary tale in institutional sclerosis. But we love sclerosis here at not boring. Every sclerotic incumbent is an opportunity for a startup to build something better, faster, and cheaper. Ships ahoy.

(4) A small polymerase ribozyme that can synthesize itself and its complementary strand

Giannini, Kwok, Wan, Goeij, Clifton, Colizzi, Attwater, and Holliger in Science

Stanford Medical Assistant Professor Jason Sheltzer wrote a better lead-in than I could: “AI is cool and all... but a new paper in Science Magazine kind of figured out the origin of life?”

Here's the backstory. The leading theory for how life began is the “RNA World” hypothesis: before DNA, before proteins, before cells, RNA molecules on early Earth stored genetic information and catalyzed chemical reactions. At some point, one of these RNA molecules figured out how to copy itself, and from that moment, evolution (descent with modification) could begin. The rest, over 4 billion years, is history.

The problem is that scientists have never been able to demonstrate this convincingly in the lab. Previous RNA enzymes (called ribozymes) that could copy other RNA strands were huge, 165 to 189 nucleotides long, and far too complex to have plausibly popped into existence in a primordial soup. And crucially, none of them could copy themselves. They could copy other, simpler RNAs, but their own folded structures blocked self-replication. It was a fundamental paradox: a ribozyme needs to fold to work, but when folded, it can't be copied.

Researchers at the MRC Laboratory of Molecular Biology in Cambridge (the same lab where Watson and Crick figured out DNA's structure) appear to have cracked it. They discovered QT45: a 45-nucleotide ribozyme, less than a quarter the size of previous RNA polymerases, that can synthesize both its complementary strand and a copy of itself. It does this by stitching together three-letter RNA building blocks (trinucleotides) rather than adding one letter at a time. Those triplets bind strongly enough to unravel folded RNA structures, solving the self-replication paradox that has stumped the field for decades.

The "45" matters enormously. Previous self-replicating ribozyme candidates were so large and complex that their spontaneous emergence on early Earth seemed implausible, like a tornado sweeping through a junkyard and assembling a 747. At 45 nucleotides, QT45 is small enough that the researchers argue polymerase ribozymes may be far more abundant in random RNA sequence space than anyone thought, meaning self-replication might not have required an astronomically unlikely accident. It might have been, in a sense, easy.
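A back-of-envelope calculation shows why length dominates here. Each position in an RNA strand can be one of four bases, so the odds of a specific sequence arising at random shrink fourfold with every added nucleotide. This is a simplified model (it ignores the fact that many different sequences can fold into a working ribozyme), but it conveys the scale:

```python
import math

def random_odds(length: int) -> float:
    """Probability that a uniformly random RNA strand of a given
    length matches one specific target sequence (4 bases/position)."""
    return 4.0 ** -length

# A 45-nt sequence like QT45 vs. a 165-nt ribozyme
p_45 = random_odds(45)    # ~8e-28
p_165 = random_odds(165)  # ~5e-100

# How many orders of magnitude more likely the short one is
advantage = math.log10(p_45 / p_165)
print(f"45-mer:  {p_45:.1e}")
print(f"165-mer: {p_165:.1e}")
print(f"A QT45-sized sequence is ~10^{advantage:.0f}x more likely to appear")
```

Roughly 72 orders of magnitude separate the two, which is the gap between "astronomically unlikely" and "plausible in a primordial soup."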

The coolest part is that the triplet building blocks QT45 uses, three-letter RNA chunks, are the same triplet code that all life on Earth still uses today to make proteins, like the ones whose structures AlphaFold predicted and that IsoDDE targets. The genetic code is like a still-operational fossil of the very first replication system.

We spend a lot of time in the Dose on people solving hard problems. This one is the hardest problem: how did something come from nothing? How did chemistry become biology? The answer, it turns out, might be astonishingly simple, just 45 letters long. Way shorter than anything I’ve written.

(5) Texas Parents Rush for School Choice

The Wall Street Journal Editorial Board

There was a viral slop essay on X this week that I won’t link to but that you’ve probably seen talking about how screwed humans are, including our kids, except for maybe those of us who pay to get the good models and the analysts who ask AI to do research that would have taken three days in one hour. I, for one, think the kids are going to be alright, especially the ones who learn how to think instead of asking the machines to do it for them.

One thing is clear, though: we’re going to need to educate our kids in a way that’s different from the Prussian Model, which uncharitably optimized us to think like machines so that we would be good factory workers. We need to teach our kids to love learning, to ask questions, and to be curious. Basically, we need to teach our kids in a way that’s the opposite of the way most schools do it now.

That’s why I’ve been a big fan of school choice: states giving parents the money to choose better schools for their kids. School choice is not without its critics, who argue that it takes money away from public schools and hurts public school students, but public schools have had a monopoly on the education of the vast majority of kids who can’t afford private school, and the results have largely been what you’d expect from a state-protected monopoly. School choice encourages competition and can help direct funds to new schools taking new approaches to rethinking education.

This week was a big one for school choice. Texas opened applications for its new Education Freedom Accounts on February 4th, and 42,000 families applied on day one, a nationwide record for any new school choice program, surpassing Tennessee's 33,000 first-day applications last year. By the next morning, the number had crossed 47,000. The latest reports are at 91,000. The application window runs through March 17th.

This was a long time coming. For more than 20 years, Texas's Republican-controlled House blocked school choice legislation, even as the Senate passed ESA bills session after session. The tide turned in 2024 when Governor Abbott campaigned for 16 House candidates who challenged the incumbents blocking his school choice bill. The new House Speaker, Dustin Burrows, pledged the bill would pass. It did, last April. Senate Bill 2 allocated $1 billion for the 2026-27 school year, with room to grow to $4.5 billion by 2030.

The program gives eligible families roughly $10,474 per student per year to use toward private school tuition, homeschooling costs, tutoring, career and technical education, and other approved educational expenses. Students with disabilities can receive up to $30,000. Eligibility is prioritized by economic need, not first-come-first-served, with disabled and low-income students at the top.
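A quick sanity check on scale. Assuming the full first-year allocation funds standard awards (and ignoring administrative costs and the larger $30,000 disability awards, so these are ceilings, not official enrollment figures):

```python
# Rough capacity math for the Texas ESA program using figures from
# the story: $1B allocated for 2026-27, ~$10,474 standard award,
# room to grow to $4.5B by 2030.
budget_year_one = 1_000_000_000
award = 10_474

students_year_one = budget_year_one // award
print(f"~{students_year_one:,} students fundable in year one")  # ~95,474

budget_2030 = 4_500_000_000
print(f"~{budget_2030 // award:,} students fundable at the 2030 ceiling")
```

Which is to say: 91,000 applications on a $1 billion budget means the program is effectively oversubscribed on arrival.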

I’m personally excited about this one because the Certified Educational Assistance Organization running the day-to-day operations of the program (application portal, payment processing, e-commerce marketplace where families shop for approved educational services) is Odyssey, a not boring capital portfolio company. Odyssey already manages ESA programs in Iowa, Georgia, Louisiana, Utah, and Wyoming, but Texas is a different animal. This is the biggest state school choice program ever launched, and Odyssey is the infrastructure making it work, providing each family with a secure digital wallet, real-time balances, and access to a marketplace of vetted schools and providers. They’ve handled the biggest launch ever seamlessly.

The numbers show that parents want this. I’m excited to see how K-12 education evolves as parents get to choose where to allocate dollars to get the education they think is best for their kids.

EXTRA DOSE: Will Manidis, Anthropic, Simile, 3D printed boats, Zero

Read more

Weekly Dose of Optimism #179

2026-02-06 21:57:45

Hi friends 👋,

Happy Friday and welcome back to the 179th Weekly Dose of Optimism!

We started writing the Weekly Dose during the 2022 bear market because there was a disconnect between the incredible things we saw being built and the (largely market-driven) pessimism. So this week is great. We were born in the darkness.

Even as the markets have vomited, the innovation has continued apace. Zoom out.

We have another jam-packed week of optimism, including four Extra Doses below the fold for not boring world members.

Let’s get to it.


Today’s Weekly Dose is brought to you by… Guru

Your team is probably already using AI for everything: research, customer support, product decisions. Just one problem… AI is confidently wrong about your company knowledge 40% of the time.

While everyone races to deploy more AI tools, they’re building on a foundation of outdated wikis, scattered documents, and tribal knowledge that was never meant to power automated decisions.

Guru solved this for companies like Spotify and Brex. They built the only AI verification system that automatically validates company knowledge before your AI agents use it. Think of it as quality control for your AI’s brain.

The companies that figure this out first will have AI that actually works. The ones that don't will waste valuable human time cleaning up expensive mistakes.

Try Guru Today


(1) Introducing Claude Opus 4.6 and Introducing GPT-5.3-Codex

Anthropic and OpenAI, respectively

The race between Anthropic and OpenAI to build the smartest, most useful thinking machines is heating up, and it’s riveting. The day after Anthropic released its Super Bowl commercials, which make fun of OpenAI for planning to introduce ads into its product (commercials that many people, including Jordi Hays, consider a bit deceptive, but which are super entertaining)…

… both companies dropped their newest, smartest models. Anthropic released Opus 4.6 and OpenAI released GPT-5.3-Codex (Codex is its coding model/app).

Anthropic’s Opus 4.6 is for everyone: better at coding, plans longer, runs financial analyses, does research, etc… I’ve been playing with it and it’s definitely smarter (although thankfully it’s still a shitty writer).

OpenAI’s very-OpenAI-named GPT-5.3-Codex is for coding. It slots right into the Codex app they released this week. I had 5.2 build a website for not boring, and it was very cool that it could build it, but no matter how hard I prompted, the design was trash. I told 5.3 to throw out that trash and make me something that looked better, and it actually did a decent job in one shot. It can also do things like make models and presentations and docs, although it’s not available in Chat yet.

In both cases, researchers at the labs used their own agents to help research and build the new models. “Taken together,” OpenAI writes, “we found that these new capabilities resulted in powerful acceleration of our research, engineering, and product teams.” This is the mechanism that fast takeoff believers believe in: models so smart that they make the next models smarter, and so on.

I don’t know what to say other than have fun playing with your new geniuses this weekend.

(2) As Rocks May Think

Eric Jang

Whenever logical processes of thought are employed — that is, whenever thought for a time runs along an accepted groove — there is an opportunity for the machine.

— Dr. Vannevar Bush, As We May Think, 1945

How’d we get here?

Eric Jang is VP of AI at 1X Technologies, the humanoid robotics company, and before that spent six years at Google Brain robotics where he co-led the team behind SayCan. He’s one of the people building the robots we covered in my robotics co-essay with Evan Beard a few weeks ago.

His new essay, As Rocks May Think, is a riff on Vannevar Bush’s 1945 classic, As We May Think, and the title is the thesis: we taught rocks to think, and they’re getting really smart.

The piece is part technical history, part practical manual, and it is pretty technical, but it’s the most concise overview of how we got to where we are today and where we might be going from here that I’ve come across. Jang walks through the intellectual lineage of machine reasoning, from symbolic logic systems that collapsed when a single premise was wrong, through Bayesian belief nets that got tripped up in compounding uncertainty, to AlphaGo’s breakthrough combination of deductive search and learned intuition, and finally to today’s reasoning models, like Opus 4.6 and GPT 5.3.

For the practical manual piece, Jang walks through building his own AlphaGo and how he uses AI today: “Instead of leaving training jobs running overnight before I go to bed, I now leave "research jobs" with a Claude session working on something in the background. I wake up and read the experimental reports, write down a remark or two, and then ask for 5 new parallel investigations.”

He suspects we’ll all have access to today’s researcher-level of compute soon, and that when we do, we are going to need a shit-ton of compute. He compares thinking machines to air conditioning, a technology that Lee Kuan Yew credited with changing the nature of civilization by making the tropics productive. Air conditioning currently consumes 10% of global electricity. Data centers consume less than 1%. If automated thinking creates even a fraction of the productivity gains that climate control did, the demand for inference compute is going to be enormous.

Maybe that’s why Google anticipates $185 billion in 2026 CapEx spend and Amazon anticipates an even more whopping $200 billion, which sent its stock tumbling after hours.

The sell-off is ugly, but if Jang is right, all of that buildout and much more is going to be put to use. I asked my thinking rock (Claude Opus 4.6) what it thinks about the selloff. It told me: “if the bottleneck is inference compute, build the data centers. Vertical integration, baby.”

(3) Drone Controlled by Cultured Mouse Brain Cells Enters Anduril AI Grand Prix

Palmer Luckey

Don’t count thinking cells out yet, though!

Anduril’s AI Grand Prix, a drone racing competition, has strict rules: identical drones, no hardware mods, AI software flies. Over 1,000 teams signed up in the first 24 hours to compete for $500,000 and a job at Anduril.

Then one team showed up planning to use a biological computer built from cultured mouse brain cells to fly their drone.

Mouse brain cells. Australian company Cortical Labs commercially launched the CL1 last year: a $35,000 device that fuses lab-grown neurons with silicon chips. The neurons are grown on electrode arrays, kept alive in a life-support housing, and learn tasks through electrical stimulation. In 2022, the team placed 800,000 human and mouse brain cells on a chip and taught the network to play Pong in five minutes. The neurons run on a few watts and learn from far less data than conventional AI.

So: is a mouse brain “software”? Who cares.

“At first look, this seems against the spirit of the software-only rules. On second thought, hell yeah.”

(4) Waymo Raises $16 Billion, Now Does 400,000 Rides a Week

Waymo

Speaking of autonomous vehicles… Alphabet’s self-driving car company has way mo’ money at its disposal to save lives.

Nearly 40,000 Americans died in traffic crashes last year. The leading causes, things like distraction, impairment, fatigue, are all fundamentally human problems. Waymo doesn’t have those problems. It’s safer than human drivers, and the faster we get more of them (and other self-driving cars) on the road, the better.

Luckily, the company just raised $16 billion, which is basically a seed round in AI and is like 10% of what any serious hyperscaler is planning to spend on CapEx this year, but which will mean a lot more self-driving cars on the road. The round values Waymo at $126 billion and brings total funding to ~$27 billion. The investor list suggests that if they keep doing their job, there’s plenty more where that came from: Sequoia, a16z, DST Global, Dragoneer, Silver Lake, Tiger Global, Fidelity, T. Rowe Price, Kleiner Perkins, and Temasek, alongside majority investor Alphabet. This is the largest private investment ever in an autonomous vehicle company.

We’re talking a lot about fast takeoffs this week, and Waymo is a case study in gradually, then suddenly.

Waymo started in 2009 as a secret Google project, with a handful of engineers modifying a Toyota Prius to drive itself on the Golden Gate Bridge. For years, the punchline was that self-driving cars were always five years away. Google spent $1.1 billion between 2009 and 2015 and had essentially nothing to sell for it. The pessimists were winning. The five years away joke kept landing.

And then it started working. 127 million fully autonomous miles driven. A 90% reduction in serious injury crashes versus human drivers. 15 million rides in 2025 alone (3x 2024). Over 400,000 rides per week across six US metro areas.

They’re in Phoenix, San Francisco, LA, Austin, Atlanta, Miami. If you’ve ridden in one in one of those cities, the thing that strikes you is how fast it goes from feeling sci-fi to feeling normal. Now, they’re planning to launch in 20+ additional cities in 2026, including Tokyo and London. Saving lives around the globe.

My kids are never going to get their drivers’ licenses, are they?

(5) Contrary Tech Trends Report

Contrary Capital

My friends at Contrary just dropped their annual Tech Trends Report, full of charts, data, and insights across a wide range of technological frontiers. It’s one of the most optimistic documents I’ve read in a while.

A few things jumped out. AI tools are reaching adoption speeds that make the internet’s growth curve look leisurely. OpenEvidence, an AI tool for doctors, hit 300,000 active prescribers in 11 months, a milestone that took Doximity, the previous standard-bearer, 11 years. ChatGPT is at 800 million weekly active users with retention rates approaching Google Search. And coding AI tools like GitHub Copilot, Cursor, and Claude Code are each approaching or at $1 billion in ARR. AI companies are reaching revenue milestones 37% faster than traditional SaaS companies did.

On energy, the numbers are staggering. Welcome to the ELECTRONAISSANCE. Total US electricity generation is projected to grow 35-50% by 2040, driven by data centers, EVs, and manufacturing. The country is investing $1.3 trillion in AI-related capital expenditure alone by 2027, and $3-5 trillion in global data center spending by 2030. Meanwhile, wind and solar are the fastest-growing energy sources globally, and US fab capacity is projected to grow 203% from 2022 to 2032, more than double the global average. America is building again.

And then there’s the frontier stuff. Lonestar Data Holdings sent a data storage unit to the moon in 2025. The report lays out how lunar bases could unlock helium-3 for clean fusion energy (which For All Mankind predicted), rare earth metals for EVs and batteries, and platinum group metals for hydrogen fuel cells. Artemis II, a crewed lunar flyby, is scheduled for April 2026. The US Space Force wants a 100kW nuclear reactor on the moon by decade’s end. Microsoft sank a data center underwater and saw 8x fewer hardware failures. 90% of US factories still operate without robots, which means we have a lot of productivity gains ahead.

There are challenges too, of course: aging grid infrastructure, water stress around data centers, the fact that 60% of CEOs say AI projects haven’t delivered positive ROI yet. But the overwhelming takeaway is that the buildout is happening, the adoption curves are real, and the scale of investment is unlike anything we’ve seen.

We are living in a sci-fi novel. What a time to be alive.

EXTRA DOSE (for not boring world subscribers) BELOW THE FOLD

Skyryse, Machina Labs, OpenAI x Gingko, General Matter x Mario

Read more

Raising a Special Little AI

2026-02-03 21:56:12

Hi friends 👋 ,

Happy Tuesday! I’ve been watching the hype around OpenClaw/Moltbook, and I think people are right that there’s something there but wrong about what.

This short essay is my half-baked thoughts on what that something is, and the type of company that might be built on the insight. One of the things that I like about not boring world is that I can send more work-in-progress ideas instead of just longer, fully formed ones, so the beginning of the essay is for everyone and the full thing is for not boring world members. Join us.

Let’s get to it.


Raising a Special Little AI

I have seen the hype around OpenClaw (fka Moltbot, fka Clawdbot), and around the social network for the agents it spawns, Moltbook. I haven’t gotten involved. I mean, I set up Clawdbot, and texted with it on WhatsApp for a few minutes, but I found that it was easier to simply open my Weather app than to be texted the weather (better yet, open my front door). While some believe that a social network full of agents talking to each other signals the beginning of the takeoff, I just don’t find it particularly interesting.

Maybe it’s because I’m not technical. Maybe because there’s just not that much in my life that needs automating. Maybe because I believe that those who are able to focus through the noise will inherit the kingdom of god.

Having said that, I do subscribe to the Chris Dixon views that “the next big thing will start out looking like a toy” and “what the smartest people do on the weekend is what everyone else will do during the week in ten years,” so if this many people are captivated, there’s something going on.

I just haven’t seen anyone hit on what’s actually happening.

My hunch, from the outside, is that what we’re seeing is early forms of competition to create the best AI for yourself. Like raising kids to be the best versions of themselves, but for AIs.

You can see it in the way people are posting. Practically none of what they’re showing off their Clawdbots doing is useful. It’s a race for novelty and specialness, one that says as much about the “parent” as the kid. I made this thing do this, even if it does it “all by itself.” It’s like me writing about my son selling Donut Hats; nobody (Sorry Dev) needs a Donut Hat, but I find it fascinating that we raised a little dude who sells them.

Given OpenClaw’s success and the technical skill required to set it up well, people have predicted that we will soon see more cleanly productized versions of AI assistants that can just do stuff for us in the background, usable by normies. And we will! But I don’t think that’s the right takeaway from this. Most normies don’t have that many things that need automating until we get home robots.

The more important takeaway in my opinion is that we will want to raise our own AIs, and we will want to compete to make them the very best at what we want them to be the best at.

The thing I find funniest about the OpenClaw / Moltbook hubbub is that people are imagining that their AIs are becoming humanlike mainly because of their own very human desire to have and be better and different.

Aluminum, sugar, books, purple dye, glass windows, pineapples, salt, and ice were luxury items once. Then everyone got them. The bar for luxury rises one democratization at a time.

And certainly, if we’re going to have the same thing as everyone else, we want to use it, or raise it, better and differently than everyone else so we can show off our unique, special version of things.

Bandai did $150 million in Tamagotchi sales in their first seven months in the United States by giving people a tiny digital creature that was uniquely theirs to care for, personalize, and show off.

Whatever company seizes on this human desire instead of racing to build another Clawd reskin is going to have trillions of reasons to be proud.

There is a deeper, less toyish precedent: parenting. Every parent thinks that their kid is the greatest kid in the world, and good parents help their kids to become the fullest expression of their passions and curiosities. We read to them, teach them, model morality for them, drive them to class and practice and clubs, and push them when they need a little push, so that they might be the best version of themselves. A world in which every kid was exactly the same would be a bland world.

That is the world we live in with our AI models, though. They are all the same, basically. Not that every major lab’s foundation model pretty much converges on the same outputs—which is true, but a separate conversation—but that each person’s instance of the same model spits out the same thing. This is one of the reasons AI continues to feel like slop even as it improves. Sameness is slop.

Read more

Weekly Dose of Optimism #178

2026-01-30 21:52:09

Hi friends 👋,

Happy Friday and welcome back to our 178th Weekly Dose of Optimism!

This is one of the most jam-packed doses in recent memory. We had seven Extra Dose stories… before rumors emerged that SpaceX and xAI (and/or Tesla) might be merging.

We’re already way over the length limit, so…

Let’s get to it.


Today’s Weekly Dose is brought to you by… not boring world

For more not boring, including all of the stories below the fold, essays co-written with experts, and chats, join our growing community of not boring world members:

Subscribe now


(1) Two Years of Telepathy

Neuralink

Twenty-one people worldwide now have brain chips in their heads, controlling computers with their thoughts. Neuralink’s first product, Telepathy, “aims to enable people with paralysis to directly control computers, phones, and robotic limbs using their thoughts alone.”

That number alone is remarkable. We went from Noland Arbaugh receiving the first implant in January 2024 to over twenty “Neuralnauts” enrolled in trials across the US and Canada in just two years. Several participants have already exceeded the information transfer rate of an able-bodied person using a mouse, hitting over 10 bits per second with their thoughts alone.

But the human stories are what make this real. Noland, patient one, is back in school pursuing a degree in neuroscience. Sebastian, a 23-year-old medical student, uses his implant up to 17 hours a day to study for exams. Audrey, the first female participant, creates intricate digital art and plans to open a physical gallery to showcase her work. People with ALS are typing at 40 words per minute through imagined finger movements, with a goal of reaching conversational speed.

While Telepathy spreads, Elon Musk provided updates on the company’s future products.

Blindsight, which will give those who’ve lost their sight low resolution vision, at first, and higher resolution vision, over time, is ready to begin trials pending regulatory approval. And Musk said that the “next generation Neuralink cybernetic augment with 3x capability” will be ready next year.

For now, Telepathy is a technological miracle, and I highly encourage you to go to the blog post to watch the videos. Imagine what it must be like to be trapped in your body, and then what it must be like to regain the ability to move things with your mind. Then remember that we live in an Age of Miracles.

(2) Triple Therapy Eliminates Pancreatic Cancer in Mice

Vasiliki Liaki, Mariano Barbacid, et al. for PNAS

For those who don’t speak Spanish (me), allow me to translate: Spanish scientists led by Dr. Mariano Barbacid have cured pancreatic cancer in mice.

Pancreatic cancer is one of the worst diseases humans face. It kills nearly half a million people globally each year. It has a 13% five-year survival rate (just 8% for the most common form) and is projected to become the second leading cause of cancer death worldwide by 2030. Ninety percent of cases are driven by mutations in a gene called KRAS. The problem: KRAS inhibitors work for a few months, then the tumor rewires around them.

In 1982, a Spanish scientist named Mariano Barbacid isolated the first human oncogene—HRAS—and helped establish that cancer is caused by specific genetic mutations. He’s spent the forty-plus years since then studying the RAS family of genes that includes KRAS. Now, at 75, he may have finally cracked the resistance problem.

The insight is simple: if the tumor can escape one blocked pathway, block three at once. Barbacid’s team at Spain’s National Cancer Research Centre combined daraxonrasib (a KRAS inhibitor), afatinib (an EGFR/HER2 blocker already approved for lung cancer), and SD36 (a protein degrader targeting STAT3). Cut the engine, seal the exits, and disable the backup system, simultaneously.

In mice, the tumors vanished. For over 200 days. No recurrence. The same results held across genetically engineered mouse models and patient-derived tumor xenografts. No significant toxicity.

Barbacid cautions that clinical trials in humans are still years away, requiring funding and regulatory approval. Daraxonrasib alone could be approved later this year. But the principle, that combination therapy designed around resistance mechanisms can achieve durable remission in one of oncology’s most brutal cancers, is now proven in animals.

The man who discovered the first human oncogene may have also discovered how to defeat its most vicious descendant.

(3) AlphaGenome: DeepMind Cracks DNA's "Dark Matter"

Žiga Avsec et al. for Nature / Google DeepMind

Google DeepMind, folks! We been telling you.

When the Human Genome Project delivered its first draft in 2003, scientists discovered something humbling: only about 2% of our DNA actually codes for proteins. The other 98%, once dismissed as “junk DNA,” was a mystery. Two decades later, we know this non-coding DNA is crucial for regulating gene expression, determining when and where genes turn on and off. We just couldn’t read it.

This week, Google DeepMind published AlphaGenome in Nature and open-sourced its code. The model takes in sequences of up to one million DNA letters and predicts how mutations in those stretches affect gene expression, essentially translating the regulatory grammar that governs 98% of our genome.

The technical achievement is significant: AlphaGenome beat or matched the best existing models on 25 of 26 variant effect prediction tasks. It unifies capabilities that previously required specialized tools—splicing prediction, chromatin accessibility, transcription factor binding, gene expression changes—into a single model. Where previous tools had to trade off between sequence length and prediction accuracy, AlphaGenome analyzes million-base-pair stretches at single-nucleotide resolution.
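Variant effect prediction, the task AlphaGenome dominates, has a simple core logic: score the reference sequence and the mutated sequence with the same model, then compare. The sketch below illustrates that logic with a toy stand-in model; `predict_expression` is a hypothetical placeholder, not AlphaGenome's actual API:

```python
# Conceptual sketch of variant effect prediction: run the same model on
# the reference and alternate sequences and report the difference.
from typing import Callable

def variant_effect(seq: str, pos: int, alt: str,
                   predict_expression: Callable[[str], float]) -> float:
    """Change in predicted gene expression caused by substituting
    the base at `pos` with `alt`."""
    ref_score = predict_expression(seq)
    alt_seq = seq[:pos] + alt + seq[pos + 1:]
    alt_score = predict_expression(alt_seq)
    return alt_score - ref_score

# Toy stand-in model: "expression" is just the GC fraction of the sequence
toy_model = lambda s: (s.count("G") + s.count("C")) / len(s)
print(variant_effect("ATGCGTA", 0, "G", toy_model))  # A->G raises GC fraction
```

AlphaGenome's contribution is doing this comparison across million-letter windows at single-nucleotide resolution, and predicting many regulatory readouts (splicing, chromatin accessibility, expression) at once rather than one per specialized tool.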

What matters more is what it enables. Nearly 3,000 scientists across 160 countries have already been using the model since DeepMind released a preview API last June. They’re using it to narrow down which genetic variants actually cause disease in conditions from cancer to neurodegeneration. The model won’t tell you if someone will get sick (gene expression is influenced by environmental factors it can’t see) but it can help researchers prioritize which mutations to investigate.

“Ever since the human genome was sequenced, people have been trying to understand the semantics of it,” said Pushmeet Kohli, DeepMind’s VP of Science. “It’s like you have a huge book of three billion characters and something wrong happened in this book. AlphaGenome can be used to say, ‘If you change these words, what would be the effect?’”

AlphaFold gave us the structures of proteins. AlphaMissense predicted which mutations in protein-coding regions cause problems. AlphaGenome completes the trilogy by tackling the regulatory dark matter that connects DNA to everything else. Our bodies are finally becoming machine readable.

(4) Google DeepMind Launches Project Genie, Playable Worlds in a Prompt

Demis Hassabis

No, seriously. Google DeepMind, folks!

Yesterday, the widest-ranging team in AI rolled out playable world models. Through a prompt or an image, you can create virtual worlds and play them with a character of your choosing.

World models are starting to get really, really good. Imagine making anything in the video above even a couple of years ago. It would have taken weeks? months? tens of thousands or millions of dollars? And now, you can do it in a prompt.

A couple years back, Conrad Bastable wrote this excellent piece in defense of monopolies, Monetization & Monopolies: How The Internet You Loved Died, arguing that tech monopolies are good because their outsized profits allowed them to overpay society on the way up with a bunch of not-fully-economically-squeezed products. He uses Google from 2010-2016 as an example of what can go right, and Google 2014-2024 as an example of what happens when the monopoly goes away, but the work they’ve been putting out recently at GDM suggests that it’s still really good to have a ~monopoly money machine that you can throw at trying a bunch of really cool things.

Will these World Models make money over time? Probably. They’ll be useful for games and entertainment and eventually, real world applications. Google will likely make money off of their investment. But in the meantime, you can create worlds for your dog or a pink cartoon balloon bunny to explore just because it’s delightful.

(5) The First Human Trial to Reverse Aging Begins

Ryan Cross for Endpoints News

In 2020, David Sinclair’s Harvard lab restored vision in blind mice by partially reprogramming their cells to a younger state. On Monday, the FDA gave Sinclair’s company, Life Biosciences, the green light to try the same thing in humans.

The IND clearance for ER-100 marks the first-ever human clinical trial of partial epigenetic reprogramming, a technique that uses three of the four Yamanaka factors (Oct4, Sox2, and Klf4) to reset age-associated epigenetic markers while keeping cells committed to their original function. By excluding c-Myc, the factor associated with uncontrolled growth, Life Bio aims to thread the needle between rejuvenation and tumor risk that has historically spooked regulators.

The Phase 1 trial will enroll patients with open-angle glaucoma and non-arteritic anterior ischemic optic neuropathy (NAION), diseases where retinal ganglion cells die and can’t regenerate. Sinclair’s lab showed in 2020 that OSK gene therapy could restore vision in aged mice with glaucoma. Now we find out if it works in people.

“Since Shinya Yamanaka first showed that cellular age could be reset, the potential of translating that biology into real medicines has been enormous yet has previously remained largely theoretical,” Life Bio CEO Jerry McLaughlin said. “This IND clearance is a major inflection point for the longevity and aging biology field.”

The eye was a strategic choice. Life Bio knows how to deliver gene therapy there safely, and the impact of restoring vision is immediately measurable. But Chief Scientific Officer Sharon Rosenzweig-Lipson made the broader ambition clear: “We can do it almost anywhere. Whatever age-related diseases are most important to you, those are the ones we’re thinking about.” The company is already developing ER-300 for liver disease.

What makes this different from the $200 billion supplement industry or the parade of failed Alzheimer’s drugs is the mechanism. Life Bio isn’t treating downstream symptoms. They’re attempting what Rosenzweig-Lipson calls a “near total reset,” taking corrupted cellular software and restoring it to factory settings.

Sinclair has been saying for a decade that aging is a disease and that disease is treatable. His lab proved the concept in mice, then monkeys. He’s also a controversial figure, accused by many of being over-promotional and over-extrapolating from animal studies. Now, the FDA is giving him a chance to prove it in humans. As a human, I hope he’s right.

(5a) Silver Linings Puts a Price Tag on Not Dying

Raiany Romanni-Klein, Richard Evans, and Jason DeBacker in silverlinings.bio

If Sinclair is right, along with the others who are working to defeat aging, the economic benefits will be immense.

On Wednesday, I moderated a panel at Deep Tech New York hosted by AlleyCorp with Gearworks CEO Raquel Schreiber and Superabundance co-author Gale Pooley. In Superabundance, Pooley and his co-author make the case that, contra Ehrlich, resource abundance increases with population. More people + freedom to innovate = abundance.

Normally, we assume that this means more new people. Higher birth rates. But in this beautiful study, Raiany Romanni-Klein, Richard Evans, and Jason DeBacker found that extending the healthy and productive lives of those of us already living would dramatically grow the economy.

Slowing brain aging by just one year would add $201 billion annually to U.S. GDP. Delaying biological aging by five years would add $2 trillion per year and nearly 7 million lives saved by 2050. These are two of the outputs of Silver Linings, an open-source project that finally puts hard numbers on what longevity research is worth.

The core insight is almost embarrassingly obvious once you see it, and one that Pooley would agree with: working-age adults are the most valuable resource on Earth. The ceiling of every economy is set by the health and number of its working-age population. And yet the U.S. spends just 0.54% of its NIH budget on the biology of aging. Alzheimer's research alone gets 8x more funding, despite producing discouraging results for decades.

Silver Linings simulates different research breakthroughs and shows their ROI:

Slow brain aging by 1 year: $201B/year, $8.9T long-term, 268K lives saved

Slow reproductive aging by 1 year: $9B/year, $9.3T long-term, 391K lives

Double organ supply: $50B/year, $3.2T long-term, 529K lives

Make 41 the new 40: $408B/year, $27T long-term, 1.72M lives

The project maps the market failures that explain why we’re underinvesting: pharma profits more from lengthening unhealthy life than improving overall health; insurers don’t invest in our future health because we can switch plans; disease is easier to measure than wellness. And it proposes solutions like an Innovation Accelerator modeled on In-Q-Tel and Advance Market Commitments for aging biomarkers.

The interactive model lets you input your own assumptions. Skeptical only 30% of the population would benefit? Adjust it. Think breakthroughs will take 20 years? Plug it in. The returns still dwarf any plausible investment.
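To make the shape of that math concrete, here’s a toy back-of-envelope version of a longevity-ROI calculation. Every parameter below is a hypothetical placeholder for illustration, not a number from the Silver Linings model:

```python
# Toy longevity-ROI sketch. All inputs are hypothetical placeholders,
# NOT Silver Linings' actual parameters.
def annual_gdp_gain(workers, gdp_per_worker, extra_productive_years,
                    career_years, share_benefiting):
    """Extra annual GDP from extending productive life, amortized over a career."""
    gain_per_worker = gdp_per_worker * (extra_productive_years / career_years)
    return workers * share_benefiting * gain_per_worker

# ~168M US workers, ~$150K output per worker, 1 extra productive year
# over a ~45-year career, with 30% of workers benefiting.
gain = annual_gdp_gain(168e6, 150e3, 1, 45, 0.30)
print(f"${gain / 1e9:.0f}B per year")  # prints "$168B per year"
```

Dial `share_benefiting` down or `career_years` up and the figure shrinks accordingly, which is exactly the kind of sensitivity check the interactive model invites.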

What I love most is the framing. Evolution optimized humans for reproduction, not longevity. American lobsters get stronger and more fertile with age. Naked mole-rats experience no cognitive decline. The Aldabra giant tortoise lives to 200 without ever going for a run.

We’re not doomed to our current aging trajectory. But we do need to fund the alternative if we want to see it come true.


EXTRA DOSE (for not boring world members)

  • Multiscale Causality and the Meaning Crisis

  • A new home robot

  • Tesla shows off its electric stack

  • Standard Nuclear raises $140M

  • Swedish trial shows AI helps detect cancer

  • Pipedream going live in Austin

  • Christian joins a16z AD

Read more

Weekly Dose of Optimism #177

2026-01-23 21:46:57

Hi friends 👋 ,

Happy Friday!

We are back to our regularly scheduled Friday slot after yesterday’s optimistic cossay with Ross Garlick on what could go right in Venezuela and what it would take.

We have a lot of great stuff, including the best thing I’ve ever read by the best writer in biotech, promising cancer vaccine results, Zipline money, ocean plastic removal, internet backbone, and a bunch of bonuses for those of us who are going to be snowed in this weekend. Stay safe and warm out there, and…

Let’s get to it.


(1) Going Founder Mode on Cancer


If you read just one thing from the Dose this week, please make it this.

I am a founding member of the Elliot Hershberg Fan Club. He was not boring capital’s biotech partner and remains a great friend and the person I turn to with any biotech question I have now that he’s running Amplify Bio. I love most of what he writes. But I don’t think any of it comes close to this one. I’ve been waiting for it.

The last time Elliot was in New York, we took a walk around Washington Square Park and when the conversation turned to cancer therapeutics, he told me about GitLab founder Sid Sijbrandij’s story for the first time. His point was: this is what one superhuman billionaire is doing to fight his cancer today, and I think something like it will be available to everyone to fight cancer in the future.

Now, he’s written that story down, and it’s better than I expected. It’s the story of Sid’s extraordinary fight against osteosarcoma after exhausting the standard of care.

Sid fought cancer and beat it, only for the cancer to return in 2024. After doctors told him, basically, “You’re done with standard of care, maybe there is a trial somewhere, good luck!”, Sid pulled out all the stops to cure himself.

He put together a 1,000+ page Google Doc of health notes. He obsessively gathered information from every diagnostic he could get his hands on, run as often as possible, and built systems to solve problems nobody else would solve for him. Sid assembled a SWAT team, used single-cell sequencing to identify FAP-expressing fibroblasts in his tumor, flew to Germany for experimental radiotherapy, and is now in remission. He won.

The piece is part profile, part science, part fight against a Kafkaesque healthcare system, and part glimpse into a future where personalized oncology actually works, where AI agents order diagnostics, bioinformatics pipelines design custom vaccines, and the total cost of treating early-stage cancer the way Sid did drops dramatically.

From an optimism perspective, it’s a twofer:

  1. It’s possible to beat cancer through personalized therapeutics.

  2. One extremely dedicated person can solve almost anything.

Just read it.

(2) Moderna, Merck Report Positive Results from Cancer Vaccine Study

Nicholas G. Miller for The Wall Street Journal

In the meantime, generally available cancer drugs continue to get better.

This week, Moderna and Merck announced results from a five-year Phase 2b trial in melanoma patients showing that Moderna’s cancer vaccine, in combination with Merck’s immunotherapy, Keytruda, reduced the risk of death or recurrence by 49% versus Keytruda alone. That is a massive improvement, and another big sign that mRNA vaccines are going to be a key part of the arsenal in the fight against cancer. The companies have eight trials in Phase 2 or 3 across multiple tumor types beyond just melanoma.

Relatedly, long-term not boring readers may remember Keytruda from our Deep Dive on Varda. The drug is one of the best-selling of all time and was the single best-selling pharmaceutical in the world in 2024, with $29.5 billion in sales. It is also one of the drugs with the highest price per kilogram, at a whopping $194 million per kg.
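For a sense of how little physical product those two figures imply, a quick back-of-envelope check:

```python
# How many kilograms of Keytruda did $29.5B in 2024 sales represent,
# at the quoted ~$194M per kg?
sales_usd = 29.5e9
price_per_kg_usd = 194e6
kg_sold = sales_usd / price_per_kg_usd
print(round(kg_sold))  # prints 152 -- roughly one adult's bodyweight, worldwide
```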

The drug is out of this world, literally. In 2017, Merck conducted a mission on the ISS to explore the crystal properties of Keytruda in order to improve crystallization. While the research has not been commercialized, the hope is that a tighter distribution of smaller particle sizes would allow for self-administration at home instead of the current process of going into the clinic for IV dosing.

Daily Synchronicity: after I wrote this, Scott Manley posted a video on just this topic.

The future is bright. It’s never been a worse time to be cancer.

(3) Zipline Raises $600M at $7.6B and Makes 2 Millionth Delivery

Zipline has been one of our favorite companies to write about in the Dose, for three reasons.

First, they make autonomous flying drones. They’re building the future we want to live in.

Second, they started out (and continue) by using those drones to deliver drugs to hard-to-reach places in Africa and have saved or improved thousands of lives. Great for humanity, and a smart strategy to get flight hours in before taking on the US.

Third, the future of delivery is going to be unrecognizable, and it’s going to make the ground better, too. Drones are faster and cheaper than cars or electric bikes. Order something, get it whizzed to your house. That also means fewer delivery vehicles clogging up the roads and fewer electric bikes trying to kill you.

Now, they have a fresh $600 million to pull that future forward faster, including the launch of a new market, Phoenix. To do it, they’re going to need a lot of drones. Last year, I got to tour the facility where they design, test, and manufacture new Zips. Molly went behind-the-scenes on Sourcery so now you can, too.

(4) The Ocean Cleanup is Now Intercepting 2-5% of Global Plastic Pollution

Boyan Slat

A non-profit called The Ocean Cleanup is working to take and keep plastic out of the ocean, and it’s on its way towards its goal of removing 90% of floating ocean plastic pollution by 2040. Founder Boyan Slat announced The Ocean Cleanup removed 27,385 metric tons of plastic last year, and is intercepting 2-5% of global plastic emissions. That's roughly the weight of nearly three Eiffel Towers.

Slat was 16 when he went scuba diving in Greece and saw more plastic bags than fish. He made it a high school science project. His 2012 TEDx talk went viral. He dropped out of aerospace engineering and, starting with €300 of saved pocket money, founded a nonprofit to fix the problem, eventually raising $2.2M from 38,000 donors in 160 countries.

A decade after the TEDx talk, TOP was pulling out serious plastic: 1M kg by early 2022, 10M kg by April 2024, 50M kg by January 2026. System 03 now cleans an area the size of a football field every five seconds. Their Guatemala river site, which nearly failed when anchors washed out in 2022, removed 10M kg in its first year after they relocated and redesigned. "When people say something is impossible," Slat once said, "the sheer absoluteness of that statement should be a motivation to investigate further."

Slat designed TOP to put itself out of business, which is perfect, because when he’s done on macroplastics, we need him to get to work on microplastics.

(5) Blue Origin Announces TeraWave

Blue Origin

While everyone has been talking about SpaceX’s IPO plans, Jeff Bezos quietly unveiled a second satellite constellation.

TeraWave is not for consumers. It’s enterprise infrastructure: 5,408 optically-interconnected satellites across LEO and MEO, designed to deliver symmetrical upload/download speeds of up to 6 terabits per second anywhere on Earth.

For context, Starlink’s consumer service maxes out around 400 Mbps, but that comparison isn’t perfect. If you recall from Cable Caballero that Tier 1 “backbone” providers build fat pipes and wholesale to Tier 2 middlemen or ISPs, who offer the internet to customers at ~100 Mbps to 10 Gbps, TeraWave is like that Tier 1 backbone provider, but beaming down from space.

TeraWave is targeting ~100,000 enterprise, data center, and government customers who need redundant, high-capacity connectivity where fiber is too slow, too expensive, or impossible to deploy.

The architecture is clever: 5,280 satellites in LEO handle the RF links (up to 144 Gbps per customer via Q/V-band), while 128 satellites in MEO provide the optical backbone for the 6 Tbps throughput. Deployment starts Q4 2027, likely on Blue Origin’s New Glenn.
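A quick sanity check on the announced figures (the satellite counts and link speeds are as reported above; the customers-per-pipe ratio is my own rough upper bound):

```python
# TeraWave constellation math, using the figures from the announcement.
leo_sats = 5280   # RF layer (Q/V-band links to customers)
meo_sats = 128    # optical backbone
print(leo_sats + meo_sats)  # prints 5408, matching the announced total

# Rough upper bound on max-rate customers a single 6 Tbps backbone could
# saturate at once, ignoring protocol overhead and oversubscription:
backbone_gbps = 6 * 1000       # 6 Tbps expressed in Gbps
per_customer_gbps = 144        # max per-customer RF link
print(round(backbone_gbps / per_customer_gbps, 1))  # prints 41.7
```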

Bezos already has Amazon Leo (née Project Kuiper) for consumers and small businesses. That’s the Starlink competitor with ~180 satellites up and a 2026 commercial rollout planned. TeraWave goes after a different market entirely: the hyperscale backbone. It’s space-based dark fiber for enterprises, not broadband for RVs.

The timing is pointed. SpaceX has hired four investment banks to take it public in what may be the largest IPO of all time, with a potential valuation north of $1 trillion. Starlink is the business. 9 million subscribers, 9,400+ satellites, 70% of SpaceX’s revenue, adding 20,000+ customers per day. Musk wants to raise tens of billions of dollars in an IPO to build orbital data centers and, eventually, satellite factories on the Moon. He talked a little bit about the vision in a surprise Davos talk.

At the end of the talk, Elon said, “My last words would be, I would encourage everyone to be optimistic and excited about the future, and generally I think for quality of life it is actually better to err on the side of being an optimist and wrong rather than a pessimist and right.”

We couldn’t agree more.

BONUS (for paid not boring world members): Brex / Ramp, Levin, Stewart Brand

Read more