
I’ve been a venture capitalist since 2008. Before that, I was a PM on the Ads team at Google and worked at Appian.

RSS preview of Blog of Tomasz Tunguz

The Robotic Tortoise & the Robotic Hare

2026-03-18 08:00:00

I set up a race today between two robots.

A local model on my Mac on the left vs. Claude Code on the right, both tasked with building a payment app on Stripe’s new Tempo blockchain. Same prompts, same task, side by side.

Opus 4.5 is about 20% smarter than Qwen 35B on benchmarks. And it’s likely 50x larger. The hare should have won. It didn’t.

The local model finished in 2 minutes. Claude took over 6. I asked Claude to score both outputs: local model 6.5, Claude 4.5.

Video plays at 2x speed.

With 3x faster responses, I could add an extra cycle: “critique the plan and address the critiques.” In the time the hare was still thinking, the tortoise ran another lap.

| Prompt | Local (Qwen 35B) | Claude (Opus 4.5) |
|---|---|---|
| Research Tempo & create plan | 20.9s | 55s |
| Critique the plan | 16.5s | 1m 35s |
| Which language is best? | 16.5s | 1m 35s |
| Research feedback online | 48.9s | 2m 35s |
| Save implementation plan | 15.4s | 44s |
| **Total** | **~2 min** | **~6 min 24s** |

Faster responses mean more rounds of revision before a meeting ends or attention drifts. It’s different for agentic coding workflows & complex codebases, where slower work may lead to better outcomes. But for everyday tasks, faster models can enable tighter feedback loops. Tighter loops can produce better outcomes.
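The feedback-loop arithmetic can be made concrete with a minimal sketch using the race’s rough timings (local ~120s per full task, Claude ~384s). The 10-minute attention window is an assumed illustration, not a number from the post:

```python
def rounds_in_budget(budget_s: float, latency_s: float) -> int:
    """Count how many full critique-and-revise cycles fit in an attention window."""
    return int(budget_s // latency_s)

# Timings from the race: local model ~120s per task, Claude ~384s (6 min 24s).
# The 10-minute budget is a hypothetical attention window.
budget = 600
print(rounds_in_budget(budget, 120))  # 5 cycles for the tortoise
print(rounds_in_budget(budget, 384))  # 1 cycle for the hare
```

At these latencies, the slower model gets one shot while the faster one gets five rounds of critique and revision before attention drifts.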

We don’t always need the smartest AI to get the job done.

The 12x Bet on AI

2026-03-17 08:00:00

For every dollar hyperscalers earn from AI today, they’re spending twelve dollars to build more capacity.1 That’s the bet embedded in $575 billion of capital expenditure this year.2

How fast does AI revenue need to grow to pay back this data center mortgage?

Hyperscaler CapEx vs Cash from Operations 2016-2026

From 2020 to 2024, hyperscalers issued an average of $20 billion in bonds annually.3 In 2025, that jumped to $96 billion. In 2026, it will reach $159 billion.3 Morgan Stanley projects $1.5 trillion over the next few years.4

Amazon, Microsoft, Alphabet, Meta, & Oracle will spend 90% of their operating cash flow on AI data centers in 2026, up from a historical average of 40%.2

Alphabet issued a century bond, the first by a tech company since Motorola in 1997.5 The debt matures in 2126. Who knows what AI will look like then, or whether Alphabet will exist to repay it.

Hyperscaler Bond Issuance 2020-2026

What assumptions justify this borrowing?

The depreciation schedules encode the bet. Most hyperscalers depreciate AI infrastructure over five years.6 At 60% gross margins & 5% borrowing costs, a 5-year payback on $431B in AI capex requires $180B in annual revenue.7 Current AI revenue is $35 billion.1 They’re underwriting 5x growth in five years.

AI Infrastructure Payback Period by Revenue Scenario

Nvidia’s stated goal is to release new GPU architectures every twelve months, which will compress depreciation cycles. If chips become obsolete in three years rather than five, the required annual revenue jumps to $276B, 7.9x current levels.8
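The payback arithmetic in footnote 7 can be sketched directly. The function below simply restates the post’s stated assumptions (straight-line depreciation, 5% borrowing cost, 60% gross margin) for both depreciation scenarios:

```python
def required_revenue(capex_b: float, life_years: float,
                     rate: float = 0.05, gross_margin: float = 0.60) -> float:
    """Annual AI revenue needed to cover straight-line depreciation plus interest."""
    depreciation = capex_b / life_years  # capex spread over the chip's useful life
    interest = capex_b * rate            # borrowing cost on the full capex
    return (depreciation + interest) / gross_margin

current_revenue = 35  # ~$35B of AI revenue today
for life in (5, 3):
    need = required_revenue(431, life)
    print(f"{life}-year life: ${need:.0f}B required, {need / current_revenue:.1f}x current")
```

This reproduces footnote 7’s $180B at a five-year life and, allowing for rounding, the roughly $276B figure at three years.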

As Michael Mauboussin writes, there’s information in prices. The depreciation schedules tell us what hyperscalers believe: AI revenue will grow 5x within five years. The debt markets are betting alongside them.


  1. Asymco: The Most Brilliant Move in Corporate History?

  2. Bank of America Hyperscaler Capex Estimates

  3. CNBC: Big Tech’s AI Bond Binge

  4. Fortune: Google, Meta, & Oracle’s $1 Trillion Borrowing Spree

  5. Bloomberg: Alphabet Plans Tech’s First 100-Year Bond Since Dot-Com Era

  6. Hyperscaler Depreciation Policies

  7. Calculation: $431B capex ÷ 5 years = $86B depreciation + $22B interest (5% on $431B) = $108B annual cost. At 60% margin, requires $180B revenue ($108B ÷ 0.60).

  8. This analysis focuses on direct AI revenue & does not account for internal AI consumption (Copilot, Search, recommendations, internal engineering) that generates value through existing revenue streams. Older chips may retain residual value for inference even after becoming obsolete for frontier training.

You Are Responsible for Your Agent

2026-03-15 08:00:00

“What happens when a new employee brings their agent to work?”

An executive asked this recently. Imagine a few years from now: a student graduates, having trained their own agent through university. It knows everything they’ve learned, every paper, every problem solved. Day one, they bring it to work.

It’s like bring-your-own-device circa 2009. The iPhone launched & nobody wanted corporate BlackBerrys1 anymore. IT scrambled to adapt.

But a rogue phone couldn’t sign contracts. A rogue agent can.

Amazon just learned this at scale. $6.3 million in lost orders. 99% order volume drop across North America. Four severity one incidents in one week.23

Amazon’s AI coding assistant contributed to at least one major production incident. The response: a 90-day safety reset with mandatory two-person review for all code changes.

An internal memo admitted what everyone implicitly knows:

“Best practices and safeguards around generative AI usage haven’t been fully established yet.”3

Companies can’t hide behind hallucinations. Utah’s AI Policy Act4 eliminates the hallucination defense:

“It is not an affirmative defense to assert that the GenAI tool made the violative statement or undertook the violative act.”

Newer and larger models are smarter and more reliable.5 But they fail unexpectedly, and there is no relationship between a model’s size and how its failures change over time. AI-generated code creates 70% more issues than human code.6

The TRUMP AMERICA AI Act would create explicit liability pathways - allowing the US Attorney General, state AGs, & private plaintiffs to sue AI developers for defective design & unreasonably dangerous products.7

That new hire’s personal agent? The company bears liability for its mistakes. The contracts it signs, the code it deploys, all of it lands on the company.

Like a dog or a device, you are responsible for your agent.

Hello, Claude? Are You There?

2026-03-13 08:00:00

Feb 2025: "We've been growing a lot and are out of GPUs." - Sam Altman, OpenAI CEO 1

Mar 2025: "We are still waving off customers or scheduling them out into the future. This is a situation that we have not seen in our history." - Safra Catz, Oracle CEO 2

Oct 2025: "You may actually have a bunch of chips sitting in inventory that I can't plug in. I don't have warm shells to plug into." - Satya Nadella, Microsoft CEO 3

Feb 2026: "What keeps us up at night… The top question is definitely around capacity. All constraints — be it power, land, supply chain constraints — how do you ramp up to meet this extraordinary demand?" - Sundar Pichai, Alphabet CEO 4

Feb 2026: "There's no relief as far as I know. No relief until 2028." - Lip-Bu Tan, Intel CEO 5

What happens when your AI doesn’t answer?

Everything is in short supply. It’s no longer just GPUs. It’s power. Data centers. Memory. CPUs.

If there’s no relief for six more quarters, perhaps it’s time to plan for a world where inference isn’t freely available on-demand.

Inference prices, which have been static, will rise. Subsidies will be harder to justify.

Enterprises will need to rationalize workloads, deciding which teams receive state-of-the-art models & which don’t. Not every CRM update requires a trillion-parameter frontier model.

Inference rationing normalizes. Marketing receives this much, sales receives that much, software engineers probably receive a lot more.

Constraint will be the mother of invention. Companies will optimize what they have, adopt open source where they can, and likely move to smaller models for many workloads.

$ Hello, Claude. Are you there?

Waiting until 2028...

One Billion Lost Packages

2026-03-12 08:00:00

In September 2024, Hurricane Helene flooded Baxter International’s plant in Marion, North Carolina, which produced 60% of the nation’s IV fluids. Within a week, more than 80% of U.S. healthcare organizations reported shortages. One plant, one flood, one week.

That disruption made headlines. Most don’t. Eighty-five million packages arrived damaged in the U.S. in 2024, up 30% from the prior year, costing businesses $4 billion.

Sean McCarthy saw those failures accumulate during his years at Amazon Shipping, where he was one of the early hires. The investigation process never varied. Query the warehouse management system, often two decades old. Cross-reference the carrier portal. Call the driver, who doesn’t pick up. File a claim: seventeen fields. Four hours pass. Sometimes the problem gets solved.

The obstacle was fragmentation. A single shipment can touch 40 to 60 processes across multiple vendors. Connecting them would mean hundreds of bespoke integrations. The project never got funded.

Sean partnered with Henry Ou, who led ML teams at Apple and built ranking systems at ByteDance. Together they founded BackOps, which deploys AI agents that read emails, click through portals, call drivers, and file claims. When a customer reports a problem, BackOps traces it across every system involved, escalating to a human only when a judgment call is required.

Watch the video

We’re leading BackOps’s $26 million Series A.

The product works in two stages. Employees record their screens while solving problems; BackOps converts those recordings into automated workflows. Then Relay, the automation engine, runs continuously: filing claims, initiating reshipments, responding to customers.

Customers report 93% faster response times and 60% time savings. BackOps files 100% of eligible carrier claims automatically. The platform serves a top global automaker, a leading retailer, major grocery chains, and industrial suppliers.

Sean and Henry are targeting a $3.5 billion market growing 13% annually. The bet: AI agents can connect systems that were never designed to talk to each other. So far, the connections hold.

If you’d like to learn more, reach out to Sean.

Read more from Sean, Theory partner Andy, and Axios.

The Marginal Hire

2026-03-11 08:00:00

AI eliminates the marginal hire.

Tech job openings are down 45% from the 2022 peak, but up 16% since the start of 2026 - from 227k to 264k. Why the narrative violation?

Open tech jobs from TrueUp showing 45% decline from peak but 62% recovery from low

Companies are hiring again, just fewer people than before. A reset to a lower baseline.

A team that would have added two engineers to hit next year’s roadmap now ships with the headcount they have. Cursor, Claude Code, Copilot close the gap. The job postings never go live. The offers never extend.

The Workforce Transition - How AI reshapes company headcount over time

Inside most organizations, headcount stays flat. No layoffs. No restructuring announcements. Just fewer new hires than planned.

Block’s decision to slash 40% of its workforce showed what happens when a company acts on this logic all at once. Jack Dorsey explained: “Intelligence tools we’re creating & using, paired with smaller & flatter teams, are enabling a new way of working which fundamentally changes what it means to build & run a company.”

Most companies won’t restructure so dramatically. Until an economic shock, a missed quarter, or pressure from the board forces the question. What AI made possible, AI makes necessary. The restructuring that might have happened gradually over five years happens in one quarter.

The seismic shock isn’t coming out of nowhere. It’s building invisibly, one unposted job at a time.