Not Boring, by Packy McCormick

16 Lessons on Selling (and Life) from My 5-Year-Old

2026-01-18 23:01:47

Hi friends 👋,

Happy Sunday. Earlier this week, X announced a $1 million article prize. I don’t normally write the kind of things that could win an X Article Contest - listy things, full of life lessons and advice. And then, wouldn’t you know it, my son Dev learned how to sell yesterday morning, and as he did, he dropped wisdom bombs for me to write down. We ended up with sixteen of them.

Now, they’re on X (go like, comment, and share - we need the $1 million, Dev has a world to build).

I really liked how it came out, so I wanted to share it with you all too. It’s kind of a co-written essay with a 5-year-old, who I hope becomes a more frequent contributor. I think I’m going to write more short things and share them in paid not boring world, so join us if you want the full spectrum of not boring, means to meaning.

Subscribe now

Let’s get to it.


16 Lessons on Selling (and Life) from My 5-Year-Old

This morning, my five-year-old son made his first two-dollar sale and dropped sixteen lessons on selling and life that are more practical than any of the slop you’ll find on LinkedIn.

I’ll share them with you, but first, I need to tell you about Dev, about his Donut Hats, and about his world.

One day when Dev was three, he told me that he wanted to build worlds.

Real ones. Big ones. Planets. Like, actual, physical planets.

“Then you’re going to have to study, buddy.”

“What do I need to learn?”

“Math, physics, engineering, business. No one’s ever built a world before, so you’re going to have to study really hard.”

And then he… did.

He asked me for math problems, then harder ones, then harder ones. Kid does 90 minutes of Russian Math every Sunday and loves it.

Physics, he always liked. Gravity was one of his first words, and one of the first concepts he grokked. “Why’d the cup drop bud?” “Gravity.” We read a little bit of Richard Feynman’s lectures, and he stayed with me, but I figured that was probably taking it too far.

Engineering, he loved. Most kids do. Magnetiles in particular, huge structures. Every night, we read a couple of pages from The Way Things Work Now, which my dad always tried to read to me but which I turned down, because I didn’t have worlds to build with the knowledge.

Throughout, he’d pepper me with questions. What materials would we need to make the world? How would we get water to the world? How would we grow trees on the world? Some I could answer; a bunch we had to ask ChatGPT.

Two stories in particular blew my mind, though: logistical things, which are important to get right if you actually want to build worlds.

One time, we were sitting by the pool on vacation, not talking about worlds at all, when he turned to my wife Puja and me and asked if we knew any companies that made houses. He figured he’d need houses if people were really going to live on this world, and, somehow, that while he would be fully capable of building the world itself, there would probably be companies that were already quite good at homebuilding who he could pay to handle that aspect of the plan. He asked the same thing about umbrellas.

Another time, we were talking about how to get people to and from the world. I’d met a company that was making Single Stage to Orbit rockets, I told him, and maybe they’d be good because they’d just take off from a normal runway and land on one too. He thought about it for a second and said, “No, we should probably use Starship, because they’ve actually flown before.”

The thing about his growing brain is that it’s always churning. Usually, he doesn’t mention the world for weeks, and then out of the blue, he’ll say something about it, or ask a question he’d clearly been chewing on for a while.

One big question, when you want to build a real, big, actual, physical world, is where you’re going to get the money. We back-of-the-enveloped it and figured he’d need about a trillion dollars. I told him about investors. He eenie-meenie-minie-moed and landed on his three-year-old sister, Maya, as a lead investor. Implausible, for now, but the kid has vision and Maya’s pretty good herself, so not, in the opinion of one dad, impossible.

I thought the case was closed. It wasn’t. His brain kept churning.

So one night, earlier this week, I came home to find Dev and Puja at the kitchen table. He had a pencil in his hand and a piece of blue construction paper in front of him. They were making a business plan for his new company, Donut Hats.

I guess that afternoon, he took some Play-Doh, shaped it into a ring, taped it up with blue masking tape (kid loves tape), and realized he might be on to something. He put the first donut hat in a construction-paper envelope, put the envelope in a box, and taped that shut, too, for safe keeping. Then he got to work.

Puja and Dev were already pretty deep. They’d figured out a price ($20, but $10 for family members), estimated costs (surprisingly cheap if you count his child labor at $0), gross margins ($7.65 per at F&F prices), and were starting to work on a marketing plan. Kids would probably be the right target, he thought, but their parents had the money. He kind of just intuited this stuff.

When I asked him why he was starting a company, he basically recited Choose Good Quests and The Company as a Machine for Doing Stuff back at me.

“I want to sell a lot of Donut Hats to make money that we can use to build my world.”

Over the next few days, he made a total of five Donut Hats in different colors and tapes. My favorite is the Orange and Green in Clear Packing Tape, but if that’s not your style, there’s probably one for you, too.

That night, he rolled up the business plan (he loves rolling things up) and placed it on top of the Donut Hat Box, got into bed, and told me, “I’m so excited I finally get to run a company,” before drifting off to dream, I’m sure, about running a Donut Hat business.

Then came the hard part, genetically. I hate selling. I like writing plans. I like making things. I like marketing, but making a direct ask creeps me out. I told him that he would need to sell.

The next morning, he and Maya tried to sell from our stoop. Maya is not afraid of selling. She marched outside and started yelling “Get your Donut Hats! Ten BUCKS!” at the top of her lungs. But it’s winter, and it was 7:15am, and the only people out were harried ones scuttling to work. That wouldn’t do.

If we were going to sell to kids (via their parents), we would need to go to the playground, which we did this morning in a light 8:30am snow. We brought all five Donut Hats in a bag, and laid them out on a built-in table/chess board. There were only two other parent-kid combos there, and neither looked particularly in the mood to spend, so Dev half-heartedly and Maya full-throatedly yelled, “Get your Donut Hats! Ten BUCKS!” No one heard. It’s a big playground.

But then, a dad and his son came in. They headed to the soccer field and started kicking. I told Dev to go introduce himself and ask if they’d like to buy a hat. He said he was nervous. He didn’t want to go. And just then, providentially, the son kicked the ball over the fence. An opening. We grabbed it and threw it back over. They owed us one. I told him to go again, he asked me to come with him (I was as nervous as him, selling to strangers just minding their own business), we walked around the fence, and Devin, Donut Hat in hand, asked, “Would you like to buy a Donut Hat?”

The dad asked to take a look. He put it on his bald head. And he realized immediately that it wasn’t going to work. “How does the Donut Hat stay on my head? I’d imagine it would fall off if I moved. No thank you.”

HUGE. That was the first of what will be many, many No’s in Dev’s life, and he handled it gracefully. I told him it was awesome. We’d gotten our first customer feedback. I pulled out Apple Notes, titled it “Donut Hat feedback,” and told him we should write down all of the feedback we get so that we could go home and improve the product.

We wrote down:

  1. Could fall off head.

While we were out on our soccer field sales call, the main playground started filling up, and playing there, right by our table, were a dad about my age and a son about Maya’s. Easy targets. Dev introduced himself, and asked, “Would you like a Donut Hat?” Father and son looked intrigued. They thought they’d just hit the Free Donut Hat Lottery. I whispered to Dev to tell them that he was selling them, which he did, and to which the dad responded, “How much?”

$10.

$10 is too expensive.

Dev came back at $5. The son, meanwhile, sensing a negotiation, deployed the Crazy Guy strategy. He threw out $6. Then he threw out $45. Then $15. Then $6 again. We waited, giving him the leash to walk himself right into an empty Piggy Bank.

But remember the market insight. The kids want the Donut Hats. The parents have all the money. And the dad wasn’t having it. While the son perused the goods, the dad negotiated for sport. Dev even offered our worst-made, pure Blue Tape Donut Hat at $3. But you could see in the dad’s eyes, he wasn’t going to buy. Finally, they walked away.

  2. Too expensive.

MORE parents had come in, though, and a lot of them. One dad made the mistake of putting his daughter in the swing. He was a sitting duck. So Dev asked me to come with him to the swings.

“Hi I’m Devin, would you like to buy a Donut Hat?” He held out the goods, teasing.

“Oh that’s cool,” the dad, hooded by his sweatshirt but hatless, said. “But I’m not a big hat guy.”

Dad, write it down.

  3. Not everyone’s a big hat guy.

But (and if you’re not a parent, you wouldn’t realize this), once your kid is in the swing, your kid is in the swing. You’re not going anywhere. You’re trapped. Dev just hung around while I pushed Maya on the swing. We weren’t going anywhere either.

Dev told him we had more colors. I threw in that it might look good under his hood. Dev kind of looked at the guy as only a little kid with big dreams can, and… he cracked.

“I don’t have $5, but would you do it for $2?”

Dev looked at me. I shrugged. It was his call.

“OK you can have it for $2.”

Dev let him pick his Donut Hat. Wouldn’t you know it, he picked the Blue Tape. Dev handed it over. The dad handed him two crumpled $1’s.

First sale! Dev was ecstatic.

Ghiblified to keep my kid’s face off the internet.

And he was hooked on selling, whatever the price.

He was in luck. Social proof is a hell of a drug.

The mom pushing her daughter on the swing next to Maya’s saw the dad buy his daughter a Donut Hat and she wanted to buy hers one, too. She looked in her cell phone case / wallet, realized she had $1, and offered it to Dev. Take it or leave it, in nicer words.

He said yes. Two sales. Three dollars. We were HUMMING.

Something changed in Dev. He stopped being nervous and started to love the chase.

What about the dad in the swing on the other side? “Would you like to buy a Donut Hat?”

Sorry, I don’t have any cash.

  4. No cash

Recall, however, that it was a big playground, and while we were selling, it was filling up even as the snow picked up. Dev went out into the big playground by himself, Donut Hat in hand, and started approaching people.

Little man out there hustling

There were so many people spread over such a large playground that when Dev came back next, having sold zero more Donut Hats, instead of feedback, he started dictating sales tips.

  5. We need a map of the playground to see where we can sell to people.

Got it. He went back out. More No’s. Whatever. A no is the first step on the way to yes. He came back.

  6. Come when it’s not too cold.

Speaking of which, Maya was getting cold, and she wanted to go home.

And as we walked home, Maya and I on the sidewalk, Dev on air, he kept dictating, asking me to add notes to what he’d started calling “The Setback List.”

At the University of Virginia, Ian Stevenson spent decades documenting cases of children seemingly inhabited by old souls, including:

Starting at age 2, James Leininger began having vivid nightmares about a plane crash, eventually providing specific details about being a WWII pilot named James Huston Jr. who flew off the USS Natoma Bay and was shot down over Iwo Jima. His parents, initially skeptical, verified the details through military records and located Huston’s surviving sister.

Shanti Devi was a 4-year-old in India in the 1930s when she began describing a previous life as a woman named Lugdi Devi who died in childbirth in a town she’d never visited. When researchers took her there, she reportedly recognized her “former husband” and navigated to her “previous home.”

At age 5, Ryan Hammons of Muskogee, Oklahoma, told his mom “I used to be somebody else.” He remembered being a Hollywood extra and talent agent, and when presented with a number of images, identified Marty Martyn in a still from the film Night After Night. Ryan remembered over fifty specific, later-confirmed details about Martyn’s life, and complained that he “didn’t see why God would let you get to be 61 and then make you come back as a baby.” Martyn’s death certificate said he was 59 when he died, but when Stevenson’s successor, Jim Tucker, researched further, he found the death certificate was wrong. Martyn was actually born in 1903, making him 61 at death, just as Ryan claimed.

All of which is to say, maybe it shouldn’t be so surprising that Dev dropped so much wisdom in items seven through sixteen on The Setback List, but it still blew me away to hear so much wisdom out of the mouth of such a little man.

These are the lessons that Dev McCormick learned about sales on a Saturday morning on the playground in Brooklyn, dictated in random spurts over the next hour:

  7. Always have a backup plan in case things don’t work.

  8. Even if it doesn’t look fun, you should still do it.

  9. You shouldn’t go if it kind of looks like a storm.

  10. You need to remember everything people say because what if you don’t remember that you have a setback list?

  11. People are nicer than you expect.

  12. If someone looks like a bad guy you shouldn’t go to them.

  13. You shouldn’t be nervous because it’s most likely they’ll say no if you’re nervous.

  14. Maybe the importantest one: you definitely shouldn’t give up, because what if people say hehehe to you, that’s not a really good feeling.

  15. Grownups shouldn’t come with you to help because it’s most likely they’ll buy it from only a kid.

  16. The only way that people will buy it is if you’re being nice to them.

I don’t know man, I know I’m his dad, but that’s pretty good.

I think that one day this kid is actually going to build his world. $999,999,999,997 to go.


Postscript: Dev just woke up from a nap. I called him Mr. Sales Man. He said, “I love it when you call me Mr. Sales Man.” Hold on to your wallets.


Have a great weekend, and a long one if you’re reading this in the US.

Thanks for reading,

Packy

Weekly Dose of Optimism #176

2026-01-17 21:36:30

Hey friends 👋 ,

Happy Saturday and welcome to another Weekend Edition of the Weekly Dose.

Sending today because yesterday, we published an in-depth primer on the state of robotics from Evan Beard’s perspective as our first co-written essay for not boring world. A world full of robots doing all of the work that we don’t want to do, and a lot of stuff that we can’t even imagine, is as optimistic as it gets.

Grab a big cup of coffee, cozy up on the couch, and read about MedGemma & MRIs, Claude, Tesla’s new lithium refinery, Conceivable, nuclear and hotels on the moon, and the a16z pod.

Let’s get to it.


Today’s Weekly Dose is brought to you by… Framer

Framer gives designers superpowers.

Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in minutes. Whether you’re starting with a template or a blank canvas, Framer gives you total creative control with no coding required. Add animations, localize with one click, and collaborate in real-time with your whole team. You can even A/B test and track clicks with built-in analytics.

Ready to build a site that looks hand-coded without hiring a developer?

Launch a site for free on Framer dot com. Use NOTBORING for a free month on Framer Pro.


(1) Google Releases MedGemma 1.5 for Medical Imaging

Daniel Golden and Fereshteh Mahvar, Google Research

In this house, we stan Google DeepMind, and Google DeepMind keeps rewarding us.

This week, the company rolled out its MedGemma 1.5 model for healthcare developers. Per CEO Sundar Pichai, “The new 4B model enables developers to build applications that natively interpret full 3D scans (CTs, MRIs) with high efficiency - a first, we believe, for an open medical generalist model. MedGemma 1.5 also pairs well with MedASR, our speech-to-text model fine-tuned for highly accurate medical dictation.”

What it means is that developers will find it easier to build excellent software that helps medical professionals make us all healthier. The challenge with infrastructure like this, though, is that it’s not tangible. It’s hard to know what it means until developers actually go out and build with it.

So in the meantime, Shopify CEO Tobi Lutke gave us all a little preview with the HTML-based MRI scan viewer that he vibecoded with Claude to get around old, locked-down software in order to access information on … himself.

To be clear, this is front-end development. But combine better, easy-to-build frontends with better models to interpret the scans themselves and it’s going to get a whole lot easier and less frustrating to understand, and treat, our bodies.

(2) Claude Releases Cowork

Speaking of vibe coding things with Claude, I’m going to go ahead and do a Weekly Dose first: I’m just recommending that this weekend, you take some time to play with Claude. This release is just an excuse to talk about it. I haven’t used Cowork yet, I don’t use Claude Code, and I’ve found that I haven’t needed to, because there’s so much you can do in just good ol’ fashioned Claude.

Claude Code is getting a lot of hype as people came back from holiday downtime having had time to really play with it for the first time. The hype is deserved. It’s so much fun.

After seeing a tweet about a speedreader, I just… built a speedreader for my a16z essay.

It feels like the first time that the thing we’ve been saying for a long time, that the gap between idea and outcome will disappear, is coming true. Personally, I feel bottlenecked on ideas. So what I’ve started doing is dumping my essays in and asking Claude what we can build on top of them. For yesterday’s piece on the Small Step v. Giant Leap approach to robotics, it made me a game.

I wanted to embed that game in my essay, but Substack doesn’t allow embeds, so I asked it to make me an editor that uses embeds, which it did in a prompt.

This stuff isn’t fully production-ready in the hands of a novice like me, but that’s probably only because I haven’t spent enough time on it. For example, if you want to turn your side project into an actual mobile app, you can now do just that in Replit, after they announced a way to publish your apps to the app store right from Replit. Need to play around with that this weekend.

I don’t know how useful any of this stuff is yet or will be for me, but it’s a ton of fun.

(3) Tesla’s Lithium Refinery is Now Operational

There’s vertical integration, and then there’s VERTICAL INTEGRATION.

Electric vehicles need batteries, and batteries need lithium. We have plenty of lithium in the US, but it’s bottlenecked on refining. So that wild man Elon Musk just went out and built his own lithium refinery outside of Corpus Christi, Texas. The refinery went from groundbreaking to live in three years versus the typical decade, and is now the largest lithium refinery in the United States.

One of the challenges with refining here is that traditional processes are so environmentally unfriendly that it’s hard to get them approved. Other countries with less strict regulations don’t have that problem. But the point of technology is to do more with less, and better.

Traditional lithium refining often involves acid roasting that produces hazardous byproducts like sodium sulfate. Tesla's process creates a benign co-product, essentially sand and limestone that can be used in construction materials. The facility processes raw spodumene ore directly into battery-grade lithium hydroxide on site, bypassing intermediate refining steps commonly used elsewhere in the industry.

Musk has long called lithium refining “a license to print money,” because while lithium ore is relatively abundant, the refining capacity to turn it into battery-grade lithium hydroxide was a major bottleneck in the electric vehicle industry.

Now, Tesla is both solving its own supply chain problem and turning on the money printer by bringing that capacity onshore. Vertical integration, baby! If the bottleneck is refining, build the refinery.

(4) The Startup Making Human Embryos With AI-Assisted Robots

Sara Frier for Bloomberg

One in six couples struggle to conceive naturally, and as a result, I have a lot of friends who have gone through IVF to have a baby. The process is a miracle, and there is a lot of room for improvement. According to the CDC, IVF produces live births only 37.5% of the time.

To improve IVF, Conceivable Life Sciences has built AURA, a 17-foot robotic assembly line that can perform every step of IVF embryo creation outside the human body, from separating sperm to fertilizing eggs to flash-freezing embryos. The New York-based startup (we love to see it 🗽) has helped bring 19 babies into the world so far, including one born in September to Acme Capital partner Aike Ho and her wife, who participated in the clinical trial after Ho wrote Conceivable’s first check.

“People should be as excited about this as they were about the moon landing,” Ho told Bloomberg.

The pitch is straightforward: IVF succeeds only 37.5% of the time partly because it depends on individual embryologists who vary in training, technique, and how much coffee they’ve had. AURA makes 30 micro-adjustments per second with thousandth-of-a-millimeter precision, uses AI adapted from Baidu’s computer vision to find eggs in follicular fluid, and can plunge embryos into liquid nitrogen so fast it’s invisible to the human eye—reducing ice crystal formation tenfold.

The founders’ vision is to create “superlabs” where a single embryologist and two technicians oversee thousands of embryo creations daily, dramatically expanding access while cutting costs. They’ve raised $70 million and plan to launch in the US this year.

Sadly, one founder, Joshua Abram, died of cancer weeks before the first American baby was born. Before he died, he told his partner he wanted to see Conceivable responsible for 65% of all IVF births.

Circle of life.

(5) A Big Week for Lunar Development

DOE, NASA, and GRU

a rendering

The DOE and NASA are teaming up to develop a nuclear reactor on the moon by 2030.

Per the DOE, “DOE and NASA anticipate deploying a fission surface power system capable of producing safe, efficient, and plentiful electrical power that will be able to operate for years without the need to refuel. The deployment of a lunar surface reactor will enable future sustained lunar missions by providing continuous and abundant power, regardless of sunlight or temperature.”

If this had happened a couple years ago, I would have been both amazed and bummed that we’re getting new reactors on the moon before we get them in the US. Now, we’re getting both. Meta signed an agreement for 6.6 GW to power its data centers by 2035. What a time to be alive. The only question now is who’s going to build it. Seems like it might be good practice for Radiant on the way to Mars reactors.

And speaking of sci-fi projects on the moon, a startup called GRU is starting to accept reservations for its moon hotel, which is scheduled to open in 2032. Slots cost anywhere from $250k to $1 million, so start saving.

BONUS: I Got to Interview Marc Andreessen and Ben Horowitz

There aren’t a lot of people who can out-optimism me. Marc and Ben are two of them.

After my deep dive on the firm, I had the chance to interview Marc and Ben together this week. We go wide, but I particularly enjoyed talking about how and why new technology companies can grow to become 10x (or 1,000x) larger than the incumbents they replace.

Enjoy!


Have a great rest of your weekend y’all.

Thanks to Aman and Sehaj for all the help. We’ll be back in your inbox next week.

Thanks for reading,

Packy

Many Small Steps for Robots, One Giant Leap for Mankind

2026-01-16 21:59:23

Welcome to the 1,179 newly Not Boring people who have joined us since our last essay! Join 256,826 smart, curious folks by subscribing here:

Subscribe now


Hi friends 👋 ,

Happy Thursday! I am thrilled to bring you not boring world’s first co-written essay (cossay? need something here) with my friend Evan Beard, the co-founder and CEO of Standard Bots.

Evan is the perfect person to kick this off.

I have known Evan for ~20 years, which is crazy. We went to Duke together, worked at the one legitimate startup on campus together (which still exists!), and even won a Lehman Brothers Case Competition together (which won us the opportunity to interview at the investment bank right before it went under).

After school, Evan went right into tech. He was in an early YC cohort, back when those were small. He started a company with Ashton Kutcher. I was interested in tech from the outside and always loved talking to Evan, so we’d catch up at reunions and then go our separate ways. In September 2023, a mutual acquaintance emailed me saying “there’s a company you should have on your radar, Standard Bots,” and I looked it up, and lo and behold, it was founded by Evan Beard!

Since reconnecting, Evan has become one of a small handful of people I ask dumb robot questions to. He’s testified in front of Congress on robotics. Last year he spoke at Nvidia’s GTC on the main stage. He was even featured doing robotic data collection in A24’s movie Babygirl alongside Nicole Kidman! Evan knows robots.

And the questions are very dumb! Robotics as a category has scared me. As valuations have soared, I’ve mostly avoided writing about or investing in robots, because I haven’t felt confident enough that I know what I’m talking about to take a stand.

Which is the whole point of these co-written essays!

Evan has dedicated his career to a specific belief about how to build a robotics company. He’s making a different bet than the more hyped companies in the space1, one that is like a Russian Doll with a supermodel in the middle - not very sexy on the outside but sexier and sexier the more layers you remove until you get to the center and you’re like, “damn.”

So throw on a little Robot Rock…

And let’s get to it.


Today’s Not Boring is brought to you by… Framer

Framer gives designers superpowers.

Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in minutes. Whether you’re starting with a template or a blank canvas, Framer gives you total creative control with no coding required. Add animations, localize with one click, and collaborate in real-time with your whole team. You can even A/B test and track clicks with built-in analytics.

Framer is making the first month of cossays free so you can see what we’re all about. Say thanks to Framer by building yourself a little online world without hiring a developer.

Launch for free at Framer dot com. Use code NOTBORING for a free month on Framer Pro.

Just Publish it With Framer


Many Small Steps for Robots, One Giant Leap for Mankind

A Co-Written Essay With Evan Beard

There is a belief in my industry that the value in robotics will be unlocked in giant leaps.

Meaning: robots are not useful today, but throw enough GPUs, models, data, and PhDs at the problem, and you’ll cross some threshold on the other side of which you will meet robots that can walk into any room and do whatever they’re told.

In terms of both dollars and IQ points, this is the predominant view. I call it the Giant Leap view.

The Giant Leap view is sexy. It holds the promise of a totally unbounded market – labor today is a ~$25 trillion market, constrained by the cost and unreliability of humans; if robots become cheap, general, and autonomous, the argument goes, you get Jevons Paradox for labor – a prize available to whichever team of geniuses in a garage produces the big breakthrough first. This is the type of innovation that Silicon Valley loves. Brilliant minds love opportunities where success is just a brilliant idea away.

The progress made by people who hold these beliefs has been exciting to watch. Online, you can find videos of robots walking, backflipping, dancing, unpacking groceries, cooking, folding laundry, doing dishes. This is Jetsons stuff. Robotic victory appears, at last, to be a short extension of the trend lines away. On the other side lies fortune, strength, and abundance.

As a result, companies building within this view, whether they’re making models or full robots, have raised the majority of the billions of dollars in venture funding that have gone towards robotics in the past few years. That does not include the cash that Tesla has invested from its own balance sheet into its humanoid, Optimus.

To be clear, the progress they’ve made is real. VLAs (vision-language-action models), diffusion policies, cross-embodiment learning, sim-to-real transfer. All of these advancements have meaningfully expanded what robots can do in controlled settings. In robotics labs around the world, robots are folding clothes, making coffee, doing the dishes, and so much more. Anyone pretending otherwise is either not paying attention or not serious.

It’s only once you start deploying robots outside of the lab that something else becomes obvious: robotics progress is not gated by a single breakthrough. There is no single fundamental innovation that will suddenly automate the world.

We will eventually automate the world. But my thesis is that progress will happen by climbing the gradient of variability.

Variability is the range of tasks, environments, and edge cases a robot must handle. Aerospace and self-driving use Operational Design Domain (ODD) to formally specify the conditions under which a system can operate. Expanding the ODD is how autonomy matures. It’s even more complex for robotics.

Robotic variables include:

  • What you’re handling: identical parts vs. thousands of different SKUs.

  • Where you’re working: a climate-controlled warehouse with perfect lighting vs. a construction site with dust, uneven terrain, weather, and changing layouts.

  • How complex a task is: a single repetitive motion vs. multi-step assembly requiring tool changes.

  • Who’s around: operating in a caged-off cell vs. collaborating alongside workers in shared space.

  • How clear the instructions are: executing pre-programmed routines vs. interpreting natural language commands like “clean this up” or “help me with this”.

  • What happens when things go wrong: stopping when something goes wrong vs. detecting errors, diagnosing causes, and autonomously recovering.

Multiply these variables together and the range can be immense2. This is because the spectrum of real, human jobs is extremely complex. A quick litmus test is that a single human can’t just do every human job.
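To make that multiplication concrete, here’s a hedged back-of-the-envelope sketch; the five-options-per-axis count is purely illustrative, not a real taxonomy:

```python
# Illustrative combinatorics only: assume just five options along each
# of the six variability axes above. The axes are real; the per-axis
# counts are made up for the sake of arithmetic.
axes = {"handling": 5, "environment": 5, "task_complexity": 5,
        "people_nearby": 5, "instructions": 5, "error_recovery": 5}

regimes = 1
for options in axes.values():
    regimes *= options

print(regimes)  # 15625 distinct operating regimes, from tiny per-axis counts
```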

Most real jobs are not fully repetitive, but they’re also not fully open-ended. They have structure, constraints, and inevitable variation, much to the chagrin of Frederick Winslow Taylor, Henry Ford, and leagues of industrialists since. Different parts, slightly bent boxes, inconsistent lighting, worn fixtures, humans nearby doing unpredictable things.

It’s the same for robots.

At one end, you have motion replay. The robot moves from Point A to Point B the same way, every time. No intelligence required. This is how the vast majority of industrial robots work today. You save a position, then another, then another, and the robot traces that path forever. It’s like “record Macro” in Excel. It works beautifully as long as nothing ever changes.
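As a rough illustration, motion replay is simple enough to fit in a few lines. This is a hedged sketch with a stand-in Arm class, not any vendor’s actual controller API:

```python
# Motion replay in miniature: saved positions, traced forever.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float
    y: float
    z: float

class Arm:  # stand-in for a real robot controller
    def move_to(self, wp: Waypoint) -> None:
        print(f"moving to ({wp.x}, {wp.y}, {wp.z})")  # a real arm would servo here

# Positions "taught" by jogging the arm and hitting save, one by one.
program = [Waypoint(0.4, 0.0, 0.3), Waypoint(0.4, 0.2, 0.1), Waypoint(0.4, 0.0, 0.3)]

arm = Arm()
for _ in range(2):          # in production, this loop runs forever
    for wp in program:
        arm.move_to(wp)     # no perception, no feedback: pure replay
```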

At the other extreme, you have something like a McDonald’s worker. Different station every three minutes. Burger, then fries, then register, then cleaning. Totally different tasks, unpredictable sequences, human interaction, chaotic environment. The dream of general physical intelligence is a robot that can walk into this environment and just... work.

At one extreme is automation. At the other is autonomy. Between those extremes lies almost all economically valuable work.

Between automation and a McDonald’s robot that can fully replace a worker is an incredible number of jobs.

It’s my belief that these small steps across this spectrum are where we’ll unlock major economic value today.

That’s what my company Standard Bots is betting on.

Standard Bots makes AI-native, vertically integrated robots. We’re currently focused on customers within manufacturing and logistics. We’ve built a full stack solution for customers to train robot AI models, from data collection, review, and annotation, to model training and deployment. And we make these tools accessible enough for the average manufacturing worker to use.

In a market full of moonshots, our strategy might look conservative. Even tens of millions of dollars in revenue is nothing compared to the ultimate, multi-trillion dollar, abundance-inducing prize that lies in the future.

It isn’t.

We are building a real business today because we believe that it’s the most likely to get us to that abundance-inducing end state first.

Two Strategies: Giant Leap or Small Step

If you believe there’s a massive set of economically valuable tasks waiting on the far side of some threshold, then the optimal strategy is to straight-shot it. Lock your team in the lab. Scale models. Scale compute. Don’t get distracted by deployments that might slow you down. Leap.

If you believe, like we do, that there is a continuous spectrum of economically valuable jobs, many of which robots can do today, then the best thing to do is to get your robots in the field early and get to work.

Each deployment teaches you where you are on the gradient. Success shows you what’s stable, failure shows you where the model breaks, and both tell you exactly what to work on fixing next. You iterate. You take small steps.

It’s widely accepted in leading LLM labs that data is king. The optimal data strategy is to climb this spectrum one use case at a time. You don’t need “more” data. What you really want is diversity3, on-policyness4, and curriculum5. Climbing the spectrum iteratively is the strategy that best optimizes for these three dimensions of good data for any given capital budget. Real deployments on your bots get you on-policyness (nothing else can), the market intelligently curates a curriculum, and both provide rich and economically relevant diversity.

We’ve learned this lesson over years of deployments.

Whenever robotics evolves to incorporate another aspect of the job spectrum between automation and autonomy, it also unlocks another set of jobs, another set of customers, another chunk of the market. One small step at a time.

Take screwdriving. It is much easier to use end-to-end AI to find a screw or bolt than to try to put everything just so in a preplanned and fixed position. Search and feedback is cheap for learning systems. Our robot can move the screwdriver around until it feels that it’s in the right place. It wiggles the screwdriver a little. It feels when it drops into the slot. If it slips, it adjusts. And when our robots figure out how to drive a screw, it unlocks a host of jobs that involve screwdriving. Then we start doing those and learn the specifics of each of them, too.
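Here’s a hedged sketch of that search-and-feedback loop, with the force sensing faked by a random trigger; a real system would read a force-torque sensor, and none of these names are Standard Bots’ actual API:

```python
# Spiral outward around the estimated screw position until the bit
# "feels" itself drop into the slot. Sensing is simulated here.
import math
import random

def felt_drop() -> bool:
    return random.random() < 0.05        # stand-in for a real z-force signal

def find_slot(center, step=0.0005, max_radius=0.01):
    angle, radius = 0.0, 0.0
    while radius < max_radius:
        x = center[0] + radius * math.cos(angle)
        y = center[1] + radius * math.sin(angle)
        if felt_drop():                  # the bit seats: start driving the screw
            return (x, y)
        angle += 0.5                     # wiggle a little further around
        radius += step * 0.5 / (2 * math.pi)  # grow by one step per full turn
    return None                          # give up: re-detect the screw with vision

print(find_slot((0.40, 0.20)))
```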

We learn on the job and get better with time. Many of these robots are imperfect, but they’re still useful. There’s no magic threshold you have to cross before robots become useful.

That’s not just our hypothesis. It’s what the market is telling us.

Industrial robotics is already a large, proven market. FANUC, the world’s leading manufacturer of robotic arms, does on the order of $6 billion in annual revenue. ABB’s robotics division did another $2.4 billion in 2024. Universal Robots, which was acquired by Teradyne in 2015, generates hundreds of millions per year.

These systems work, even though they work in very narrow ways. Companies spend weeks integrating them. Teams hire specialists to program brittle motion sequences. When a task changes, those same specialists come back to reprogram the whole thing, for a fee. The robots repeat the same motions endlessly, and they only work as long as the environment stays exactly the same.

Fanuc UI. At the Fanuc company Christmas Party, they let the most drunk engineer choose the menu item labels. It might have gone something like this: “Where’s Carl? On the floor? Carl! Make a noise and give me a symbol and that will be the first menu item. OK someone kick him to get the second one - Pos.Reg[Reg? Perfect!”

Despite all of that friction, customers keep buying these robots! That’s the market talking. Even limited, inflexible automation creates enough value that entire industries have grown around it. The low-variability left side of the spectrum already supports billions of dollars of business.

In machine learning, progress rarely comes from a single leap. It comes from gradient ascent: making small, consistent improvements guided by feedback from the environment.
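In symbols, that’s the familiar update rule θ_{t+1} = θ_t + η∇J(θ_t): many small steps, each pointed by feedback from the objective J, compounding over time.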

That’s how we think about robotics too.

Our plan is not to leap from lab demonstrations to generally intelligent robots. Instead, our plan is to climb the gradient of real-world variability and capture more of the spectrum.

It’s working so far. We have 300+ robots deployed at customers including NASA, Lockheed Martin, and Verizon. We ended the year on a $24 million revenue run rate, with hundreds of millions of dollars in customer LOIs and qualified sales pipeline. The kink you see in this curve is due to the fact that our robots keep getting better and easier to use the more they (and we) learn.

Standard Bots

Customers are happy because we’re already meaningfully easier to deploy and cheaper to adapt than classical automation, and while we don’t have generally intelligent AI models that can automate any task, we can already automate jobs with a level of variability that no other robotics company can.

We expect our robots to do everything one day, too. We just believe that:

  • “Everything” is made up of a continuous spectrum of small “somethings.”

  • Each of those “somethings,” whether it’s packing a bent cardboard box or checking a cow’s temperature through its anus (a real use case), requires use-case-specific data to be done well.

  • By deploying our robots in the field today, we get paid to collect the data we need to improve our models. That includes the most valuable data of all: intervention data when a robot fails.

  • When we find a new edge case, we can iterate on our entire system of variable robots. This is because we are fully vertically integrated, including data collection, the models, the firmware, and the physical arm.

Our plan is to get paid to eat the spectrum. In the process, we plan to collect data no one else can. We’ll then use this data, which is tailor made for our robots, to iterate on the whole system quickly enough to get to general economic usefulness before the giant leap, straight-shot approaches do.

There’s a lot of context behind our bet. The first and most important thing you need to understand is that robotics is bottlenecked on data.

Robotics is Bottlenecked on Data

Robots already work very well autonomously wherever we have a lot of good data. For example, cutting and replanting pieces of plants to clone them as seen in the video below.

This is unintuitive, because it’s almost the opposite of the challenge Large Language Models (LLMs) seem to face. What the average AI user like you and me experiences is that the models improve and LLMs automatically know more things.

But LLMs had it relatively easy. The entire internet existed as a pre-built training corpus. There is so much more information on the internet than you could ever imagine. Any question you might ask an LLM, the internet has probably asked and answered. The hard part was building architectures that could learn from it all.

Robotics has the opposite problem.

The data needed for robotics

The architectures largely exist. We’ve seen real breakthroughs in robot learning over the last few years as key ideas from large language models get applied to physical systems. For example, Toyota Research Institute’s Diffusion Policy shows that treating robot control policies as generative models can dramatically improve how quickly robots learn dexterous manipulation skills. The magic of this approach is that it took the architecture largely used to generate images, in which the model learns to remove noise in an iterative manner like in the GIF below…

…and instead applied it to generate the path of the robot’s gripper. An idea that works in one domain is applied to another and BOOM — the outcome works pretty well.

The advancements that have ushered in this new era are small ones adding up. For example, take what researchers call “action chunking,” in which the model predicts a sequence of points to move through in the future instead of just one. That helps performance and smoothness a lot.
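A toy sketch of those two ideas together, denoising over actions plus a predicted chunk of future points; the denoiser here is a stand-in function, not TRI’s actual Diffusion Policy network:

```python
# Start from noise, iteratively denoise into a chunk of future actions.
import numpy as np

def denoiser(traj, t):
    return traj * 0.1                   # stand-in: a trained net predicts the noise

def sample_action_chunk(horizon=16, action_dim=7, steps=50):
    traj = np.random.randn(horizon, action_dim)  # begin as pure noise
    for t in reversed(range(steps)):
        traj = traj - denoiser(traj, t)          # strip away predicted noise
    return traj                                   # a smooth path for the gripper

chunk = sample_action_chunk()
print(chunk.shape)  # (16, 7): sixteen future points, not one (action chunking)
```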

Vision-language-action models such as RT-2 combine web-scale semantic understanding with robotic data to translate high-level instructions into physical actions. Systems like ALOHA Unleashed demonstrate that transformer-based imitation learning can enable real robots to handle complex, multi-stage tasks — including tying shoelaces and sorting objects — by watching demonstrations. And emerging diffusion-based foundation models like RDT-1B show that training on large, diverse robotic datasets enables zero-shot generalization and few-shot learning across embodiments.

But those papers also all found something similar. For those remarkable innovations to happen with any reasonable success rate, you need data on your specific robot, doing your specific task, in your specific environment.

If you train a robot to fold shirts and then ask it to fold a shirt, it works. Put the shirts in different environments, on different tables, in different lighting. It still works. The model has learned to generalize within the distribution of “shirt folding.” But then try asking it to hang a jacket or stack towels or to do anything meaningfully different from shirt folding. It fails. It’s not dumb. It’s just never seen someone do those things.

The magic of these models is how they interpolate to handle unseen variability, but only within the training set.

Robots can interpolate within their training distribution. They struggle outside of it. This is true for LLMs, too. It’s just that their training data sets are so large that there isn’t much out of distribution anymore.

This is unlikely to be solved with more compute or better algorithms. It’s a fundamental characteristic of how these models work. They need examples of the thing you want them to do.

So how do you collect example data?

One answer would be to create it in the lab. Come up with all of the edge cases you can think of and throw them at your robots. As John Carmack warned, however, “reality has a surprising amount of detail.” The real world chuckles at your researchers’ edge cases and sends even edgier ones.

Another answer would be to just film videos of people doing all of the things that you’d want the robots to do. Research has shown signs of life here.

For example, Skild has shown that a robot can learn how to do several common household tasks from video and only a single hour of robot data per task.

This is exciting progress, and on the back of it, just this week, Skild announced a $1.4 billion Softbank-led Series C at a valuation of over $14 billion.

Ultimately, general video may lift the starting capabilities of a model. But it still doesn’t remove the need for the on-robot data for the final policy, even for simple household pick-and-place tasks (and industrial tasks will need much more data). For one thing, robots need data in 3D, including torques and forces, and the data needs to occur through time. They almost need to feel the movements. Videos don’t have this data and text certainly doesn’t.

It’s kind of like how reading lots of books makes it easier to write a good book, but watching lots of golf videos doesn’t do much for actually playing golf.

If I want to learn to golf, I need to actually get out there and use a body to swing clubs. Similarly, the best way to collect data is by using hardware. And for that, there are a number of different collection methods: leader-follower arms, handheld devices with sensors on them, gloves and wearables, VR and teleoperation, and direct manipulation, as in, literally moving the arm and grabbing an object.

All of these approaches can work. Each has pluses and minuses. We use a mix of many of them.

But let’s continue with the golf analogy. Practicing with any human body is better than watching videos, but practicing with my body is the best. That’s the body I’m actually going to play with.

In the same way, even data from other robots isn’t as valuable as data from your own hardware. If your data and your hardware aren’t aligned, you need 100x or 1,000x more data. If I wanted to work on my robot, but I didn’t have my robot, I could use a similar robot to observe the activity. But for it to be effective, I’d need a lot of similar robots.

This is one of the many challenges for general robotics models.

What the Giant Leap Actually Requires

The most obvious counterargument to everything I’ve argued so far and everything I will argue throughout is that while the Giant Leap models haven’t unlocked real world usefulness yet, they undoubtedly will as the labs continue to make breakthroughs. It’s not fun to be short magic!

For the amount of money invested in the space, though, there’s surprisingly little good public thinking about what the Giant Leap approach actually entails.

What is the bet or set of bets they’re making, and how should we reason about them?

The approach we’re taking at Standard Bots is hard. It’s often slow and frustrating. And from the outside, there’s a huge risk that we do all of this work and then, one day, we wake up and one of the big labs has just… cracked it. But I feel confident in our approach because I don’t think the Giant Leap views will produce meaningful breakthroughs, and I want to explain why.

For sure, you’ll continue to see increasingly magical pitches on robot Twitter:

“We can train on YouTube videos. No robot data needed!”

“We can generate the missing data in simulation!”

“We’re building a world model. Zero-shot robotics is inevitable!”

And some of these are even directionally right. There is real, actual progress behind a lot of the buzz. But there’s a ton of noise, too.

Again, I am biased here. But I’m also putting my time and money behind that bias. So here’s how I think about what’s actually going on — what Google, Physical Intelligence (Pi or π) and Skild are actually up to in the labs in pursuit of a genuine leap — from (don’t say it, don’t say it) first principles.

A Model Takes Its First Steps

A lot of the modern robotics-AI wave started the same way: pretrain perception, learn actions from scratch. Meaning, teach the robot how to perceive and let it learn by perceiving.

Take Toyota Research Institute’s Diffusion Policy. The vision encoder (the part that turns pixels into something the model can use) is pretrained on internet-scale images, but the action model begins basically empty.

Starting “empty” is… not ideal, because the model doesn’t yet have what researchers call perception–action grounding. It hasn’t learned the tight relationship between what it sees and what it does:

  • “Moving left” in camera space should mean moving left in the real world.

  • A two-finger gripper can clamp a cup by the handle or rim, but not by poking the center like a toddler trying to eat soup with a fork.

  • Contact is physics, not simple geometry. The world changes when you interact with it.

This grounding stage is basically the toddler phase: I see the world, I flail at the world, sometimes I succeed, mostly I bonk myself.

But most serious teams can collect enough robot data to establish basic grounding in days. So far, so good.

How to Train a Robot

Say you want to train a robot to do a task. Here is what you need to do:

1. Get data

2. Train model

3. Eval and continuous improvement

Get data: You can teleoperate in the lab, the real world, simulation, or learn from internet or generated videos. Each option has its own tradeoffs, and robotics companies spend a lot of time thinking about and experimenting with these tradeoffs.

Train model: Are you going to build it from scratch or rely on a pre-trained model to bootstrap? Training from scratch is easier if you are building a smallish model. Large models typically have entire training recipes and pipelines that involve pre-training, mid-training and post-training phases. Pre-training teaches the robot the basics about how the world works (general physics, motion, lighting). Post-training is about giving tasks specific superpowers.

In LLM terms, pre-training teaches a model how words are related in the training distribution. It learns their latent representations. Post-training (instructGPT, RLHF, Codex) gets a model ready for deployment use cases like chat agents or coding. Post-training can also make the robot faster, cheaper, and more accurate by tightening up the trajectories with RL. A lot of the RL buzz you hear about in the LLM world actually began with robotic task-specific policies.
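A minimal sketch of that recipe, with stub phases; real pipelines are vastly more involved, and these function names are illustrative only:

```python
# Pre-training builds general priors; post-training adds task skills;
# RL fine-tuning tightens trajectories. All stubs, for shape only.
def pretrain(policy, web_video):
    policy["priors"] = "general physics, motion, lighting"
    return policy

def posttrain(policy, robot_demos):
    policy["skills"] = f"task-specific policies from {robot_demos}"
    return policy

def rl_finetune(policy, rollouts):
    policy["trajectories"] = "tightened with RL: faster, cheaper, more accurate"
    return policy

policy = rl_finetune(posttrain(pretrain({}, "web video"), "teleop demos"), "deployments")
print(policy)
```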

All sounds great, but you still need the data. The big question is: how do you get the data?

Video Dreams (and Their Limits)

Giant leapers have two big salvation pitches for how they’ll get the data they need.

The first is existing whole-internet video.

Models clearly learn something from video: object permanence, rough geometry, latent physical structure, the ability to hallucinate the backsides of objects they’ve never seen (which is either very cool or deeply unsettling, depending on your relationship with reality).

So why not slurp YouTube, learn the world, and then just... do robotics?

Think about this first. What can humans learn from watching a video? And what can’t they?

Videos are useful for many things:

  • Trajectories and sequencing: Video is great at showing the arc of motion and the order of steps in an action.

  • Affordances and goals: You watch someone turn a knob and you learn that knobs want to be turned. Switches want to be pressed.

  • Timing and rhythm: Timing matters for things like locomotion, assembly, or anything that’s basically choreography. Video carries timing.

If you’re learning to grasp, video can show you: reach → descend → close fingers → lift.

And it can show tool use: the tilt of a cup, the swing of a hammer, the way people “cheat” by sliding things instead of lifting them.

But there are whole categories of data that video simply doesn’t carry: mass, force, compliance, friction, stiffness, contact dynamics.

Humans can sometimes infer some of this visually, but only because we’re leaning on a lifetime of embodied experience. Robots don’t have that prior.

In experiments with over 2,200 participants, researchers Michael Kardas and Ed O’Brien examined what happened when people watched instructional videos to learn physical skills like moonwalking, juggling, and dart throwing. The results were striking:

As people watched more videos, their confidence climbed sharply. Meanwhile, their actual performance barely moved, or even got worse.

That’s the embodiment gap. Video tells you what to do, but not what it feels like to do it. You can watch someone moonwalk all day. You still won’t feel how the floor grips your shoe, how much pressure transfers to your toes, how to modulate tension without faceplanting.

And robots have it worse than humans. At least we have priors. Robots have sensors and math.

I’m going to get a little spicy here.

If you’re not paying very close attention, it looks like feeding robots internet videos is working.

Watch Skild’s “learning by watching” demos closely. Only the simplest tasks use “one hour of human data.” More impressive demos are nestled in the middle of the video without that label. And the videos aren’t random ones pulled from YouTube either. They’re carefully collected first-person recordings from head-mounted cameras. Is doing all of this that much easier than just using the robots?

In short, there are three big reasons video isn’t enough:

  1. Coverage: internet video doesn’t cover the weird, constrained, adversarial reality of industrial environments.

  2. Data efficiency: learning from video alone typically takes orders of magnitude more data than learning from robot-collected data, because the mapping from pixels to action is underconstrained without embodied sensing.

  3. Missing forces: two surfaces can look identical and behave completely differently. Video can’t disambiguate friction. The robot finds out the fun way.

Then, you still have the translation problem: human hands aren’t robot grippers, kinematics differ, scale differs, compliance differs, systematic error shows up unless you train with the exact end effector you’ll deploy.

Which is why many of these companies end up quietly going back to teleoperation.

Human video is useful for pretraining. But weakly grounded data has a real cost: you can either do the hard work of actually climbing the hill, or you can wander sideways for a long time and call it progress.

OK, so the videos on YouTube aren’t that useful. What about simulation?

Where World Models Do and Don’t Work

Simulation and RL are the other big salvation pitch. If robots can self-play in a simulated environment that mimics real-world physics, the trained policy should transfer to real robots in the real world. And to be fair: sim is really good at certain things right now, especially rigid-body dynamics.

NVIDIA has pushed this hard for locomotion. Disney’s work (featured in Jensen’s GTC 2025 keynote) shows the magic you get when you combine good physics with good control: humanoids that walk, flip, recover (beautifully) in a simulator.

That success comes down to two ingredients:

  1. The physics is tractable: Simulators can handle rigid bodies + contacts + gravity well. You can randomize terrain, generate obstacles, and train robust walking policies without touching the real world.

  2. The objective is specifiable: RL needs a reward.

For walking, the rewards are straightforward: distance traveled, stability, energy use, speed.

For animation, it’s even cleaner: match a reference motion without falling.
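As an illustration of how specifiable those objectives are, here’s a hedged sketch of a walking reward; the terms match the list above, but the weights are made up:

```python
# A scalar reward for locomotion: easy to write, easy to optimize.
def walking_reward(distance, upright, energy, speed, target_speed=1.0):
    return (
        1.0 * distance                       # progress traveled
        + 0.5 * upright                      # stability: still on its feet
        - 0.1 * energy                       # penalize wasted torque
        - 0.2 * abs(speed - target_speed)    # track the commanded speed
    )

print(walking_reward(distance=2.0, upright=1.0, energy=3.0, speed=0.9))  # 2.18
```

Try writing the same scalar for factory work and the trouble starts, as the next section shows.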

So locomotion is the happy place because three things line up for machine learning. You can model the physics, measure the goals, and reset for free when things go wrong.

Then, people try to extrapolate from walking → factory work, and everything breaks.

When you do real things in the real world, physics gets messy. Real tasks involve soft materials, deformed packaging, fluids, cable routing, wear-dependent friction, tight tolerances, and contact-dominated outcomes.

You can simulate parts of this, but doing it broadly and accurately becomes a massive hand-crafted effort. And you still won’t match the edge cases you see in production. Again, you might as well do the real thing.

With real tasks, rewards get brittle or unwritable. “Make a sandwich” is not scalar. Even “place this part down” is full of constraints: don’t tear, don’t spill, align, recover if it slips, don’t jam, don’t scratch the finish, don’t do the thing that worked in sim but breaks the machine in real life.

Waymo is a great example. Waymo uses a ton of simulation today, but real-world data collection from humans driving cars came long before the world model. Do you remember how long human Google workers drove those silly-looking cars around collecting data before Waymo ever took its first autonomous ride? As the company wrote in a recent blog post, “There is simply no substitute for this volume of real-world fully autonomous experience — no amount of simulation, manually driven data collection, or operations with a test driver can replicate the spectrum of situations and reactions the Waymo Driver encounters when it’s fully in charge.”

You need to collect that data in the real world, and then you can replay and amplify it in sim. That’s how you get the last few “nines.”

Also, resets. What it takes to start over.

In sim, resets are free. In reality, resets take work. Walking is the rare exception because the reset is “stand back up,” but if you want a robot to learn sandwich making through trial and error, someone has to clean up, restock, reset, try again, and repeat forever, slowly losing their will to live. Cleaning up after a half-baked bot is not why you signed up to be a robotics researcher.

So simulation is valuable, but it’s still not a replacement for real data collection. The highest-leverage use of sim is after deployment: when real robots surface real failure modes, and sim is used to reproduce and multiply those rare cases.

Which brings us back to first principles.

So What’s the Best Way to Train a Robot? (Like You’d Train a Human)

Think about how you train a human.

For simple tasks, text works. For slightly harder ones, a checklist helps. But most real factory work isn’t that simple. You need alignment, timing, judgment, recovery, and the ability to handle “that thing that happens sometimes.”

At that point, demonstration wins. It’s the most information-dense way to transfer intent. This is why people in the trades become apprentices.

It’s the same for robots. And it’s okay if a robot takes minutes or even hours to learn a task, as long as the learning signal is high quality.

Training time doesn’t need to be zero.

Which leads to what we’ve been saying: the giant leap isn’t, and can’t be, architectural.

The Giant Leap, the point at which the model has suddenly seen enough and can do anything, isn’t real. It is enticing and sexy (maybe in part it’s enticing and sexy because it’s always just out of reach). But it doesn’t exist. Even the smartest humans need training and direction. Terence Tao would need years to become an expert welder.

We think the answer is simply committing to taking the time to collect the right data. Robot-specific, task-specific, high-fidelity data, even if it means fewer flashy internet demos.

Three things follow from this:

  1. You will always need robot-specific data.

  2. The highest-quality way to convey a task is to show it (teleop or direct manipulation).

  3. Once you have strong domain-specific data, low-quality vision data from unrelated tasks doesn’t help much.

LLMs feel magical because they interpolate across the full distribution of human text. Robots don’t have that luxury.

To be clear, my contention is not that video, simulation, and better models aren’t useful. They clearly are. My contention is that even with them, you still need to collect the right data.

In order to do a specific job — say, truck loading and unloading, or biological sample preparation, or cow temperature checking — you need data on that specific job, and it’s best if that data is generated on your own hardware.

And in order to do any job, which is the promise of general physical intelligence, you need to be able to do a lot of specific jobs, which means that you’ll still need data on each of those specific jobs, or at least jobs that look so similar that you can reliably generalize.

The upshot is that while it may be possible to build generally capable robots with all of this data, all of this data is wayyyy harder to collect than people realize, and it is also way harder to generalize outside of the data you do have (in fact, it’s not yet proven possible).

Which creates a chicken & egg problem:

  • You can’t really test a use case without the data (and a specific type of data)

  • You can’t get the data in a high-fidelity way without doing the use case

That’s the main reason that we think robotics progresses in small steps, not giant leaps. You need to collect all of the data in either case!

And if you believe that, then the next move is obvious…

Get Paid to Collect Data

So how do you gather that data? Do you make thousands of robots — robotic arms, in our case — and build sets where they can practice?

If you think that robots need to get past a certain threshold of capability to be economically useful, that might be the best approach. But we’ve already disproved that thesis. FANUC, ABB, Universal Robots, and others generate billions in revenue for basic automation.

Customers are used to old robots that require a ton of expensive implementation work and are brutal to program. We realized that we could compete with them and win.

Standard Bots Core

We make better arms and automate a wider range of use cases than current deterministic software can. And we do it for less money.

When we deploy a robot for a new customer, it takes a few easy steps and a few hours. And it’s getting easier and easier. We get paid for hardware and software upfront. Our gross profit covers our acquisition costs within 60 days.

This all means that we’re able to scale our data collection efforts almost as fast as we can make the robots, and it’s all funded by our customers. We’re happy for obvious reasons. They’re happy for obvious reasons. And the plan is that our robots keep learning in the field and we both keep getting happier.

Crucially, when there’s an issue, we teleoperate into the environment, error correct, and most importantly, learn from the issue. (Oh, and we have exclusive rights to the issued patent for using AR headsets to collect training data for robot AI models).

This is the secret sauce.

Standard Bots’ data collection engine

Earlier this week, a16z American Dynamism investor Oliver Hsu wrote an essay on the very real challenges that occur when going from the lab to the real world.

In papers and in the lab, a robot that succeeds 95% of the time sounds amazing. In a factory running a task 1,000 times a day, that means 50 failures per day. That’s I Love Lucy on the chocolate line performance. Even 98% means 20 stoppages a day. 99% means 10. You would fire any employee who messed up that much in a week.
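The arithmetic is simple enough to check yourself:

```python
attempts_per_day = 1_000
for success_rate in (0.95, 0.98, 0.99, 0.999):
    failures = attempts_per_day * (1 - success_rate)
    print(f"{success_rate:.1%} success -> {failures:.0f} interventions/day")
# -> 50, 20, 10, and 1 intervention(s) per day, respectively
```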

According to Oliver, production environments require something closer to 99.9% reliability — one intervention per day, or even every few days — which is the difference between having to hire someone to fix your robot’s mistakes and just letting it work.

He’s right. 95% just isn’t good enough… unless you approach the problem like we do and improve over time. In which case, 95% is a great place to start!

95% is plenty good enough for Day 1 if you’re ready to teleoperate in and fix the 5% issues, which we do. We can ship robots to do things that deterministic, automated robots can’t. It allows us to continue to eat the spectrum by taking on use cases that we can mostly handle, and to treat human interventions as both a service and a data collection mechanism. The robot handles what it can, humans step in at hard cases, and those corrections flow back into training.

This has worked incredibly well. By learning from each of the real-world challenges that make up that 5%, we can bring the failure rate down to imperceptibly close to 0% within weeks of deployment.

That’s because intervention data at the moment of failure is the best data. We’ve learned that collecting data right around where the thing failed allows us to efficiently pick up all of the edge cases, and this is often the minimum training data we need. We concentrate at the boundary where autonomy breaks instead of just collecting data on the 95% of stuff we do flawlessly over and over again, and we learn where reality actually disagrees with our model. And because our robots are the ones generating the failures — not humans — we learn where our robots fail.

Learning where robots fail is important. There’s a mismatch when you train a robot on human demonstrations: the human operates in their own state distribution, but the robot will drift into states the human never showed it. Better to let the robot fail and act quickly to resolve.
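This is the covariate-shift problem that interactive imitation methods in the DAgger family target: let the policy drive, and collect corrections in the states the policy actually visits. A minimal sketch of that loop, with hypothetical interfaces that are stand-ins rather than our actual API:

```python
def intervention_loop(policy, robot, expert, dataset):
    """Run the robot's own policy; when autonomy breaks, a human
    teleoperates the correction and we log it as training data.
    All interfaces here are hypothetical stand-ins.
    """
    obs = robot.read_state()
    while not robot.task_done():
        if robot.needs_help():                  # policy drifted into a bad state
            action = expert.teleop_action(obs)  # human shows the fix
            dataset.add(obs, action)            # boundary-of-failure gold
        else:
            action = policy(obs)                # normal autonomous step
        robot.execute(action)
        obs = robot.read_state()
    return dataset
```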

With every customer, we learn about a use case, train our models, get ongoing data, learn as they fail, and improve our models.

At a certain point, a given use case is largely solved. We have eaten that piece of the spectrum. We can move on to the next, handle a little more variability.

So far, it seems that each use case we solve, along with the resultant improvements we make to our software, firmware, hardware, and models, makes it easier to eat adjacent pieces of the spectrum.

One common misconception about our approach is that it implies starting from scratch with every use case. That’s not how it works. Remember the screwdriver.

We don’t think of our system as a collection of isolated task-specific models. We think of it as a shared foundation of physical skills — perception, grasping, force control, sequencing, etc. — that compounds across deployments. For each new use case, we post-train on top of an ever-improving foundation.

With each use case that gets solved, those foundational capabilities get better. That makes adjacent tasks easier. Over time, the same core skills (screwdriving, for example) show up repeatedly in different combinations and those shared skills compound.

Ideally, the whole thing spins faster and faster. And it’s starting to seem like this is what will happen.

This is how the Standard Bots machine works. We get paid to learn. We get better, faster because we are forced to interact with reality.

And customers teach us about use cases that we never would have guessed existed.

A Forced Aside on Cow Temperatures

I was telling Packy (and he made me include this) about one of our new salespeople’s first days. He’d received a lead from a farm that wanted to use our robots to take their cows’ temperatures. Unusual temperature is the earliest, cheapest signal that something is wrong with a cow.

Do you know how to take a cow’s temperature?

What you do is, you take a thermometer and you stick it in the cow’s anus. You do this once a week, once a month, or somewhere in between, depending on the stage of the cow’s life. There are 90 million cows in the United States. Based on the cycle time math (it takes about one minute per cow), that’s a thousand-robot opportunity.
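For the curious, my back-of-the-envelope on fleet size, where the cadence and uptime numbers are my assumptions:

```python
cows = 90_000_000
minutes_per_check = 1
robot_minutes_per_year = 24 * 60 * 350   # assume ~350 days of uptime per robot

for cadence, checks_per_year in (("monthly", 12), ("weekly", 52)):
    demand = cows * minutes_per_check * checks_per_year
    print(cadence, round(demand / robot_minutes_per_year), "robots")
# monthly ~2,100 robots; weekly ~9,300 robots
```

At a monthly cadence with near-24/7 utilization you land in the low thousands of robots; a weekly cadence pushes it well higher.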

Two things about that opportunity:

  1. If you’d said, “Evan, if your life depended on it, give me a job that you could automate in the dairy industry,” I would have said milking cows. I’d never think of automating sticking a thermometer in a cow’s anus. That’s a job you learn about from customers.

  2. This is not a job for a humanoid. Surprisingly few jobs are, when you think about it.

One reason it’s not a job for a humanoid is that a humanoid would be overkill. You’re paying for general capabilities (and legs) when what you need is one thing done over and over and over (in a stationary position). Another reason is that a humanoid would be underkill for that specific job: it wouldn’t be set up, neither physically nor in the model, for the specific job.

What you’d need is a flexible gripper, for one thing. But really, it all comes down to entry speed. You can’t just jam it in. The cows don’t like that. And how do you figure out the right entry speed? Every cow is different. Turns out, you need a camera trained on the cow’s face and a model trained on hundreds of cows’ facial reactions; the cow’s face tells you when to slow down (and this behavior should emerge automatically during end-to-end training without any hand-crafted prior). The model needs to be able to understand what to do with that specific sensor data instantly in order to tweak the arm’s speed and angle of attack quickly enough for the cow to let it in. And so on and so forth.

Another reason it’s not a job for a humanoid is that they’re going to be pretty expensive. Elon himself predicted that by 2040, there will be 10 billion humanoids, and they’ll cost $20-25,000 each. About half that cost comes from the legs, which are probably a liability on the farm. Lots of shit in which to slip.

Here’s one more huge reason it’s not a job for a humanoid. Humanoids don’t exist today.

Other than some toy demonstrations, humanoids just do not exist in the field today. Generally intelligent robots certainly do not exist in the field today.


Sidebox: What about humanoids? (defined here as legged bipeds)

The promise of humanoids is captivating to many investors (especially Parkway Venture Capital). Understandably so. “The world was created for the human API.” It sounds so nice, and it’s true to some extent.

But that dream collides uncomfortably with reality. As I was recently quoted saying in the WSJ Tesla Optimus Story: “With a humanoid, if you cut the power, it’s inherently unstable so it can fall on someone.” And “for a factory, a warehouse or agriculture, legs are often inferior to wheels.”

I’m incentivized to say that, so don’t take it from me. In the same story, the author writes that, “inside the company [Tesla], some manufacturing engineers said they questioned whether Optimus would actually be useful in factories. While the bot proved capable at monotonous tasks like sorting objects, the former engineers said they thought most factory jobs are better off being done by robots with shapes designed for the specific task.” (That’s what we do with our modular design, by the way. Thanks, Tesla engineers.)

The Tesla engineers aren’t alone. People who run factories and care more about their business than demos don’t see the ROI, which is why you see companies like Figure shifting their focus to the home. This is the dream. A robot in the home is Rosie. But to put a robot in your home, with your kids, it needs to be really reliable.

For humanoids to really be useful in the home, we’d like to coin the HomeAlone Eval.

The humanoid needs to survive in a house with a team of feisty eight-year-olds trying to trip, flip, and slip it — all without injuring them. It’s even hard for a human to remain stable when your kids jump on your back going up the stairs. And if you fall on them, at least you’re soft and fleshy. Robot, not so much. This humanoid eval is much harder to train with RL, but we’ll need to see that before we have one in our house.6

There are interesting approaches to the home that align with our thesis. Matic and now Neo are getting paid to learn inside of your house, from different angles. Matic is starting with a simple and valuable use case - vacuuming and mopping - learning the home, and working up from there. Neo is teleoperating its robots while it collects data.

But autonomous humanoids do not, in any practical sense, exist.


We can wait for humanoids to exist. Or we can be out here learning from customers about all of the things that robots might be able to do as we chew off more and more variability, and then getting paid to learn and perfect those use cases. All while our one-day competitors are stuck in the lab.

We are running as fast as we can with that head start. A big reason we’re able to run so fast is that we’re vertically integrated.

Why Vertically Integrate?

There is a big reason that deployment accelerates learning that has nothing to do with models and everything to do with hardware.

Recall that data is 100-1,000x more efficient when aligned with its hardware. The more of the hardware you control, the more true this statement is.

Most labs use cheap Chinese arms from companies like Unitree. This makes short-term sense. Those arms have gotten really good and they’re very cheap, a couple thousand bucks.

At Standard Bots, we’re betting on vertical integration.

We make an industrial grade arm that’s designed for end-to-end AI control. In particular, torque sensing in the joints. Because when you’re doing AI, you want to be able to record how you interact with the world, and then be able to train the model on that interaction in order to have the model recreate it.

Which is why we care about torque sensing and torque actuation: so the motor can precisely control how hard the joint pushes, and so the robot can feel how the environment pushes back through the joint. If you don’t have that, then you’re kind of stuck with AI for pick and place or folding.
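To see the difference torque control makes, compare a pure position command, which plows through contact at whatever force it takes, with a simple impedance-style torque law. A minimal sketch with illustrative gains:

```python
def torque_command(q, qd, q_target, kp=40.0, kd=2.0):
    """Impedance-style joint control: push toward the target in
    proportion to the error, so contact produces a bounded, sensed
    force instead of a max-force collision. Gains are illustrative.
    """
    return kp * (q_target - q) - kd * qd
```

With measured joint torque in the loop, the same signal runs the other way too: the robot feels the environment push back, and the model can react to it.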

We’ve created a unique way to do the torque sensing. Everyone else does strain gauges and current-based torque sensing. We have a method to directly measure torque through the bending of the metal, and our way is more accurate and more repairable, easier to manufacture, just better all around. Really, really great torque sensing.

To do that, we make practically everything ourselves. We even make our own motor controller to commutate the motor. The things we don’t make are bearings and chips. Everything else, for the most part, is going to be made by us. So that’s really deep vertical integration.

Standard Bots

It’s necessary, though. Old robots don’t work with new models.

Old robots were designed for motion replay: you send a robot a 30-second trajectory and the robot executes it. AI requires 100 Hz real-time control: you’re sending a new command 100 times per second based on what the model sees in real time. A lot of the existing robot APIs don’t even have real-time torque control. I can tell my robot to go somewhere, but I’m just giving it a position. If it hits something, it’s going to hit it at max force. It doesn’t have the precise control I need for it to do a good job.

This doesn’t work for a robot that thinks for itself in real time. So we wrote our own firmware for real-time torque control with motor commutation at 60 kHz (60,000 times per second).
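The shape of that loop, sketched in Python for clarity (the `robot` and `policy` interfaces are stand-ins, not our firmware API, and the real loop runs much closer to the metal):

```python
import time

CONTROL_HZ = 100
DT = 1.0 / CONTROL_HZ

def control_loop(policy, robot):
    """Stream a fresh torque command every 10 ms based on live state.
    Below this loop, firmware handles motor commutation at 60 kHz.
    """
    next_tick = time.monotonic()
    while True:
        obs = robot.read_state()     # joint positions, velocities, torques
        torques = policy(obs)        # model output, not a canned trajectory
        robot.send_torques(torques)
        next_tick += DT
        time.sleep(max(0.0, next_tick - time.monotonic()))
```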

This firmware makes our robots smoother, more precise, and more responsive, and also easier and more fun to use. This is really important because it means that we can physically handle a lot more use cases. This, in turn, means that hardware won’t limit our ability to eat more of the spectrum.

Between putting these arms that can physically handle a lot of use cases out in the field and our own data collection for pre-training — a handheld device7, our actual arms8, and increasingly, AR/VR9 — we’re vertically integrated on the data side, too.

This data feeds our pre-training mix. Think of it as the first industrial foundation model for robotics pre-training. More vertical integration. As discussed, this model can be smaller, add core skills over time, and can be deployed with post-training on a specific task.

A mix of hundreds of factories, which are our customers. Payloads up to 66 pounds, not this three-pound bullshit. Industrial environments, industrial equipment. An industrial-grade arm that’s IP-rated and made for 24/7 operation, paired with an industrial-grade model.

Of course, we’re thinking about everything a person could do in a factory warehouse and putting that into our pre-training mix, just like everyone else. The difference is, our robots then quickly go out and learn everything a person actually does in a factory.

This is a fundamental bet we’re making.

Some companies are betting that they can just go create some model, around which an ecosystem will develop, and they’ll then bring their product to market.

We think that the market is too nascent for that.

The tight integration between hardware, data, and model is so crucial while we are still learning how to do new use cases that we believe vertical integration is the only way to do it right.

This is how new technology markets develop. In Packy’s Vertical Integrators, Part II, Carter Williams, who worked at Boeing in Phantom Works, explained that the need for vertical versus horizontal innovation moves in cycles. “Markets go vertical to innovate product, horizontal to reduce cost and scale. Back and forth over a 40-50 year cycle.”

In robotics, we are still very much in the “innovate product” phase of the cycle.

One day, once we’ve collected data on use cases that represent the majority of the value in the industrial economy (and beyond), the industry will probably modularize to reduce cost and scale. Hopefully, we won’t have to make everything ourselves for the rest of time. We still have to today.

The other thing about vertical integration is that controlling everything helps us adapt fast. Every day, we learn something new about how customers operate, what their needs are, how different types of factories run. The ability to learn something, fix, and adjust is invaluable.

For example, we realized in the field that models actually have to understand the state of external equipment, not just the thing the robot is working on. Often there’s an operator that’s using a foot pedal at a machine. We need to collect data on the foot pedal — like whether it is pressed or released — and the model needs to be able to understand these states. From there, we need to make a generic interface that works for all types of external equipment.
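As a sketch of what such a generic interface might look like (the names and fields here are hypothetical, not our actual schema):

```python
from dataclasses import dataclass
from enum import Enum

class PedalState(Enum):
    RELEASED = 0
    PRESSED = 1

@dataclass(frozen=True)
class EquipmentState:
    """Hypothetical snapshot of external equipment, logged alongside
    robot state so the model can condition on it."""
    device_id: str
    discrete_states: dict      # e.g. {"foot_pedal": PedalState.PRESSED}
    continuous_signals: dict   # e.g. {"spindle_rpm": 1200.0}

snapshot = EquipmentState(
    device_id="press_01",
    discrete_states={"foot_pedal": PedalState.PRESSED},
    continuous_signals={},
)
```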

And there’s the other thing we’ve discussed as crucial to our business: it’s really important to be able to collect data on failure. So we have a whole loop on that too.

That’s it. That’s the plan.

Robotics is bottlenecked on data. We get paid to collect data by building better robotic arms for industrial use cases. These use cases are broader and larger than we anticipated. For each one, we deploy, learn, find edge cases, intervene, collect the data, and improve. This is necessary at the model level for a specific task, and it’s also necessary at the level of the system. And the only way we are able to do this quickly (or at all) is because we are vertically integrated.

Rinse, robot, repeat.

This is how we eat the spectrum, one small step at a time.

Small Steps, Small Models, Big Value

In The Final Offshoring, Jacob Rintamaki’s excellent recent paper on robotics, he writes, “one framing of general-purpose robotics that I haven’t seen much of isn’t that we now have a robot that can do anything, but rather we have a robot which can quickly, cheaply, and easily be made to do one thing very well.”

That is our plan. To do one thing very well, for every industrial case, one thing at a time. Eventually, we will reach across the spectrum of use cases.

“The strategy for these companies then,” Rintamaki continues, “given that reducing payback time may be All You Need, is to deploy into large enterprise customers as aggressively as possible to start building moats that their larger video/world-model focused competitors still find difficult to match.”

Yes.

Here, I want to reintroduce the concept of variability to discuss the nature of our moats.

There is the data moat that I’ve written about at length here. We are getting paid to collect the exact data we need to make our specific robots better.

What we do with that data, for the particular slice of variability that makes up each use case, may be equally important but is less obvious.

We think that general models will not lead to a giant leap without all of the right robot data. We also believe that smaller models outperform larger ones for many use cases on a number of critical dimensions like cost and speed while accounting for the majority of value available to robots.

Solving everything in a large general model is tempting: we’ve trained LLMs already. Leverage the trillion-dollar machine!

LLMs have strong semantic structure. Word embeddings put similar words close together, and (weirdly, beautifully) semantic distance in language often mirrors semantic distance in tasks.

So we get the appealing idea: use an LLM backbone, condition behavior on short text labels, and store many skills in one model. “Pick.” “Place.” “Stack.” “Insert.” Same model, many skills. That’s the VLA (vision-language-action) dream.
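Here is a toy version of that idea, with random stand-in weights and embeddings (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, Z_DIM, ACT_DIM = 16, 8, 7

# One network, many skills, selected by a text-label embedding.
# All weights and embeddings are random stand-ins.
skill_embeddings = {name: rng.normal(size=Z_DIM)
                    for name in ("pick", "place", "stack", "insert")}
W_obs = rng.normal(size=(OBS_DIM, ACT_DIM))
W_z = rng.normal(size=(Z_DIM, ACT_DIM))

def policy(obs, skill):
    z = skill_embeddings[skill]            # "pick" vs "place" steers behavior
    return np.tanh(obs @ W_obs + z @ W_z)  # same weights serve every skill

action = policy(rng.normal(size=OBS_DIM), "pick")
```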

But there’s a reason diffusion took off first in robotics.

LLMs are autoregressive: predict next action once → feed it back in → compounding error if wrong. The errors matter hugely when you’re controlling physical systems.

On the other hand, diffusion is iterative: denoise progressively → a single bad step doesn’t doom the rollout.

But there are challenges to making this work well at the architectural level.

LLMs were designed for tokens: discrete symbols, or words. Robots operate on continuous values: positions, velocities, torques. Numbers like 17.4343 instead of words like “seventeen.”

With LLMs, every digit becomes a token. Precision explodes token count, which means latency explodes too. Your robot gets slow, and a slow robot isn’t a particularly useful robot.
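A toy illustration, assuming roughly one token per character for numeric strings (real tokenizers vary, but digit-level splitting is common):

```python
command = [17.4343, -3.1416, 0.0021, 8.9000, -12.3456, 4.5678]  # six joint targets

as_text = " ".join(f"{v:.4f}" for v in command)
approx_tokens = len(as_text.replace(" ", ""))  # assume ~1 token per character
print(approx_tokens, "text tokens vs", len(command), "floats per step")
# ~40 tokens vs 6 floats; at 100 Hz that's ~4,000 tokens/second of pure
# number-reading, which is where the latency goes.
```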

This is the core tension:

  • Robotics success so far has leaned heavily on diffusion-style control

  • LLMs are autoregressive and token-based

  • Physical actions don’t map cleanly to tokens

Pi has bridged this gap: they’ve found representations of robot action that play nicely with language-model infrastructure. That’s real, hard, and impressive work.

But here’s another spicy take.

We’re not working with language-model infrastructure because it’s the perfect architecture for robotics. It’s because we, as a species, have poured trillions of dollars and countless engineering hours into building LLM infrastructure. It’s incredibly tempting to reuse that machine.

So, despite its imperfections, taking an LLM and sticking on an action head to predict robot motions (together known as a VLA) is the best way for us to train the base models that learn many skills from demonstrations across many different customers and tasks.

There’s also the “fast and slow” split: use LLMs as supervisory systems that watch, reason, and call skills, rather than directly controlling motors. Figure’s approach is a good example of that pattern.

The problem with general models is that they have to solve for everything. They are predicated on the belief that if you throw enough compute and data into a single huge model, you will be able to make a robot that can do almost anything. They solve for max variability: you can walk into a completely unseen environment with unseen tools, unseen equipment or fridge or stove, and you can handle all of that perfectly. And the objects are breakable. That’s a tremendous amount of variability to account for in one model, so the model needs to be huge.

Huge models mean models that are more expensive (at training and inference), harder to debug, and slower, which you can see in humanoid performance today.

BUT, and here is a key insight: parameter count scales with variability, not with value.

We think that most of the market can be unlocked by a surprisingly small number of parameters.

Let’s use the example of self-driving again. Apple published a paper on its self-driving work in which it reports using just 6 million parameters for its decision-making and planning policy. Elon said recently that Tesla uses a “shockingly small” number of parameters for their cars.

This is orders of magnitude smaller than the hundreds of billions or trillions of parameters we’re used to hearing about for LLMs, because LLMs need to be ready to answer almost any question imaginable at any time, and because each individual LLM user isn’t worth enough to fine-tune custom models for.

It’s the opposite case with robotics if you’re solving for a specific task with constrained variability. The model will need to know how to do a few things very well. Given the cost of deployment and the economic value created, it is absolutely worth fine-tuning a model for that use case.

That means we can distill our larger base model into much smaller models. Which we are. We ship really small models sometimes. They’re low-parameter models that can solve, across the spectrum, a really useful number of things. And we can concentrate the robot’s limited compute on narrower problems, which leads to better performance.

We use small amounts of the right data to feed small models that are cheaper and faster than the large, general models they’re distilled from, and that, fine-tuned on that data, can even outperform them.
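A minimal sketch of that distillation step, assuming a standard teacher-student setup in PyTorch rather than our actual training recipe:

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, obs, optimizer):
    """One step of behavior distillation: the small task-specific model
    learns to match the large base model's actions on in-domain data."""
    with torch.no_grad():
        target_actions = teacher(obs)   # big, general, slow
    pred_actions = student(obs)         # small, specialized, fast
    loss = F.mse_loss(pred_actions, target_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```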

Of course, the better, cheaper, and faster we are for each specific use case, the more broadly we will deploy, the faster we will learn, and the sooner we can eat more of the spectrum.

At least, that’s my bet.

Is Standard Bots Bitter Lesson Pilled?

My bet isn’t exactly the trendiest. It’s not fun betting against the magic of emergent capabilities.

In one of our conversations, Packy asked me if our approach was Bitter Lesson-pilled, referring to Rich Sutton’s 2019 observation that “the biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.”

He pointed me to the 2024 Ben Thompson article, Elon Dreams and Bitter Lessons, in which Thompson argues that starting with a bold “dream,” then engineering costs down to enable scale, beats cautious, incremental approaches and creates new markets.

Waymo looks like it’s in the lead now, Thompson argues, but its approach — LiDAR for precise depth, cameras for visual context, radar for robustness in adverse conditions, HD maps, a data pipeline, etc. — is bound to plateau, because it’s more expensive and its dependencies make it less likely to achieve full autonomy.

Tesla FSD, on the other hand, is betting on end-to-end autonomy via vision (cheap cameras) and scaled compute. Use cameras-only at inference to keep vehicles cheap, harvest driving data from millions of Teslas to train large neural networks, distill expensive sensors and mapping used during training into a lightweight runtime, and compound safety through volume until Level 5 full autonomy everywhere becomes viable rather than geofenced Level 4. This is the Bitter Lesson-pilled approach.

I had to think about my answer for a second. I hadn’t thought about it and I wasn’t entirely sure.

It’s definitely a fair question. Could someone come in and just create a super, super intelligence that only needs to be communicated with through a super simple voice interface? I mean, theoretically, obviously, yes. Right?

Wrong. The truth is that you need the data to win.

You can’t be Bitter Lessoned by someone that doesn’t have the training data.

Tesla was only in the position to Bitter Lesson everyone because they had the distribution to collect the data in the first place. The iterative approach, Tesla’s Master Plan, is what enabled the Bitter Lesson approach in the first place.

The iterative, customer-funded approach, the one Tesla took and the one we are taking, is how you get the data that lets you benefit from scale. Thompson himself wrote, “While the Bitter Lesson is predicated on there being an ever-increasing amount of compute, which reliably solves once-intractable problems, one of the lessons of LLMs is that you also need an ever-increasing amount of data.”

The Bitter Lesson in Robotics is that leveraging real-world data is ultimately the most effective, and by a large margin.

You can’t Bitter Lesson your way to victory if you don’t have the training data, and you can’t get the training data without deployment. What Sutton would really suggest, I think, is to get as many robots in the field as possible and then let them learn in a way that is interactive, continual, and self-improving.

We’re not there yet. We still have humans in the loop.

But the first step to all of this, and perhaps our company’s best hedge against the Bitter Lesson, is getting robots deployed to as many customers as we can, as quickly as we can.

What If I’m Wrong?

It’s hard to work in robotics for too long without getting humbled. This is an industry that has, for decades, fallen short on its promise.

So how confident am I that I’m right and basically the rest of the industry is wrong?

I mean, decently confident, confident enough to spend my most productive years building this company. I’m confident that our approach is differentiated and logically consistent. But fully confident? No.

It’s worth saying explicitly that this isn’t a case of us versus the rest of robotics. Some of the people I respect most in the field are taking the opposite bet.

Lachy Groom, the CEO of Pi, is a close friend and led the Series A in Standard Bots. He’s building with a foundation-model view of robotics, and I think this work is important. We talk about this stuff all the time and we both want the same thing: to see tons of robots out in the world, no matter whose approach gets us there fastest.

If the foundation-model view wins out, though, it’s hard to see how any one company will be able to winner-take-all the model market on compute and algorithms alone. There are now at least four frontier LLM labs with basically the same model capabilities. On-demand intelligence, miraculously, is becoming a commodity.

If you were going to run away with this market, my bet is that you’d have to do it with data and with customer relationships, kind of like Cursor for robotics.

Let’s say, for argument’s sake, that Google, Skild, and Physical Intelligence all solve general physical intelligence. In that case, I think whichever company actually owns the customer relationship has the power. That company can just plug in the lowest bidder on the model side.

This is related to the bet that Packy argued China is making in The Electric Slide: if I’m the company that can build robots and sell them to customers, and particularly if customers are already getting value from Standard Bots, then I want the models to commoditize. I want them to be as powerful as possible. Commoditize your complements.

It’s good for us, at the end of the day. Getting a product in the field, selling to customers, and iterating is both a competitive advantage and a hedge. All I care about is the advantage, though.

We believe, like so many of the people working in our industry do, that there will be no bigger improvement to human flourishing than successfully putting robots to work in the real economy. We are on the precipice of labor on-tap, powered by electrons and intelligence. That means cheaper, better goods. It means freeing humans from the work they don’t enjoy. Being a farmer is more fun when you don’t have to take the cow’s temperature yourself. It means that the gap between thought and thing practically disappears. And those are just the first-order effects. We can’t know ahead of time what fascinating things people will dream up for our abundant robotics labor force to do; all we can know is that they will be things people find useful.

We all believe this. We all want to produce a giant leap for mankind. The open question is how we get from here to there.

I believe the way to build the world’s largest robotics company is to eat the industry one use case at a time. And I’m so hungry I could eat a cow.


Big thanks to Evan for sharing his knowledge, to the Standard Bots team for input, to Meghna for editing, and to Badal for the cover art and graphics.


That’s all for today.

For not boring world paid members, I played around with Claude to produce some extra goodies. We made an annotated bibliography with links to papers that support (or push back on) Evan’s argument from both the robotics and business strategy side and a GAME. Members can also ask Evan questions on today’s cossay.

Join us in not boring world for all of this and more behind the paywall.

Thanks for reading,

Packy


Weekly Dose of Optimism #175

2026-01-10 21:57:13

Hey friends 👋 ,

Happy SATURDAY and welcome to a special weekend edition of the Weekly Dose. We’re sending today because we sent our deep dive on a16z yesterday, but let me know what you think about the weekend send. Might be a good way to spend a Saturday morning, coffee in hand, optimism in veins.

One thing I’m personally optimistic about right now is not boring world. It was a great launch week: #1 rising in business, #2 new bestseller overall, top 60 in business. And as we speak, I’m working on drafts of the first two cossays. We might also begin sharing more of the stories that we left on the Weekly Dose cutting room floor. Join us.

Subscribe now

For now, we have a new food pyramid, Chinese Peptides, Boltz Lab, HALEU $$$, and Rintamaki on Robots.

Let’s get to it.


Today’s Weekly Dose is brought to you by… Framer

Framer gives designers superpowers.

Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in minutes. Whether you’re starting with a template or a blank canvas, Framer gives you total creative control with no coding required. Add animations, localize with one click, and collaborate in real-time with your whole team. You can even A/B test and track clicks with built-in analytics.

Ready to build a site that looks hand-coded without hiring a developer?

Launch a site for free on Framer dot com. Use NOTBORING for a free month on Framer Pro.


(1) There’s a New Food Pyramid in Town

Joe Gebbia and the National Design Studio

We all grew up looking at the food pyramid. Eat lots of carbs and few fats. Gospel.

Then, in the early 2000s, we learned the food pyramid was probably a pyramid scheme thanks to Gary Taubes’ New York Times piece What if It’s All Been a Big, Fat Lie?

The story goes something like this. We used to eat good, normal diets: meat, eggs, butter, vegetables, the stuff humans had eaten for millennia. Then heart disease rates started climbing in mid-century America, and in the 1950s an ambitious University of Minnesota physiologist named Ancel Keys became convinced that dietary fat was the culprit. His Seven Countries Study showed a correlation between saturated fat consumption and heart disease, though critics later pointed out he cherry-picked his countries and ignored confounding variables. Keys was brilliant and ferociously combative, and he won the institutional war, capturing the American Heart Association and marginalizing skeptics.

Meanwhile, in the 1960s, the sugar industry was quietly paying Harvard scientists to publish research blaming fat instead of sugar, and the Harvard scientists were happy to oblige their sugar daddies.

In 1977, Senator George McGovern's committee translated the fat hypothesis into the first federal dietary guidelines, drafted by staffers with no scientific background over the objections of researchers who said the evidence wasn't there yet. Once the guidelines existed, they created their own gravity: the USDA built food guides around them, the NIH funded research that assumed they were correct, and food companies reformulated everything to be low-fat (adding sugar to compensate). The Food Pyramid arrived in 1992, telling Americans to eat 6-11 servings of bread and pasta while using fats “sparingly.”

Americans dutifully complied, fat consumption dropped, carb consumption soared, and obesity rates tripled. It took until the 2010s for the scientific consensus to finally crack. But since then, frustratingly, even though we’ve known better, the Food Pyramid didn’t change, for what I’m assuming are reasons of institutional sclerosis.

And then, this week, the National Design Studio just … published … a new one.

It’s beautiful, and it’s more correct, which means that kids and grownups and whoever else are probably just a little more likely to eat the right stuff. And I mean check out the website, which is a government website.

What makes me most optimistic, though, is that there was this obviously dumb and wrong thing that everyone agreed was dumb and wrong but did nothing about, and we now, as a nation, have done something about it. I bet there are a lot of other obviously dumb and wrong things we can fix while we’re at it.

(2) Not For Human Consumption - Grey Market Peptides

From Vectorculture on Substack:

Not For Human Consumption

“The FDA classifies BPC-157 as Category 2 (ineligible for compounding) while Telegram communities with 35,000 members crowdsource third-party lab testing at $850 per batch. Eli Lilly’s retatrutide achieves 28.7% weight loss in Phase 3 trials while Chinese suppliers ship enough semaglutide API to produce over one billion starter doses. At a December 2025 …”

And if eating real food doesn’t do the trick…

Chinese peptides are so hot right now. Apparently everyone in SF is doing them. For the uninitiated (me), a peptide is a short chain of amino acids (the building blocks of proteins) that can act as a signaling molecule in the body to suppress your appetite or repair your tissues. A Chinese peptide is a peptide that you can get access to cheaper or extra-legally.

This essay goes into much more depth on Chinese peptides, regular GLP-1s, super GLP-1s (Gen3 GLP-1/GIP/glucagon agonists…) and who should get to decide how much we enhance our own bodies.

A few things stand out…

  1. If you thought GLP-1s were miracle drugs, wait for Gen3 GLP-1/GIP/glucagon agonists, which the author calls “the ‘holy grail’ of weight loss medications. They work. Astonishingly well.” They are currently in Phase 3 trials.

  2. Plenty of peptides people are using have thin medical backing and even the ones that do need complex supply chains to work, which the grey market probably isn’t respecting, making drugs even less effective.

  3. Cheap versions of existing drugs are WAY cheaper: grey market semaglutide costs ~$50/month versus $24,000 for brand-name — an 80-200x price differential

These are all considerations in the present. Over the medium term, the author sees a clear trajectory: oral GLP-1s democratize access, myostatin inhibitors add muscle preservation, and eventually gene therapy moves from wealthy self-experimenters to mainstream application. Everyone is going to be skinny and jacked.

Speaking of Oral GLP-1s! For the less adventurous among us, Novo Nordisk launched its Wegovy pill, the first FDA-approved Oral GLP-1 for Weight Loss, on Ro!

(3) Boltz Launches Boltz Lab Platform with AI Agents for Biomolecular Design

On Thursday, Boltz launched Boltz Lab: a platform that provides scientists with access to state-of-the-art open-source AI models:

  • Boltz-1 for biomolecular structure prediction (released December 2024, matching AlphaFold3 accuracy),

  • Boltz-2 for structure and binding affinity prediction (June 2025, in collaboration with Recursion and MIT),

  • BoltzGen for de novo protein and peptide binder design (October 2025)

  • Plus, new AI agents for small-molecule hit discovery and protein design.

We like open sourcing models to give scientists superpowers with which to give the rest of us superpowers.

We also like the investor list here. Boltz Public Benefit Corporation announced a $28 million Seed round from a16z, who I wrote about yesterday, and Amplify Partners, led by great friend of not boring and former not boring capital biotech partner, Elliot Hershberg.

I texted Elliot to ask about the deal, and he told me this:

Boltz is a David and Goliath story. Google DeepMind, with infinite resources and a world-class team, made a huge breakthrough with AlphaFold. But by AlphaFold2, the code wasn’t open-source for other researchers to build on. So a small team at MIT decided to build their own.

With resources for only one training run, the Boltz team built and shipped a state-of-the-art open model for anybody to use. It took off like wildfire and is now used by every top 20 pharma company and >100k scientists worldwide. Now as Boltz PBC, this team has the resources they need to build infrastructure for scientists to make the best possible use of these models for programming biology.

Biology is hard, so I have a very complicated process to try to understand whether something new in bio is legit: I text Elliot. Boltz passes the Elliot test with flying colors. Get out there and program biology, everyone.

(4) General Matter Gets $900M DOE Contract for Domestic HALEU Production

America used to enrich uranium into the LEU and HALEU that most nuclear reactors use as fuel, and then we stopped, and now we rely on Russia, basically. Read our friends at Crucible Capital on the Nuclear Fuel Value Chain for a clearer picture.

Now, we have companies like Founders Fund-backed, Scott Nolan-led General Matter bringing enrichment back to the US, and General Matter now has a $900 million task order from the DOE to “create domestic HALEU enrichment capacity.”

In just a couple of years, we’ve gone from the government blocking nuclear to funding it aggressively, in part because of its growing popularity, in part because of the data center need, and in large part because entrepreneurs are finally giving them something to fund.

To use all this new fuel, we’re going to need a lot of new reactors, so more good news: Aalo Atomics is making progress on the first new reactor building at Idaho National Labs in 59 years!

Critical progress on all fronts.

(5) The Final Offshoring

by Jacob Rintamaki

Jacob Rintamaki is one of a small group of low-twenty-somethings who make me feel like I was educated via cups-on-string whilst they were educated via Somos fiber. Like I simply do not know how he knows so much already.

I first met Jacob a couple of years ago when I stumbled across his work on nanosystems. He wrote about it a bunch, but this monster is probably the best place to start (and finish) for the curious: A Technical Review of Nanosystems.

Nanosystems are like little tiny tiny atomically tiny machines that we haven’t actually figured out how to build yet, so it’s probably unsurprising that when Jacob turned to macrobots, regular old robots, the analysis would be child’s play.

Jacob’s writing on the topic is a great mix of technical, economic, and guy-in-the-scene-in-SF-hearing things. I particularly like his idea that robots and data centers are a match made in heaven and form an elegant flywheel: robots build AI infrastructure → better models → smarter robots → more infrastructure.

That’s the means.

What I particularly love about this freewheeling exploration of our robotic future is that Jacob also writes about the meaning. He ends the piece with two short stories on meaning in a post-labor world1. The optimistic take, and the one I agree with, is that the robots are going to make us more human. Beep boop to that.

You can get the PDF here if you want to read the old-fashioned way, by hand.

BONUS: Below the Paywall

I’m going to try something here. I like seeing not boring world grow, looking at number go up on the Stripe dashboard, and taking down the people ahead of me on the Substack Business Top Bestsellers list, so I’m throwing up a paywall.

There’s nothing behind it really. Just a link to a video that I’m in the middle of and really enjoying that you can get on the internet for free. I bet if you’ve been reading not boring for a while, you can even guess who it’s from.

But if you want to subscribe, I’d love to have you and I promise to try to make it worth your while (eventually, not now. whatever is down there is not worth $20/mo or $200/yr).


a16z: The Power Brokers

2026-01-10 02:35:20

Welcome to the 310 newly Not Boring people who have joined us since Tuesday! Join 256,606 smart, curious folks by subscribing here:

Subscribe now


This article reflects the views and opinions of the author and does not necessarily reflect the views or beliefs of a16z. Investment returns are included throughout this article. Past performance is not indicative of future results. See related fund returns and disclosures at the end of this piece.


Hi friends 👋 ,

Happy Friday! IT’S TIME TO WRITE about a16z.

Today, a16z is announcing $15 billion in fresh funds.

To commemorate the occasion, I’m writing a Deep Dive on the Firm. I spoke with the Firm’s GPs, LPs, and ~$200 billion worth of portfolio founders; reviewed documents and presentations; and analyzed returns data for a16z’s funds since inception (see appendix with disclosure information at the end).

There is a lot of writing on the internet about what’s wrong with a16z’s approach. You probably know the arguments. They’ve followed the firm since inception.

I think it is much more interesting to understand what all of these smart people, who have been right in the past, think they’re doing now.

And look, I am about as far from an unbiased observer as anyone without an @a16z.com email address could be.

For more than two years, I was an advisor to a16z crypto (but am currently not compensated by the firm). Marc Andreessen and Chris Dixon are LPs in not boring capital. From time to time, I am on the same cap table as a16z. I am friends with a lot of people in the Firm and with most of the New Media team. I partner with, like, and respect these people.

But look, we’re not counting on me to analyze whether a16z’s pitch at this moment in time is worth investing in. Sophisticated institutional LPs have decided, to the tune of $15 billion, to do so. It will take a decade to know whether they made the right decision, and nothing either I or any hypothetical critic could say will change that outcome, just as it hasn’t in the past.

What I can hopefully bring is a way to think about what a16z actually is from my unique experience. I think a16z is the best-marketed Firm in venture. It can, and does, tell a story about itself. Based on my experience, I can tell you that its story is consistent with its actions. The things that a16z says about itself to the public are the same things it trains its team on internally. The pitch it makes is the same one it’s made since its first Offering Memorandum. And you will be able to judge the returns for yourself.

There are a lot of great venture capital funds and investors, some of the best of whom have been profiled recently. Their approaches and successes are increasingly well-appreciated and well-understood.

But a16z is doing something different, bigger, less… understated. It doesn’t feel like venture capital is supposed to, in part, I think, because I don’t think a16z cares if it’s doing “venture capital.” It just wants to BUILD the future and eat the world.

To read the full Deep Dive in your browser, click here.

Let’s get to it.


Today’s Not Boring is brought to you by… Vanta

Compliance is painful, but if ya want to grow, ya gotta do it. Luckily, Vanta’s got you covered.

Get Vanta’s Compliance for Startups Bundle here, and get back to growing.


a16z: The Power Brokers


“I’m living in the future so the present is my past,

My presence is a present kiss my ass.” – Kanye West, Monster


Andreessen Horowitz hears your feedback.

That it’s too loud. That it should shut up and dribble, politically speaking. That you don’t agree with a recent investment or two. That it is unbecoming to Quote Xeet the Pope. That there is no way it will ever generate a reasonable return for LPs on such enormous funds.

a16z does hear you. It has been hearing you, at this point, for nearly two decades.

Like in 2015, when New Yorker writer Tad Friend sat down to breakfast with Marc Andreessen while writing Tomorrow’s Advance Man. Friend had just heard from a rival VC who wanted to get a word in - that a16z’s funds were so large, and ownership percentages so small1, that to get 5-10x aggregate returns across its first four funds, they’d need their aggregate portfolio to be worth $240-480 billion.

“When I started to check the math with Andreessen,” Friend writes, “He made a jerking-off motion and said ‘Blah-blah-blah. We have all the models—we’re elephant hunting, going after big game!’”

I want you to keep that image in your mind. To preempt Marc’s reaction to the reaction you’re about to have to the next paragraph.

Today, a16z is announcing that it has raised $15 billion across all of its strategies, bringing its total regulatory assets under management (RAUM) to over $90 billion.

In a year in which venture fundraising was dominated by a small number of large firms, a16z raised more than the next two firms - Lightspeed ($9B) and Founders Fund ($5.6B2) - raised in 2025, combined.

In the worst VC fundraising market in five years, a16z accounted for over 18% of all US VC funds raised in 20253. In a year in which it took the average VC fund 16 months to close their fund, a16z took just over three months from start to finish.

Split up, four of a16z’s individual funds would be in the 2025 top 10 among entire firms’ raises: Late Stage Venture (LSV) V would be #2, Fund X AI Infra and Fund X AI Apps would be tied for 6th, and American Dynamism (AD) II would be tenth.

One could argue that this is way too much money for a venture fund to deploy with any reasonable expectation of generating outsized returns. To which, I imagine, a16z collectively makes a jerking-off motion and says, “Blah blah blah.” It is elephant hunting, going after big game!

Today, across all their funds, a16z is an investor in 10 of the top 15 private companies by valuation: OpenAI, SpaceX, xAI, Databricks, Stripe, Revolut, Waymo, Wiz, SSI, and Anduril.4

It has invested in 56 unicorns over the prior decade through its funds, more than any other firm.5

Its AI portfolio includes 44% of all AI unicorn enterprise value, also more than any firm.6

And from 2009-2025, a16z led 31 early rounds of eventual $5b companies, 50% more deals than the two next closest competitors.

It has all the models. It has the track record, now, too.

Below is the aggregate portfolio value of those first four funds, the ones that would have had to be worth $240-480 billion to clear that rival VC’s hurdle. Combined, a16z Funds 1-4 had a total enterprise value of $853 billion at distribution or latest post-money valuation7.

And that was just at the time of distribution. Facebook alone has added more than $1.5 trillion in market cap since!

Some form of this pattern keeps playing out: a16z makes a crazy bet on the future. Those in the know say it’s stupid. Wait some years. Turns out it’s not stupid!

a16z raised its $300 million Fund I in 2009 on the heels of the Global Financial Crisis, touting an operating platform to support founders. “We visited a lot of our VC friends and many of them said it was a really dumbass idea and we should definitely not pursue it and it’s been tried before and it didn’t work,“ Ben recalls. Today, nearly every significant VC has some flavor of platform team.

When it invested $65 million of that fund alongside Silver Lake and other investors to acquire Skype from eBay for $2.7 billion in 2009, “everyone said it was an undoable deal due to IP risk” (eBay was in litigation with Skype’s founders over the technology at the time of the deal). Ben recounted the skepticism in a blog post less than two years later when Microsoft acquired Skype for $8.5 billion.

Marc and Ben raised a $650 million Fund II in September 2010, and proceeded to make large late-stage investments in companies like Facebook ($50 million at $34 billion), Groupon ($40 million at $5 billion), and Twitter ($48 million at $4 billion), betting the IPO window would open. Rivals bristled to The Wall Street Journal in a classic, A Venture-Capital Newbie Shakes Up Silicon Valley, that private share deals were just not what venture capitalists did (the now-common practice was so new that the word “secondary” makes no appearance). Matt Cohler, a Benchmark partner, dropped this banger: “There’s also money to be made in pork bellies and oil futures, but that’s not what we do.” In November 2011, Groupon IPO’d, opening at $17.8 billion. In May 2012, Facebook IPO’d at $104 billion. And in November 2013, Twitter IPO’d, closing its first day at $31 billion.

By the time Marc and Ben raised a $1 billion Fund III and a $540 million parallel opportunities fund in January 2012, the criticism shifted to a familiar one: scale. a16z’s funds represented 7.5% of all US VC dollars raised in 2012, while VC kind of sucked. The 2014 Harvard Business School Case Study on a16z notes a 2012 Kauffman Foundation report which found that, “Venture capital has delivered poor returns for more than a decade.” In 2012, VC investments returned an average of 8.9% to the S&P 500’s 20.6% per Cambridge Associates. Legendary venture capitalist Bill Draper said, “The growing consensus about venture capital in Silicon Valley is that too many funds are chasing too few truly great companies.” Which certainly rhymes with today.

In 2016, The Wall Street Journal published an article that Acquired’s David Rosenthal called “so clearly a hit piece planted by rival venture firms” titled Andreessen Horowitz’s Returns Trail Venture-Capital Elite, when its funds were seven, six, and four years old, respectively. It showed that while AH Fund I was a top 5% VC fund, AH II was merely top quartile, and AH III was actually slightly outside of the top quartile.

Which is funny, in hindsight, because that fund, AH III, is a monster fund: sitting at an 11.3x Net TVPI (total value to paid-in capital after fees) as of September 30, 2025, and when you include the parallel fund, it’s at a 9.1x Net TVPI.

AH III includes Coinbase, which resulted in distributions of $7 billion gross to a16z LPs across the funds it’s in, Databricks, Pinterest, GitHub, and Lyft (although not Uber, proof positive that one sin of omission trumps every sin of commission), and I believe is one of the best performing large venture funds of all time. Since Q3 2025, Databricks (currently a16z’s largest position) raised at $134 billion, which means Fund III’s performance is even stronger now (assuming other positions have not decreased). a16z has already distributed $7 billion net to LPs from AH III and AH III Parallel with nearly as much in Unrealized Value still on the table.

Much of that unrealized value sits in one company, Databricks: a big data company that was very small, still a few months away from hitting the half-billion-dollar valuation mark, when the WSJ was writing off a16z in 2016. Databricks represents 23% of a16z’s Net Asset Value (NAV) across all funds.

Spend any time around a16z and you will hear the name Databricks a lot. In addition to being its largest position (and what must be one of the three largest dollar positions in all of venture capital), its story is the cleanest example of how a16z operates at its best.

Databricks & the a16z Formula

There are some things about the Firm we haven’t discussed yet that are useful to understand before we start talking about Databricks.

First, a16z was founded and is run by engineers. Not just founders, engineer-founders. This influences how they designed the Firm (to feast on scale and network effects), and also how they pick markets and the companies within them.

Second, there is perhaps no bigger investing sin at a16z than investing in second best. If you miss a winner early, you can always invest in a later round. If you invest in second best, you lock yourself out from investing in the winner. This is true even if the eventual winner isn’t yet born.

Third, once a16z believes it has identified the category winner, the classic a16z move is to give it more money than it thought it needed. Everyone makes fun of them for that move.

These three things have been true since the Firm’s earliest days.

Back in the early 2010s, just a couple of years after the founding of Andreessen Horowitz, Big Data was the Big Thing (you remember this) and the era’s dominant Big Data framework was Hadoop. Hadoop used a programming model called MapReduce (originally developed by Google) to distribute computing across clusters of cheap commodity servers instead of on expensive specialized hardware. It ~*Democratized Big Data*~ and a wave of companies sprung up to facilitate and capitalize on that democratization. Cloudera, founded in 2008, raised $900 million in 2014, leading a year in which investment in Hadoop companies quintupled to $1.28 billion. Hortonworks, spun out of Yahoo!, IPO’d that year.

Big data, big dollars. And a16z made none of them.

Ben Horowitz, the “z” in a16z, didn’t like Hadoop. A computer science major before he was CEO of LoudCloud/OpsWare, Ben didn’t think Hadoop was going to be the winning architecture. It was notoriously difficult to program and manage, and Ben thought it was poorly suited for the future: every step in a MapReduce computation wrote intermediate results to disk, which made it painfully slow for iterative workloads like machine learning.

So Ben sat out the Hadoopla. And Marc, Jen Kha told me:

Was just giving him so much shit, because at that point, Hadoop was taking over the headlines, and he was like, ‘We fucking missed it. We totally bungled this. We dropped the ball.’

And Ben was like, ‘I don’t think this is the next architectural shift.’

Then finally, when Databricks came around, Ben said, ‘This might be it.’ And he of course just bet the farm on it.

Databricks came around just in time, and just down the road, at UC Berkeley.

Ali Ghodsi and his family fled Iran in 1984, in the aftermath of the Iranian Revolution, and moved to Sweden. His parents bought him a Commodore 64, which he used to teach himself how to code, well enough, in fact, to get invited to UC Berkeley as a visiting scholar.

At Berkeley, Ali joined the AMPLab, where he was one of eight researchers, including thesis advisors Scott Shenker and Ion Stoica, working to implement the idea in Ph.D. student Matei Zaharia’s thesis paper and build Spark, an open source software engine for big data processing.

The idea was to “replicate what the big tech companies did with neural networks without the complex interface.” Spark set the world record for data sorting speed and the thesis won the award for best computer science dissertation of the year. True to academic form, they released the code for free, and barely anyone used it.

So starting in 2012, the eight met for a series of dinners, over which they decided to team up to start a company on top of Spark. They called it Databricks. Seven of the eight joined as cofounders, and Shenker signed on as an advisor.

Databricks Cofounders - Ali Ghodsi seated in front ~middle, Forbes

Databricks, the team thought, would need a little money. Not a lot, but some. As Ben recounted to Lenny Rachitsky:

When I met with them they were like, ‘We need to raise $200,000.’ And I knew at the time that what they had was this thing called Spark and the competitor was something called Hadoop, and Hadoop had very well-funded companies already running towards it, and Spark was open source, so the clock was ticking.

He also realized that as academics, the team would be predisposed to doing something small. “Professors in general… it’s a pretty big win if you start a company and you make $50 million. Like you’re a hero on campus,” he told Lenny.

Ben had bad news for the team: “I’m not going to write you a check for $200,000.”

He also had really good news for the team: “I’ll write you a check for $10 million.”

His reasoning was, if you’re going to build a company, “You need to build a company. You need to really go for it if you’re going to do this. Otherwise, you guys should stay in school.”

They decided to drop out. Ben upped the check size and a16z led Databricks’ Series A at a $44 million post-money valuation. It owned 24.9% of the company.

This initial encounter - Databricks asking for $200k, a16z going much, much bigger - set a pattern. When a16z invests in you, they believe in you.

When I asked him about a16z’s impact, Ali was unequivocal: “I don’t think Databricks would be around today if it wasn’t for a16z. And Ben specifically. I don’t think we would be around. They truly believed in us.”

In the company’s third year, it was only doing $1.5 million in revenue. “It was far from clear that we would make it,” Ali recalls. “The only person that truly believed it was going to be worth a lot was Ben Horowitz. Much more so than us. Mind you, like, much more than me. To his credit.”

Belief is a cool thing to have. It’s even more valuable when you have the power to make it self-fulfilling.

Like in 2016, when Ali was trying to get a deal done with Microsoft. From his perspective, with overwhelming demand to have Databricks on Azure, it was a no-brainer. He asked some of his VCs for introductions to Microsoft CEO Satya Nadella, which they made, but those introductions got “buried in executive assistant loops.”

Then Ben introduced Ali to Satya properly. “I had an email from Satya saying, ‘We’re absolutely interested in having a very deep partnership,’” Ali recalls, “adding his lieutenants, and their lieutenants. Within a couple hours, I had 20 emails in my inbox, from Microsoft employees who I had tried to talk to before, and they were all like, ‘Hey, when can we meet?’ And it was like, ‘Okay, this is different. This is going to happen.’”

Or in 2017, when Ali was trying to recruit a senior sales executive to keep the foot on the gas. The executive wanted change of control provisions in his contract – essentially, accelerated vesting if the company gets acquired.

It was a sticking point, so Ali asked Ben to help convince the guy that the value of Databricks was “at least $10B.” Ben talked to him, and then sent Ali this email:

Ben Horowitz Email to Ali Ghodsi, September 19, 2018, courtesy of Ali Ghodsi

“You are severely underselling the opportunity.

We are Oracle in the cloud. Salesforce is worth 10X what Siebel was. Workday will be worth 10X what PeopleSoft was. We will be worth 10X what Oracle is. That’s $2T not $10B.

Why does he need change in control? We are not changing control.”

That’s one of the hardest corporate emails of all time, especially considering that Databricks was worth $1 billion at a $100 million revenue run rate then and is worth $134 billion at a more than $4.8 billion revenue run rate now.

“They envision the full potential of the thing,” Ali tells me. “When you’re knee deep in it, like we are operating every day, and you’re seeing all the challenges—the deals are not closing, the competitors are beating you, you’re running out of money, no one knows who you are, people are quitting on you—it’s hard to think about the world that way. But then they come to the board meetings and they tell you, ‘You’re going to take over the world.’”

They were right, and they’re getting paid for their belief. All told, a16z has invested in all twelve of Databricks’ funding rounds. It has led four of them. The company is one of the reasons AH III, from which the Firm made its initial investment, is doing so well, and it is a driver of returns in the larger Late Stage Ventures Funds I, II, and IV.

“First and foremost, they just really care about the mission of the company. I don’t think Ben and Marc think of it as an investment return first. That comes second,” Ali observed. “They’re tech believers who want to change the world with technology.”

If you don’t understand what Ali said about Marc and Ben, you will not understand a16z.

What is a16z?

a16z is not a traditional venture capital fund. On its face, this is obvious! It just completed the largest VC fundraise across all of its strategies since SoftBank’s $98 billion Vision Fund in 2017 and Vision Fund II in 2019. Nothing traditional about that. But even the SoftBank Vision Fund was a Fund. a16z is not that.

Of course, a16z has raised money and needs to generate returns. It needs to be great at this, and to date, it’s been exceptional. Not Boring has returns data on a16z’s funds that we will share below.

But first - what is a16z?

a16z is a cult of technology. Everything it does is in service of bringing about better technology to make a better future. It believes that “Technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential.” Everything flows from that. It believes in the future and bets the firm that way.

a16z is a Firm. It is a business, a company. It is built with the goal to scale, and to improve with scale. There are many characteristics of a Firm that I believe do not apply to a traditional Fund, and we will cover them. I think this distinction solves one of the oddest things about venture capital’s self-image: that venture capital is an industry that sells the world’s most scalable product (money) to its most scalable companies (technology startups) but must not, itself, scale.

This distinction - Firm > Fund - comes from a16z GP David Haber, the most East Coast Finance of the bunch and a self-described student of investment firms as businesses. “The objective function of a fund is to generate the most carry with the fewest people, in the shortest amount of time possible,” he explains. “A Firm is about delivering exceptional returns, and building sources of compounding competitive advantage. How do we get stronger with scale, not weaker?”

a16z is run by engineers and entrepreneurs. Money managers stereotypically try to grab larger pieces of a fixed pie. Engineers and entrepreneurs try to grow the pie by building and scaling better systems.

a16z is a temporal sovereign. It is the institution for the future. The Firm, in its more ambitious moments, views itself as a peer to the world’s leading financial institutions and governments. It has said that it aims to be the (original) JP Morgan for the Information Age, but I think that undershoots the true ambition. If governments work on behalf of chunks of space, a16z works on behalf of that big chunk of time that is the future. Venture capital is simply the way that it’s found it can have the biggest impact on the future, and the business model most aligned with profiting when it does.

a16z makes and sells power. It builds its own power through scale, culture, network, organizational infrastructure, and success, and then gives its power to the technology startups in its portfolio through sales, marketing, hiring, and government relations, primarily, although to hear its founders tell it, a16z will do whatever is in its power to do, which seems to be a lot.

If you were designing such an institution, one that believes that technology is “eating markets far larger than the technology industry has historically been able to pursue,” that Everything is Technology, what you would build is a company that sells the power to win to the hundreds and thousands of companies that might one day come to be the economy. I think you would build an institution that looks a lot like a16z.

Because the companies that might one day become the economy start small and fragile. They start diffuse, each with their own goals and competitors; often, they compete with each other. And they face entities that dominate the present, with no desire to cede ground to new entrants. A small company, no matter how promising, may not be able to hire the very best recruiters who can hire the very best engineers and executives. It might not be able to advocate for policies to give itself a fair shot. It may not have the audience to get its message out to the world in a way that people will listen. It may not have the legitimacy to sell its products to governments and large enterprises who are flooded with pitches promising the next big thing.

It doesn’t make sense for any one small company to invest the billions of dollars it would need to create those capabilities and amortize them over only itself. But if you can amortize those capabilities across all of those companies, across trillions of dollars of future market value, then all of a sudden, the small companies can have the resources of the big companies. They can win or lose on the merits of their product. They can bring about the future the way it should be.

What if you could combine the agility and innovation of a startup with the power and heft of a temporal sovereign?

That’s what a16z is trying to do, and has been trying to do since it was a startup itself.

Why Marc & Ben Started a16z

In June 2007, Marc wrote a blog post titled The only thing that matters, as part of the Pmarca Guide to Startups. It was ostensibly written as advice to startup technology companies, but in hindsight, reads like a manual to founding a16z. And it answered: which of the three core elements of a startup matters most – team, product, or market?

Entrepreneurs and VCs will say team. Engineers will say product.

“Personally, I’ll take the third position,” Marc wrote, “I’ll assert that market is the most important factor in a startup’s success or failure.”

Why? He writes:

In a great market—a market with lots of real potential customers—the market pulls product out of the startup…

Conversely, in a terrible market, you can have the best product in the world and an absolutely killer team, and it doesn’t matter—you’re going to fail…

In honor of Andy Rachleff, formerly of Benchmark Capital, who crystallized this formulation for me, let me present Rachleff’s Law of Startup Success:

The #1 company-killer is lack of market.

Andy puts it this way:

When a great team meets a lousy market, market wins.

When a lousy team meets a great market, market wins.

When a great team meets a great market, something special happens.

What I think Marc and Ben saw in venture capital was a great market (and no one appreciated how great) full of lousy teams (and no one appreciated how lousy).

Between 2007 and 2009, Ben & Marc were figuring out what to do next. They were very successful technology entrepreneurs, who, despite the success, had big chips on their shoulders, and who, because of their success, had the fuck you money with which to say fuck you.

But how?

As entrepreneurs, and then as angel investors, Marc and Ben dealt with a lot of shitty venture capitalists and they thought it might be fun to compete with those guys.

“For Marc, it was not about the money, at least from my vantage point,” David Haber told me. “He’s been rich since he was about 20. In the beginning, it was probably more about punching Benchmark or Sequoia in the face.”

Venture capital had another thing going for it, something that very few other people realized in the depths of the GFC-induced recession: it was perhaps the greatest market on earth. That mattered tremendously to Marc.

Of course, not all of venture capital was lousy. The two firms that Marc wanted to punch in the face - Sequoia and Benchmark - were excellent (Marc quoted Andy Rachleff!), aside from their proclivity for removing founders. And for those founders looking to stay in charge, Peter Thiel had launched Founders Fund in 2005 and was in the middle of deploying the 2007 vintage FF II that would go on to return $18.60 in cold hard cash (DPI) for every dollar invested, as Mario wrote.

But compared to today, on the whole, it was a lazy, clubby, hand-made industry.

There’s this story that Marc likes to tell about a 2009 meeting, when he and Ben were thinking about launching a16z, with a GP at a top firm who compared investing in startups to grabbing sushi off a conveyor belt. According to Marc, this GP told him that:

The venture business is like going to the sushi boat restaurant. You just sit on Sand Hill Road, the startups are going to come in, and if you miss one, it doesn’t matter because there’s another sushi boat coming up right behind it. You just sit and watch the sushi go and every once in a while you reach into the thing and you pluck out a piece of sushi.

That was fine if the goal was to keep your good thing going, “as long as the ambitions of the industry were constrained,” Marc explained to Jack Altman on Uncapped.

But Marc and Ben’s ambitions were not constrained. There would be no greater sin at their firm than “missing one,” not investing in a great company. It mattered greatly. Because they saw that the big tech companies were going to get much, much bigger as their market grew.

“Ten years ago, there were only approximately 50 million consumers on the Internet, and relatively few had broadband connections,” Ben and Marc wrote in the Offering Memorandum for Andreessen Horowitz Fund I in April 2009. “Today, approximately 1.5 billion people are online, and many of them have broadband connections. As a result, the biggest winners on both the consumer and infrastructure sides of the industry have the potential to be far larger than the most successful technology companies of the previous generation.”

At the same time, it had gotten much cheaper and easier to start a company, which meant that there would be more of them.

“The cost of creating a new technology product and entering the market in at least a beta phase has dropped dramatically over the past ten years,” they wrote to potential LPs, “and now is often only $500,000-$1.5 million, as compared to $5-15 million 10+ years ago.”

Finally, the ambitions of the companies themselves grew as they transitioned from being tools companies to competing directly with incumbents, which meant every industry would become a technology industry, and every industry would get larger as a result.

This is why the market was so great at just that time. Marc went on to say:

From the 1960s through, call it, 2010 there was a venture playbook... the companies are basically tool companies, right? Picks and shovel companies. Mainframe computers, desktop computers, smartphones, laptops, internet access software, SaaS, databases, routers, switches, disc drives, word processors - tools.

Around 2010, the industry permanently changed... the big winners in tech more and more are companies that go directly into an incumbent industry.

Was a16z overpaying for companies in those early days? Or was it paying a good price relative to what it realized they could become?

In hindsight, it’s easy to claim the latter. What’s impressive about a16z is that they said the same thing in foresight.

If, as they wrote, the roughly 15 technology companies per year that ultimately reached $100 million in annual revenue generated approximately 97% of the public market capitalization for all companies founded in that year - the now-familiar Power Law - then they had better do whatever it took to be in as many of the companies with the potential to be one of the 15 as possible, and then be in a position to double and triple down on the winners.

And to do that, with just two investing partners, a16z had to think about how to build a firm differently than anyone else had.

So after sharing the basic terms of the AH I investment – $250 million target fund size, of which the General Partners would commit $15 million – Ben and Marc captured their firm strategy in a paragraph.

AH Fund I Offering Memorandum

That is the strategy they are executing against to this day, even as the Firm has grown far beyond two partners and top 5 ambitions.

The Three Eras of a16z

Since that first fund, and throughout the Firm’s history, a16z’s outsized belief in the future, its asymmetric conviction, has been its core competitive advantage in my opinion. It is the point of differentiation that spawns all of the others.

How it has applied that advantage and chosen to differentiate has evolved over time, as the firm’s ambitions, resources, fund sizes, and power have grown.

First Era (2009-~2017)

In the First Era of a16z (2009-~2017), the core insight was that if Software is Eating the World, the best software companies would become far more valuable than anyone was pricing in.

This belief enabled a16z to do three things in order to move from new entrant to Top 5 firm:

Pay up for deals. As discussed earlier, a16z did deals out of its early funds that many others thought were too expensive or off-piste at the time. On the Acquired podcast, Ben Gilbert said, “The common knock was that they were overpaying to buy name brand for themselves to buy their way into winners,” but argued that it was both rational at the time, and noted, “Would anybody argue today that they actually paid too high a valuation for anything they did from 2009 to 2015? Absolutely not.” As Ben Horowitz explained in the 2014 HBS case study, “Even with multibillion-dollar valuations, investors might be underestimating companies’ potential.” That underestimation was a16z’s advantage.

Build operational infrastructure others called wasteful. Hiring a full services team, recruiting partners, executive briefing centers… all of this looked like overhead drag on a fund manager at the time. But if you believed portfolio companies could become category-defining and needed enterprise muscle to get there, the spend made sense. They were building for a future where startups needed to look like real companies to win Fortune 500 deals.

Treat technical founders as the scarce resource. It was also a bet that because companies were getting cheaper and easier to build, technical geniuses without traditional management chops could and would build more important companies. So it did everything it could to woo and support them, bringing the CAA model to venture. “Founder-friendly” is a meme now, but it was genuinely novel at the time.

Importantly, in this First Era, the most important thing was to just invest in the right companies and profit as they became as successful as a16z believed that they could. They were focused on helping the founders, to be sure, but mainly, they were taking advantage of an available arbitrage.

AH III, which had both Coinbase and Databricks, is a standout, but it’s worth noting the consistency.

“As an LP we are happy with consistent [net] 3x [TVPI] funds with the odd one being [net] 5x+ [TVPI] and this is what they have delivered,” David Clark, the CIO at VenCap who has been an LP in a16z since AH 3, told me, “a16z are one of the few firms that have been able to deliver this performance at scale over a sustained period of time.” You can see that in the performance numbers above.

If this was the Era in which a16z was willing to pay up and “invest in pork bellies” in order to make a name for itself that would pay off over time, that trade didn’t seem to cost much in the short term.

Second Era (2018-2024)

In the Second Era of a16z (2018-2024), the key belief was that the winners were indeed getting much larger than anyone expected, they were staying private longer, and technology was eating more industries than others realized.

I think this belief enabled a16z to do three things in order to move from Top 5 Firm to Leader:

Raise larger funds. In the First Era, a16z raised $6.2 billion over nine funds. In the Second Era, over five years, a16z raised $32.9 billion over 19 funds. The standard VC wisdom was that returns degraded with fund size. a16z argued the opposite: if the biggest outcomes were getting bigger, you needed more capital to maintain meaningful ownership through multiple rounds. The worst things you could do were miss winners, and not own enough of the ones you owned. Marc is fond of saying that you can only lose 1x your money, but your upside is practically unlimited.

Build beyond a single fund. During the First Era, a16z raised core funds along with follow-on later-stage funds. All of a16z’s GPs invested out of the same funds, even if they each had their own focus areas. It also raised one Bio Fund, because Bio is a completely different beast. For the purposes of this article, I am focusing on the a16z venture funds that are not focused on bio and health.

In the Second Era, a16z began decentralizing. In 2018, it launched CNK I, a16z’s first dedicated crypto fund under Chris Dixon. In 2019, it recruited David George to lead a dedicated Late Stage Ventures (LSV) Fund and raised its largest fund to date: at $2.26 billion, LSV I was approximately twice the size of any previous a16z fund. Through this period, it raised new funds across Core, Crypto, Bio, and LSV, as well as a dedicated Seed fund (the $478 million AH Seed I) in 2021, a dedicated Games fund (the $612 million Games I), and its first cross-strategy fund (the $1.4 billion 2022 Fund), which allowed LPs to invest pro rata across all of the funds in that vintage.

Importantly, while individual funds could tap into the centralized resources of the firm, like Investor Relations, each one designed its own dedicated platform team – marketing, operations, finance, events, policy, and more – in order to meet the specific needs of founders in their vertical.

Hold positions longer. During the Second Era of a16z, leading companies began staying private much longer and raising more money in the private markets, both primary (to fund the company) and secondary (to give employees and early investors liquidity). The practice that Matt Cohler compared to buying pork bellies when a16z bought late-stage secondary shares in Facebook became standard as companies like Stripe, SpaceX, WeWork, and Uber were able to access the same kind of liquidity in private markets that was previously only available in the public markets.

This created a challenge for the industry – LPs couldn’t get liquidity as easily, which gummed up the capital allocation cycle – but for firms that believed that tech companies were going to get much, much bigger, it was a godsend. It presented the opportunity to put a lot more money to work in high-quality companies that happened to be private, and pulled returns that would have accrued to public market investors into the private markets. I believe this shift is one of the key reasons that VC Firms like a16z have been able to get much larger without crushing returns.

In response, a16z did a couple of things. It became a Registered Investment Adviser (RIA), allowing it to invest freely in crypto, public equities, and secondaries, and launched the aforementioned LSV I under David George. During the Second Era, LSV raised $14.3 billion of the $32.9 billion raised across a16z’s funds. The Crypto fund also split into Seed ($1.5 billion) and later stage ($3 billion) for Fund IV.

These are the top 10 deals in each listed LSV fund, based on post-money valuation of the most recent round or current market cap:

  • LSV I: Coinbase, Roblox, Robinhood, Anduril, Databricks, Navan, Plaid, Stripe, Waymo, and Samsara

  • LSV II: Databricks, Flock Safety, Robinhood (which they exited in the public markets and recycled into more Databricks), Stripe, Deel, Figma, WhatNot, Anduril, Devoted Health, and SpaceX

  • LSV III: SpaceX, Anduril, Flock Safety, Navan, OpenAI, Stripe, xAI, Safe Superintelligence, Wiz, and DoorDash

  • LSV IV: SpaceX, Databricks, OpenAI, Stripe, Revolut, Cursor, Anduril, Waymo, Thinking Machines Lab, and Wiz.

If you were going to buy logos, as a16z has been accused of doing in the past, you could certainly do worse than these ones. That said, LSV I is in the top 5% of its vintage according to Cambridge Associates data as of Q2 2025, and both LSV II and LSV III are in the top quartile of their respective vintages.

As of September 30, 2025, LSV I was held at 3.3x net TVPI, LSV II was held at 1.2x net TVPI (although it is likely higher now after Databricks’ and SpaceX’s recent fundraises), and LSV III is held at 1.4x net TVPI (also likely to be higher after SpaceX closes a major secondary sale at a reported $800 billion valuation, up more than 2x).

By believing that the outcomes for these marquee companies would be much larger than most others believed (although certainly not all; see: Founders Fund and SpaceX, Thrive and Stripe), a16z has been able to put more money to work in the best private tech companies while they remain private.

Crucially, they’ve begun to show that it’s possible to achieve venture-like returns in a growth stage fund under the right conditions. Namely, based on analysis I have seen from one of a16z’s LPs, firms with strong early stage practices can deliver venture-like multiples (and higher IRRs) by continuing to invest at the growth stage. A deeper relationship with these companies, of course, can also increase the firm’s power.

In the Second Era, a16z believed the most important thing was to own as much of the winners as possible, which was easier to do if you knew the companies better from investing early and had dedicated late stage funds to continue to double down, or to correct early stage mistakes. (Although still not majority investments like you’d see in other asset classes.)

This, too, was an arbitrage, although I believe a16z did more to help its individual companies succeed in this era.

The returns from the Second Era are still early, but they are tracking ahead of where the First Era funds were at a similar stage in their life, when the WSJ reported on their underperformance.

The 2018 funds are sitting at 7.3x net TVPI, the 2019 funds are sitting at 3.4x net TVPI, the 2020 funds are sitting at 2.4x net TVPI, the 2021 funds are sitting at 1.4x net TVPI, and the 2022 funds are sitting at 1.5x net TVPI.

What is particularly notable in this era is the outperformance of the crypto funds (CNK 1-4 and CNK Seed 1). CNK I has returned 5.4x net DPI to LPs already.

Perhaps even more surprisingly to those who argued that a16z crypto had raised too much money in 2022 at the wrong time, the $3 billion it raised for CNK IV is carried at 1.8x Net TVPI to date.

The two biggest stories of this Second Era, LSV and crypto, speak to two facets of a16z’s belief in the future. LSV is a response to the fact that companies are staying private longer, and have greater private market capital needs. Crypto is a representation of the idea that innovation (and returns) can come from sectors entirely different from the ones you’re used to investing in.

They also speak to the need for a16z to expand what it does on behalf of its portfolio companies and the industry. In order to help its late stage companies thrive, it would have to recreate some of the benefits of being public in the private markets.

And in order to ensure crypto’s survival in the US, to ensure that new technology companies of all types would get a fair shake against the entrenched interests, it would need to head to Washington.

Which brings us to the Third Era of a16z (2024-Future), in which the key belief is that new technology companies will not only reshape but win every industry, if allowed to, and that a16z must lead the industry and the country in the right direction.

This belief is once again changing the nature of a16z. At a certain scale, and $15 billion in fresh funds is as good a line as any, picking winners isn’t enough.

You have to make winners by shaping the environment they compete in.

As Ben said, “It’s Time to Lead.”

The Third Era of a16z: It’s Time to Lead

You might imagine, at this point in the game, an analyst at a rival VC firm texting journalist Tad Friend something like, “In order to return 5-10x aggregate across your new $15 billion funds, ‘You would need to make the entire US tech industry multiples larger than it is today.’”

To which Marc and Ben, you imagine, say: Yes.

That is the Firm’s explicit plan. Here’s the logic.

Since 2015, it has funded more unicorns at the early stages than any other investor, and the distance between a16z and #2 (Sequoia) is as wide as the distance between #2 and #12.

“Number of companies funded at the early stages that become unicorns” is, of course, a very specific and convenient way to judge “best.” More common would be to cite returns, by multiple or IRR or simply quantum of cash distributed to LPs. Others might point to hit rate or consistency. There are many ways to slice and dice a league table.

But this way seems to be consistent with how a16z views the world. As I heard repeatedly in my time with a16z crypto, betting on a category because a lot of smart entrepreneurs are building there and getting it wrong is totally OK. But picking the wrong company within a category, missing the eventual winner for any reason, is not. As Ben put it:

We know that building a company is a highly risky endeavor, so we don’t worry about investments that do not work out if we ran the right process when we made the investment and assessed the risks properly. On the other hand, we worry quite a bit about incorrectly assessing whether or not the entrepreneur was the best in her category.

If we pick the wrong emerging category, that’s no problem. If we pick the wrong entrepreneur, that’s a big problem. If we miss the right entrepreneur, that’s also a big problem. Missing the generational company either due to conflict or passing on it is far worse than investing in the best entrepreneur in a category we misjudged.

By its own estimation of what matters most, then, a16z has become the leader of the venture capital industry.

“So now what?” Ben asks. “What does it mean to lead an industry?”

In the X essay announcing this $15 billion raise, he answers: “As the American leader in Venture Capital, the fate of new technology in the United States rests partly on our shoulders. Our mission is ensuring that America wins the next 100 years of technology.”

This is a remarkable thing for a venture capital firm to say about itself.

It is also, if you accept the premises — that technology is the engine of progress, that America’s continued leadership depends on technological superiority, and that a16z is the largest and most influential backer of new American technology companies, with the power and resources to give them a fair shot against incumbents — not entirely unreasonable.

To win the next 100 years of technology (which, at a16z, is the same thing as winning the next 100 years, period), he continues, it must win the key new architectures - AI and crypto - and then apply these technologies to the most important areas, like Biology, Defense, Health, Public Safety, and Education, and infuse them into the government itself.

These technologies will make markets much larger. As I argued in both Tech is Going to Get Much Bigger and Everything is Technology, they mean that industries and jobs-to-be-done that were not previously in tech’s addressable market now are, which means that Venture Capital Addressable Value (VCAV) will increase dramatically as well.

US VC Exits Are Getting Much Larger, Chart from David Clark at VenCap

This is a continuation of the bet that a16z has been making, but with a consequential twist on the belief: this value will be unlocked, and the future of America (and the world) safeguarded, if a16z does its job as leader.

Specifically, that means five things:

  1. Make American technology policy great again

  2. Step into the void between private and public company building

  3. Take marketing into the future

  4. Embrace the new way that companies will be built

  5. Keep building the culture while scaling our capabilities

Nearly everything about a16z that makes you scratch your head is in service of these five things.

Most notably, a16z has gotten much more vocally involved in politics over the last two years, with Marc and Ben publicly supporting President Trump in the last election. That made a lot of people angry, and there is an argument to be made that a venture capital fund should not be influencing national politics.

a16z would take the opposite side of that argument, vociferously. It wants to Make American technology policy great again.

Marc and Ben laid out the argument in The Little Tech Agenda, which can be summarized as:

  • New technology companies are crucially important to our nation’s success.

  • In order to win the future, we need pro-innovation laws, policies, and regulations, and we must prevent regulatory capture by large, well-resourced incumbents.

  • The opposite has been happening: “We believe bad government policies are now the #1 threat to Little Tech.”

  • There is no one to fight for new technology companies in the halls of government or against incumbents: large incumbents won’t do it, startups shouldn’t spend their limited resources doing it.

  • Venture capital firms stand to benefit from new technology companies’ success financially, so VC should be the ones to fight this battle, and as the leader of the VCs, it falls upon a16z to do it.

a16z is a single-issue voter. Little Tech is the only thing it cares about. It is bipartisan.

These are talking points – “We do not engage in political fights outside of issues directly relevant to Little Tech.” and “We support or oppose politicians regardless of party and regardless of their positions on other issues.” – and they are, from everything I have seen at a16z, the absolute truth.

The Firm is not doing politics because it’s fun (although Marc, at least, seems to really enjoy the spectacle; he seems to really enjoy a lot of things, to find humor in the absurd, which is an underrated competitive advantage but one that we do not have time for today). a16z is willing to look stupid, and take arrows, in the short run to help new technology flourish in the long run.

For a very long time, as former Benchmark partner Bill Gurley argued in 2,581 Miles, tech could largely ignore Washington, and Washington could largely ignore tech. A few years ago, that changed, in part because of tech’s transition from making tools to fighting incumbents that I discussed earlier. Crypto was the first sector for which this became existential.

When a16z first went to Washington, Little Tech was not a constituency in D.C. Large tech companies had their own lobbyists and relationships. Incumbents - banks, defense companies, whoever - had their own lobbyists and relationships. But Little Tech, including crypto, did not. No one firm, with the potential exception of Coinbase at the time, could afford the cost or groundwork it would take to represent itself in the nation’s capital, let alone in state capitals around the nation.

So in October 2022, a16z crypto hired Collin McCune as its Head of Government Affairs, and Collin got to work educating America’s politicians on crypto. Collin, Chris Dixon, a16z crypto General Counsel Miles Jennings, other members of the team, and crypto founders, from in the portfolio and the industry more broadly, made repeated trips to DC to explain how crypto worked, what it could become, and more generally, the danger of regulating new tech out of existence.

And it’s worked. Thanks in large part to their efforts and the efforts of the industry’s bipartisan Fairshake SuperPAC, crypto is not at existential risk from legislation. Last year, President Trump signed the GENIUS Act into law, which regulates crypto stablecoins for the first time, and comprehensive market structure legislation passed the House in overwhelmingly bipartisan fashion. It is now making its way through the Senate with hopes of passing and being signed into law later this year.

That experience proved valuable when AI became the hot button issue in DC. McCune now leads the Government Affairs practice across the Firm, with a permanent presence in DC and efforts spanning AI, Crypto, American Dynamism, and more. It is currently advocating for a comprehensive federal AI standard to avoid a mess of competing state regulations, among other pro-innovation policies.

While lobbying can be a dirty word, the fact of the matter is that Little Tech’s competitors have sophisticated Government Affairs and Policy teams working to capture regulators and make it impossible for new entrants to compete on a level playing field.

In order for tech to win the future, and for a16z to return its funds, staying out of politics is no longer an option. The good news is, as a Firm that needs new companies to form, grow, and win for its own continued survival, a16z is as incentivized as any organization can be to keep the playing field open to innovation.

Because from the present moment, even a16z admits that it has no idea which companies are going to be built, or how, going forward.

Embracing the new way that companies will be built means being open to the idea that, with AI, entrepreneurs may be able to build companies with 1/10th or 1/100th as many employees as they could before, and that the things it takes to build a great company may be very different than what it’s taken in the past. It means that a16z needs to adapt, too.

So for example, it launched Speedrun, its own accelerator through which it invests up to $1 million and runs a 12-week program for nascent companies. This gives a16z early insight into how these new companies are being built and into each of the specific companies, so that it might be able to invest more in the winners more intelligently.

But it also comes with risk: increasing the number of companies that can say they’re backed by a16z while lowering the bar risks diluting the Firm’s legitimacy. For example, a16z has caught heat on Twitter for Speedrun backing Doublespeed, which calls itself “Synthetic Creator Infrastructure,” but which others have called a “phone farm” and “Spam as a Service.”

The framing - “Gets funding from Marc Andreessen” - is funny, because Marc is not making sub-$1 million Speedrun application decisions – each Speedrun check represents something like 0.001% of a16z’s AUM. But that speaks to exactly the challenge. I’d seen references to this a16z-backed company a bunch of times on Twitter before guessing that it was a Speedrun company and looking it up to confirm. Most people won’t do that.

A more notorious example in a similar vein is Cluely, the startup that promised to help its customers Cheat on Everything, for which a16z led a $15 million round out of its AI Apps Fund.

People rightly questioned why a16z, a Firm that is actively working to shape America’s future, was also investing in a startup that valued virality over morality. Does the existence of a Cluely in the portfolio rub just a little legitimacy off of all the others, at least in the eyes of the Very Online?

Very possibly. Speaking personally, I didn’t love it. The vibes were off. It was unbecoming.

BUT! It was internally consistent.

Because beyond the actual product, what Cluely was pitching was that there was a radically new way to build a company in the Age of AI: one that assumed the underlying model capabilities were converging and commoditizing, that distribution would be the only thing that mattered, and that if it took a little controversy to get distribution, whatever.

If you are embracing the new way companies are being built, $15 million and a little Twitter controversy are a cheap price to pay for a front row seat to one of the most novel approaches.

More generally, in the business that a16z is in, looking stupid from time to time is the price you pay to avoid going the way of Kodak. You need to be willing to take risks, and taking risk does not just mean capital. At a16z’s scale, a little capital is the least risky thing to risk.

There is an argument to be made, though, that in the grand scheme of things, little blips on X (an a16z portfolio company itself) simply do not matter. Katherine Boyle, the a16z General Partner who co-founded the Firm’s American Dynamism practice, made that argument, in fact, when I asked her about it:

You could say that yes, maybe we take a few hits on Twitter because a company that people don’t like in a certain pocket of San Francisco or in New York or whatever don’t like it. Like, “We don’t like that they’re doing American Dynamism! We don’t like that they’re doing crypto!”

But the actual scale of the machine means that that little tiny blip in a moment just does not matter.

The best in class institutional comps are scaled systems. Something like the United States of America. Do we care when the United States of America does something embarrassing on a global stage? No, it doesn’t affect the United States of America, like it doesn’t affect the Holy Roman Catholic Church.

We’re thinking in centuries, not in tweets.

You might not agree with a16z on everything, but you have got to respect the balls on that Firm.

For what it’s worth, when I asked some of a16z’s LPs what they thought of certain Twitter-controversial companies, I was consistently met with blank “Who?”s.

The only thing that seems to have ever really mattered to a16z’s returns is the winners: finding them early, winning their deals, and owning as much of them as possible over time. Ask any a16z LP about Databricks; they know about Databricks.

Now, in the Third Era, the It’s Time to Lead Era, something that matters just as much is helping them grow, even as they get much larger.

This is what I think Ben means by Step Into the Void Between Private and Public Company Building, and I think it is the most critical reframing of how to think about a16z today, and about how it could possibly return 5-10x on $15 billion.

“In earlier days,” Ben said, “venture capitalists helped companies achieve $100,000,000 in revenue and then hand them over to investment banks for the next portion of their journey as a public company.” That world no longer exists. Companies are staying private much longer and at much larger scale, which means that the venture capital industry, led by a16z, needs to expand its capabilities to meet the needs of much larger companies.

To that end, the Firm recently brought on former VMware CEO Raghu Raghuram in a triple-threat role - GP on the AI Infra team with Martin Casado, GP on the Growth Team with David George, and Managing Partner and Ben’s “consigliere who will help [him] run the firm.” Along with Jen Kha, Raghu is leading a set of new initiatives to “address the needs of larger companies as they grow.”

That means working with national governments across the world to help portfolio companies scale and sell into their regions, forming strategic relationships with companies like Eli Lilly, with which it launched the $500 million Biotech Ecosystem Fund, and growing the number and depth of LP relationships worldwide. It means expanding the scope of a16z’s Executive Briefing Center, where large companies can meet directly with a custom set of relevant a16z portfolio companies.

Even for larger companies, there are things that it doesn’t make sense for each individual company to build from scratch, but that probably make sense for a16z to build and allocate across the portfolio. It just so happens that those things are at the level of governments, trillion dollar companies, and trillions of dollars in capital.

All of this could mean that companies can stay private longer without sacrificing the legitimacy, relationships, or access to capital that come with being a public company.

Which means that companies can grow to become much larger in the private markets, where they are squarely in a16z’s addressable market.

Which means that a16z has an opportunity to invest more capital with a reasonable chance of generating strong returns, which means the potential for more resources to invest in building more capabilities and more power, both of which it can lend to portfolio companies, and increasingly, to the entire new technology industry, in order to help bring to bear more and better new technology on more parts of the economy so that we can all have a better future.

There are, of course, many things that could go wrong. Mo Money Mo Problems. The leaders take the arrows. Etc.

The way I see it, a16z is playing the game at a different scope and scale than anyone has played it to date, with all of the opportunities and risks that entails.

More surface area means more potential vulnerabilities, for example. And the longer companies stay private, theoretically, the harder it is to generate liquidity for LPs, the harder it is for LPs to invest in the new funds that allow a16z to invest in the new companies that might one day be very large companies.

Ultimately, though, there are two constituencies that matter: founders and LPs, the Firm’s customers and its investors.

The Only Constituencies That Matter: LPs & Founders

How founders and LPs view the Firm, as expressed by who they take dollars from and give dollars to, respectively, is a compressed version of everything else I’ve discussed.

Here’s my logic:

If the best founders believe that all of the machinery that a16z has built will help them build bigger businesses than they otherwise would, they will take its money over another fund’s (or will at least make sure that it is one of the firms that it takes money from).

And if LPs believe that a16z continues to invest in the most excellent founders, they will give it money over other funds, and keep that money with them, even in the face of a liquidity crunch.

When I spoke with Jen Kha, she told me an anecdote that makes it clear that being in the very best companies is really the only thing that matters (other than being in the right market in the first place) in this game.

A couple of years ago, in the middle of that brief venture bear market, amidst liquidity concerns and uncertainty about the early Trump Administration’s stance towards endowments’ tax status, a16z offered its LPs liquidity. To read the headlines at the time, including rumors that very blue chip endowments were selling off their venture portfolios, this was like offering water in the desert.

Specifically, Fund I had a Seed position in Stripe, and Fund III had a very large position from Databricks’ Series A. a16z went to its LPs and said, “We know you’re in a liquidity crisis. We would be willing to buy your interest in those names back if you would like, and figure out a way to get you some liquidity.”

“And literally, Packy,” Jen recalls, “30 out of 30 LPs said, ‘Absolutely not.’ They were like, ‘Thanks, but we don’t want liquidity out of those names. We want liquidity out of other names.’”

VenCap’s David Clark, an a16z LP, explained, “VC is not about early liquidity. It’s about compounding growth over multiple years. We don’t want our managers selling their best companies prematurely.”

Anne Martin at Wesleyan is one of those thirty early LPs and a testament to the power of compounding. She has backed a16z since Fund I in 2009, when she was at the Yale Endowment, and has participated in twenty-nine funds as CIO at Wesleyan. The new funds that a16z just closed will bring that total over thirty.

“a16z is a very meaningful position, and our longest-running in the portfolio that I underwrote,” Anne told me when we spoke last month. “It was one of two new managers I brought to my investment committee in our first meeting after I was hired.”

Having started the relationship by investing in a $300 million fund – “She directly negotiated the LPA [Limited Partner Agreement] with Ben,” Jen said – Anne agrees with a16z that the opportunity has expanded enough to support larger fund sizes:

What’s interesting about Andreessen is... you take like a $1.6 billion fund for AI Infrastructure, and you build a simple matrix and you’re like, ‘Okay, let’s say they own 8% of a company at Exit…’ What do you need from an exit to return the fund? If you own 8%, you need a $20 billion exit. And you know, those are rare, but Andreessen seems to have quite a few of them. And the other thing that’s impressive is, like, is 8% the right number for them? Because a lot of times they have way bigger ownership.

For Illustrative Purposes Only… but like, very doable
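Anne’s matrix is easy to rebuild. Here is a minimal sketch; the fund sizes and ownership levels are hypothetical inputs of my choosing (other than the $1.6 billion fund and 8% from her quote), and it ignores fees, recycling, and dilution.

```python
# A sketch of the "simple matrix" Anne describes: the single-company exit
# value needed to return an entire fund, ignoring fees, recycling, and
# dilution. All fund sizes and ownership levels here are illustrative.

fund_sizes_bn = [0.5, 1.0, 1.6, 3.0]            # fund size in $B
ownership_at_exit = [0.04, 0.08, 0.15, 0.20]    # % of company owned at exit

header = " | ".join(f"{o:>6.0%}" for o in ownership_at_exit)
print(f"{'fund $B':>8} | {header}")
for size in fund_sizes_bn:
    # exit_needed solves: ownership * exit_needed == fund_size
    row = " | ".join(f"{size / o:>6.1f}" for o in ownership_at_exit)
    print(f"{size:>8.1f} | {row}")
```

The $1.6 billion row at 8% prints 20.0, Anne’s $20 billion exit. At 15% ownership, the same fund needs only a $10.7 billion exit, which is her “way bigger ownership” point.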

“I think with them, it’s like the ownership and their ability to help these companies achieve a huge outcome,” Anne told me. “That’s what gets LPs comfortable with these fund sizes.”

That ability to help achieve huge outcomes is why even the most sought-after founders are willing to “charge” a16z less than its competitors. In several deals in 2025 alone, a16z invested at lower prices than other top firms investing in the same round. While it’s not right to share specific names, I heard of four investments in tech household names that followed this pattern last year alone.

Effectively, founders value the resources that a16z brings enough that it can, at least occasionally, pay below-market prices today. That is a big change from the early days, when rival VC firms were so incensed at the prices a16z would pay that they nicknamed the firm “A-Ho.” It is a proof point that there is real, tangible value to working with a16z, value that companies “pay” for with higher dilution than they give other firms.

Which is to say, while I said there were two constituencies that matter, at the end of the day, there is really just one. I think if the best founders want to work with a16z, the best LPs will too.

Does a16z Improve Its Portfolio Companies’ Outcomes?

This is the question, huh? Some hypothetical formula that’s like:

% of market value attributable to a16z * market value impacted

And the tricky part is that to really maximize the first factor, the share of value attributable to a16z, you have to be helpful when the second factor, the market value, is smallest.

When you do that, though, when you help tiny companies become massive ones, you engender the kind of loyalty that means founders are willing to say good things about you to other founders who are considering taking your money, and to people who are writing pieces about you.

When I asked Erik Torenberg to introduce me to a few of their founders, he connected me with founders representing over $200 billion in market value within hours, including Ali Ghodsi at Databricks and Garrett Langley at Flock Safety.

I bring those two up specifically because within 48 hours of our connection, Databricks announced that it had raised $4 billion at a $134 billion valuation, and Flock Safety helped catch the alleged Brown / MIT murderer. It was a visceral jolt of power.

Boston.com and The Wall Street Journal

What I wanted to understand, though, was how a16z applied its power on their behalf. Did it actually help shape outcomes? Did working with a16z meaningfully alter their trajectory?

Have the hundreds of millions or billions of dollars that a16z has invested in power infrastructure made a noticeable difference to the consumers of that power?

To believe in a16z’s Third Era bet, that it can expand the market for new technology companies and make its portfolio companies more valuable than they’d otherwise be in order to generate strong returns on $15 billion in fresh capital, you probably need to believe the answer to this question is yes.

The answer to this question is yes.

Recall that Ali at Databricks said there would be no Databricks without a16z. That’s $134 billion (and counting) added to venture capital’s addressable market, and ~$20 billion to a16z’s net returns in one fell swoop. Even if he is being hyperbolic, there’s an argument to be made that a16z’s support of Databricks – from early sales to the Microsoft partnership to support in building out specific departments – has paid back every dollar a16z has invested in its platform since inception.

In fact, let’s say, hypothetically, that a16z still owns ~15% of Databricks. Back-of-the-napkin math suggests that the Firm’s impact would only need to account for something like 25% of Databricks’ value to have paid back the standard VC management fees one could assume a16z has collected since inception.
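Here is that napkin as a sketch. Every input is an assumption, either flagged in the text above or my own guess for scale; none are reported a16z figures.

```python
# Back-of-the-napkin sketch of the claim above. All inputs are assumptions
# (the text's or mine), not reported a16z figures.

databricks_valuation_bn = 134   # from the text
assumed_ownership = 0.15        # hypothetical, per the text
impact_share = 0.25             # the "something like 25%" attribution

# Value of a16z's stake attributable to the Firm's own help.
# (25% of the stake equals 15% ownership of 25% of the whole company.)
stake_bn = assumed_ownership * databricks_valuation_bn   # ~$20.1B position
impact_value_bn = impact_share * stake_bn                # ~$5.0B

# Rough cumulative management fees: my guesses, for scale only.
committed_capital_bn = 39   # ~$6.2B First Era + ~$32.9B Second Era, per above
lifetime_fee_rate = 0.13    # ~2%/yr stepping down over a fund's life, my guess

fees_bn = committed_capital_bn * lifetime_fee_rate       # ~$5.1B

print(f"impact value ~${impact_value_bn:.1f}B vs. cumulative fees ~${fees_bn:.1f}B")
```

Under those (loud) assumptions, the attributable value of the Databricks stake alone, roughly $5 billion, lands in the same ballpark as every management fee dollar the Firm has ever charged.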

All of the founders I spoke with described a particular work style that’s consistent across a16z, no matter the GP you work with, that is clearly inspired by CAA: they stay out of your way and let you run the company, until you ask for something, at which point, they SWARM.

This is how a16z operates to win deals. The General Partners of each Fund decide what to invest in, and when they need to, they call in the rest of the Firm’s resources, including Marc and Ben, to win a deal.

“The firm at its best is delegated authority, delegated conviction, and group tackle,” David Haber told me. “Marc basically said, ‘Look, if you tell me this is the next Coinbase, I will fly anywhere in the world. I will fly that entrepreneur to have dinner at my house tonight. Get on the plane, pull out all the stops.’”

Once the deal is done, that is how GPs work with founders, too.

“They’re extremely supportive, no matter what. a16z, to a fault, has always supported me and the founding team, even when they didn’t see eye to eye with me. They’re not going to get in your hair on things that they shouldn’t be,” Ali told me. “But the second you need them, they’re all hands on deck, making it happen.”

And of course that’s how a16z is going to be with Databricks, but I heard the same thing from a few of my a16z-backed portfolio companies that are earlier in their journey, too.

Shane Mac, the CEO and co-founder of a16z crypto-backed messaging protocol XMTP, (appropriately) messaged me that “a16z does a ton of things. Like most VCs do. What I think matters more is what they don’t do:

They don’t tell me what to do. They don’t play short term games. They don’t waste my time ever. Every single connection they make, changes the trajectory of our business. They help me believe in myself more and realize that I too can build something so ambitious and we together can change the world.

I think that’s what they actually do the best. They believe in me and push me to realize I can do more than I ever thought possible.

Dancho Lilienthal and Jose Chayet, the founders of [untitled] (which I wrote about in late 2024 when a16z invested), shared a very similar experience, despite working with a different GP (Anish Acharya) on a different team (AI Apps).

A few weeks ago, they got on a Zoom catchup with Anish. They were nervous that their growth was compounding instead of exploding like the AI companies’.

“We were scared that maybe an investor would be like, ‘Oh I don’t want to give them my time anymore because they’re just a slow burn,’” Dancho recalled. “And Anish basically looked us in the eyes (remotely) and said: ‘Guys, the only way for me to not be here for you is if I die or if I get fired. I will be here no matter what.’”

This is how they describe the let-you-breathe-then-swarm approach:

They’re kind of like the most idealized version of a great parent. Really there for you when you need them and responsible for you and making sure everything’s good for you. And they’re not in your way and getting annoying when you don’t need them.

Which is beautiful. And by design.

Joe Connor, the founder of Odyssey, the school choice platform backed by both a16z and not boring capital, said that he doesn’t go to a16z for day-to-day operating advice, but “Any time I need to talk to anyone, anywhere on planet earth, I can talk to them if I message Katherine.”

While it can put you in touch with any expert on any topic anywhere in the world, a16z explicitly does not want to get involved in its companies’ operations. In the 2014 HBS case study, Marc said, “We are not training wheels for start-ups. We do not do things for companies that they must be able to do for themselves.”

What a16z aims to provide is legitimacy and power.

The Firm has provided a number of services over time. Alex Danco, who thinks a lot about these kinds of things, told me: “What are the most important services that we provide now? It’s hiring and it’s sales & marketing. Why those two things? Because that’s where you need the legitimacy bank. And the point of a16z is to be the legitimacy bank.”

Or, as Marc has said, “The thing you want from your VC is power.”

Joe gave me two examples.

A while back, when it was very small, Odyssey was having an issue with Stripe that it couldn’t solve through normal channels. “a16z got me on an email with Patrick Collison and it got solved immediately,” he said. “I’ve never been turned down when I’ve asked for something I needed help with. Stripe is worth like $95B, we were worth like nothing, but we’re all part of this a16z ecosystem and people pay it forward.”

Outside of the a16z ecosystem, the name still carries weight. Odyssey sells to state governments, whose employees probably could not name three venture capital firms or care less about them. But, Joe said, “They know a16z. They know Marc and Ben. And in the beginning, before we had a track record, before states could choose us based on experience, it gave states the confidence that we could do what we said we were going to do. That we had the better tech we said we did, because the people who backed Stripe and Instacart said we do.”

In October 2024, Odyssey won the contract to administer Texas’ $1 billion Education Savings Account (ESA) program, the largest in the nation. Now, it has its own legitimacy.

That is what it looks like to confer legitimacy, and it shows that legitimacy can scale. Legitimacy, to the vast majority of the market that doesn’t follow Silicon Valley closely, demands scale. The better a16z markets itself, the more legitimate its portfolio companies become in the eyes of potential customers, partners, and employees.

“If our firm does all kinds of great things and nobody knows about it,” Ben asks, in It’s Time to Lead, “did we actually do it?” Obviously, a16z is marketing to founders. They need to know what it can do for them. But it’s also marketing to everyone those founders might want to do business with in the future.

Marketing

Which is why it makes sense to build the best New Media team money can buy.

Some attention is cheap and derivative. The New Media team wants to make attention mean something. The team runs a full in-house media operation that builds and operates high quality owned channels (on X, YouTube, Instagram, and Substack), executes launches and timeline takeovers, and embeds directly with portfolio companies during critical periods.

“We are going through a bit of a PR challenge,” Flock Safety’s Garrett Langley told me a couple of weeks ago, before his company helped solve the murders at MIT and Brown and, at least for a little while, won back the PR cycle, “and while most of our cap table has ideas, a16z took action. Erik and his team jumped in. Literally. They are in our Slack now. They are in our positioning/branding docs. We just did a podcast with Ben this week. Having a trusted and respected brand, like a16z, come and stand up for what we do is critical for both the market and our employees.”

They are there when times are good, too, in those exciting early days. Fei-Fei Li, legendary computer scientist and founder of World Labs: “Four weeks before our Marble launch, their New Media team proposed an idea I had never seen before. They wanted to shoot our announcement video on a 3D LED volume stage, with our own product generating the environments in real time. They partnered with us on everything: the cinematic video, a behind-the-scenes documentary, a launch event, introductions to influencers in the VFX world. The launch went viral. We hadn’t built up our marketing muscle yet, and they helped us lay the groundwork for our operation. That kind of support, from creative vision to company building, is not something you find elsewhere.”

I am biased, because these are friends and long-time collaborators, but they are some of the very best in the business.

Erik has been obsessed with building new media organizations for a long time, which is why I turned to him to produce Age of Miracles. He’s also been obsessed with the idea that venture firms can have structural advantages.

Like, Erik and I had a chat about how he was thinking about building out the team, and never in a million years did I think he’d recruit Alex Freaking Danco.

If Writing is a Power Transfer Technology, and I clearly think it is, getting Alex Danco, and newer-hire Elena Burger, to write with and for you is an incredible superpower that money can’t buy. When every company is trying to hire a Chief Storyteller, a16z is just collecting them and spreading them across the portfolio.

Or, I first met Erik and David Booth when I did On Deck in 2019. No one in tech thinks about building community in the way that David does. Now, he gets to put what he’s learned to work with significantly more resources and access to the world’s best talent, in an attempt to make a16z an even better “preferential attachment” machine and VC into a network effects business.

I realize that I’m gushing a bit here, and that a16z has tried to own the narrative in the past with initiatives like Future, which fizzled out, but 1) by the economic logic above, a16z should be taking 100 shots like Future and 2) this is the platform team I know best, because it’s what I do, and I wouldn’t have even thought these people were gettable. If that’s the level the other teams operate at, it makes me more confident that they really are building a machine with compounding advantages that companies couldn’t build on their own.

Every dollar spent telling the stories of the Firm and its portfolio companies is amortized in so many directions that they become a rounding error, and if all of those dollars, almost no matter how many a16z spends, mean winning and helping just one more Databricks, Coinbase, Applied Intuition, Deel, Cursor, or insert your favorite a16z company here, they’ve all been worth it.

That’s the economics a16z is playing with across everything. It’s the same logic the Firm applies to investing in startups - “The most you can lose is the dollar you invested, but your upside is practically limitless” - applied to everything the Firm does.

It makes more sense for a16z to build the best version of a thing that most of its portfolio companies will need (but is not core to those companies) than it does for any single company to, at least until it gets much bigger.

Hiring

Very tactically, every founder I spoke with told me that a16z has been particularly impactful in two areas: hiring and sales.

Hiring has been a core part of a16z’s offering since the beginning, when Marc and Ben brought in Shannon Schiltz (Callahan) from Opsware; Shannon convinced Ben to hire Jeff Stump as Talent Lead, and the two of them built out a Talent Team that early-stage money can’t buy.

“The scale and the quality is just different,” Ali said about a16z’s Talent Team versus other VCs’. “It’s like a little side thing, I’ve hired a couple people to help you with recruiting on the side versus a big recruiting department whose job it is, and who is measured on really closing candidates and giving you top funnel.”

Founders across stages told me that the Talent Team is helpful from the beginning to the very end.

Cursor co-founder and President Oskar Shulz said over email that “a16z’s size allows them to be helpful across a few different functions,” among the most impactful of which have been “engineering / research hiring, executive hiring. Other, smaller firms don’t have the resources to give us a good talent pool overview.”

Resources can mean GPs, too. In a recent conversation between a16z AI Infra GP Martin Casado and Cursor CEO Michael Truell, the two discussed how Martin spends his nights and weekends recruiting for Cursor. “Have your board members do lots of calls until they cry uncle,” Truell joked. “Take advantage of their time.”

Qasar Younis, the founder and CEO of $15 billion Applied Intuition, said that, “A number of our early employees, including the President of our company, came through a16z. Our number two in finance came from a16z. We’ve even had multiple a16z employees work at Applied, including Matthew Colford, who was an early a16z gov public member.”

Deel co-founder and CEO Alex Bouaziz said that as his company has gotten bigger, and become a bigger part of the a16z portfolio, it’s been able to take advantage of more of its resources:

From when we started working with a16z, Shanbar [Executive Talent Partner Shannon Barbour] has felt like a part of our talent acquisition team. We’ve gotten to work closely with her and Jeff Stump on executive hiring, and when we were hiring our CFO, Ben [Horowitz] interviewed everyone we were considering for the role, which was pretty cool. The CFO we wanted [Joe Kauffman] is a very talented, demanding guy, so Ben and Anish helped me close him. Anish was texting him. Ben was texting him.

Now that Deel has crossed $1 billion in ARR and is getting its board public-ready, “a16z has helped us hire two of our three independent board members - Francis deSouza (Google Cloud COO, Disney Board Member) and Todd Ford (ex-CFO of Coupa, HashiCorp board member). They helped source, go through deep diligence, reference check, make intros. They spent the time to be a true strategic partner.”

Sales

a16z has been equally helpful with sales, both directly and indirectly, from the early days to later stages.

Ian Brooke, the founder and CEO of a16z American Dynamism and not boring capital portfolio company Astro Mechanica (which I wrote about in April 2024), said that both the direct and indirect are important for his business’ attempts to sell to Defense.

“I don’t think there’s any other fund we could bring up to our government partners that has the credibility and brand recognition that a16z does. That’s especially true inside the DoD,” he said of the indirect stuff. Directly, “they make sure to be known inside of government agencies, so they can make the right introductions, like the connections they made for us with the Air Force Rapid Capabilities Office, among others.”

“Working with the government is all about relationships with the right people and offices,” he continued, “And a16z actually makes a point to cultivate and share those relationships. A senior person within the DIU (Defense Innovation Unit) literally told me, ‘We take the a16z recommendations very seriously. We ask them, “Who should we meet?”’”

Qasar, whose company sells to auto manufacturers and increasingly to defense and other American Dynamism industries, credited a16z with helping it break into defense: “Our first defense customer came through an EBC (Executive Business Center) kind of thing that they do.” Applied gets targeted introductions, too, no matter the industry. “Anyone I want to get a hold of, Marc can get a hold of,” Qasar said. “Whether that’s with our defense business, automotive business, our construction, mining, he can get to them.”

Of course, a16z can help sell software, too. It’s the Firm’s bread and butter, where the network effects and benefits to scale really show up.

Jordan Topoleski, the COO of Cursor, which a16z first backed at the Series A, explained how a16z has helped them sell: “The platform team introduced us to nearly 200 CTOs at key target customers during our first year working together. They held daily standups with us, came by our office late at night, and had a dedicated team focused on organizing strategic meetings for us. As we scaled our footprint in financial services, they once scheduled 34 c-suite meetings in a single week at their office for us. They’ve felt like an extension of our GTM team.”

Then there’s Databricks, which credits the EBC with 50% of its early sales, and credits Ben in particular with making the transformative Microsoft deal happen.

And Alex at Deel said that while it was hard for his company to sell into enterprise in its first couple of years, it’s now using a16z’s Enterprise Marketplace and its GTM (Go-to-Market) Team to help access large organizations and win. Today, 10-15% of the business is enterprise.

Both Alex and Garrett at Flock Safety said that while the hands-off-then-swarm motion is true at the earlier stages, as their companies have grown, a16z’s platform teams embed within the right teams in their companies. It’s a way of giving founders space while helping their businesses.

“It’s often hard for me to tell investors what I need help on,” Alex said, “but when you have dedicated platform people embedded – the recruiting and GTM teams there are aligned with my similar teams - it’s better than needing to make a specific ask.”

Garrett at Flock Safety described a VC barbell:

Some firms, you are picking the GP and the firm is secondary. I’d argue A16Z is on the opposite side, you are picking the firm, not the GP. While DU [David Ulevitch] technically sits on our Board, I spend time with Ben, DG [David George] (leads growth), Alex I [Immerman] (also on the growth team), Erik [Torenberg] (for comms/branding), Stump (exec recruiting), look I could keep going but I think you get the picture.

And that’s just me, my exec team also has specific contacts at the firm for every function. That’s incredibly helpful.

Being so deep in the weeds with its biggest companies means that a16z can help the business grow in both big and small ways, ways that are tangible and measurable.

But it also, maybe more importantly, means that a16z knows the businesses well enough to back up the Belief Truck, and the money truck, when the time comes.

Belief

Qasar Younis has had a great experience with a16z. Marc is on his board, which is rare, and Marc and the broader Firm have been there whenever he needed, opening up its Rolodex.

“But,” he admits, knocking on wood, “we haven’t had any real problems. I think that’s really the crucible for an investor, how they react when you have a problem.”

To that end, what both Garrett at Flock Safety and Alex at Deel had to say about the firm speaks volumes. We talked earlier about the New Media team embedding itself within Flock during a recent PR challenge.

Alex at Deel was no stranger to PR challenges last year.

Axios

“As a firm,” Alex told me, “whenever there was bad media, they stood with us straight.”

I remember this. I remember seeing Ben and Anish tweet in support of Deel almost immediately after Rippling had accused it of spying and thinking that it was … bold.

But, Alex said, “They said, ‘We know who you are, how you work, your background, your ethics. We stand with you.’ They came out very publicly, very immediately. And behind closed doors, they were reminding people, ‘Guys, you know Alex.’ When someone like Ben knows all of the details, and has been abreast of this exact situation with you for two to three years, it’s such a strong representation.”

“Investors were supportive,” he said, “but they were just another level. They were figuring things out, helping me figure out how to manage it, getting the best people to help, getting their own hands dirty. When a lot of this bullshit came out, I couldn’t have asked for a better partner.”

What’s the point of having fuck you legitimacy if you can’t say fuck you on behalf of your portcos?

Then, Deel crossed $1 billion in ARR, and then it raised $300 million from new investor Ribbit Capital, who presumably did its diligence and came to the same conclusion a16z did, at a $17.3 billion post-money valuation, $5 billion higher than the valuation set in the Series D pre-drama in February 2025. a16z, of course, participated, as it has in every round, including secondaries.

“They’re very loyal investors,” Alex said. “Every time there was a secondary happening or investor selling, a16z bought all the stock it could. They bought every share of Deel in the market because they were so embedded. The rest of the market didn’t know us well because Deel hadn’t fundraised.”

There was also the time that Deel needed money to buy a company. “Our C round wasn’t a proper round,” Alex remembered. “I wanted to buy a company, I needed money, and I talked to a few investors. There is nothing like raising money from a fund like a16z who you can turn to and say, ‘This is going to be game-changing’ and they can move fast to drop $100 million for the acquisition you want.”

Today, as a result of that consistent support, a16z owns “twenty-something percent” of Deel across its funds, per Alex, building a position well-earned through belief and specific tactical support.

It is a validation of the model: know your companies so well, work so closely with them, that you know them better than anyone else and can go all-in when others can’t, all while helping them become larger than they would otherwise.

It is clear talking to its founders that working with a16z has had a direct, material impact on their business. As with a16z’s policy work, though, the Firm’s impact is both direct and indirect. Even the founders outside of a16z’s portfolio benefit from the change it’s brought to the industry.

Which means that the other way a16z helps its portfolio companies, and new technology companies more broadly, is by forcing other funds to spend their management fees on helping startups win.

“A lot of the stuff that a16z promoted in those early years have become really mainstream venture views,” Qasar at Applied Intuition told me.

“Being founder-focused, having technical GPs, having a platform. You look back at Benchmark, Founders Fund, KP, Sequoia, Khosla—they would take pride in writing a check and disappearing. There was a real point of ‘Hey, you’ll never talk to us.’ That was a feature, not a bug. And now it’s actually really flipped where founders say, ‘Well, what more can you do for me? I can get money anywhere.’”

“That’s an a16z fingerprint.”

As I wrote in Venture Capital and Free Lunch back in early 2024, I think management fees are “among the most interesting buckets of capital in the world”11, and a16z has a lot of them. This has been one of the main criticisms of the Firm - that of course it wants to raise a lot of money, because it earns management fees on every dollar, every year, no matter what.

A more interesting observation might be: of course it wants to raise that money, because then it can invest a boatload into building the type of stuff that almost no other pool of capital is incentivized to in order to help its companies, and new technology, win.

Working with a16z crypto is what brought me to that realization originally, and in writing this piece, it’s become clear that no firm has applied its management fees to beneficial ends for as long, as consistently, as aggressively, or as successfully as a16z.

“From my vantage point,” David Haber said, “One of the structural advantages that the firm had early on was that Marc and Ben were already very wealthy and so didn’t need to take salaries. Instead, they’d play the long game and invest management fees into the platform, and build sources of compounding competitive advantage. We continue to make that trade: instead of paying people more money and bonuses, like many funds do, we choose to invest in the firm and compound our advantage over time.”12

You can spend $1 billion of your LPs’ money building a machine to try to help all of the new technology companies in your portfolio succeed; it pays itself back many, many times over on one Databricks, and it continues to pay itself back over and over with every Coinbase, Applied Intuition, Deel, Cursor, you name it.

So of course, every large venture firm is now trying to build such a machine, which means that founders have billions of dollars and hundreds of smart, connected people working on their behalf in their mission to displace sclerotic incumbents, eliminate waste, fight off death, shrink the globe, keep it safe, and do all of the things that technology is supposed to do in service of the future.

Which is the whole point.

The Future of the Firm of the Future

When anyone joins a16z, which happens frequently these days with more than 600 employees, they must sign the Firm’s Culture Doc.

While everyone at the Firm reads the doc, Katherine Boyle thinks that “we don’t give it the actual reverence it deserves.”

“There’s one line,” she says, “number three: We believe in the future and bet the Firm that way.” Katherine loves that line. As she sees it:

Everyone in Silicon Valley misinterprets this. It means that we will never take the negative. That is why sometimes we look stupid compared to all of the other firms that will take the negative. It’s in our culture doc that we will never bet against the future.

I actually think it should be number one. No other firm can say that. Other firms will send out memos like “it’s about to be a macro crisis.” “We believe in the future and bet the firm that way” is literally why Marc and Ben created the firm.

Marc and Ben have no problem looking stupid. But if you ever bet against the future in any category, you will be fired.

Wholehearted, full-throated belief in the future sounds quaintly naïve at best and like bullshit at worst.

Before I saw inside a16z a few years back, I thought it was at least partially bullshit, too. These were elephant hunters! They just wanted to win. You can wrap yourself in the future just the same way you can wrap yourself in the American flag.

From the outside, what it looks like a16z is trying to build is one of the largest financial institutions in the world. It looks like that, in part, because that is what it happens to be building. $90+ billion in regulatory assets under management is real money where I come from.

When we discussed a16z’s funds in the context of large financial institutions, capital-F Firms like Apollo and Blackstone, David Haber pointed out that a16z is still small in comparison. Blackstone manages $1 trillion; Apollo will soon.

There is a lot for a16z to learn from those Firms, about compounding advantages and scale and incentives and internal operations and what it takes to run a global financial institution. On the surface, there are similarities between what those Firms are today and what a16z hopes to become.

I think there is a big difference.

Apollo and Blackstone don’t believe in anything, really. They are financial institutions meant to deliver financial returns. And there is absolutely nothing wrong with that. The economy needs what they provide, and they are very good at providing it. The very best.

a16z believes. a16z is building a company to bring about a glorious future through technology, one that uses finance as a means to that end. One that grows and compounds and, like any normal technology business, gets better as it does. One that is able to wield evermore resources and power on behalf of that future in which it believes, even if it doesn’t know quite what it will look like yet.

That’s the entrepreneurs’ job. They provide the details. a16z provides the belief.

As we wrapped up our call, I asked Ali what he thinks is most misunderstood about the Firm that he’s worked with for over a decade. He didn’t have to think hard.

“Ben and Marc are believers,” he said. “If that’s not clear from reading their blogs, I think they are tech believers to a fault. They truly believe that tech can revolutionize the world. And in every startup they’re involved in, that’s how they envision it. They envision the full potential of the thing.”

The history of a16z to date is the history of everyone believing that Marc and Ben were doing venture capital the stupid way, waiting about ten years, seeing the results, realizing they were right, and then trying to do the same thing themselves, without the benefit of everything that had compounded for a16z during their competitors’ decade of disbelief.

And then doing it all over.

Of course it worked when the funds were only $300 million, even $1 billion. It won’t work at this scale.

Of course it worked in the early days of social networking. It won’t work in, say, crypto. Or American Dynamism. Or AI.

And then, of course, at least to date, for the most part, it has.


When a16z believes, it believes harder and longer than anyone else. It has the resources to be patient, and the resources to know that its steadfast commitment will likely be rewarded.

Whether you think they’re right this time or not, whether you agree with them or not, whether you like how they’re playing the game or not, Marc and Ben and the team they’ve built at a16z really do believe that they are working on behalf of the future, and in so doing, working on behalf of us all.

As odd as it sounds to say it, theirs is one of the most humble approaches to venture capital I’ve seen: if a lot of really smart people are excited about something, there’s probably something to be excited about. Follow them there. Raise a whole fund to follow them, before anyone else even thinks there’s a there there.

You can disagree with this approach. You should disagree! There is no one right way to do venture, but you must believe in something.

What you should probably not do is judge a16z without understanding the game it’s playing or the bets it’s making.

a16z is betting that technology will eat more and more of the economy, and that when it does, the new companies will be 10x, 100x bigger than the old ones they replace. That bet is foundational, but it is available to any venture fund with some cojones.

a16z is betting that it can help make that future bigger and better than it would otherwise be, sooner, through policy and platform and power, and help its portfolio companies win in the process. Based on my conversations with a16z’s founders, this bet seems to be paying off today, and it is an unbelievably asymmetric one. Each win can pay for a lot of capabilities. The machine compounds.

The most interesting bet it’s making, in my opinion, in light of the first two, is the one that seems most obvious, when you put it like I’m about to put it:

That a VC firm can, like nearly every other type of company in the universe, get better with scale.

If it’s right, and I think it is, then a16z’s best years are ahead of it.

Which is great. I like those guys.

But the magical thing about a16z’s product, the thing that actually improves with scale, is that as a16z builds more resources, skills, network, and power, every new technology company it works with, and even many it doesn’t, improves with that scale, too.

A world in which a16z succeeds is a world in which new technology companies can compete with incumbents on a more equal footing, may the best product win.

A world in which a16z succeeds is a world in which new technology - at every layer of the stack, from energy to AI, crypto to self-driving cars - diffuses through the economy more quickly and impactfully.

A world in which a16z succeeds, if you believe, like I do, that new technology gives humans the means with which to make the world better, is a more abundant one, sooner.

a16z works on behalf of the future. If it’s really right, all of our best years are ahead of us, too.


A big thank you to everyone who spoke with me for this piece.


That’s all for today. We might even be back in your inbox this weekend, though. It’s a brave new not boring world.

Subscribe now

Thanks for reading,

Packy


Appendix

Important Disclosures Related to Performance

This appendix is provided for informational purposes only and does not constitute an offer to purchase any interest in any fund managed by a16z Capital Management, LLC (“ACM”). This presentation information should not be relied upon in any manner as legal, tax, investment or accounting advice. An investment in any fund managed by ACM involves a high degree of risk including a risk that the entire amount invested is lost.

All figures as of 09/30/25 unless otherwise noted. All performance figures, valuations and fund summaries contained herein are unaudited and subject to change. Past performance is not indicative of future results. There can be no assurances that any future a16z Capital Management, LLC fund or investment will achieve comparable results. Furthermore, performance of any future ACM fund will not be comparable to performance of existing funds due to material differences in market conditions, differences in investment strategy and other factors. No individual investor or fund received investment performance illustrated above.

Gross and Net performance figures provided herein do not represent and should not be used as a substitute for the actual performance of each ACM managed fund. Performance figures of certain funds reflect the use of a warehouse, capital call, or similar line of credit. Performance would differ if calculated from the time of the opening of such line of credit rather than from the initial contribution of capital and may be lower. Performance includes performance for the primary or “main” fund identified herein as well as any vehicle that aggregates capital from multiple unaffiliated investors for the primary purpose of investing directly into such primary fund, including those funds which do not charge a management fee or carried interest. If such funds were excluded, performance would be lower. Fund performance does not include funds in ACM Bio and Health strategy or the Cultural Leadership Fund strategy; single-investor vehicles; or special-purpose/single-investment vehicles (SPVs), unless specifically mentioned.

Performance figures include reinvested capital and may differ if such performance was excluded. Performance reflects voluntary General Partner fee waivers and would be lower if such fee waivers were excluded. Any investments and portfolio companies described or referred to in this report are not representative of all investments in vehicles managed by ACM and there can be no assurance that the investments described are or will be profitable or that other investments made in the future will have similar character or results. Performance figures do not include all investment funds managed by ACM. Visit a16z.com/portfolio for a list of all investments in a16z-managed vehicles.

Gross/Net Total Value to Paid-In Capital (TVPI): represents the sum of (1) the aggregate amount of distributions made to all limited partners of those Funds and (2) the fair value of all limited partners’ capital accounts as of the end of the period indicated, expressed as a multiple of the aggregate amount of capital that has been contributed by all limited partners of a given fund. Net includes the effect of management fees, fund expenses and carried interest allocations.

Net DPI: Net Distributions to Paid in Capital for ACM funds represent the aggregate amount of distributions made to the limited partners of those funds expressed as a multiple of the aggregate amount of capital that has been contributed by those limited partners of a given fund.

Gross Metrics (Fund-Level): Includes cash still held, and any other assets and liabilities of the Fund. Gross return adds back management fees, carried interest, and fund expenses.
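For intuition only, here is a minimal sketch of how these multiples fall out of the definitions above. Every dollar amount is made up for illustration and does not correspond to any ACM fund.

```python
# Hypothetical worked example of the TVPI and DPI definitions above.
# All dollar amounts are made up and correspond to no actual fund.

distributions = 1.8e9  # aggregate distributions made to all limited partners
nav = 2.4e9            # fair value of all LPs' capital accounts at period end
paid_in = 1.5e9        # aggregate capital contributed by all LPs

tvpi = (distributions + nav) / paid_in  # (1.8 + 2.4) / 1.5 = 2.8x
dpi = distributions / paid_in           # 1.8 / 1.5 = 1.2x

# Per the definitions, net figures include the effect of management fees,
# fund expenses, and carried interest; gross figures add those back.
print(f"TVPI: {tvpi:.1f}x, DPI: {dpi:.1f}x")
```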

1

The rival’s analyst crunched the numbers and estimated 7.5%. Close. Andreessen Horowitz’s average ownership in its portfolio companies was 8%.

2

Pitchbook data lists Founders Fund Growth III ($4.6B) and Founders Fund IX ($972M) closed in 2025.

3

Per Pitchbook, US VC funds raised in 2025 totaled $82.0B, including a16z’s $15B.

4

Source: Valuations per Pitchbook as of September 14, 2025.

5

Source: Ilya Strebulaev, Venture Capital Initiative, Stanford GSB, April 2025.

6

Source: Based on Pitchbook data available as of July 31, 2025. Excludes AI unicorns in China and Hong Kong.

7

Past performance is not indicative of future results. See appendix for important disclosures and information on related fund returns.

8

I see you, Insight Partners fans. The $20B Fund XII it raised in 2022 includes a Buyout fund.

9

For more on LSV, check out David George’s excellent recent conversations with Patrick O’Shaughnessy and Harry Stebbings.

10

a16z closed on $1.7 billion for Fund X AI Infra, $1.6 billion used for illustrative math.

11

The rationale is simple: if megafunds are confident that they’ll benefit from the growth of the early stage tech ecosystem, they can justify paying for all sorts of pro-ecosystem things. If you believe that technology is good, and I do, then management fees applied to strengthen the tech ecosystem are like charity that keeps paying for itself.

a16z recently announced that it would be “supporting candidates who align with our vision and values specifically for technology.” It also built a world-class crypto research team based on the belief that “There is an opportunity for an industrial research lab to help bridge the worlds of academic theory with industry practice.” The team has since built and open sourced a number of useful research-based products, including Lasso and Jolt.

For firms with a long view, there’s an economic incentive to support the kinds of things that have long, uncertain payoffs that doesn’t exist anywhere besides government and academia, both of which have become increasingly sclerotic and slow-moving. I wouldn’t be surprised to see more VC-supported basic and applied research labs, for example.

12

To be clear, the a16z team is paid quite well. You can put away your violins. The point is that instead of paying a small amount of people galaxy-owner dollars, it chooses to pay many more people very well in order to build compounding advantages.

welcome to not boring world

2026-01-06 21:48:25

Welcome to the 847 newly Not Boring people who have joined us since 2025! Join 256,316 smart, curious folks by subscribing here:

Subscribe now

And today, for the first time ever, you can pay while you’re there to get even more not boring.


Hi friends 👋 ,

Happy Tuesday! Happy New Year! And HAPPY not boring world LAUNCH DAY.

I’ve been looking forward to today for a long time, so without further ado…

Let’s get to it.


welcome to not boring world

Today, we’re launching not boring world: the paid section of not boring, where the world’s smartest founders, researchers, investors, creatives, and general geniuses, the ones I couldn’t hire as a full-time writer for a million bucks, write their best ideas.

Subscribe now

Here’s the master plan.

Geniuses bring their genius ideas, I help write them. Call it a Cossay or a Joint or something. Whatever we call it, the goal is to make it as easy for busy practitioners to share their insights in the essay format those insights deserve as it is for them to spill them on a podcast.

We get all the geniuses in one place, and then we grow from there. This world is biological; your guess re: how it evolves is as good as mine. But it will, and I want you to be a part of it.

We are making a few bets. That ideas are meant to be written. That people who are out there doing things earn ideas you can’t find in LLMs. That the most biased narrator, the one betting his or her livelihood on his or her idea, is the most reliable narrator. That when it’s easier than ever to get mediocre outputs on-demand, the best thing you can feed your brain is high-quality inputs from high-quality people.

That the future can be full of both means and meaning, and that we can create the home for this good future online. A place that’s smart and weird and, hopefully, a little magical.

I mean, it’s a newsletter, so we’ll see. But that’s what I’m going for.


Something that I keep writing about because I think it’s really important and becoming even more important is differentiation: doing the thing that only you can uniquely do.

not boring world is that, for me.

It is a bet on the written word when everyone is going all-in on video.

It combines two of my favorite things in the world: talking to smart people and writing.

And it works way better now than it would have when I started not boring.

Over the past six (6!) years of writing this newsletter, I’ve gotten the chance to know and work with some really smart people, people way smarter than me, who are out in the field building stuff and researching stuff and creating stuff. These people are betting their prime years on certain ideas that no one else understands (or believes) yet.

They are more motivated than anyone else could possibly be to understand every facet of the bet they’re making: the history, the technical details, the economics, where it could all go wrong, and what could happen if it goes right.

Meaning, the best person to learn about what’s happening in robotics from is the founder of a robotics company. Even better, from multiple founders of multiple robotics companies, each making a slightly different bet on the winning approach.

Unfortunately, because these people are busy building, and because they’re not necessarily writers (those who can’t do, write, or something), their very best, most core ideas don’t get to see the light of day.

I love to bring them out when I write Deep Dives, but I only write a few Deep Dives a year, often after years of knowing a founder and his or her company. I am only one person and I have only ten fingers. I am a bottleneck.

Most founders resort to a much lighter-weight way to get their ideas out in the world: podcasts.

I love podcasts. I listen to them all the time. I’ve made a couple of them.

Having said that… I think podcasts are a pretty terrible medium for ideas.

They’re great for getting to know people. They’re great for stories. They’re certainly great for putting on wherever you are and whatever you’re doing.

But quick: name the best idea you’ve heard on a podcast recently.

Aggregation Theory was written. Commoditize Your Complements was written (twice). The Bitter Lesson was written. Attention is All You Need was written, too. I, Pencil. Meditations on Moloch. The Cathedral and the Bazaar. 1,000 True Fans. Do Things That Don’t Scale. The next big thing will start out looking like a toy. Why Software Is Eating The World. All written.

It has been said that reading is dead, that video won. And for most people, that’s probably true. Video is easy, and video is fun. But I can’t think of a single canonical idea born on a podcast or video, not one.

Ideas are meant to be written.

They are meant to be put to paper, cross-examined, torn apart, rearranged, supplemented with data, edited, packaged, placed next to shitty hand-drawn illustrations. They are meant to be quoted and shared and built upon.

So not boring world is doubling down on the written word.

One goal is to make it as easy for genius frontier practitioners to produce an essay as it currently is for them to go on a podcast and yap. To start, that will mean co-writing essays with them.

We have solid precedent. My two most popular essays of all time, The Electric Slide and Excel Never Dies, were co-written with founders! Sam D’Amico and Ben Rollert, respectively. I just realized this last night, long after deciding to do this. Literally the two most popular essays in not boring history.

Literally the Two Most Popular Not Boring Essays Ever, per Substack

The trade is simple: they’re smarter than me, I’m (usually) a better writer than them.

As Ben put it: “I have good ideas but somehow fail to bring them anywhere.”

My job is to bring them out. It’s what I love to do, truffle pig new ideas and shave them all over your inbox.

Hardware is a Fruit, which I co-wrote with Daylight’s Anjan Katta, was a more recent trial balloon. Anjan had a core idea against which he’s been building a company for years. He texted me a small version, then sent me a voice note with a longer version, and a couple of days later, we had an essay that he probably never would have written about an idea that I never would have had on my own.

I have a lot more in the hopper. All are ideas I never could have come up with myself, even using our new thinking machines. Which leads me to the second thesis behind not boring world:

We will write down ideas you can’t find in an LLM.

My co-authors are figuring out new knowledge in real time, and we’re going to share it fresh, before we’re even sure it’s right, while the hypothesis is still being tested.

Early last year, I wrote Long Questions/Short Answers, and I argued that finding the right questions will become extremely important as LLMs make it easier to get to answers.

not boring world is something similar. Like: Long Inputs / Short Outputs.

MSCHF’s Gabe Whaley went on Jackson Dahl’s Dialectic podcast last year and talked about employing a full-time person whose job it is to bring in fresh inputs and educate the rest of the team on them. That’s just about the coolest thing I’ve ever heard.

I want not boring world to feel kind of like that. A home for really good, well thought-out inputs, inputs that are so fresh, so frontier, that they’re not in the LLMs yet. Inputs that exist only as ideas trapped in geniuses’ heads. I intend to help get them out as craftfully as I can.

To start, that will be through co-written essays with smart founders, researchers, creatives, industry insiders, etc… but I think this starts getting most fun as that network of collaborators grows and continues to contribute.

So Anjan wrote that essay on hardware; now he can share his inputs. The papers he’s reading. Tidbits from conversations he’s had. But more: art, too, and music, even meditations.

And it’s not just Anjan. Each contributor will enter the center of not boring world and be able to share their inputs, independently or in conversation with each other. They’ll debate each other. Maybe they’ll collaborate. All of a sudden, not boring world becomes this fascinating little network. At its best, it will help you learn things you can’t learn anywhere else. And it’ll help you see the raw inputs and thought processes behind these ideas, so you can form your own thoughts.

This is because our thesis is that you should come up with your own outputs — that’s your job — fed by great inputs — that’s ours.

If I just tell you to “buy Google” or whatever, there is exactly zero alpha in that. It’s much better for you to combine some new inputs from the frontier, your own experience, and inputs that you find yourself to come to your own conclusions.

As the Genie tells Aladdin, bee yourself.

The coolest part about my job has been stuffing my head with inputs directly from really smart people doing things, letting it all swish around, coming up with my own outputs, and through that process, as if by magic, developing my intuition. I want to share that magic with you.

Subscribe now

But inputs can be messy, which is where not boring world comes in. I’ll curate the very best people I can find and present their ideas as coherently as possible, to balance the raw inputs with legibility.

Two things will help us do this.

The first is Editorial Infrastructure. I’m going to be hiring a team to build a machine that helps the very best ideas and inputs shine and spread. Editors, illustrators, researchers.

This team is not built out yet. By subscribing to not boring world today, in the very beginning, you’re helping start what I hope becomes a unique and world-class modern media organization, one that grows with the people actually making the news.

There is a fun secret behind all of this: I couldn’t pay any practitioner worth hearing from any amount of money to get them to come work for not boring full-time. If they were so easily bought, they wouldn’t be generating the types of ideas worth listening to.

But they’re certainly willing to spend a few hours (or less) to 1) get their favorite ideas in essay shape and 2) get those ideas in front of smart, curious people (who are potential employees, investors, customers, and partners). This is the Liquid Super Team at a higher level than I initially intended.

The risk here is that the message gets diluted. What is not boring world if it’s tech and architecture and book reviews and group meditations and …?

The answer to that question is the second thing that will help us build this universe: a coherent worldview.

not boring world publishes on behalf of the good future.

That means technology, but not only technology. The closest I’ve come to expressing this worldview is in Means & Meaning. not boring world covers both.

Means are obvious: how technologists give humans ever-greater means – including cutting edge research and the strategy it takes to scale enough to have a real impact.

Meaning is harder: what is worth paying attention to? How can we contribute? What are we doing here? How do we bring about Modern Magnificenza?

If we are zipping around the world on supersonic planes depressed and arguing about politics or whatever, we have failed.

Come join our world and make sure that doesn’t happen.

The truth is, I don’t know exactly where we’ll end up. I do know that I’m as energized about not boring as I’ve been since the beginning. I have the most fun when we’re experimenting with new things, even and especially when they feel different, and this feels like something that doesn’t exist but should, and like we’re in the best place to make it happen.

Turning on paid subscriptions was actually the plan from the very earliest days of not boring. Once I hit like 5,000 subscribers, I’d turn on a paywall, convert like 10%, and be off to the races. But on the way to 5,000 subscribers, then 10,000, then 100,000, then 255,000, I realized that I really liked it when those words I’d put so much effort into writing got into as many peoples’ brains as possible. So I kept the newsletter free, partnered with sponsors, and grew.

The thing is, I still like when the essays I pour my very best ideas into get to fly free, so those will remain free, along with the Weekly Dose. Everything you get from not boring today, you will still get for free.

not boring world is different and more.

Over the next few months, we’ll be dropping co-written essays every other week to paid members of not boring world. The roster is already full of some of the smartest and most creative people I know. No one I’ve asked has said no yet.

As I build out the team, and as co-writers become contributors, frequency will increase. We’ll figure out how to do that without overwhelming you. We’ll also open up a chat in the Substack app to discuss it all together. This isn’t going to be a one-sided thing. You’ll get to be a part of what I’m building. Book clubs? Possibly. Daily rituals? I hope so, eventually.

This is the maximalist version of what a newsletter can be. Newsletter as world container. Newsletter as jumping off point. Curation and creation combined. Our place to experiment with new formats. The last subscription you’ll ever need (jk please support my fellow substackers).

It’s a bet on the written word, a bet that great new ideas are trapped in human heads, and a bet that it’s possible to build a future that increases both means and meaning.

Subscribe now

Welcome to not boring world: a newsletter, yeah, for now.


Big thanks to Meghna Rao for editing, Hanne Winarsky for helping me prepare to launch.


That’s all for today. We will be back in your inbox a BUNCH this week.

Thanks for reading,

Packy