Not Boring, by Packy McCormick. Tech strategy and analysis, but not boring.

Weekly Dose of Optimism #177

2026-01-23 21:46:57

Hi friends 👋 ,

Happy Friday!

We are back to our regularly scheduled Friday slot after yesterday’s optimistic cossay with Ross Garlick on what could go right in Venezuela and what it would take.

We have a lot of great stuff, including the best thing I’ve ever read by the best writer in biotech, promising cancer vaccine results, Zipline money, ocean plastic removal, internet backbone, and a bunch of bonuses for those of us who are going to be snowed in this weekend. Stay safe and warm out there, and…

Let’s get to it.


(1) Going Founder Mode on Cancer


If you read just one thing from the Dose this week, please make it this.

I am a founding member of the Elliot Hershberg Fan Club. He was not boring capital’s biotech partner and remains a great friend and the person I turn to with any biotech question I have now that he’s running Amplify Bio. I love most of what he writes. But I don’t think any of it comes close to this one. I’ve been waiting for it.

The last time Elliot was in New York, we took a walk around Washington Square Park and when the conversation turned to cancer therapeutics, he told me about GitLab founder Sid Sijbrandij’s story for the first time. His point was: this is what one superhuman billionaire is doing to fight his cancer today, and I think something like it will be available to everyone to fight cancer in the future.

Now, he’s written that story down, and it’s better than I expected. It’s the story of Sid’s extraordinary fight against osteosarcoma after exhausting the standard of care.

Sid fought cancer and beat it, only for the cancer to return in 2024. After doctors told him, basically, “You’re done with standard of care, maybe there is a trial somewhere, good luck!”, Sid pulled out all the stops to cure himself.

He put together a 1,000+ page Google Doc of health notes. He obsessively gathered information via every diagnostic he could get his hands on, run as often as possible, and built systems to solve problems nobody else would solve for him. Sid assembled a SWAT team, used single-cell sequencing to identify FAP-expressing fibroblasts in his tumor, flew to Germany for experimental radiotherapy, and is now in remission. He won.

The piece is part profile, part science, part fight against a Kafkaesque healthcare system, and part glimpse into a future where personalized oncology actually works, where AI agents order diagnostics, bioinformatics pipelines design custom vaccines, and the total cost of treating early-stage cancer the way Sid did drops dramatically.

From an optimism perspective, it’s a twofer:

  1. It’s possible to beat cancer through personalized therapeutics.

  2. One extremely dedicated person can solve almost anything.

Just read it.

(2) Moderna, Merck Report Positive Results from Cancer Vaccine Study

Nicholas G. Miller for The Wall Street Journal

In the meantime, generally available cancer drugs continue to get better.

This week, Moderna and Merck announced that results from a five-year Phase 2b trial in melanoma patients showed that Moderna’s cancer vaccine, in combination with Merck’s immunotherapy, Keytruda, reduced the risk of death or recurrence by 49% versus Keytruda alone. That is a massive improvement, and another big sign that mRNA vaccines are going to be a key part of the arsenal in the fight against cancer. The companies have eight trials in Phase 2 or 3 across multiple tumor types beyond melanoma.

Relatedly, long-term not boring readers may remember Keytruda from our Deep Dive on Varda. It is one of the best-selling drugs of all time and was the single best-selling pharmaceutical in the world in 2024, with $29.5 billion in sales. It is also one of the drugs with the highest price per kilogram, at a whopping $194 million per kg.

The drug is out of this world, literally. In 2017, Merck conducted a mission on the ISS to explore the crystal properties of Keytruda in order to improve crystallization. While the research has not been commercialized, the hope is that a tighter distribution of smaller particle sizes would allow for self-administration at home versus the current process of going into the clinic for IV dosing.

Daily Synchronicity: after I wrote this, Scott Manley posted a video on just this topic.

The future is bright. It’s never been a worse time to be cancer.

(3) Zipline Raises $600M at $7.6B and Makes 2 Millionth Delivery

Zipline has been one of our favorite companies to write about in the Dose, for three reasons.

First, they make autonomous flying drones. They’re building the future we want to live in.

Second, they started out (and continue) by using those drones to deliver drugs to hard-to-reach places in Africa and have saved or improved thousands of lives. Great for humanity, and a smart strategy to get flight hours in before taking on the US.

Third, the future of delivery is going to be unrecognizable, and it’s going to make life on the ground better, too. Drones are faster and cheaper than cars or electric bikes. Order something, get it whizzed to your house. That also means fewer delivery vehicles clogging up the roads and fewer electric bikes trying to kill you.

Now, they have a fresh $600 million to pull that future forward faster, including the launch of a new market, Phoenix. To do it, they’re going to need a lot of drones. Last year, I got to tour the facility where they design, test, and manufacture new Zips. Molly went behind-the-scenes on Sourcery so now you can, too.

(4) The Ocean Cleanup is Now Intercepting 2-5% of Global Plastic Pollution

Boyan Slat

A non-profit called The Ocean Cleanup is working to get plastic out of the ocean and keep it out, and it’s on its way toward its goal of removing 90% of floating ocean plastic pollution by 2040. Founder Boyan Slat announced that The Ocean Cleanup removed 27,385 metric tons of plastic last year, roughly the weight of three Eiffel Towers, and is intercepting 2-5% of global plastic emissions.

Slat was 16 when he went scuba diving in Greece and saw more plastic bags than fish. He made it a high school science project. His 2012 TEDx talk went viral. He dropped out of aerospace engineering, put in €300 of saved pocket money, raised $2.2M from 38,000 donors in 160 countries, and founded a nonprofit to fix the problem.

A decade after the TEDx talk, TOP was pulling out serious plastic: 1M kg by early 2022, 10M kg by April 2024, 50M kg by January 2026. System 03 now cleans an area the size of a football field every five seconds. Their Guatemala river site, which nearly failed when anchors washed out in 2022, removed 10M kg in its first year after they relocated and redesigned. "When people say something is impossible," Slat once said, "the sheer absoluteness of that statement should be a motivation to investigate further."

Slat designed TOP to put itself out of business, which is perfect, because when he’s done with macroplastics, we need him to get to work on microplastics.

(5) Blue Origin Announces TeraWave

Blue Origin

While everyone has been talking about SpaceX’s IPO plans, Jeff Bezos quietly unveiled a second satellite constellation.

TeraWave is not for consumers. It’s enterprise infrastructure: 5,408 optically-interconnected satellites across LEO and MEO, designed to deliver symmetrical upload/download speeds of up to 6 terabits per second anywhere on Earth.

For context, Starlink’s consumer service maxes out around 400 Mbps, but that comparison isn’t perfect. Recall from Cable Caballero that Tier 1 “backbone” providers build fat pipes and wholesale capacity to Tier 2 middlemen or ISPs, who sell internet to customers at ~100 Mbps to 10 Gbps. TeraWave is like that Tier 1 backbone provider, but beaming down from space.

TeraWave is targeting ~100,000 enterprise, data center, and government customers who need redundant, high-capacity connectivity where fiber is too slow, too expensive, or impossible to deploy.

The architecture is clever: 5,280 satellites in LEO handle the RF links (up to 144 Gbps per customer via Q/V-band), while 128 satellites in MEO provide the optical backbone for the 6 Tbps throughput. Deployment starts Q4 2027, likely on Blue Origin’s New Glenn.
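A quick back-of-envelope on those numbers. The throughput and link figures come from this section; the ratios are my own arithmetic and ignore real-world oversubscription, beam scheduling, and spectrum sharing:

```python
# Unit conversions for the claimed TeraWave figures vs. consumer Starlink.
TBPS = 1e12  # bits/second in one terabit
GBPS = 1e9
MBPS = 1e6

terawave_total = 6 * TBPS       # claimed constellation throughput
per_customer = 144 * GBPS       # claimed max Q/V-band RF link per customer
starlink_consumer = 400 * MBPS  # rough consumer Starlink ceiling

# One TeraWave pipe is ~15,000x a maxed-out consumer dish...
print(terawave_total / starlink_consumer)  # 15000.0
# ...but could serve only ~41 customers all pulling 144 Gbps at once,
# which is why this is backbone infrastructure, not consumer broadband.
print(terawave_total / per_customer)
```

The second ratio is the tell: this is a product for a small number of very fat pipes, not millions of dishes.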

Bezos already has Amazon Leo (née Project Kuiper) for consumers and small businesses. That’s the Starlink competitor with ~180 satellites up and a 2026 commercial rollout planned. TeraWave goes after a different market entirely: the hyperscale backbone. It’s space-based dark fiber for enterprises, not broadband for RVs.

The timing is pointed. SpaceX has hired four investment banks to take it public in what may be the largest IPO of all time, with a potential valuation north of $1 trillion. Starlink is the business. 9 million subscribers, 9,400+ satellites, 70% of SpaceX’s revenue, adding 20,000+ customers per day. Musk wants to raise tens of billions of dollars in an IPO to build orbital data centers and, eventually, satellite factories on the Moon. He talked a little bit about the vision in a surprise Davos talk.

At the end of the talk, Elon said, “My last words would be, I would encourage everyone to be optimistic and excited about the future, and generally I think for quality of life it is actually better to err on the side of being an optimist and wrong rather than a pessimist and right.”

We couldn’t agree more.

BONUS (for paid not boring world members): Brex / Ramp, Levin, Stewart Brand


The Venezuela Opportunity

2026-01-22 21:54:22

Welcome to the 1,422 newly Not Boring people who have joined us since our last essay! Join 258,248 smart, curious folks by subscribing here (and go paid for more of the good stuff)

Subscribe now


Hi friends 👋 ,

Happy Thursday! We’re back with our second cossay, on a very different topic from our first on robots, but unified by the same question: What will it take to build things in the West?

One of my goals in co-writing essays is to share the unique insights and earned perspectives that I get to hear from people who learn by doing.

For example, one Friday morning in late October, in the midst of President Trump’s verbal escalation in Venezuela, I sat outside of a small cafe in Mexico City having breakfast with Forrest Heath III and Ross Garlick, the CEO and CFO, respectively, of our Colombian portfolio company, Somos Internet. We were in Mexico City for an Arc conference on building in Latin America during which Forrest and I hosted a salon on the potential for the region to be a strong energy and manufacturing partner to the United States.

During that breakfast, after niceties and microPOP logistics (you could, Ross said, actually rent space in restaurants or empty retail to serve as the mini-data-centers on which Somos’ active ethernet network relies), we started talking about Venezuela. What did they think, as people building a business next door, about potential U.S. intervention? Was it a big risk?

The conversation we had from there, and a couple we’ve had since, surprised me. Forrest and Ross were more optimistic about the situation and about America’s role in it than I was. Everyone they’d spoken with in Venezuela wanted Maduro out, they said, but feared that anyone who defected would be targeted and potentially killed. The U.S.’s presence might be able to break that impasse.

Their ideas were the first I thought of when the news broke that the U.S. had dropped into Venezuela and taken Maduro into custody on January 3, 2026 in Operation Absolute Resolve. I felt lucky to have a different perspective on the situation than the ones I was reading, not the One Final and Correct Perspective, but a differentiated one based on specific experience.

So I asked Ross to co-write an essay with me on what could go right in Venezuela.

Let’s get to it.


Today’s Not Boring is brought to you by… Framer

Framer gives designers superpowers.

Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in minutes. Whether you’re starting with a template or a blank canvas, Framer gives you total creative control with no coding required. Add animations, localize with one click, and collaborate in real-time with your whole team. You can even A/B test and track clicks with built-in analytics.

Say thanks to Framer by building yourself a little online world without hiring a developer.

Launch for free at Framer.com. Use code NOTBORING for a free month on Framer Pro.

Just Publish it With Framer


The Venezuela Opportunity

A Co-Written Essay with Ross Garlick

I like to say that living in Colombia is a long-term arbitrage.

I have never been to a place with a larger delta between external perception and internal reality, and I’m the beneficiary of getting in “early” and witnessing the world wake up to the truth.

I grew up in England, moved to the States for university, stayed to work in finance, quit to start a café in Bogotá, and moved to Medellín to become the CFO at Somos Internet, which many of you now know about thanks to Cable Caballero. Colombia is home. My wife and I recently got married here. We love it here. We plan to make our lives here. So it brings me no great pleasure to say what I’m about to say.

Venezuela has even more potential than Colombia.

A lot of people have become familiar with Venezuela over the past few weeks, since Operation Absolute Resolve, in which the United States captured Maduro and shipped him to Brooklyn. Many are trying to understand the implications for Venezuela, the region, and the United States. I’ve seen emotions from my friends and online bubbles that range from full blown catharsis to a cynicism that nothing substantive has actually changed.

I’m excited about it. Operation Absolute Resolve has opened up more exciting possible outcomes for Venezuela than any other event in my eight years in the region.

That said, in order to reap the country’s full benefits, people need to get excited about the right thing.

Oil is the obvious prize. But it’s a complicated one. President Trump asserts that selling oil from Venezuela is “gonna make a lot of money,” and it is true that Venezuela’s 304B barrels of reserves are the world’s largest. Until recently, more than 80% of its oil exports went to China. Redirecting the flow of Venezuelan oil would hamper Chinese road building and cut off 50% of Cuba’s oil supply, while giving America cleaner access to (very heavy, harder to refine) oil. Realignment could be as big a geopolitical win as a financial windfall.

As it stands, DOE Secretary Chris Wright has indicated that the U.S. will control Venezuela’s oil “indefinitely,” and that seems to be the biggest win to come from Maduro’s capture.

But oil isn’t the only prize in Venezuela. It’s not even the biggest.

We must rebuild and reopen Venezuela because it’s the most underpriced opportunity on Earth.

I’ll start with a caveat. This is a longshot opportunity that requires us to ask ourselves: “What is possible if things go right?” That’s one of the questions we’ve also been asking as we at Somos evaluate the Venezuelan market, and talk to people on the ground. During one of those conversations, a journalist friend told me, “There’s a long way to go and a lot of things have to go right for your vision to come true.” He’s right.

January 22, 2026

To start, a transition is not actually guaranteed. Prediction markets expect Delcy Rodríguez, the longtime Chavista official serving as interim President, to remain in power throughout the year.

I wouldn’t bet against those odds, but this market resolves at the end of 2026 and Marco Rubio has said that the US has a three-step plan for Venezuela: stability, recovery, and then transition, a plan that will undoubtedly take time to fulfill.

For this exercise, let’s say we do get a U.S.-aligned, freely and fairly elected transition government by year end 2027.

Well executed, with a free and democratic Venezuela, the Gran Colombia bloc of Venezuela, Colombia, Panama, and Ecuador could, within 25 years, become as strategically important to the U.S. as the EU or Mexico.

For context: Mexico is now America’s largest trading partner at $840B annually, supporting 5-6M U.S. jobs. The U.S.-EU relationship totals $1.5T in trade and $5T in mutual investment, supporting 5.7M American jobs. A Gran Colombia bloc with 105M+ people and $700B+ in GDP could eventually approach these scales, particularly if the country becomes a nearshoring destination for supply chains for which the US currently depends on Asia. It has the mineral resources, talent, and strategic location to offer what Mexico and the EU already provide to the U.S.: a large, proximate, culturally-aligned economy where American investment creates American jobs and reduces American vulnerability to rivals.

This potential future is why it’s worth it for the United States to help rebuild and reform the country’s institutions instead of calling Maduro’s capture a victory, grabbing the oil via an uneasy truce with the remaining Chavistas, and moving on to the next conquest.

Importantly, doing so will help the U.S. undercut China’s current long game. The country is quietly buying influence in the region to an extent that would likely surprise most Americans, even those who are aware of the Belt & Road initiative and the rise of high-quality global Chinese brands. When Latinos think about EVs, they’re thinking about BYD, not Tesla. I’ve taken multiple Ubers in JAC vehicles and admired the Zeekr EVs on display in their flagship Bogotá showroom. Huawei and Xiaomi are the default phones for the lower and middle-income classes.

China is also building infrastructure. In November 2024, President Xi visited Peru to inaugurate the massive Chinese-owned Chancay Megaport on the Pacific, and subsequently hosted the presidents of Brazil, Colombia, and Chile in Beijing. There, they announced further Belt & Road infrastructure, including a massive Chinese-funded 3,000 km cross-continental freight railway from Brazil’s Ilhéus Port on the Atlantic to Chancay. This railway would partially circumvent the need for the Panama Canal.

The country that finances Venezuela’s rebuild will be the one that captures the spillovers from its rebound.

I don’t say all of this as a geopolitical analyst. I’m a gringo business owner and operator who has worked with Venezuelans and been blown away by the talent, optimism, and potential of the region. So much so that I am actively evaluating the opportunity for Somos to expand into Venezuela.

I once hired a dishwasher named Cesar in my restaurant in Colombia. Cesar is a former small business owner from Venezuela who, despite being robbed along the way, walked 72 hours across the border to Bogotá holding his newborn baby. He made it to Colombia. Within five years, Cesar saved up enough to open his own taco joint, and he now has three restaurants of his own.

Cesar is one of the estimated 8M people who have left Venezuela in the past ten years. This is a quarter of the population, a New York City’s worth of the country’s best and brightest talent, working-age men and women looking to build a life elsewhere.

The experiences Venezuelans have survived over the past quarter century of Chavismo1, combined with the institutional memory of a country that was once richer per capita than Spain, Greece, or Israel, have created an entrepreneurially minded group of people with the grit and perseverance to overcome seemingly insurmountable obstacles and thrive outside of their home country.

Imagine if we unleashed this talent to rebuild Venezuela from the ground up. Imagine the promise of a Nova Gran Colombia, with Venezuela a force instead of a blocker.

Gran Colombia

A little history for those unfamiliar with the concept of Gran Colombia. Back in May 1819, Simón Bolívar campaigned to liberate New Granada, which we now call Colombia, from the Spanish. He led a combined army of Venezuelan and New Granadan troops from Venezuela’s flooded plains, into the Casanare Province, up to the foot of the Andes Mountains, and over into New Granada. The Spanish, who had assumed the Andes were impassable during the rainy season, were caught completely off guard. Within weeks, Bolívar’s ragged survivors had regrouped, recruited local support, and routed the royalist forces at the Battle of Boyacá on August 7th, a decisive two-hour engagement that effectively ended Spanish rule.

With New Granada secured, Bolívar moved quickly to formalize his vision of unity. In December 1819, the Congress of Angostura proclaimed the creation of Gran Colombia, merging Venezuela and New Granada into a single republic, with Ecuador to be incorporated once liberated.

Bolívar’s logic for a unified South American Republic was straightforward: if they were fragmented, the former Spanish colonies would be weak, poor, and perpetually vulnerable to reconquest or foreign meddling. United, they could pool military resources to finish the wars of independence, command respect on the world stage, negotiate trade agreements from a position of strength, and develop shared infrastructure across a territory blessed with Caribbean ports, Andean agriculture, Pacific access, and vast natural resources. A large, stable republic might even attract the European investment and migration that the young United States was already drawing.

The experiment barely outlasted its architect. Regional elites resented distant rule from Bogotá; Venezuelan leaders like José Antonio Páez chafed under centralized authority and began agitating for autonomy almost immediately. The geography that Bolívar had so dramatically conquered worked against him in peacetime. The Andes and jungle landscape made communication slow and governance nearly impossible across such distances. By 1826, Páez was in open revolt. Venezuela formally seceded in 1830, Ecuador followed months later, and Bolívar, sick and disillusioned, died that December.

Not once in the intervening two centuries have the countries that previously made up Gran Colombia been both governed by a fairly elected government and operated fully at peace.

Today, Colombia, Ecuador, and Panama all have democratically elected governments. Colombia has been at peace with the Marxist guerrilla group FARC since 2016, though smaller conflicts continue. Panama has been stable since 1989. Ecuador, the smallest, faces a severe organized crime crisis but the state is not in armed conflict. Things in the region are not perfect, but they’re as good as they’ve been in a long time. Venezuela has been the most notable exception.

As of January 3rd, that may be changing.

The Potential of a Nova Gran Colombia

Bolívar failed partly because geography made a single republic ungovernable. In 2026, the question isn’t whether the Andes are passable in the rainy season, but whether modern infrastructure, money movement, and rules can make the region economically contiguous. If they can, you get the benefits of unity without the need for a single flag.

A free Venezuela offers the chance for a new bloc with a population approaching Mexico’s and a GDP that would rank between Taiwan’s and Belgium’s to develop side-by-side. Had Venezuela’s post-2012 collapse never happened, Nova Gran Colombia’s GDP would fall between Saudi Arabia’s and Poland’s2.

This bloc would count among its resources the Panama Canal as well as large coasts on both the Atlantic and Pacific oceans for transport, along with massive oil, gold, mineral, and freshwater reserves3.

It has a young4, educated5, and urban6 population. All of this is with the region’s highest-potential country, Venezuela, hamstrung by socialism.

With a liberated Venezuela, this bloc could grow to become as strategically important to the U.S. as Mexico and Canada or the EU.

A free and democratic Venezuela could drive reverse migration for the 8 million strong diaspora and become a destination for migrants of all nationalities and socioeconomic levels. Cheap real estate, amazing climate, and a national industry (oil and gas) made for well-paid, technical jobs should make Venezuela a destination for everyone from digital nomads to blue collar workers from Mexico to Chile, and even U.S. retirees looking for alternatives to Florida, Arizona, and Costa Rica. Venezuela is missing 8 million people versus where its population would have been prior to Chavismo. It can add back many more. This reverse migration would help Venezuela, Latin America, and the United States, which has been a destination for many who have fled.

A stable Gran Colombia would make an ideal AI hub for the Western hemisphere. We are in the middle of an AI capex supercycle, and the bottleneck is no longer chips, but power, permits, and fiber. The IEA estimates that data centers used roughly 415 TWh of electricity in 2024, and projects that figure could roughly double to about 945 TWh by 2030. Gran Colombia sits in a rare spot that makes it an ideal part of the solution. It is geographically central to the Americas, and it is wired into the global internet through brand new, high capacity submarine cables such as TAM-1, CSN-1, and MANTA. These cables land in Barranquilla or Cartagena, Colombia, with low latency to principal datacenter hubs including NAP of the Americas in Miami. Miami to Bogotá pings average around 45 milliseconds. And there are multiple regions at altitude for year-round cool temperatures.
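The IEA figures cited here imply a steep compound growth rate. A quick sanity check, using only the 415 TWh (2024) and ~945 TWh (2030) numbers above:

```python
# Implied CAGR of data-center electricity demand, 2024 -> 2030,
# from the IEA figures cited in the text.
start_twh, end_twh, years = 415, 945, 6
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 14.7%
```

Roughly 15% compounding per year is the demand wave the region would be building into.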

But geography is table stakes. Power is the real story. Unlike most emerging market “AI hub” pitches that depend on intermittently clean electricity, this region already runs on dispatchable hydropower at scale. Hydropower was 58% of Colombia’s electricity generation in 2024, supported by roughly 11 GW of installed hydro capacity. Venezuela generated about 64% of its electricity from hydropower in 2021 and has roughly 16 to 17 GW installed. The crazy thing is that the current hydro story is only a fraction of the potential. Colombia’s theoretical hydro potential is about 56 GW, and Venezuela’s technically feasible potential is about 62.4 GW. Those numbers imply a massive opportunity for firm, low-carbon baseload that could anchor hyperscale data center buildouts, especially when complemented by gas for reliability. APD, Somos’s sister company, is working on turning this potential into reality.
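How much headroom do those capacity figures imply? A rough sketch: the installed and potential GW numbers are from the paragraph above, while the 50% capacity factor is my own assumption for illustration (real hydro capacity factors vary widely by site and season):

```python
# Untapped hydro headroom implied by the cited figures.
installed_gw = {"Colombia": 11.0, "Venezuela": 16.5}  # Venezuela: midpoint of 16-17 GW
potential_gw = {"Colombia": 56.0, "Venezuela": 62.4}

untapped_gw = sum(potential_gw[c] - installed_gw[c] for c in installed_gw)
capacity_factor = 0.5  # assumed, for illustration only
untapped_twh_per_year = untapped_gw * 8760 * capacity_factor / 1000

print(round(untapped_gw, 1))         # 90.9 GW still on the table
print(round(untapped_twh_per_year))  # ~398 TWh/yr of firm, low-carbon generation
```

Under those assumptions, the untapped hydro alone is in the neighborhood of the entire world’s 2024 data-center electricity use.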

A Marshall Plan in the region could involve the U.S. government underwriting credit to build out the world’s biggest datacenter hubs in Colombia and Venezuela. In the same way that Apple invested $50B per year in capex in China in the 2010s to build out manufacturing capacity, today’s data center and power capex wave need not be confined to the borders of the United States.

A clean slate approach in Venezuela offers the chance to think about infrastructure for the 21st Century. Data centers are just one piece of the puzzle. Venezuela has the opportunity to build ports with fully automated docking and customs, interlocking energy microgrids to accommodate distributed solar and storage, roads and fulfillment distribution facilities rebuilt to prepare for the arrival of self-driving cars, and even microairports for flying cars and Zipline-esque drone logistics hubs. This last one is an opportunity in Venezuela, as well as in Colombia, where mountains separate our largest cities.

It is hard to overstate how important modern infrastructure will be to this transition.

To be sure, even a return to pre-Chavismo conditions would be a boon to the region. Bilateral trade between Colombia and Venezuela peaked at $6-7 billion annually in 2006-2007 before collapsing to just $200 million by the early 2020s. Since the 2022 border reopening, trade has grown, which shows that the link is still there and latent. Still, the countries are trading dramatically below their peak.

That said, the fact is that Latin America trades very little with itself, and trade is required for the economic impact to take hold. A 2004 IMF paper found that a 1 percentage point increase in trading partners’ growth is correlated with up to about 0.8 percentage points higher domestic growth. But per JP Morgan, just 15% of Latin America’s exports stay within Latin America, compared to ~40% in Asia-Pacific and 65% in Europe. This is a reflection of commodity-export concentration, poor cross-border infrastructure, and decades of political fragmentation.

This is an old problem, one that predates Bolívar, and one that modern infrastructure may finally fix. One of the reasons Gran Colombia fell apart so quickly was that it was a geographically challenging region to govern; autonomous aircraft can fly over those natural impediments and provide a bigger economic boost in our region than in any other in the world. More imminently, projects like the Autopista el Mar highway, which connects our home state of Antioquia to the new Puerto Antioquia, will take the over-ground trip to the coast from fourteen hours down to four. The highway will tunnel through the mountains, and similar projects could be built across borders if project developers and their backers had confidence in the region.

Autopista el Mar and Puerto Antioquia

Modern infrastructure is both an opportunity and a necessity. Venezuela will need to rebuild, and in the process, we may have the opportunity to defeat the natural foe that frustrated Bolívar two centuries ago.

Finally, a dollarized Venezuela could demonstrate what “leapfrogging” looks like for economies now that the GENIUS Act has created a U.S. regulatory framework for stablecoins. Venezuela has been living through de facto dollarization since Maduro relaxed controls in 2019, with dollars widely used for pricing and transactions. And as dollars have remained scarce and difficult to move through the formal system, the country has also become de facto stablecoin-ized. A Chainalysis report found that from July 2023 to July 2024, 47% of transactions under $10,000 in Venezuela were conducted using stablecoins. That combination makes Venezuela an unusually good test market for the U.S. exporting regulated digital-dollar infrastructure, especially once GENIUS creates clearer rules for issuers. A dollarized, bank-light economy like Venezuela is exactly where “stablecoin settlement + merchant acceptance” could leapfrog legacy rails. It would create more demand for U.S. Treasuries, as well.

No company better demonstrates the country’s capacity for financial innovation out of necessity than Cashea.

A $0-200M run-rate revenue jump in three years may not raise eyebrows the way it would have prior to the AI era. But $0-200M run-rate revenue for a LatAm startup that only raised $2.5M and has been profitable for two years is unheard of.

Cashea, founded in 2022, is a dollar-denominated interest-free BNPL for SMBs “built in Argentina but made for Venezuela.” It processed over 4% of Venezuela’s GDP in GMV as of September 2025, with a goal to process more than 6% by year end.

But Cashea was unable to raise real VC capital or obtain a substantial credit line to operate in Venezuela. The country was seen as too unstable. So Cashea partnered directly with merchants, who fund and bear the loans themselves. These merchants then offer credit to their customers, which Cashea guarantees if they default. It’s a hack that would be as difficult to bootstrap in the U.S. or anywhere else as it is in Venezuela, but the lack of credit availability in the system now drives a model that has been adopted by 5,000+ merchants across the country. It is growing exponentially, without major financing costs for Cashea, and has a delinquency rate below Affirm’s or Block’s Afterpay’s. Cashea charges merchants a fixed commission and has now begun to facilitate merchants’ receivables in a version of factoring offered on the Venezuelan Stock Exchange.

The company doesn’t need a massive balance sheet or financing facility, and it has completely obsoleted the need for traditional credit card rails, which are non-existent in the country anyway.

Here’s a great overview of the company from Fintech Leaders:

Cashea proves that necessity breeds innovation. A “clean slate” Venezuela will need a lot of it.

But clean slates can also be green pastures. They can help new market entrants build from scratch, innovate on business models, and leapfrog state-owned incumbents.

That is the opportunity we are excited by at Somos Internet, the fastest-growing ISP in Colombia. We build vertically integrated digital infrastructure to give customers better internet at a structurally lower cost. Our users love us. And we’re making plans for international expansion.

Until January 3rd, Somos hadn’t seriously entertained the idea of entering Venezuela. It was attractive, but there was too much risk.

Now, we are considering the opportunity. Venezuela’s capital city, Caracas, checks most of the boxes we look for when considering new markets.

Caracas from r/CityPorn

Large market size and high population density: Caracas is a major city with a population the size of Chicago (three million) and a population density greater than San Francisco, which makes it a perfect market for a new entrant like us. We can find more potential customers for every km of fiber deployed.

High existing ARPUs and low market penetration: A lack of private competition has left Venezuelans with a raw deal. State-owned CANTV offers fiber to the home (FTTH) services starting from $25/month for 60 Mbps. This is extremely expensive in 2026. And the website claims to offer 1 Gbps at $150/month. Thanks in part to these expensive plans, the penetration of fixed internet is low and many households rely on cell data as their primary form of connectivity. We believe that giving people access to great internet increases economic opportunity. It is a virtuous cycle.

Early adopter culture: The general Venezuelan population is open to trying new alternatives (see Cashea’s adoption) because they are dissatisfied with legacy offerings, or lack them entirely.

Proximity to existing infrastructure: This isn’t a dealbreaker for Somos, but it is, on the margin, better to expand to adjacent geographies rather than jump to new geographies entirely. This would require new contracts with Tier I providers to maintain a unified network architecture.

Despite its attractive characteristics, we are not jumping into Venezuela yet. The situation is still too uncertain.

But given that we are actively evaluating the opportunity, our perspective may provide useful color on what businesses are looking for before committing to rebuild the region.


16 Lessons on Selling (and Life) from My 5-Year-Old

2026-01-18 23:01:47

Hi friends 👋,

Happy Sunday. Earlier this week, X announced a $1 million article prize. I don’t normally write the kind of things that could win an X Article Contest - listy things, full of life lessons and advice. And then, wouldn’t you know it, my son Dev learned how to sell yesterday morning, and as he did, he dropped wisdom bombs for me to write down. We ended up with sixteen of them.

Now, they’re on X (go like, comment, and share - we need the $1 million, Dev has a world to build).

I really liked how it came out, so I wanted to share it with you all too. It’s kind of a co-written essay with a 5-year-old, who I hope becomes a more frequent contributor. I think I’m going to write more short things and share them in paid not boring world, so join us if you want the full spectrum of not boring, means to meaning.

Subscribe now

Let’s get to it.


16 Lessons on Selling (and Life) from My 5-Year-Old

This morning, my five-year-old son made his first two-dollar sale and dropped sixteen lessons on selling and life that are more practical than any of the slop you’ll find on LinkedIn.

I’ll share them with you, but first, I need to tell you about Dev, about his Donut Hats, and about his world.

One day when Dev was three, he told me that he wanted to build worlds.

Real ones. Big ones. Planets. Like, actual, physical planets.

“Then you’re going to have to study, buddy.”

“What do I need to learn?”

Math, physics, engineering, business. No one’s ever built a world before, so you’re going to have to study really hard.

And then he… did.

He asked me for math problems, then harder ones, then harder ones. Kid does 90 minutes of Russian Math every Sunday and loves it.

Physics, he always liked. Gravity was one of his first words, and one of the first concepts he grokked. “Why’d the cup drop bud?” “Gravity.” We read a little bit of Richard Feynman’s lectures, and he stayed with me, but I figured that was probably taking it too far.

Engineering, he loved. Most kids do. Magnetiles in particular, huge structures. Every night, we read a couple of pages from The Way Things Work Now, which my dad always tried to read to me but which I turned down, because I didn’t have worlds to build with the knowledge.

Throughout, he’d pepper me with questions. What materials would we need to make the world? How would we get water to the world? How would we grow trees on the world? Some I could answer; a bunch we had to ask ChatGPT.

Two stories blew my mind in particular, though, logistical things, which are important things to get right if you actually want to build worlds.

One time, we were sitting by the pool on vacation, not talking about worlds at all, when he turned to my wife Puja and me and asked if we knew any companies that made houses. He figured he’d need houses if people were really going to live on this world, and somehow, that while he would be fully capable of building the world itself, there would probably be companies that were already quite good at homebuilding who he could pay to handle that aspect of the plan. He asked the same thing about umbrellas.

Another, we were talking about how to get people to and from the world. I’d met a company that was making Single Stage to Orbit rockets, I told him, and maybe they’d be good because they’d just take off from a normal runway and land on one too. He thought about it for a second and said, “No, we should probably use Starship, because they’ve actually flown before.”

The thing about his growing brain is that it’s always churning. Usually, he doesn’t mention the world for weeks, and then out of the blue, he’ll say something about it, or ask a question he’d clearly been chewing on for a while.

One big question, when you want to build a real, big, actual, physical world is where you’re going to get the money. We back-of-the-enveloped it and figured he’d need about a trillion dollars. I told him about investors. He eenie-meenie-minie-moed and landed on his three-year-old sister, Maya, as a lead investor. Implausible, for now, but the kid has vision and Maya’s pretty good herself, so not, in the opinion of one dad, impossible.

I thought the case was closed. It wasn’t. His brain kept churning.

So one night, earlier this week, I came home to find Dev and Puja at the kitchen table. He had a pencil in his hand and a piece of blue construction paper in front of him. They were making a business plan for his new company, Donut Hats.

I guess that afternoon, he took some Play-Doh, shaped it into a ring, taped it up with blue masking tape (kid loves tape), and realized he might be on to something. He put the first donut hat in a construction-paper envelope, put the envelope in a box, and taped that shut, too, for safe keeping. Then he got to work.

Puja and Dev were already pretty deep. They’d figured out a price ($20, but $10 for family members), estimated costs (surprisingly cheap if you count his child labor at $0), gross margins ($7.65 per at F&F prices), and were starting to work on a marketing plan. Kids would probably be the right target, he thought, but their parents had the money. He kind of just intuited this stuff.

When I asked him why he was starting a company, he basically recited Choose Good Quests and The Company as a Machine for Doing Stuff back at me.

“I want to sell a lot of Donut Hats to make money that we can use to build my world.”

Over the next few days, he made a total of five Donut Hats in different colors and tapes. My favorite is the Orange and Green in Clear Packing Tape, but if that’s not your style, there’s probably one for you, too.

That night, he rolled up the business plan (he loves rolling things up) and placed it on top of the Donut Hat Box, got into bed, and told me, “I’m so excited I finally get to run a company,” before drifting off to dream, I’m sure, about running a Donut Hat business.

Then came the hard part, genetically. I hate selling. I like writing plans. I like making things. I like marketing, but making a direct ask creeps me out. I told him that he would need to sell.

The next morning, he and Maya tried to sell from our stoop. Maya is not afraid of selling. She marched outside and started yelling “Get your Donut Hats! Ten BUCKS!” at the top of her lungs. But it’s winter, and it was 7:15am, and the only people out were harried ones scuttling to work. That wouldn’t do.

If we were going to sell to kids (via their parents), we would need to go to the playground, which we did this morning in a light 8:30am snow. We brought all five Donut Hats in a bag, and laid them out on a built-in table/chess board. There were only two other parent-kid combos there, and neither looked particularly in the mood to spend, so Dev half-heartedly and Maya full-throatedly yelled, “Get your Donut Hats! Ten BUCKS!” No one heard. It’s a big playground.

But then, a dad and his son came in. They headed to the soccer field and started kicking. I told Dev to go introduce himself and ask if they’d like to buy a hat. He said he was nervous. He didn’t want to go. And just then, providentially, the son kicked the ball over the fence. An opening. We grabbed it and threw it back over. They owed us one. I told him to go again, he asked me to come with him (I was as nervous as him, selling to strangers just minding their own business), we walked around the fence, and Devin, Donut Hat in hand, asked, “Would you like to buy a Donut Hat?”

The dad asked to take a look. He put it on his bald head. And he realized immediately that it wasn’t going to work. “How does the Donut Hat stay on my head? I’d imagine it would fall off if I moved. No thank you.”

HUGE. That was the first of what will be many, many No’s in Dev’s life, and he handled it gracefully. I told him it was awesome. We’d gotten our first customer feedback. I pulled out Apple Notes, titled it “Donut Hat feedback,” and told him we should write down all of the feedback we get so that we could go home and improve the product.

We wrote down:

  1. Could fall off head.

While we were out on our soccer field sales call, the main playground started filling up, and playing there, right by our table, were a dad about my age and a son about Maya’s. Easy targets. Dev introduced himself, and asked, “Would you like a Donut Hat?” Father and son looked intrigued. They thought they’d just hit the Free Donut Hat Lottery. I whispered to Dev to tell them that he was selling them, which he did, and to which the dad responded, “How much?”

$10.

$10 is too expensive.

Dev came back at $5. The son, meanwhile, sensing a negotiation, deployed the Crazy Guy strategy. He threw out $6. Then he threw out $45. Then $15. Then $6 again. We waited, giving him the leash to walk himself right into an empty Piggy Bank.

But remember the market insight. The kids want the Donut Hats. The parents have all the money. And the dad wasn’t having it. While the son perused the goods, the dad negotiated for sport. Dev even offered our worst-made, pure Blue Tape Donut Hat at $3. But you could see in the dad’s eyes, he wasn’t going to buy. Finally, they walked away.

  2. Too expensive.

MORE parents had come in, though, and a lot of them. One dad made the mistake of putting his daughter in the swing. He was a sitting duck. So Dev asked me to come with him to the swings.

“Hi I’m Devin, would you like to buy a Donut Hat?” He held out the goods, teasing.

“Oh that’s cool,” the dad, hooded by his sweatshirt but hatless, said. “But I’m not a big hat guy.”

Dad, write it down.

  3. Not everyone’s a big hat guy.

But (and if you’re not a parent, you wouldn’t realize this), once your kid is in the swing, your kid is in the swing. You’re not going anywhere. You’re trapped. Dev just hung around while I pushed Maya on the swing. We weren’t going anywhere either.

Dev told him we had more colors. I threw in that it might look good under his hood. Dev kind of looked at the guy as only a little kid with big dreams can, and… he cracked.

“I don’t have $5, but would you do it for $2?”

Dev looked at me. I shrugged. It was his call.

“OK you can have it for $2.”

Dev let him pick his Donut Hat. Wouldn’t you know it, he picked the Blue Tape. Dev handed it over. The dad handed him two crumpled $1’s.

First sale! Dev was ecstatic.

Ghiblified to keep my kid’s face off the internet.

And he was hooked on selling, whatever the price.

He was in luck. Social proof is a hell of a drug.

The mom pushing her daughter on the swing next to Maya’s saw the dad buy his daughter a Donut Hat and she wanted to buy hers one, too. She looked in her cell phone case / wallet, realized she had $1, and offered it to Dev. Take it or leave it, in nicer words.

He said yes. Two sales. Three dollars. We were HUMMING.

Something changed in Dev. He stopped being nervous and started to love the chase.

What about the dad in the swing on the other side? “Would you like to buy a Donut Hat?”

Sorry, I don’t have any cash.

  4. No cash

Recall, however, that it was a big playground, and while we were selling, it was filling up even as the snow picked up. Dev went out into the big playground by himself, Donut Hat in hand, and started approaching people.

Little man out there hustling

There were so many people spread over such a large playground that when Dev came back next, having sold zero more Donut Hats, instead of feedback, he started dictating sales tips.

  5. We need a map of the playground to see where we can sell to people.

Got it. He went back out. More No’s. Whatever. A no is the first step on the way to yes. He came back.

  6. Come when it’s not too cold.

Speaking of which, Maya was getting cold, and she wanted to go home.

And as we walked home, Maya and I on the sidewalk, Dev on air, he kept dictating, asking me to add notes to what he’d started calling “The Setback List.”

At the University of Virginia, Ian Stevenson spent decades documenting cases of children seemingly inhabited by old souls, including:

Starting at age 2, James Leininger began having vivid nightmares about a plane crash, eventually providing specific details about being a WWII pilot named James Huston Jr. who flew off the USS Natoma Bay and was shot down over Iwo Jima. His parents, initially skeptical, verified the details through military records and located Huston’s surviving sister.

Shanti Devi was a 4-year-old in India in the 1930s when she began describing a previous life as a woman named Lugdi Devi who died in childbirth in a town she’d never visited. When researchers took her there, she reportedly recognized her “former husband” and navigated to her “previous home.”

At age 5, Muskogee, Oklahoman Ryan Hammons told his mom “I used to be somebody else.” He remembered being a Hollywood extra and talent agent, and when presented with a number of images, identified Marty Martyn in a still from the film Night After Night. Ryan remembered over fifty specific, later-confirmed details about Martyn’s life, and complained that he “Didn’t see why God would let you get to be 61 and then make you come back as a baby.” Martyn’s death certificate said he was 59 when he died, but when Stevenson’s successor, Jim Tucker, researched further, he found the death certificate was wrong. Martyn was actually born in 1903, making him 61 at death, just as Ryan claimed.

All of which is to say, maybe it shouldn’t be so surprising that Dev dropped so much wisdom in items seven through sixteen on The Setback List, but it still blew me away to hear so much wisdom out of the mouth of such a little man.

These are the lessons that Dev McCormick learned about sales on a Saturday morning on the playground in Brooklyn, dictated in random spurts over the next hour:

  7. Always have a backup plan in case things don’t work.

  8. Even if it doesn’t look fun, you should still do it.

  9. You shouldn’t go if it kind of looks like a storm.

  10. You need to remember everything people say because what if you don’t remember that you have a setback list?

  11. People are nicer than you expect.

  12. If someone looks like a bad guy you shouldn’t go to them.

  13. You shouldn’t be nervous because it’s most likely they’ll say no if you’re nervous.

  14. Maybe the importantest one: you definitely shouldn’t give up, because what if people say hehehe to you, that’s not a really good feeling.

  15. Grownups shouldn’t come with you to help because it’s most likely they’ll buy it from only a kid.

  16. The only way that people will buy it is if you’re being nice to them.

I don’t know man, I know I’m his dad, but that’s pretty good.

I think that one day this kid is actually going to build his world. $999,999,999,997 to go.


Postscript: Dev just woke up from a nap. I called him Mr. Sales Man. He said, “I love it when you call me Mr. Sales Man.” Hold on to your wallets.


Have a great weekend, and a long one if you’re reading this in the US.

Thanks for reading,

Packy

Weekly Dose of Optimism #176

2026-01-17 21:36:30

Hey friends 👋 ,

Happy Saturday and welcome to another Weekend Edition of the Weekly Dose.

Sending today because yesterday, we published an in-depth primer on the state of robotics from Evan Beard’s perspective as our first co-written essay for not boring world. A world full of robots doing all of the work that we don’t want to do, and a lot of stuff that we can’t even imagine, is as optimistic as it gets.

Grab a big cup of coffee, cozy up on the couch, and read about MedGemma & MRIs, Claude, Tesla’s new lithium refinery, Conceivable, nuclear and hotels on the moon, and the a16z pod.

Let’s get to it.


Today’s Weekly Dose is brought to you by… Framer

Framer gives designers superpowers.

Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in minutes. Whether you’re starting with a template or a blank canvas, Framer gives you total creative control with no coding required. Add animations, localize with one click, and collaborate in real-time with your whole team. You can even A/B test and track clicks with built-in analytics.

Ready to build a site that looks hand-coded without hiring a developer?

Launch a site for free on Framer dot com. Use NOTBORING for a free month on Framer Pro.


(1) Google Releases MedGemma 1.5 for Medical Imaging

Daniel Golden and Fereshteh Mahvar, Google Research

In this house, we stan Google DeepMind, and Google DeepMind keeps rewarding us.

This week, the company rolled out its MedGemma 1.5 model for healthcare developers. Per CEO Sundar Pichai, “The new 4B model enables developers to build applications that natively interpret full 3D scans (CTs, MRIs) with high efficiency - a first, we believe, for an open medical generalist model. MedGemma 1.5 also pairs well with MedASR, our speech-to-text model fine-tuned for highly accurate medical dictation.”

What it means is that it will be easier for developers to build excellent software that makes it easier for medical professionals to make us all healthier. The challenge with infrastructure like this, though, is that it’s not tangible. It’s hard to know what that means until developers actually go out and build with it.

So in the meantime, Shopify CEO Tobi Lutke gave us all a little preview with the HTML-based MRI scan viewer that he vibecoded with Claude to get around old, locked-down software in order to access information on … himself.

To be clear, this is front-end development. But combine better, easy-to-build frontends with better models to interpret the scans themselves and it’s going to get a whole lot easier and less frustrating to understand, and treat, our bodies.

(2) Anthropic Releases Claude Cowork

Speaking of vibe coding things with Claude, I’m going to go ahead and do a Weekly Dose first: I’m just recommending that this weekend, you take some time to play with Claude. This release is just an excuse to talk about it. I haven’t used Cowork yet, I don’t use Claude Code, and I’ve found that I haven’t needed to, because there’s so much you can do in just good ol’ fashioned Claude.

Claude Code is getting a lot of hype as people came back from holiday downtime having had time to really play with it for the first time. The hype is deserved. It’s so much fun.

After seeing a tweet about a speedreader, I just… built a speedreader for my a16z essay.

It feels like the first time that the thing we’ve been saying for a long time, that the gap between idea and outcome will disappear, is coming true. Personally, I feel bottlenecked on ideas. So what I’ve started doing is dumping my essays in and asking Claude what we can build on top of them. For yesterday’s piece on the Small Step v. Giant Leap approach to robotics, it made me a game.

I wanted to embed that game in my essay, but Substack doesn’t allow embeds, so I asked it to make me an editor that uses embeds, which it did in a prompt.

This stuff isn’t fully production-ready in the hands of a novice like me, but that’s probably only because I haven’t spent enough time on it. For example, if you want to turn your side project into an actual mobile app, you can now do just that in Replit, after they announced a way to publish your apps to the app store right from Replit. Need to play around with that this weekend.

I don’t know how useful any of this stuff is yet or will be for me, but it’s a ton of fun.

(3) Tesla’s Lithium Refinery is Now Operational

There’s vertical integration, and then there’s VERTICAL INTEGRATION.

Electric vehicles need batteries, and batteries need lithium. We have plenty of lithium in the US, but it’s bottlenecked on refining. So that wild man just went out and built his own lithium refinery outside of Corpus Christi, Texas. The refinery went from groundbreaking to live in three years versus the typical decade, and is now the largest lithium refinery in the United States.

One of the challenges with refining here is that traditional processes are so environmentally unfriendly that it’s hard to get them approved. Other countries with less strict regulations don’t have that problem. But the point of technology is to do more with less, and better.

Traditional lithium refining often involves acid roasting that produces hazardous byproducts like sodium sulfate. Tesla's process creates a benign co-product, essentially sand and limestone that can be used in construction materials. The facility processes raw spodumene ore directly into battery-grade lithium hydroxide on site, bypassing intermediate refining steps commonly used elsewhere in the industry.

Musk has long called lithium refining “a license to print money,” because while lithium ore is relatively abundant, the refining capacity to turn it into battery-grade lithium hydroxide was a major bottleneck in the electric vehicle industry.

Now, Tesla is both solving its own supply chain problem and turning on the money printer by bringing that capacity onshore. Vertical integration, baby! If the bottleneck is refining, build the refinery.

(4) The Startup Making Human Embryos With AI-Assisted Robots

Sara Frier for Bloomberg

One in six couples struggle to conceive naturally, and as a result, I have a lot of friends who have gone through IVF to have a baby. The process is a miracle, and there is a lot of room for improvement. According to the CDC, IVF produces live births only 37.5% of the time.

To improve IVF, Conceivable Life Sciences has built AURA, a 17-foot robotic assembly line that can perform every step of IVF embryo creation outside the human body, from separating sperm to fertilizing eggs to flash-freezing embryos. The New York-based startup (we love to see it 🗽) has helped bring 19 babies into the world so far, including one born in September to Acme Capital partner Aike Ho and her wife, who participated in the clinical trial after Ho wrote Conceivable’s first check.

“People should be as excited about this as they were about the moon landing,” Ho told Bloomberg.

The pitch is straightforward: IVF succeeds only 37.5% of the time partly because it depends on individual embryologists who vary in training, technique, and how much coffee they’ve had. AURA makes 30 micro-adjustments per second with thousandth-of-a-millimeter precision, uses AI adapted from Baidu’s computer vision to find eggs in follicular fluid, and can plunge embryos into liquid nitrogen so fast it’s invisible to the human eye—reducing ice crystal formation tenfold.

The founders’ vision is to create “superlabs” where a single embryologist and two technicians oversee thousands of embryo creations daily, dramatically expanding access while cutting costs. They’ve raised $70 million and plan to launch in the US this year.

Sadly, one founder, Joshua Abram, died of cancer weeks before the first American baby was born. Before he died, he told his partner he wanted to see Conceivable responsible for 65% of all IVF births.

Circle of life.

(5) A Big Week for Lunar Development

DOE, NASA, and GRU

a rendering

The DOE and NASA are teaming up to develop a nuclear reactor on the moon by 2030.

Per the DOE, “DOE and NASA anticipate deploying a fission surface power system capable of producing safe, efficient, and plentiful electrical power that will be able to operate for years without the need to refuel. The deployment of a lunar surface reactor will enable future sustained lunar missions by providing continuous and abundant power, regardless of sunlight or temperature.”

If this had happened a couple years ago, I would have been both amazed and bummed that we’re getting new reactors on the moon before we get them in the US. Now, we’re getting both. Meta signed an agreement for 6.6 GW to power its data centers by 2035. What a time to be alive. The only question now is who’s going to build it. Seems like it might be good practice for Radiant on the way to Mars reactors.

And speaking of sci-fi projects on the moon, a startup called GRU is starting to accept reservations for its moon hotel, which is scheduled to open in 2032. Slots cost anywhere from $250k to $1 million, so start saving.

BONUS: I Got to Interview Marc Andreessen and Ben Horowitz

There aren’t a lot of people who can out-optimism me. Marc and Ben are two of them.

After my deep dive on the firm, I had the chance to interview Marc and Ben together this week. We go wide, but I particularly enjoyed talking about how and why new technology companies can grow to become 10x (or 1,000x) larger than the incumbents they replace.

Enjoy!


Have a great rest of your weekend y’all.

Thanks to Aman and Sehaj for all the help. We’ll be back in your inbox next week.

Thanks for reading,

Packy

Many Small Steps for Robots, One Giant Leap for Mankind

2026-01-16 21:59:23

Welcome to the 1,179 newly Not Boring people who have joined us since our last essay! Join 256,826 smart, curious folks by subscribing here:

Subscribe now


Hi friends 👋 ,

Happy Thursday! I am thrilled to bring you not boring world’s first co-written essay (cossay? need something here) with my friend Evan Beard, the co-founder and CEO of Standard Bots.

Evan is the perfect person to kick this off.

I have known Evan for ~20 years, which is crazy. We went to Duke together, worked at the one legitimate startup on campus together (which still exists!), and even won a Lehman Brothers Case Competition together (which won us the opportunity to interview at the investment bank right before it went under).

After school, Evan went right into tech. He was in an early YC cohort, back when those were small. He started a company with Ashton Kutcher. I was interested in tech from the outside and always loved talking to Evan, so we’d catch up at reunions and then go our separate ways. In September 2023, a mutual acquaintance emailed me saying “there’s a company you should have on your radar, Standard Bots,” and I looked it up, and lo and behold, it was founded by Evan Beard!

Since reconnecting, Evan has become one of a small handful of people I ask dumb robot questions to. He’s testified in front of Congress on robotics. Last year he spoke at Nvidia’s GTC on the main stage. He was even featured doing robotic data collection in A24’s movie Babygirl alongside Nicole Kidman! Evan knows robots.

And the questions are very dumb! Robotics as a category has scared me. As valuations have soared, I’ve mostly avoided writing about or investing in robots, because I haven’t felt confident enough that I know what I’m talking about to take a stand.

Which is the whole point of these co-written essays!

Evan has dedicated his career to a specific belief about how to build a robotics company. He’s making a different bet than the more hyped companies in the space1, one that is like a Russian Doll with a supermodel in the middle - not very sexy on the outside but sexier and sexier the more layers you remove until you get to the center and you’re like, “damn.”

So throw on a little Robot Rock…

And let’s get to it.


Today’s Not Boring is brought to you by… Framer

Framer gives designers superpowers.

Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in minutes. Whether you’re starting with a template or a blank canvas, Framer gives you total creative control with no coding required. Add animations, localize with one click, and collaborate in real-time with your whole team. You can even A/B test and track clicks with built-in analytics.

Framer is making the first month of cossays free so you can see what we’re all about. Say thanks to Framer by building yourself a little online world without hiring a developer.

Launch for free at Framer dot com. Use code NOTBORING for a free month on Framer Pro.

Just Publish it With Framer


Many Small Steps for Robots, One Giant Leap for Mankind

A Co-Written Essay With Evan Beard

There is a belief in my industry that the value in robotics will be unlocked in giant leaps.

Meaning: robots are not useful today, but throw enough GPUs, models, data, and PhDs at the problem, and you’ll cross some threshold on the other side of which you will meet robots that can walk into any room and do whatever they’re told.

In terms of both dollars and IQ points, this is the predominant view. I call it the Giant Leap view.

The Giant Leap view is sexy. It holds the promise of a totally unbounded market – labor today is a ~$25 trillion market, constrained by the cost and unreliability of humans; if robots become cheap, general, and autonomous, the argument goes that you get Jevons Paradox for labor - available to whichever team of geniuses in a garage produces the big breakthrough first. This is the type of innovation that Silicon Valley loves. Brilliant minds love opportunities where success is just a brilliant idea away.

The progress made by people who hold these beliefs has been exciting to watch. Online, you can find videos of robots walking, backflipping, dancing, unpacking groceries, cooking, folding laundry, doing dishes. This is Jetsons stuff. Robotic victory appears, at last, to be a short extension of the trend lines away. On the other side lies fortune, strength, and abundance.

As a result, companies building within this view, whether they’re making models or full robots, have raised the majority of the billions of dollars in venture funding that have gone towards robotics in the past few years. That does not include the cash that Tesla has invested from its own balance sheet into its humanoid, Optimus.

To be clear, the progress they’ve made is real. VLAs (vision-language-action models), diffusion policies, cross-embodiment learning, sim-to-real transfer. All of these advancements have meaningfully expanded what robots can do in controlled settings. In robotics labs around the world, robots are folding clothes, making coffee, doing the dishes, and so much more. Anyone pretending otherwise is either not paying attention or not serious.

It’s only once you start deploying robots outside of the lab that something else becomes obvious: robotics progress is not gated by a single breakthrough. There is no single fundamental innovation that will suddenly automate the world.

We will eventually automate the world. But my thesis is that progress will happen by climbing the gradient of variability.

Variability is the range of tasks, environments, and edge cases a robot must handle. Aerospace and self-driving use Operational Design Domain (ODD) to formally specify the conditions under which a system can operate. Expanding the ODD is how autonomy matures. It’s even more complex for robotics.

Robotic variables include:

  • What you’re handling: identical parts vs. thousands of different SKUs.

  • Where you’re working: a climate-controlled warehouse with perfect lighting vs. a construction site with dust, uneven terrain, weather, and changing layouts.

  • How complex the task is: a single repetitive motion vs. multi-step assembly requiring tool changes.

  • Who’s around: operating in a caged-off cell vs. collaborating alongside workers in shared space.

  • How clear the instructions are: executing pre-programmed routines vs. interpreting natural language commands like “clean this up” or “help me with this.”

  • What happens when things go wrong: stopping vs. detecting errors, diagnosing causes, and autonomously recovering.

Multiply these variables together and the range can be immense2. This is because the spectrum of real, human jobs is extremely complex. A quick litmus test is that a single human can’t just do every human job.
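To make the multiplication concrete, here is a toy illustration. The dimension names and level counts below are invented for illustration (real deployments vary continuously, so these coarse buckets are wildly conservative), but they show how a handful of variability dimensions compound into millions of distinct operating conditions:

```python
# Toy sketch: variability dimensions multiply into a huge space of
# distinct operating conditions a robot might face. All counts are
# illustrative assumptions, not measurements.
from math import prod

variability_levels = {
    "objects_handled": 1000,   # one identical part ... thousands of SKUs
    "environment": 10,         # pristine warehouse ... muddy job site
    "task_complexity": 10,     # single motion ... multi-step assembly
    "human_proximity": 3,      # caged cell ... shared workspace
    "instruction_clarity": 5,  # fixed routine ... "clean this up"
    "error_handling": 4,       # halt on error ... diagnose and recover
}

distinct_conditions = prod(variability_levels.values())
print(f"{distinct_conditions:,} distinct operating conditions")  # → 6,000,000
```

Even with these crude buckets, the product runs into the millions; add a few more levels per dimension and it explodes.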

Most real jobs are not fully repetitive, but they’re also not fully open-ended. They have structure, constraints, and inevitable variation, much to the chagrin of Frederick Winslow Taylor, Henry Ford, and leagues of industrialists since. Different parts, slightly bent boxes, inconsistent lighting, worn fixtures, humans nearby doing unpredictable things.

It’s the same for robots.

At one end, you have motion replay. The robot moves from Point A to Point B the same way, every time. No intelligence required. This is how the vast majority of industrial robots work today. You save a position, then another, then another, and the robot traces that path forever. It’s like “record Macro” in Excel. It works beautifully as long as nothing ever changes.
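The teach-and-replay loop described above can be sketched in a few lines. The function names here (`teach`, `replay`, `move_to`) are hypothetical stand-ins for a vendor’s motion API, not any real product’s interface:

```python
# Minimal sketch of classical "motion replay": record waypoints once,
# then trace the exact same path forever. No perception, no adaptation.
recorded_waypoints = []  # filled once during "teach" mode

def teach(position):
    """Save a position, like hitting Record Macro in Excel."""
    recorded_waypoints.append(position)

def replay(move_to):
    """Trace the saved path, identically, every cycle."""
    for waypoint in recorded_waypoints:
        move_to(waypoint)  # blind playback: the world better not change

# Teach three positions once...
teach((0.0, 0.0, 0.5))
teach((0.3, 0.1, 0.2))
teach((0.3, 0.1, 0.5))

# ...then replay forever. Here we "move" by recording visited positions.
visited = []
replay(visited.append)
```

Everything the robot will ever do is fixed at teach time, which is exactly why it works beautifully right up until a box arrives slightly bent.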

At the other extreme, you have something like a McDonald’s worker. Different station every three minutes. Burger, then fries, then register, then cleaning. Totally different tasks, unpredictable sequences, human interaction, chaotic environment. The dream of general physical intelligence is a robot that can walk into this environment and just... work.

At one extreme is automation. At the other is autonomy. Between those extremes lies almost all economically valuable work.

Between automation and a McDonald’s robot that can fully replace a worker is an incredible number of jobs.

It’s my belief that these small steps across this spectrum are where we’ll unlock major economic value today.

That’s what my company Standard Bots is betting on.

Standard Bots makes AI-native, vertically integrated robots. We’re currently focused on customers within manufacturing and logistics. We’ve built a full stack solution for customers to train robot AI models, from data collection, review, and annotation, to model training and deployment. And we make these tools accessible enough for the average manufacturing worker to use.

In a market full of moonshots, our strategy might look conservative. Even tens of millions of dollars in revenue is nothing compared to the ultimate, multi-trillion dollar, abundance-inducing prize that lies in the future.

It isn’t.

We are building a real business today because we believe that it’s the most likely to get us to that abundance-inducing end state first.

Two Strategies: Giant Leap or Small Step

If you believe there’s a massive set of economically valuable tasks waiting on the far side of some threshold, then the optimal strategy is to straight-shot it. Lock your team in the lab. Scale models. Scale compute. Don’t get distracted by deployments that might slow you down. Leap.

If you believe, like we do, that there is a continuous spectrum of economically valuable jobs, many of which robots can do today, then the best thing to do is to get your robots in the field early and get to work.

Each deployment teaches you where you are on the gradient. Success shows you what’s stable, failure shows you where the model breaks, and both tell you exactly what to work on fixing next. You iterate. You take small steps.

It’s widely accepted in leading LLM labs that data is king. The optimal data strategy is to climb this spectrum one use case at a time. You don’t need “more” data. What you really want is diversity3, on-policyness4, and curriculum5. Climbing the spectrum iteratively is the strategy that best optimizes for these three dimensions of good data for any given capital budget. Real deployments on your bots get you on-policyness (nothing else can), the market intelligently curates a curriculum, and both provide rich and economically relevant diversity.

We’ve learned this lesson over years of deployments.

Whenever robotics evolves to incorporate another aspect of the job spectrum between automation and autonomy, it also unlocks another set of jobs, another set of customers, another chunk of the market. One small step at a time.

Take screwdriving. It is much easier to use end-to-end AI to find a screw or bolt than to try to put everything just so in a preplanned, fixed position. Search and feedback are cheap for learning systems. Our robot can move the screwdriver around until it feels that it’s in the right place. It wiggles the screwdriver a little. It feels when it drops into the slot. If it slips, it adjusts. And when our robots figure out how to drive a screw, it unlocks a host of jobs that involve screwdriving. Then we start doing those and learn the specifics of each of them, too.

We learn on the job and get better with time. Many of these robots are imperfect, but they’re still useful. There’s no magic threshold you have to cross before robots become useful.

That’s not our hypothesis. It’s what the market is telling us.

Industrial robotics is already a large, proven market. FANUC, the world’s leading manufacturer of robotic arms, does on the order of $6 billion in annual revenue. ABB’s robotics division did another $2.4 billion in 2024. Universal Robots, which was acquired by Teradyne in 2015, generates hundreds of millions per year.

These systems work, even though they work in very narrow ways. Companies spend weeks integrating them. Teams hire specialists to program brittle motion sequences. When a task changes, those same specialists come back to reprogram the whole thing, for a fee. The robots repeat the same motions endlessly, and they only work as long as the environment stays exactly the same.

Fanuc UI. At the Fanuc company Christmas Party, they let the most drunk engineer choose the menu item labels. It might have gone something like this: “Where’s Carl? On the floor? Carl! Make a noise and give me a symbol and that will be the first menu item. OK someone kick him to get the second one - Pos.Reg[Reg? Perfect!

Despite all of that friction, customers keep buying these robots! That’s the market talking. Even limited, inflexible automation creates enough value that entire industries have grown around it. The low-variability left side of the spectrum already supports billions of dollars of business.

In machine learning, progress rarely comes from a single leap. It comes from gradient ascent: making small, consistent improvements guided by feedback from the environment.

That’s how we think about robotics too.

Our plan is not to leap from lab demonstrations to generally intelligent robots. Instead, our plan is to climb the gradient of real-world variability and capture more of the spectrum.

It’s working so far. We have 300+ robots deployed at customers including NASA, Lockheed Martin, and Verizon. We ended the year on a $24 million revenue run rate, with hundreds of millions of dollars in customer LOIs and qualified sales pipeline. The kink you see in this curve is due to the fact that our robots keep getting better and easier to use the more they (and we) learn.

Standard Bots

Customers are happy because we’re already meaningfully easier to deploy and cheaper to adapt than classical automation, and while we don’t have generally intelligent AI models that can automate any task, we can already automate jobs with a level of variability that no other robotics company can.

We expect our robots to do everything one day, too. We just believe that:

  • “Everything” is made up of a continuous spectrum of small “somethings.”

  • Each of those “somethings,” whether it’s packing a bent cardboard box or checking a cow’s temperature through its anus (a real use case), requires use-case-specific data to be done well.

  • By deploying our robots in the field today, we get paid to collect the data we need to improve our models. That includes the most valuable data of all: intervention data when a robot fails.

  • When we find a new edge case, we can iterate on our entire system of variable robots. This is because we are fully vertically integrated, including data collection, the models, the firmware, and the physical arm.

Our plan is to get paid to eat the spectrum. In the process, we plan to collect data no one else can. We’ll then use this data, which is tailor-made for our robots, to iterate on the whole system quickly enough to get to general economic usefulness before the giant leap, straight-shot approaches do.

There’s a lot of context behind our bet. The first and most important thing you need to understand is that robotics is bottlenecked on data.

Robotics is Bottlenecked on Data

Robots already work very well autonomously wherever we have a lot of good data. For example, cutting and replanting pieces of plants to clone them as seen in the video below.

This is unintuitive, because it’s almost the opposite of the challenge Large Language Models (LLMs) seem to face. What average AI users like you and me experience is that the models improve and LLMs automatically know more things.

But LLMs had it relatively easy. The entire internet existed as a pre-built training corpus. There is so much more information on the internet than you could ever imagine. Any question you might ask an LLM, the internet has probably asked and answered. The hard part was building architectures that could learn from it all.

Robotics has the opposite problem.

The data needed for robotics

The architectures largely exist. We’ve seen real breakthroughs in robot learning over the last few years as key ideas from large language models get applied to physical systems. For example, Toyota Research Institute’s Diffusion Policy shows that treating robot control policies as generative models can dramatically improve how quickly robots learn dexterous manipulation skills. The magic of this approach is that it took the architecture largely used to generate images, in which the model learns to remove noise in an iterative manner like in the GIF below…

…and instead applied it to generate the path of the robot’s gripper. An idea that works in one domain is applied to another and BOOM — the outcome works pretty well.

The advancements that have ushered in this new era are small ones adding up. For example, take what researchers call “action chunking,” in which the model predicts a sequence of points to move through in the future instead of just one. That helps performance and smoothness a lot.
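The two ideas above, diffusion-style generation and action chunking, can be combined in a toy loop. Everything here is illustrative: this is not Diffusion Policy’s actual architecture, and the “denoiser” is a stand-in that nudges toward a known target rather than a learned model:

```python
# Toy sketch of (1) diffusion-style generation: start from pure noise
# and iteratively refine toward a trajectory, and (2) action chunking:
# predict a short sequence of future waypoints, not a single action.
import random

CHUNK = 8  # predict 8 future gripper positions at once

def denoise_step(chunk, target, strength=0.3):
    # Stand-in for the learned denoiser: move each point a fraction
    # of the way toward the target trajectory.
    return [x + strength * (t - x) for x, t in zip(chunk, target)]

def generate_action_chunk(target, steps=50):
    random.seed(0)
    chunk = [random.gauss(0, 1) for _ in range(CHUNK)]  # pure noise
    for _ in range(steps):  # iterative refinement, as in image diffusion
        chunk = denoise_step(chunk, target)
    return chunk

target_path = [i / CHUNK for i in range(CHUNK)]  # a smooth ramp
chunk = generate_action_chunk(target_path)
error = max(abs(a - t) for a, t in zip(chunk, target_path))
print(f"max deviation after denoising: {error:.2e}")
```

The point of the sketch is the shape of the computation: noise in, iteratively refined chunk of future motion out, which smooths behavior compared to predicting one step at a time.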

Vision-language-action models such as RT-2 combine web-scale semantic understanding with robotic data to translate high-level instructions into physical actions. Systems like ALOHA Unleashed demonstrate that transformer-based imitation learning can enable real robots to handle complex, multi-stage tasks — including tying shoelaces and sorting objects — by watching demonstrations. And emerging diffusion-based foundation models like RDT-1B show that training on large, diverse robotic datasets enables zero-shot generalization and few-shot learning across embodiments.

But those papers also all found something similar. For those remarkable innovations to happen with any reasonable success rate, you need data on your specific robot, doing your specific task, in your specific environment.

If you train a robot to fold shirts and then ask it to fold a shirt, it works. Put the shirts in different environments, on different tables, in different lighting. It still works. The model has learned to generalize within the distribution of “shirt folding.” But then try asking it to hang a jacket or stack towels or to do anything meaningfully different from shirt folding. It fails. It’s not dumb. It’s just never seen someone do those things.

The magic of these models is how they interpolate to handle unseen variability, but only within the training set.

Robots can interpolate within their training distribution. They struggle outside of it. This is true for LLMs, too. It’s just that their training data sets are so large that there isn’t much out of distribution anymore.

This is unlikely to be solved with more compute or better algorithms. It’s a fundamental characteristic of how these models work. They need examples of the thing you want them to do.

So how do you collect example data?

One answer would be to create it in the lab. Come up with all of the edge cases you can think of and throw them at your robots. As John Carmack warned, however, “reality has a surprising amount of detail.” The real world chuckles at your researchers’ edge cases and sends even edgier ones.

Another answer would be to just film videos of people doing all of the things that you’d want the robots to do. Research has shown signs of life here.

For example, Skild has shown that a robot can learn how to do several common household tasks from video and only a single hour of robot data per task.

This is exciting progress, and on the back of it, just this week, Skild announced a $1.4 billion Softbank-led Series C at a valuation of over $14 billion.

Ultimately, general video may lift the starting capabilities of a model. But it still doesn’t remove the need for the on-robot data for the final policy, even for simple household pick-and-place tasks (and industrial tasks will need much more data). For one thing, robots need data in 3D, including torques and forces, and the data needs to occur through time. They almost need to feel the movements. Videos don’t have this data and text certainly doesn’t.

It’s kind of like how reading lots of books makes it easier to write a good book, but watching lots of golf videos doesn’t do much for actually playing golf.

If I want to learn to golf, I need to actually get out there and use a body to swing clubs. Similarly, the best way to collect data is by using hardware. And for that, there are a number of different collection methods: leader-follower arms, handheld devices with sensors on them, gloves and wearables, VR and teleoperation, and direct manipulation, as in, literally moving the arm and grabbing an object.

All of these approaches can work. Each has pluses and minuses. We use a mix of many of them.

But let’s continue with the golf analogy. Practicing with any human body is better than watching videos, but practicing with my body is the best. That’s the body I’m actually going to play with.

In the same way, even data from other robots isn’t as valuable as data from your own hardware. If your data and your hardware aren’t aligned, you need 100x or 1,000x more data. If I wanted to work on my robot, but I didn’t have my robot, I could use a similar robot to observe the activity. But for it to be effective, I’d need a lot of similar robots.

This is one of the many challenges for general robotics models.

What the Giant Leap Actually Requires

The most obvious counterargument to everything I’ve argued so far and everything I will argue throughout is that while the Giant Leap models haven’t unlocked real world usefulness yet, they undoubtedly will as the labs continue to make breakthroughs. It’s not fun to be short magic!

For the amount of money invested in the space, though, there’s surprisingly little good public thinking about what the Giant Leap approach actually entails.

What is the bet or set of bets they’re making, and how should we reason about them?

The approach we’re taking at Standard Bots is hard. It’s often slow and frustrating. And from the outside, there’s a huge risk that we do all of this work and then, one day, we wake up and one of the big labs has just… cracked it. But I feel confident in our approach because I don’t think the Giant Leap views will produce meaningful breakthroughs, and I want to explain why.

For sure, you’ll continue to see increasingly magical pitches on robot Twitter:

“We can train on YouTube videos. No robot data needed!”

“We can generate the missing data in simulation!”

“We’re building a world model. Zero-shot robotics is inevitable!”

And some of these are even directionally right. There is real, actual progress behind a lot of the buzz. But there’s a ton of noise, too.

Again, I am biased here. But I’m also putting my time and money behind that bias. So here’s how I think about what’s actually going on — what Google, Physical Intelligence (Pi or π) and Skild are actually up to in the labs in pursuit of a genuine leap — from (don’t say it, don’t say it) first principles.

A Model Takes Its First Steps

A lot of the modern robotics-AI wave started the same way: pretrain perception, learn actions from scratch. Meaning, teach the robot how to perceive and let it learn by perceiving.

Take Toyota Research Institute’s Diffusion Policy. The vision encoder (the part that turns pixels into something the model can use) is pretrained on internet-scale images, but the action model begins basically empty.

Starting “empty” is… not ideal, because the model doesn’t yet have what researchers call perception–action grounding. It hasn’t learned the tight relationship between what it sees and what it does:

  • “Moving left” in camera space should mean moving left in the real world.

  • A two-finger gripper can clamp a cup by the handle or rim, but not by poking the center like a toddler trying to eat soup with a fork.

  • Contact is physics, not simple geometry. The world changes when you interact with it.

This grounding stage is basically the toddler phase: I see the world, I flail at the world, sometimes I succeed, mostly I bonk myself.

But most serious teams can collect enough robot data to establish basic grounding in days. So far, so good.

How to Train a Robot

Say you want to train a robot to do a task. Here is what you need to do:

1. Get data

2. Train model

3. Eval and continuous improvement

Get data: You can teleoperate in the lab, the real world, simulation, or learn from internet or generated videos. Each option has its own tradeoffs, and robotics companies spend a lot of time thinking about and experimenting with these tradeoffs.

Train model: Are you going to build it from scratch or rely on a pre-trained model to bootstrap? Training from scratch is easier if you are building a smallish model. Large models typically have entire training recipes and pipelines that involve pre-training, mid-training and post-training phases. Pre-training teaches the robot the basics about how the world works (general physics, motion, lighting). Post-training is about giving tasks specific superpowers.

In LLM terms, pre-training teaches a model how words are related in the training distribution. It learns their latent representations. Post-training (InstructGPT, RLHF, Codex) gets a model ready for deployment use cases like chat agents or coding. Post-training can also make the robot faster, cheaper, and more accurate by tightening up the trajectories with RL. A lot of the RL buzz you hear about in the LLM world actually began with robotic task-specific policies.

All sounds great, but you still need the data. The big question is: how do you get the data?

Video Dreams (and Their Limits)

Giant leapers have two big salvation pitches for how they’ll get the data they need.

The first is existing whole-internet video.

Models clearly learn something from video: object permanence, rough geometry, latent physical structure, the ability to hallucinate the backsides of objects they’ve never seen (which is either very cool or deeply unsettling, depending on your relationship with reality).

So why not slurp YouTube, learn the world, and then just... do robotics?

Think about this first. What can humans learn from watching a video? And what can’t they?

Videos are useful for many things:

  • Trajectories and sequencing: Video is great at showing the arc of motion and the order of steps in an action.

  • Affordances and goals: You watch someone turn a knob and you learn that knobs want to be turned. Switches want to be pressed.

  • Timing and rhythm: Timing matters for things like locomotion, assembly, or anything that’s basically choreography. Video carries timing.

If you’re learning to grasp, video can show you: reach → descend → close fingers → lift.

And it can show tool use: the tilt of a cup, the swing of a hammer, the way people “cheat” by sliding things instead of lifting them.

But there are whole categories of data that video simply doesn’t carry: mass, force, compliance, friction, stiffness, contact dynamics.

Humans can sometimes infer some of this visually, but only because we’re leaning on a lifetime of embodied experience. Robots don’t have that prior.

In experiments with over 2,200 participants, researchers Michael Kardas and Ed O’Brien examined what happened when people watched instructional videos to learn physical skills like moonwalking, juggling, and dart throwing. The results were striking:

As people watched more videos, their confidence climbed sharply. Meanwhile, their actual performance barely moved, or even got worse.

That’s the embodiment gap. Video tells you what to do, but not what it feels like to do it. You can watch someone moonwalk all day. You still won’t feel how the floor grips your shoe, how much pressure transfers to your toes, how to modulate tension without faceplanting.

And robots have it worse than humans. At least we have priors. Robots have sensors and math.

I’m going to get a little spicy here.

If you’re not paying very close attention, it looks like feeding robots internet videos is working.

Watch Skild’s “learning by watching” demos closely. Only the simplest tasks use “one hour of human data.” More impressive demos are nestled in the middle of the video without that label. And the videos aren’t random ones pulled from YouTube either. They’re carefully collected first-person recordings from head-mounted cameras. Is doing all of this that much easier than just using the robots?

In short, there are three big reasons video isn’t enough:

  1. Coverage: internet video doesn’t cover the weird, constrained, adversarial reality of industrial environments.

  2. Data efficiency: learning from video alone typically takes orders of magnitude more data than learning from robot-collected data, because the mapping from pixels to action is underconstrained without embodied sensing.

  3. Missing forces: two surfaces can look identical and behave completely differently. Video can’t disambiguate friction. The robot finds out the fun way.

Then, you still have the translation problem: human hands aren’t robot grippers, kinematics differ, scale differs, compliance differs, systematic error shows up unless you train with the exact end effector you’ll deploy.

Which is why many of these companies end up quietly going back to teleoperation.

Human video is useful for pretraining. But weakly grounded data has a real cost: you can either do the hard work of actually climbing the hill, or you can wander sideways for a long time and call it progress.

OK, so the videos on YouTube aren’t that useful. What about simulation?

Where World Models Do and Don’t Work

Simulation and RL are the other big salvation pitch. If robots can self-play in a simulated environment that mimics real-world physics, the trained policy should transfer to real robots in the real world. And to be fair: sim is really good at certain things right now, especially rigid-body dynamics.

NVIDIA has pushed this hard for locomotion. Disney’s work (featured in Jensen’s GTC 2025 keynote) shows the magic you get when you combine good physics with good control: humanoids that walk, flip, and recover (beautifully) in a simulator.

That success comes down to two ingredients:

  1. The physics is tractable: Simulators can handle rigid bodies + contacts + gravity well. You can randomize terrain, generate obstacles, and train robust walking policies without touching the real world.

  2. The objective is specifiable: RL needs a reward.

For walking, the rewards are straightforward: distance traveled, stability, energy use, speed.

For animation, it’s even cleaner: match a reference motion without falling.

So locomotion is the happy place because three things line up for machine learning. You can model the physics, measure the goals, and reset for free when things go wrong.
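To make “the objective is specifiable” concrete, here is a hedged sketch of the kind of scalar reward that makes walking RL-friendly. The terms and weights are invented for illustration; no lab’s actual reward is this simple:

```python
# Toy locomotion reward: every term is cheaply measurable inside a
# simulator, and they sum to a single scalar that RL can optimize.
# Weights are illustrative assumptions.
def locomotion_reward(distance_m, fell_over, energy_j, speed_mps):
    reward = 0.0
    reward += 1.0 * distance_m            # distance traveled
    reward += 0.5 * speed_mps             # forward speed
    reward -= 0.01 * energy_j             # energy use penalty
    reward -= 10.0 if fell_over else 0.0  # stability penalty
    return reward

# A decent rollout: 5 m traveled, no fall, 100 J spent, 1.2 m/s.
r = locomotion_reward(5.0, False, 100.0, 1.2)
```

Now try writing the equivalent function for “make a sandwich”: there is no short list of simulator-measurable terms that captures it, which is exactly the brittleness the next section describes.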

Then, people try to extrapolate from walking → factory work, and everything breaks.

When you do real things in the real world, physics gets messy. Real tasks involve soft materials, deformed packaging, fluids, cable routing, wear-dependent friction, tight tolerances, and contact-dominated outcomes.

You can simulate parts of this, but doing it broadly and accurately becomes a massive hand-crafted effort. And you still won’t match the edge cases you see in production. Again, you might as well do the real thing.

With real tasks, rewards get brittle or unwritable. “Make a sandwich” is not scalar. Even “place this part down” is full of constraints: don’t tear, don’t spill, align, recover if it slips, don’t jam, don’t scratch the finish, don’t do the thing that worked in sim but breaks the machine in real life.

Waymo is a great example. Waymo uses a ton of simulation today, but real-world data collection from humans driving cars came long before the world model. Do you remember how long human Google workers drove those silly looking cars around collecting data before Waymo ever took its first autonomous ride? As the company wrote in a recent blog post, “There is simply no substitute for this volume of real-world fully autonomous experience — no amount of simulation, manually driven data collection, or operations with a test driver can replicate the spectrum of situations and reactions the Waymo Driver encounters when it’s fully in charge.”

You need to collect that data in the real world, and then you can replay and amplify it in sim. That’s how you get the last few “nines.”

Also, resets. What it takes to start over.

In sim, resets are free. In reality, resets take work. Walking is the rare exception because the reset is “stand back up,” but if you want a robot to learn sandwich making through trial and error, someone has to: clean up, restock, reset, try again, and repeat forever, slowly losing their will to live. Cleaning up after a half-baked bot is not why you signed up to be a robotics researcher.

So simulation is valuable, but it’s still not a replacement for real data collection. The highest-leverage use of sim is after deployment: when real robots surface real failure modes, and sim is used to reproduce and multiply those rare cases.

Which brings us back to first principles.

So What’s the Best Way to Train a Robot? (Like You’d Train a Human)

Think about how you train a human.

For simple tasks, text works. For slightly harder ones, a checklist helps. But most real factory work isn’t that simple. You need alignment, timing, judgment, recovery, and the ability to handle “that thing that happens sometimes.”

At that point, demonstration wins. It’s the most information-dense way to transfer intent. This is why people in the trades become apprentices.

It’s the same for robots. And it’s okay if a robot takes minutes or even hours to learn a task, as long as the learning signal is high quality.

Training time doesn’t need to be zero.

Which leads to what we’ve been saying: the giant leap isn’t, and can’t be, architectural.

The Giant Leap, the point at which the model has suddenly seen enough and can do anything, isn’t real. It is enticing and sexy (maybe in part it’s enticing and sexy because it’s always just out of reach). But it doesn’t exist. Even the smartest humans need training and direction. Terence Tao would need years to become an expert welder.

We think the answer is simply committing to taking the time to collect the right data. Robot-specific, task-specific, high-fidelity data, even if it means fewer flashy internet demos.

Three things follow from this:

  1. You will always need robot-specific data.

  2. The highest-quality way to convey a task is to show it (teleop or direct manipulation).

  3. Once you have strong domain-specific data, low-quality vision data from unrelated tasks doesn’t help much.

LLMs feel magical because they interpolate across the full distribution of human text. Robots don’t have that luxury.

To be clear, my contention is not that video, simulation, and better models aren’t useful. They clearly are. My contention is that with them, you still need to collect the right data.

In order to do a specific job — say, truck loading and unloading, or biological sample preparation, or cow temperature checking — you need data on that specific job, and it’s best if that data is generated on your own hardware.

And in order to do any job, which is the promise of general physical intelligence, you need to be able to do a lot of specific jobs, which means that you’ll still need data on each of those specific jobs, or at least jobs that look so similar that you can reliably generalize.

The upshot is that while it may be possible to build generally capable robots with all of this data, all of this data is wayyyy harder to collect than people realize, and it is also way harder to generalize outside of the data you do have (in fact, it’s not yet proven possible).

Which creates a chicken & egg problem:

  • You can’t really test a use case without the data (and a specific type of data)

  • You can’t get the data in a high-fidelity way without doing the use case

That’s the main reason that we think robotics progresses in small steps, not giant leaps. You need to collect all of the data in either case!

And if you believe that, then the next move is obvious…

Get Paid to Collect Data

So how do you gather that data? Do you make thousands of robots — robotic arms, in our case — and build sets where they can practice?

If you think that robots need to get past a certain threshold of capability to be economically useful, that might be the best approach. But we’ve already disproved that thesis. FANUC, ABB, Universal Robots, and others generate billions in revenue for basic automation.

Customers are used to old robots that require a ton of expensive implementation work and are brutal to program. We realized that we could compete with them and win.

Standard Bots Core

We make better arms and automate for a wider range of use cases than current deterministic software. And we do it for less money.

When we deploy a robot for a new customer, it takes a few easy steps and a few hours. And it’s getting easier and easier. We get paid for hardware and software upfront. Our gross profit covers our acquisition costs within 60 days.

This all means that we’re able to scale our data collection efforts almost as fast as we can make the robots, and it’s all funded by our customers. We’re happy for obvious reasons. They’re happy for obvious reasons. And the plan is that our robots keep learning in the field and we both keep getting happier.

Crucially, when there’s an issue, we teleoperate into the environment, error correct, and most importantly, learn from the issue. (Oh, and we have exclusive rights to the issued patent for using AR headsets to collect training data for robot AI models).

This is the secret sauce.

Standard Bots’ data collection engine

Earlier this week, a16z American Dynamism investor Oliver Hsu wrote an essay on the very real challenges that occur when going from the lab to the real world.

In papers and in the lab, a robot that succeeds 95% of the time sounds amazing. In a factory running a task 1,000 times a day, that means 50 failures per day. That’s I Love Lucy on the chocolate line performance. Even 98% means 20 stoppages a day. 99% means 10. You would fire any employee who messed up that much in a week.

According to Oliver, production environments require something closer to 99.9% reliability — one intervention per day, or even every few days — which is the difference between having to hire someone to fix your robot’s mistakes and just letting it work.
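That arithmetic is worth making concrete. Here’s a minimal sketch, using the run count and reliability rates quoted above:

```python
# Back-of-envelope: expected interventions per day for a task that
# runs 1,000 times a day, at a given per-run success rate.
def failures_per_day(success_rate: float, runs_per_day: int = 1000) -> float:
    return (1.0 - success_rate) * runs_per_day

for rate in (0.95, 0.98, 0.99, 0.999):
    print(f"{rate:.1%} reliable -> ~{failures_per_day(rate):.0f} interventions/day")
```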

He’s right. 95% just isn’t good enough… unless you approach the problem like we do and improve over time. In which case, 95% is a great place to start!

95% is plenty good enough for Day 1 if you’re ready to teleoperate in and fix the 5% issues, which we do. We can ship robots to do things that deterministic, automated robots can’t. It allows us to continue to eat the spectrum by taking on use cases that we can mostly handle, and to treat human interventions as both a service and a data collection mechanism. The robot handles what it can, humans step in at hard cases, and those corrections flow back into training.

This has worked incredibly well. By learning from each of the real-world challenges that make up that 5%, we can bring failure down vanishingly close to 0% within weeks of deployment.

That’s because intervention data at the moment of failure is the best data. We’ve learned that collecting data right around where the thing failed allows us to efficiently pick up all of the edge cases, and this is often the minimum training data we need. We concentrate at the boundary where autonomy breaks instead of just collecting data on the 95% of stuff we do flawlessly over and over again, and we learn where reality actually disagrees with our model. And because our robots are the ones generating the failures — not humans — we learn where our robots fail.

Learning where robots fail is important. There’s a mismatch when you train a robot on human demonstrations: the human operates in their own state distribution, but the robot will drift into states the human never showed it. Better to let the robot fail and act quickly to resolve the failure.

With every customer, we learn about a use case, train our models, get ongoing data, learn as they fail, and improve our models.

At a certain point, a given use case is largely solved. We have eaten that piece of the spectrum. We can move on to the next, handle a little more variability.

So far, it seems that each use case we solve, along with the resultant improvements we make to our software, firmware, hardware, and models, makes it easier to eat adjacent pieces of the spectrum.

One common misconception about our approach is that it implies starting from scratch with every use case. That’s not how it works. Remember the screwdriver.

We don’t think of our system as a collection of isolated task-specific models. We think of it as a shared foundation of physical skills — perception, grasping, force control, sequencing, etc. — that compounds across deployments. For each new use case, we post-train on top of an ever-improving foundation.

With each use case that gets solved, those foundational capabilities get better. That makes adjacent tasks easier. Over time, the same core skills (screwdriving, for example) show up repeatedly in different combinations and those shared skills compound.

Ideally, the whole thing spins faster and faster. And it’s starting to seem like this is what will happen.

This is how the Standard Bots machine works. We get paid to learn. We get better, faster because we are forced to interact with reality.

And customers teach us about use cases that we never would have guessed existed.

A Forced Aside on Cow Temperatures

I was telling Packy (and he made me include this) about one of our new salespeople’s first days. He’d received a lead from a farm that wanted to use our robots to take their cows’ temperatures. Unusual temperature is the earliest, cheapest signal that something is wrong with a cow.

Do you know how to take a cow’s temperature?

What you do is, you take a thermometer and you stick it in the cow’s anus. You do this once a week, once a month, or somewhere in between, depending on the stage of the cow’s life. There are 90 million cows in the United States. Based on the cycle time math (it takes about one minute per cow), that’s a thousand-robot opportunity.

Two things about that opportunity:

  1. If you’d said, “Evan, if your life depended on it, give me a job that you could automate in the dairy industry,” I would have said milking cows. I’d never think of automating sticking a thermometer in a cow’s anus. That’s a job you learn about from customers.

  2. This is not a job for a humanoid. Surprisingly few jobs are, when you think about it.

One reason it’s not a job for a humanoid is that a humanoid would be overkill. You’re paying for general capabilities (and legs) when what you need is one thing done over and over and over (in a stationary position). Another reason is that a humanoid would be underkill for that specific job: it wouldn’t be set up for the specific job, either physically or in the model.

What you’d need is a flexible gripper, for one thing. But really, it all comes down to entry speed. You can’t just jam it in. The cows don’t like that. And how do you figure out the right entry speed? Every cow is different. Turns out, you need a camera trained on the cow’s face and a model trained on hundreds of cows’ facial reactions; the cow’s face tells you when to slow down (and this behavior should emerge automatically during end-to-end training without any hand-crafted prior). The model needs to be able to understand what to do with that specific sensor data instantly in order to tweak the arm’s speed and angle of attack quickly enough for the cow to let it in. And so on and so forth.

Another reason it’s not a job for a humanoid is that they’re going to be pretty expensive. Elon himself predicted that by 2040, there will be 10 billion humanoids, and they’ll cost $20-25,000 each. About half that cost comes from the legs, which are probably a liability on the farm. Lots of shit in which to slip.

Here’s one more huge reason it’s not a job for a humanoid. Humanoids don’t exist today.

Other than some toy demonstrations, humanoids just do not exist in the field today. Generally intelligent robots certainly do not exist in the field today.


Sidebox: What about humanoids? (defining here as legged bipeds)

The promise of humanoids is captivating to many investors (especially Parkway Venture Capital). Understandably so. “The world was created for the human API.” It sounds so nice, and it’s true to some extent.

But that dream collides uncomfortably with reality. As I was recently quoted saying in the WSJ Tesla Optimus Story: “With a humanoid, if you cut the power, it’s inherently unstable so it can fall on someone.” And “for a factory, a warehouse or agriculture, legs are often inferior to wheels.”

I’m incentivized to say that, so don’t take it from me. In the same story, the author writes that, “inside the company [Tesla], some manufacturing engineers said they questioned whether Optimus would actually be useful in factories. While the bot proved capable at monotonous tasks like sorting objects, the former engineers said they thought most factory jobs are better off being done by robots with shapes designed for the specific task.” (That’s what we do with our modular design, by the way. Thanks, Tesla engineers.)

The Tesla engineers aren’t alone. People who run factories and care more about their business than demos don’t see the ROI, which is why you see companies like Figure shifting their focus to the home. This is the dream. Robots in the home is Rosie. But to put a robot in your home, with your kids, they need to be really reliable.

For humanoids to really be useful in the home, we’d like to coin the HomeAlone Eval.

The humanoid needs to survive in a house with a team of feisty eight-year-olds trying to trip, flip, and slip it — all without injuring them. It’s even hard for a human to remain stable when your kids jump on your back going up the stairs. And if you fall on them, at least you’re soft and fleshy. Robot, not so much. This humanoid eval is much harder to train for with RL, but we’ll need to see it passed before we have one in our house.6

There are interesting approaches to the home that align with our thesis. Matic and now Neo are getting paid to learn inside of your house, from different angles. Matic is starting with a simple and valuable use case (vacuuming and mopping), learning the home, and working up from there. Neo is teleoperating its robots while it collects data.

But autonomous humanoids do not, in any practical sense, exist.


We can wait for humanoids to exist. Or we can be out here learning from customers about all of the things that robots might be able to do as we chew off more and more variability, and then getting paid to learn and perfect those use cases. All while our one-day competitors are stuck in the lab.

We are running as fast as we can with that headstart. A big reason we’re able to run so fast is that we’re vertically integrated.

Why Vertically Integrate?

There is a big reason that deployment accelerates learning that has nothing to do with models and everything to do with hardware.

Recall that data is 100-1,000x more efficient when aligned with its hardware. The more of the hardware you control, the more true this statement is.

Most labs use cheap Chinese arms from companies like Unitree. This makes short-term sense. Those arms have gotten really good and they’re very cheap, a couple thousand bucks.

At Standard Bots, we’re betting on vertical integration.

We make an industrial-grade arm that’s designed for end-to-end AI control. In particular, it has torque sensing in the joints. Because when you’re doing AI, you want to be able to record how you interact with the world, and then train the model on that interaction so the model can recreate it.

Which is why we care about torque sensing and torque actuation: so the motor can precisely control how hard the joint pushes, and so the robot can feel how the environment pushes back through the joint. If you don’t have that, then you’re kind of stuck with AI for pick and place or folding.

We’ve created a unique way to do the torque sensing. Everyone else does strain gauges and current-based torque sensing. We have a method to directly measure torque through the bending of the metal, and our way is more accurate and more repairable, easier to manufacture, just better all around. Really, really great torque sensing.

To do that, we make practically everything ourselves. We even make our own motor controller to commutate the motor. The things we don’t make are bearings and chips. Everything else, for the most part, is going to be made by us. So that’s really deep vertical integration.

Standard Bots

It’s necessary, though. Old robots don’t work with new models.

Old robots were designed for motion replay: you send a robot a 30-second trajectory and the robot executes it. AI requires 100Hz real-time control: you’re sending a new command 100 times per second based on what the model sees in real time. A lot of the existing robot APIs don’t even have real-time torque control. I can tell my robot to go somewhere, but I’m just giving it a position. If it hits something, it’s going to hit it at max force. It doesn’t have the precise control I need for it to do a good job.

This doesn’t work for a robot that thinks for itself in real time. So we wrote our own firmware for real-time torque control with motor commutation at 60 kHz (60,000 times per second).
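To make the contrast concrete, here’s a minimal sketch of a closed-loop controller that sends a fresh torque command every tick, rather than replaying a precomputed trajectory. All of the names here (`read_sensors`, `set_joint_torques`, the 7-joint arm) are illustrative assumptions, not Standard Bots’ actual API:

```python
import time

CONTROL_HZ = 100        # a new command 100 times per second
DT = 1.0 / CONTROL_HZ   # 10 ms per tick

def policy(observation):
    # Placeholder for a trained model that maps sensor readings
    # (joint positions, measured torques, camera frames) to torques.
    return [0.0] * 7    # one torque target per joint

def control_loop(robot, steps: int):
    for _ in range(steps):
        start = time.monotonic()
        obs = robot.read_sensors()        # includes the torques the joints feel
        torques = policy(obs)             # recomputed every tick from fresh sensors
        robot.set_joint_torques(torques)  # torque control, not position replay
        # sleep off whatever remains of the 10 ms tick
        time.sleep(max(0.0, DT - (time.monotonic() - start)))
```

A real controller runs on firmware with hard timing guarantees; Python’s `time.sleep` is only a stand-in for the scheduling a real-time system provides.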

This firmware makes our robots smoother, more precise, and more responsive, and also easier and more fun to use. This is really important because it means that we can physically handle a lot more use cases. This, in turn, means that hardware won’t limit our ability to eat more of the spectrum.

Between putting these arms that can physically handle a lot of use cases out in the field and our own data collection for pre-training (a handheld device7, our actual arms8, and increasingly, AR/VR9), we’re vertically integrated on the data side, too.

This data feeds our pre-training mix. Think of it as the first industrial foundation model for robotics pre-training. More vertical integration. As discussed, this model can be smaller, add core skills over time, and can be deployed with post training on a specific task.

A mix of hundreds of factories, which are our customers. Payloads up to 66 pounds, not this three-pound bullshit. Industrial environments, industrial equipment. An industrial-grade arm that’s IP-rated and made for 24/7 operation, paired with an industrial-grade model.

Of course, we’re thinking about everything a person could do in a factory warehouse and putting that into our pre-training mix, just like everyone else. The difference is, our robots then quickly go out and learn everything a person actually does in a factory.

This is a fundamental bet we’re making.

Some companies are betting that they can just go create some model, around which an ecosystem will develop, and they’ll then bring their product to market.

We think that the market is too nascent for that.

The tight integration between hardware, data, and model is so crucial while we are still learning how to do new use cases that we believe vertical integration is the only way to do it right.

This is how new technology markets develop. In Packy’s Vertical Integrators, Part II, Carter Williams, who worked at Boeing in Phantom Works, explained that the need for vertical versus horizontal innovation moves in cycles. “Markets go vertical to innovate product, horizontal to reduce cost and scale. Back and forth over a 40-50 year cycle.”

In robotics, we are still very much in the “innovate product” phase of the cycle.

One day, once we’ve collected data on use cases that represent the majority of the value in the industrial economy (and beyond), the industry will probably modularize to reduce cost and scale. Hopefully, we won’t have to make everything ourselves for the rest of time. We still have to today.

The other thing about vertical integration is that controlling everything helps us adapt fast. Every day, we learn something new about how customers operate, what their needs are, how different types of factories run. The ability to learn something, fix, and adjust is invaluable.

For example, we realized in the field that models actually have to understand the state of external equipment, not just the thing the robot is working on. Often there’s an operator that’s using a foot pedal at a machine. We need to collect data on the foot pedal — like whether it is pressed or released — and the model needs to be able to understand these states. From there, we need to make a generic interface that works for all types of external equipment.

And there’s the other thing we’ve discussed as crucial to our business: it’s really important to be able to collect data on failure. So we have a whole loop on that too.

That’s it. That’s the plan.

Robotics is bottlenecked on data. We get paid to collect data by building better robotic arms for industrial use cases. These use cases are broader and larger than we anticipated. For each one, we deploy, learn, find edge cases, intervene, collect the data, and improve. This is necessary at the model level for a specific task, and it’s also necessary at the level of the system. And the only way we are able to do this quickly (or at all) is because we are vertically integrated.

Rinse, robot, repeat.

This is how we eat the spectrum, one small step at a time.

Small Steps, Small Models, Big Value

In The Final Offshoring, Jacob Rintamaki’s excellent recent paper on robotics, he writes, “one framing of general-purpose robotics that I haven’t seen much of isn’t that we now have a robot that can do anything, but rather we have a robot which can quickly, cheaply, and easily be made to do one thing very well.”

That is our plan. To do one thing very well, for every industrial use case, one thing at a time. Eventually, we will reach across the spectrum of use cases.

“The strategy for these companies then,” Rintamaki continues, “given that reducing payback time may be All You Need, is to deploy into large enterprise customers as aggressively as possible to start building moats that their larger video/world-model focused competitors still find difficult to match.”

Yes.

Here, I want to reintroduce the concept of variability to discuss the nature of our moats.

There is the data moat that I’ve written about at length here. We are getting paid to collect the exact data we need to make our specific robots better.

What we do with that data, for the particular slice of variability that makes up each use case, may be equally important but is less obvious.

We think that general models will not lead to a giant leap without all of the right robot data. We also believe that smaller models outperform larger ones for many use cases on a number of critical dimensions like cost and speed while accounting for the majority of value available to robots.

Solving everything in a large general model is tempting: we’ve trained LLMs already. Leverage the trillion-dollar machine!

LLMs have strong semantic structure. Word embeddings put similar words close together, and (weirdly, beautifully) semantic distance in language often mirrors semantic distance in tasks.

So we get the appealing idea: use an LLM backbone, condition behavior on short text labels, and store many skills in one model. “Pick.” “Place.” “Stack.” “Insert.” Same model, many skills. That’s the VLA (vision-language-action) dream.

But there’s a reason diffusion took off first in robotics.

LLMs are autoregressive: predict next action once → feed it back in → compounding error if wrong. The errors matter hugely when you’re controlling physical systems.

On the other hand, diffusion is iterative: denoise progressively → a single bad step doesn’t doom the rollout.

But there are challenges to making this work well at the architectural level.

LLMs were designed for tokens: discrete symbols, or words. Robots operate on continuous values: positions, velocities, torques. Numbers like 17.4343 instead of words like “seventeen.”

With LLMs, every digit becomes a token. Precision explodes token count, which means latency explodes too. Your robot gets slow, and a slow robot isn’t a particularly useful robot.
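A toy calculation shows the blow-up. Assume (purely for illustration) a naive tokenizer that spends one token per character, and a 7-joint arm commanded at 100Hz:

```python
# Hypothetical token cost of writing continuous actions out as text.
def tokens_for_action(joints: int = 7, chars_per_value: int = 7) -> int:
    # "17.4343" is 7 characters -> ~7 tokens per joint value,
    # plus a separator token between values.
    return joints * chars_per_value + (joints - 1)

per_step = tokens_for_action()   # 55 tokens for a single action
per_second = per_step * 100      # at 100Hz control
print(per_step, per_second)      # versus 7 raw floats from a continuous action head
```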

This is the core tension:

  • Robotics success so far has leaned heavily on diffusion-style control

  • LLMs are autoregressive and token-based

  • Physical actions don’t map cleanly to tokens

Pi has bridged this gap: they’ve found representations of robot action that play nicely with language-model infrastructure. That’s real, hard, and impressive work.

But here’s another spicy take.

We’re not working with language-model infrastructure because it’s the perfect architecture for robotics. It’s because we, as a species, have poured trillions of dollars and countless engineering hours into building LLM infrastructure. It’s incredibly tempting to reuse that machine.

So, despite its imperfections, taking an LLM and sticking on an action head to predict robot motions (all together known as a VLA) is the best way for us to train the base models that learn many skills from demonstrations across many different customers and tasks.

There’s also the “fast and slow” split: use LLMs as supervisory systems that watch, reason, and call skills, rather than directly controlling motors. Figure’s approach is a good example of that pattern.

The problem with general models is that they have to solve for everything. They are predicated on the belief that if you throw enough compute and data into a single huge model, you will be able to make a robot that can do almost anything. They solve for max variability: you can walk into a completely unseen environment with unseen tools, unseen equipment or fridge or stove, and you can handle all of that perfectly. And the objects are breakable. That’s a tremendous amount of variability to account for in one model, so the model needs to be huge.

Huge models mean models that are more expensive (at training and inference), harder to debug, and slower, which you can see in humanoid performance today.

BUT, and here is a key insight: parameter count scales with variability, not with value.

We think that most of the market can be unlocked by a surprisingly small number of parameters.

Let’s use the example of self-driving again. Apple published a paper on its self-driving work reporting that it used just 6 million parameters for its decision-making and planning policy. Elon said recently that Tesla uses a “shockingly small” number of parameters for their cars.

This is orders of magnitude smaller than the hundreds of billions or trillions of parameters we’re used to hearing about for LLMs, because LLMs need to be ready to answer almost any question imaginable at any time, and because each individual LLM user isn’t worth enough to fine-tune custom models for.

It’s the opposite case with robotics if you’re solving for a specific task with constrained variability. The model will need to know how to do a few things very well. Given the cost of deployment and the economic value created, it is absolutely worth fine-tuning a model for that use case.

That means we can distill our larger base model into much smaller models. Which we are. We ship really small models sometimes. They’re low-parameter models that can solve, across the spectrum, a really useful number of things. And we can concentrate the robot’s limited compute on narrower problems, which leads to better performance.

We use small amounts of the right data to feed small models that are cheaper, faster, and can be better-performing than the large, general ones that they come from when fine-tuned on the right data.
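As a toy sketch of that distillation idea (entirely illustrative; the “teacher,” “student,” and training loop below are invented for this example), a tiny student model can be fit to imitate a larger teacher policy on exactly the input distribution a use case sees:

```python
import random

def teacher(x: float) -> float:
    # Stand-in for a big general model's action output.
    return 2.0 * x + 1.0

def distill(samples, lr=0.05, epochs=500):
    # Tiny student: one weight and one bias, trained by SGD to
    # match the teacher's outputs on the sampled inputs.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x in samples:
            err = (w * x + b) - teacher(x)  # imitate the teacher, not raw labels
            w -= lr * err * x
            b -= lr * err
    return w, b

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(50)]  # the use case's narrow input range
w, b = distill(xs)
print(w, b)  # converges toward the teacher's (2.0, 1.0) on this distribution
```

The point of the sketch: the student never needs the teacher’s capacity, only enough parameters to cover the variability it will actually see.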

Of course, the better, cheaper, and faster we are for each specific use case, the more broadly we will deploy, the faster we will learn, and the sooner we can eat more of the spectrum.

At least, that’s my bet.

Is Standard Bots Bitter Lesson Pilled?

My bet isn’t exactly the trendiest. It’s not fun betting against the magic of emergent capabilities.

In one of our conversations, Packy asked me if our approach was Bitter Lesson-pilled, referring to Rich Sutton’s 2019 observation that “the biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.”

He pointed me to the 2024 Ben Thompson article, Elon Dreams and Bitter Lessons, in which Thompson argues that starting with a bold “dream,” then engineering costs down to enable scale, beats cautious, incremental approaches and creates new markets.

Waymo looks like it’s in the lead now, Thompson argues, but its approach — LiDAR for precise depth, cameras for visual context, radar for robustness in adverse conditions, HD maps, a data pipeline, etc. — is bound to plateau, because it’s more expensive and its dependencies make it less likely to achieve full autonomy.

Tesla FSD, on the other hand, is betting on end-to-end autonomy via vision (cheap cameras) and scaled compute. Use cameras-only at inference to keep vehicles cheap, harvest driving data from millions of Teslas to train large neural networks, distill expensive sensors and mapping used during training into a lightweight runtime, and compound safety through volume until Level 5 full autonomy everywhere becomes viable rather than geofenced Level 4. This is the Bitter Lesson-pilled approach.

I had to think about my answer for a second. I hadn’t thought about it and I wasn’t entirely sure.

It’s definitely a fair question. Could someone come in and just create a super, super intelligence that only needs to be communicated with through a super simple voice interface? I mean, theoretically, obviously, yes. Right?

Wrong. The truth is that you need the data to win.

You can’t be Bitter Lessoned by someone that doesn’t have the training data.

Tesla was only in the position to Bitter Lesson everyone because they had the distribution to collect the data in the first place. The iterative approach, Tesla’s Master Plan, is what enabled the Bitter Lesson approach in the first place.

The iterative, customer-funded approach, the one Tesla took and the one we are taking, is how you get the data that lets you benefit from scale. Thompson himself wrote, “While the Bitter Lesson is predicated on there being an ever-increasing amount of compute, which reliably solves once-intractable problems, one of the lessons of LLMs is that you also need an ever-increasing amount of data.”

The Bitter Lesson in Robotics is that leveraging real-world data is ultimately the most effective, and by a large margin.

You can’t Bitter Lesson your way to victory if you don’t have the training data, and you can’t get the training data without deployment. What Sutton would really suggest, I think, is to get as many robots in the field as possible and then let them learn in a way that is interactive, continual, and self-improving.

We’re not there yet. We still have humans in the loop.

But the first step to all of this, and perhaps our company’s best hedge against the Bitter Lesson, is getting robots deployed to as many customers as we can, as quickly as we can.

What If I’m Wrong?

It’s hard to work in robotics for too long without getting humbled. This is an industry that has, for decades, fallen short on its promise.

So how confident am I that I’m right and basically the rest of the industry is wrong?

I mean, decently confident, confident enough to spend my most productive years building this company. I’m confident that our approach is differentiated and logically consistent. But fully confident? No.

It’s worth saying explicitly that this isn’t a case of us versus the rest of robotics. Some of the people I respect most in the field are taking the opposite bet.

Lachy Groom, the CEO of Pi, is a close friend and led the Series A in Standard Bots. He’s building with a foundation-model view of robotics, and I think this work is important. We talk about this stuff all the time and we both want the same thing: to see tons of robots out in the world, no matter whose approach gets us there fastest.

If the foundation-model view wins out, though, it’s hard to see how any one company will be able to winner-take-all the model market on compute and algorithms alone. There are now at least four frontier LLM labs with basically the same model capabilities. On-demand intelligence, miraculously, is becoming a commodity.

If you were going to run away with this market, my bet is that you’d have to do it with data and with customer relationships, kind of like Cursor for robotics.

Let’s say, for argument’s sake, that Google, Skild, and Physical Intelligence all solve general physical intelligence. In that case, I think whichever company actually owns the customer relationship has the power. That company can just plug in the lowest bidder on the model side.

This is related to the bet that Packy argued China is making in The Electric Slide: if I’m the company that can build robots and sell them to customers, and particularly if customers are already getting value from Standard Bots, then I want the models to commoditize. I want them to be as powerful as possible. Commoditize your complements.

It’s good for us, at the end of the day. Getting a product in the field, selling to customers, and iterating is both a competitive advantage and a hedge. All I care about is the advantage, though.

We believe, like so many of the people working in our industry do, that there will be no bigger improvement to human flourishing than successfully putting robots to work in the real economy. We are on the precipice of labor on-tap, powered by electrons and intelligence. That means cheaper, better goods. It means freeing humans from the work they don’t enjoy. Being a farmer is more fun when you don’t have to take the cow’s temperature yourself. It means that the gap between thought and thing practically disappears. And those are just the first-order effects. We can’t know ahead of time what fascinating things people will dream up for our abundant robotics labor force to do; all we can know is that they will be things people find useful.

We all believe this. We all want to produce a giant leap for mankind. The open question is how we get from here to there.

I believe the way to build the world’s largest robotics company is to eat the industry one use case at a time. And I’m so hungry I could eat a cow.


Big thanks to Evan for sharing his knowledge, to the Standard Bots team for input, and to Badal for the cover art and graphics.


That’s all for today.

For not boring world paid members, I played around with Claude to produce some extra goodies. We made an annotated bibliography with links to papers that support (or push back on) Evan’s argument from both the robotics and business strategy side and a GAME. Members can also ask Evan questions on today’s cossay.

Join us in not boring world for all of this and more behind the paywall.

Thanks for reading,

Packy


Weekly Dose of Optimism #175

2026-01-10 21:57:13

Hey friends 👋 ,

Happy SATURDAY and welcome to a special weekend edition of the Weekly Dose. We’re sending today because we sent our deep dive on a16z yesterday, but let me know what you think about the weekend send. Might be a good way to spend a Saturday morning, coffee in hand, optimism in veins.

One thing I’m personally optimistic about right now is not boring world. It was a great launch week: #1 rising in business, #2 new bestseller overall, top 60 in business. And as we speak, I’m working on drafts of the first two cossays. We might also begin sharing more of the stories that we left on the Weekly Dose cutting room floor. Join us.


For now, we have a new food pyramid, Chinese Peptides, Boltz Lab, HALEU $$$, and Rintamaki on Robots.

Let’s get to it.


Today’s Weekly Dose is brought to you by… Framer

Framer gives designers superpowers.

Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in minutes. Whether you’re starting with a template or a blank canvas, Framer gives you total creative control with no coding required. Add animations, localize with one click, and collaborate in real-time with your whole team. You can even A/B test and track clicks with built-in analytics.

Ready to build a site that looks hand-coded without hiring a developer?

Launch a site for free on Framer.com. Use NOTBORING for a free month on Framer Pro.


(1) There’s a New Food Pyramid in Town

Joe Gebbia and the National Design Studio

We all grew up looking at the food pyramid. Eat lots of carbs and few fats. Gospel.

Then, in the early 2000s, we learned the food pyramid was probably a pyramid scheme thanks to Gary Taubes’ New York Times piece What if It’s All Been a Big, Fat Lie?

The story goes something like this. We used to eat good, normal diets: meat, eggs, butter, vegetables, the stuff humans had eaten for millennia. Then heart disease rates started climbing in mid-century America, and in the 1950s an ambitious University of Minnesota physiologist named Ancel Keys became convinced that dietary fat was the culprit. His Seven Countries Study showed a correlation between saturated fat consumption and heart disease, though critics later pointed out he cherry-picked his countries and ignored confounding variables. Keys was brilliant and ferociously combative, and he won the institutional war, capturing the American Heart Association and marginalizing skeptics.

Meanwhile, in the 1960s, the sugar industry was quietly paying Harvard scientists to publish research blaming fat instead of sugar, and the Harvard scientists were happy to oblige their sugar daddies.

In 1977, Senator George McGovern's committee translated the fat hypothesis into the first federal dietary guidelines, drafted by staffers with no scientific background over the objections of researchers who said the evidence wasn't there yet. Once the guidelines existed, they created their own gravity: the USDA built food guides around them, the NIH funded research that assumed they were correct, and food companies reformulated everything to be low-fat (adding sugar to compensate). The Food Pyramid arrived in 1992, telling Americans to eat 6-11 servings of bread and pasta while using fats “sparingly.”

Americans dutifully complied, fat consumption dropped, carb consumption soared, and obesity rates tripled. It took until the 2010s for the scientific consensus to finally crack. But since then, frustratingly, even though we’ve known better, the Food Pyramid didn’t change, for what I’m assuming are reasons of institutional sclerosis.

And then, this week, the National Design Studio just … published … a new one.

It’s beautiful, and it’s more correct, which means that kids and grownups and whoever else are probably just a little more likely to eat the right stuff. And I mean check out the website, which is a government website.

What makes me most optimistic though is that there was this obviously dumb and wrong thing that everyone agreed was dumb and wrong but did nothing about, and we now, as a nation, have done something about it. I bet there are a lot of other obviously dumb and wrong things we can fix while we’re at it.

(2) Not For Human Consumption - Grey Market Peptides

From Vectorculture on Substack


And if eating real food doesn’t do the trick…

Chinese peptides are so hot right now. Apparently everyone in SF is doing them. For the uninitiated (me), a peptide is a short chain of amino acids (the building blocks of proteins) that can act as a signaling molecule in the body to suppress your appetite or repair your tissues. A Chinese peptide is a peptide that you can get access to cheaper or extra-legally.

This essay goes into much more depth on Chinese peptides, regular GLP-1s, super GLP-1s (Gen3 GLP-1/GIP/glucagon agonists…), and who should get to decide how much we enhance our own bodies.

A few things stand out…

  1. If you thought GLP-1s were miracle drugs, wait for Gen3 GLP-1/GIP/glucagon agonists, which the author calls “the ‘holy grail’ of weight loss medications. They work. Astonishingly well.” and which are currently in Phase 3 trials.

  2. Plenty of peptides people are using have thin medical backing and even the ones that do need complex supply chains to work, which the grey market probably isn’t respecting, making drugs even less effective.

  3. Cheap versions of existing drugs are WAY cheaper: grey market semaglutide costs ~$50/month versus $24,000 for brand-name — an 80-200x price differential

These are all considerations in the present. Over the medium term, the author sees a clear trajectory: oral GLP-1s democratize access, myostatin inhibitors add muscle preservation, and eventually gene therapy moves from wealthy self-experimenters to mainstream application. Everyone is going to be skinny and jacked.

Speaking of Oral GLP-1s! For the less adventurous among us, Novo Nordisk launched its Wegovy pill, the first FDA-approved Oral GLP-1 for Weight Loss, on Ro!

(3) Boltz Launches Boltz Lab Platform with AI Agents for Biomolecular Design

On Thursday, Boltz launched Boltz Lab: a platform that provides scientists with access to state-of-the-art open-source AI models:

  • Boltz-1 for biomolecular structure prediction (released December 2024, matching AlphaFold3 accuracy)

  • Boltz-2 for structure and binding affinity prediction (June 2025, in collaboration with Recursion and MIT)

  • BoltzGen for de novo protein and peptide binder design (October 2025)

  • Plus, new AI agents for small-molecule hit discovery and protein design.

We like open sourcing models to give scientists superpowers with which to give the rest of us superpowers.

We also like the investor list here. Boltz Public Benefit Corporation announced a $28M Seed round from a16z, whom I wrote about yesterday, and Amplify Bio, led by great friend of not boring and former not boring capital biotech partner Elliot Hershberg.

I texted Elliot to ask about the deal, and he told me this:

Boltz is a David and Goliath story. Google DeepMind, with infinite resources and a world-class team, made a huge breakthrough with AlphaFold. But by AlphaFold2, the code wasn’t open-source for other researchers to build on. So a small team at MIT decided to build their own.

With resources for only one training run, the Boltz team built and shipped a state-of-the-art open model for anybody to use. It took off like wildfire and is now used by every top 20 pharma company and >100k scientists worldwide. Now as Boltz PBC, this team has the resources they need to build infrastructure for scientists to make the best possible use of these models for programming biology.

Biology is hard, so I have a very complicated process to try to understand whether something new in bio is legit: I text Elliot. Boltz passes the Elliot test with flying colors. Get out there and program biology, everyone.

(4) General Matter Gets $900M DOE Contract for Domestic HALEU Production

America used to enrich uranium into the LEU and HALEU that most nuclear reactors use as fuel, and then we stopped, and now we rely on Russia, basically. Read our friends at Crucible Capital on the Nuclear Fuel Value Chain for a clearer picture.

Now, we have companies like Founders Fund-backed, Scott Nolan-led General Matter bringing enrichment back to the US, and General Matter now has a $900 million task order from the DOE to “create domestic HALEU enrichment capacity.”

In just a couple of years, we’ve gone from the government blocking nuclear to funding it aggressively, in part because of its growing popularity, in part because of the data center need, and in large part because entrepreneurs are finally giving them something to fund.

To use all this new fuel, we’re going to need a lot of new reactors, so more good news: Aalo Atomics is making progress on the first new reactor building at Idaho National Labs in 59 years!

Critical progress on all fronts.

(5) The Final Offshoring

by Jacob Rintamaki

Jacob Rintamaki is one of a small group of low-twenty-somethings who make me feel like I was educated via cups-on-string whilst they were educated via Somos fiber. Like I simply do not know how he knows so much already.

I first met Jacob a couple of years ago when I stumbled across his work on nanosystems. He wrote about it a bunch, but this monster is probably the best place to start (and finish) for the curious: A Technical Review of Nanosystems.

Nanosystems are like little tiny tiny atomically tiny machines that we haven’t actually figured out how to build yet, so it’s probably unsurprising that when Jacob turned to macrobots, regular old robots, the analysis would be child’s play.

Jacob’s writing on the topic is a great mix of technical, economic, and guy-in-the-scene-in-SF-hearing things. I particularly like his idea that robots and data centers are a match made in heaven and form an elegant flywheel: robots build AI infrastructure → better models → smarter robots → more infrastructure.

That’s the means.

What I particularly love about this freewheeling exploration of our robotic future is that Jacob also writes about the meaning. He ends the piece with two short stories on meaning in a post-labor world. The optimistic take, and the one I agree with, is that the robots are going to make us more human. Beep boop to that.

You can get the PDF here if you want to read the old-fashioned way, by hand.

BONUS: Below the Paywall

I’m going to try something here. I like seeing not boring world grow, looking at number go up on the Stripe dashboard, and taking down the people ahead of me on the Substack Business Top Bestsellers list, so I’m throwing up a paywall.

There’s nothing behind it really. Just a link to a video that I’m in the middle of and really enjoying that you can get on the internet for free. I bet if you’ve been reading not boring for a while, you can even guess who it’s from.

But if you want to subscribe, I’d love to have you and I promise to try to make it worth your while (eventually, not now. whatever is down there is not worth $20/mo or $200/yr).
