2026-04-14 20:03:42
"Within 2 or 3 years, AI mathematicians will surpass human mathematicians for any specific mathematical task. I don't think it'll be a decade, like some people say." — Tudor Achim, CEO of Harmonic
Listen or watch now on
YouTube, Spotify, or Apple Podcasts
Tudor Achim is the co-founder and CEO of Harmonic, a startup working to solve one of AI’s hardest problems: mathematical reasoning. In July 2025, Harmonic achieved gold-medal-level performance on International Math Olympiad problems alongside systems from OpenAI and Google DeepMind—but with a key difference: every proof Harmonic submitted was formally verified. Tudor’s path to Harmonic wound through competitive piano, computational biology, and autonomous driving. He studied at Carnegie Mellon’s music preparatory school, worked on machine learning at Quora, briefly pursued a PhD before dropping out, and then co-founded an autonomous driving company, Helm.ai. Harmonic’s core product, Aristotle, uses reinforcement learning and the programming language Lean 4 to solve problems and verify solutions.
In our conversation, we explore:
Why Tudor believes math is the fundamental toolkit to understand the world
How Harmonic uses hallucinations as a feature, not a bug
How Aristotle works and the applications beyond pure mathematics
The reinforcement learning process that lets Harmonic generate synthetic training data and solve problems humans have never attempted
Why Tudor believes AI could surpass human mathematicians on specific tasks within 2–3 years
Why the future of mathematics looks more like GitHub than academic journals
The alternating pattern between intellect leaps and data leaps throughout scientific history
How studying piano under an extraordinary teacher taught Tudor discipline and the value of sticking with hard problems
Brex: The intelligent finance platform.
Guru: The AI source of truth for work.
Rippling: Stop wasting time on admin tasks, build your startup faster.
(00:00) Intro
(03:34) From competitive piano to computer science
(06:28) The mathematical foundations of music (and why Tudor keeps them separate)
(08:24) Can AI ever create art with true intent?
(09:51) Early obsessions
(12:52) Defining intelligence
(14:49) Discovering machine learning’s potential at Quora
(17:30) Why Tudor chose computational biology for his PhD
(19:19) The decision to drop out and build Helm.ai
(22:55) The two breakthroughs that made mathematical AI possible in 2023
(25:28) The importance of Lean 4
(28:21) How Tudor and Vlad Tenev discovered they shared the same impossible dream
(32:35) Why formal verification became the core conviction
(34:21) The timeline for AI surpassing human mathematicians
(35:25) An overview of Aristotle: the world’s first always-correct mathematical agent
(38:12) Why Tudor says hallucinations are the engine of creativity
(39:30) The translation challenge from natural language to formal proof
(40:40) Reinforcement learning
(42:10) Why Aristotle is both faster and cheaper than alternatives
(43:34) Tradeoffs and use cases
(45:34) Math in AI now and what’s next
(47:38) Tying with OpenAI and DeepMind at the International Math Olympiad
(49:08) Democratizing AI and correctness
(53:13) Tudor’s 2030 thesis
(56:02) History’s alternating rhythm of thinking and measuring
(57:53) What Tudor has been wrong about
(58:52) What Tudor’s best at
(1:00:18) Final meditations
LinkedIn: https://www.linkedin.com/in/tudorachim
X: https://x.com/tachim/with_replies
Mineko Avery’s obituary: https://obituaries.post-gazette.com/obituary/mineko-avery-1089363974
Stefano Ermon on LinkedIn: https://www.linkedin.com/in/ermon
Vladislav Voroninski on LinkedIn: https://www.linkedin.com/in/vladislav-voroninski-83200213
Leonardo de Moura’s website: https://leodemoura.github.io
Vlad Tenev on LinkedIn: https://www.linkedin.com/in/vlad-tenev-7037591b
Vladimir Novakovski on X: https://x.com/vnovakovski
Terence Tao on LinkedIn: https://www.linkedin.com/in/terence-tao-6291246
Grigori Perelman: https://en.wikipedia.org/wiki/Grigori_Perelman
Harmonic: https://www.harmonic.fun
Carnegie Hall: https://www.carnegiehall.org
Carnegie Mellon Music Preparatory School: https://www.cmu.edu/cfa/music/preparatory-summer-programs/index.html
Why A.I. Isn’t Going to Make Art: https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art
Quora: https://www.quora.com
The Unreasonable Effectiveness of Mathematics in the Natural Sciences: https://webhomes.maths.ed.ac.uk/~v1ranick/papers/wigner.pdf
Helm: https://helm.ai
Honda Introduces Next-generation Technologies for Honda 0 Series Models at Honda 0 Tech Meeting 2024: https://global.honda/en/newsroom/news/2024/c241009eng.html
Lean: https://lean-lang.org
Aristotle: https://aristotle.harmonic.fun
Poincaré conjecture: https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture
OpenAI: https://openai.com
DeepMind: https://deepmind.google
International Mathematical Olympiad (IMO): https://www.imo-official.org
I’d love it if you’d subscribe and share the show. Your support makes all the difference as we try to bring more curious minds into the conversation.
Production and marketing by penname.co. For inquiries about sponsoring the podcast, email [email protected].
2026-04-10 21:27:57
Friends,
You didn’t mind speaking with Hanns Scharff. No, it was better than that — you liked it. He spoke English well, for one thing, the product of years spent in South Africa and a wife from London. But that didn’t fully explain it either. He spoke softly (unlike the others), was good for a joke or a story, and when he directed his dark, thoughtful eyes in your direction, you didn’t feel fearful, but at ease.
You could almost forget that he was the Luftwaffe’s most effective interrogator. And you, his prisoner.
While the Nazi Party’s other torturers and wheedlers relied on threats and violence, Scharff found success with a more genteel approach, taking downed pilots on long walks in the Taunus hills northwest of Frankfurt, during which he seemed to avoid discussing military matters. Only later, in some cases much later, would prisoners realize what had happened.
One American pilot recalled such a stroll. Only after he and Scharff had wandered and chatted for a while did the German mention, almost in passing, that a chemical shortage seemed to have impacted American munitions: their tracer bullets now trailed white smoke rather than their usual red.
No, no, the American told him. That wasn’t caused by a chemical shortage; it was a matter of design. American tracer bullets shifted from red to white when a pilot was running low on ammunition. It was a kind of warning system.
There it was. By purposefully saying the wrong thing, Scharff prompted the pilot to correct him, all without asking a question. A pocket had been picked without the wallet’s owner even registering a rustle. The pilot would not realize what had happened until it was much too late.
This technique, and others like it, is the subject of Confidential by John Nolan, a 1999 book that is as fascinating as it is difficult to obtain. Though prized by intelligence officials and professional “elicitors,” Confidential is no longer in print and available only second-hand. To grab my copy, I spent a few hundred dollars on eBay and waited weeks for it to arrive. All of which only adds to its strange allure, as if someone decided Nolan’s work was too useful to simply leave lying around.
Confidential is a wildly entertaining and impressively insightful book. In studying it closely these last few months, I’ve also come to believe it’s an important one. Though Nolan is ostensibly writing for the professional intelligence gatherer, his conversational techniques are useful to anyone, in any context. They are liable to make you more engaging and persuasive, as well as a better conversationalist.
It is also worth knowing when someone else is using them. Why did that salesperson seem to purposefully misspeak? Was I imagining it, or did that headhunter seem to disbelieve everything I said? What is it about this person that makes me want to open up so much? For founders working in sectors of national interest, Confidential will help you protect what you know. If you are building almost anything of note, there is a good chance that someone out there — whether in a bland concrete building, a glassy office tower, or a grassy tech campus — would love to understand it better than you’d like them to.
This is the second piece in an occasional series about books that change how you see everyday interactions. The first, on Keith Johnstone’s Impro, explored the invisible power dynamics in every conversation. Confidential picks up the other side of that coin: how information actually moves between people, and what a former spy figured out about making it move faster.
If you’d like to read more work like this — pieces that dig into overlooked ideas and the people behind them — a premium membership is $22/month or $220/year. Subscribe now.
As with Atlas Shrugged, the natural question when beginning Confidential is, “Who is John Nolan?”
The first thing to say is that this is the name you would want for a spy — a vaguely heroic sound being stirred into a bowl of porridge.
There are, fittingly, few details online, so we must rely on Nolan’s own telling. For twenty-two years, he worked as a spy. From the sparse available details, it seems Nolan spent time in some of the intelligence community’s more controversial programs, a background that lends Confidential both its authority and occasional chill.
After his time working for the government, Nolan founded a corporate espionage consultancy that advised business clients and gathered intel on their behalf. (One of the only articles I can find that mentions Nolan outside of Confidential covers an espionage campaign conducted by P&G against Unilever in 2000 to obtain the “secrets of shampoo.” Nolan’s firm was ostensibly the orchestrator.)
As part of his work, Nolan’s team relied on the psychological tools he outlines to extract sensitive information — all while being perfectly explicit about who they were. Beyond his team, Nolan also trained executives to use his techniques and protect themselves against him.
Nolan’s own “call to adventure” is a memorable starting point for the book. In 1960s New Jersey, Nolan started work as a typewriter salesman for a small local outfit, competing with a rep from IBM. Despite offering a superior product, Nolan struggled to sell it. While he flailed around trying to convince companies to have him in for an appointment, his rival was having companies call him: everyone knew that if you needed a typewriter, you went to “Big Blue.”
Sitting in a coffee shop one day, Nolan watched as the IBM salesman loaded typewriters into his station wagon. “In a brief moment of clarity,” Nolan came to a realization. Why bother to hunt and scrabble for customers when he could simply follow his rival and figure out who was in the market for typewriters?
For the rest of the day, he tailed the station wagon, watching it go from office to office. The next morning, he set out and visited every company, one after another, showcasing what his product could do. That week, he sold twelve typewriters.
Over the following months, he repeated the trick, shadowing the IBM rep a few days each week. He grew cocky enough to wait outside an office building and follow him an hour later. Without meaning to, Nolan had stumbled into the world of intelligence gathering and seen what it could yield.
This is the first of Nolan’s rollicking stories, but it would be wrong to classify this as a collection of yarns. Across Confidential’s 350 pages, Nolan outlines techniques of striking psychological acuity, interleaved with lessons from the history of espionage, and detailed examples. On a given page, you’re just as likely to learn about the subterfuges Johnson & Johnson deployed to defend the Tylenol market as to analyze the brilliant sinuousness of Sherlock Holmes’s questioning style.
For this piece, we’ll focus mainly on Part I: “Eliciting the Information You Want and Need.” Though the latter two parts offer interesting details, they primarily address how organizations can collect intelligence more effectively or protect against spies.
As the title of Part I suggests, it covers the art of “elicitation.”
Even if you are familiar with this word, its place in the Nolan lexicon is particular and benefits from definition. When the author writes about elicitation techniques, he explicitly means the following:
Elicitation…is defined as that process which avoids direct questions and employs a conversational style to help reduce concerns and suspicions—both during the contact and in the days and weeks to follow—in the interest of maximizing the flow of information.
As Nolan explains, elicitation is expressly distinct from interrogation and interviewing. “Interrogation [is] obtaining what you want from someone who possibly has it, who has not admitted to having it, and who knows who you are and why you want it,” he writes. Meanwhile, “interviewing is the process of obtaining information from someone who probably has it, who has more or less admitted to having it, and who knows who you are and why you want it.” Interrogation is, by definition, adversarial, while interviewing tends not to be.
As you’ll see, elicitation is a subtler dance.
2026-04-07 20:04:26
“I don't think Silicon Valley knows anything about networking anymore.” — Anil Varanasi
Listen or watch now on
YouTube, Spotify, or Apple Podcasts
Anil Varanasi, co-founder and CEO of Meter, is building a new kind of networking company for the AI era. Alongside his brother Sunil, he has helped raise more than $250 million to challenge incumbents like Cisco with a vertically integrated approach spanning hardware, software, deployment, and ongoing operations, all delivered through a utility-style model. His view is that networking has remained largely unchanged for decades, even as it has become foundational to everything from AI workloads to real-world infrastructure. Meter’s ambition is not just to improve existing networks, but to make them autonomous over time. Before starting the company, Anil and Sunil were deeply involved in filmmaking, a background that still shapes their philosophy of building with cathedral-level craft across every layer of the stack.
Together we explore:
The “burden of knowledge” and why progress is getting harder across fields
Why most companies over-index on technology and ignore business model innovation
The three ways companies create advantage: technology, delivery, and business model
How Meter’s trade-in model borrows from the automotive industry
Why networking should function like electricity or water—not hardware
Lessons from Japanese vending machine logistics for infrastructure deployment
The hidden coordination problem behind vertically integrated companies
Why Anil believes “common knowledge” is often wrong
How COVID forced Meter to abandon geographic constraints and scale nationally
The case for fully autonomous networks in a world of exploding demand
.tech domains: An identity for builders at their core.
Granola: The app that might actually make you love meetings.
Brex: The intelligent finance platform.
(00:00) Introduction to Anil Varanasi and Meter
(03:52) The burden of knowledge and slowing innovation
(08:18) Losing creativity vs gaining expertise
(10:25) What Meter actually does
(13:26) Early life, immigration, and upbringing
(15:47) Parental influence
(20:03) Film, storytelling, and creative influence
(22:55) Why Anil didn’t pursue filmmaking
(25:44) Parallels between company building and filmmaking
(27:00) Early programming and building
(28:05) George Mason and understanding systems
(29:59) The dynamic of working with his brother as a co-founder
(34:03) His first business and lessons learned (or lack thereof)
(35:15) Lessons from successful companies
(38:16) Japanese vending machines and logistics insight
(41:10) Scrapping 18 months of work
(42:40) Conviction and long-term company building
(46:02) COVID shock and near-death moment
(49:59) Building hardware like a cathedral
(52:25) Rethinking the networking business model
(57:06) Build vs buy and transaction costs
(59:39) Networking as infrastructure and utility
(01:01:30) The case for autonomous networks
(01:03:25) Hiring, talent, and what actually matters
(01:06:15) Big unanswered questions (sleep, science)
(01:07:28) Rethinking education
(01:09:02) Infinite games and long-term thinking
LinkedIn: https://www.linkedin.com/in/anilcv
Website: https://anilv.com
The Great Stagnation: How America Ate All The Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better: https://www.amazon.com/Great-Stagnation-Low-Hanging-Eventually-eSpecial-ebook/dp/B004H0M8QS
Finite and Infinite Games: https://www.amazon.com/Finite-Infinite-Games-James-Carse/dp/1476731713
Matt Clancy’s blog: https://www.newthingsunderthesun.com
Warren Buffett: https://en.wikipedia.org/wiki/Warren_Buffett
Charlie Munger: https://en.wikipedia.org/wiki/Charlie_Munger
Orson Welles: https://en.wikipedia.org/wiki/Orson_Welles
Ronald Reagan: https://en.wikipedia.org/wiki/Ronald_Reagan
Arnold Schwarzenegger: https://en.wikipedia.org/wiki/Arnold_Schwarzenegger
P.C. Sreeram: https://www.imdb.com/name/nm0820269
Satyajit Ray: https://en.wikipedia.org/wiki/Satyajit_Ray
Mani Ratnam: https://en.wikipedia.org/wiki/Mani_Ratnam
Elon Musk on X: https://x.com/elonmusk
Ashlee Vance: https://www.ashleevance.com
Dwarkesh Patel’s podcast: https://www.dwarkesh.com
Meter: https://www.meter.com
Coefficient Giving: https://coefficientgiving.org
George Mason University: https://www.gmu.edu
You Are Not Late: https://medium.com/message/you-are-not-late-b3d76f963142
The End of Asymmetric Information: https://www.cato-unbound.org/2015/04/06/alex-tabarrok-tyler-cowen/end-asymmetric-information
Sam Altman on Trust, Persuasion, and the Future of Intelligence - Live at the Progress Conference (Ep. 259): https://conversationswithtyler.com/episodes/sam-altman-2
P. C. Sreeram (Babu Export Company): https://babuexportcompany.com/9381/
The Apu Trilogy: https://www.criterion.com/boxsets/1145-the-apu-trilogy?srsltid=AfmBOor2rfr6GalePfNZXS7QiljZiehumuoJXvu4TyL2szofL16qrAIp
Nayakan: https://www.imdb.com/title/tt0093603
Marty Supreme: https://www.imdb.com/title/tt32916440
Ford: https://www.ford.com
Toyota: https://www.toyota.com
Intel: https://www.intel.com
Adobe: https://www.adobe.com
Cisco: https://www.cisco.com
NVIDIA: https://www.nvidia.com
Things we don’t understand: https://anilv.com/understand#
Waldorf education: https://en.wikipedia.org/wiki/Waldorf_education
I’d love it if you’d subscribe and share the show. Your support makes all the difference as we try to bring more curious minds into the conversation.
Production and marketing by penname.co. For inquiries about sponsoring the podcast, email [email protected].
2026-03-26 22:47:36
Friends,
If Silicon Valley has any religion, it is that of the founder. Nowhere else puts as much faith in, nor grants as much latitude to, sovereign individuals attempting to build something from scratch. Only within this strip of approximately 35 miles might a broke 20-year-old in pajama pants and Adidas sliders command greater reverence than a celebrated researcher, diligent doctor, or decent executive. This is the strangeness of Silicon Valley and its genius.
The cult of the founder has enjoyed a fresh, febrile burst. Now, more than anytime in the last decade, operating in “founder mode” — the term popularized by Paul Graham’s post — is seen as synonymous with efficacy. Managers (that wretched, blighted species) are viewed not only as less productive but less legitimate, usurpers and meddlers that merely disrupt the glowing chi that stems from the central chakra of those who build.
Look across the tech landscape, however, and there is one manager who bears closer inspection: Satya Nadella. Since taking over as Microsoft CEO in 2014, he has built a record few executives can match. Given Microsoft’s current strengths, it is easy to forget the company Nadella inherited. Unlike Tim Cook, who stepped into an innovative organization still in the early innings of capitalizing on a new product category, Nadella stepped into a company that was culturally rotten, creatively blocked, and stuck with a sideways stock price. It is true that Ballmer had sown the seeds for a cloud computing renaissance, as we’ll discuss, but this was far from the finished article.
In the intervening 12 years, Nadella not only drove the company to a $3 trillion market cap but also oversaw an authentic internal revolution, expanded its product suite, and positioned Microsoft to keep pace in the AI era. He has done so while portraying himself as the consummate modern manager, fond of borrowing from the Buddha, and peddling the MBA-circuit bon mots of empathetic leadership and a “growth mindset.” Nadella’s own chronicle of his turnaround, Hit Refresh, is stuffed with such cheery banalities. While the great CEOs of the past and current generation are prone to fits of rage, savage dressing-downs, and impossible expectations, Nadella appears genuinely reasonable, a happy guru who would like you to work hard, sure, but don’t forget to take time for your family and maybe a restorative hobby.
How has he done this? How does a peacetime CEO win in a war zone? Can one really win at this scale without the animal intensity of Musk or Huang? Is the balmy public presentation the whole story?
To answer these questions, I have spent the past three months studying Nadella’s leadership from as many angles as possible. That includes Hit Refresh, Acquired’s two-part series on Microsoft before Nadella, a slew of podcasts and long-form articles, internal emails released in court filings, annual shareholder letters, and confidential expert interviews with former Microsoft executives.
What emerged is a nuanced portrait of how a manager built fresh power structures beneath him, constructed new mythologies, reset cultural norms, and developed founder-like authority.
This piece is part of The Generalist’s ongoing series of managerial “playbooks,” exclusively available to premium subscribers. You can find our previous editions on Elon Musk, Jeff Bezos, and Jensen Huang here.
Our mission, across all of these playbooks, is to reveal the real strategies legendary entrepreneurs use to build their businesses. These are often uncomfortable and in direct conflict with traditional managerial advice. However, if you believe progress depends on innovation, as we do, then understanding these principles, foibles included, is not only interesting but essential.
To unlock all four playbooks and everything else The Generalist has to offer, join us now for $22/month. You’ll get immediate access to our best long-form writing, company case studies, exclusive interviews, and private databases.
Manifest authority through mythology.
Borrow power from the old regime (even as you counterposition against it).
Remake the aristocracy beneath you.
Make it safe to fail.
Once the narrative is set, use it as cover.
Hone your sharpest knife.
If you can’t win the future, at least don’t lose it.
In each section, we’ll unpack the strategies behind these principles and outline their benefits and tradeoffs.
A 10,000+ word playbook of tech’s most effective non-founding CEO
How Nadella earned founder-like authority without founding anything
How Nadella dismantled Microsoft’s infamous stack ranking culture
The bathroom-break decision that opened Azure to Linux
The $2.5 billion acquisition that had nothing to do with productivity (and everything to do with distribution)
The licensing maneuver that imposed a 400% tax on competitors’ cloud customers
How a panicked 2019 email led to the $13 billion OpenAI bet
Over 100 hours of research, confidential executive interviews, and court filings distilled
…and much more. To unlock the full playbook and learn how a “safe pick” turned a stagnant giant into a $3 trillion force, join our premium newsletter today.
Learn How to Automate Compliance for SOC 2, ISO 27001, and More with Vanta
With customer expectations rising and compliance needs shifting, keeping up can be daunting. Vanta’s Agentic Trust Platform helps fast-moving startups and security teams get audit-ready fast and stay continuously compliant, turning compliance into a deal accelerator, not a blocker.
Join to learn how Vanta can help you:
Automate evidence, policies, and remediation across SOC 2, ISO 27001, HIPAA, HITRUST, ISO 42001, and more
Build real security foundations, not check-the-box fixes
Show credibility faster with a public Trust Center and AI-powered questionnaires
Keep engineers focused with guided workflows and developer-native automation
By definition, a non-founding CEO does not start from scratch. They enter an environment of someone else’s making and must transform it into something of their own. To understand how Satya Nadella changed Microsoft then, we must first grasp the state of the company he inherited. It was one just emerging from what became known as its “lost decade.”
When Steve Ballmer stepped into the CEO role in January 2000, he was taking the reins of the most valuable company on the planet. Less than three weeks earlier, Microsoft had hit a peak valuation of $615 billion, with a stock price approaching $60.
When the crash came, Microsoft cratered, dropping below $250 billion. It was not the fall that was remarkable, but what happened after. Or rather, what didn’t happen after. In the years that followed, as other wounded tech players stabilized and then climbed, Microsoft stayed stuck, even as its underlying performance improved. During Ballmer’s reign, revenue compounded from $23 billion to $86 billion while operating income improved from $11 billion to $28 billion. And yet, the stock barely moved, flatlining at about $30 a share. Over a similar timeframe — between late 2000 and mid-2012 — Apple snowballed from a $4.8 billion pipsqueak into a $541 billion behemoth. By the time Nadella’s reign began, Microsoft was firmly in its shadow.
2026-03-24 20:05:42
“I would argue the biggest risk is actually locking in a very narrow monoculture for superintelligence. One superintelligence is much less safe than infinite superintelligence.” — Vincent Weisser
Listen or watch now on
YouTube, Spotify, or Apple Podcasts
Much of the fear around AI centers on misalignment – the idea that powerful systems might act against human interests. Vincent Weisser worries about something different: what happens if advanced AI systems are perfectly aligned with the interests of a small group of institutions? That concern led him to co-found Prime Intellect, a startup building open infrastructure for training and deploying advanced AI models. Before Prime Intellect, Weisser helped organize Vitalik Buterin’s Zuzalu experiment and worked in decentralized science, where he helped unlock roughly $40 million in funding for unconventional research. Today, he’s applying that same open ethos to AI, working to ensure the tools that shape superintelligence remain broadly accessible rather than concentrated in the hands of a few.
In our conversation, we explore:
Why Vincent believes multiple superintelligences are safer than one
The intellectual influences that shaped Vincent’s thinking about intelligence and progress, including David Deutsch and Nick Bostrom
Prime Intellect’s evolution from distributed compute infrastructure to frontier model training and reinforcement learning tools
Why Vincent believes open and decentralized science could accelerate discovery
The Zuzalu experiment and what it suggests about the future of scientific communities
The role of aesthetics and craft in building technology
Why Europe might have a cultural advantage in a post-superintelligence world
Vincent’s predictions for the next five years of AI
Granola: The app that might actually make you love meetings.
Brex: The intelligent finance platform.
Rippling: Stop wasting time on admin tasks, build your startup faster.
(00:00) Introduction to Vincent Weisser
(03:28) The book behind Prime Intellect’s name
(07:35) The case for suffering
(09:35) An overview of Prime Intellect
(13:03) Why open source models matter
(21:18) Vincent’s intellectual influences
(25:17) Early years in the startup scene
(31:48) Funding science outside traditional institutions
(41:22) The past 6 months of AI progress
(43:45) Deciding to build Prime Intellect
(46:55) Why GPUs were the right starting point
(51:39) Training models on Prime Intellect
(59:48) Why beauty matters
(1:03:48) The Zuzalu experiment
(1:06:27) Prime Intellect’s AGI Easter egg
(1:11:13) Predictions for the next five years
(1:15:09) Final meditations
LinkedIn: https://linkedin.com/in/vincentweisser
X: https://x.com/vincentweisser
Goodreads: https://www.goodreads.com/user/show/69248416-vincent-weisser
Website: https://primeintellect.ai
The Metamorphosis of Prime Intellect: https://www.amazon.com/Metamorphosis-Prime-Intellect-Roger-Williams/dp/1411602196
The Beginning of Infinity: Explanations That Transform the World: https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359
Superintelligence: Paths, Dangers, Strategies: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834
Steve Jobs: https://www.amazon.com/Steve-Jobs-Walter-Isaacson/dp/1982176865
The Singularity Is Near: When Humans Transcend Biology: https://www.amazon.com/dp/0143037889
Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100: https://www.amazon.com/Physics-Future-Science-Shape-Destiny/dp/0307473333
A Pattern Language: Towns, Buildings, Construction (Center for Environmental Structure Series): https://www.amazon.com/Pattern-Language-Buildings-Construction-Environmental/dp/0195019199
George Hotz on X: https://x.com/realGeorgeHotz
Andrej Karpathy on X: https://x.com/karpathy
Benjamin Bratton’s website: https://bratton.info
David Deutsch’s website: https://www.daviddeutsch.org.uk
Albert Einstein: https://en.wikipedia.org/wiki/Albert_Einstein
Vitalik Buterin’s website: https://vitalik.eth.limo
Celine Halioua’s website: https://www.celinehh.com
Prime Intellect: https://www.primeintellect.ai
Anthropic: https://www.anthropic.com
OpenAI: https://openai.com
DeepSeek: https://www.deepseek.com
Google DeepMind: https://deepmind.google
Are We Alone In The Universe? Sara Seager on Exoplanets, Venus, and the Hunt for Alien Life (Astrophysicist and Planetary Scientist at MIT): https://www.generalist.com/p/are-we-alone-in-the-universe-sara-seager
Y Combinator: https://www.ycombinator.com
Loyal: https://loyal.com
Cursor: https://cursor.com
Zuzalu: https://zuzalu.city
Why I Built Zuzalu: https://www.palladiummag.com/2023/10/06/why-i-built-zuzalu
Dyson sphere: https://en.wikipedia.org/wiki/Dyson_sphere
I’d love it if you’d subscribe and share the show. Your support makes all the difference as we try to bring more curious minds into the conversation.
Production and marketing by penname.co. For inquiries about sponsoring the podcast, email [email protected].
2026-03-17 20:03:46
Listen or watch now on
YouTube, Spotify, or Apple Podcasts
Karol Hausman is the co-founder and CEO of Physical Intelligence, a robotics company building a general-purpose “AI brain for the physical world.” The company has raised more than $1 billion in funding to develop foundation models that allow robots to operate across many machines, environments, and tasks rather than being programmed for a single purpose. The core thesis: the same scaling dynamics that transformed language models may also unlock robotic intelligence. But only if you resist every commercial pressure pushing you toward specialization. The central challenge isn’t mechanical design. It’s intelligence: how robots learn, generalize, and interact with a physical world that is far harder to simulate than it is to describe. Before launching Physical Intelligence, Karol worked at Google Brain and Stanford University, studying robot learning alongside researchers Sergey Levine and Chelsea Finn, who later became his co-founders.
In our conversation, we explore:
How growing up in a small town in Poland and watching Star Wars sparked Karol’s fascination with robots
The moment a lecture from Sergey Levine convinced him to abandon his PhD research direction and pivot fully to deep learning
Why robotics has historically lagged behind breakthroughs in language models
The case for building a general “AI brain” for the physical world rather than a single specialized robot
The role of real-world data in training robots, the limits of simulation, and how deployment could create a powerful data flywheel
The return of reinforcement learning and the parallels between human learning and robot training
The unique challenges of physical intelligence and why robots must operate with far higher reliability than language models
Brex: The intelligent finance platform.
Granola: The app that might actually make you love meetings.
(00:00) Intro
(04:05) Karol’s early fascination with robots
(07:38) How Karol relates to Fei-Fei Li’s biography
(08:52) What inspired Karol to build better robots
(11:19) Philosophical influences
(15:33) Parallels between The Inner Game of Tennis and robotics
(18:21) Karol’s entry point to robotics and PhD program
(25:49) Combining robotics with LLMs: The Taylor Swift demo
(30:48) The 1970s SHRDLU AI experiment
(32:33) Founding Physical Intelligence
(35:13) How Lachy Groom got involved
(39:40) How research shapes what Physical Intelligence builds
(45:22) The importance of real-world data
(49:07) The return of reinforcement learning in robotics
(53:31) The risk of commercializing too early
(55:47) Finding the right partners for the business
(57:13) Open research questions
(1:00:00) NVIDIA’s simulation engines
(1:01:57) The surprising speed of progress
(1:04:16) Reliability in robotics
(1:07:31) Compensating for missing senses
(1:12:28) Book recommendation
LinkedIn: https://www.linkedin.com/in/karolhausman
Worlds I See: https://www.amazon.com/Worlds-I-See-Fei-Fei-Li/dp/1250389895
The Inner Game of Tennis: The Classic Guide to the Mental Side of Peak Performance: https://www.amazon.com/Inner-Game-Tennis-Classic-Performance/dp/0679778314
On the Move: https://www.oliversacks.com/oliver-sacks-books/on-the-move
Why Greatness Cannot Be Planned: The Myth of the Objective: https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective-ebook/dp/B00X57B4JG
Fei-Fei Li on X: https://x.com/drfeifei
Baruch Spinoza: https://en.wikipedia.org/wiki/Baruch_Spinoza
Sergey Levine on X: https://x.com/svlevine
Brian Ichter on LinkedIn: https://www.linkedin.com/in/brian-ichter-26875978
Lachy Groom on LinkedIn: https://www.linkedin.com/in/lachy-groom-b218895
Chelsea Finn on LinkedIn: https://www.linkedin.com/in/cbfinn
Lee Sedol: https://en.wikipedia.org/wiki/Lee_Sedol
Physical Intelligence: https://www.pi.website
Karol’s post on X about Worlds I See: https://x.com/hausman_k/status/1732087549034889688
Ontology: https://en.wikipedia.org/wiki/Ontology
The History of AI in 7 Experiments: https://www.generalist.com/p/the-history-of-ai
NVIDIA: https://www.nvidia.com
Proprioception: https://en.wikipedia.org/wiki/Proprioception
I’d love it if you’d subscribe and share the show. Your support makes all the difference as we try to bring more curious minds into the conversation.
Production and marketing by penname.co. For inquiries about sponsoring the podcast, email [email protected].