2026-02-22 03:42:14
This episode is brought to you by Progressive Commercial Insurance. As a business owner, you take on a lot of roles:
But when it comes to small business insurance, Progressive has you covered. They offer discounts on commercial auto insurance, customizable coverages that can grow with your business, and reliable protection for whatever comes your way.
Count on Progressive to handle your insurance, while you do, well, everything else. Quote today in as little as eight minutes at ProgressiveCommercial.com.
Progressive Casualty Insurance Company. Coverage provided and serviced by affiliated and third-party insurers. Discounts and coverage selections not available in all states or situations.
The American Airlines Advantage Business Program is changing the way companies book travel and get rewarded. Designed for fast-growing businesses, the program makes it easy to earn rewards. And it’s free to join.
Your company earns one Advantage Mile for every dollar spent on business travel booked anywhere with American. Use these miles to help offset future travel expenses, transfer to employees, and more.
You’ll also gain access to a suite of tools to streamline travel management, including:
And it’s a win-win. Travelers can earn additional loyalty points on top of what they already earned through the Advantage Program, helping them reach status faster.
Earn more on business travel you’re already taking with the American Airlines Advantage Business Program. Register today at aa.com/AdvantageBusiness.
Welcome to the Sinica Podcast, a weekly discussion of current affairs in China. In this program, we look at books, ideas, new research, intellectual currents, and cultural trends that can help us better understand what’s happening in China’s politics, foreign relations, economics, and society.
Join me each week for in-depth conversations that shed more light and bring less heat to how we think and talk about China.
I’m Kaiser Kuo, coming to you this week from my home in Chapel Hill, North Carolina.
Sinica is supported this year by the Center for East Asian Studies at the University of Wisconsin-Madison, a national resource center for the study of East Asia. The Sinica Podcast will remain free, but if you work for an organization that believes in what I’m doing with the show and with the newsletter, please do consider lending your support.
You can reach me at [email protected]. And listeners, please support my work by becoming a paying subscriber at sinicapodcast.com. You’ll enjoy, in addition to the pod:
And of course, you can bask in the knowledge that you’re helping me do what I honestly believe is important work. So, do check out the page, see all it is on offer, and consider helping me out.
Today, my guest is Afra Wang. I suspect many of you will already have come across her work through her podcast, through appearances on other China-focused shows, or through the many provocative, beautifully written, and fascinating essays she’s published.
Afra is a writer working between London and the Bay Area, currently a fellow with Gov.ai, and previously with the Roots of Progress Institute. Before going full-time as an independent writer last year, she spent six years in Silicon Valley covering AI and crypto, running newsrooms, building developer communities, and absorbing the Valley’s growth logic from the inside.
She writes about China and about Silicon Valley — the latter sometimes metaphorically — but about neither of these places ever as mere abstractions.
She writes about them as overlapping systems: how China’s technological interiority shows up in Western debates about AI, industrial policy, and even progress itself.
She’s also the host of the Chinese language podcast Pipei Jiao Wah, Cyber Pink, and part of the Baihua podcasting community.
We’re talking today about her recent Wired piece on what might be China’s most influential science fiction project that you’ve never heard of, the Morning Star of Lingao, or Lingao Qiming, and the worldview behind it, something known as the Industrial Party, or the Gongyedang.
If you haven’t read that yet, click the link, read the piece. It’s one of actually several China-focused pieces in this issue of the magazine — some really good stuff. Come back when you’ve finished. We will still be here.
This isn’t just going to be a conversation about time travel sci-fi — though that would be a lot of fun — but actually about how a country explains to itself why it fell behind, and what it thinks salvation looks like.
Afra Wang, a very, very warm welcome to Seneca.
Oh, wow. Thank you so much, Kaiser.
When you were describing my work experiences, it’s almost like I’m reliving my past life, especially my time doing a lot of growth stuff for tech companies and crypto. And actually, I discovered the Morning Star of Ling Gao, or Ling Gao Qi Ming, as a collective science fiction novel writing project from my crypto phase.
Really?
Yeah. I was told by a lot of nerdy technologists, people who are Chinese cypherpunks, saying there is the greatest DAO experiment ever, which is a sci-fi story collectively written by many people, like hundreds of thousands of people. I was like, “wow, what do you mean?”
Because DAO in crypto represents decentralized, autonomous organization. Referring to this science fiction writing as a DAO experimentation is really fascinating. It also sort of reflects on the demographic — the people who are reading this story, right? Who are reading the Morning Star of Lingao? Who are reading Lingao Qiming? And it turns out to be:
Yeah, not surprising at all. A lot of overlap with sci-fi.
But before we get into sci-fi and about that essay, this is your first time on the show, so I’d like to give listeners a chance to get to know you a bit better.
You describe yourself as a kind of cultural in-betweener, and that really resonates obviously with me. For people who move between China and the West, especially when writing about technology and about power, translation isn’t just a linguistic exercise. It’s actually epistemic, but it’s also moral and maybe even aesthetic. I mean, it covers pretty much all of philosophy.
One thing that struck me reading your essay is how effortlessly you seem to do this, just to kind of code switch, not just in language, but also in your moral and emotional register, especially when you’re writing about something as charged as the industrial party. Is that something you experience as deliberate, or does it feel almost second nature to you at this point?
I think I’m probably a somewhat open-minded and perceiving person, so, I don’t know, people have been telling me that I tend to be able to make friends with all kinds of people. I think that’s, in a sense, a good trait for me as a more discerning writer, because I think I’m really sensitive to vibes.
Also, I like to use the word vibe because this is how I feel. I’m really sensitive to the aesthetics, the sensations, when I encounter something, for example, the Silicon Valley mental model versus the Hangzhou-Shenzhen-Beijing mental model, right? I was really fascinated by the cognitive infrastructure, the intellectual backbone, of the Chinese version.
So I, you know, last year I wrote something called the China Tech Canon, which is a response.
Yeah, that was great. Thank you so much.
Yeah, I think it all comes down to the sense that I want to deeply, contextually translate certain things, you could say:
I want to translate something into Western discourse, but in a much more humanistic and personal way, because I am somehow constantly digesting cultures from both sides. I am native in Chinese, but I feel really native in English as well, in the Silicon Valley discourse as well. So I think I’m just kind of naturally juggling in between.
Do you go the other direction as well? Do you translate the Silicon Valley kind of tech canon into Chinese as well? Or do you find yourself doing more sort of the explanation in the direction of explaining China to the West?
Yeah, so not about technology, but I’ve been doing this Chinese-language podcast for many years with my amazing co-host. I think all of us are cultural in-betweeners, and we actually translate Western popular culture and talk about it in Chinese. For example, the popular movie Hamnet, a Golden Globe hit: we recorded a podcast about Hamnet in Chinese, and the whole context, the theme, the reaction, the catharsis we experienced — we were basically discussing this movie in Chinese, although it’s a quintessentially English movie.
Yeah, I read the novel. I have not seen the movie yet. Is it good?
Oh, it’s absolutely good. It’s so moving, it’s very touching, and you do experience this Greek tragedy style catharsis at the very end, because it’s a movie that forces you to confront a lot of eternal questions, like death, like loss. It’s such a layered movie, I can’t really explain it. It’s beautiful. It absolutely changes some part, like, the deepest part of you.
So do you ever find yourself judging things differently depending on which context you’re inhabiting? Not because you think one side is right, but because different histories seem to demand different weights, different priorities. You know, this is something I’m constantly wrestling with.
How conscious is that process for you when you’re writing? So, you know, you might have one view of the industrial party, say, as a Chinese person living in China and another entirely looking at them from the outside and talking about that to Americans. So, do you find yourself sort of having different standards?
I think I do. I think I’ve had double consciousness since I was a kid growing up in China. I have double consciousness in the sense that a lot of stuff can coexist: things can look like they contradict each other, but they could both be true.
Like, you know, in a sense I went through the whole Chinese education, right? I finished high school in China and only went to the U.S. for college. And I think, I guess, accepting a lot of contradictory views and philosophies, as you said, epistemic systems, is part of reality for me, I would say.
But I still think that when the Chinese me and the English me, or the sensible me and the anxious immigrant me, are coexisting, there is a converging aesthetic standard or sensibility that I uphold. There are certain sensibilities and aesthetics that are always true, and I can always try to stay true to those.
Wow, that sounds so healthy and grounded. That’s fantastic. It seems like you experience this kind of ability to code switch and to experience sort of two whole different moral and epistemic systems as more of a freedom than a burden, then.
I would say so. Take, for example, this piece for Wired. It’s about the Industrial Party, about this poorly written, crowdsourced science fiction project. I do not like reading this story at all, because it’s so poorly written.
But at the same time, it gives off this energy and spirit of what people were actually craving in a rapidly developing, urbanizing China, and why people feel so strongly about this developmentalism. And in a sense, maybe the U.S. needs more poorly written collective science fiction like Lingao, because the U.S. right now kind of needs some industrial party people.
I mean, I hate the story. The “greatest Chinese science fiction” in the title of this Wired piece is actually an irony, right? It’s not actually the greatest, because it’s honestly really bad, but it speaks to so many things that I, yeah.
We’re going to get really deep into Ling Gao in just a second here, but there are a couple other things I still want to ask you about because there’s another divide that I see you moving across really fluently and that’s the one between STEM and the humanities, between, you know, the engineering ways of seeing the world and the more humanistic or cultural ways of seeing the world.
So reading your work, I get the sense that you’re genuinely at home in both of these registers. You’re able to translate between them without, you know, romanticizing the one or condescending to the other. Is that, again, something that you’re conscious of when you’re writing or does it feel like a natural part of, you know, how you make sense of tech and society?
I’m not sure if I’m really fluent in the STEM language. First, I am not a technologist. I don’t code, except, you know, right now vibe coding makes everything easier. Everyone’s doing that.
Yeah, everyone’s doing it. Not me.
Yeah, I honestly don’t think I speak the KPI-coded language, optimizing everything, improving everything. I do have a lot of friends who are like that, but working in a tech company gave me a sense that an entire corporation, hundreds of people, could just grind really hard, iterate on the product really hard, just to improve user retention by 2% or daily active users by 1%. Because I’ve been there. I was one of the people trying really hard to retain users, study the users, or improve the recommendation algorithm so our app would have more revenue that day. You can see it’s all correlated, right? I was a content manager, a growth manager, during my first job. When you put out certain content or adjust the algorithm a little bit, there’s an instant bump in your revenue that day. It’s extremely correlated.
If you spend more money on Facebook’s advertisement, you will just get more new users. It is so direct in the tech world, and I do think I understand that eagerness or straightforwardness in the tech landscape.
This divide, though, between the STEM view and the humanities view, do you feel like that divide is even more acutely felt in Chinese life than in the Western context? I mean, the gap between the engineering dude and the artsy fartsy literati type—do you think that’s an outdated caricature by this point, or is that still something very much a dividing line in Chinese life?
I think China’s societal logic was dictated by the STEM optimization logic, or industrialization logic, for a long time, until the young people got so tired, people got so tired, and then this optimizing bubble burst.
So back then, maybe 10 years ago, optimizing everything, trying hard, was admirable. There was internet slang for people trying hard, trying to get promoted and make a lot of money during the Chinese economic boom and the internet tech boom.
But right now, this bubble has burst, so people proactively do not want to participate, such as the nuli lore, the nuli fairy tales. Instead, you see that China’s mainstream sentiment today is:
This is the current mainstream. I would say China is a post-industrial party society now.
Oh, good, good. I’ll feel more at home there because I’m a good old Gen X slacker, so I know all about avoiding work.
I mean, it’s interesting to me because I feel like I agree. There used to be a period where one side of that divide was absolutely treated as:
and the other was just written off. But yeah, I’m glad to see this swinging back.
Before we get into what Lingao represents, I think it’s worth situating it a bit. For listeners outside China, it’s almost completely unknown, as you said. How widely known is it inside China, especially among communities that care about:
Is this like a cult classic or is it something closer to shared cultural infrastructure?
I don’t think it’s widely known as a popular cultural product, like a movie, or like Journey to the West, which is basically part of the most common day-to-day vernacular.
But I think Lingao is very popular, very influential, in a niche community. This community is what I would call the elite class of technologists, the STEM people who see themselves as pillars of China’s urbanization and industrialization, and it is predominantly male.
So Lingao, to be completely honest, strikes me as a misogynist, or at least semi-misogynist, novel, because of what a lot of the plots imply about women.
But Lingao is a cult fetish. It is a Bible for the industrial party, this loosely connected intellectual group in China.
Yeah, I definitely want to ask you about the gendered nature of this book, and about science fiction more broadly. I remember reading Santi, the Three-Body Problem, and just being shocked. There’s stuff you could not get away with in America today, just the level of misogyny that was in there.
But how did you come across it yourself? You told me that you heard about it from Crypto Bros in the Valley, right?
Yes. Chinese Crypto Bros. I heard it from Chinese Crypto Bros.
Yeah, that’s hysterical.
What finally got you to read the thing? Did it just keep coming up? And before you actually surfed over to it, what did you think it was? What kind of reputation did it have in your mind before you actually read a page?
I actually didn’t know anything except that it is, like, a DAO experiment, a crowdsourced sci-fi lore. And to be honest, when I read anything that’s Chinese internet native, I tend to have lower expectations, because I know some of those products, especially the fiction writing, are almost like Harry Potter fanfic, right?
It’s not written by J.K. Rowling herself, but by fans who just spend an afternoon putting a lot of scrappy plots together, and then you have a fanfic. So I tended to treat Lingao as an interesting phenomenon, a lore from a deeper corner of the Chinese internet, instead of as serious science fiction. I kind of entered this novel with lower expectations.
And it turned out to be, yes, very scrappy. It was written by so many people. They started collectively writing it in 2006, and people just kept writing, piling up and piling up.
A few years later, people were like:
“Okay, now we have too many things. The plots are going in multiple directions. We need to come up with a canonical plot together.”
So someone came along and compiled the storylines together, which created the, quote-unquote, canonical Lingao timeline as we see it today. But you can guess the nature of the collaboration:
There are thousands and thousands of different branches. When I was reading it, I couldn’t really tell which part was the canonical story and which part was fanfic back then. But when a branch is well written, it sort of merges into the canon.
Is that because it’s all so scrappily put together? It doesn’t read like one thought-out thing; it reads like a collective stream of consciousness. But there were these people who did the actual organizing, who actually decided what is canon and what is peripheral.
What do we know about these people, about these principal writers? Who are they? What kinds of backgrounds do they come from?
They all use pseudonyms online, but we know some phenomenal writers emerged out of the Lingao scene and later became influencers or writers for Guancha Zhe Wang (观察者网). And Guancha is inseparable from Lingao’s collective writing.
Give us a sense of what Guancha Zhe Wang is. I mean, they have a certain political slant, a certain reputation. Why don’t you explain what Guancha is?
Okay, so in the Wired piece, I told readers that Guancha Zhe Wang is almost like a Chinese Breitbart, but I think it’s less like Breitbart because it doesn’t punch up. It kind of only punches west.
So Guancha doesn’t punch up, though?
Yeah, so it is, I would argue, a more thoughtful patriotic or nationalistic collective online magazine delivering a lot of pro-industrial policy, pro-state opinion pieces, and some of the pieces are quite persuasive.
You know, I used to be a reader of Guancha when I was in college. Guancha reached its peak in the early 2010s. The founder himself, Eric X. Li, I think he studied at UC Berkeley.
Yeah, I think he was the same year as me, in fact.
I see, I see. Yeah, he studied at UC Berkeley. It seems he made a lot of money and diverted it into this collective intellectual body-building, and started Guancha Zhe Wang.
It’s like a think tank and online publication, but it really represents a cohort of writers who, just like Lingao’s, have a strong STEM background and are very pro-China, very pro-industrialization, and very anti-West.
And early on, a lot of their pieces read a little like today’s narrative on how to establish a strong industrial national identity, unapologetically loving China and being patriotic.
Yeah, so it’s very, how to say, very internet native. I would argue they’re very internet native because all of them know how to talk. They’re actually really good writers.
You know, I mean, Eric X. Li, the person who’s really at the heart of it, as you say: a Berkeley graduate, a venture capitalist of some success, a very, very wealthy guy. He is in fact very well-spoken and quite persuasive in some quarters. He has this famous TED talk in English, where he got this gigantic standing ovation. I’ve described him before as sort of the first sword of China apologia. He’s very gifted, I mean, in that sense. Yeah.
Let’s get back to the 工业党. Yeah. You know, it’s often spoken of in juxtaposition to the so-called 情怀党, which I’ve seen translated variously as the sentiment party or the sentimental party. Does this 情怀党 actually have a representative online novel or body of literature associated with it, you know, like we see with Lingao, or is this just a straw man? Is it even a real thing?
I think the 情怀党, if I understand correctly, is basically the civic space that once existed on the Chinese internet, and I would say it no longer exists. I can say Chai Jing would be seen as 情怀党 by industrial party standards, because Chai Jing is this Chinese journalist who made a documentary about air pollution, you know, Under the Dome. Yeah, Under the Dome. She made a lot of influential documentaries and journalistic pieces to remind people of the human cost of China’s rapid development. She cared about migrant workers’ rights. She cared about the people who were dislocated because of the demolition and reconstruction of cities. She cared about air quality, right? Yeah.
So anything that’s been negatively affected or left behind by China’s headlong rush toward industrialization, right? Yeah. I would say the industrial party doesn’t itself have the power to purge the sentimental party, you know, the humanists, the free journalists, China’s civic space, but the industrial party justifies the state’s marginalizing and purging of what they call the sentimental party.
But I think the sentimental party is actually a core part of my formative experience, because I grew up in a China where the internet was a place to discuss real things, from political reform to the rule of law to freedom of expression to many other things. I remember reading a lot of absolutely brilliant investigative reports.
I remember reading so many great stories about the one-child policy, about how this one town in China forged ties to systematically trade baby girls and have them adopted in the U.S. A lot of stories like this couldn’t exist in today’s China, because of the demise of the sentimental party, because of the state’s effort to eradicate them.
So the industrial party, in a sense, doesn’t have any real political power, but I think they are the collective unconscious of the regime, of what the CCP really prioritizes, what it really thinks about.
Just to be clear, when you say eradicated, it’s not like they were rounded up and locked up. You’re talking about censorship, about all sorts of lawfare efforts, pressures to, yeah.
Yeah, I mean, when I was in high school, when I was in middle school, I could go on Weibo and read Han Han’s pro-democracy essays, and those are really bold, quite fundamentally radical essays if you read them now. I would be reading Charter 08, written by Liu Xiaobo. I would be reading a lot about the Arab Spring. I mean, a lot of this content existed inside the Great Firewall. It was a beautifully diverse, chaotic, teeming intellectual space. I kind of grew up on this internet.
People in those internet forums seriously talked about civic stuff, seriously asking: can China have political reform in the future? Because those possibilities were so real back then.
I think when you talk about the industrial party, you need to dial your clock back to the 2000s and 2010s, because the tension between the sentimental party and the industrial party was really real then. I wrote in the piece that I think the signature event was the Wenzhou high-speed rail crash, the train wreck. I still remember vividly where I was that day and how I felt, because I was about to board the high-speed rail to Beijing. The high-speed rail had finally come, because I grew up in Shanxi.
Shanxi is an economically backward province, so I was really excited that Taiyuan finally had a high-speed rail connecting to Beijing.
Instead of taking an overnight trip to Beijing on the old train, you could now get there in only three hours.
Beijing, the cosmopolitan city, in my mind back then as a high schooler, was suddenly so close by; I could just go there. I was so excited, and then the story broke about the terrible train wreck in Wenzhou.
I remember there was a huge debate online about who was guilty, right? Like, where was the weakest link in all this? If you dig way back in the Sinica archive to July or maybe August of 2011, you’ll find the show that we did about that.
Yeah, so I remember back then, all the public intellectuals were still active; their accounts were not yet banned. So a lot of people online were writing lengthy articles or posting about the liability of the authorities, who didn’t have a proper monitoring system.
And so basically, because a certain signal was missed, two high-speed trains crashed into each other. It was pure human error: a certain message didn’t get sent to the other side, and so the tragedy happened.
But anyway, I remember so many people writing about it online, and there was this one piece basically crying for China to slow down. It went,
“slow down China, wait for people.”
The implication was: don’t let such bloody train crashes happen again, because this is a price we cannot afford just to progress aimlessly.
And this was the moment when the industrial party people came in and took the stage. They organized a systematic rebuttal against the humanistic, pro-slowdown discussion.
The industrial party intellectuals had a big advantage in knowing so much industrial knowledge, because they are the ones building a lot of China’s infrastructure. For example, I featured one such intellectual, Ma Qianzu, one of the authors.
So the industrial party people organized a big rebuttal and systematically published many articles, not to justify the accident, but to say: we should take this accident seriously, but it shouldn’t be the reason for China to slow down building its high-speed rail infrastructure.
And yes, I think the industrial party and the development logic won the debate, and so China didn’t slow down.
I mean, retrospectively, if it had slowed down, maybe China wouldn’t have such a convenient, vast, amazing network of high-speed rail today. But back then, whether China should develop like this was a real, very visceral, and painful question to confront.
A lot of people’s feeling was: no, we really shouldn’t progress like this. Why are we allowing ourselves to be a colony of development?
But right now we’re basically sitting in the future, meditating on the dispute, and one could say, of course, development is China’s thing; it’s what China always wanted.
But no, there were people strongly against a lot of the things the government proposed. There were people who found it interesting to ponder the alternative.
Yeah, but that’s what this itself is. Lingao itself is pondering an alternative.
Now I haven’t read it myself, not one bit of it, I’m probably not going to, but I’m hoping you can give us kind of a controlled spoiler.
So a wormhole opens to 1628 from our present, or from the present of the time, you know, when they were writing this.
So how does the alternate timeline then unfold? What kind of society do these guys end up building in Hainan? How different does it end up looking from our own history? How much do they change history in this project, in the book?
Yes, so okay. Reading this book is very interesting, because the plot evolves as the people who write the story evolve. A lot of the writers would write themselves in. The story features a captain, the captain of the ship that transports the 500 time travelers back to the Ming dynasty. The writer himself, under his real name and real-life nickname, became known as Captain as well. At a certain point, the boundaries between past and present, fiction and reality, kind of blurred.
The same happens with Ma Qianzu himself. He is one of the main people in the novel. So it’s Qianzu and Qianzhu: they’re spelled almost the same in pinyin, but one is Ma Qianzu, which implies humbleness. Ma Qianzu means you stand by the horse to serve, a foot soldier. But the fictional Ma Qianzhu is arrogant. You are Qianzhu: you can see a thousand miles away. Zhu means gazing.
I think things like this are very interesting. The basic premise starts from a simple thought experiment: what if you can travel back to the Ming dynasty with modern knowledge and equipment? People started writing about it without character building or discussion. The first 30 chapters are all about people getting together to think about what equipment they should bring to the Ming dynasty.
You will see this laborious preparation list, almost like the list a very organized person writes when packing for a long trip. People spend 30 chapters to prepare for this list.
Then, around chapter 37, people finally get together to board the ship that will take them to Ling Gao. You also see this immense obsession with details:
They conduct serious, detailed risk assessments. It’s really first principles thinking—almost like an action manual. If you really had a wormhole to travel to the Ming dynasty, you could simply follow it.
This is because a lot of the knowledge is factual. Professional people research and fact-check it themselves and each other in a peer review process ensuring scientific accuracy. People are thinking about how to bootstrap an industrial revolution on this island—what do we need?
But, let’s get to my question: how far do they take it? Are we talking decades of institutional development, or does it mostly stay in an early building and consolidation phase? Do they change history profoundly? Do we even know what history looks like now as a result of the changes they make?
The story kind of progresses as the current time progresses, I would say. Everything stays in the Ming Dynasty—there is no fast forward to the Qing Dynasty or the Republic period. The time flow of the Ming Dynasty basically matches today’s pace.
Because the story has been written for about 20 years, a lot has changed:
There are also plots, like some fanfic, which are not part of the main story:
There is also a plot branch that diverges from the main story, where people end up colonizing Australia and forge a huge empire, almost like the British Empire. In the 19th century they forge a huge Lingao-Australian empire across Australia, New Zealand, and Southeast Asia, with the northern part being Lingao county, like Hainan and Taiwan. Yeah, so crazy stuff, really crazy stuff.
But what really strikes me about this, as you’ve described it, is this is an alternate history that doesn’t imagine salvation through new ideas, or a moral awakening, or the scientific revolution necessarily, but actually just through kind of competence, and very specifically through technological, technical competence.
There’s this like obsessive attention to getting the tech tree right, like:
It's just precise accumulation, step by step, as you've described. But alongside that there's also, as you talked about in your piece, the unglamorous work of building institutions that can sustain these capabilities over time.
They've thought through a lot, it seems. Seen that way, Linggao feels less like escape fiction and more like a thought experiment about governance, and about why technocratic instincts have such appeal in China.
Let me ask about this, because there's a framework for this explicitly in academic terms. We usually talk about the Needham question, and about things like Kenneth Pomeranz's book The Great Divergence. These are ways of explaining why industrialization took off in Europe rather than in China, when by some measures China seemed quite ready for it by the Song dynasty.
We had the capability to do mass mechanized production in some ways. But again, I haven’t read it, but reading your essay and the broader discourse around the industrial party, it feels like this community has its own implicit theory of history.
How would you characterize the industrial party's answer to the Needham question? What do they seem to think actually mattered in producing the divergence that Pomeranz describes?
I think — well, I got educated in China, and there's the sort of national scar that the hundred years of humiliation left behind: China didn't modernize until the European powers kicked our ass, and only then did China start industrializing.
This part of history has always been a sort of collective scar, a wound, a true wound basically, among everyone I know who received their primary education in China.
So envisioning an alternate China that started to modernize and industrialize at the pace of its European counterparts has always been, I think, a psychologically comforting thought experiment.
I also noticed that sometimes the national consciousness in Linggao's plot is actually quite weak. Of course, a big part of it is almost like salvation porn.
“That’s a good way to talk about it: salvation porn.”
Part of it is salvation porn, but I realized a big part of it is the joy of meticulously planning everything itself. To the engineers, building itself is very joyful; it is beautiful, it is satisfying.
Because I observed this among not just Chinese engineers but among a lot of the western engineers that I’m friends with. They love YouTube channels like Primitive Technology. It is literally an Australian man using mud to build all sorts of tools from zero.
It seems like engineers really enjoy this sheer ability to transform their surroundings with the scientific knowledge they possess. That's totally my dad, for example.
It seems to be something deep in us as Homo sapiens: we really enjoy thinking about our ability to transform our surroundings.
I mentioned Robinson Crusoe. I think Robinson Crusoe is like the 18th-century Primitive Technology YouTube channel.
“For sure, for sure.”
When I was a kid, I was obsessed with this book because I constantly imagined myself as this all-powerful human being, going to a savage island and civilizing the place by sheer intelligence, by the modern, advanced knowledge I possess. And thinking about this, it's not just the ligong nan, the Chinese STEM guys; it's also the Western engineers I know, it's also you, it's also me. Very interesting, right? Reading this makes you happy.
Watching the Primitive Technology YouTube videos makes me calm. As the offspring of a hunter-gatherer society, I find it psychologically safe.
So I think a big part of Linggao's dopamine hit comes from the writing about technology and planning itself, from writing about building a civilization itself, rather than from national salvation. Yeah, right, for sure.
I get that. I get that for sure. There’s something about—I mean, it’s a flex, you know? They get to show, look how I understand the very fundaments of the technologies that I deal with. But there’s also something like this kind of inherited historical vulnerability at work here.
You know what you talked about, this century of humiliation thing. I mean, not a grievance in a narrow sense, but just kind of a memory of how badly things can go, you know, when state capacity falters.
So I wonder, in addition to this satisfying kind of, you know, tech just tech qua tech, there is—I wonder if there’s this kind of implicit never again embedded in the discourse, you know? Not just about foreign domination but also about chaos, about fragmentation, about, you know, loss of national agency, right? I mean, that’s in there too. I wonder if that appeals to you as well.
I agree. I agree. We should remember that engineering, industrialization, and urbanization are the things that truly gave the Chinese nation power. We shall engrave this in our bones.
I think this is part of the message that Linggao Qiming, The Morning Star of Linggao, has been projecting. And it reminds me of a scholar named Wang Xiaodong — you're probably familiar with him, yeah, of course — who wrote, I think in 2009, Unhappy China. I remember it was a big intellectual sensation.
He is the one who coined the term industrial party. In the article where he coined it, he stated very clearly — and I actually want to read this:
“We must never envy the finance, the Hollywood, the Grammys, and the NBA of the West. We would rather forge iron and smelt copper and let the Americans sing and dance for us, because forging iron and smelting copper is where true power lies.”
And I think this basically crystallizes the industrial party's salvation arc: it is industrial capability that made China powerful, so that other people couldn't kick our ass again.
The true power, the true international strength, the reason the European countries, America, and Japan wouldn't bully us anymore, is that now we can forge iron and smelt copper. It is not because we can sing or dance, not because we care about social welfare; it's because we can build stuff.
I think the industrial party has such clarity about the importance of engineering and industrial knowledge.
I want to quickly shout out Fred Gao, who actually wrote another essay right after yours came out. It happened that the very day I read yours, there in my inbox was Fred's Substack. He had written about it as well, and he definitely helped me get oriented.
But what you’ve just described, it’s engineering then becomes an act of patriotism, right? It becomes synonymous with patriotism. Building is loving your country, and that connection seems to be quite explicit in the whole industrial party discourse.
I mean, building itself becomes a moral act. It takes on moral weight, which is a really interesting worldview.
Fred also frames this in his writing as a generational revolt, especially against earlier, more literary or humanistic modes of thinking about China — the mode where we worried about the cost, the human cost.
He doesn't describe outright hostility exactly, but a sense that those ways of talking had become unmoored from material reality.
So there is this tension between the wenyi qingnian (literary youth) phase and the ligong nan (STEM guy) dominance phase.
But I want to get to this gendered layer here, which feels really important for me to acknowledge: this industrial party worldview, this whole emphasis on engineering, on discipline, on technical mastery, feels very gendered to me in terms of who speaks with authority and what kinds of traits are valorized.
You're somebody who identifies as a feminist, and you work very fluently across technical and cultural domains. How do you read the gendered dimension of who gets to imagine the future in these narratives?
I think, first of all, Linggao Qiming itself is a piece of historical record, because the collective writing process peaked maybe during 2011 to 2015, and this is the internet before China's feminist awakening. A certain feminist consciousness hadn't arrived in China yet.
So Linggao Qiming is in a sense a product of its time — a pre-feminist cultural product — and people just didn't have a lot of tools or frameworks to criticize it.
A lot of the women writers who participated probably felt extremely uncomfortable, but they couldn't name why. Now, looking retrospectively at this text, at these primary sources, it is very much misogynistic.
It's just like how misogynistic Liu Cixin's Three-Body Problem feels when you read it in Chinese.
I mean, Ken Liu did a great job removing a lot of the poorly written female parts, but it's still in there. There are definitely plots in Liu Cixin's work along the lines of:
“Oh, you're a woman, but how can you listen to Bach, this German composer? Bach is such a representation of rationality, of rational music. How can a woman appreciate this beautiful, high-minded, rational music?”
You know, such plots permeate Linggao. Among the first 500 pioneers, only a very small group are women; it's predominantly men. And I think the maid revolution is the part that's really fascinating, because Linggao basically operates on a semi-military structure where resources need to be centrally planned and allocated to people.
It is a techno-authoritarian society, also a bit of a technocratic hierarchy: people who possess the most engineering knowledge have higher social status.
So at the time, there is this distribution of:
These people are very unhappy. These are all fictional plots, by the way, and those plots are about the incels of Linggao, the single men of Linggao.
The implication is that the domestic servants are also, in effect, sex slaves; this is never explicitly said, but later you see Linggao society operating as a semi-feudal yet techno-authoritarian political structure.
Later, they recognize that:
This is all part of the modernization process.
China’s modernization success depends on female workers in the factory, so Linggao is like:
“Okay, if we’re rational enough to truly industrialize Hainan, to truly industrialize Ming dynasty, we shall truly give the female servants proper treatment, so we can properly…”
So it’s basically all rational, not like:
It’s not moral—it’s rational.
So it's rational for the Linggao community to progress toward a scenario of male-female equality; it's basically a historically fatalistic direction rather than something born of humanitarian concern, kindness, or moral goodness.
Wow, there's just so much to plumb here, and it's the theory of history that underpins all this that I'm particularly interested in. Maybe I will at some point take a crack at the thing. It'll be good for my Chinese, anyway.
So let me shift away from Linggao a little here.
I do want to bring it back into frame, but there's this book Breakneck, by Dan Wang, which is one of the most talked-about books of 2025. Dan, of course, as you know, describes China as an engineering state.
I mean, listening to you talk about Linggao and the industrial party, that phrase starts to feel less like an abstraction and more like an actual lived worldview, right? Does that framing resonate with how you understand what Linggao is imagining, or does it miss something important?
You have this book club where you've been reflecting on Chinese-language discussions of Breakneck. You know, it's called, what, Reading Breakneck in China?
Yeah, reading Breakneck from China.
Right, right.
One thing that struck me in your book club reflections—I’ll link to that because you’ve written about it on your Substack—is that Chinese language discussions about that book seemed less surprised by that framing than English language ones.
So, I mean, did the idea of an engineering state feel like any kind of a revelation to Chinese readers, or more like seeing something familiar finally given a name?
I really appreciate Dan's framing. I think it at least better captures a certain reality in China. I honestly think the democratic-versus-autocratic binary is not helpful anymore. If you look at the US, what's democratic about the US, right?
I know a few China-focused scholars who used to study China's authoritarian regime and have now all sort of pivoted to studying the US's authoritarian turn.
You know, I honestly think Dan’s framework can somehow better explain the reality and better get to the point. It’s really helpful, it’s really instrumentally helpful.
And Dan tends to be playful with this framework. He's not 100% serious about it; he doesn't want to overturn the democracy-versus-autocracy paradigm. But yeah, I'm going to borrow that cop-out from him.
I'm just being playful here too. It's a way of not committing completely, right? I mean, we have generations of scholars studying authoritarian systems, and in a sense I don't think Dan wants to challenge that.
I think he came up with this framework just to better explain today's China and today's US.
Yeah, I do appreciate this framework, and I think the engineering state captures a lot of the knee-jerk developmental intuition of Chinese society, as well as the party's industrial policy.
I think the industrial party ideology is reflected in the CCP itself, and I would argue that this spirit of prioritizing industrial development is a collective unconscious among so many powerful people, so many decision makers in China.
For example, Xi Jinping's "new quality productive forces." I think that phrase is very industrial-party-coded, because it implies that China's economy is stagnating: we don't have the growth and prosperity we had before, so how do we solve the problem?
Okay, let's shift to these magical new quality productive forces. Let's do more engineering, let's upgrade our engineering, and the problems will be solved.
I think there is an industrial-party-coded naivety, an innocence, in it, and a big part of the CCP's decision makers still think they can engineer a lot of problems away. But in reality that's no longer true, and many intellectuals within the industrial party itself have started their own reckoning with China's problems, realizing that a lot of them can't be engineered away.
So, Dan Wang's book: do you feel like it hits differently with English and Chinese audiences, given their different lived experiences? If you had to sum it up, how does it hit your Chinese friends, many of whom have maybe not spent time in the West, differently?
A lot of people are overly obsessed with if China is a real engineering state. For example, they would argue:
So a lot of the Chinese-language readers living in China would be dissatisfied with Dan's engineering-state verdict, because they would argue, you know:
I tend to think it's very useful to accept it in a provisional and playful sense. But there's this irony I keep coming back to: it feels like it's only in the last year or so that many Americans have become fully aware of the scale of China's industrial might.
Yet, in our conversation, it sounds like the industrial party worldview—the whole framework that helped articulate and legitimize this push from within China, this crazy breakneck, engineering-driven mentality—is already losing some of its explanatory force in China. It’s weird that Americans are only starting to believe this is the case at the moment when the industrial party logic has lost or is losing its grip.
I don’t think the industrial party logic has lost its grip in China. I’m pretty sure a lot of the industrial policy decision makers still very much adhere to the industrial party logic:
But the intellectuals who were part of the industrial party movement in the early 2010s, I think, have started to suffer from China's declining economy and, say, COVID. For example, Ma Qianzu, an influencer with two million followers on Bilibili and a very articulate writer, had his account banned because he voiced certain issues during COVID. Ma Qianzu got cancelled by the state, even though he used to support everything the state did.
This brings us to the irony where the industrial party people, the engineers themselves, are very smart and aware of certain societal issues like:
These issues don’t necessarily yield to the logic of industrialism.
I'm curious about Fred Gao. I don't know if you know him personally, but I've met him in Beijing. He's a really nice guy, and he has been explicit about moving away from the industrial party orbit over time.
I wonder if this is a personal evolution or symptomatic of a broader shift in discourse. I think for many industrial party intellectuals it's a personal evolution. They have grown out of the industrial party phase. I would say they lost their innocence in believing engineering could solve everything. It's not a magic potion.
Ma Qianzu himself definitely took some hits in life before realizing that his youth was starry-eyed and innocent about many things. It's called growing up. A lot of people I know had that kind of super-faith in technology early on: anything that didn't surrender to the hard logic of mathematics and engineering was just worthless. They'd ask, "Why bother reading novels?", that kind of thing.
People grow up, right?
It's really funny, because within the crypto community I've also met a lot of rationalist engineers, people who hang out in the rationalist forum community. I see them growing up as well, starting to learn that:
I see them also sort of growing out of this obsessive, almost purist phase.
It's funny: Ma Qianzu right now speaks out a lot about child-rearing, and he speaks out about local government debt and certain central-local relations. He also had a clearly dissenting voice during COVID. Well, I mean, it's comforting to know that it's still possible for people to change.
Yeah, let’s go for one final question just to wrap this all up about what Lingao tells us about China today. If someone wants to understand contemporary China—not the politics necessarily, or the policies, or the political imagination—what should they take away from the Lingao phenomenon? What does it tell us about how China thinks about:
What’s your big bottom line takeaway?
A big takeaway is that maybe stories like Linggao are worth more attention. In a sense, it's a more grassroots Santi, a Three-Body Problem that's more widely accessible: an egalitarian, collective, Liu Cixin-style building process. Three-Body Problem has Liu Cixin as its representative figure, but I think Linggao maybe speaks more to the unpolished, the authentic, the grassroots, the organic aspect of these things.
For me, reading Linggao has been such a journey. It introduced me to knowledge I had never really thought about. During the peak of their debates in the early 2010s, the industrial party constantly laughed at the humanistic journalist who complains about suffocating urban life and wants to escape to the forest, so long as that journalist can still take a hot shower and get online.
That escapist imagination ignores the infrastructure a hot shower and a wifi connection require. The industrial party deeply advocates for the invisible wires buried in the ground, for the pipes that carry hot water to this escapist little garden of Eden. The humanistic journalist imagines only the idyll, but the industrial party people really made a lot of that invisible stuff visible to me.
In the context of US re-industrialization, such knowledge is revealing, because I used to take hot water and electricity for granted. Then I learned that's not a given. China's electricity supply is top of the world right now: the high-voltage grids, the convenient industrial base, everything needed to fuel China's innovation.
Yeah, the industrial party really gave me certain knowledge that humbles me, because I could easily be that ignorant humanistic journalist complaining about urban life. I want to take a hot shower in the forest and not reply to emails, but I still want wifi. I could completely ignore the infrastructure, the iceberg under the ocean.
Yeah, in a sense, Linggao is a textbook for learning about the industrial process from first principles. It's tedious to read, but also fun to read. That's really an interesting take. I gotta wonder what these guys would think of Li Ziqi today.
For those of you who don't know, Li Ziqi is a very, very popular video blogger, huge on YouTube. She's a woman who left her life in the city to go home to the Sichuan countryside to take care of her aging grandmother, and she has built an enormous following because she's so good. On the one hand, she sounds exactly like that kind of journalist who wanted to flee, as long as there were hot showers and internet.
But this woman also has mad skills. She crafts, she builds; she'd be a great asset on Hainan Island in 1628 for these guys, because she knows how to build stuff, how to make stuff, all these traditional crafts. I wonder what they would make of somebody like her.
She embodies both what they don't like and what they very desperately need.
Oh yeah, if I were a Linggao writer, if I were part of the industrial party, I would salute Li Ziqi, because I would meticulously break down the amount of planning it takes for her to create a seamlessly beautiful video like that. If I were an industrial party member, I would appreciate the engineering side of her production. I would be like,
“Oh my god, it's because you did so much invisible infrastructural production work.”
So the 20 minutes, the visible time of her on screen, can look so effortless and seamless. I generally think the Linggao people would appreciate her engineering skills, her production engineering and resource-management skills, for sure. Fantastic!
What a fun conversation this has been, and the time has just flown by. Afra, let’s move on now.
First of all, thank you for spending so much time speaking with me, and again, everyone’s got to go and read your piece if you haven’t done it already. It’s just a wonderful piece of writing. For me, I think it’s one of those things where this little slice, as you say, just this artifact of Chinese culture, made me think so much about the contemporary Chinese condition. It made me think so much about, you know, the mindset that really does—in so many ways—just sort of inform and shape the world that we inhabit today.
It’s become—it’s not just ideology, it’s more like infrastructure, right? The whole mentality, in many ways, has come to define the modern polity.
But let's move on and talk about this segment that I call Paying It Forward. If you've got a young colleague or a friend or somebody whose work you want to call attention to, now is the time to do it.
I think one thing I need to shout out: I mentioned in my piece that there's no English translation of Linggao, which turns out not to be true. Two months ago, a group of people took it on as a passion project, translated the canonical version into English, and made a website of it.
They also have a GitHub repo documenting the tools they used to translate the piece; they used Gemini 2.5 to translate everything.
Yeah, I'm just really glad that people are putting in the effort to systematically translate Linggao into English, so I would recommend reading that. That's the first recommendation.
Second: unfortunately, if you don't speak Chinese or listen to Chinese-language media, you're missing some great content. Baihua is a podcast incubator started by my friend Izzy. We're all sort of founding members of Baihua, and we're trying to incubate more Chinese-language podcasts.
One of the podcasts I really like and really appreciate is called Xin Xin Renlei. I can also send the link.
“Please do.”
Xin Xin Renlei is a podcast hosted by three tech journalists who are also, like me, really bilingual and understand the tech world on both sides. They find some very interesting niche topics to discuss. For example, they would talk about:
Yeah, so highly recommend Xin Xin Renlei. The English name is Pixels Perfect.
Pixels Perfect. Xin Xin Renlei.
Okay, well excellent, excellent—that’s fantastic. Now, I don’t know whether that was your paying it forward recommendation or your actual recommendation recommendation. I distinguish between them, but did you have a book or something that you wanted to recommend?
Yes, I actually read voraciously, and I have a lot of books I would recommend. One is edited and written by Kerry Brown. It's called
China Through European Eyes: 800 Years of Cultural and Intellectual Encounter
I think that book, to me, is—
You know, we always talk about how China is a foil and a mirror for the West's imagination, and how people's obsession with China — the way people project China as a beacon of technological advancement today — is actually a sense of otherness, right? An othering of China.
So this book illustrates that this phenomenon is not new. It has existed for 800 years. Many European intellectuals have portrayed China as a projection of otherness: alien, different. It could be either really beautiful or really ugly, utterly powerful or utterly powerless. Hegel would argue that the reason China couldn't develop modern technology and modern systems was the Chinese language itself, the characters.
Basically, Kerry Brown, as a historian, compiled sixteen or eighteen prominent European intellectuals' takes on China, from Voltaire to Hegel. I think it's a fascinating intellectual genealogy. I would recommend it.
Yeah, that sounds great. I have all the time in the world for Kerry Brown. I think he's wonderful, brilliant, and a fantastic writer. I don't understand how he writes so much; he seems to have a new book out every six months.
Oh, I have another one I really must mention: Yi-Ling Liu's upcoming book, The War Dancers. It's coming out, I think, at the end of February, and it's a history of the Chinese internet over the past 30 years. I think you're going to be interviewing her.
Yeah, I read it. It's absolutely such a work of craft, beautifully crafted and so well written. She's such a great writer. Honestly, as her friend, I really admire her craft. Such a role model.
Yeah, we know each other socially as well, and I am going to have her on the show to talk about the book. It's great because the book is really well written. I read it; it's called The War Dancers, though I couldn't remember the full title. I'll make sure to put it in the show notes. It's an excellent recommendation.
Related to your recommendation of Kerry Brown, just to remind people: I recommended a very similar book ages ago, although it's not just about China but all of Asia. It's Jürgen Osterhammel's Unfabling the East: The Enlightenment's Encounter with Asia, and it's absolutely great.
My recommendation for this week actually has something in common with that. It's by Tamim Ansary, one of whose books I've recommended before. He's an Afghan American writer and journalist, and he wrote a book called Destiny Disrupted: A History of the World Through Islamic Eyes.
It's a real deep dive into the history of Islam as understood by Muslims themselves, from the time of the Prophet in the 7th century all the way up to September 11th. I think it's a very useful exercise in building cognitive empathy and understanding the Islamic worldview. Not that there's one single monolithic worldview, but it's a great book.
It also reminds me of another book written by Kim Stanley Robinson, who also likes to write about hard science like Liu Cixin and the Industrial Party. He has a book called The Years of Rice and Salt. I was just talking about that book the other day with a friend of mine. It’s a great book.
I recommended that one on the show years and years ago. It's an alternative history, which I really like. And since we're talking about alternative histories here: it's not a time-travel one, but the premise is that the Black Plague ends up killing 99% of the people in Europe. It starts with Tamerlane's troops coming up to the Bosporus and deciding, "Nope, we're not going over there." They had been planning to conquer Europe, but there was no need; the plague had already killed everyone.
Fascinating, yeah, fascinating book. It also has a lot of Buddhist touches, like reincarnation. The interstitial chapters are like the Bardo chapters.
Yeah, I really hope China has someone like Kim Stanley Robinson. He can be both spiritual and insanely technical, as in Red Mars and Green Mars, which are very detail-oriented about terraforming Mars.
But a lot of his work is also deeply humanistic. Of course, there's his cli-fi classic The Ministry for the Future as well. So yeah, I would love to meet him one day. He seems like such a wonderfully interesting man. I know, I know. I love his recent book The High Sierra; it's almost like he's the embodiment of the California spirit, both technologically aware and deeply drawn to the mountains.
I don't know, there's just something fascinating about this guy. I really like him. Yeah, yeah, yeah.
All right, hey, well thank you so much, what an enjoyable conversation. I think we could go on recommending books to one another for several more hours, but we will call a stop to it.
I look forward to meeting you in person one day. I’m going to be in England at the end of the month of February, but I don’t know if you’ll be around. I think so.
If it's London, yeah, I'll be around. It was such fun recording a podcast with you.
Okay sir, thank you for inviting me. Yeah, yeah, what a great time.
You've been listening to the Sinica Podcast. The show is produced, recorded, engineered, edited, and mastered by me, Kaiser Kuo. Support the show through Substack at www.sinicapodcast.com, where you will find a growing offering of terrific original China-related writing and audio.
Email me at [email protected] if you’ve got ideas on how you can help out with the show or if you just want to say hi. Don’t forget to leave a review on Apple Podcasts.
Enormous gratitude to the University of Wisconsin-Madison's Center for East Asian Studies for supporting the show. Huge thanks to my guest Afra Wang. Thanks for listening, and we'll see you next week. Take care.
I earned my degree online at Arizona State University. I chose to get my degree at ASU because I knew that I’d get a quality education. They were recognized for excellence and I would be prepared for the workforce upon graduating.
To be associated with ASU both as a student and alum makes me extremely proud. Having experienced the program, I know now that I’m set up for success.
Learn more at ASU Online: asu.edu
⚡️ Prism: OpenAI’s LaTeX “Cursor for Scientists” — Kevin Weil & Victor Powell, OpenAI for Science
Okay, we're here at OpenAI with some exciting news from the AI for Science team. With us is Kevin Weil, who is, I guess, VP of AI for Science.
VP of OpenAI for Science, yeah. OpenAI for Science, and Victor Powell, who is the product lead on the new product that we’re talking about today. And with me is our new AI for Science host, RJ. Welcome.
“Thanks for having us.”
“Thanks for having us. Yeah, it’s very good to be here.”
“Thanks for hosting us as well. It’s always nice to come over to the office.”
What are we announcing today?
So we’re launching Prism, which is a free AI-native LaTeX editor.
What does all that mean? Because probably a lot of people listening haven't worked with LaTeX. LaTeX is, effectively, a language for typesetting mathematics, physics, and science in general.
So if you're a scientist writing a paper, you're probably not using Google Docs, because you have diagrams, you have equations, et cetera. LaTeX has been the standard for decades, but the tools people use to actually write it, to write their papers, haven't changed in a long time.
And in particular, AI can help with a lot of the tasks, right? Because you spend your time doing the science, you need to write it up. That’s an important part of communicating your work. But you want that to be fast, and you want that to be accelerated, and AI can help in a ton of ways. And we’ll talk about some of those.
But if you step back, right, it is OpenAI for Science. Our goal is to accelerate science. And the surface area of science is very large. So we’re trying to build tools and products that help every scientist move faster with AI.
Some of that is obviously the work that we can do with the model, making the model able to solve really hard scientific frontier kind of problems, allowing it to think for a long time. But it’s not only that, right?
If there was a lesson from what happened over the last year with software engineering, it’s that part of the acceleration in software engineering came from better models. But part of it also came from the fact that you now have AI embedded into the workflows, into the products that you use as a software engineer, right?
And so that’s what we’re doing here. So OpenAI for Science, it’s both building great models for scientists and also speeding them up by bringing AI into the workflow. That’s what we’re doing with Prism.
I often say that for every million copy-pastes done in ChatGPT, there's probably a product to be built.
“Right, exactly.”
That’s a good analogy.
“Yeah.”
That’s a good way to look at it. Especially with LaTeX, having written a lot of LaTeX papers.
“Yes.”
“Yeah, me too.”
The number of hours I spent as a grad student trying to get some diagram to line up exactly.
“Exactly.”
“Oh, man.”
Yeah. Cool. And Victor, this is your sort of baby.
“Yeah, I guess it started off as just a project. I left Meta about three years ago, looking for different projects to start, and this was one where, when I presented it to people, they said, oh, I get it, I see what you're doing.”
And so I've just been focused on that, building it for about a year and a half. And now it has become part of OpenAI, which has been very exciting.
“Congrats.”
“Thank you.”
Yeah, so it's kind of a fun story. As we were thinking about this, we had a thesis: it's not just models, it's also building models into the workflow and accelerating scientists that way.
There are obviously a lot of different ways you can do that, but scientific collaboration and publishing is definitely one of them. I was looking around at what's in this space, and there hadn't been a lot of innovation for a long time.
It wasn't that different from when I was writing up my assignments and papers in TeX in grad school. And then somewhere on Reddit, maybe it was /r/LaTeX, I don't remember, I found this thing about a company called Crixet.
And I was looking around, I couldn’t find who the founder was. It took me a little while. And then I think I found you on Twitter and DM’d you out of the blue and just said,
“Hey, I don’t know if you want to talk about this, but I would love to talk about this if you’re open to it,”
and gave you my number. And we talked on the phone and then jumped on a Zoom and eventually met in San Francisco and made it happen.
“That’s right.”
It's awesome to have you guys here. I have a ton of respect for what you started to build. I actually never heard that full story from you until now. You've got to find that Reddit user and thank them, because, you know, it might have been me.
I thought you were totally in stealth because it was the hardest thing to actually figure out who the founder of this thing was. And then I was like, “Oh, for sure. He’s not going to respond to my random DM.”
I mean, I guess part of our focus has always been entirely on product, to the point where it's almost embarrassing how little we focus on anything else.
Yeah. It worked out for you.
It's also a full-circle moment for you, using Twitter to do your business development.
Yeah, that’s right. So that’s kind of interesting.
Yeah, the DM is probably one of the most important social-network innovations. And I'm sure you know a lot about that.
Shall we go right into a demo or talk about it?
Yeah, always fun to show it.
I’m a fan of show, don’t tell. Push people to the video.
All right. I’ll try and arrange this so you guys can see a little bit.
Yes.
So this is Prism. What you can see on the left here is actual LaTeX. You can see why you might want AI to help you write it: it's a language, and it's a little bit messy.
And on the right is my colleague's paper. Alex Lupsasca is a physicist, and this is a paper he wrote on black holes. You can imagine trying to write this in Google Docs or something; it'd be impossible.
This is why LaTeX is super powerful.
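For anyone who has never seen LaTeX source, the left-hand pane holds markup along these lines. This is a minimal, hypothetical sketch, not the actual paper; the wave-operator equation is illustrative:

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\section{Introduction}
% An equation like this is routine in LaTeX and painful in a word processor.
\begin{equation}
  \Box \psi \equiv \nabla^{\mu} \nabla_{\mu} \psi = 0
\end{equation}
\end{document}
```

Compiling this source produces the typeset PDF shown in the right-hand pane of the demo.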
And then you've got the files that make up the project.
You can go through and change it, and then compile that into the PDF itself. And here at the bottom you can use the AI, which is running GPT-5.2. So say I want a little help writing the introduction.
So,
“Help me proofread the introduction section paragraph by paragraph, suggest places where I can simplify.”
This is a live demo and we're still working on it pretty heavily, so you can't help being a little nervous.
Spoken like a true founder.
One of the nice things is you could do this in ChatGPT, but you’d have to go upload your files into a chat, right? And you’re going back and forth here because the AI is built into the product. It has all of the files that are part of your project. It automatically puts them in context. It works the way you think it would work.
So here it’s looking at the files.
And it's given us a diff here, suggesting changes. You can see the different places where it's suggesting we change things.
So, okay, we can, we’ll just keep all of them, right? YOLO.
Here’s the thing — we’re changing Alex’s paper. What’s the big deal?
So here’s another thing we were talking about: diagrams in LaTeX.
So, say I wanted to input a commutative diagram. It's really easy to draw one like this on a whiteboard, but it's an absolute nightmare to put these things into LaTeX.
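To see what "nightmare" means concretely: even a simple square diagram needs specialized markup. A minimal sketch using the widely used tikz-cd package, with made-up object and arrow labels:

```latex
\documentclass{article}
\usepackage{tikz-cd}
\begin{document}
% A commutative square: the two paths from A to D agree.
\begin{tikzcd}
A \arrow[r, "f"] \arrow[d, "g"'] & B \arrow[d, "h"] \\
C \arrow[r, "k"']                & D
\end{tikzcd}
\end{document}
```

Getting the arrows, label placement, and spacing right by hand is exactly the fiddly work being delegated to the model in the demo.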
So I will upload this photo and I’ll say here, whoops.
Is there a TeX bench for this kind of stuff? Like a set of evals?
So here’s a commutative diagram that I drew on the whiteboard:
“Can you make it into a LaTeX diagram and put it right after the, I don’t know, right after, right before, right at the top of the introduction section? Make sure you get the details right.”
So, I didn’t want to interrupt you while you were typing, but why don’t you use voice?
Oh, actually I should. I totally could.
Yeah.
No, but isn’t it interesting that we all have these voice buttons and we don’t use it?
Yeah, it’s not second nature yet. Like it’s interesting.
And that one I totally should have. I was going to also show something. So here I am in the LaTeX and it’s working.
You also can create new parallel chats. So you can have whole sessions with ChatGPT that can be going in parallel.
So here I'll ask about all these equations. We're talking about symmetries of this black hole wave equation, and in particular there's this complex symmetry here. Notice how it syncs when I highlight it. I'll go to my chat so I can start doing this in parallel.
I'll say, please verify that the H-plus operator in the new symmetries section is indeed a symmetry of the stationary, axisymmetric black hole wave equation. Do you understand those questions? You lost me after "wave equation." I don't know if Brandon is actually a natural physics person.
Yeah, I'll say, don't do it in the paper, show it here. I don't want it to actually edit the paper; I just want it to prove it here. Okay, so I'll get that going.
Now, while we're waiting for the diagram to finish, we can get another thing going in parallel. So I'll say, I need to write up a set of lecture notes on general relativity. Say I'm a professor teaching a class: put together a 30-minute set of lecture notes on Riemannian curvature.
Wow. That’s a very different task. Put it into the file. I made this gr_lecture.tex. Okay. And so I’ve got this going.
All right, it came back on my earlier one, the H-plus symmetry. Is it really a symmetry? ChatGPT did a whole bunch of work to verify that this is indeed a symmetry of the equation, and it confirms it.
Right. So you’ve got the full power of a reasoning model that can think deeply about frontier science. And now we can go back while it works on the other thing.
Okay, so this was where I was making the diagram. It put it right below the introduction, and I'll compile it again; this is an auto-compile, which you can turn on.
Yeah. Okay. And it nailed it. It looks like it got it pretty much exactly; just a small check of the details.
Oh yeah. Good enough for me. Yeah. It’s pretty good, but all right, we can see if it’ll get it right. Let’s say, the C vertex should be directly…
To your point about voice though, I do think maybe over time the code might recede into the background more as you’re really interacting with the paper.
When you started this product, how were you envisioning it would be used? Or were there other design choices you were considering that you didn’t take?
By the way, before you answer, our general relativity lecture notes came back, and that was quick. Several pages there for a 30-minute section.
Okay. So we got curvature, covariant derivatives. This looks like a reasonable set of notes if you were going to go teach a class, right? It just did it for you.
Or you can think like, you know, generate the problem set for this week.
So it's got some examples here, and we could tell it to work out solutions to the examples. That's sort of a hidden feature of LaTeX, too: it actually makes it pretty easy to generate problem sets with answer sheets and things like that.
There are so many cool features of LaTeX that I think are underutilized.
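The answer-sheet trick mentioned above can be done with a plain TeX conditional. A minimal sketch, not Prism output; the problem, solution, and toggle name are all made up for illustration:

```latex
\documentclass{article}
\newif\ifanswers
\answerstrue % switch to \answersfalse for the student handout
\begin{document}
\section*{Problem Set 1}
\textbf{Problem 1.} Compute the Riemann tensor of the round 2-sphere of radius $r$.
\ifanswers
  % This block only appears in the instructor's version.
  \emph{Solution.} The only independent component is
  $R_{\theta\phi\theta\phi} = r^{2}\sin^{2}\theta$.
\fi
\end{document}
```

Compiling the same source twice, once per toggle setting, yields matched problem and answer sheets.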
So anyway, you can see we proofread the introduction, turned a whiteboard sketch into a diagram, verified a symmetry, and drafted lecture notes, basically all in parallel.
And you can imagine lots of other things you can do.
For example, if you have a proof and maybe just have the bullet points on a proof, you can say, “Here are the bullet points. Now flesh it out for me.”
You can imagine checking all of your references before you publish, making sure all of them are real and up to date. You can imagine having it generate your references based on the topic.
There are just so many areas where AI can help. Getting all the references right is a big problem when you're putting a paper together.
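The standard LaTeX mechanism for references is BibTeX; a minimal sketch with an obviously fictitious entry (the checking discussed here is what the AI would add on top):

```latex
% main.tex
\documentclass{article}
\begin{document}
As shown by \cite{doe2024}, \dots
\bibliographystyle{plain}
\bibliography{references} % pulls entries from references.bib
\end{document}

% references.bib
@article{doe2024,
  author  = {Jane Doe},
  title   = {A Placeholder Result},
  journal = {Journal of Examples},
  year    = {2024},
}
```

Every `\cite` key must match a real, accurate entry, which is exactly where stale or hallucinated references slip in.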
This is time that used to go to typing a paper, not science. And now it can go back to science.
And that’s just one of the ways that we look at accelerating scientists all over the world.
I would say definitely be careful about including references you haven’t read.
Like that’s the point: you can put a hundred references, but if you didn’t read them, you might as well not have them.
But yeah, I think that web connection is very important.
And is this stock GPT-5, or is this a fine-tune?
It’s GPT 5.2.
Yeah.
Yeah.
But, and by the way, when you’re looking at references, you can also ask ChatGPT to help you understand the reference, you know, read this paper, tell me the relevance. So all of the things that you might want to do to accelerate your work, you can just do from within this interface.
You still have to do your work, but it should make it faster, especially linking to the references so you can go and verify: okay, this is the one. This might also make it easier to write the paper as you do the work, rather than: okay, now I've got to spend two days in LaTeX land trying to get my paper together.
Right. Like a tool for thought rather than just a publishing tool.
Yeah.
What about collaboration?
It’s great.
Yeah.
So it's built for collaboration, and you can speak to this too. You can bring on as many collaborators as you want, which is nice. I think most other tools in the space have hard limits and charge you for more. In Prism, it's as many collaborators as you want, for free.
Commenting.
Yeah. So you’ve got commenting, you’ve got all the kind of collaboration tools that you would want.
Yeah.
Good.
And what about the engineering choices? What might engineers not appreciate when just looking at a tool like this?
Often it's the multi-line diff generation you need to do, because you're editing a pretty complex document; it does get pretty complicated. Let me know if I'm getting too deep into the weeds, but we rely heavily on Monaco, the JavaScript editor library.
So I'm very familiar with the lack of documentation in Monaco. That's interesting you say that, because it's very true: it's an extremely powerful library that is almost entirely undocumented.
Yeah, it's just types. But you can use Codex now to generate the documentation for you.
Yeah, you'd think Microsoft would get on that.
But yeah, yeah.
You know, just stuff like that. I like hearing about the behind-the-scenes of building something like this.
I think one interesting early challenge was that we really pushed on WebAssembly at first: the entire LaTeX compilation ran in the browser. That helped us flesh out the design and the AI capabilities early on without investing heavily in backend infrastructure.
But eventually we did hit a wall with that approach. Once we switched it to a backend PDF rendering, that’s when we really started to hit an inflection point with usage.
Now it's fast.
Yeah.
Yeah.
I think the AI in here also benefits a lot from everything we've learned building Codex. And as we go forward, I think we'll likely integrate the full Codex harness into the application here.
So you get all the benefits of the tools and the skills and all the things that Codex can do today, and you can automatically bring that into your environment here.
Yeah.
Are they just the same app?
Maybe. I think potentially it depends on…
I mean, here's the reason I'm hesitating. The interesting thing with this and with Codex is that we're still mostly in a world today where you review what the AI produces.
But the more AI improves, the more people trust it and just YOLO it, right? You're generating code, and looking at the code is secondary to instructing the AI and driving from that.
The UI probably changes for all of these things, right? You don’t need your document front and center because you’re actually not looking at your document as much. That’s sort of your backup and your interaction with your AI is primary.
And as that happens, I think you might see these UIs kind of converge over time. So we’ll see.
But I definitely would love to see a world where people needed to spend less time thinking about the actual syntax and much more about what they’re trying to create.
Yeah.
I feel like this plus a notebook would be amazing.
Yeah.
Because then the AI can run an analysis, generate plots, and stick them into the paper here. Like, read this part of the paper, take that equation, and do something with it. That would be a really amazing integration.
Yeah. Like think through the different corollaries of this thing from this paper and produce some alternatives. And then like, yeah, I completely agree. Yeah. Yeah.
I do think that’s sort of the progression where it’s like doing, doing maybe work for a few seconds versus maybe we’re already at a point where it’s doing work for a few minutes, eventually doing work for hours, days, coming back with very complicated analysis.
Yeah. I mean, that, that’s actually maybe a good segue into some of the other questions that I had about your initiative.
So, stepping back to AI for science in general. I have a million questions, but let me start with this: I feel that validation is critical to the success of AI for science. You have to have some sort of real-world validation of the results you produce with your AI, right?
I know there's been some publicity in the past. What are the latest and greatest hits of things that big labs, or any lab, are doing with OpenAI's models?
I mean, when you step back and look at the trend, I think that's the biggest thing. We can debate exactly, and you've probably seen in the last few weeks even, there have been a bunch of different examples of GPT-5.2 contributing to open research problems and things like that.
And then you get into this debate of how novel each result really is, and that's a legitimate discussion. But step back: two years ago we were saying, this thing can pass the SAT, that's amazing. Then it could do a little contest math and start to solve harder problems. Wow.
And then you keep going and it’s starting to solve graduate level problems. And then you have a model that gets a gold medal at the IMO. And now we’re sitting here talking about, you know, it solving open problems at the frontier of math and physics and biology and other fields.
So it’s just, I mean, the progression is incredible. And if you think about where we are today, then you fast forward six months, 12 months. I am very optimistic about what the models are going to be able to do to accelerate science.
Yeah. It’s like, it’s already happening. If there’s one thing that I’ve learned from my like two-ish years at OpenAI, it’s:
“You go very quickly from this thing is just impossible for AI to do. Like it’s too hard. I can’t do it” to “Hey, I can just barely do it. And it kind of doesn’t work. Only early adopters are doing it because it’s not particularly reliable yet, but it sort of works,” to “Oh my God, AI does this thing really well. And I could never imagine not using AI for this in the future.”
It’s like, once you start to get to, you know, five, 10% on some particular eval, you very quickly go to like 60, 70, 80. And we’re just at the phase where AI can help in some — not all, but in some elements of frontier science, math, biology, chemistry, et cetera. And it just means we’re like right at the cusp and it’s super exciting.
So fast forward a year, or to the end of the year, and we have AIs that can do a lot of this discovery process. Then the bottleneck becomes the wet lab, right?
Yeah. So what are you seeing in that domain?
Yeah. And by the way, we were talking a little about software engineering before, and the analogy holds: I think 2026 for AI and science is going to look a lot like 2025 looked for AI and software engineering.
Where if you go back to the beginning of 2025, if you were using AI heavily to write your code, you were sort of an early adopter and it kind of worked, but it certainly wasn’t like everybody was doing it. And then you fast forward 12 months and at the end of 2025, if you were not using AI to write a lot of your code, you’re probably falling behind. I think we’re going to see that same kind of progression in AI and science where, today it’s early adopters, but you’re really starting to see some proof points and solving open problems, developing new kinds of proteins and things like that.
But you're right: as it really starts to work, and I think this is the year it really starts to work, it shifts the bottleneck.
And I think we're going to start talking a lot more about robotic labs and other things. Do you need a grad student pipetting things? Right now you do, but why shouldn't we have robotic labs, with AI models doing what they do best: reasoning over a huge amount of different information.
They have read substantially every paper in every field and can bring a lot of information to bear to help prune the search tree on, for example, a new material that you’re trying to create. Then you have a robotic lab that can roll out a bunch of experiments in parallel, do them while we sleep, and feed the results back into the AI, let it learn from them, design the next set of experiments, and go.
So it doesn't even have to be purely in silico science, right? To your point, you're verifying as you go, because you have an actual lab building it in real life. But you can do so much more in parallel: think harder upfront with AI to design the experiments, prune the search tree, search over a smaller number of higher-value targets, then automate the experimentation and turn it around faster.
And again, like this is acceleration: if we’re successful, you end up doing maybe the next 25 years of science in five years instead. So in 2030, we could be doing 2050 level science, and that would be an awesome outcome. The world is a better place if that happens.
Absolutely. We spoke recently with Heather Kulik at MIT, and one of the things she pointed out was that there's an element of serendipity to working in a lab that you lose with automation.
So again, you relieve a bottleneck, but humans need something to do.
Well, what she said sounds totally reasonable to me. There are probably places where humans are adding no value because they’re literally just trying to pipette a certain amount of a thing and do another thing, or do some repeated motion in a bunch of different ways.
And then there are places where it’s less well understood. You want the full flexibility of a really smart human thinking about the work they’re doing.
By the way, the same is true in the more theoretical fields as well, where it’s not about automating all humans out of their jobs. This is about accelerating scientists. It’s scientists plus AI together being better than scientists alone or AI alone.
I think the same is true whether you’re talking about:
- something happening in silico, like proving a theoretical result
- something happening in the real world with a lab
Find the parts that you don’t need a human to do and try to automate them as much as possible so the humans can spend their time on the most valuable things.
I’m very pro the in silico acceleration, because you have more control over that and you can parallelize, repeat, and do all those things.
I think there will be huge value, because a lot of fields are heavily simulatable. In nuclear fusion, for example, people run a lot of simulations before experiments, because experiments are very time-consuming and expensive.
But I’m excited to see what you can do when you have a loop between a very intelligent reasoning model that understands fusion and a simulation: the model thinks about what parameters to set, runs a bunch of simulations in parallel, feeds that back, and you have the same sort of lab loop—except it’s all in silico, running on a giant GPU cluster.
Then, when you’ve really gotten to the end of that calculation, you go run it IRL.
And bringing it back to Prism, this is a nice aspect: you're getting a more sophisticated view of your result, instead of just a chat output. I would hope, as it develops, it becomes a way for a scientist to interact with the information before kicking off a nuclear fusion experiment that costs, you know, $10 million or whatever.
And the human can learn from more things, right? You just get more data that you can look at and evaluate. So, yeah.
So this fusion discussion makes me think: if one day OpenAI for Science gets serious enough and starts to self-accelerate, you should solve fusion and be your own power source.
Well, this is why we're so excited about this, right? Our mission is to bring AGI to the world in a way that's beneficial to all of humanity.
It's right there in the lobby. You see it every day you walk in. Yeah, absolutely.
And imagine: if we had GPT-9 inside of ChatGPT today, it would be awesome, you could do lots of things. But if you had GPT-9, which I'm using as a stand-in for AGI, and it could do science itself: discover new medicines, new materials, new sources of energy.
Like, that’s the real benefit of AGI. That’s, I think, maybe the most tangible way that we’re all going to feel AGI as it starts to be real.
Yeah. And that’s why this work is so mission-driven for us.
So that brings up two questions in my mind. Though you laugh, it's a little bit serious: all the AI-for-drug-discovery companies ended up being drug companies because they couldn't sell the drug, with some exceptions now, like Noetik, for example.
They end up being drug companies because they can't sell the drug. In any event, there's a lot of precedent for using AI to basically build your own portfolio.
So, are you thinking about that angle or this is right now just about enabling scientists outside of OpenAI?
Yeah, my personal belief, as we drive towards AGI, is not that we're going to create AGI and then all sit back, enjoy our universal basic income, and write poetry. The future, especially in advanced science, will involve experts helping to drive these models.
I don't believe any one company is going to do everything. That's why we're focused, first and foremost, on accelerating scientists outside these walls. Our goal is not to win a Nobel Prize ourselves; it's for a hundred scientists to win Nobel Prizes using our technology.
Yeah.
At the same time, I think there are places where sometimes, when you’re building for other people, you learn best if you actually go end to end on something.
Yeah.
Because then you’re your own customer and you understand it in a tighter loop than you would if you were purely building for people outside the walls.
So, I think it makes sense for us to take a handful of bets like that, but by and large, we’re going to partner because the surface area of science is massive.
Yeah.
And we want to accelerate all of science.
Yeah.
We're covering all sorts of disciplines, from chemistry to structural biology to materials science. It's all over the place; there's a lot to do.
One thing I did want to bring up as well: AI for Science sits within the broader research org at OpenAI, and one of the more interesting things there is self-acceleration, let's call it.
Where Jakub has very publicly declared that we’ll have an automated researcher by September 2026.
Yeah. The beginnings of one, I think you said, right? Like the intern version this year?
Right.
First product.
Yep.
And I’m sure you have more cooking internally, but why so soon? That’s eight months away. What’s the goal there? Anything you can share?
Yeah, I mean, eight months feels like forever in this industry. For AGI timelines, that's basically infinite time.
No, it's exactly what you said: if we can create an AI researcher that can actually do novel AI research, then we can move way faster. We will self-accelerate. We can discover more things quickly, apply GPUs and compute to moving our own research faster, and improve our models at a faster rate.
And every bit we improve our models brings us a step closer to AGI and all the things we were talking about, personalized medicine and new materials. We can bring these amazing things into the world faster. So it is about self-acceleration.
Yeah.
I think one thing I'm most trying to figure out is how close machine learning research, which is a science, or high-performance computing, which you're also doing a lot of, is to the traditional hard sciences, let's call them: physics and chemistry.
In a lot of ways it's a parallel effort to this. We're doing OpenAI for Science, accelerating other scientists; the parallel internally is building products and models for AI researchers, to accelerate them.
So there’s a lot of sort of parallelism to these two work streams. They’re similar in goal, just for a different set of users.
Yeah.
Okay.
Any parting thoughts, questions, anything we should have asked?
Well, I hope everybody tries Prism. It’s available today at prism.openai.com. It’s totally free. You log in with your ChatGPT account, and you can go build anything you would like. We’re really excited to see what people use it for, and if you run into issues or have any feedback, let us know.
I have a paper I’m going to write really soon on that.
Amazing.
We'll do the show notes for this episode in it. Let's see what it does in LaTeX.
Yeah. Totally.
Yeah.
Congrats on your first OpenAI launch.
There you go. Congratulations.
Congrats.
Thanks for having us.
Yeah.
Thank you.
The Engineering State and the Lawyerly Society: Dan Wang on his new book “Breakneck”
This episode is brought to you by Progressive Commercial Insurance.
As a business owner, you take on a lot of roles: marketer, bookkeeper, CEO. But when it comes to small business insurance, Progressive has you covered. They offer discounts on commercial auto insurance, customizable coverages that can grow with your business, and reliable protection for whatever comes your way.
Count on Progressive to handle your insurance while you do, well, everything else. Quote today in as little as seven minutes at ProgressiveCommercial.com. Progressive Casualty Insurance Company. Coverage provided and serviced by affiliated and third-party insurers. Discounts and coverage selections not available in all states or situations.
Parlez-vous français? ¿Hablas español? Parli italiano?
If you'd used Babbel, you would. Babbel's conversation-based technique teaches you useful words and phrases to get you speaking quickly about the things you actually talk about in the real world. With lessons handcrafted by over 200 language experts and voiced by real native speakers, Babbel is like having a private tutor in your pocket. Start speaking with Babbel today. Get up to 55% off your Babbel subscription right now at babbel.com/wondery. Spelled B-A-B-B-E-L dot com slash wondery. Rules and restrictions may apply.
Welcome to the Sinica Podcast, a weekly discussion of current affairs in China. In this program, we’ll look at books, ideas, new research, intellectual currents, and cultural trends that can help us better understand what’s happening in China’s politics, foreign relations, economics, and society.
Join me each week for in-depth conversations that shed more light and bring less heat to how we think and talk about China. I’m Kaiser Kuo, coming to you this week from Chapel Hill, North Carolina.
Sinica is supported this year by the Center for East Asian Studies at the University of Wisconsin-Madison, a national resource center for the study of East Asia.
The Sinica Podcast will remain free, but if you work for an organization that believes in what I’m doing with the show, please consider lending your support.
You can reach me at [email protected]. And listeners, please support my work by becoming a paying subscriber at SinicaPodcast.com. The semester is starting, two big tuition checks to write, so help a guy educate his kids.
You will enjoy, in addition to the podcast, a range of other subscriber benefits.
And of course, you will have the knowledge that you are helping me to do what I honestly believe is important work.
So do check out the page to see all that’s on offer, and please do consider helping me out.
Dan Wang has been on the Sinica Podcast a couple of times before, and I am delighted to have him back today.
He is one of the sharpest and most original observers of China’s technology sector and manufacturing landscape, having won a certain level of fame for his annual letters and other essays — writings that somehow managed to combine on-the-ground insights with big picture perspectives.
Dan worked for Gavekal Dragonomics in Beijing starting in 2017. After a stint with the Paul Tsai China Center at Yale Law School, he’s now at the Hoover Institution at Stanford.
If you’ve seen the PBS Nova documentary “Inside China’s Tech Boom,” which I had the pleasure of narrating — it’s a film by David Borenstein — you’ve already encountered Dan. He was a featured voice helping to explain the deeper drivers behind China’s technological rise and talked eloquently, I thought, about the importance of process knowledge, of what the Greeks called metis, which is an important idea that’s really stayed with me and has become quite foundational to my understanding of China and the importance of manufacturing.
Today, we’re going to be talking about his new book, which comes out just about the time you’ll be listening to this. It’s called:
“Breakneck: China’s Quest to Engineer the Future.”
It’s a book that posits — and here I’m greatly oversimplifying — that China is ruled by engineers and they do what engineers like to do: they build. America, on the other hand, is ruled by lawyers. It’s an engineering state on the one hand and a lawyerly society on the other.
Dan’s book is full of memorable witticisms and pithy, trenchant observations. Perhaps most importantly, it explores what each side might ideally learn from the other. They obviously each have their strengths and their weaknesses, so I’m really anxious to ask Dan about whether he thinks Americans are actually learning the right lessons or just burying their heads in the sand and inhaling big plumes of copium.
Before we jump in, I want to point out that this book was especially interesting for me as somebody whose abortive doctoral dissertation was specifically about the rise of this engineering state, about the emergence of technocrats in post-Mao China. So things might get a little into the weeds. I ask your forgiveness in advance and will do my best to keep it reasonably accessible.
Dan Wang, welcome back to Sinica and happy birthday, man.
Dan Wang: Thank you very much, Kaiser. And what better birthday present than to speak to old friends like this?
Yeah, it’s great to have you back.
We have to start with what, for me, was clearly the most important part of your entire book, which is that magical and totally improbable guitar-making hub in Guizhou that you stumbled upon as you, Christian Shepherd of the Washington Post, and another friend rode your bikes through that mountainous province toward Chongqing.
As a card-carrying guitar nerd, this totally blew my mind. I’ve got to find this place. How does a little inland town end up just cranking out guitars for the whole world? I mean, is this just one of those serendipitous quirks of China’s industrial sprawl? Or is there something systematic in how the state, local governments, and entrepreneurial networks operate so that these clusters take root in the unlikeliest places?
And I guess more importantly, were there any of you guys who were guitar players? And if so, did you guys try out some of the local handiwork while you were there?
Kaiser, you’re much cooler than me. You are a guitar player; I am a clarinet player. And on coolness, you just really outrank me.
How indeed did a kind of third- or fourth-tier city in Guizhou become one of the great hubs of guitar making?
Well, in 2021, when I was stuck in China during the summer due to the success of the zero COVID strategy at the time, I asked two friends of mine,
“Hey, why don’t we go on a really long bike ride somewhere in the southwest, which I find the most beautiful part of China?”
Oh, for sure.
And so over five days, we cycled from Guiyang to Chongqing: four days in the province of Guizhou, and then on the fifth day we reached Chongqing.
It was on our second or third day that we came across these giant guitar symbols on the side of the road. There were guitars hanging off streetlights. There was a giant ornamental guitar on a hill, and off in the distance, another big guitar you could see on the town square.
And so we were very puzzled by this. We unfortunately didn’t stop to try out the handiwork. I’m pretty sure that neither Chris, Zheng Tung, nor I are anything like real guitar players ourselves.
And afterwards, I found that Zheng’an County in Guizhou is indeed the largest guitar-making hub in the world. I think something like 30% of the guitars produced in China are made there; the exact figure is in my book.
And that happened due to a great accident in which a lot of folks in Guizhou were moving to Guangdong. In the 90s, Guangdong was making absolutely everything and anything. Some people were making guitars for export. And so a lot of people from Guizhou just happened to move to a particular guitar factory.
One of the things we really found on our bike ride, going through China’s countryside, was, as Tristan very astutely observed, that there are hardly any middle-aged people, or people in their 20s or 30s, to be found in Guizhou. It’s a lot of children being looked after by their grandparents. And that’s because anyone who is able to work has moved to the coastal areas, where you can have a much better job producing guitars or whatever it is for export.
And what the local government in Zheng’an did was notice that, well, there are a lot of people from here making guitars. Guitars are not really endemic to the local culture; playing guitar is not really a Guizhou thing, not even necessarily a Chinese thing.
I’m working to change that, but yeah.
Well, you’re a big force, Kaiser. Maybe we can change that. But the local government just tried to attract a lot of people back, saying,
“Hey, why don’t you move back home to Guizhou? You can make a lot of guitars here.”
And somehow that strategy worked. And so a lot of people moved back to Guizhou from Guangdong, and now they’re producing guitars mostly on the lower end.
So this is not the sort of things that will be sold in, I think, the high-end guitar shops that you would probably frequent, Kaiser. But there is some innovation here, and I expect that they will get better and better.
Yeah. I mean, it’s amazing how good the quality of Chinese guitars is now. It’s astonishing. And all of the major brands are actually making a lot of their guitars in China now.
Yeah. No, that’d be great. You can pedal there. Right. Yeah. No, I’m not going to do that.
But yeah, were the enticements just the usual package: tax incentives, steeply discounted infrastructure, promises of raw materials? What did they do to entice people to a place like that? What do they typically do?
I think the typical enticements were all there, but there was also something more personal. A lot of folks from Guizhou, folks from the Southwest, don’t necessarily love the Southeast and Guangdong where they were working. It’s too humid. They might say, “We don’t love the Cantonese food. Where are all the spices? Where are all the pickles? Where are the really pungent flavors that folks in Guizhou are used to?”
And so this coincided with sort of this rural revitalization program that Beijing has emphasized for quite a while now. And so I think it is just this big happy accident that I would say a pretty random place in Guizhou is just making so many guitars now.
Awesome. Dan, I know you’re going to end up on every major podcast talking about this book, so I want to avoid just asking you about the main themes or going through chapter by chapter. Instead, I was hoping that we could use the main themes of the book as kind of a jumping-off point to explore a lot of the questions that popped into my head as I read it, questions I’m sure you’ve thought about as well. Not necessarily things that made their way into the pages of the book itself, but let me start here.
I mean, we can all rattle off the obvious differences between an engineering state and a lawyerly society. You’ve got speed versus procedure, a certain social orderliness versus the chaos of pure market forces. But what are some of the more subtle trade-offs, the ones that most people don’t even know they’re making, that maybe shape daily life in each system? I’m thinking predictability, dignity, moral legitimacy. Which of these things matters to people who live inside each system?
Yeah. Well, I want to push you a little bit on this, Kaiser. I wonder which is the system that delivers legitimacy. I could posit that the lawyerly society has some degree of legitimacy because there are some procedures in place that people expect that rules have to be followed, and maybe the lawyers are better at following the rules.
On the other hand, the Communist Party, I think, would say, well, we have much greater legitimacy. We have this, what is that term, whole-process democracy, in which we are delivering much better things for the people. So I think legitimacy is a concept here that we can play around with a little bit.
What I’ll say is that I came onto this framework of the engineering state in part due to some excellent articles I found, from 2001, I believe, written by an interesting analyst at the time called Kaiser Kuo, who pointed out that quite a lot of engineers were being promoted into the Central Committee and the Politburo.
And I think there has been quite a lot of discussion since 2002, which is the really striking year when every member of the Standing Committee of the Politburo, notably Hu Jintao, as well as Wen Jiabao, had degrees in engineering.
Of course this was a really striking fact for a lot of people.
I think there has also been this kind of view and understanding that America is very lawyerly and that the government is of the lawyers, by the lawyers, and for the lawyers.
And so what I wanted to add onto this kind of general understanding that was in the air, so to speak, was that I felt like I really experienced the merits and the madness of the engineering state by living there from 2017 to 2023.
I was in China at a time when a lot of things were getting a lot better. The high-speed rail system had really come to fruition by then. People were no longer shoving each other around to get in line. The system felt quite rational and well-organized.
Shanghai is a marvelously functional city where one is never really more than 15 or 20 minutes away from a subway stop. Shanghai was building all sorts of parks; it had built about 500 parks by the year 2020, and by the end of this year, the city targets a thousand. Shanghai is just this remarkably functional, livable place.
And so that was something that I really experienced by living there. But Shanghai is also infamously the city that suffered perhaps the worst lockdown in the history of humanity, in which 25 million people were unable to leave their apartment compounds for about eight to ten weeks over the course of the spring of 2022. And so that was something that I felt very acutely myself.
When I moved to Yale Law School’s Paul Tsai China Center, I was really embedded in one of the most elite, elite-making institutions in the United States, really seeing that the US is run by lawyers, seeing how the Biden administration at that time had been really, really lawyerly. About 11 out of 15 cabinet members in Joe Biden’s administration had gone to law school, and many prominent figures went to Yale Law in particular.
That was sort of what I wanted to add, that this was something I lived and felt in both places.
Yeah, absolutely. We’ll talk a little bit about this idea of performance legitimacy down the road. But I want to dig into the philosophical underpinnings of this contrast that you highlight.
In the West, we often reach for the trolley problem as a kind of shorthand for thinking about moral tradeoffs.
I mean, do you pull the lever to sacrifice one life in order to save five? I’ve often wondered how this dilemma looks different through a lens like the one you’ve drawn, whether it looks different between an engineering state and a lawyerly society.
I would imagine an engineering-oriented society would be more inclined to treat this as a kind of technical optimization problem, where you just minimize total loss, while a more lawyerly society would insist that an individual’s rights can’t simply be traded away for the aggregate good. Kind of a utilitarian versus a deontological philosophical orientation, you know.
Maybe that points to a deeper distinction. I mean, do these orientations that you’ve described line up with the classic contrast between communitarian or group-oriented values on the one hand and individualistic ones on the other?
Yeah. I think that’s actually a pretty fascinating question. I wonder if there is a systematically different way that Chinese tackle the trolley problem in a way that is pretty distinct from the way that Westerners think about the trolley problem.
I think the level I was thinking about a little bit more was this: part of the reason I wanted to come up with this framework of engineers and lawyers is that we’ve been reasoning about the US-China conflict in these 20th-century terms, like left versus right or democracy versus autocracy. And all of these terms have some use, but I’m not really sure that they still apply in very nice ways now.
You know, are we going to say that China is fundamentally left-wing or right-wing? Well, I can make arguments on both sides. Is the US fundamentally more left-wing or right-wing? Again, this is something we can debate, and I’m not sure how far exactly we get with these sorts of frameworks.
And so the framework that I came up with of the engineering state and the lawyerly society, I would submit is just no worse than trying to figure out exactly to what extent China is Marxist today.
You know, I don’t think that Marxism is quite the right lens to try to understand the People’s Republic. Maybe it is, but I think what we need to do is have a plurality of frameworks here.
Maybe we should have something like this discussion of the engineering state and the lawyerly society as well. We just need to have more than one framework to think about the great conflict of the moment.
Yeah, no, I completely agree. And that’s what I really like about this particular framework: it takes us beyond these sorts of ideological binaries, China being just such an incredibly syncretic society that blends so many traditions.
But the one thing that I think it all circles around is this technocratic policy, and I think it feels like, to me, a very, very good explanatory lens. So I applaud that.
I’ve often used a concept I kind of borrow from economics when I think about what a society values. And that’s, you know, the concept of elasticity.
You know, I imagine that in every society, individuals have kind of an intuitive sense. I don’t think they have it mapped out really explicitly, but, you know, how much of one thing that they value, they’d be willing to give up to gain some amount of something else that they value.
You can, you know, kind of almost put numbers to it.
I'll trade you three points of administrative efficiency to get one point of procedural fairness, right?
Or I'll trade you two points of transparency for one point of speed.
I mean, it seems to me that for decades, Americans’ coefficient of elasticity has been really, really rigid. They’ve been very unwilling to trade down in civil and political rights, even for pretty markedly improved economic outcomes. But Dan, I’m wondering if you agree that lately, because of China’s example in manufacturing strength, in infrastructure, in its energy build-out and the energy transition, and in education, STEM education especially, there’s a shift happening. And I think you note this: it’s happening both on the right and the left in America, within MAGA and also among, say, the abundance bros, right? Derek Thompson and Ezra Klein and those guys. More Americans seem willing to accept some erosion in rights or process in exchange for what they believe are better material outcomes.
Do you think that that coefficient is changing? And if it is, does it change the way that you think about the lawyerly versus engineering states, especially if we start seeing each side borrowing from the other’s value hierarchy?
Well, there’s certainly a lot of borrowing between the U.S. and China at the moment. But I’m not sure that they’re borrowing all the right things.
Yeah. That’s the big issue. Are they?
Well, I think what we are seeing with the Trump administration is a lot of authoritarianism without the good stuff, good stuff like functional subways, better transit, and better infrastructure generally. I think you’re very right to point out that there is a sense of deep dissatisfaction in the U.S. That is always true everywhere at all times, but I think there is an especially big sense at the moment that the U.S. has not been very functional for quite a long while.
The U.S. has not been very functional because, especially in the bigger cities, things are just far too expensive. In cities like New York, Boston, or San Francisco, housing prices are really unaffordable for too many people. These are cities that try to build new infrastructure, mass transit, and basically don’t do a very good job of it.
You know, I was really struck that it’s not just that New York is unable to build new subway stations and new subway lines with any sort of efficiency; it costs about $2 billion per mile to build new subway in New York City. They’re not even doing the simpler stuff very well.
And so this is the sort of thing that looks kind of ridiculous. Why does it take several years to upgrade a bus station? I realize it’s kind of a complex structure, with all sorts of intricacies in the tunnels, but still, this is fundamentally a bus station; it shouldn’t take more than five years to build out.
So, you know, we have broken mass transit. We have unaffordable housing. The pandemic revealed that the U.S. isn’t able to manufacture a lot of pretty basic goods. There were shortages of masks and cotton swabs. There were shortages of furniture, all sorts of simple consumer goods that weren’t easily exportable from China at the time.
And so there is a pretty big sense that nothing is working when we have to face this critical transition to decarbonize the economy and to build a lot more solar, wind, transmission lines, which all demand quite a lot of land.
And so I wonder whether the U.S. ever even made a conscious decision to trade away that elasticity for proceduralism. Because one of my arguments in the book is that the proceduralism encrusted itself over a long period of time, without anyone really intending to create a lot of processes everywhere throughout the American government.
This is a force that sort of took on a life of its own. And it was something that a lot of homeowners, and especially the NIMBY set, exploited, I would say, to block new housing for students in Berkeley, or to block a solar or wind project as well as its transmission lines. This became something that richer people were able to access and exploit to block projects that they didn’t like.
And that isn’t really even a majoritarian demand for greater proceduralism. This was kind of an independent force that grew upon itself, with a vested minority deeply invested in keeping the system so that they are able to block a new apartment building if it takes away their light, for example.
You know, you work to be very fair in the book, and that’s something I really like about it. I mean, you don’t just heap praise on the engineering state. You make a point of calling out the downsides, and they’re very real. Can we talk about some of those? What are the problems of the engineering state? What does it get wrong? You sort of channel James Scott’s Seeing Like a State and a lot of the excesses of that kind of thinking.
There are two chapters of your book in particular that really dwell on this. They are about, of course, the one-child policy, which is a conspicuous failure of the engineering-state mentality, and also the zero-COVID policy, which starts off as sort of a triumph. Not right away, right? It displays some of the pathologies at first, I guess, but by the spring of 2020 you see this V-shaped recovery. You see China really use its state capacity to wrangle the COVID epidemic.
But then of course, you talk quite a bit about the lockdown. So, talk a little bit about what some of the major downsides are. I think the engineering state has major upsides.
Um, so to be clear, I really want to articulate that the speed of construction of new housing in China, new roads, tall bridges, subway systems, nuclear, all sorts of construction in China, I would say is net positive. You could go to Guizhou as I did, look at these really tall bridges. It is pretty easy to say, well, this is a bridge to nowhere, but I think it is also true that a bridge to nowhere quickly turns nowhere into two somewheres at the ends of these bridges.
If you take a look at China’s major infrastructure, I would say that on net, it’s been extremely positive, that the benefits have way, way, way exceeded the costs.
Now, I would say that there have certainly been some costs: human, environmental, and financial.
But in spite of these costs — human, environmental, financial — I would still say that the benefits of infrastructure way exceeded the downsides of so much frenetic construction.
When I say that you talk about downsides, I don’t mean to suggest that you present a kind of moral equivalence between the systems. It’s pretty clear that you believe one side needs to learn more from the other right now.
It’s pretty clear where you think the osmotic gradient should flow.
The problem, I think, is that the Chinese leadership is not only physical engineers. They’re also fundamentally social engineers, and they cannot stop themselves from treating the population as just another building material to be remolded or torn down as the circumstances demand.
And so I think we can point to a lot of social engineering projects in China and we can point to the repression of ethno-religious minorities in Tibet as well as Xinjiang. Even with the Han majority, people have lived for a long while with the hukou system, which is not even fully abolished yet, in which it becomes really difficult for a migrant worker to move to Beijing or Shanghai and access educational facilities for her child.
What I really decided to focus on were these two big projects that you mentioned: the one-child policy and zero COVID.
And I think you’re really right to point out that zero COVID follows an arc that isn’t very straightforward.
I think the first act of this big dramatic arc of zero COVID was the spring of 2020, or even earlier in the winter of 2020, when I was living in Beijing and we heard about this new pneumonia that was spreading through Wuhan.
And then we saw the Wuhan lockdown, which was in January, on January 23rd, I believe. You have these sorts of dates emblazoned in your mind if you lived through the pandemic in China.
The Wuhan lockdown, and hearing the stories of the ophthalmologist Dr. Li Wenliang, who raised valid concerns and was disciplined by the state for raising them, created a lot of anger among pretty much everyone I knew that there was yet another respiratory virus spreading from China.
This was the second one in 20 years, after the first SARS crisis.
There had been some political suppression of bad news up until the state really tried to react and tamp it down in a big way. And so that was the great first act, when a lot of commentators in the U.S. and parts of the West were sometimes even gleefully saying that this might be China’s Chernobyl moment, in which a disaster triggers the political downfall of the entire regime.
And then the second act proved a lot of that wrong. The second act of China’s COVID experience was the much longer period when Beijing, Shanghai, the central government, and local governments proved that China was able to control the virus much more effectively than the U.S. or much of the West could. And so the second act was people in China feeling relatively glad that they were living in China, free of transmission, able to carry on life relatively normally.
There were some costs. I wasn’t able to see my parents who were in Pennsylvania. My parents were telling me this very un-Chinese thing, which is to say,
“Stay there. Don’t come to visit us. Trump’s America in 2020 is a terrible mess. So, you should just stay in China where life is a lot better.”
They weren’t wrong. They weren’t wrong at the time.
But then there was the third act of China’s COVID experience. That third act was triggered by the much more transmissible Omicron variant of the virus, which overcame a lot of vaccines and was just extraordinarily transmissible. That was really the variant of the virus that forced Shanghai to go into lockdown for about eight weeks in the spring of 2022 when people could only go downstairs to their apartment compounds to have their noses and their throats swabbed. Otherwise, you couldn’t really go outside even for any sort of fresh air.
And so this was a time that drove a lot of people crazy. This was a time when a lot of families were suffering some degree of food insecurity because the Shanghai government had no logistical capacity to really try to deliver food to a lot of families. I knew a lot of families where the parents really tried to reduce their food intake so that they could save some food for their kids.
The food shortages resolved after, I believe, something like the second week of April. But, you know, this was something that was pretty extraordinary—that people were feeling food insecure in China’s largest city in the year 2022. That was really surprising.
And then the great denouement of China’s COVID drama came in December 2022, when Beijing decided to drop all COVID restrictions in the coldest month of the year, when people had very few fever reducers in stock, and zero COVID became, in effect, total COVID.
And so in Shanghai, I caught COVID around December 22nd, when I think everybody else was catching COVID at around the same time. So luckily, I had quite a fine experience with all of these things. But there were a lot of folks in Shanghai who didn’t have a very good time getting COVID at that point.
And so, you know, this is where the engineering state’s record is pretty ambiguous, I think, in terms of its effect. Sometimes it looks pretty good, able to follow WHO recommendations and control the virus, until it then collapsed under its own weight.
So the evidence here is pretty ambiguous, I would say.
Yeah, absolutely. But at the same time, I worry that there’s a certain type of American copium smoker who takes these failures of the engineering state, assumes they’re the inevitable consequences of adopting the sorts of things you would like America to adopt, and tells himself these sorts of self-soothing daily affirmations.
So, you know, actually, America is doing great. Thank you very much.
Yeah, I wonder.
I think I absolutely agree with you that the mood in the U.S. especially fluctuates way too wildly for what the situation actually is.
I remember at the end of 2022, there was just excessive triumphalism in the U.S. because China ended its zero COVID program in this horrible collapse in which a lot of people died and the state suppressed all of this data.
Russia then wasn’t doing very well in its fight against Ukraine. And so it looked like Ukraine was also winning against autocracy.
And the end of 2022 was also when it seemed like the U.S. had these great technological breakthroughs, and the autocracies simply didn’t have these technologies in place.
And so the views have shifted quite a lot. These views go up and down, I think, a little bit too wildly given the state of the evidence. And one of the things I’m always trying to say, when I was in China and now that I’m at the Hoover Institution, is that this is going to be a really long struggle between the U.S. and China. This conflict, these tensions, will go on for a very long time. I don’t think it is anything like a static picture in which one country is winning and will hold any sort of decisive advantage. I think the struggle will take place over a very long time.
And there’s not going to be any scenario in which one country simply disappears off the face of the earth. That is a fantasy. And I think it is also a fantasy to imagine that either country will collapse and never get back on its feet. I think that both countries are going to be winning and losing. And when they’re winning, they’re going to be making a lot of mistakes. When they’re losing, they’re going to try to catch up. And that’s just going to be a dynamic process over the next few decades.
Do you agree?
I do agree. I think the language of existential threat and the zero-sum framing is foolish when you see it on either side. Let me get to the things that we as Americans ought to be learning from China. One of the things you really emphasize is process knowledge; I mentioned that in the introduction. For you, is that primarily a cultural asset, that is, the status of engineers, the kind of tolerance for iteration? Is it a firm-level capability, having long patient capital and shop-floor autonomy? Or is it a policy environment, with permitting, procurement, and standards at the fore?
Where would you intervene first, in other words, to sort of rebuild process knowledge in the United States where it’s so sorely lacking?
I think it is all of the above, Kaiser, that it is cultural, it is policy driven, it is a matter of economics. So I think the most important thing to grasp about technology is not the actual physical instruments or tools that we can see, anything like a robotic arm. It’s also not a recipe or a blueprint or a patent, any sort of knowledge that’s really easy to write down.
I think the most important part of technology has to be the process knowledge, which is all of this meta and tacit knowledge that exists more on a population level. And so this is something that various hubs of knowledge production have been able to recreate in the past.
You know, at the start of the industrial revolution in the UK, there was just a lot of knowledge about how to build textiles in order and how to build engines.
Right.
When that moved from Britain to Germany, Germany had a lot of process knowledge about how to do interesting new fields like electrical engineering, as well as chemistry. And that has moved from country to country. The US has been a major industrial leader on something like automotives, on something like semiconductors in the past.
And right now, a lot of process knowledge with manufacturing is being built and activated and grown in China, where you could be a worker in Shenzhen, making iPhones in the first year, being poached to make Huawei phones the second year, then making a DJI drone the third year, and then making a CATL electric vehicle battery the fourth year.
And so there’s just so much knowledge that can’t be written down with technology that is necessary for the production of a lot of different goods.
So I think this is one of these things that the US didn’t sufficiently appreciate when a lot of corporates did offshore a lot of jobs to China. I want to be clear that a lot of the manufacturing job losses in the US have been triggered by automation and technological change, not so much by offshoring; something like 10% of manufacturing job losses are attributable to offshoring.
And one of these things that I wonder about is if Apple didn’t build all of its iPhones in Shenzhen, and rather built it in, let’s say:
What if all of that knowledge involved in building hardware was actually in the industrial Midwest in the US as well? Could it be that Wisconsin or Michigan or Ohio are actually major producers of:
that is present in Shenzhen as well?
And so this is one of these things that I think has been critically understated in the US that has been driven by an excessively financial profit-driven model that didn’t account for all of the most important things with process knowledge.
Right. I mean, this possibility, this hypothetical that you float of an Apple producing in Cleveland, that seems to place a little too much of the onus on Apple. It’s not as though that decision could have been undertaken in a vacuum. There were other factors that it had to consider rather than just simply the cost of labor. It was, as you say, you know, there was a policy environment. You know, there are other reasons they chose not to do that. And surely you would agree that it’s not just on Apple.
Absolutely. I think that the infrastructure wasn’t in place. The costs were much, much lower in the past. And so these are all real.
Yeah. Yeah. I mean, when you talk about, when I asked you about process knowledge and, you know, whether it’s a cultural asset or a firm-level capability or policy environment thing, you said all of the above. That reminds me of something that you wrote recently. You just published a piece in Foreign Affairs with your former boss, Arthur Kroeber, who is, by the way, one of the people in the China space whom I admire the most.
You guys wrote that, you know, China has taken in all of the above technology strategy. What would you include as the pieces of that strategy that perhaps people are less aware of?
I think that people know, you know, big pieces of it, but some of it, I think there is still a gap in our understanding of how China did this. What would you identify?
Arthur and I wrote that piece in Foreign Affairs called The Real China Model, in part to try to rebut the sense that China has succeeded technologically simply because it has stolen all the IP from the US. And so, you know, one of my favorite boogeymen is this tweet by Senator Tom Cotton, which he tweeted on World IP Day,
“China doesn’t innovate, it only steals.”
I think that is a flagrantly wrong presumption that I think we just need to discard because it is not helping us understand China any better.
There’s also this view out there that China succeeded simply by subsidizing its way into technological leadership. I think that’s not wrong, but I think it is woefully incomplete to say that the Chinese have been able to make central planning work and been able to select winners. I think they haven’t had a terrific track record on that.
What we point out in this piece is that China has actually built a lot of what we call deep infrastructure to be able to have its success.
Now, deep infrastructure goes beyond the traditional infrastructure of trains and ports and highways to move goods around, where China is superb. What we point out are three big things:
Yeah, you noted, I mean, just to throw out one stat, that China’s total electricity output is greater than that of the United States and the EU combined, and every year it adds another Britain’s worth of electrical production.
That’s right. Chinese people are on smartphones constantly, maybe even a little bit too much. And the Communist Party is very much in charge.
And so when you marry these three pieces of deep infrastructure —
- power
- connectivity
- process knowledge
— to the fierce dynamism among Chinese entrepreneurs who are really competitive in trying to build interesting new projects, build more cheaply than the other guy, not necessarily achieving a lot of profit, but creating new and worthwhile products, when you marry all of these things together, I think it is no surprise that China has become the technological superpower that it is today.
There are some elements of technology theft from the West. There is an obvious element of the state trying to pick winners, subsidizing all of these things.
What we can acknowledge is that China has both a strong state as well as strong entrepreneurs that have built a lot of these technological achievements.
Dan, I’ve often remarked on how China in the 21st century is a much less technophobic or techno-pessimistic society than America is today. You can see it in survey research on attitudes toward things like AI. But I mean, anyone who’s lived in China and the US, as both you and I have, we know this intuitively, right? Just in the posture that people have toward technology.
I mean, so years ago, I interviewed a philosopher named Anna Greenspan about a book that she wrote.
Called Shanghai Future, one I highly recommend.
Me too.
You’ve read this?
Yes, I’m a big admirer of Anna’s work.
Yeah, she’s great. So you remember, she talked about this big difference in attitudes toward futurity in the US and China. I’ve come to use kind of shorthand that I like. China is still in its Star Trek phase and the US is in its Black Mirror phase, right?
So the question I have for you is, what is the causal direction, if indeed you see any causality at work here, between China’s technocratic engineer-dominated polity and its technophilic society? Does the technocracy create the technophilia or does the technophilia create the technocracy?
I think that the technocracy creates the technophilia. I’m willing to change my mind on this, but I think it is definitely the case that China’s leadership uses mega projects, big prestige projects, really to try to rally the population into doing something better. And I think there are some ways in which this could be a little bit insidious.
One theory that I’ve come across is that one of the reasons that Li Peng, the premier throughout the 1990s, was so heavily invested in the Three Gorges Dam was in part to try to distract from his own image as what the Western media labeled as “the butcher of Beijing” for having ordered the Tiananmen crackdowns.
And so the Chinese government decided that it is going to try to build its way out of this political crisis of 1989 and to really invest in a lot of technology here. There should be a forthcoming book about this. And so once that book is out, maybe we can point to it.
I think it is definitely the case that the Chinese government loves pointing at pictures of great infrastructure. You can’t open an issue of Qiushi, which I was fervently reading when I was living in China, without coming across some amazing new bridge that the government has built, some great new port, which always looks very telegenic, or some high-speed rail speeding through the countryside.
And so they definitely love to create these sorts of images. There is a sense, I think, in which the Chinese government really likes to promote these big novels like The Wandering Earth, which has been adapted into a film, and The Three-Body Problem, in which there is kind of this emphasis on a world government that is entirely run by engineers working together to overcome a great threat to humanity.
That is, I think, a common theme to Liu Cixin. I think he is one of these progenitors of the engineering state’s mindset.
Right. Of the so-called Industrial Party, the Gongyedang.
That’s right. It’s sort of the Ur text of the Gongyedang.
And I spent a lot of time talking about the Gongyedang in my chapter on tech power.
Right.
And I think the contrast is with the United States, which has had a pretty major techlash. I think we saw a lot of skepticism of social media, especially after 2017. Right now there are still a lot of worries about what smartphones are doing to young people, what social media is doing to young people, what AI might be doing to all of us.
That is all real. And that strain is less present in China, I think, in part because the state loves to create new engineering projects, and in part because I think the Chinese have naturally been more optimistic over the last 40 years than Americans have because they’ve seen their lives improve in such obvious ways.
In lockstep with the improvement of technology. So yeah, it’s reinforcing, right?
And I wonder to what extent the Chinese government might actually be actively censoring some of these views. There has been extensive censorship of opposition to the Three Gorges Dam. And there may even now be some censorship to the big new dam that is being built in Tibet as well.
And so I think there is, on the one hand, the leadership itself is technophilic and trying to engineer their way out of every problem. On the other hand, they may also be censoring some of the perhaps merited, humanistic, critical backlash against what technologies are doing to us.
I want to get into how maybe the technophilia has enabled the technocracy in just a little bit, but because I do think there’s a little bit of bidirectional causality here.
But I want to first ask you whether you think that things like the fact that so many of the leaders are themselves engineers, it sets up a ladder of success, right? I mean, where high status and access to resources and power are kind of enabled by technical, technological prowess, right? So it sets up an incentive system.
So if you are a parent, you’re raising children, you’re going to want to push your children into STEM education. And that itself kind of reinforces that technophilia in society, you know, to your point.
I feel like that’s a big piece of it. Have you given much thought to that as well, to the sort of social forces at work in reinforcing technocratic politics?
I think there is definitely a sense that Chinese parents prefer that their kids study STEM degrees. And it is definitely much more obvious that many more Chinese kids are studying math relative to American kids, which I think is a shame. Americans need to get much, much better than the pathetic math capabilities they presently possess, through an education more focused on STEM. I think that should definitely be the case.
So Vivek Ramaswamy was right. Maybe Vivek was right. The slight wrinkle that I would present to you, Kaiser, is that I wonder whether it is the case that though parents encourage kids to study STEM, they’re not necessarily encouraging the kids to become engineers.
I think the prospect of working in tech and consumer internet, especially for one of these big, prestigious firms like
is still much more alluring than working as an engineer. Maybe it is so much more alluring to work in the financial sector rather than in some sort of a technical engineering field, in part because they pay so much better. And so I think the kids are still facing the same tug of incentives that smart kids in the U.S. also feel in being drawn to Silicon Valley as well as Wall Street.
And something else I wonder about, I’m really curious for your take on this, Kaiser. You’ve been spending a little bit more time in China than I have over the last few years. But I was in China in December of 2024. And one of these things that I become really cranky and annoyed by is just how much people are on their phones all the time.
So people are texting other folks in the middle of a dinner. You know, you can see over a hot pot at a hot pot restaurant, many people are just on their phones instead of speaking to their dinner companions. Every trendy café is a place to be photographed rather than a place to sit and have coffee with other people.
Maybe I’m just getting too old and cranky here, Kaiser. Maybe you can talk me into, you know, being a little bit more sympathetic.
No, you’re only going to hear the same crankiness from me. I mean, it’s something I freaking hate. And I’m also probably guilty of it. I mean, I find myself just having that tug. I mean, I can’t even conceive of taking a subway ride without having my headphones. I’ll walk three blocks back to my apartment if I’ve forgotten my headphones. Yeah, I’m terrible about it. But yeah, this is like the plight of modern homo sapiens. It’s not just a China or America thing. I see it in the States almost just as bad.
But yeah, I mean, I’ve remarked on this before. You know, you’re standing on the sidelines of a soccer game and you turn to another parent of one of your kids’ classmates and you say,
“What are you doing about Junior’s screen time?”
And they’re too busy on their own damn phone to even hear your question. And yeah, it’s a problem.
It’s a problem. I wonder if it might be slightly worse in China because everything has to be turned into a wanghong spot and everything has to be photographed as well.
Oh, Christ. Yeah. I mean, I was in Shaxi in Yunnan and it’s becoming that way, you know, because Liu Yifei shot a television show called
“Qu You Feng De Difang” (Meet Yourself)
and everyone has to, you know, like have their picture taken where she was and where that scene was shot. Christ.
Maybe let’s check our crankiness and get back to some techno-optimism.
Yeah. You know, actually, I want to dig into history here. I mean, you don’t explore this so much in the book, but I’m sure you’ve given it a lot of thought: the question of what gave rise to the engineering state in China.
I mean, when do we start to see it emerge? Was it a deliberate policy choice or something that just sort of happened? I mean, this is something I explored quite a bit in my own work as a graduate student, which I mentioned in the intro and which you so kindly name-checked. I was really inspired to write on this question because by the early nineties, when I was doing this work, China was already so thoroughly technocratic.
It was already so dominated by engineers. It hadn’t even peaked yet, but already you could see it. I mean, there were already books about this. Cheng Li, who’s now at HKU, and Lynn White of Princeton did a lot of work on technocracy. But what struck me was that it had become so technocratic, yet somehow it had gone unremarked upon in China itself.
There were foreigners who were looking at this fact and marveling at it, but it was in China itself. It was like, “yeah, of course.”
I wonder if there’s something deeper in China’s history, maybe the imperial civil service examination system, or this, you know, oriental despotism idea of Karl Wittfogel and his hydraulic theory of civilization. You know, he posits that the technical demands of water management in China created both the opportunity and the necessity for centralized political control. So you have engineers sort of running the state. These were the things that I was exploring, and I was wondering what you think about this. What are the historical and maybe cultural roots of the engineering state?
Yeah, I think there are definitely deeper roots in both the engineering state as well as the lawyerly society. That was my next question.
On the part about America being very lawyerly: you can read the Declaration of Independence as almost a legal document. So many of the founding fathers were lawyers. Of the first 16 U.S. presidents, from Washington to Lincoln, 13 were lawyers at some point. And so in the U.S. there is definitely this very obvious legal tradition.
And I think that you can say the same about China as well. I don’t want to take this too literally. I think the work of Karl Wittfogel on oriental hydraulic despotism was a product of its time. He was this strange cold warrior who was trying to discredit the Soviet Union. I don’t refer to Wittfogel at all in my book. But I am definitely a big fan of the work on the keju system. In particular, Professor Yasheng Huang’s book, The Rise and Fall of the EAST. What is it? Exams.
The Rise and Fall of the EAST: exams, autocracy, stability, and technology. That sounds right. But I think the examination system is very real.
And so I do want to trace a lineage of the engineering state to imperial times. Without being too literal about this, but one might be able to say that imperial China was a proto-engineering state in part because the emperors ordered so many people to build Great Walls or Grand Canals.
So many people died trying to build this canal. The historical records here may exaggerate some things, but vast numbers of people were said to have perished in the course of building the Grand Canal. One might also say that the emperors rarely hesitated to almost completely reorder a peasant’s relationship to her land. So there was some social engineering here as well.
Again, I don’t want to be too literal and say that the emperors were straightforwardly engineers, but I think one can trace this sort of lineage because of the state’s management of the imperial exam, the keju system.
And I think one of the differences I want to trace between the West and China is that the Chinese were practicing a sort of absolutism starting from the Qin dynasty under Qin Shi Huang, the first emperor, in which the state really tried to control quite a lot of things.
This is someone who is labeled even in China today as a despot, who buried the scholars and standardized the weights. And so there’s this sense of autocracy stretching back about 2,000 years now. The Chinese had been practicing absolutism way before the European monarchs ever caught a whiff of this idea in the 17th and 18th centuries.
And so one of my ideas here is that one of the reasons, perhaps, that China did not develop a liberal tradition was that the court administered the exams, which was how one became an intellectual in the first place. And so it becomes really difficult to become a court intellectual by advocating for constraints on the power of the emperor. So most of the mandarins were encouraged to just ask,
“How do we govern better? How do we increase the discretion of the sovereign?”
You don’t really get very far by saying,
“Well, what we need is some sort of property rights. What we need is to protect the business people.”
You never really quite had that. And so you didn’t have as vibrant a sense of a liberal intellectual tradition emerging out of China. Rather, that was much more of an absolute sense of trying to increase the power of the sovereign.
Yeah, absolutely. I think that you’ve put your finger on it right there. This cooptation of the entire literati class just by making their advancement contingent on their support for a state orthodoxy.
Right. And I think we see parallels to that today in the Communist Party. I actually think that, I mean, I could spend a lot of time talking about this, but that there’s always been this sort of privileging of knowledge elites. And that assumes, of course, that there’s some objective knowledge in the universe against which you can be tested.
So I mean, at all points, there is this sort of a paradigm of what is true. And there is some canonical set of texts. They could be the Confucian classics or they could be, you know, engineering texts. And if you have demonstrable knowledge of that, somehow that qualifies you for office. I mean, that seems to be sort of the common thread. Yeah. So, you know, the U.S. obsession with process, in its best form, protects the weak, which is really good. But, as we’ve discussed, it can impede the provision of public goods and the building of infrastructure, and that can really hurt the weak.
So China’s obsession with outcomes often lifts the many, but can screw the few or occasionally, as in the case of the one-child policy and zero COVID, which you talked about, it can screw the many as well.
So I guess the big question is, how do you build or design institutions that kind of somehow bind outcomes to rights? That is, build fast without trampling people. And what are the kinds of small practical reforms that can move either system in that direction?
Maybe we can start with China. What are some ways where these institutions can be bound up more in rights? And then we can move to the U.S. because you’re very hard on U.S. proceduralism. You’re very generous about its civic function, but maybe we could talk a little bit about the reforms that lawyers could champion that would improve build speed without betraying that kind of ethical core.
Yeah. Well, here is where I would give a plug to my friend, Nick Bagley’s work. He is a law professor at the University of Michigan. He has a book that will be coming out that I think is a perfect encapsulation of the problems of the lawyerly society. He doesn’t quite call it that. And he proposes these tangible legal reforms such that:
So that is one of these books that will be coming out sometime next year.
You know, I think there is actually kind of a simple answer to a lot of these construction problems. The U.S. and China aren’t the only models out there; I think a lot of countries have actually hit the sweet spot of constructing mass transit while protecting the public interest. That’s most of Europe. That’s Japan. And we can just take a look at what these other countries do.
You know, I just got back to the U.S. after spending much of the summer in Europe. My wife and I spent a month in Denmark. Denmark is really highly functional in terms of public transit. You can go down to subway systems that are completely spotless. They’re cleaner than anything in Shanghai. They’re fully automated and they just work really well. And you don’t even have to show tickets or go through any turnstiles. It’s such a high-trust society that people know you will have bought your ticket beforehand.
And so, countries like Denmark, countries like Japan, which has built a lot of high-speed rail, these are not shining exemplars of human rights abuses.
I would say that, you know, we can just take a look at:
They are able to build trains and subways and all sorts of infrastructure at really reasonable costs without having violated a lot of rights. And so it is mostly the Chinese and the Americans that have gotten the balance wrong.
Yeah, that’s a good point. And do you see efforts now on either side to try to learn from these better examples?
I wonder to what extent China is learning better ways to protect the public interest. I think there have been some ways in which China is learning good lessons. I think it is not the case that environmental reviews for high-speed rail, for example, are entirely perfunctory. I think that the builders are actually trying to do their best to mitigate a lot of environmental issues.
What’s just not available in China is endless lawsuits that can delay absolutely everything on purely procedural bases.
And I think the Chinese have also had some examples of protests that achieved the delay or the cancellation of projects. Remember, I think it was in 2020, when folks in some bigger city—may have even been in Shanghai—went onto the streets to protest the construction of a new trash processing site near their home.
Now, maybe that’s nimbyism. Maybe that is misbegotten. But, you know, we do see that there have been some protests of people trying to maintain their neighborhoods and say what they’d like. Maybe that’s positive. Maybe that’s negative.
And I think there is definitely this big sense in the U.S., as we mentioned before, that the U.S. has been dysfunctional for the many, and we need to get much better at building housing, mass transit, all sorts of infrastructure to get the country moving again.
Now, for the most part, I would say that the U.S. government now isn’t learning the right lessons from China. Rather, it’s learning most of the bad lessons from China.
Yeah, as you said. So on the topic of learning lessons, you know, the COVID lockdowns showed the extreme downsides of the engineering state. I mean, a good engineer, a good scientist, presumably learns from mistakes. I think it’s widely accepted that there were a lot of mistakes made during that time.
What lessons do you think China’s leaders themselves drew from the experience?
That’s a great question. And I haven’t given that too much thought. And I wonder whether there are many studies here. How did enforcing these lockdowns really change the leadership’s mind? And I wonder whether they have also learned some of the wrong lessons from COVID.
I mean, one of the things that really struck me was that the Shanghai lockdown, locking down 25 million people in 2022 for eight weeks was accomplished through just the normal police systems. You know, you just had the regular police actually enforce COVID lockdowns.
As best as I can tell, no officers of the People’s Armed Police, the paramilitary force that wears what look like army uniforms, were deployed to enforce a lockdown of that magnitude. And they certainly didn’t have to bring out the People’s Liberation Army to try to suppress the desire to be free.
And so I wonder whether the leadership has learned a lesson that actually the coercive internal security apparatus doesn’t have to be so large in order for the people to be pretty obedient about what are really extraordinary controls that no one had expected at that time. That could be a potential lesson there.
Perhaps other lessons have been that the Chinese surveillance state grew very extensively, that people were tracked on their phones all the time for contact tracing purposes. And there were some issues about privacy concerns. But for the most part, people went along with all sorts of these projects.
And I wonder if the Chinese state has just learned that autocracy is actually much more possible. It’s even more possible than they thought. And I’m hopeful that they learn some good lessons out of this as well. Off the top of my head, I’m not sure I can name any, but I’m wondering, what do you think?
Yeah, no, I mean, I think you touched on something that I wanted to ask you about, because, you know, a lot of people believed that the COVID-era biosecurity state that was coming into being — you know, the controls that the health code apps and the checkpoints created — was just never going to be set aside once the pandemic passed, that it was going to be a regular feature of life.
They thought that the leadership was going to get so addicted to this level of control that they just never let go of it. But it seems like they have. I mean, the app is gone. The checkpoints, the screenings, they’re all, you know, a thing of the past.
Indeed.
I think that’s a pretty good example of maybe a lesson, if not a lesson learned, at least that they exercise a little bit of restraint. Touch wood.
Right. The question is whether they have very long memories and built up this muscle such that if they ever need to exercise these muscles again, they’re going to be able to roll these things out.
It doesn’t surprise me that a lot of the checkpoints, a lot of these apps, and a lot of these COVID testing facilities have been torn down because they became these hated symbols of enforcement. So it could be the case that they took away these highly visible symbols of enforcement, but they have the memory and the muscles to try to bring them back really quickly if necessary.
Yeah, that’s a very good point. I think they certainly have that muscle memory now.
Dan, you write about fortress capabilities, the kind of redundancy, the overcapacity that Western analysts often dismiss or disparage as wasteful. But you actually make the argument in your book, I thought that was a really interesting one, how inefficiency can actually be kind of a source of resilience in China’s system.
How should we be thinking about the trade-off between resilience and efficiency when comparing China’s fortress model with America’s maybe leaner, but possibly more fragile system?
I think one of the things that the pandemic revealed was exactly how fragile a lot of America’s supply chains really were. They were poised for perfection, and it didn’t take much for everything to be ruined.
And there has been this…
Depending on just-in-time delivery and…
Exactly. Just-in-time delivery is something that creates a lot of profitability because you’re reducing your inventory. I think this is also largely attributed to Tim Cook of Apple, who created these hyper-optimized supply chains. Things were moving around all the time, and they kept very little inventory, which left little buffer against shocks.
And I think one of the benefits, mostly a benefit, of the engineering state is that they do build a lot of redundancy. It creates a lot of inefficiency. There is extreme inefficiency in the Chinese state-owned enterprise sector. There are just so many redundant jobs, too many people doing the same things, dragging down profitability in all sorts of ways. But that turns out to be really useful in a crisis: you have the capacity to retool your manufacturing lines to build not electronics but cotton masks, as was the case with JD.com (Jingdong) as well as Foxconn.
And so China has a lot of redundancy. China is trying to build up its own oil and gas sector, even though it’s much more costly to tap Chinese gas and oil relative to Russian or American supplies, because Xi Jinping really treasures energy sovereignty. They’re also preserving a lot of farmland in less-than-optimal places. I think it is very striking that as soon as you take the high-speed train out of Beijing or Shanghai, you run into farmland really quickly. And that is because they want to set aside a lot of land for provinces and major municipalities to be food self-sufficient.
And all of the redundancy involved with manufacturing, all of this overcapacity, is also a way to maintain process knowledge: they are constantly training their workers to make sure that their skills don’t go rusty. And again, with China’s engineering state, in economic terms you can point to a lot of flaws, with debt, with environmental destruction, with all sorts of costs to profitability. But there are also some benefits, and these are really revealed during a crisis, which you can never predict.
So Dan, I mean, shame on me. I have not yet finished reading Derek Thompson and Ezra Klein’s Abundance, but I think I get the gist of their argument. It’s interesting to me how little they actually talk about China. But where do your ideas sit in relation to their ideas?
I would want to be a card-carrying member of the Abundance movement. I am slated to speak at the Abundance Conference in Washington, D.C. in the first week of September. So I think I am proximate enough to that.
Now, I think my challenge to Ezra and Derek is to speak a little bit more about China. The first part of the Abundance book has a lot of discussion of how the U.S. isn’t building enough mass transit and infrastructure.
And then the second part of Abundance talks a little bit more about the scientific failings of the U.S.: it isn’t really managing to scale up and commercialize a lot of American scientific innovations. So China is a good operating model for Abundance. It’s not the best. It is not the most amazing, shining example for the U.S. to follow.
I would love for people to ask them whether avoiding the China comparison was a tactical choice on their part, because the optics of it aren’t necessarily good. It no longer looks like rah-rah, go America. It erodes some of the patriotic oomph that the book otherwise has.
To be fair, it’s not the case that China is avoided entirely. Both Abundance and Breakneck talk about California high-speed rail and its awful failings relative to the Beijing-Shanghai line. I suspect what is the case is that Ezra and Derek believe, as I do, that America doesn’t need to become like China in order to build infrastructure. It would be good enough to be like France, Denmark, or Japan.
And so I think we really don’t need to reach the China model. There’s just much better models for the U.S. to reach. And so this is why I say that China is a good operating model of abundance, not the best.
It is good because China has demonstrated that there are virtues to overcapacity, that it is really good to have a hyper-competitive solar sector that is driving prices down, not making a lot of money for investors, but, you know, creating a lot of consumer surplus and building a lot of mass transit for a country that desperately needed it.
There were a lot of costs, but, you know, again, we don’t have to fully copy the Chinese model wholesale in order to get to a better mode of abundance.
You know, you close your book by emphasizing lived experience: what ordinary citizens feel day to day in terms of dignity, fairness, and security. I’ve argued for a long time that Chinese people, like most people, anchor their feelings about a given government and its legitimacy not just in performance, however important that is, but also in whether the state feels to them intuitively morally upright, whether it feels just.
States that emphasize procedural legitimacy obviously tend to foreground this. In China, when you have local corruption, arbitrary crackdowns, or unequal treatment, it can definitely undermine legitimacy on the ground. And when people see the state standing up to bullies or ensuring national dignity, it can bolster this type of legitimacy, which I’d lump together as a sense of moral uprightness or justice, and that can be domestic or foreign.
How much do you think legitimacy in China actually rests on what I would call the moral dimension, the state, you know, being just or upright and defending dignity? In other words, when you have corruption or arbitrary crackdowns and this stuff eats away at moral standing versus when the state asserts itself against bullies or delivers on fairness, how decisive is that in shaping how people experience the party’s legitimacy day to day?
I ask this because so often there’s this idea that China’s only about performance legitimacy and that somehow an economic downturn or slowdown could deliver a death blow to performance legitimacy. I feel like that’s only a part of the story.
I certainly agree that it has been a persistent fantasy in the U.S. and some parts of the West that China’s political legitimacy depends entirely on economic growth. You’ve seen this narrative come again and again:
I think that is just a silly argument that we still see even in 2025. I think that China’s legitimacy is more broadly based than that. And I wonder to what extent moral legitimacy, a sort of Confucian virtue, is present in China.
I think certainly there is a view that the leadership tries to act as if they are very good Confucians in China. And I wonder to what extent that is actually effective. Because one of the issues I have with China, and it was Professor Huang Yasheng who laid this out very well, is that the state tries to invest a lot of legitimacy in the virtue of the rulers, but it isn’t thinking in terms of incentives, constraints, and systems that really try to police behavior and induce better governance.
When they reduce things into a matter of morality and virtue, it becomes more about the person rather than about the system. And I think Huang Yasheng has been really good at pointing out how virtue has been a distraction to better governance. What do you think?
Yeah, no, I think that he’s not wrong, that that is a problem. It’s not systematized; that it’s still subject to a lot of kind of patrimonialization. And I think that he’s absolutely right, that if you look at patterns of protest in Chinese history, the way that it is voiced often is in terms of moral failings of leaders rather than particular policies. That is not always a helpful framing when it comes from above or from below. So I tend to agree with him there.
I want to move on though and talk about legitimacy itself. I think there’s this inability among many Americans, and I think you just hinted at it just now, to see beyond procedural legitimacy as the only possible foundation for proper political authority.
I have long believed that this fundamental refusal—it’s not always articulated, but it’s often really present in the American habitus, just in the language that we use—is a big part of the problem when it comes to forming a good understanding of China. It produces a very unhelpful moral framing, and it makes us interpret everything that Beijing does in the most negative possible light.
I think it fuels escalation. It’s not like Beijing is unaware also that there is this kind of assumption of illegitimacy on the American part. I mean, it’s pretty obvious from China’s point of view, and it makes them very defensive. It makes them very anxious. It makes them also assume the worst: that they assume America’s real goal is to destabilize China, which, yeah, they’re not necessarily wrong.
Maybe you’re not.
So my question is, does this appear to you to be changing? Do you think that there is now an appeal to the American public of this idea of performance legitimacy, especially since procedural legitimacy no longer appears in America to deliver the goods when it seems to be so badly eroded? Is there kind of an uptick in appreciation for performance legitimacy?
Because I mean, just to put my cards on the table, I mean, I’ve noticed since January of this year a vibe shift, especially among younger people, in their attitudes toward China. And often it seems to be on the grounds that, hey, look, they deliver the goods.
I think there absolutely is a sense, even within the American elite, of saying: well, we put all of these procedures in place in order to ensure some sort of fairness and to make sure the public interest is consulted. And I think there has been a sense even within the Democratic Party that, you know, we take a look at these blue states and big blue cities, which are almost unanimously governed by Democrats, and they don’t seem to be working all that well.
You know, there’s tremendous public disorder in a lot of cities. Mass transit isn’t functioning very well. A lot of politicians are much more interested in governing on social issues than in delivering on the economic issues that working-class families care most about. And I think there is a sense that we can’t just rely on processes to deliver the sort of legitimacy that we’re talking about.
I think that is a very vibrant debate within the left now, that we can’t simply be the lawyerly society anymore. How do we actually deliver the goods? And so this is where, to put my own cards on the table, I am in favor of abundance. I am in favor of Ezra and Derek’s program to create much better cities, to show that California and New York are not deeply broken.
That when voters point to the track record of Democratic mayors as well as governors, there is something real here to be able to say that they’re actually meeting the needs of the people rather than just making sort of statements and performative gestures that don’t actually deliver the goods for anyone.
So in the end, and here, I mean, we’ll kind of wrap up with this, but you know, the engineering mindset can be way too literal, right? And the lawyerly mindset can be way too formal. I guess what I want is some kind of conceptual pluralism. I want like this set of institutional practices that somehow are able to switch frames, you know, to use the right frame in the right moment.
I guess what I’d like to see is that we somehow build that muscle inside China’s one-party state, and build it inside polarized democracies like the one we live in right now: the ability to be, you know, conceptually plural in that way. And I feel like that’s what your book gets at.
Is that a fair characterization? And what are the ways we can build toward that kind of, you know, conceptual pluralism?
You’re absolutely right, Kaiser. And I’m glad that you picked up on this point, that one of the things I really craved after spending six years in China was some degree of pluralism, that, you know, it wasn’t just one official register speaking above all the rest, eagerly censoring all of these different viewpoints.
And I think I’ve said so many cancelable remarks on this podcast, Kaiser, but let me offer a yet more cancelable remark. I think there is a better profession rather than engineers and lawyers to govern the population, and that is dentists. No, I joke.
I think that the right profession to govern the population, if we had to choose but one, would be something like economists. I think that economists have a sense of procedure, they have a sense of getting things done, and they have a sense of social science, not to engage in really stupid things.
Unfortunately, I think economists are the most reviled academic profession on the planet. They have certainly gotten themselves into a sticky wicket. But one thing I will always be grateful to economists for is that they were the people most actively pushing back against policies like the one-child policy.
That was the case in China, where it was an economist, the head of Peking University, who really pushed back against earlier formulations of the one-child policy in the 1950s. And it was mostly the economics profession in the West that pushed back against Paul Ehrlich’s The Population Bomb.
And so I think that economists are the happy go-between. But economists themselves certainly need to be supplemented by some degree of pluralism. There should be lawyers in government, absolutely. There should also be engineers in government, unlike the U.S. Senate, which has 47 people who went to law school and one person trained in engineering.
I think there should be some sort of a balance with all of these things. I certainly don’t want to be entirely ruled by humanists. Mao Zedong was many things. He was, I think, primarily a poet. And if you take a look at earlier iterations of the Soviet Union, you had all these fantastic writers around Joseph Stalin. They were such good writers. They were such good literary critics.
And look at what a mess they made. So I don’t want to be governed by poets and literary critics. That sounds like an absolutely terrible paradigm. I think what we need are people who understand social science. And so my nomination is to be ruled by economists.
I’m going to put my vote in for historians. I think they have that sort of perspicacity and that broader frame. And they’re not as paralyzed as economists are. And if we have to go with economists, I’m going to go with the Arthur Kroebers over the Michael Pettises to rule us. That’s a better economist, perhaps. And I, as someone who belongs to an institution called the Hoover History Lab, think that historians would not be so bad either.
Yeah, not so bad at all. Well, Dan, what a fantastically fun and wide-ranging conversation this has been. I cannot recommend the book more highly. Make sure that you get out and buy it right away. It comes out on August 26th. I encourage you all to pick up a copy. Above all, it’s a really fun read. It’s full, like I said, of really great turns of phrase. I had a long list of memorable quotes that I put together as I was reading it.
Let’s move on now, though, down to the segment I call “Paying It Forward,” where I ask you to name-check a younger colleague, maybe somebody at Hoover. I mean, Hoover was full of villains as far as I can tell, but there’s got to be one person worth name-checking there before we move on to recommendations. So who do you offer “Paying It Forward”?
I will offer two names:
So those are my two names, Afra Wong as well as He Liu. He Liu and I have crossed swords a little bit on Substack. He’s extremely committed to the liberal project when it comes to China, and nothing wrong with that. But like I said, we’ve crossed swords a bit. But great recommendations both. Afra, I’ve seen some of her work as well, and it’s excellent.
What about recommendations, Dan? Do you have a book you’ve read recently that you would like to recommend or anything, film, music, anything at all?
Well, I think over the course of book writing, I really got myself back into the classics, the things that I have really enjoyed. And so I guess I will recommend two sets of things.
The first set is Mozart’s Italian operas, written with Lorenzo da Ponte. These are:
I found myself, over the course of book writing, listening to these highly pleasurable, fun, and inventive operas that I think will stay with me for the rest of my life. So these are the Italian operas by Mozart.
Yeah.
And I think what I will do is also recommend my quartet of favorite novels. I have four novels that I’ve been rereading recently. And so the first one is The Red and the Black by Stendhal, which has these incredible depictions of the mistakes and stupidities that one commits in the act of love. This is a French novel that was published in 1830.
I will also throw in another French work: Proust. These are really wonderful, intoxicating tales of love that Marcel Proust has created for us, the entire series of In Search of Lost Time.
That’s right. And the Penguin translations are all quite good in English.
A third novel: everybody is reading Moby Dick this summer.
Yeah. Why is that? Why is everyone reading Moby Dick? I mean, I know that Joe Weisenthal from the Bloomberg Odd Lots podcast seems to be leading the charge on this. But I reread Moby Dick about, well, maybe six or seven years ago. Yeah, fantastic novel. But what do you think explains it being such a zeitgeist thing this summer?
It is just like the strange, bizarre, marvelous white whale himself: you never know at which corner of the four seas Moby Dick will shoot his spout up. So that’s a little bit of a mystery to me. But I am a Dick-head, and I love the mesmerizing depictions of whale lore.
And my favorite final novel is Bleak House by Charles Dickens. It is just this very fun, inventive, clever book that is a miracle of construction. So I commend it to your listeners.
Operas by Mozart, as well as this quartet of novels. Fantastic. Great, great, great, great.
I have a couple of recommendations. One is by Yun Sun from the Stimson Center; she heads their China program. It’s in Foreign Affairs, and it’s called “China Is Enjoying Trump 2.0,” which I thought did a really good job of channeling Beijing’s perspective on what’s happened in the time since Trump took office, about seven months now.
It’s really good. She’s always solid, and this is a particularly excellent view into the Chinese mind on this. I also want to plug a book I’m reading right now. It’s called Revolutionary Spring by Christopher Clark, who is one of my favorite historians. It’s just an amazing work of history.
Hopefully, you’ve read his earlier book, The Sleepwalkers, which is about the run-up to the First World War, and which I also highly recommend. I’ve actually recommended it before on Sinica.
I’m hard-pressed to think of a working historian who has all the things that Clark brings to the table: an obvious facility in so many languages and this ability to just zoom in. Because the revolutions of 1848, which is what Revolutionary Spring is about, happened all over Europe at the same time.
So if you’ve got to write a book on this, you need to be able to:
And the other thing, of course, is that Clark is just a brilliant, brilliant writer. His prose is just delicious.
I think it’s such a good book. It’s a really hardcore history. I mean, it’s not for the faint of heart. There’s more detail than I think a lot of people are used to, and it’s just great. So that and Sleepwalkers—my recommendations.
Dan, once again, thank you so much for taking so much time to talk to me. And happy birthday.
“Thank you.”
What is it? Happy birthday.
“It is my 33rd birthday.”
What is 33? Is that an auspicious number?
“I’m not sure.”
Well, it’s half of 66, which is an auspicious number.
“Okay, that’s good.”
Yeah. Thank you so much for taking the time. And congrats on the book, which is, again, just so terrific. It’s been a total delight.
“Thank you again, Kaiser.”
Looking forward to seeing you again.
You’ve been listening to the Sinica Podcast. The show is produced, recorded, engineered, edited, and mastered by me, Kaiser Kuo. Support the show through Substack at www.sinicapodcast.com, where there is a terrific offering of original China-related writing and audio.
Email me at [email protected] if you’ve got ideas on how you can help out with the show. Don’t forget to leave a review on Apple Podcasts.
Enormous gratitude to the University of Wisconsin-Madison Center for East Asian Studies for supporting the show. Huge thanks to my guest, Dan Wang.
Thanks for listening, and we’ll see you again next week. Take care.
Bye.
2026-02-18 08:00:01
Mathematical Superintelligence: Harmonic’s Vlad & Tudor on IMO Gold & Theories of Everything
Hello, and welcome back to the Cognitive Revolution. The presenting sponsor of today’s episode is Granola. Regular listeners have heard me describe the blind spot finder recipe that I’m using on Granola to look back at my recent calls and help me identify angles and issues I might be neglecting.
I love that concept, but it’s also worth highlighting how Granola can help raise your team’s level of execution by supporting follow-through on a day-to-day basis. This morning, for example, I had two very practical calls in which I committed to a number of things. In the past, to be honest, there’s a good chance I’d have forgotten at least a couple of the things I said I’d do. But with Granola, I can easily run a to-do finder recipe and get a comprehensive list of everything I owe my teammates.
This is the sort of bread-and-butter use case that has driven Granola’s growth and inspired investment from execution-obsessed CEOs, including past guests Guillermo Rauch of Vercel and Amjad Masad of Replit.
See the link in our show notes to try my blind spot finder recipe and explore all of the ways that Granola can make your raw meeting notes awesome.
Now, today, my guests are Vlad Tenev and Tudor Achim, co-founders of Harmonic, an AI research lab dedicated to building mathematical superintelligence, and also the creators of Aristotle, an AI system that achieved gold medal-level performance at the 2025 International Mathematical Olympiad.
While OpenAI and Google DeepMind achieved similar performance by scaling chain-of-thought reasoning, Harmonic stands out for its commitment to formally verifiable methods: Aristotle generates candidate proofs in Lean, a programming language and proof assistant that uses a trusted kernel to confirm that every single step of reasoning follows from a few explicit premises and accepted logical rules.
Aristotle’s work can be automatically validated, and its performance is in principle limited only by the scale of compute available for reinforcement learning.
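To make the trusted-kernel idea concrete, here is a minimal illustration of the kind of statement Lean can check end to end. This is not drawn from Aristotle itself, just a sketch assuming Lean 4 with its standard library; the kernel accepts the proof only if the proof term type-checks against the stated claim.

```lean
-- A toy theorem: addition on natural numbers is commutative.
-- The kernel verifies that the term `Nat.add_comm a b` really proves
-- this statement; an invalid proof would be rejected, never silently accepted.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The same mechanism scales to arbitrarily long proofs: however a candidate proof was generated, the kernel re-derives every step, which is what makes Aristotle’s output automatically validatable.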
In an effort to better ground my own intuitions for mathematical superintelligence, we begin with a metaphysical discussion about:
From there, we turn to the Aristotle architecture that delivered IMO Gold performance. It consists of:
- A large transformer model that uses a Monte Carlo tree search strategy, reminiscent of systems like AlphaGo, to discover valid paths from point A to point B in mathematical reasoning space.
- A lemma guessing module that helps manage context and keep things on track by generating candidate waypoints between a given starting point and a potentially distant end goal.
- A specialized geometry module modeled on DeepMind's AlphaGeometry.
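The Monte Carlo tree search component above can be sketched on a toy problem. This is not Harmonic’s code, just a minimal, generic MCTS in Python where “states” are integers, the available moves are +1 and *2, and reaching a target number stands in, very loosely, for closing a proof goal:

```python
import math
import random

# Toy MCTS: find a sequence of moves (+1 or *2) from a start number to a
# target number. Illustrative only; a real proof-search system would use a
# learned policy/value model instead of random rollouts.

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.untried = [state + 1, state * 2]  # unexpanded successor states
        self.visits = 0
        self.value = 0.0

def path_to(node):
    """Reconstruct the state sequence from the root down to this node."""
    out = []
    while node is not None:
        out.append(node.state)
        node = node.parent
    return out[::-1]

def mcts_search(start, target, limit=10, iters=2000, seed=1):
    rng = random.Random(seed)
    root = Node(start)
    best = None
    for _ in range(iters):
        node = root
        # 1. Selection: descend by UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = max(
                node.children,
                key=lambda c: c.value / c.visits
                + math.sqrt(2.0 * math.log(node.visits) / c.visits),
            )
        # 2. Expansion: try one unexplored successor (pruning states > limit).
        if node.untried:
            state = node.untried.pop(rng.randrange(len(node.untried)))
            if state <= limit:
                child = Node(state, parent=node)
                node.children.append(child)
                node = child
                if state == target:  # a complete path to the goal
                    p = path_to(node)
                    if best is None or len(p) < len(best):
                        best = p
        # 3. Rollout: short random playout to score how promising this node is.
        state, reward = node.state, 0.0
        for _ in range(6):
            if state == target:
                reward = 1.0
                break
            state = rng.choice([state + 1, state * 2])
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return best
```

Calling `mcts_search(1, 5)` returns a valid path of states from 1 to 5, with the UCB1 term balancing exploitation of high-reward branches against exploration of rarely visited ones, the same trade-off AlphaGo-style systems make in a vastly larger search space.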
We also discuss the Aristotle API’s informal mode, which attempts to auto-formalize whatever the user asks it to prove.
We discuss what its responses to my admittedly silly requests imply about the boundary between statements that could in principle be mathematically proved, and those which are sufficiently factual or philosophical in nature so as to fall outside the scope of the system.
Examples include propositions like:
“all is love”
and
“Epstein didn’t kill himself”
In the final section, we discuss:
On this last point, I have to say, with so many grandiose AI promises flying around these days — from a country of geniuses in a data center, to a century of progress in five years, to curing all diseases in our natural lifetimes — it is rare that I am genuinely taken aback by a company’s vision for the future.
And yet, as you’ll hear, Tudor did manage to leave me at least momentarily speechless when he described a future of theoretical abundance in which all physical phenomena we observe have multiple competing coherent explanations, which can only be separated by increasingly exotic experiments.
If you’re like me, you’ll find this episode a useful opportunity to:
Vlad Tenev and Tudor Achim, co-founders of Harmonic, makers of Aristotle, and winners with an asterisk of the IMO gold in 2025.
Welcome to the Cognitive Revolution.
Thanks for having us.
Greetings and salutations.
Thank you.
So this is going to be, I think, a fascinating conversation. It’s probably going to be more metaphysical than most of our episodes, but also there’s a lot of practicality because what you guys are doing certainly has aspirations to go beyond the pursuit of mathematical superintelligence.
Maybe just for starters, how do you guys understand what math is? That was something I was really wrestling with in preparing for this. And then, you know, that’s obviously very metaphysical. To make that a little bit more practical, what would you say are the core cognitive skills that people that are good at math really develop and excel at? And how do those skills do when we look at the performance of like the frontier large language models that all of our listeners are familiar with today?
“Yeah. Well, look, first, thanks for having us. It’s really great to be here.”
You know, when you ask, what is math? What is it useful for? What are the core cognitive skills? It gets at one of the core theses of our company, which is that mathematics is reasoning.
So a lot of people think of mathematics as this really esoteric thing. You know, you’re thinking maybe group theory, stuff you’ve seen in movies like Good Will Hunting, but mathematics at its core is the process by which humans understand the world by breaking their understanding down into small sequences of logical steps that other people can understand and verify for themselves.
So when you’re solving a physics problem or doing your taxes or thinking about what happened at the beginning of the universe, ultimately you have to have an explanation that is:
And so when we talk about what it takes to be good at math, the question is what it does take to be good at reasoning. And so that’s, again, that ability to break this down into steps.
It turns out math is really useful for understanding the universe and building lots of engineering things, but ultimately it’s just about reasoning.
I watched your podcast that you did with Sequoia maybe 16 months ago or so now. And I recall Vlad’s story of like, basically,
“I thought that if I got good at math and I’d probably be good at other things and it sort of worked for me.”
So that’s like one way to, in a very practical sense, unpack the idea that math is reasoning. It certainly seems to help people generalize to at least related domains and be really effective, for example, in entrepreneurship.
But I’m not entirely clear still on like, are you making a more almost platonic claim there? It seems like there’s the very simple notion that like, okay, I should teach my kid a lot of math because then they’ll be smart generally. And again, that works for humans.
But is there something that you see as like a more fundamental law of the universe, sort of correspondence between what we are doing in math and what we are doing in these other domains? Because it doesn’t seem like we have the same sort of like verifiability in almost anything else.
We do have it a little bit in computer science, but even in physics, right? We’ve got like still very fundamental questions about
“I don’t think that stuff is at all agreed upon.”
So maybe you guys throw up your hands at this mystery too, or maybe you feel like you have kind of an intuition for what the answer is.
“Yeah, I can give you my perspective.”
I got into math through physics. So when I first came to Stanford as an undergrad, I had read Brian Greene’s The Elegant Universe, which was sort of like the first popular string theory book.
And when I was a kid, one of the earliest memories, one of the first full English books that I read was A Brief History of Time by Stephen Hawking. So I’ve always been interested in kind of the big questions, right?
Because, you know, back in the day, that was not obvious. We thought electricity was separate from magnetism, and I think figuring out that these two are actually two sides of the same coin is one of the greatest achievements of science. And then the big question is, well, what’s going on with gravity? Is it the same?
And in the middle of this, we found out that the weak force and the electromagnetic force also splintered off from one electroweak force. So it kind of feels like there was just one thing at the beginning, and we have to understand what that thing is.
And what I found when I became a physics major at Stanford, and I started asking all of these questions, eventually they’d send me over to the math department. And they’re like,
“Well, in order to understand string theory, you have to understand all of these other things. Right. And if you want to understand general relativity, you’ve got to get into differential geometry.”
And so that’s how I became a pure math major and ended up doing a PhD. The impetus was actually trying to understand the real world through physics.
If you think about what’s the usefulness of physics, I mean, all of the big inventions that humanity has that really push us forward are kind of like physics inventions, really. I mean, when you think about:
They’re physics things.
So the real reason to do math is that math is interesting and beautiful; there’s an art aspect to it. But it also helps you understand physics, physics helps you understand engineering, and then you can create things that have huge value.
You were asking, how does math work in other fields where things are not as precise? I think math shows up just maybe a little more subtly than people think.
So there is this physicist, Eugene Wigner, who wrote a famous essay called “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” which was commenting on a really interesting phenomenon.
So Vlad mentioned differential geometry and special relativity. It turns out that when Einstein was creating that theory, he relied on these thought experiments from the 19th century around how to think about certain manifolds and their properties.
And that was actually the key tool that we use to explain what special relativity is, and then develop it for general relativity.
That’s a perfectly representative case because those thought experiments in the 19th century were almost preposterous. It made no sense to think about them because
“How could you possibly apply these concepts to the real three dimensional world?”
And then it turns out that it’s very useful for understanding the four dimensional world when you include time and curvature.
There are myriad examples like this.
Consider number theory: for a long time, it was seen as an incredibly esoteric branch of math with no practical implications. But people pushed on that theory for a long time, and it turns out it’s the key tool you need to create a secure digital economy.
So now essentially all of human civilization has a digital economy, which is based on this branch of math.
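That link from number theory to the digital economy can be made concrete with RSA, whose security rests on modular arithmetic and the difficulty of factoring. A toy sketch in Python with deliberately tiny primes (illustrative only, and wildly insecure at this size):

```python
# Toy RSA: the elementary number theory behind a "secure digital economy".
# These primes are tiny so the arithmetic stays visible; real keys use
# primes hundreds of digits long.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, chosen coprime to phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)   # ciphertext c = m^e mod n

def decrypt(c: int) -> int:
    return pow(c, d, n)   # plaintext m = c^d mod n

# Any message smaller than n round-trips exactly.
assert decrypt(encrypt(65)) == 65
```

Everything here, modular exponentiation, totients, modular inverses, was pure number theory long before anyone imagined online commerce depending on it.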
So I think it’s almost the wrong question to ask,
“Well, I don’t know, there’s a lot of math out there. How is it useful?”
The point is you just do the math and then eventually some of it, not all of it, will be more useful than you possibly could have imagined.
So the investment in math is not just to build a really smart system; it’s to create a lot of new math that we can then figure out ways to apply later.
One interesting thing that the conversation reminded me of when you first asked,
“What is math? What does it look like?”
I think one of the reasons we got excited about applying AI to this domain is there are lots of different things that mathematicians do.
And I think we’re very excited about the prospect of AI accelerating the creative, problem-solving side of that work. We think that’ll happen.
But the other side, searching and synthesizing the literature, is something that AI is already really, really good at today, and was good at to some degree when we got the idea for Harmonic. You look at GPT-4, which had just come out when we started: it excelled at pulling information, doing these needle-in-a-haystack tasks of really quickly going through all the literature and pulling out things that might be relevant.
And I would say even if you can be an amazing mathematician, you’re in that category.
I think a lot of the work could be accelerated if you just knew all the math that was being done and could pick out the relevant things to an unsolved problem that you have at hand.
So I think the problem itself lends itself really well to what AI is already good at.
Hey, we’ll continue our interview in a moment after a word from our sponsors.
One of the best pieces of advice I can give to anyone who wants to stay on top of AI capabilities is to develop your own personal private benchmarks, challenging but familiar tasks that allow you to quickly evaluate new models.
For me, drafting the intro essays for this podcast has long been such a test.
I give models a PDF containing 50 intro essays that I previously wrote, plus a transcript of the current episode and a simple prompt.
And wouldn’t you know it, Claude has held the number one spot on my personal leaderboard for 99% of the days over the last couple of years, saving me countless hours.
But, as you’ve probably heard, Claude is the AI for minds that “don’t stop at good enough.”
It’s the collaborator that actually understands your entire workflow and thinks with you.
Whether you’re debugging code at midnight or strategizing your next business move, Claude extends your thinking to tackle the problems that matter.
And with Claude Code, I’m now taking writing support to a whole new level.
Claude has coded up its own tools to export, store, and index the last five years of my digital history — from the podcast, and from sources including Gmail, Slack, and iMessage.
And the result is that I can now ask Claude to draft just about anything for me.
For the recent live show, I gave it 20 names of possible guests, and asked it to:
Based on those, I asked it to draft a dozen personalized email invitations.
And to promote the show, I asked it to draft a thread, in my style, featuring prominent tweets from the six guests that booked a slot.
I do rewrite Claude’s drafts, not because they’re bad, but because it’s important to me to be able to fully stand behind everything I publish.
But still, this process, which took just a couple of prompts once I had the initial setup complete, easily saved me a full day’s worth of tedious information-gathering work and allowed me to focus on understanding our guests’ recent contributions and preparing for a meaningful conversation.
Truly amazing stuff.
Are you ready to tackle bigger problems?
Get started with Claude today at Claude.ai/TCR.
That’s Claude.ai/TCR.
And check out Claude Pro, which includes access to all of the features mentioned in today’s episode.
Once more, that’s Claude.ai/TCR.
AI agents may be revolutionizing software development, but most product teams are still nowhere near clearing their backlogs.
Until that changes, if it ever does, designers and marketers need a way to move at the pace of the market without waiting for engineers.
That’s where Framer comes in.
Framer is an enterprise-grade website builder that works like your team’s favorite design tool, giving business teams full ownership of your .com.
With Framer’s AI Wireframer and AI Workshop features, anyone can create page scaffolding and custom components without code in seconds.
And with:
it’s no wonder that speed, design, and data-obsessed companies like Perplexity, Miro, and Mixpanel run their websites on Framer.
Learn how you can get more from your .com from a Framer specialist or get started building for free today at Framer.com/Cognitive and get 30% off a Framer Pro annual plan.
That’s Framer.com/Cognitive for 30% off. Rules and restrictions may apply.
Okay, that’s quite helpful. I think, coming into this, I had focused my own mind on sort of two modes of math, I guess.
One being the kind of Einstein-like — obviously that’s a high-level example of a kind of eureka moment of having some insight that,
“hey, this highly abstract and, you know, seemingly perhaps like very esoteric formalism can actually unlock like major understanding.”
That’s kind of amazing. Very amazing.
And then there’s also this sort of grind-it-out mode: I’ve got this thing that I want to prove, and I’m going to, perhaps, stumble my way through the space of possible logical moves until I finally chart a path there. And then you’re adding a third layer, which is problem selection in the first place, which I guess is pretty related to the Einstein thing, but certainly distinct in some ways.
Let’s take a minute before we get into the Aristotle system and how it works and how you’ve trained it and all that stuff to just talk about Lean.
Lean is basically a programming language that does this kind of very bit-by-bit logical maneuvering, right? Where you have certain assumptions coming in, you’re going to take these various steps, and the goal is to get to a certain outcome.
Tell us, because I’m just learning about this, in the context of preparing for this and a couple other podcasts, and I think most people don’t know anything about it.
So maybe give us a little bit of a more intuitive understanding of what Lean is. And I’d be keen to understand it on a little bit of a practical level to like:
So Lean, in my view, is the best programming language ever created.
In Lean, you can write any program you would write in Python or C or C++, but you can also express essentially any logical concept.
So if we’re okay getting into a bit of the details, it is a dependently typed programming language, which means that at compile time, you can express very complicated properties of the program that you can check before ever running it.
So on one end of the spectrum, you have something like JavaScript, where you can check basically nothing. And then on the other end, you have Lean.
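For readers curious what “dependently typed” buys you concretely, here is a minimal sketch in Lean 4 (my own toy example, not from the conversation): a list type whose length lives in the type itself, so an illegal access is rejected at compile time rather than failing at runtime.

```lean
-- A length-indexed list: the Nat in the type is the list's length.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- head only accepts vectors of length n + 1, so the empty case
-- cannot occur; the compiler verifies this before the program runs.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x
```

In JavaScript the equivalent mistake, taking the head of an empty list, only surfaces when the code actually runs; here it cannot even be written.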
But the really cool thing is you asked about axioms. So when Aristotle produces any output, it’s produced as annotated Lean code.
So there’s the programming language Lean, we write theorems, we write programs, we prove things. And there’s a lot of comments explaining to the person reading it what it’s doing.
But when we talk about proving things, you end up relying on three axioms, in addition to just the basic concept of the calculus of constructions, which is what the programming language is based on.
And just as an example to show what an axiom means, take the axiom of choice. It’s not saying anything that would be controversial; it’s saying that if you have a non-empty set, it’s possible to choose an element from it.
And so from these three extremely basic axioms, it turns out you can build:
It’s all based on this core set of axioms.
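For the curious, the three axioms being referred to here are real, named declarations in Lean’s core library; this sketch uses Lean’s `#check` command to display them (the one-line summaries in comments are mine):

```lean
-- Propositional extensionality: logically equivalent propositions are equal.
-- propext : (a ↔ b) → a = b
#check @propext

-- Quotient soundness: related elements have equal quotient classes.
-- Quot.sound : r a b → Quot.mk r a = Quot.mk r b
#check @Quot.sound

-- Choice: from evidence that a type is non-empty, obtain an element of it.
-- Classical.choice : Nonempty α → α
#check @Classical.choice
```

As noted later in the conversation, each of these statements is remarkably short; the difficulty of mathematics lies in what you build on top of them, not in the axioms themselves.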
And so the goal of a system that outputs Lean is to find interesting statements and programs, and then prove things that depend only on these axioms. And that’s really where the difficulty lies.
As you alluded to, sometimes you have to make big logical leaps, sometimes you have to grind through a lot of math, but both of those are essential. So you can’t really skip one of those steps.
But the Lean itself is just incredible. You can express so many ideas in it, you can prove so many things, and you can use it as a programming language too.
So it’s really up there for me in programming languages.
I started playing with Lean when Tudor and I started making a plan for this business, and we had a pretty early decision to make about whether we wanted to go formal or informal.
One thing that struck me about it is, as a former mathematician, I barely used the computer when I was doing math.
I was in my PhD in the late 2000s, and the only time you’d really be using a computer when doing math is when you wanted to type up your homework or your research paper or something.
But all the thinking about it would happen on a chalkboard or a whiteboard. All the collaboration about it would happen in person at these conferences or on a chalkboard in one’s office.
For a while, it was just like maybe mathematics would always be this pure thing that would just be kind of untouched by technology.
But what Lean has done is transform mathematics from kind of like chalkboard and couch to now it’s in VS Code.
You know, you can do it in Cursor. You’re putting your math on GitHub, where now you can run these large collaboration projects.
So even when you subtract out AI, I think Lean by itself changed how people do mathematics, because now you’re seeing extremely prolific, famous mathematicians running these large projects where they’re collaborating with dozens of people around the world, trying to do things like formalize research or formalize the proof of Fermat’s Last Theorem. And more and more folks are adopting Lean as an accelerant.
So I think it’s changing how mathematics is being done and actually accelerates collaboration and accelerates progress and sort of like removes this notion of peer review.
If you’re a mathematician and you want to prove something, a big part is getting someone to read it and actually spend the time to tell you if it’s correct.
And so, you have the proof of Fermat’s last theorem, which took many, many years to be proved.
What happened was sort of this collection of people got together and when they all agreed that the proof was complete, it was sort of like ordained that the thing was proven.
And I think another thing formal does is it makes it so that that’s unnecessary.
Like if the proof checks, then, with the caveat that there’s no bug in the Lean kernel or in how you’ve set up the statement, you obviate the need for manual human verification.
And the implications of that are pretty interesting too, right?
You have all of these potential citizen mathematicians who, with AI, can now solve unsolved problems, and they don’t need to get anyone at a PhD program at a leading institution interested in their problem in order to tell that it’s correct.
They just have to have the Lean certificate and the proof is correct.
So, yeah, I think that’s a powerful thing.
If you think about journals, journals in math exist for this: the prestige of the review board tells you whether you should read something or trust it.
So I do think the notion of trust is fundamentally changed with tools like Lean.
Yeah. And I think that the open source software community has really solved this problem a long time ago.
So if you go on GitHub, one can simply open a pull request on some repository.
So now you’ve contributed.
That element of trust is not so present, you can just run the tests.
Also, when you talk about impact and prestige, you can look at the number of stars you have.
So if a repository is very popular, it gets forked a lot, it gets a lot of stars.
So you’ve disintermediated essentially any gatekeeper here; it’s totally open source, there’s no more trust required, and there’s a measure of impact.
And so I think math is going to start going the same way.
Previously, mathematicians relied on their social networks to figure out:
But with Lean, you can have a big math project, anybody can come and contribute a proof.
And if Lean accepts it, then it’s right.
If a lot of other mathematicians start to depend on that result, we’re going to notice:
And so then you start to measure the prestige that way.
So it would be very interesting if Lean is the one tool that allows you to go from kind of the cathedral style of development, with very closed networks, et cetera, to a more bazaar-style development that’s kind of wild west.
But Lean is like the computational certificate that everything is correct.
I wish I understood a little bit better, had a more intuitive sense for what exactly is going on with Lean still.
This is going to be hard, I think.
But in doing my kind of research, one thing that stands out is the kernel is really small.
So, in terms of what you need to trust, it’s a pretty small amount of core code that has been thoroughly vetted many times by many people.
So there’s kind of that level of understanding.
I think I would still love to have a little bit better sense because when you mentioned the three axioms, for example, it’s a little weird for people outside the field to be like,
“Oh, there’s two that are kind of bizarre and technical. And then there’s this one that’s like if you have a non-empty set, you can choose an element from it.”
And I’m like, that seems like common sense, but why was that ever controversial?
Is there a way to describe the sort of space of legal moves in math or in Lean in sort of— I don’t usually like analogies.
I often try to set this up as an analogy-free zone, but because I’m—I think I and a lot of others are going to struggle with the very literal understanding, maybe this is a time for an exception to my no analogies rule.
Is there sort of like a— I don’t know, like a chess analogy or something where you could say, like, here’s the pieces and here are the legal moves that you can make to kind of give people a little bit of a better sense of what it actually means to move through these spaces?
I think the chess example is perfect. So a theorem in Lean is something like, given this starting configuration of a chessboard, it is possible to get to this configuration. And a proof of this theorem would be listing the sequence of moves. And what the kernel is doing in Lean is saying for every single move that you claim is valid, it’s checking, “hey, does this rule exist in my rulebook?”
So the theorem says you can get from A to B. The sequence of moves is, okay, here’s the sequence. And the kernel is just saying, “yes, this step is right, this step is right, this step is right.” And now I’ve confirmed that I’ve ended up in a target state. So Lean is doing that, but of course, the individual steps are different, they’re mathematical steps, and they depend on one or more of these three axioms.
The three axioms, although they’re technical, they’re very short. So if you write them down as mathematical statements, they’re under, I think, each of them is under a tweet in length. Like the axiom of choice definition in Lean is maybe 10 characters, and the other ones are maybe 100. So they’re not very complicated, they’re just a little bit annoying to write in math.
And then people say, okay, well, if we assume these axioms are true, and they’re also common sense, just like a bit more complicated. And we’ve checked every single step against those axioms, then we say the whole proof is correct.
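Lean can actually report which axioms a given proof ends up resting on, which makes this step-by-step accounting concrete. A small sketch (the command is real; the example theorem is a standard one from Lean’s library):

```lean
-- The law of excluded middle is not assumed in Lean: it is *proved*
-- from the three axioms (via Diaconescu's theorem). Asking the kernel
-- to account for its foundations:
#print axioms Classical.em
-- output lists: propext, Classical.choice, Quot.sound
```

So “checking every step against the axioms” is not a figure of speech; it is a query you can run on any theorem.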
Could you give like a few examples maybe of like the pieces and the moves? Obviously, we can’t come anywhere close to being exhaustive, but what are the primitives in terms of the…
I’ll give a mathematical but simpler example of a primitive. So let’s consider first-order logic.
So the deduction rules you have are:
So let’s say you have a proof that says: if I have A, and I know if A then B, and if B then C, the theorem says C is true.
And the proof of that says:
A is true,
I have if A then B, which means B is true,
And then I have the step B is true,
I know that if B then C,
And then I can conclude that C is true.
So this is first-order logic, so it’s not quite the same as what we’re talking about in Lean. You can do more advanced types of logical statements there. But ultimately, that’s what’s happening.
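The chain just described transcribes almost verbatim into Lean; here is a toy version (my own, with arbitrary hypothesis names), where each function application is one “move” the kernel checks:

```lean
-- Given A, A → B, and B → C, conclude C.
-- hAB hA applies modus ponens once (yielding B);
-- hBC applies it again (yielding C).
example (A B C : Prop) (hA : A) (hAB : A → B) (hBC : B → C) : C :=
  hBC (hAB hA)
```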
I think it’s going to be hard to…
Essentially, the next step beyond that is just getting to Lean and the calculus of constructions and these axioms.
There’s actually one thing from when I learned it. People are also exploring the use of Lean to teach math. And I think right now it’s practical at the high school level, but you could see a world where it extends to middle school and maybe even younger, if someone’s precocious enough. But I think mathematics education will go from sort of like the chalkboard to the computer lab.
So there’s this thing called the natural number game where you learn Lean by deducing properties of like multiplication and addition basically. So for example, the commutative law, which is basically that
A plus B equals B plus A, right?
Or the distributive law, right?
A times quantity B plus C equals A times B plus A times C.
So you can sort of like discover and prove these fairly basic facts just using the core axioms and the Lean language.
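For a flavor of what that looks like, here is the commutative law proved by induction in plain Lean 4 (a sketch of mine, not the Natural Number Game’s own interface; the theorem name is arbitrary):

```lean
-- Commutativity of addition on the naturals, proved from scratch
-- by induction on b rather than by citing a library lemma.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp                                  -- a + 0 = 0 + a
  | succ n ih => simp [Nat.add_succ, Nat.succ_add, ih]
      -- push the successor outward on both sides, then apply
      -- the induction hypothesis a + n = n + a
```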
So that’s a good way, you know, if anyone just wants to like, all right, what is this Lean thing? Why is it useful? But I’m not a research mathematician. Dip your feet into it. I think I would recommend that.
And that’s been extended to harder things too. I think there’s a real analysis game now, for if you want to learn real analysis, which is very proof-based and is basically the foundation of calculus.
You can start with like basic facts about:
And then you can kind of keep proving more and more complex things.
That’s a great tip. I’m definitely going to bookmark the real numbers game and see if I can get my soon-to-be seven-year-old into it.
Hey, we’ll continue our interview in a moment after a word from our sponsors.
Want to accelerate software development by 500%? Meet Blitzy, the only autonomous code generation platform with infinite code context, purpose built for large, complex enterprise scale code bases.
While other AI coding tools provide snippets of code and struggle with context, Blitzy ingests millions of lines of code and orchestrates thousands of agents that reason for hours to map every line-level dependency.
With a complete contextual understanding of your code base, Blitzy is ready to be deployed at the beginning of every sprint, creating a bespoke agent plan and then autonomously generating enterprise grade premium quality code.
Grounded in a deep understanding of your existing code base, services and standards, Blitzy’s orchestration layer of cooperative agents thinks for hours to days, autonomously planning, building, improving and validating code. It executes spec and test driven development done at the speed of compute. The platform completes more than 80% of the work autonomously, typically weeks to months of work, while providing a clear action plan for the remaining human development.
Used for both large scale feature additions and modernization work, Blitzy is the secret weapon for Fortune 500 companies globally. Unlocking 5x engineering velocity and delivering months of engineering work in a matter of days.
You can hear directly about Blitzy from other Fortune 500 CTOs on the Modern CTO or CIO Classified podcasts. Or meet directly with the Blitzy team by visiting Blitzy.com. That’s B-L-I-T-Z-Y dot com.
Schedule a meeting with their AI solutions consultants to discuss enabling an AI native SDLC in your organization today.
The worst thing about automation is how often it breaks. You build a structured workflow, carefully map every field from step to step, and it works in testing. But when real data hits or something unexpected happens, the whole thing fails. What started as a time-saver is now a fire you have to put out.
Tasklet is different. It’s an AI agent that runs 24/7. Just describe what you want in plain English, send a daily briefing, triage support emails, or update your CRM. And whatever it is, Tasklet figures out how to make it happen.
Tasklet connects to more than 3,000 business tools out of the box, plus any API or MCP server. It can even use a computer to handle anything that can’t be done programmatically. Unlike ChatGPT, Tasklet actually does the work for you.
And unlike traditional automation software, it just works. No flowcharts, no tedious setup, no knowledge silos where only one person understands how it works.
Listen to my full interview with Tasklet founder and CEO, Andrew Lee. Try Tasklet for free at Tasklet.ai. And use code COGREV to get 50% off your first month of any paid plan. That’s code COGREV at Tasklet.ai.
And we haven’t really talked about Mathlib, but the Lean kernel is quite small. There’s an open source project called Mathlib, which you can kind of think of as the largest digital repository of mathematical knowledge.
So a lot of the famous theorems and results can be found in Mathlib, and those give you almost like additional complex moves, or algorithms, to prove your thing. So you can apply a theorem, and it’s almost like applying a function from a library. That can help you get to the goal.
Yeah, I think that people can understand what it is better. Just think of it like every math textbook in the world merged into one in a self-consistent way. So eventually, all of mathematical knowledge will be in this one repository.
And if you hit build on your computer, you’re going to be able to check it all from the foundations. If you have any question about any math concept, you just search for it, you click on go to definition, you can jump around. It’s really going to be the new foundation for math in the future. It’s pretty exciting.
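The “applying a theorem is like calling a library function” point can be made literal; a small sketch (assumes a Lean project with Mathlib as a dependency):

```lean
import Mathlib

-- mul_comm is a general Mathlib lemma about commutative operations;
-- applying it to the reals is just like calling an imported function.
example (a b : ℝ) : a * b = b * a := mul_comm a b

-- And as with ordinary code, you can inspect it, jump to its
-- definition, and see its full general signature.
#check @mul_comm
```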
I think mathematics is certainly going to change fundamentally—like how it’s done, how fast it moves. And I think to a large degree, it already has. And AI is just going to accelerate it.
The great thing about our timing is Harmonic really started when both of these things matured to a level of capability where you could start doing interesting stuff.
I think both of these matured to the level where you can start putting them together and doing really cool things. And I think we were just the first to see that. That’s how we came up with this concept of mathematical superintelligence, which really means the combination of formal verification and formal tools with artificial intelligence.
Funny story: as I was using Aristotle a little bit to try to wrap my head around all of this, I didn’t have the sophistication to pose any really interesting problems. So one challenge I gave it was to prove that two plus two equals four. And then I had to laugh when it came back just citing something from Mathlib, saying, “this is already proved in Mathlib,” and the theorem is literally the two-plus-two-equals-four theorem. So it was done. And I was like, yeah, that’s not exactly what I was looking for, but I guess I kind of got what I deserved there for asking it such a basic question.
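For reference, that query really is a one-liner in Lean, which is why Aristotle could simply point at an existing lemma:

```lean
-- True by definitional computation: both sides reduce to the same numeral.
example : 2 + 2 = 4 := rfl

-- Or discharged by Lean's decision procedure for decidable propositions.
example : 2 + 2 = 4 := by decide
```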
Did you use the web interface, or were you using the terminal UI?
I started by having Claude Code install the terminal version and was using that a little bit. And then somehow it tipped me off to the fact that there was a web interface, so after that I moved over to the web interface.
Yeah, that came out in the last week, and it’s probably a little bit more appropriate for those types of questions.
I think we wanted to roll it out on the terminal first, because I think it makes it a little bit more clear what the tool is great at. I mean, lots of things can answer two plus two equals four; even a calculator can answer that.
Yeah.
And I think for a while we were talking about how to describe what Aristotle is. I mean, it’s kind of like an amazing calculator that you can just talk to. So it has:
You know, something like ChatGPT or Claude is very expressive, but sometimes you have to double-check its work because it doesn’t have the verification. But really the intent is to put those together.
And it turns out that the first things people really want to be sure about and verify are the more complicated things. You probably found this out, but the complicated things, I think, are where you really start to have aha moments when you’re using it.
Yeah, let’s get into Aristotle. And I appreciate the time spent in remedial education. I think it’s beneficial, not just for me, but hopefully everybody will now be able to kind of grok what we’re about to get into much better with that foundation that we’ve laid.
So Aristotle has three core parts. I’ll just kind of sketch them and then you can, you know, give me the double click on them.
First, there is this Monte Carlo tree search type thing. I kind of think of that as sort of an AlphaGo-like structure where we are systematically exploring the space of moves. I guess that’s where I got the chess analogy, right? Is that I kind of was making this equivalence between Aristotle, at least that part of Aristotle, and AlphaGo.
And so it’s kind of maybe I can make this move. And then there’s this learned scoring function that’s like,
“Okay, does that move seem promising? Does this path of, you know, does this branch of all possible moves that I could make, does it seem promising? Do I seem like I’m getting closer to my goal?”
And with that, you can kind of grind things out, run deep tree search, right?
The second part in some ways to me jumped out as even more interesting and kind of, I really want to dig into the metaphysics of it a bit, because this is the lemma-based informal reasoning system, which I take to be sort of saying,
“Okay, if I have some really big mountain to climb, and it’s maybe so big that I can’t just grind my way… it’s maybe becomes impractical to grind my way through like all these small localized steps.”
It’s sort of guessing like what’s the base camps that I would want to get to along the way that are like the really good way points such that if I can get there, then I know I’ve like, I’ve made it somewhere.
But that’s really interesting, because it strikes me as behaving a little bit more like a language model, where it’s kind of guessing and not so formal. I mean, it says in the technical report that it is an informal reasoning system.
And then there’s a third part, which we maybe don’t have time to go as deep on, which is specifically dedicated to geometry. Again, in the technical report, you described that as being like AlphaGeometry, which I think DeepMind developed.
So correct any misconceptions that I have there and give me the double click on what, like what more I should understand about how this thing works.
Sure. I think you covered the components pretty accurately. One thing I have to say is that we revamp our systems pretty often here, so Aristotle now looks quite different than the Aristotle from the IMO. A lot of things have been consolidated and improved.
I think that you made this point about the Monte Carlo tree search being more of a grinder. I wouldn’t quite characterize it that way.
So the Monte Carlo tree search is actually doing a lot of inference on its own about high-level steps. The lemmas we’re talking about are much closer to solving a challenging math problem than to proving that two squared equals four. So there’s a lot of reasoning that goes into them.
In some sense, it’s grinding once you get low enough in the search tree because you’re just closing out cases or easy subproblems. But it’s really solving harder problems on its own.
And so when we combined it with the informal reasoning system, you could almost think of it as a form of context management, actually. So ultimately, you need to end up with a lean proof, and that’s going to involve big steps and small steps. And it’s helpful when you’re focusing on the smaller steps to not have to remember the entire context of the bigger steps.
And so it turns out the informal reasoning system itself actually makes enormous quantities of mistakes. So one should not think of it as,
“oh, it’s a really smart human that’s laying out the steps to base camp.”
It’s more like a system that can propose lots of things that are wrong and don’t have to be formalizable or even correct. And you kind of try to assemble things from that.
So you can think of both of them as kind of doing the same thing, just at slightly different scales and complementing each other. And they’re actually all LLMs. So as we described in the tech report, the tree search itself is driven by language models.
Part of the language model is proposing steps. Part of it is scoring steps. But they work in concert to solve the lemmas and then eventually the full problems.
And as you mentioned AlphaGeometry, that’s a slightly different system. We’re exploring kind of high-level steps and then trying to use an algorithm to grind through the rest of it. If we’re talking about systems grinding through a lot of math, I would say AlphaGeometry, in its deductive reasoning system, is really a grinder. It’s really trying to find every possible conclusion of a geometry diagram.
I would say there’s not too much pattern-recognition intelligence going on there. And that’s because geometry, if you think about it, is more constrained: you basically start with points.
If you have three points, there’s only so many angles involved. Obviously, if you go to like 10 or 15 points, things blow up pretty quickly. But it also then becomes hard for humans to solve. And I think that’s why geometry was among the first class of competition problems to fall to AI and automation.
I think there’s also a couple of other components that might seem simple but are non-trivial that Aristotle, the system, does and are independently improving.
One is autoformalization: taking input that you provide in natural language and faithfully translating it into Lean in the best possible way. And relative to our competitors, at least, I’m not aware of anything that’s as good at that as we are.
And also theory building: sometimes in the way of solving something, you have to create new theories and new structures that might not exist in mathlib. Aristotle has the capability of actually building that on the fly and incorporating that into the proving process.
Another funny anecdote. So what I discovered is what you’re referring to as informal mode, right? Where, and I think real users would not do this, but you can provide any natural language input, just something that the system will then try to prove.
I asked it to prove all is love. And it came back and said,
“this is a philosophical statement and outside the scope of the Lean formalizer’s ability to prove.”
I also asked it to prove
“Epstein did not kill himself.”
And it came back and said,
“this is a statement about current events. And again, it’s sort of outside the Lean formalizer’s ability to prove.”
But yeah, I think this kind of gets back to this sort of metaphysical question that I find so perplexing around that translation from the messy real world of human affairs and intuitions to the formal definitions of,
“okay, this is actually the thing that we would want to prove.”
I did find it very, very interesting that you had such a thing at all. And I also want to get into a little more detail about how you created the models and all that. But on my spectrum from two plus two equals four to all is love, how do you think about the intuition for where the boundary is? Because in listening to your previous interview with the Sequoia folks, it seemed like you had the sense that eventually, as systems like this get capable enough, more and more things that are of interest to everyday people will become the sorts of things they can do.
So how do you think about that boundary, and how does it expand over time?
I think the ultimate boundary of a system like Aristotle is reasoning through any problem where people can also agree on what it means to be a valid sequence of reasoning steps. Right now you have math; that’s one obvious one. And when we talk about mathematics being the same as reasoning, that chess example you gave is a perfect one: you can express the logic of a chess game, then check it and reason about it.
I think one area that’s really going to touch a lot of people’s lives is that it turns out you can use the same reasoning approaches to think about software. When people write software, they write these things called unit tests.
And that’s kind of having the computer just run the program and check the output against what they expect. But that’s what they do after they’ve written the code.
It turns out that when engineers are writing code, they’re thinking logically:
“Okay, if I have this range in my input, I can think okay, as I go to this for loop at these if statements, it implies certain things about the output.”
And that itself is logical and mathematical reasoning. So we’re starting to see API users reason about programs in the same way they reason about math. People are writing cryptography implementations and then formally checking them.
I think the same kind of input that will go to software will help take us to a bug-free software future.
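To make that concrete, here is a minimal sketch of what "reasoning about a program the way you reason about math" can look like in Lean 4. The `clamp` function and the theorem name are illustrative, and the sketch assumes Mathlib is imported for the `le_max_left`, `max_le`, and `min_le_left` lemmas:

```lean
-- Illustrative sketch (assumes Mathlib): an ordinary function plus a
-- machine-checked guarantee about its output range.
def clamp (lo hi x : Nat) : Nat := max lo (min hi x)

-- If lo ≤ hi, the result always lands in [lo, hi], for every input,
-- not just the inputs a unit test happens to try.
theorem clamp_mem (lo hi x : Nat) (h : lo ≤ hi) :
    lo ≤ clamp lo hi x ∧ clamp lo hi x ≤ hi := by
  unfold clamp
  exact ⟨le_max_left lo _, max_le h (min_le_left hi x)⟩
```

Unlike a unit test, which checks one concrete input, the theorem quantifies over all inputs at once.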
Now, Vlad and I disagree a little bit. It's not clear to me if we'll be writing history essays or something; maybe there is a way to evaluate them objectively. But I think the boundary is really anything that's quantitative and logical in nature.
Yeah, I think even the first version of Aristotle would actually formalize and build a theory for your all is love example, and give you a correct proof that it's provably true.
I think it surprised us. People were asking all sorts of questions, across all sorts of domains.
And Tudor mentioned computer science. So I think it’s actually surprised us how broad of a set of things it can successfully create a theory around and formalize.
I think the constraints we put were just, you know, when you’re building a product, you want to make sure that you deliver value. At this point, I don’t think we provide the most value if you want to write a history essay.
So we’re trying to nudge people to the point where they can discover what Aristotle is really, really good at as quickly and simply as possible. I think over time, you should expect that the surface area increases.
We start formalizing things. And I don’t think it’s inconceivable that at some point it pulls current events and news from the internet, puts out the axioms, and can sort of fact-check and make conclusions based on real-world events.
Not our focus right now, but I don’t think it’s a crazy thought.
I mean, I ask a question sometimes. I’m interested in astronomy, right? And I wanted to know:
“When’s the next full solar eclipse that I can see from within 50 miles of Palo Alto, California?”
The models usually struggle with this type of stuff because nobody’s asked that identical question out on the internet, so they can’t pull it. You actually have to do some math.
So you can imagine there’s a spectrum and there are questions like this that a model that can reason actually from first principles is going to be way better at.
Okay, let’s talk about just how you created this thing a little bit and how your experience, lessons learned, et cetera, kind of relate to some of the live questions more broadly in the AI space. I think you can take on faith that folks listening to this show will be familiar with things like reinforcement learning from verifiable rewards and stuff like that and certainly understand kind of how the ability to generate synthetic data feeds into a system like that and that’s, I’m sure, part of what you’re doing.
What more can you tell us in terms of like, would it make sense to start training something like this from some off-the-shelf pre-trained model or does that messiness that those, you know, LLMs start with corrupt or pollute your, the purity of the mathematical reasoning too much? Can you tell us anything about size of models, which could be parameters, could be tokens, whatever?
I’m interested in things like also, is there any role for taste in this process? Obviously, like mathematics, mathematicians are very interested in correct proofs, but they’re also interested in these eureka moments and the sort of sense of elegance of the proof, right? There’s a sense of the beauty that, you know, matters as much, I think, to many people as the correctness or maybe not as much, but, you know, it’s certainly heavily weighted.
And then I also noticed there’s test time training that’s part of this, and I think that’s, you know, a huge trend that I’m kind of watching in general.
So, you know, you can swing or take any of those pitches, but what do you think are kind of the most interesting next level of depth that people can use to inform their own AI worldview with?
Well, first, I have to say that if your audience knows about reinforcement learning from verifiable rewards, you’ve got a great audience.
“That’s not betting data.”
Yeah, that’s not. So, I think that is a safe assumption. Nobody was talking about that stuff, right? It was like science fiction almost, but it’s cool to see it entering the popular consciousness.
I want to address the taste question, because that actually, you know, strikes at a key thing that, you know, companies can decide on.
So, we get gold performance at the IMO, we have a very powerful system, and it was obvious we had to give it to people.
And there's two ways you can do it. One is you keep the system in-house and decide for yourselves which problems are worth its time. That's one way of expressing taste in the research roadmap.
The other way, which we ultimately decided to do, and we think it’s been great for the community, is we said, well, we’re not going to be the ones to decide what’s important in math. We’re going to make Aristotle accessible to everyone.
And so, we opened up the API, the web interface, there’s a lot of great features coming.
And then, in this scenario, taste is expressed by the community by the revealed preference of what they submit to the API.
So, we don’t choose what kind of math they do.
We’re not saying, hey, Navier-Stokes is more important than P versus NP.
It’s the mathematicians that have the credits on the API to say, well, we care about X or some other thing.
And that's why we've seen so much interest in things like the Erdős problems. And for a while, there were people doing a lot of interesting conjectures in graph theory on the platform.
And I think that that’s actually the right way for companies to engage with the community.
You know, you open the system and you let the people decide where they want to allocate those compute resources.
So, I think that’s an important decision. We’ve come on one side of it, but I think that’s the right long-term approach.
I think there’s a philosophical question there, too, which is, are we headed for a future where the AI labs themselves are going to generate all the discoveries?
Will the cure for cancer or diabetes look like a giant AI lab with a two gigawatt data center just churning on this problem? And then, you know, it comes out and they capture all the value?
Or does it look more like millions of people empowered with these tools working independently and collaborating and, you know, in that world, they’ll get the credit and the value will largely accrue to them?
And I think we believe that the second world is more interesting and it’s probably the one that’s more likely.
The first one is rather dystopian and less likely.
And I think we noticed that because when we rolled out Aristotle, you know, we had one view of what people would use it for, but then we started getting all of these, you know, Erdős problem results and things like that.
And it's like, we're not going to run it on all the Erdős problems ourselves. We're not going to do computational learning theory formalizations in-house.
So I think the amount of cool things being done with it just explodes if you make it generally available. So it's not only right from a business strategy standpoint; the world that we build, assuming this path, is also a better world, one that I would like to live in.
So that speaks to taste in terms of problem selection.
But I was also just thinking in terms of, as you’re training the model, you’ve got the correctness signal, but maybe one sort of heuristic for elegance would be like just brevity.
Which is maybe one kind of way of trying to send an elegance-like signal through a deterministic mechanism. But I would be very interested to know if there is like a panel of mathematicians that you guys have reviewing solutions for elegance to try to make sure that this thing is not just a pure grinder long-term, but really has a more eureka flavor to it.
Well, brevity—if brevity is the definition of elegance—then our two plus two equals four proof probably takes the cake, right?
“I can’t get any shorter than that.”
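For reference, in Lean 4 that proof really is about as short as proofs get: both sides of the equation compute down to the same numeral, so reflexivity closes it.

```lean
-- The whole proof: both sides reduce to the same value.
example : 2 + 2 = 4 := rfl
```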
I would feel bad for any mathematician whose job it was to compare AI proofs. That's certainly not the job I'd want. So, we never have. That kind of human evaluation is a big business these days across all domains.
Yeah, we have done essentially zero of that in the two years we’ve been around.
I think the metric we optimize for is the net present value of future proofs or computational costs of future proofs. And so that guards very naturally against certain phenomena.
When you’re solving easy problems early on in reinforcement learning, you absolutely can solve them with grinding. So you can say,
Let me just do brute force.
But you know that if you do that, it’s going to cause issues later because you haven’t learned how to do more complicated things.
In contrast, if you’re given two proofs that are not grinding, but one is drastically longer and more inefficient than the other, you prefer the more efficient one.
So there’s a tension there because you can get more efficient by grinding, but that messes you up in the future. So it’s a balance that our AI researchers strike based on their intuitions about what’ll be helpful long-term.
But we have never had panels of mathematicians do testing on proofs or anything like that. Really, you want to give your system as few priors as possible and just run reinforcement learning at scale.
There’s a famous essay called The Bitter Lesson, which I’m sure your viewers are familiar with. We really believe in that at Harmonic.
To get to your question about how we started: sometimes we’ll start from pre-trained models. Ultimately, you want to do whatever optimizes that net present value of future cost of proof. So pre-trained models are great for that.
I think at some point you might ask the question,
“Is that going to bias you too much towards how humans do math?”
And so you want to mix in reasoning systems that are not trained from human knowledge, right? They have more entropy and more complementary knowledge.
That kind of thing we always play with, but it hasn't really been the limiting factor so far. I think that pre-trained models are a great starting point.
Cool. I guess one thing: Goodfire just announced today that they raised a bunch of money at a unicorn valuation. I was a very small-scale supporter of theirs, and it got me thinking.
This also connects to Vlad’s comment where you said the system can sort of invent new theory.
Obviously, one big thing people have said AIs can’t do, or AIs can never do—which is always a dangerous position to take—is that they can’t come up with new abstractions.
Sure, they can learn from what we have done and what we've encoded into language, but will they ever come up with their own abstractions? I think that's an increasingly hard position to defend.
But what is so interesting with Goodfire is they’re now starting to look at model internals and unlock new kinds of understanding based on looking at what the model has learned.
The famous one they just put out is like new markers of Alzheimer’s that people didn’t know about, but the model was able to figure out, and they were able to figure out what the model had learned by looking internally.
I'm kind of wondering: is there anything analogous you might learn by looking inside Aristotle's internals?
I mean, one of the things that I’m very excited about is eventually Aristotle powering a spacecraft, right? Much like HAL 9000, but a benevolent one, one that doesn’t go crazy. So, yeah, I think eventually you’ll see it expanding into more real-world things.
I think the… I don’t know if you’re as excited about that. A safe HAL 9000. A safe HAL 9000, I think, would be very valuable.
You know, to your question on interpretability, I think that interpretability is often used as a proxy for trustworthiness. So, a lot of the reason that people explore interpretability technology is that they can make sure that the system does the right thing or aligns with the user’s intent.
So, when it comes to trustworthiness, we made the explicit decision at the very beginning of the company to focus on Lean. By outputting our reasoning in a formally verified way, that is the most interpretable possible output. So, the computer can check it. If the human wants to understand how the proof works, they just keep hitting “go to definition.”
It’s almost like navigating through a code base. There’s no more interpretable way to output math than in Lean, really. That’s the maximal version.
So, now the question is, okay, well, how interpretable is the model? I think, in the context of the bitter lesson, we just focus on letting the system do whatever it can to optimize for computationally cheap proofs of more and more complex things, with a caveat that it has to output in a way that’s verifiable.
I think down the road, we’re very curious, how does it do math? How is it so smart? And we’ll look into that. But for us, we’ve solved the trustworthiness question upfront by focusing on formally verified output.
Yeah. Okay. That’s quite interesting.
I do sort of feel like, I have this one kind of mental—mathematicians are famous for visualizing things—my kind of visualization of what is happening in a large model is sort of like shrink-wrapping reality.
Like, you’ve wrapped in plastic all of, you know, all internet data or all the kind of whatever domain it is that you’re trying to learn at scale, and you’re just sucking all the air out of it and gradually shrinking down to whatever, hopefully, is kind of the true structure.
And it strikes me that in math in particular, that structure might be amazingly simple. Or, there might be really interesting things to learn by running that process and then kind of cracking it open and seeing what is inside.
I would expect it to be maybe a lot more interpretable internally than something that has had to learn all internet data and can recite Wikipedia and all that sort of stuff.
I actually think that what these models are doing is interesting because they’re smashing together all of the techniques that all mathematicians have done before.
And so, while I haven't seen the spark of superintelligence yet, some breakthrough eureka idea that's incomprehensible, I'd say that if you want to learn how the models do things, you ask them to solve more and more complex problems and just see what they do. I think that'll be a lot more interpretable and comprehensible than trying to dig through the way the model is structured. I might be wrong, but that's probably where I'd start in interpreting how it does things.
Yeah.
So does that mean maybe we can kind of look at different levels of difficulty of problem?
We’ve got the Erdős problems.
There’s definitely a phenomenon happening right now where people are using either Aristotle by itself, or—I’ve also seen a lot of examples, not that many, but increasingly more, of GPT 5.2 Pro to sort of generate a proof in token space, then bring it over to Aristotle for formalization.
Then there’s, of course, the IMO.
If I understand correctly, everybody who got gold-level performance (and I think it was just three, right? You guys, OpenAI, and DeepMind) missed the same one question, which is really interesting to me.
I’d be interested in your thoughts on,
“why that—why so consistent?”
And then, of course, we’ve got these extreme problems where you would need this sort of move 37-like moment to solve them.
So maybe kind of sketch out how you think about that spectrum.
I mean, I think – so on the IMO, the three labs that announced gold medal performance—us, DeepMind, and OpenAI—all missed question six. And I think that it wasn’t super surprising to us because question six is probably, I don’t know, 5x harder even for humans, right? It’s just a more complex question with lots of steps, and it requires this type of spatial reasoning that right now is more difficult to encode in formal systems.
We were running our system on it quite a bit, and we felt like we saw signs of life. So it’s definitely not inconceivable that before too long, question six is going to fall and be gobbled up just like the other questions. I mean, even one year before, questions three and five would have probably been well beyond reach for most of the models. So I think it does appear to be more or less a smooth exponential.
Yeah, I agree with that. I want to highlight that there’s two aspects of this.
So I think years ago, if you had asked someone, "Hey, could you automatically formalize a number theory paper in Lean or Rocq or Isabelle, these languages?" you'd have been laughed out of any room of mathematicians you'd be in. And today, we are seeing people upload the full text of a math paper and run Aristotle a few times. We're thinking of adding a "Ralph" button to just keep going, keep going, keep going. And then you get a formal version of it.
I think that phase transition has essentially come and gone now because of Aristotle. So in the next couple of years, as AI keeps improving, the fact that we can now formalize the AI's arguments frees humans from being the verifiers, just sitting there and checking whether some output is correct, and lets them be the tastemakers: we're the ones setting which problems to work on and deciding whether we're happy with the techniques used. That, I think, is the interesting transition that's happened. Smooth exponential capabilities, but I think we've gone zero to one on verification.
I think that’s such a great point because I think there was some debate about this at the beginning.
And in a way, if you look at DeepMind, they started with formal, with AlphaProof, which was the silver medal-winning model back in 2024. It was a great result at that time, and that was a formal model. And then they went back to informal for Gemini this year, and I’m sure they ran AlphaProof. Maybe it was just that AlphaProof didn’t do as well. OpenAI, obviously, informal.
But if you think about, okay, let’s say we go to a world five years from now, and the autonomous math being done by AIs increases. Instead of five to ten-page proofs, you’re starting to produce 5,000-page proofs, which you should assume, right, as these models can autonomously reason more and get more efficient, they’ll produce longer and longer output per unit time. It’s going to be a proxy for complexity.
Who’s going to review that? Nobody’s reading a 5,000-page math proof. So I think it’s becoming even more clear that the future is formal because you have this problem of someone having to validate it and check it. And we want to make sure that the time to validate it and check doesn’t actually grow linearly with the complexity of the proof.
Yeah, that was really the founding thought experiment of Harmonic.
So we asked ourselves in 2023:
- These models can do high school math, poorly. But a year ago, they could only do elementary school math, poorly.
- So what happens in 10 years if we ask one to prove the Riemann hypothesis?

Any model will make an attempt at it and give you 100,000 pages of output, which you might as well throw in the trash for two reasons: you have no way of checking whether it's actually correct, and you just can't wrap your head around what is going on in that proof.
And so there were two hypotheses, both of which have been proven out: that formally verified output is how you trust machine-generated math, and that a formal approach could hold its own against the informal ones.
If you compare the resourcing we’ve had compared to the big labs, we’re punching well above our weight at the IMO. So I think, in our view, the debate on formal versus informal is settled. I mean, clearly, it’s going to be formal.
One can debate, okay, what’s the most efficient way to train a model? There’s some aspects to informal that are helpful, but I don’t think we’re ever going back to a world where we’re like, “oh, it’s just going to be informal from here on out.”
I think the interesting question, though, is to extend this to software, right? Because the same things actually hold for software that hold for math.
Let's say AIs are getting to the point where they can autonomously work on and create a software project over a period of a week or multiple weeks. Who was it, the Cursor team? They ran this and generated a Chromium-compatible browser, something like one and a half million lines of code. It was incredible.
So who’s going to read that code and find all the security vulnerabilities and the bugs? And is that code in the future that’s generated by AIs going to be in Python and Java anymore? Like, why would it be in Python and Java? Those are just languages optimized for human readability.
And we think the answer is the same whether it's a human reading and trusting something or another AI the model is collaborating with checking it: you want to make the cost of verification as low as possible. And that makes us believe that the future of software is formal as well, and that more and more software will be written in formally verifiable languages.
Yeah. And I think, you know, Lean is our favorite language. It would be amazing if everyone can write in Lean. I think that as AI writes more and more code, it will be easier for people to accept that. But we’ll see.
And it'll start with mission-critical, important stuff where bugs are much more serious and much more costly. There's a bunch of domains that already do formal verification for software, but they do it in a very artisanal way.
You know, they're hiring Lean or Rocq or Isabelle experts and painstakingly formalizing stuff. So I think you'll start to see AI accelerating the work of those people first, but then it'll diffuse and you'll see formal vibe coding before too long.
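One small sketch of what that "code plus proof in one place" style looks like in Lean 4, with an illustrative function and theorem (the list-length lemmas used by `simp` are in Lean's core library):

```lean
-- Illustrative: an executable function...
def rev {α : Type} : List α → List α
  | [] => []
  | x :: xs => rev xs ++ [x]

-- ...and a machine-checked property of it, living in the same file:
-- reversing a list never changes its length.
theorem rev_length {α : Type} (xs : List α) :
    (rev xs).length = xs.length := by
  induction xs with
  | nil => rfl
  | cons x xs ih => simp [rev, ih]
```

The same toolchain that compiles and runs `rev` also checks the proof, which is the sense in which verification can ride along with ordinary development.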
Yeah, I love the term vibe-proving, by the way. Yeah, I think that vision is an incredibly compelling one. And, you know, it’s also one that I’m still kind of wrapping my head around.
For listeners who haven’t already heard it, I did one episode with Kathleen Fisher, who was at DARPA, and I think now has just moved to ARIA in the U.K. to lead their whole operation. And Byron Cook, who’s like a legend of the formal methods field at AWS. And, yeah, they’re kind of right there with you, you know, envisioning this world of basically totally verified, bug-free software, starting with mission-critical stuff, but potentially extending to everything over time.
I guess one – so I think that is super compelling.
The one kind of nagging – I don’t know if it’s a worry that I have or what exactly, but I’ll just frame it as a question – is, like, if we are training an AI to be superhuman at formal reasoning, within the formal reasoning system that we have,
how do we get new abstractions from that or how do we get a sort of Einstein kind of moment where, you know, like, it seems that at some point we all sort of thought the world was just naturally 3D and that was, like, obviously intuitive.
And it's kind of come to light now that that was an adaptive understanding of the world, one that served us well as monkeys and allowed us to survive. But at the end of the day, we now know it's a lossy approximation of true physics.
And so I’m kind of like, do we have any room for doubt or worry that the math that we have now, as sophisticated as it has become, might also at some point prove to be not quite the right paradigm? And is there any way – if you’re training in this, like, purely formal way, is there any way sort of to punch your way out of the box as an Einstein did, right? He seems to have –
“The fourth wall.”
So he broke the fourth wall conceptually, but the key thing to remember is that he was able to describe his theory rigorously and formally in the framework of differential geometry.
So the point I was making earlier about math being reasoning is the point I’ll appeal to now, which is to say that no matter what complicated theory somebody might come up with to explain how the universe works in the future,
If it's going to be based on a series of logical deductions that can be explained to someone else and checked independently, that is itself logic that can be encoded in Lean, or other languages like Lean, and then verified. And again, the axioms Lean is based on are so minimal, expressing just the most basic possible common sense about how reasoning should happen: one thing might follow from another, or if two things look the same, they are the same.
That's the level of axiom we're talking about. So I really don't think there's any conflict here. I think one should just think about formal reasoning as an especially detailed version of informal reasoning that a computer can check automatically. There's no limitation to it. Sometimes it might be a little more verbose than you'd want, so you write tactics and things to cut down on that, but there's really no fundamental tension there.
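One way to see how small that axiomatic footprint is: Lean can report exactly which axioms any given proof depends on. A brief illustrative snippet (the theorem name is made up for the example):

```lean
-- A basic fact, proved by induction via the core library.
theorem my_add_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Ask Lean to list every axiom this proof rests on; for a purely
-- inductive proof like this it typically reports no axioms at all.
#print axioms my_add_comm
```

So the trust story bottoms out in a checker whose assumptions you can enumerate on demand.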
And I think there, you also, you know, might be thinking about Gödel’s incompleteness, like the fact that in any sort of axiomatic system, there’s statements that are true and unprovable. And there’s also statements that are undecidable, right? And independent. So there’s sort of like a bunch of edge cases here, but I think it doesn’t prevent us from making a lot of progress and proving actually the lion’s share of useful things. I mean, there could be things that are unprovable but true that are very, very useful to know as well. But, yeah, no way to know unless you explore the frontiers.
Do you think there’s always going to be a role for entropy of some sort in these systems? I mean, I think hallucinations are a key part of a reasoning system. Hallucinations are what allow a model to explore something that has never been encoded by a human before.
So, you know, when we run Aristotle, whether it was at the IMO or now, it makes a lot of mistakes. It tries a lot of paths that don't work. But that exploration is the very thing that lets you get to the right answer after enough attempts. So entropy is crucial. I think this whole notion of seeking fundamentally hallucination-free LLMs doesn't really make much sense.
Now, of course, you want to pair them with a system like Aristotle that can verify things in Lean. But, no, I think entropy and hallucinations are a key part of the training process for models like this. You've got to be able to pose false statements in order to prove that they're false. It's like humans: you try a lot of things that don't work. Some of the most creative humans are the ones that hallucinate the most.
So what’s kind of the latest progress on the path to superintelligence? You said you, and I think this is true of all good frontier AI companies, whether, you know, at the application layer or the model layer or anything, any hybrid of those, you know, you’re updating your systems frequently. It sounds like there’s kind of a convergence of some sort going on between the tree search part and the informal lemma guesser that you described in the technical report. What can you tell us about kind of what the trends are right now?
I think a lot of the... well, just to review the progress: we started in 2023; then in 2025 we got gold performance at the IMO; we topped out the Verena benchmark at the end of the year; and our public API users started solving Erdős problems, right?
So I think there's a very clear trend in capabilities, right? And I think the phase transition I mentioned has also happened.
So I think what's next for Harmonic, and for the field at large, is a couple of things.
Well, we can expect Mathlib to grow. Mathlib is, think of it like, the Wikipedia for math that's computationally certified. So as Aristotle makes it possible to auto-formalize a lot of math, you can expect that users will start contributing a lot of pull requests to Mathlib. And that makes it possible to solve more and more problems on top of that base.
I think when we look at how mathematicians are using our API, certainly people are starting to work on more important unsolved conjectures that a lot of people would care about.
So you can kind of think about conjectures as like,
“Okay, there’s a conjecture that’s technically been open, but nobody really cares about it.”
So it’s not like people are trying all the time, but now you might have some conjectures that, yeah, like a mathematician might try it once or twice a year, just take a shot at it. Maybe a hundred mathematicians would.
And then eventually there are the Millennium Prize problems, where any mathematician would be happy to spend years if they might be able to solve one. So I think what you can expect from Aristotle and other systems is that more and more problems get picked off, it becomes easier to use, and it extends to software, as I mentioned.
So we have users using it to check critical software, whether in Lean or other languages.
And overall, if I had to pick out just one trend, it’s really just that formal reasoning goes more and more mainstream. So as more stuff is produced with AI, I think you’ll see complementarily more formal reasoning to kind of verify all of it.
And I think on the product side, we’ve gotten a lot of feedback coming in from the folks using it. Obviously, whenever you’ve got customers that are using a technology like this, they’re very passionate.
So there's lots of ways in which they're still complaining about things, and we're improving the ergonomics, making it so that people don't have to hop between so many different tools and so that we solve their problem as simply as possible and at the lowest possible cost. You should see that continue to improve.
There have been updates to the system pretty much on a daily basis. Maybe you’ve seen some of them just as you’ve been kind of experimenting yourself. But that is going to continue. And you should expect that it gets exponentially more useful over time.
So maybe a good place to close is kind of the vision for what that looks like as you succeed. I mean, obviously, one thing is solving Millennium Prize problems. But I’d love to get a little bit more of kind of an intuitive understanding than that.
I mean, one dichotomy that kind of comes to mind is this very formal reasoning-based paradigm versus what I think of as intuitive physics. It does seem like models are very good at developing intuitive physics in kind of any number of spaces.
Right. Like folding a protein with a model is not something that's done in a formal way. It's just that, with whatever mess of heuristics they've learned, they can fold a protein orders of magnitude faster than we could through a physics-based simulation approach. So when we think about the limits of math and what a mathematical superintelligence looks like, I also think of something Eliezer once famously, or at least famously to me, said:
“A real superintelligence in his mind could look at one still image and deduce all of physics from just the information contained in that one still image.”
That kind of also connects, I guess, to test time training.
What is your vision? You can bounce off any of those concepts, but what is your vision of how this thing evolves? Is it an ever bigger tower of formal statements? Is there some role of new kinds of intuition, new abstractions that emerge out of that that aren’t so strictly defined but potentially useful?
You know, what is this thing doing in 2030 once all the Millennium Prize problems are solved?
I think that by 2030, we will have theoretical explanations for everything, basically.
I mean, if you look at the history of science, there are leaps of intellect and leaps of data.
Right now, there’s really been a shortage of people that are able to reason logically at the highest level.
So when you think about unifying general relativity and quantum mechanics, it’s just a very hard thing to do.
I think what you’ll see is really like anything that can be posed mathematically, which is what underlies all of science, we’re just going to get theories for everything that are self-consistent and make sense.
I think we’ll then go back into a regime where we’re data limited. So, we might have maybe five theories that unify QM and GR, and we’ll have to run very high energy experiments to figure out which one is right.
We’ll have to wait a while to build those colliders. But at the very least, we’re not going to be bottlenecked anymore on wondering, “Can we explain something?”
We’ll have a system that can explain anything perfectly correctly. So it really will be a renaissance of science. You just remove the intellectual bottleneck in everything.
So do I understand that correctly? Basically, you’re envisioning:
Yeah, because AI is not omniscient. Whether it’s our model or others, they’ll be able to reason about anything they can kind of ground in their own logical deduction rules.
But ultimately, there are aspects of the universe where you just have to run the experiment and find out how it really works.
Wow.
Just to be clear, I think there's a lot of utility before you get there — that's just where I think we land asymptotically, is my point. Well, I mean, we've heard about centuries of scientific progress collapsing into five years. That sounds more like a few thousand years of scientific progress, perhaps.
All of that will happen, and then you just have to get more data. But you'll have a superintelligent system that can help you. Wow. Okay. That's about as grand a vision as I've heard anywhere.
Do you guys worry about the safety of these systems? It sounds like we haven't really talked about that at all in this context, but I've done many explorations of different safety concerns.
You know, when Eliezer described whatever AI he was envisioning — the one understanding all of physics from a single image — he also thought it was going to be super dangerous, because it would be so powerful.
How do you guys think about that aspect of all this? I mean, we're talking about a lot of stuff happening in the next five years.
I mean, I think right now we’re not so worried about it because the outputs of our system are constrained.
I think the first dangers will probably look a lot like cybersecurity incidents, right? Because, you know, you have models making API calls and running autonomously, interacting with other systems.
So that both creates API-level cybersecurity holes and the mechanisms to exploit them. So I think you're likely to see a lot of those.
I think for our model, since the interface to the outside world is tightly constrained — it's not just going to fire off a request to your Gmail account or the iMessage APIs — we're a little bit further away from that. But, you know, we're going to have to start taking it much more seriously when we do get to a point where we're connecting the model to the outside world and the interfaces are not just Lean files being outputted.
Yeah, I do think a constrained action space is certainly one of my favorite paradigms for keeping things under control. But, I mean, there's the whole Moltbook, Moltbot thing that has been fascinating to watch. And, you know, I think we're entering a strange new world for sure.
And I think the benefit is we’re probably not at the danger frontier. So we’ll have the opportunity to learn from others’ mistakes, and hopefully they don’t screw up too badly in order for us to learn.
Yeah, okay. This has been fascinating stuff, guys. I think the approach is really interesting.
The vision for how far we can expect to be — or even somewhat entertain the possibility of being — by 2030 is arresting: both inspiring and, for me, a little bit scary.
Anything else you want to leave people with before we break?
I think for me, and you kind of see this in the values that we put on our website of what we care about:
You know, that's what we believe in, and the future that we're helping bring to life.
Yeah. And just to add to that, for me, when I started using Aristotle, it was very different to have an experience where the output is always correct. So I think if people haven't experienced that before, they should just try it out. It's free to sign up for.
Cool.
Well, there’s, I’m sure there’ll be plenty of ways to monetize mathematical superintelligence when the time comes. We might do ads, you know.
Yeah. I can’t wait for that.
All right. We'll bring those Anthropic ads to life.
Fascinating stuff, guys. I really look forward to watching your progress. Thanks for both the remedial education and a grand vision today. It’s really extraordinary. What a time to be alive.
Vlad Tenev and Tudor Achim, co-founders of Harmonic. Thank you both for being part of the Cognitive Revolution.
Thanks for having me. Pleasure to be with you.
If you’re finding value in the show, we’d appreciate it if you’d take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube.
Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network — a network of podcasts, now part of a16z — where experts talk technology, business, economics, geopolitics, culture, and more.
We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcast.ing.
And thank you to everyone who listens for being part of the cognitive revolution.
2026-02-12 08:00:01
The programming language after Kotlin – with the creator of Kotlin
Why would anyone create a new programming language today if AI can already write most of your code?
Andrey Breslav has an interesting answer.
Andrey Breslav is the creator of Kotlin, a language that runs on billions of Android devices and is one of the fastest-growing languages in the world. Today we cover how Andrey designed Kotlin by deliberately borrowing ideas from Scala, C#, and Groovy, and why he considers leaving out the ternary operator one of his biggest regrets.
We also discuss why making Kotlin interoperate seamlessly with Java was a gigantic undertaking, and what it took to get it done. Kotlin adoption went through the roof after Google announced it as the official language for Android, in a move that even took Andrey and the Kotlin team by surprise.
Andrey’s new project, CodeSpeak, is a new programming language built on English, designed for an era where AI writes most of the code. If you’re interested in the future of programming languages from someone who built one of the most loved languages of today, then this episode is for you.
This episode is presented by Statsig, the unified platform for flags, analytics, experiments, and more. Check out the show notes to learn more about them and our other season sponsors, Sonar and WorkOS.
Andrey, welcome to the podcast.
Hello.
Thank you for having me.
It is not often that I meet someone who designed such an influential language across mobile and backend. So let’s start with: how did it all start?
Okay, so that was a little messy because I went to school back in St. Petersburg, studied computer science, and I didn’t really know exactly what kind of programmer I wanted to become. I knew I wanted to be a programmer. At some point, while I was still at the university, I started teaching programming in school. It was a big, passionate hobby of mine.
At some point, I got a job with Borland and worked in some developer tools. That was awesome. Borland was a very big name, though they went under pretty soon after I joined. I hope it wasn’t because of me.
I worked at the tail end of the UML era, doing developer tools in the UML space. That was very interesting. I learned a lot. But then Borland went under, and I went back to teaching full-time. Then I started PhD school. All that was kind of not really planned out.
In my PhD, I was working on domain-specific languages (DSLs), and generally, I was interested in languages. I was curious about typed languages specifically. I was always curious about how these things worked, but never really serious. When I started looking into DSLs, it was slightly more serious. Although my PhD was a mess and I never defended because of that.
At some point, someone reached out — he was actually a person who was in charge of Borland’s office in St. Petersburg. By that time, he was already at JetBrains. He reached out to me while I was in Tartu, Estonia, where I was a visiting PhD student for a year. It was a lovely time.
He invited me, during my next visit to St. Petersburg, to visit the JetBrains office and talk about something related to languages.
What I thought was that it was about this project called MPS (Metaprogramming System) that JetBrains had. I knew about it. It’s about DSLs. I worked on DSLs; it was plausible they wanted to talk about something like that.
But I was completely wrong.
What they wanted was to start a new programming language.
I was completely unprepared for that. I had never thought about doing something like this. My first reaction was:
"You don't do a new language. You don't need it."
The basic pitch was that the Java ecosystem needs a new language. Java is outdated, so on and so forth. We can talk more about this.
It was 2010, I think. I said, “but there are other languages. Everybody’s doing fine. Why do you need to do that?”
Then this conversation was actually very insightful because the guys at JetBrains explained how things actually were. It was a big problem by that time.
So Java wasn't really evolving, and hadn't been for a long time.
What was the reason behind this? Can you take us back for those of us who are not in the ins and outs?
Yeah. So the last major version of Java by 2010 was Java 5, released in 2004 — a six-year-old language. Since then, there were updates: Java 6 made no changes to the language at all, and Java 7 made minor changes. In parallel, other languages — especially C# — were progressing very well. By 2010, C# had all the nice things: there were already lambdas, higher-order functions, and all that nice stuff. There were getters and setters and many other things that made the language much nicer. And Java felt like it was standing still. There was a project to bring lambdas to Java, but it had been in the works for a long time and only came out in 2014. So that was the situation.
And, you know, the ecosystem didn’t stand still in the sense that other people were building languages. And there was Scala, there was Groovy. And, of course, people at JetBrains knew both Scala and Groovy. They built tools for them.
It’s traditional to build your tools in the language you’re building the tools for. So the Scala plugin was built in Scala. And there was a lot of Groovy used in JetBrains as well. So they knew what the issues were with the language. And both languages are very interesting and very good in their own ways.
But they saw an opportunity in the market because basically Groovy was too dynamic and too far from, you know, hardcore, mainstream, large-scale production. Because dynamic languages are not for that, basically.
What are dynamic languages for? What are their strengths and best use cases? The trade-off, I guess, if you look at a statically-typed language like Java, Kotlin, and Scala, for example, versus dynamic languages like Python, Ruby, JavaScript, and Groovy:
And this may be changing nowadays a little bit. And this is in part what I’m working on now. But back in the day, it was completely true. The whole art of making a good language was to restrict the user in a good way.
Yeah, but in any case, the situation with dynamic languages is that they are much more user-friendly in the beginning. But then when the project scales, you’ll have trouble making large refactorings. You have trouble making sure that everything works together. You need to do a lot more testing and rely on other things like that.
As opposed to static languages where you have precise refactoring tools and other things that can make sure that at least a certain class of problems just doesn’t happen. And, you know, this is why, at least in our mind back then, it was absolutely clear that if we’re building a language for large projects, big teams, so on and so forth, it has to be a static one.
With Groovy, there was a big issue of performance as well, because Groovy was building a dynamic language on top of a very static runtime. So there was quite a bit of tension there.
That was the Groovy side. Then there's the Scala side. Scala is a wonderful static language, incredibly powerful and with tons and tons of good ideas. But it had its own problems. It relied very heavily on implicits, for example. And I have a history of debugging one line of Scala for an hour to try and figure out what it does, just because it was pretty complicated.
Also, the compiler was very slow, there were issues of stability, and many things were just not accessible enough for a lot of engineers. So from the experience of using Scala, my colleagues at JetBrains basically understood that it wasn't what was going to change the industry — although Scala got a lot of adoption.
And again, Martin Odersky is a great language designer. I think one of the biggest use cases was old Twitter — a lot of it was built on Scala, and they scaled massively. And I think LinkedIn as well.
So in any case — it's always very nice when other languages pioneer things, and then you can build on top of their successes and failures. And we were in that position, basically.
So the argument that people at JetBrains were making was basically that there is a window of opportunity. People need this language. We, JetBrains, are the company who can actually put out a language and make it successful because:
- We have access to the users.
- We have their trust.
- We can make good tools.
And it was another issue with Scala, for example. It was very difficult to build tools for Scala back then. Now Scala 3 is more tooling-friendly, but back then it was a nightmare.
Like I said, with a static language you can have precise refactorings — but not if the language is too complex. And some languages are particularly challenging. Scala back then and C++ were incredibly challenging to make precise tools for.
So that was the basic pitch. And I quickly understood that, yeah, they were right, and this was something that was worth a shot — in the sense that it was not completely hopeless, not completely dead in the water. I had no idea if we could pull it off.
It was then that we actually sketched some initial features on the whiteboard.
Just because JetBrains is genuinely run by engineers? Hold that thought from Andrey on how JetBrains is genuinely run by engineers. This is because I happen to know another company also run by engineers: Sonar, our season sponsor.
If there’s a time when we need true engineers, it’s now. As AI coding assistants change how we build software, code is generated faster than before. But engineering basics remain important. We still need to verify all this new AI-generated code for quality, security, reliability, and maintainability.
A question that is tricky to answer:
How do we get the speed of AI without inheriting a mountain of risk?
Sonar, the makers of SonarQube, have a really clear way of framing this:
Vibe, then verify.
The vibe part is about giving your teams the freedom to use these AI tools to innovate and build quickly. The verify part is the essential automated guardrail. It’s the independent verification that checks all code, human- and AI-generated, against your quality and security standards.
Helping developers and organizational leaders get the most out of AI, while still keeping quality, security, and maintainability high, is one of the main themes of the upcoming Sonar Summit.
It’s not just a user conference. It’s where devs, platform engineers, and engineering leaders are coming together to share practical strategies for this new era. I’m excited to share that I’ll be speaking there as well.
If you’re trying to figure out how to adopt AI without sacrificing code quality, join us at the Sonar Summit. To see the agenda and register for the event on March the 3rd, head to:
sonarsource.com/pragmatic/sonarsummit
So everybody I talked with was deeply in the weeds with IDEs and knew programming languages very well. We had a very technical discussion.
I don’t remember exactly all of the features we were talking about, but the current syntax for extensions in Kotlin was already there. I don’t remember why exactly we focused on extensions, but it was there.
So, from day one, we’re basically building on top of ideas from other languages, like extensions obviously came from C#.
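For readers who haven't seen it, here is a minimal sketch of the extension syntax being discussed — a hypothetical example of my own, not code from the episode:

```kotlin
// An extension function adds member-call syntax to an existing type
// without modifying it or inheriting from it.
fun String.shout(): String = uppercase() + "!"

// Extension properties work similarly (no backing field allowed).
val String.wordCount: Int
    get() = trim().split(Regex("\\s+")).size

fun main() {
    println("hello".shout())          // HELLO!
    println("from day one".wordCount) // 3
}
```

Under the hood these compile to plain static functions that take the receiver as the first argument, which is part of why they interoperate cleanly with Java.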
Yeah, so it was a very exciting conversation, but I didn’t make a decision then because I was in Tartu and I needed to finish there. It took me a few months to finish.
Then I came to St. Petersburg for one month because after that I had an internship scheduled with Microsoft Research in Redmond. I was going to Seattle to stay there for about three and a half months.
I said, "Okay, guys, I have this month. I can work in the office and we can try to sketch things, but then I'll go to Microsoft and then I will decide whether I commit or not." In hindsight, I made the right decision in the end.
I had a great time for that month or so. I worked with the guys in the office — it was mostly Max Shafirov I was working with, and it was incredible. We had such great discussions. I actually saw Max this morning, and it was a great time.
So then I went to Seattle and did something completely different. There were some really great researchers working at Microsoft Research, and I was exposed to top-notch academia for the first time — it was very insightful.
But after that, I realized the question was whether I wanted to pursue an academic career, which I didn't feel I was really built for. I wasn't sure whether I could be a good researcher on my own, or whether I'd have to follow in somebody else's footsteps.
So for those of us engineers — which will be the majority — who have not built a language from scratch, how do you start? Like, speaking for myself, I know how to:
How does a language start?
In our case, we basically talked a lot for a few months. I think not everyone is like that, but I think best when I'm talking to people.
This was the ideal environment because we were basically discussing things with Max constantly for many months. There were a few internal presentations that I made at JetBrains, and some of the slides survived.
I can see, including my spelling mistakes in the slides — my English wasn’t as good then — and you can see some of the evolution through those slides. I think there’s a recording of one of those presentations.
So we were basically doing whiteboard design for some time. And the great thing about doing this at JetBrains was that there were a lot of people with opinions — not so much about how to make a language, but about what problems programmers face and what they like and don't like in other languages. So I had tons and tons of input from very good people, and that helped. I don't think I realized how special that environment was back then. I was 26, to be clear, and I had no idea how things were done in general. But somehow these people just trusted me. I'm not sure it was very rational on their part. It worked out, but I'm not sure I would recommend anyone do this.
And so in the first few months, I understand that you kind of whiteboarded and wrote down how you want this language to evolve. You kind of, you know, like wrote out like,
“We’re going to have these features. Or how can we imagine?”
So I guess the easiest way to explain this would be like this. It basically went off what the pains were with Java. And there were quite a few. And there was a lot of experience of using Java across the community and inside JetBrains. And we kept making lists of things we wanted to fix.
I came up with some ideas, and other people suggested ideas about how things could be fixed, what was an actual problem, what we didn't care about, and so on. For some time, there were just pieces of the puzzle laid out on a table, not fitting together. Then at some point, we started fitting them together. I was doing a lot of that in my head, which is not the best way, but it was how I knew how to do it.
There were also some crazy ideas that we thought were important back then. For example, I wanted to implement multiple inheritance, fully-fledged multiple inheritance, which was a dumb idea. And multiple inheritance meaning that a class can inherit from like several classes, and you have to take care of like conflict resolution and all sorts of edge cases. Right? Yeah.
The actual challenge is not so much conflict resolution in terms of methods, but initialization of state — constructors are really hard. It was actually someone outside of JetBrains who explained to me that it was a very bad idea, and I'm very grateful to them. Yeah. So, you know, there were crazy ideas as well, and some of them just fell off over time as we were discussing or prototyping.
I think I started writing code maybe six months in, or a little earlier. I started with a parser. And it was actually a very unique way to start a language, because the idea was to start not with a compiler, but with an IDE plugin — to have it in the editor first. Which, you know, an IDE plugin shares a lot with the front end of the compiler, so it's not absolutely crazy. But I was relying a lot on the infrastructure that was available in IntelliJ IDEA.
All the parsing infrastructure — and it was awesome. The parsing infrastructure in IntelliJ IDEA is better than anything else in the world, because it's the heart of the IDE: it has to be incredibly fast and very robust, and so on. But later, someone who knew the infrastructure a lot better than I did had to factor that bit out to make the Kotlin compiler autonomous. That was Dmitry Jemerov, and he's an awesome engineer — probably one of the best people to refactor a large code base and take one bit out of something that was already 10-plus years old back then.
So we started with this IDE plugin. I think Max wrote the scaffolds and I actually plugged in the parser and everything. And that was an interesting start because it was very interactive. So I could show off the language as if it existed because it had some tooling. But I couldn’t compile anything in the very beginning. And that was actually a very good way to experiment with the syntax.
But soon after, I started working on a full-fledged front-end and on some translation. Dmitry and Alex Tkachman were working on the back-end. Everybody was part-time.
When you say you work on front-end, and they work on back-end, in a language context, what does that mean?
It’s slightly different in different languages.
Basically, the front-end is what deals with the source text.
And the back-end is what translates it to executable code.
In our case:
Front-end:
- reading the text
- parsing
- doing types
- all that
Back-end:
- generates Java bytecode
And Kotlin has multiple back-ends for different target languages:
At that time, nobody was working full-time on this project. Even I was part-time — part-time PhD student, part-time Kotlin developer. And it was the very early days.
Then, at some point, I gave up my PhD and focused 100%. Which was also — isn't it a weird decision, to start a new language part-time? Yeah. Looking back, I was young and stupid.
There’s a saying that we didn’t do it because it was easy. We did it because we thought it was easy. Absolutely that. I didn’t realize how hard the problem was. I also had an unreasonable amount of hubris. I just thought I knew how to do everything. I didn’t. But it worked out in the end.
So, when the language started, what did you call it internally? There’s always internal code names, right? Right, yeah.
So, I don't think there was a discussion of this first name at all. It was generally understood that the language would be named Jet. And it was logical — the whole code base used the name Jet. We had:
Then someone realized that the name was trademarked by someone else — actually people we knew, in Novosibirsk, in Russia. It wasn't a language, but it was a compiler, and we couldn't use the name.
This is when we started looking for another name. Looking for names was very painful. Guys, it's so bad — one of the worst things, because you never know what name will work unless you want to do an extensive study.
And then all the good names are taken, of course. And some of the names that aren't taken are free because they're not really Google-able.
Some people are just very brave — like the people who named their language Go. This is why people now call it Golang: otherwise you can't identify it. It's a verb in English, a very common word.
Yeah, so we had weird options. In one of my old presentations, I found a list of early names:
And those weren’t great.
By that time, other languages were popping up. One of the alternative languages was called Ceylon. The logic was: Java was the island of coffee. And Ceylon was an island of tea.
Dmitry Jemerov basically looked out of the window and said,
“OK, we have an island here in St. Petersburg. In the Gulf of Finland, there’s a big island called Kotlin.”
And it’s a good name in the sense that it’s very Google-able. Nobody uses it for anything. It’s very recognizable. It’s not super smooth for many languages, but it’s kind of OK.
Nobody was in love with that name and we were kind of hesitant.
You know, "Kot" means a bad thing in German. Also, there's some negative connotation in Mandarin, I was told, or something like that. Some language always has some nasty association with any word.
We were basically super hesitant. So when we announced — and we had this deadline that we kept putting off — we were still not sure.
So we decided it would be a code name. We called it Project Kotlin, to have wiggle room to replace the name later — but it stuck.
The first thing we did was put out basically a Confluence page with a description of the language. It was just a bunch of wiki pages and there was no compiler available then, I think.
There, the word Kotlin appeared many, many times. I was like,
"My God, I can't just do search-and-replace and change the name everywhere."
So the workaround that I came up with was to create an empty page called Kotlin. And so it had a name. And then everywhere else, you mention it as a page. When you rename a page, it gets renamed everywhere.
This is why there was an empty page called Kotlin in that documentation. But yeah, the name stuck and it turns out to be not a bad name.
So, when it started, what were the main differences between Kotlin and Java? What was the big one? How did you explain it to developers who initially wanted to give it a go?
Yeah, I guess there were a few major selling points. Then there were other things on top of that. When we started, like in the very beginning, we didn’t have null safety in mind. Null safety came a little later.
After one of the internal presentations, it was Max Shafirov who invited Roman Elizarov, who later was the project lead for Kotlin. Roman came and listened to the presentation, gave some feedback, and said something like,
“Guys, if you want to do something really big for enterprise developers, figure out null safety.”
And we did. It took a while.
So in the very beginning, it was the general idea of what made Java feel so outdated. There were a bunch of things. Lambdas were very big. The general feeling about Java back then was that it was very verbose — it was called the ceremony language. A lot of people were grumpy about too many keywords; public static void main is something everybody was really grumpy about.
But also, there were getters and setters for every property. There were constructors and overloads and all that stuff that looks like boilerplate because it is. Yeah. It’s super annoying to type out.
The problem with boilerplate is, on the one hand, it’s annoying to type out. But tools can generate it for you and fold it and so on and so forth. But the bigger problem is always readability. So reading is more important. Reading code is more important than writing code. We do a lot more of that.
And with boilerplate, it’s terrible because if some tiny thing is different in the middle of completely standard boilerplate code, you’ll miss it. You’ll become blind to it and you can debug for days not seeing that. So, you know, that was the point of sort of modernizing Java, making Java programs be more about what they do and less about the ceremony of making the compiler happy, basically.
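As a rough illustration of the boilerplate gap (my example, not one from the conversation): a Java class of that era needed a hand-written constructor, getters, equals(), hashCode(), and toString(); a Kotlin data class generates all of them from one line.

```kotlin
// One line: constructor, read-only properties, equals(), hashCode(),
// toString(), and copy() are all generated by the compiler.
data class User(val name: String, val age: Int)

fun main() {
    val u = User("Ada", 36)
    println(u)                    // User(name=Ada, age=36)
    println(u == User("Ada", 36)) // structural equality: true
}
```

Because nothing is spelled out by hand, a reader can be sure no tiny deviation is hiding inside standard-looking boilerplate.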
And, you know, type inference was also a big thing, because Java repeated types a lot, and many other things were like that — semicolons, for example. The modern languages of the time had already gotten rid of semicolons. So in Kotlin you also got rid of them?
Yeah. So we got rid, basically, in terms of syntax, we got rid of semicolons and duplicated types. And that was a lot of noise across the code.
What does it mean that Java had duplicated types?
So in that version of Java, when you declare, say, a local variable, you write: List<String> strings = new ArrayList<String>();
Oh, yes. I remember this one.
Yes, yes. You need to type it out twice. And if you get one of them wrong, compiler, et cetera.
Right. And at best, you could omit the second mention of String by using the diamond operator, but that only came later. Basically, it was very verbose, especially if your types were long.
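A small sketch of the difference (a hypothetical example of mine): Java of the time spelled the type on both sides of the assignment, while Kotlin infers it from the initializer without giving up static typing.

```kotlin
// Pre-diamond Java spelled the type twice:
//   List<String> strings = new ArrayList<String>();
// Kotlin infers the static type from the right-hand side:
val strings = mutableListOf("a", "b")

fun main() {
    // The type is still checked at compile time; this would not compile:
    // strings.add(42)   // type mismatch: Int is not a String
    println(strings)     // [a, b]
}
```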
So, and a bunch of things like that were really annoying to a lot of people, especially compared to C# or Scala.
So, we did all of that. And then, on top of that, there were other value-add features, and null safety was a big thing that we actually spent multiple years implementing. I think it's one of the main differentiating factors for Kotlin now, alongside extensions and other things. But null safety is one of the core features.
And can we just spell out why null safety is so big?
I mean, just today I came across a bug — I couldn't send a package because, in the JavaScript on the Dutch post website, there's a null issue happening in production.
But, you know, before Kotlin and a lot of languages, why is it such a big problem?
It is.
Yeah. So, dealing with null references is a big hassle in most languages. I think it was Tony Hoare who called it the "billion-dollar mistake" at some point — it was about his introducing null references in ALGOL, I believe.
So, basically, when we look at all the runtime errors that we have in Java code, I think null pointer exceptions will be at the top. So, you know, the type system of the language is supposed to protect you from those unexpected errors.
So, there are errors you design for, and maybe errors that are not even your fault, like a file system error. But there are also errors that should be prevented by the compiler. For example, a class cast exception or a missing method error are things the compiler is trying to protect you from — trying to make sure they never happen in your program unless you switch off the check by making a forced cast or something.
And with nulls, it’s not a thing in Java. Like, anything can be null, and if it’s null, it will just fail. Yeah. It throws an exception and the program dies. So, it’s a very common thing.
So, a lot of people are kind of used to it, and there are different ways of being disciplined about it and so on and so forth. But, basically, this is a plague across any code. You know, there are different approaches to this.
And in Kotlin, we took the approach of:
- enforcing it in the type system,
- but also making it free at runtime.
What does that mean, that you made it free?
So, one very common way of dealing with nulls is to use something like an option type, where you have a box, which might be empty, or might have an object in it.
Right. And that box is not free. You have to allocate it, you have to carry it around everywhere, and this easily creates a lot of objects in the old generation for the garbage collector, so it can be challenging. What we did was just have a direct reference at runtime; our nullable or not-null reference is the same as Java’s reference.
All we do is compile-time checking and some runtime checking when we cross the boundary. But that’s a lot cheaper than allocating objects. Although the runtime is getting better, and they can optimize some of those objects away, it’s still an overhead.
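To make this concrete, here is a minimal sketch of what that compile-time-only checking looks like in practice. The `greet` function is a made-up example, not from the conversation; at runtime a `String?` is just a plain JVM reference, with no wrapper object.

```kotlin
// Minimal sketch of Kotlin null safety: nullability lives in the type,
// not in a wrapper object, so String? is an ordinary reference at runtime.
fun greet(name: String?): String {
    // The safe call '?.' propagates null; the elvis operator '?:' supplies a default.
    val trimmed = name?.trim() ?: "stranger"
    return "Hello, $trimmed"
}

fun main() {
    println(greet("  Andrey "))  // Hello, Andrey
    println(greet(null))         // Hello, stranger
    // Calling name.trim() without '?.' inside greet would be a compile-time error,
    // because the parameter's type is String?.
}
```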
What are features that you took in from Kotlin that were inspired by other languages that you admired?
A lot of them. I have an entire talk about this; it’s called Shoulders of Giants. We really learned from lots and lots of languages, and that was always the point. Andrey just mentioned how Kotlin was built on the shoulders of giants, taking good ideas that existed rather than reinventing them. This was one of the reasons Kotlin succeeded as much as it did.
But jumping forward from 2010 to 2026, one thing that is totally different today is the speed of things. AI is allowing nimble teams to build faster than ever before. Companies that used to take years to move into the enterprise are doing it in months.
This speed creates a new problem: enterprise requirements show up almost immediately. This is where WorkOS, our season sponsor, comes in.
WorkOS is the infrastructure layer that helps AI companies handle that complexity without slowing down.
Features include:
- SSO for enterprise buyers
- MCP auth for agentic workflows
- Protection against free trial abuse with Radar
Teams like OpenAI, Cursor, Perplexity, and Vercel rely on WorkOS to power identity and security as they scale. If you’re building AI software and want to move fast and meet enterprise expectations, check out WorkOS.com.
With this, let’s get back to Andrey and how Kotlin was standing on the shoulders of giants.
So the slogan for Kotlin was “pragmatic language for industry.” The pragmatic bit, which is a nice rhyme with your podcast, was kind of coming from the experience with Scala being called an academic language. A lot of people had trouble getting their heads around many of the very smart tricks in the design.
And so our idea was:
“We’re not doing academic research here. We’re not trying to invent anything. If we don’t get to invent anything, it’s a good thing, not a bad thing.”
From the engineering perspective, it’s generally a good idea to do this. Usually, you end up making something new, but most of what you’re doing shouldn’t be very new because you want familiarity. You want people to easily grasp what you’re doing. This has to be familiar from other languages.
Also, if you’re building on top of the ideas of other languages, you benefit from them having tried it already. You can look at their designs, their community’s reactions, and the implications all over the place. That gives you a huge benefit.
So we did a lot of that.
I think the language that influenced Kotlin the most is, of course, Java. Because the entire runtime of Kotlin is the JVM, and we depend on that.
Apart from that, Scala had a huge influence. We used many ideas from Scala, including:
- `val` and `var`

It’s a huge pity that this didn’t make it into Java’s design. It was flipped at the very end of the design process to what Java has now; Martin Odersky’s idea was much better.
We had to fix this problem on the Java boundary and figure that out.
There were many ideas we took from Scala, and that was very helpful. We usually transformed those ideas a little bit to adapt to our setting and to build on the knowledge of how it actually works in practice. We left some things out. We simplified some things.
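For readers who haven’t seen them, the `val`/`var` distinction mentioned above looks like this (a tiny made-up sketch, not from the conversation):

```kotlin
// Sketch of the val/var distinction Kotlin took from Scala:
// 'val' introduces a read-only binding, 'var' a mutable one.
fun demo(): Int {
    val answer = 42   // read-only: reassigning 'answer' is a compile-time error
    var counter = 0   // mutable: reassignment is allowed
    counter += 1
    // answer = 43    // would not compile: 'val' cannot be reassigned
    return answer + counter
}

fun main() {
    println(demo())  // 43
}
```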
For example, Scala had traits. Traits are a very powerful construct, like an interface where you can have method bodies and state. What you couldn’t have were constructor arguments: you always have a default constructor and can initialize all your fields.
It’s not as bad as multiple inheritance in C++, but it’s still a little complicated when it comes to the order of calling constructors. We decided we don’t want to deal with that. It’s a complex algorithm and hard to explain. Let’s just get rid of the state in interfaces and only have method bodies. And I think it was a good compromise. Especially given that Java ended up in the same place. It was easier to integrate.
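That compromise is visible in today’s Kotlin: interfaces may carry method bodies, but no state. A small illustrative sketch (the `Named` interface and `User` class are invented examples):

```kotlin
// Kotlin interfaces can have default method bodies (like Scala traits),
// but no backing fields, which sidesteps the constructor-ordering problem.
interface Named {
    val name: String                       // abstract property, no backing field
    fun greeting(): String = "Hi, $name"   // default method body is allowed
    // 'val cached = computeSomething()' would not compile here:
    // interfaces cannot hold state.
}

class User(override val name: String) : Named

fun main() {
    println(User("Andrey").greeting())  // Hi, Andrey
}
```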
Yeah, so Scala was a big influence. C# was a very big influence. Extensions, of course. And we learned quite a lot from how the C# compiler does things.
There was also one particular trick that makes Kotlin syntax a lot nicer, nicer than Java’s and nicer than Scala’s, that we learned from C#. It was actually my colleague who worked on the C# IDE who told me about this. It’s basically a super pragmatic thing they do in C#.
When you call generic functions, you use angle brackets inside an expression. But the thing is, there’s no such thing as angle brackets; there’s just less-than and greater-than. Right? And the parser can easily get confused and think that this expression, since we’re in an expression context rather than a type context, is a comparison, when it’s actually a call. And this is mathematically unresolvable. It’s an ambiguous grammar.
Yeah, at the grammar level you can’t really do anything about it. The way the C# parser handles this is a pragmatic lookahead trick: tentatively read the angle brackets as a type-argument list, and if what follows parses like a call, say collections.functionName<Type>(), treat it as a generic call rather than comparisons. And we did the same or something very similar, and it just works. The syntax is very familiar and very intuitive, and we’re very happy about that.
Yeah, because when you read it as a person, I never get confused. Like, I know this is not a less-than sign; I know it’s a generic. Yeah. Yeah.
Okay. Wow. Most of the time, it’s not a practical problem, and there is a way to disambiguate if you like. So C# was a big influence.
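A tiny sketch of the ambiguity being described: the same `<` character is a comparison in one position and opens a type-argument list in another (the `disambiguate` function is a hypothetical example):

```kotlin
// The same '<' token plays two roles; the parser decides from lookahead
// whether it opens a type-argument list or is a less-than comparison.
fun disambiguate(): Pair<Boolean, Int> {
    val a = 1
    val b = 2
    val comparison = a < b        // '<' as less-than: expression context
    val empty = listOf<String>()  // '<' opening a type-argument list: generic call
    return comparison to empty.size
}

fun main() {
    println(disambiguate())  // (true, 0)
}
```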
Groovy was a big influence as well. JetBrains used Groovy for build scripts. And there were incredibly useful patterns in the Groovy syntax that they call builders, which is not about building programs, but, you know, building objects.
And this is what inspired something fairly novel that we did in Kotlin, which was typed builders, where we had the same syntactic flexibility, or almost the same syntactic flexibility, as Groovy, but it was all typed. And we could make sure that all the arguments matched and so on and so forth.
So all that side was basically inspired by how the Groovy people did this, reworked into a typed setting. And this is why we have, for example, extension function types, and this is why we have trailing lambdas and other things that are actually very nice syntactic constructs.
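Here is a hedged, self-contained sketch of what such a typed builder looks like: an extension function type `Tag.() -> Unit` plus trailing lambdas give Groovy-like syntax, fully type-checked. The `Tag`/`html` names are invented for illustration, not a real library:

```kotlin
// Minimal sketch of a typed builder in the Groovy-inspired style.
class Tag(private val name: String) {
    private val children = mutableListOf<Tag>()
    private var text = ""

    // 'init' has an extension function type: inside the lambda, 'this' is the new Tag.
    fun tag(name: String, init: Tag.() -> Unit = {}): Tag {
        val child = Tag(name).apply(init)
        children.add(child)
        return child
    }

    // Allows writing +"some text" inside a tag body.
    operator fun String.unaryPlus() { text += this }

    override fun toString(): String =
        "<$name>$text${children.joinToString("")}</$name>"
}

fun html(init: Tag.() -> Unit): Tag = Tag("html").apply(init)

fun main() {
    // Trailing lambdas make the nesting read like markup, but it is all typed.
    val page = html {
        tag("body") {
            tag("h1") { +"Hello" }
        }
    }
    println(page)  // <html><body><h1>Hello</h1></body></html>
}
```

Unlike Groovy’s dynamic builders, a typo such as `tgg("body")` here is a compile-time error.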
So, yeah, many, many things came from different languages.
A lesser-known language called Gosu, I think, was what inspired us to do smart casts.
What are smart casts? Oh, yeah. So, I think smart casts are one of the nicest things a compiler can do for a developer. Because it’s a very common situation when you say:
If x is a string (so you do an instanceof check), then do something with x.
The annoying thing is that in a lot of languages, you have to cast x to string again. Like, you’ve done the check. After you’ve done the if, you know it’s a string, but then you need to write it out again.
Yeah, so you’ve just done the check, but you have to say string again to make the compiler happy.
So, smart casts basically get rid of that: the cast gets figured out automatically. If it’s a string, then inside the braces you can now use it as a string.
And isn’t it an easy thing, right? So nice. Yeah, it’s a very nice thing.
Yeah, it’s a pretty complicated algorithm. Because, you know, variables can change values and the check that you’ve just made can go stale. And, you know, there’s a bunch of algorithmic trickery around this.
And you can’t do a smart cast on any expression; it has to be a certain kind of expression that is stable enough, and so on and so forth. But it’s a very nice thing, and you can get rid of so much noise in the code, because all the code in the world is riddled with this instanceof-then-cast pattern.
So, we wanted to get rid of that. And it worked. And it was fun to implement.
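A minimal sketch of the feature being described (the `describe` function is a made-up example):

```kotlin
// After the 'is' check, 'x' is smart-cast: it is usable as a String
// with no explicit cast repeated.
fun describe(x: Any): String {
    if (x is String) {
        // 'x' has type String inside this branch: .length and .uppercase() just work.
        return "string of length ${x.length}: ${x.uppercase()}"
    }
    return "not a string"
}

fun main() {
    println(describe("hi"))  // string of length 2: HI
    println(describe(42))    // not a string
}
```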
What were things from other languages that you looked at and considered bringing in, but after debate decided:
“No, let’s just leave this out.”
Like, not all of them, obviously, but some of the big ones that kind of came close. We had a design for pattern matching in Kotlin that was inspired by functional languages like Scala and Haskell and others. But at some point, early on when I was still working on the parser, I just realized that this is a huge feature.
So, when I was sketching it out on a piece of paper, it looked like a very useful thing, just another feature in the language. But then when I started working on the parser, I realized it’s an entire language in size. Like, you have to create a parallel universe in syntax for pattern matching. And I was like, okay, this will be a lot of work. Let’s postpone it.
Later on, when we were doing the review for 1.0, or maybe a little earlier than that, I realized that smart casts, plus something we have called destructuring, together give us like 80% of all the good things pattern matching can do for normal developers. Then there is another group of developers that can be very vocal, mostly compiler developers and people super into functional programming. They have a point, but that point is only relevant to them, and there are not very many of them, so we decided not to have pattern matching back then.
And, you know, maybe there comes a day when pattern matching gets added to Kotlin. And pattern matching is, is it in case statements? Yeah, it’s the… So you can have a lot nicer, a lot more expressive case statements, right? Yeah.
Generally, Kotlin has this compromise where you have our version of switch-case, which is called when, and you can have smart casts there, so you can match on types right in the branches.
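A `when` with smart casts looks something like this (a minimal sketch; `render` is a hypothetical function):

```kotlin
// 'when' branches with 'is' checks: the subject is smart-cast inside each branch.
fun render(value: Any): String = when (value) {
    is String  -> "text: ${value.uppercase()}"  // value is String here
    is Int     -> "number: ${value + 1}"        // value is Int here
    is List<*> -> "list of ${value.size}"       // value is List<*> here
    else       -> "unknown"
}

fun main() {
    println(render("hi"))          // text: HI
    println(render(41))            // number: 42
    println(render(listOf(1, 2)))  // list of 2
}
```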
That kind of gives you a lot of the niceties of pattern matching, but some things you can’t express like that. And that was, I think, a good compromise because it’s a really big feature. It’s hard to design well. There would be a lot of work on the tooling side. But maybe it gets in the roadmap one day. I’m not sure.
Java is trying to get towards pattern matching, so we’ll see. Maybe they kind of make it more mainstream.
Why did you omit the infamous ternary operator, where you write something out with the question mark and the colon, and which confuses new developers every single time if they’ve not seen it before? Yeah. Was it for readability reasons?
This is the saddest story, I think, in the design of Kotlin. I didn’t realize how much people liked it. The reason was, Kotlin used this principle from functional languages that everything we can make an expression is an expression. So if is not a statement, and the ternary operator is sort of a patch on the design of C and other C-like languages that gives you an if expression, basically.
The logic was:
okay, we have if as an expression already,
can we just get rid of this extra syntax construct,
especially given that it's using very precious characters?
Like, there is a question mark and a colon, and we might find some other use for that. So we decided to not have it. We used question marks for nullable things and the colons for types and so on.
But it turned out that if as an expression is pretty verbose; people don’t like it. I resisted for some time, and then by the time I agreed, it was too late because you can’t retrofit the ternary operator into the current syntax in Kotlin—it just doesn’t agree with how other operators have been designed.
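Concretely, where Java writes `cond ? a : b`, Kotlin uses the if expression (a tiny sketch; `sign` is a made-up function):

```kotlin
// 'if' is an expression in Kotlin, so it plays the ternary's role,
// just more verbosely: Java's 'n >= 0 ? "non-negative" : "negative"'.
fun sign(n: Int): String = if (n >= 0) "non-negative" else "negative"

fun main() {
    println(sign(3))   // non-negative
    println(sign(-1))  // negative
}
```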
So you’re actually a little sad about it not being there? Yeah, I think in retrospect it was a mistake, because pragmatically it’s more useful than harmful to have it. But we just can’t retrofit it.
What are some other interesting features that you like about the language that you added that we could explain for those who are not familiar?
Okay, so the good ones, there’s quite a lot of them. One feature that is not a traditional kind of language feature is Java interoperability. That’s probably the single thing we spent the most time on. And I always say that if someone offers you a job to create a system that interoperates transparently with another huge system you don’t control, ask for a lot of money. It’s a very tricky deal to figure this out.
Interoperability means that from Kotlin, you can invoke Java, and from Java, you can invoke Kotlin. You do a bunch of work there, but it just works in the end as a developer. You don’t need to think about it.
The idea is whenever you have a Java library somewhere in the world, you can always use it from Kotlin. It was a big selling point because if you start as just a language in a vacuum and you don’t have any libraries, that’s not a good start.
In this direction, definitely, it was an absolute requirement for Kotlin. But also, we had the requirement to go the other direction. In an existing project, you could just rewrite parts of your code from Java to Kotlin, and everything keeps working. And some libraries actually did that. Many projects started using Kotlin bit by bit.
A lot of people started with just writing tests. But then, you start adding things in Kotlin, new things, for example. And all the Java code around that has to transparently use the Kotlin code. So we put a lot of effort into that. And that was fun.
Can you explain to us as engineers: it sounds like it was a friggin’ big project. What is the work, right? Because from the outside, again, I’m just your average developer invoking a Java class. I can think of some things, but what is hard? Tell me, tell me. I’m dying to know.
So one thing to note here is that we don’t control the Java compiler. We somehow need to make it work so that in your Java code, you make a call into something that only exists in the Kotlin source. And the Java compiler somehow agrees to call it to begin with. It’s not a Java file. It doesn’t know it exists.
So the way it actually works is: when we build a mixed project, what we do is we first compile all the Kotlin code. That can depend on the Java sources in the project. So we have a Java frontend baked into the Kotlin compiler so we can resolve everything in the Java code. Then we produce class files, binaries for the JVM that the Java compiler can read. So when Java compiles, it takes Kotlin sources as binaries. And this is how it works.
We would have to implement a Java compiler otherwise. Fortunately, Java has separate compilation, so this works.
This trick means that whenever you have tooling, like in your IDE, for example, when you navigate from Java sources to Kotlin sources, it has to be a special trick. Someone needs to go and teach the Java world to know about the Kotlin world.
Of course, the IDE doesn’t do the compilation to navigate. And fortunately the IDE is our own, so we could do something about the Java tooling; but we couldn’t do anything about the Java compiler, which we don’t control at compilation time. So that’s trick number one.
Then, when it comes to incremental compilation, it becomes even funnier because Java incremental compilation is a complex algorithm on its own. Now we are incrementally compiling two languages at once. And that’s fun.
Incremental compilation algorithms are generally a very messy, very complicated heuristic with tons of corner cases. So, that’s like one example.
But then you start making interesting new things in Kotlin. You need to expose them to Java. You need to make sure that whatever fancy thing you have, Java can actually interoperate with that.
One example would be Kotlin’s approach to making Java collections nicer in Kotlin without rewriting them, using the same library classes. Java collections are what’s called invariant, because they’re all read-write. So if you have a list, it always has a set method.
That’s a little bit of a problem because whenever you have a list of objects, you cannot assign a list of strings to that. That’s annoying because you want to be able to represent a list of anything, and for that, you need to play with question marks, wildcards, and stuff like that.
It would be very nice if we had a read-only list interface that doesn’t have a set method. Then there is no problem in assigning a list of subclasses to a list of superclasses. But this interface doesn’t exist at runtime, right? We can’t just invent it. Or can we?
So we actually can.
In the Kotlin compiler, we have this layer of trickery specifically for Java collections. When collections come from the Java world, Kotlin sees them as read-write: mutable collections, we call them. But mutable, right? Yeah.
So the Java collections are always mutable or platform mutable. I’ll talk about that later. But when you do it in Kotlin, you can actually distinguish between read-only and mutable collections, and it’s all very nice on the Kotlin side.
But then when Java sees the Kotlin collections, they are normal again. When we expose them through binaries, the Java world always sees them as normal collections; they’re mutable for Java, and it’s all right.
Okay, I’m starting to see why you said you need a lot of money for this because this is just one of many things. But this itself sounds like, I don’t know how you solve that.
Yeah, so just to add a little bit of detail to this. The nice thing about those read-only collections is that you can pass a list of String where a list of Object is expected, right?
Wouldn’t it be nice if a Kotlin method that takes a list of Any could accept a list of String in Java? But aren’t we erasing all the Kotlin nice stuff? We are, but we know that this list is actually what’s called covariant. So we can expose it to Java as a List<? extends …> and not just a List<Object>. So it becomes covariant for the Java world as well. And that’s one hack that makes it a little more transparent.
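A sketch of how this split plays out on the Kotlin side: the read-only `List` is covariant, so a `List<Int>` is accepted where a `List<Number>` is expected, while `MutableList` stays invariant (`total` is a made-up function):

```kotlin
// Read-only List<out E> is covariant; MutableList<E> is invariant,
// even though both map to the same JVM classes at runtime.
fun total(numbers: List<Number>): Double =
    numbers.sumOf { it.toDouble() }

fun main() {
    val ints: List<Int> = listOf(1, 2, 3)
    // OK: List is covariant, so a List<Int> is a List<Number>.
    println(total(ints))  // 6.0

    val mutableInts: MutableList<Int> = mutableListOf(1, 2, 3)
    // 'val nums: MutableList<Number> = mutableInts' would NOT compile:
    // write access makes that assignment unsafe.
    mutableInts.add(4)
    println(total(mutableInts))  // 10.0: read-only parameters still accept it
}
```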
And there’s a bunch of that. So, you know, so that’s another thing that we had to play with. But the biggest thing is, of course, nullable types. And actually, we handle nullable types and these things with collections kind of similarly, which makes the whole typing layer of the interop quite interesting.
But basically, Java doesn’t know anything about nulls, right? Well, it knows about nulls, but not about nullable types; those don’t exist. Yeah, Java doesn’t know about nulls at compile time. In terms of types, it’s just not represented. So technically, every Java type is a nullable type.
And this is where we started. We said, okay, Kotlin types can be not-null, and it’s very convenient. When you have a not-null type, you can just call a method on it normally, right? But if something is nullable, you can’t just dereference it. You have to first check for null and then use it. Or use the safe-call operator, which just propagates null if the left-hand side is null.
So we started with saying,
“Okay, all Java types are nullable, which is a conservative, like very mathematical way of treating it.”
This is correct, right? Yeah, you’re not going to be wrong with that. Yeah. And we implemented that and we started using it inside JetBrains. And the feedback was horrible. Like your code is plagued with those null checks and you know that they shouldn’t be there because you can’t express anything on the Java side the right way.
And we had some annotations for the Java side, but that was also brittle and didn’t always work, because there can be long chains and stuff, and some libraries just don’t have the annotations. And we struggled with that for a long time.
And basically we realized that this assumption that everything in Java has to be treated as nullable just doesn’t work. This was a turning point where we sat down and reimagined the whole thing.
And we worked with a great type-theory, and type-practice I would say, guy, Ross Tate, who I think was at Cornell back then. So Ross helped me figure out the mathematical side of how you can represent those types that come from Java: we should be aware that they are from Java and can possibly be nullable.
But we shouldn’t treat them as nullable because it was very inconvenient. And Ross put together a very nice sort of calculus about those.
And when we started implementing it, all the nice things went away; the mathematical beauty is completely gone from all that. I think we took the general idea of splitting a type in two, and everything else is just a very messy, industrial kind of thing. It’s not sound, but it works well.
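The result of that work is what Kotlin now calls platform types. A hedged sketch of how they behave at the boundary, using `System.getProperty`, a real Java API with no nullness guarantee in its signature (the function names here are invented):

```kotlin
// A Java API without nullness annotations comes into Kotlin as a
// platform type (shown as String! in diagnostics): the developer
// chooses how to see it, instead of everything being forced nullable.
fun javaVersionNonNull(): String =
    // Treated as non-null: the compiler inserts a check at the boundary,
    // so a lying Java API fails fast here rather than far away.
    System.getProperty("java.version")

fun missingProperty(): String? =
    // Treated as nullable: the safe spelling when null is actually possible.
    System.getProperty("definitely.not.set.property")

fun main() {
    println(javaVersionNonNull().isNotEmpty())  // true
    println(missingProperty() == null)          // true
}
```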
Okay. And interoperability sounds like it was a journey, but a necessary one.
How long did it take? Can you give me just a sense of like how many people working on it? How much, because I think in traditional projects we can get a sense, but I have no idea with the language. How does this work? And how long did you think it would take versus how much it took?
Yeah. So let’s start with that.
Yeah. So, so I had no idea. And I always said like, okay, a year from now feels far enough. We’ll probably be done by then.
In practice, we started in 2010, yeah, autumn of 2010, basically. And we released in February 2016. So, you know, it was a long time, five-ish years. And that, you know, in part was just because I didn’t know how to manage projects.
And my initial team, the people who worked full-time on the project: almost everybody who joined JetBrains to work on Kotlin was a fresh graduate. I looked it up on GitHub to verify that. Because I used to teach, and I had some good students and knew how to work with students. So basically everybody on the team was a student, apart from a few veterans from JetBrains who were helping, not all of them even full-time.
So we started getting experienced engineers on the team a bit later. And, you know, to be fair, a lot of those people, people who are following Kotlin know those names. People who are core contributors, who built out, like, absolutely foundational parts of Kotlin, joined as fresh graduates. And they became great engineers.
But I think I overdid it a little bit. So it’s great to have, you know, younger people have no fear. And that’s wonderful. But, you know, the balance was not right.
And how big was the team initially and then towards the release?
So we started out basically with four people part-time. And, yeah, we went like that for maybe a year or something. So the initial prototype was built like that. And then people started joining in. By the time we released, I think it was around 25 people or something.
And the team grew quite a bit. So by the time I left in 2020, it was about 100 people on the team, 70 of them engineers. So it became a pretty big undertaking.
Can you tell us about the development process inside a language team?
I think a lot of us are used to building, you know, services, backend services, or products, or mobile apps, etc. They typically have a release process. How does this work for a language? Like, what is your release process, and what are, I guess, the best practices?
Like, do you even do code reviews or, you know, like how can we imagine? Because, again, it feels such a rare project. There are people building languages, but not many of them.
Yeah, so one peculiar thing about building languages is what’s called bootstrapping when you write your compiler in your language.
Oh, nice.
Which means that, you know, to compile your code, you need a previous version of your compiler. And you better agree with your colleagues which version it is. It can be really tricky, especially when you do things about the binary format. And there is, like, quite a lot of bootstrapping magic going on.
And I don’t think you can reproduce the Kotlin builds from scratch. Because, you know, if you just take a snapshot of the Kotlin repo, you can only build that with a Kotlin compiler. And I don’t think we kept all the bootstrapped versions. So it might not be really possible without a lot of manual intervention to rebuild all the sources from the very beginning and reproduce all the versions.
Because sometimes, you know, we had to, like, commit a hack into a branch and use that branch as a bootstrap compiler for the next build and then throw the branch away. So that was, like, a one-off compiler used to facilitate some change in the binary format or syntax or something. So that’s a separate kind of fun.
But generally, I mean, many practices are very similar. Like, we had code reviews pretty early on. It’s my personal quirk, again, that I like to talk to people. So in code reviews, I often just sat together with someone and either they reviewed my code or I reviewed theirs. But this is, you know, I can’t argue that it’s much better or worse. It’s just how I prefer it because I like talking to people.
So code reviews, yes. And, of course, we had an issue tracker like everybody else. Ours was always open. So everybody can submit bugs to the Kotlin bug tracker, which was very helpful. It’s hard to manage because there will be, like, with usage, there will be a lot of bugs and a lot of feature requests and all kinds of stuff. But it’s worth it. You have a communication channel.
Release cadence is a very difficult thing to figure out for such projects. Because one big consideration you have for languages is backwards compatibility.
In part, this is what delayed 1.0 because we wanted to be reasonably sure we can maintain compatibility as soon as we call it 1.0. In part, because it was the expectation, especially Java is incredibly stable and very good with that until Java 9 came about. And also, Scala had a lot of trouble because they were breaking compatibility a lot. And the community was struggling, really. So we really didn’t want to repeat that.
But, you know, it turns out you can even break compatibility Python 2 to Python 3 and survive.
Barely. Barely survive.
They’re doing very well. Now they’re doing well, yes.
Yeah.
So we were really serious about that. But basically what it means is you start doing interesting things like deprecation cycles. So we actually invented an entire tool set for compatibility management.
So before 1.0, we tried to help people migrate. So we had those milestone builds. Embarrassingly, we had 13 of those.
And, you know, when we broke the language in major ways, we tried to provide tools for automatic migration.
That’s nice of you.
Which was, I don’t think, a standard practice in the industry back then. Now people are doing it more. So I’m very happy to have sort of popularized this idea. And then when we were preparing for 1.0, we did a major review of everything and took a year to sort of review all the design.
What we were doing was basically trying to anticipate what changes we might want to make, or what new features would require, and to prohibit things that might block that. So we tried to make sure that the changes we were planning were guarded well by compiler errors, to make sure that users didn’t accidentally write anything that looked like a new feature. And that was fine.
We had design meetings, I think, every day at some point—basically working on that, like, “okay, let’s outlaw this. Let’s prohibit that.” And we prohibited a lot of stuff correctly and some stuff incorrectly. But, you know, generally worked out. So this compatibility thing was a big deal.
But there’s also a lot of stuff that we didn’t anticipate. So we had to figure out ways to manage this. And there is something in Kotlin compiler called “message from the future,” which is basically when in a newer version of a compiler, you introduce something that the old compiler doesn’t understand.
We have different options. One option a lot of languages go for is to just have the old compiler reject any binary produced by a newer one. But it’s a little hard for people then to manage their versions, because new versions of libraries come with new compiler expectations, and you have to migrate your entire project to do that. It’s a little annoying. And if what you’re adding is like one method, that basically invalidates the whole library for an old compiler, which is not great.
So what we’re doing is a newer compiler can write something into the binary that tells the old compiler, “okay, this method is what you can’t understand, but everything else is fine.”
Wow, that’s smart.
Yeah.
So we call this a message from the future and like it can provide some details. So there’s that.
And there’s also the discipline of experimental features, which is incredibly helpful. And I am very happy to see other languages doing it now. And even Java does experimental features now, which is wonderful.
Andrey just talked about experimental features in programming languages and how those used to be rare back in the 2010s. What this reminded me of is that running experiments in production also used to be rare. Not because teams did not want to do it, but because doing it meant building a lot of internal tooling around it:
Assignment, rollouts, measurements, dashboard, debugging, the whole thing.
For a long time, only a handful of companies really pulled this off at scale. Companies like Meta and Uber.
Which brings me to Statsig.
Statsig is our presenting partner for the season. Statsig gives engineering teams the tooling for experimentation and feature flagging that used to require years of internal work to build.
Here’s what it looks like in practice.
And the key is that the measurement is part of the workflow. You’re not switching between three different tools and trying to match up segments and dashboards after the fact. Feature flags, experiments, and analytics are in one place, using the same underlying user assignments and data.
This is why teams and companies like Notion, Brex, and Atlassian use Statsig. Statsig has a generous free tier to get started, and pro pricing for teams starts at $150 per month.
To learn more and get a 30-day enterprise trial, go to Statsig.com/pragmatic.
And with this, let’s get back to Andrey and experimental features in Kotlin.
So we did quite a lot of work when you’re doing something experimental. This is something that’s supposed to break, and you want to emphasize this to make sure that the user is aware that:
“this is something we are not promising to keep compatible. This is something we’re going to break.”
We used to put the word experimental in package names, so people understand that this is going to be renamed. And warnings when you use experimental language features; we require compiler flags to enable such features, and stuff like that. It kind of helps. So we did quite a lot of that.
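That experimental-feature discipline lives on in today’s Kotlin as the opt-in mechanism. A hedged sketch using the standard `@RequiresOptIn`/`@OptIn` annotations (the `ExperimentalThing` marker and `shinyNewApi` function are invented examples):

```kotlin
// Sketch of Kotlin's opt-in mechanism for experimental APIs: using a
// marked declaration without opting in is rejected by the compiler.
@RequiresOptIn(message = "This API is experimental and may change or be removed.")
@Retention(AnnotationRetention.BINARY)
@Target(AnnotationTarget.FUNCTION)
annotation class ExperimentalThing

@ExperimentalThing
fun shinyNewApi(): Int = 42

@OptIn(ExperimentalThing::class)  // the caller explicitly accepts the breakage risk
fun main() {
    println(shinyNewApi())  // 42
}
```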
All this is an extra layer. And unlike a SaaS system, for example, a compiler leaves behind a lot of artifacts that pin down its history in the world. There is source out there and there are binaries out there, and you’re guaranteed to encounter them, every time anyone hopes that
“this is an obscure case. Nobody will ever hit that.”
With enough users, you hit every freaking case. And this is so surprising.
I discovered this fairly early on. I think before 1.0, when we had a few thousand users, I realized that
“if something’s possible, some person out there will actually do it.”
So you got 1.0 out. Can you tell me how Kotlin grew in popularity? When you released it, what was your target audience? And then how did Android happen?
Okay, so that’s a complicated story. Let’s try to not get off track, because this has a lot of sidetracks to it.
When we started Kotlin, we were not really very aware of Android. I mean, we knew that there was a thing called Android.
Kind of ironic.
Yeah.
From now, message from the future.
Right.
Yeah.
So basically, in 2010, we were focused on the majority of Java developers, which was all about the server side.
Clear.
Yeah.
So the most money IntelliJ was making was on Spring users. And, you know, everybody knew that this was what the Java platform was about by then. So we were targeting server-side developers, basically.
And also desktop developers, because JetBrains had probably the last desktop application written in Java, or at least in Swing.
So that was the target. Initially there wasn’t even a plan to do Android.
Kotlin got some usage for the server side. And, you know, it’s still there and it’s growing there, not as fast as on Android, but still has quite some representation on the server side.
But then a few years in, some person on the Internet asked us whether Kotlin works on Android. And I was like, I heard Android uses Java, so Kotlin should work. We never tried. Go and try it.
I think it was either the same user or a different user who came back and said
“the toolchain crashes.”
And it wasn’t even the Kotlin toolchain. It was the Android toolchain that crashed. And, you know, we looked into it, and it turns out that some tool in the Android toolchain that’s written in C just fails with a core dump. And it’s not very clear what’s going on.
We later figured it out. It turned out that the Android developers and the people who built the Android platform actually read the spec of the JVM, unlike the people who implemented the Hotspot VM. Because the Hotspot VM, I suspect, came before the spec. So it was the reference implementation, but it was actually specified after it was built.
The Hotspot VM was super lenient to weird things. Like, if we put a flag on a class file that was not allowed for classes, Hotspot wouldn’t care. And we ran everything on Hotspot, so we thought everything was fine.
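For the curious: class-level `access_flags` bits are defined in a table in §4.1 of the JVM specification. Here is a toy sketch of the difference between a strict and a lenient consumer, using the Java 8-era flag table; this is purely illustrative, not the actual Hotspot or Android verifier code:

```python
# Bits defined as valid for a class's access_flags (JVM spec §4.1, Java 8 era).
VALID_CLASS_FLAGS = (
    0x0001   # ACC_PUBLIC
    | 0x0010 # ACC_FINAL
    | 0x0020 # ACC_SUPER
    | 0x0200 # ACC_INTERFACE
    | 0x0400 # ACC_ABSTRACT
    | 0x1000 # ACC_SYNTHETIC
    | 0x2000 # ACC_ANNOTATION
    | 0x4000 # ACC_ENUM
)

def check_class_flags(access_flags: int) -> bool:
    """A strict toolchain rejects any bit not defined for classes;
    a lenient VM simply ignores the extra bits and carries on."""
    return access_flags & ~VALID_CLASS_FLAGS == 0

assert check_class_flags(0x0001 | 0x0010 | 0x0020)  # public final class: fine
assert not check_class_flags(0x0001 | 0x0080)       # 0x0080 is ACC_TRANSIENT, a field-only flag
```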
But then on the Android side, those were the people who actually read the spec and implemented it. Yeah, they would complain about everything.
This is why we used the Android toolchain as a testing environment basically, because
“this is how we could get rid of stupid things in our bytecode.”
They helped us a lot with validating everything. But, you know, there were some gotchas there. Some legacy stuff that nobody cares about in mainstream Java was just faithfully implemented on the Android platform.
That was fun.
So, you know, at some point, pretty early on, I had this realization that Android was a growing platform. I didn’t have much understanding of the dynamics of markets back then, but to me it meant that there would be a lot of new applications.
And it’s much easier to start completely anew with a new language.
So, I made sure, at some point, that we worked well on Android. It was already after the lawsuit.
So, the big context to all this was that when Oracle acquired Sun Microsystems, they sued Google for billions of dollars for using Java.
And I think that is settled.
It was settled in some way, yeah.
And then everyone could go on their own way.
Right.
But yeah, it took years and years to settle.
Back then, it was very much a thing. And, you know, that dispute was somewhere in the background.
But yeah, so basically, we saw that a lot of people on Android really liked Kotlin. They loved it.
Yeah.
As soon as it was stable, pretty much. I mean, I think for all the things that you mentioned: it was just so much nicer than Java. Easier to write, easier to read, lots of nice features.
So, you know, you use Android as a way to actually make sure that Kotlin compiled correctly.
And then, why did it take off on Android?
Yeah, so the situation on Android was pretty interesting, because unlike server-side Java, which is kind of under the control of the teams that develop on it, in the case of Android there are devices in the pockets of people, right? And you have billions of those devices, and those devices don’t always update the virtual machine.
So, people on Android were basically stuck with old Java. And even when Java started progressing, and, for example, Java 8 came out in 2014, it was very difficult to roll out this new version of Java across the entire Android ecosystem because it required updates to the virtual machine.
There were workarounds, and Retrolambda really helped, and so on and so forth. But there were still a lot of people stuck with really old Java. So, Java 8 wasn’t on par with Kotlin or C# in 2014, but it was still much better and solved the major problems. But it was not available to the Android people.
So, there was a lot more frustration with Java in the Android community.
And also, there was Swift on iOS, which was a real example of a big ecosystem transitioning from a really dated language to something really nice.
I think these two things compounded were the major factors. Also, we made sure that Kotlin worked well on Android.
Very fortunately, at some point, Google switched the developer tooling from the Eclipse platform to the IntelliJ platform when IntelliJ was open-sourced back in, I don’t remember, 2013, I think.
So, we had a nice plug-in because everything worked on the IntelliJ platform, and the same plug-in worked for Android. Many other things were just very smooth. Well, very smooth—there were a lot of bugs, but reasonably smooth.
So, it felt like a very good match, and a lot of people appreciated that.
We really wanted to somehow draw the attention of the team at Google to maybe talk about it or something, but it just didn’t happen.
We released 1.0 in 2016, and there was some communication with Google in general, but there was no real interest on their side. They were like, okay, I guess we’ll just keep going as we are.
Some people were already building Android applications, and some people were building production applications in Kotlin before we released 1.0.
Kudos to the brave people, because they gave us invaluable feedback. But you guys were too brave.
So, it just grew organically.
When we started, in the very beginning, I set this internal goal to myself, that if we get to 100,000 users, it’s a success.
I’ve done well enough if it gets to 100,000. Of course, it’s hard to tell how many users the language has, but you can kind of estimate that.
I think we were on track to get to 100,000 users during 2016 because it was growing, it was in the tens of thousands, it looked good.
Then, some people from Google reached out and said they wanted to chat.
It turned out they wanted to chat about announcing official support for Kotlin at Google I/O 2017, which would be like three months from the time of that conversation.
We said, “yeah, sure, let’s do it. What do we need to do?”
It turned out we had to figure out quite a few things, but we managed.
I think it was a heroic effort on the side of the Google team. They did amazing, impossible things.
I have good friends among them now.
It was really, really close. Like, we could have missed the deadline, but we figured it out.
On our side, we had to make many things work and figure out how we interoperate with Android Studio better, and how to set up processes and everything.
But there was a big legal thing around it. This is when the Kotlin Foundation was invented. We had to design the protocols for decision-making in the Kotlin Foundation.
Google owned the trademark for Kotlin for one year because of legal things. It was basically a guarantee from the JetBrains side until the foundation was set up.
You can look up the public record:
Google was in possession of the Kotlin trademark for a year.
But then the foundation was set up and the trademark was transferred to the foundation.
It was fun. It was a pretty crazy time.
But it was amazing to see how happy people were at Google I/O when the announcement happened.
Then usage must have skyrocketed. You probably blew past 100,000 pretty quickly.
Yes, I think we went into millions that year.
So this was basically the moment it happened.
I knew many years before that the easiest way for a language to succeed is to be part of a platform. Swift on iOS is an example of that.
And I knew that Kotlin had no platform. So it was supposed to be a much tougher time for Kotlin than for some other languages. But, yeah, the platform came along somehow.
Jumping forward a lot closer to today: you left Kotlin in 2020, and later you left JetBrains. What are you doing right now?
Yeah, so I’m also working on a language right now. But it’s sort of a different kind of language because the times have changed. And, you know, you can look at it from a similar perspective. Like, in Kotlin, we wanted to get rid of boilerplate. We wanted to make programs more to the point. And less of a ceremony.
And I think this is where we, today, we have a great opportunity to do the same thing at a different level. Because of AI, right? Because of AI. It’s all because of AI.
Yes. AI is great because many things that are obvious to humans are obvious to LLMs as well, which closes this gap between what the machine can understand and what a human can understand quite a lot. Which means we might not need to write dumb code anymore. That would be very nice.
So, on the one hand, you know, the entire history of programming languages is going from lower to higher levels of abstraction. We started with machine code. And then assembly was a step up, actually.
Then came higher-level languages: teams could grow, and you didn’t have to be a super competent programmer to build working software. And then things like Kotlin built on top of that success and raised the level of abstraction some more. But now we can do even better in the same domain.
So, you can imagine a normal program, some application code. A lot of the things in this code are obvious to you and to me. So, if you ask me to write this code, you don’t spell everything out. You explain what the program needs to do and I can implement it. And it will work the way you want.
It depends, you know, on how detailed the specification is. But you can tell me a lot less than you would have to tell a compiler.
And so, this is the point of Codespeak. We want to basically shrink the amount of information a programmer needs to tell the computer to make the program work. From my current anecdotal experience, you can shrink a lot of the code by about 10x.
Which means that a lot of projects out there can be a lot smaller. And it will be a lot easier for humans to deal with that and a lot easier to read — and reading is the most important bit — and a lot easier to navigate.
This becomes, you know, the essence of software engineering. You’re not dealing with a stupid compiler; you’re not restricted by that anymore. What you’re expressing is only what you know about what needs to happen, because everything else the machine knows as well.
So, can you tell me a bit more about what Codespeak is, or what this language is? Is it an actual formal language, just simpler? Is it using AI? Of course, we know that LLMs and agents can do all the funky stuff. Where does this fit? What is this?
Okay, yeah, so I’ll try to explain this.
So, I think the best way of thinking about Codespeak is it’s a programming language that’s based on English. It’s not a formal language or not an entirely formal language. But it’s a programming language. It’s a language that’s supposed to be used by engineers. But it uses LLMs heavily.
And this is like the way new languages will be. Because, you know, you can think about the ultimate language of today as a normal programming language that uses an LLM as a library.
You know, there was a time when NPM was wonderful because, you know, it’s a huge repository of all kinds of JavaScript libraries. It’s the Node package manager, one of the biggest package managers in the world, right?
Right, yeah.
So, you have:
- a huge library out there that you can call,
- but now you have an even better NPM,
- the LLM that has seen all the code in the world,
- and if you're inventive enough, you can fish this code out of the LLM.
Yeah. You need to know how to prompt.
Right.
And the trick is, like, it would be really nice to have a programming language that has the entire LLM as a library or as a bag of libraries, right?
The trick is that to take anything out of an LLM, you have to use natural language. So, the query language to this incredible database of all the knowledge is informal. And there is no way, at least known today, to make it formal.
So, inherently, this ultimate language of today has to be, at least in part, informal. And this is what we’re working on.
So, it’s still in the air, like, how formal can we make it? And, you know, it’s not the goal to make it super restricted. But the goal is to leverage all the power and support the user. You know, we need to rule out stupid mistakes and things like that. And we’re still working on that. But the basic idea is, if you, instead of spelling out every line of code and every bit of your algorithm, you can basically communicate intent the same way I can communicate it to you, you will just get there much faster.
So, one question that I asked Chris Lattner, which I’m going to ask you as well, you’re talking about designing a language for software engineers to build software more efficiently, maybe more concise, in a new way, and it sounds super exciting. But going to the other side, we have LLMs. Do you think there is a need to design a new type of programming language for LLMs to use more efficiently?
That’s a very interesting question. And I had a few discussions about this. My position is it’s probably misguided because of a number of things.
So, one, to get an LLM to understand some language well, you need a huge training set. And with the new language, that training set is not there. You can try to synthesize it and so on and so forth, but it’s not going to be as good as other languages. Like, for example, right now, the newer languages are just harder for LLMs than the more established ones.
And, you know, there are ways around it. I think the later models added some more Kotlin into the RL sets and it’s getting better. But still, it’s pretty hard. And so that’s challenge number one.
Also, challenge number two: I don’t think there necessarily has to exist a language that makes it better, because LLMs are trained on human language. Their knowledge of programming languages is part of that. Their power is in having been exposed to all the code in the world, and that’s existing code. And inventing a new language for that, I don’t know how promising that can be.
You can do another thing, which is an interesting research project. You can sort of extract a language from an LLM because, internally, it has some intermediate representations of what’s going on during inference. And maybe you can sort of extract the optimal prompting language.
It’s not guaranteed to be intelligible to humans. And there are some experiments that show that you can create completely unintelligible prompts that give the same results as normal human prompts, but they will be shorter.
You maybe can do something like this. I don’t know if it will help a lot. But what we’re doing in Codespeak, as part of working on this language, is really nailing down this query-language capability.
What we’re doing now is we are looking at existing code, and we’re trying to find the shortest English descriptions for this code that can generate equivalent implementations—not necessarily character to character, but they have to work the same way.
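One way to check that a regenerated implementation “works the same way” without matching it character for character is differential testing: run both implementations on many random inputs and compare outputs. A minimal sketch of the idea (the example functions are hypothetical, not Codespeak’s actual machinery):

```python
import random

def reference_impl(xs):
    """Original code: return unique values, preserving first-seen order."""
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def regenerated_impl(xs):
    """An implementation regenerated from a short English description
    ('deduplicate, keeping first occurrences in order')."""
    return list(dict.fromkeys(xs))

def behaviorally_equivalent(f, g, trials=1000):
    """Compare f and g on many random inputs; a seeded RNG keeps it reproducible."""
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(0, 9) for _ in range(rng.randint(0, 20))]
        if f(xs) != g(xs):
            return False
    return True

assert behaviorally_equivalent(reference_impl, regenerated_impl)
```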
That’s an interesting exercise because you need to figure out how to represent the ideas in the code in a way that:
But also, this code you represent evolves over time, right? So you have a commit history on top of this version. Going forward in time, you need to be able to represent all the changes in your Codespeak version.
You need to make sure that when it’s a small change in the original code, the change in the spec is small too. That’s an interesting challenge. So in this way, we’re sort of discovering Codespeak as a language, or at least parts of it, and not really designing that bit of it.
You know, it’s a very new world in the sense that, nowadays, if you work with AI, everything is a machine learning problem. Back in the day, if you had a very smart algorithm on paper, you could just implement it and make sure it works. Nowadays, whatever algorithm you have in mind, you need the dataset.
First of all, like if you don’t know how to collect a dataset, don’t even start. And, yeah, this is what we’re doing.
So, just looking at this: you are using these tools day in, day out. I mean, you’re building with them. How do you think programming as a whole, or I’ll say software engineering, is being changed by AI? And how do you think the future is starting to look, especially for software engineers? You’re a software engineer yourself. You’ve written so much code in your life. Are you still writing code?
Yeah, I’m writing some code, yeah.
Sorry, typing or prompting?
I’m doing both. Sometimes I’m just typing. More often, I’m typing with Cursor’s tab completion. I’m doing quite a lot of prompting as well. It’s a combination of all this. But Cursor’s completion is really a step up from traditional IDEs. And I think the IntelliJ side has something similar now. So it’s a lot of coding, but in a very different kind of mindset and with a different tool set.
Yeah, so in terms of what’s happening to programming, I think we are in the early days of a new era. You know, it’s only last year that we figured out that coding agents are good: Claude Code and Cursor’s agent and so on and so forth. And I think this is a very early step.
Right now we are in this phase where a lot of people are in love with agents and it can be very useful and I use them every day. But I think there are inherent problems with the model, with how you interact with a coding agent because it’s a one-on-one chat. And as a human, I talk to the agent in human language. So I’m communicating my intent on a high level.
And that intent gets translated into code and it’s the code that I commit to the repo and it’s the code that my teammates will see. So my chat history is lost. Big problem.
Yeah, so it turns out I’m talking to a machine in human language. But the way I communicate with my team is the machine language. That’s kind of backwards.
So, yeah, what we’re trying to do in Codespeak is to elevate everything to the human-language level. So this is where we start. We say, okay, we have this incredible tool. We can prompt agents to implement code for us. And we are just picking it up. So I think a lot of teams haven’t yet realized how difficult it is to review the code.
And I’ve talked to people who are like,
“Maybe we can just not review this code.”
I’m like, yeah, I mean, you can for a couple of days and then it just collapses. And I think another big theme of today is that we’ll be doing a lot of testing.
And, like, you may not need to review the code if your tests are really good. You still need to verify it, right? Yeah. So what you’re saying is, verifying might not mean reviewing. Right, it might not. Yeah, depending on the domain. Of course, of course.
You might get by without reviewing the code as much, but you need to be sure somehow, either by reviewing the tests or in some other way making sure that your tests are good. That’s a trend. And we are putting a lot of effort at Codespeak into automated testing and making sure the tests actually check the right things and that they check all the code, and all that stuff.
It’s very interesting computer science. And also, it’s now a question of, especially in the case of Codespeak, and I think for other agents as well, like, yeah, reviewing code can be too much, but can we present the tests we generated to the user in a way that actually verifies that we did what was to be done?
It’s tricky. Some tests will be just very long and tedious to read and, you know, but we’re working on that. And that’s where we are.
And I think we’ll see a lot of development in terms of the power of the models, and we’ll get some quote-unquote obvious things implemented in agents. For example, agents are just starting to use language servers, and basically all the tooling that we’ve always had for code is not very well utilized.
And, you know, if you compare IDE-integrated agents like Cursor or Junie at JetBrains, you have a lot of code-navigation capability: the code is indexed, and you can navigate it very quickly. You can find things very quickly.
When you run Claude Code, for example, it might not have that and will use grep instead. It will be just as successful, but it takes a lot longer and burns a lot more tokens.
So, you know, I’m sure this year all these tools will come to most agents, and we’ll have a lot more sophisticated scaffolding around the models.
So that’s one thing. But then, you know, my question is always what’s going to happen in the endgame, or in the further future. And there it’s very hard to predict. We can assume that models will become much smarter. But an important thing is that humans will not.
So, it’s hard to know the future, but one thing I do know about it: humans will be as smart, or as dumb, as they are today. And if we have incredibly smart models, what we will be doing is constrained by how humans are. This is one of the reasons why I’m working on Codespeak, because Codespeak is a tool for humans, not for models.
Yeah. And humans, I know I can build a tool for them.
I guess an important footnote is that many people will say things like,
“If we have smart enough models, they can review the code themselves and they can test the code themselves.”
But then my question would be like, who’s making the decisions here?
You know, if all the software engineering work is done by models, it means humans don’t have any say in it. And this has a name: it’s called the technological singularity.
Yeah. When humans are not making decisions, it means we’re not in charge.
Yep. So this is not the future I’m building Codespeak for. Nobody should build any projects for that future. In that future, we’re gone. Your projects don’t matter.
But so my assumption when I’m talking about the future is that the technological singularity is not happening. And so the basic assumption is humans are in charge.
And if humans are in charge, it’s their job to communicate intent. So we have to say what kind of software we need to build. And when we’re talking about serious software, it’s always complex. There’s no way there’s some very simple thing that will make a difference.
And when we talk about this complexity, this is what our jobs will be, like dealing, managing this complexity, figuring out what we actually need to do. And this is absolutely engineering. There is no way someone can tackle huge amounts of complexity without an engineering mindset. It can be called software engineering, can be called something else, but you will have to do it. You will have to navigate this complexity, organize this complexity, figure it out.
And I’m not talking about the complexity of many, many layers of implementation. Maybe that is what’s called accidental complexity, something that arises from how we implement systems. But there is also essential complexity: how we want the system to behave is complex enough that we need to figure it out.
And this is why I believe there will be teams of engineers working on systems like today. Maybe they will be a lot more powerful teams. Maybe fewer people can deliver a lot more software. Yes, but still teams of people working on organizing complexity.
And this is what Codespeak is for.
Going back to where we are today, with what the models can do today, what do you see with developer tools? It feels a little bit like the wild, wild west right now, very much so. I mean, there’s a lot going on, obviously, with Claude Code, with Cursor, with others.
But what are the areas where you think we will have to see new, different, better tools to actually catch up with how fast we can generate code? And what parts feel the most messy and the most interesting? Especially because at Kotlin, you and the team have built so many tools for developers.
Right. So I think, as I already mentioned, this year will be the year of making developer tools available to agents.
There are some technical challenges, but you can figure them out. People will be doing that.
There’s also a surprising advantage to using a good UI for your agent. It’s very nice to have everything in your terminal, in one sense. But then you can have a lot better user experience if it’s a dedicated environment.
The terminal tools, especially Claude Code, are amazing. It’s a complete breakthrough in terms of what you can do in a terminal. But generally, you can do better in a specialized environment.
So I think we’ll see more of this integration into development environments or just new development environments built from the ground up to work with agents primarily. So that is an important thing.
Since we are putting a lot more emphasis on review, there should be new tools for review. And I think we can do better than what we’re doing now in many respects.
I don’t expect many breakthroughs in testing this year because it’s hard. I’m doing it right now. It’s hard. It’s not going to happen this year. But maybe some advances will arrive this year.
But generally, I think the big lesson of the last couple of years is that all the things that were, quote-unquote, obviously needed (and, you know, the idea of connecting agents to developer tools was absolutely the trivial thing to think of two years ago) take a long time to happen, because it’s hard.
And, you know, nobody in this industry is lazy. Like everybody’s working their asses off. But it just takes time. You need to figure out the basics before you can do advanced things. So, you know, all the straightforward ideas will get implemented at some point.
I think there’s been this massive jump with AI, especially over the winter break, where the coding agents, the CLIs, have become a lot more capable.
And I know so many developers who are actually just prompting most of their code, if not all of it. It’s just a massive, massive jump. I don’t think we’ve seen anything this fast.
I see a lot of engineers scared, because it can shake you to the bone. You know, it took 10 years to get really good at coding, and the writing-the-code part feels like it’s kind of going out, you know, into the trash can.
You yourself have coded for a long time. What would your advice be for developers who are feeling like this? Because, you know, it is scary.
I talk with some folks, and a lot of people message me as well. How are you thinking about this, specifically these last few months? It’s really hard to give advice.
There are a few ideas I can share. So one thing is there’s a lot of hype and a lot of it gets to the management and a lot of people make suboptimal decisions. But that will go away.
So, you know, there’s more and more news about people not hiring junior developers, for example.
And a lot of other things can be really stressful in the moment, but some of them will be rolled back. So that’s one thing.
Another thing: it’s absolutely worth it to invest your time into learning these tools and getting good at them. There’s a lot of skepticism around the developer community about how useful it actually is. You know, “I tried it on my project and it’s no good.”
There is quite a bit of skill to using these tools. Unfortunately, it’s not super formalizable. At least so far, nobody has figured out a really good, clear way of communicating how to do it well. But there are people who can do it much better than others. They can’t always articulate why their prompts work better. But, you know, you can learn it. You can get a lot better at it.
And, you know, don’t necessarily believe everyone on Twitter. Some people claim crazy things, but you can be very productive with these tools when you use them well. And it’s absolutely worth investing in that.
And yeah, as I mentioned before, in the future it will still be engineers building complex systems. So keep that in mind. It’s not like it all goes to nothing.
And for new grads, people coming out of university, what would your advice be for them who are like determined, like, “all right, I actually want to be a standout engineer. Maybe with these tools, I can do it faster.” What would you advise them to focus on either skills or experiences to get?
I guess it’s a matter of what your inclinations are.
So, you know, generally, if you have any inclination in looking under the hood and figuring out how things work, go as deep as you can. As a younger person, you have a lot of mental capacity for that. And this helps a lot. You become a very good expert in very wide fields, just through drilling down on many things.
That’s us closing out. I just wanted to do some rapid-fire questions: I ask, and you shoot back whatever comes next.
What is a favorite tool that you have? It can be digital. It doesn’t have to be digital.
Well, I love my AirPods. They’re incredibly convenient. They fit under my earmuffs.
Well, another tool would be earmuffs.
Earmuffs. Incredibly good.
Yeah, I saw you wearing them. I’ll take that one: earmuffs.
And what’s a book that you recommend, and why?
There is this classic that’s been recommended across the tech community for many years. It’s called Zen and the Art of Motorcycle Maintenance.
I heard that recommended.
Yeah, it’s a very good book. I mean, there is a part of it that’s about technology and how to deal with real systems, among other things, but it’s also a very good novel. I really like it.
Well, Andrey, thank you so much. This was very interesting and, I think, inspiring as well.
Thank you very much. It was great to chat.
It was great. Thank you.
The thing that struck me most from this conversation with Andrey was his observation about how we work with AI coding agents today. You talk to an agent in plain English. It generates code. You commit the code. But that conversation, your actual intent, disappears. You communicate with the machine in human language, but with your teammates in code, in machine language.
Whether or not Codespeak becomes the answer, what’s clear is that we’re missing an intent layer. And someone is going to figure out how to preserve it.
If you enjoyed this episode, please do share it with a colleague who’s been thinking about where programming is headed. And if you’re not subscribed yet, now’s a good time. We have more conversations like this one coming.
Thank you and see you in the next one.
2026-02-04 08:00:01
Uneasy Calm: Ryan Hass on Three Pathways for U.S.-China Relations Under Trump
Welcome to the Sinica Podcast, the weekly discussion of current affairs in China. In this program, we look at books, ideas, new research, intellectual currents, and cultural trends that can help us better understand what’s happening in China’s politics, foreign relations, economics, and society.
Join me each week for in-depth conversations that shed more light and bring less heat to how we think and talk about China. I’m Kaiser Kuo, coming to you this week from my home in Chapel Hill, North Carolina.
Sinica is supported this year by the Center for East Asian Studies at the University of Wisconsin-Madison, a national resource center for the study of East Asia. The Sinica Podcast is and will remain free.
But if you work for an organization that believes in what I’m doing with the show and with the newsletter, please consider lending your support. I know you think of this as boilerplate by now, but seriously, I am looking for new institutional support.
The lines are open and you can reach me at:
or just my first.last name at gmail.
Listeners, please support my work by becoming a paying subscriber at sinicapodcast.com. Seriously, help me out.
I know there are a lot of Substacks out there and they start to add up. Yes, but I think this one delivers some serious value. You get my stuff, the China Global South podcast, the fantastic content from Trivium, including not only their excellent podcast but also their super useful weekly recap. You get James Carter’s “This Week in China’s History” column. You get Andrew Methvin’s Chinese Phrase of the Week.
I am really trying to deliver value for your hard-earned dollars, so please do sign up. Things are tough, I get it, but please consider helping out. They’re tough for me too.
As we move into the second year of Donald Trump’s seemingly interminable second presidency, U.S.-China relations have once again defied easy characterization.
What began as a return to tariff escalation and hardball trade tactics has somewhat unexpectedly given way to a period of relative strategic calm, one reflected even in the national security strategy and the national defense strategy that were just released.
The once dominant language of great power competition has definitely receded, and many of the most vocal China hawks who shaped Washington’s approach for the past decade appear to have been sidelined.
In their place, we’ve seen a policy posture that reflects Trump’s highly personalistic approach to foreign affairs and emphasis on leader-to-leader rapport.
“Xi Jinping’s my friend,” deal-making over doctrine, and a willingness to bracket or at least downplay ideological disputes in favor of transactional progress on trade, technology, and risk reduction.
Trump’s repeated praise for Xi Jinping, his apparent sensitivity to certain of Beijing’s red lines, including on Taiwan, and his apparent comfort at treating China as a peer rather than a civilizational rival mark a sharp departure from recent bipartisan orthodoxy in Washington, if you indeed believe that it was a bipartisan consensus.
Supporters argue that this shift has lowered the risk of conflict and delivered tangible gains. Critics, though, counter that the United States is conceding leverage without securing durable returns. Either way, the result is a relationship that feels less confrontational, for now.
In my private communications with certain among my more panda-hugging friends, there’s this sort of bewilderment. It’s like, we kind of agree that Trump is awful for this country but not so bad for U.S.-China relations, right? But beneath the surface calm lie unresolved structural tensions, deep mutual dependencies, of course, that can be weaponized, and parallel efforts in both capitals to reduce those vulnerabilities.
So, what comes next? Are we headed toward a genuine lasting stabilization or a familiar snapback to the acrimony that once dominated, once our expectations collide with reality? Or a more ambiguous middle path, one in which both sides buy time, avoid escalation, and quietly work to insulate themselves against future shocks?
Well, to help us think through all these questions, I am joined by Ryan Hass, director of the John L. Thornton China Center at Brookings, and one of the most clear-eyed analysts of the U.S.-China relationship working today. Ryan has just published an essay on the Brookings website laying out three plausible pathways for the relationship under Trump, with scenarios ranging from:
He joins me from D.C. And Ryan, welcome back to Sinica, man.
Thank you, Kaiser. It’s wonderful to be back with you.
So Ryan, like I said, you’re joining us from Washington. Let me start there. One of the strengths of your piece is that it treats leaders not as free agents but as constrained actors. From where you sit in D.C., what are the most powerful domestic forces shaping U.S.-China policy right now? And which of them do you think actually matter to President Trump?
Well, it’s a really interesting question. I have to say, sitting in Washington, D.C., one thing that is very palpable is a hope, a wish among many inside the beltway that we will soon snap back to the way things were before—that this one to two-year window is just sort of a brief pause from the long-term trajectory of intensifying competition and confrontation.
I’m a little less confident of that. In fact, I’m fairly skeptical that’s where things are headed, but that’s certainly a palpable mood within the beltway.
To your question, I actually think that President Trump is fairly unconstrained in terms of his approach to China. I believe he is pursuing the approach that he thinks will yield the best benefit for him personally and politically, but also for the country. The basic contours of it, to the extent that you can assign strategy to what President Trump is doing, are:
- Trying to lower the temperature of the U.S.-China relationship through direct engagement with President Xi.
- Showing tremendous respect to President Xi and, by extension, China in service of that effort.
- Building deterrence in Asia militarily.
- Reducing dependence upon China for critical inputs to the U.S. economy.
- In his own way, trying to rebalance the U.S.-China economy.
That’s the direction he is trying to take things. I don’t think he surveys the landscape of the U.S. political class and finds too many threats to his vision and approach to the relationship. But he’s thinking about midterms, he is thinking about 2028, and he’s thinking about affordability and things like that.
I mean, is that part of the logic that’s driving him to soften things with China right now—to hit pause?
Yeah. I think that there are a few things causing him to do that. First, he believes that China has us over a barrel in terms of their control over rare earths and other critical inputs. Until we get out from under the sword of Damocles that the Chinese have above our head, I don’t think he sees much value in taking the U.S.-China relationship toward head-on collision.
He also recognizes that he’s managing a lot of other problems around the world simultaneously. Adding to that list with intensifying confrontation with China may not be wise or prudent.
But I think he also recognizes that there isn’t a ton of appetite in the United States among the body politic for head-on confrontation.
This is something, Kaiser, you have written about and talked about—the vibe shift in the United States. One of President Trump’s unique strengths is his reptilian feel for the mood of the American people. And in this regard, I think that the president reflects what he can sense from the American people in terms of what their expectations are for the U.S.-China relationship today.
Well, that’s comforting. The other question: business and industry coalitions used to be, at various times, a ballast for stability or even an active force for improved relations with China. Are they acting on him today? Is there business pressure somewhere? Is Jensen Huang a major force in his thinking these days?
Well, I think that President Trump operates much differently than traditional U.S. presidents, in the sense that he is not sitting in the Oval Office waiting for his staff to bring him options for him to decide upon as it relates to China. As we’ve talked about before in Berkeley and elsewhere, he is his own China desk officer. He takes his own responsibility for calling the shots and setting the direction of U.S. policy towards China.
And in doing so, he is not informed by stale, turgid intelligence briefings that stone-faced people deliver to him early in the morning. He is talking to a range of people in and outside of government. He’s talking to people he treats as peers and considers as peers, including Jensen Huang, but not just Jensen Huang. He is basing judgments upon the body of inputs he’s receiving, which are far broader than a traditional U.S. president would.
So if he is so unconstrained, and if his policy toward China, as with all things, is such a function of his idiosyncratic whims and his character, then what about this current pivot away from ideology? Credit where it’s due, it’s something that I’m really happy to see. Is this something that could survive Trump, or is it inseparable from his personal instincts and his incentives?
Well, I’ll try to take this in two parts. The first is that I think Trump is in a category of one amongst the U.S. political class in his willingness and tolerance to effect a change in America’s overall orientation towards China. And you noted this very articulately in your introduction, that he has moved the United States away from sort of an emphasis and a framing of great power competition as the sole lens through which to view the U.S.-China relationship to something that’s much broader.
I think of it as sort of non-conflictual coexistence, a more pragmatic, realistic appraisal of the nature of the U.S.-China relationship than preceded President Trump. But it does raise the question, I think a very legitimate question that you’re asking, which is, is this just something that will perish when President Trump departs office?
I can’t tell you. I honestly don’t know. But my instinct would be that no, this has the potential to outlast President Trump. However, for it to do so, a few things will need to happen:
First, President Trump will need to demonstrate return on investment. Over the next couple of years, he will need to demonstrate that this less harsh approach to the U.S.-China relationship yields tangible benefits for the American people and American workers.
Secondly, whoever succeeds him, whether Democrat or Republican in 2029, will need to be able to make a case for what America’s national goals are and how China relates to them.
It’s impossible to know how those two variables will play out, but it is certainly a possibility that we could see an elongation of this period beyond just Donald Trump.
The ball then is sort of in Beijing’s court. They need to pay a return on that investment, I think, if they want it to endure beyond Trump.
But speaking of Beijing, let’s flip the lens. Is Xi similarly unconstrained? Is he the singular determinant of Chinese policy toward the U.S., or are there domestic determinants of China’s policy toward the United States at this point?
And if there are, is it economic stabilization in the post-COVID period? There are plenty of things that bedevil the Chinese economy right now.
Is it:
What are Xi’s considerations as far as you can tell?
Well, one of the unique aspects of this moment is that we are in a situation where the two countries are driven by very personalistic leadership styles. There are some, for me, uncomfortable similarities now in the way that the two countries are sort of operating.
I don’t think that Xi is perfectly unconstrained. I’ve never subscribed to the view that he has a monopoly on power in China and that he alone can determine the outcomes for 1.4 billion people. But I do think that there are certain things that he is very invested in and that his brand, his political brand, is associated with. One of them is making progress towards greater self-reliance and less dependence upon the United States and the West for China’s future growth, innovation, and technological breakthroughs. And this period of relative calm in the relationship, I think, serves that purpose. It gives breathing room and space for China to make progress down the path of greater self-reliance.
The second is being able to give proof to the narrative that time is on China’s side, that China has the wind at its back, and that it’s the United States that, on a relative basis, is declining. And I think there are plenty of proof points that President Xi and those around him can point to, to build that case persuasively inside China today, which I think also gives some momentum to the current direction that we’re in.
I mean, I know it’s hard to say with any certainty, but is it your sense that there’s debate within the Chinese system about how hard or soft to lean into this current period of calm? Is this something that, you know, is he facing opposition? In other words, are there people who are saying,
“Hey, America’s showing weakness, time to press our strength,”
or does it seem to be, you know, Xi’s calling the shot in that case?
Ryan Hass: You know, it’s a good question. My latest sort of touch for that is a bit dated. I was last in Beijing and Shanghai in December. So I’m a month plus out from my last contact with people who are in policy circles in China.
But based upon that last round of conversations, my view is that many people recognize that this moment is serving China’s interest well, that China’s goal is to try to relieve pressure and sort of unblock the path to China’s continued rise.
To the extent that President Trump is willing to play a role in that by relaxing pressure upon China, whether it be through:
I think those are all sort of indicators that this is working to China’s long-term benefit.
Kaiser Kuo: So Ryan, a central claim or assumption in your essay is that both sides, Beijing and Washington, are behaving less out of mutual trust than out of a mutual sense of vulnerability. That, I think, isn’t a claim that many people would challenge, actually, and I wouldn’t.
To what extent do you think that policymakers in both capitals genuinely understand this as kind of a negative sum dynamic? And to what extent are they simply discovering through painful trial and error that they are mutually vulnerable and that they need to chill out?
Ryan Hass: Well, I have a pretty high degree of conviction around this point, but I don’t have some smoking gun evidence that I can point to to prove it.
My sense is that both leaders and those around them have come over the past year to recognize that the other side is capable of doing immense harm to them.
And I think that this has been a revelation, more so on the US side than the Chinese side. The Chinese side has been well aware for a long time that the United States is capable of being a dangerous superpower that can do immense harm to China.
But when President Trump and Treasury Secretary Bessent and others entered office last year, they entered office with a certain degree of bravado and hubris. Secretary Bessent famously said that
“China is holding a pair of twos in terms of, you know, the cards it has in its hand and the lack of leverage it has over the United States.”
No one is talking like that anymore.
Through painful trial and error, both sides have come to realize that they are each capable of doing harm to the other, and that if one side initiates action against the other, it should expect a painful retaliatory response.
And so I don’t think that President Trump and President Xi over the past year have developed some brotherly friendship where they decided not to do harm to each other.
I think they have both come to recognize that if they take actions that are harmful to the other, they will get hit back in response. And that it will hurt.
And that was the whole lesson in 2025 leading up to Busan, right?
Kaiser Kuo: And you know, your trip may have been a couple of months ago, but that was still in the post-Busan era. So I think you have a probably quite accurate read of how they’re feeling right now. Not much has changed since then.
Ryan Hass: Right.
Yeah. There haven’t been many major ruptures or fluctuations from then till now. Except the rupture that, you know, Mark Carney spoke of.
But so Ryan, let’s jump in with your first scenario, the soft landing. In this pathway, both leaders:
I mean, you know, they’re really talking about investment right now. We’ve got Ford talking about working with Xiaomi possibly, according to the FT, at least on a battery plant, right?
Yeah, you’re absolutely right. I think for this scenario, the soft landing scenario to take root, a couple of things would need to happen.
So that’s the first prerequisite.
I think one of the things that some people point to who are advocates of this approach would be some type of grand bargain.
So we know that President Trump is planning to travel to China in April. If that visit were to yield a sort of significant breakthrough on a contentious issue, most people would identify Taiwan as the candidate, Taiwan combined with some type of transactional benefit for the United States and its workers. Then that would give momentum or solidity to the idea that we could travel down this path.
But short of that, I think it’s hard to imagine both leaders believing, and acting on the belief, that they can sustainably improve the nature of the relationship over the long term.
What makes that costly from the American side?
So you say that it would require both sides to send costly signals. What sorts of signals are we talking about from Beijing, and what would be costly about those? How hard would they be to deliver domestically in Beijing?
It’s a great question. I think in the case of China, there is a certain degree of skepticism about whether the Chinese leadership would be comfortable seeing some of its companies and crown jewels invest or produce outside of China. We see this in particular with Meta’s efforts to acquire a Chinese-origin AI company that relocated to Singapore.
Meta’s, yeah.
Another area, in the Taiwan context, would be if President Trump were to alter longstanding declaratory policy toward Taiwan, would China reciprocate by:
These are the types of questions that sort of point to costly signals that each side would expect the other to give if they were to give it themselves.
I have trouble seeing that as costly to China compared to the electoral costliness of signals from America. It feels like China can ram this through, while Trump faces electoral pressure.
Yeah, he might. But let’s keep in mind, he’s never going to be on a ballot again for the rest of his life.
That’s true.
And so, President Trump has never shown a lot of conviction about election outcomes that don’t involve his name on the ballot.
Ryan, looking back over recent U.S.-China history, is there a precedent that you can point to for restraint for restraint actually holding for any decent length of time?
I can’t think of anything off the top of my head right now that would give a lot of confidence to the notion that restraint for restraint is a time-tested and well-established trend. The critique that I think critics of the soft landing approach would make is that the soft landing would involve the United States making concessions to China without receiving reciprocal benefits in return. There’s a pretty calloused skepticism that has built up over years, including within the Trump administration, as a consequence of China’s underperformance in the phase one trade deal.
Obviously, you floated this possibility that something like a fourth joint communique on Taiwan could anchor the sort of soft landing you’re talking about, the grand bargain.
What problem would such a document actually be trying to solve? What would be the content of a fourth communique? And is Taiwan ultimately the issue that makes this scenario maybe politically untenable, even if both leaders are inclined toward restraint? I mean, is Taiwan going to flummox this?
I think it will be very difficult. The idea would be that the last time the United States and China had a communique was in 1982. A lot has happened in the last 40-plus years. A new framework that sets out a baseline of understanding for how both sides will approach cross-strait relations may be a useful stabilizing mechanism.
I’m on the more skeptical end of the spectrum on this question. I don’t think that the challenge is a lack of understanding about the nature of cross-strait issues. I think that there are just competing interests involved that need to be managed.
In Washington, it’s treated as sort of a foregone conclusion that Beijing is desperately seeking a fourth communique or some type of new understanding related to Taiwan with President Trump. There are a few factors that may militate against that as a foregone conclusion:
Yeah, yeah, yeah. Just to remind everyone, this is your most optimistic scenario. And in this most optimistic version, there is still a sense that the soft landing would be kind of inherently provisional, something closer still to a pause than to a full reset.
I am ineradicably optimistic but still have trouble seeing either polity really arriving at some kind of durable modus vivendi right now. There’s just no trust. There are many deeply entrenched habits of mind on both sides.
But there are other scenarios that you posit here. The second scenario is the one I sincerely hope to avoid: a hard split.
You frame this as a familiar arc: Trump starts conciliatory, grows very frustrated, and then swings really hard. We’ve seen this many times. What are the most plausible triggers that could push the relationship down this kind of path toward a hard split?
Well, there are a few ways we could get here:
1. There could just be a misunderstanding on what each side agrees to. President Trump comes to the conclusion that the Chinese are under-delivering on their promises. He grows frustrated, angry, and we find ourselves back following the same cycle as we did during the first term, where:
- The first three years focused on negotiating a phase one trade deal.
- The fourth year focused on letting it rip because the president was so angry and frustrated that COVID had spread and undercut his reelection prospects.
2. China takes actions against American allies that involve use of force and puts the United States in the very difficult position of deciding whether or not to employ force against China to come to the defense of its allies and uphold Article 5 commitments or traditional understandings of security commitments.
Examples of such allies would include the Philippines or Japan.
Right. Right. Right.
And then what some people in Washington would say is that as the midterms get closer, the political incentive for President Trump to become harder and harder towards China will grow, and that the political imperative of President Trump wanting to hold off Democrats gaining control of the House and relaunching impeachment probes against him will compel him to grow tough.
This is the hope, I think, of a lot of people in Washington who want us to get back into the business of great power competition. And I’ll just offer just a quick caution, Kaiser, as to why I’m not yet convinced that this is the natural course of events that we’re going to find ourselves in.
First, you know, the president has demonstrated that he is very sensitive about America’s dependence on rare earths. That dependence is not going to change in the next 12 months, 18 months, even two years.
Yes. The second is that President Trump just genuinely is not activated by the military threat or the ideological nature of competition between the United States and China. He’s much more focused on economic and tech issues. He wants to make deals that he can point to and tout as successes and breakthroughs.
And having a hostile relationship with China would sort of move against that objective.
I also think that President Trump is pretty comfortable with the status quo right now. He doesn’t face immense political pressure at home for where the U.S.-China relationship stands. He also likes to brag privately with his colleagues and counterparts about how much money he believes the United States is generating from tariffs on China, never mind the fact that it’s U.S. importers that pay the tariffs.
And then lastly, I think that President Trump is very focused on legacy and blowing up relations, burning down the house with China is not a legacy enhancing exercise. Putting the relationship on a new plane potentially could be.
So, I mean, the fear of a blue wave in the 2026 midterms, I get that. But somebody’s got to be showing him these polls that say,
“there’s just not a lot of appetite right now among voters for tough-on-China. It’s not a winning campaign strategy right now.”
I mean, poll after poll after poll is showing that that appetite has fundamentally weakened. The vibe shift once again.
Right.
So, I mean, hopefully that’s a mitigating force.
Yeah.
And traditionally, midterm elections are not animated by China or by foreign affairs. I mean, there really isn’t any empirical evidence that going tough on China improves the odds of House and Senate candidates getting elected.
So, from Beijing’s perspective, I mean, it’s pretty easy for us to think of what kinds of U.S. actions would collapse strategic calm and force Beijing to take a harder line that would be reciprocated by Washington. I mean, all sorts of triggers, right?
But where do you think miscalculation is especially dangerous? What are the areas where you think that crossed wires and signals misinterpreted are particularly dangerous?
I would suggest, just as a hypothetical scenario, if the United States became more aggressive with other countries about urging them, insisting that they adopt America’s AI tech stack—
Right.
—and conditioning security support for them doing so. That could be an example of how things could go off track.
And if there were further actions like we saw last fall where the Department of Commerce rolls out something in an uncoordinated fashion, the 50% rule, the affiliates rule.
Right.
Something along those lines that the Chinese perceive as violating the truce, the understanding that was reached between both leaders—that could compel the Chinese to reciprocate and retaliate.
Well, that problem may be solved. Trump has apparently neutered BIS, right? So we’ll see.
One thing that struck me is how much this scenario depends on momentum, on anger compounding on anger. Once the relationship starts moving in this direction, how easy is it to reverse?
I ask because this isn’t the first time either Beijing or Washington has seen things go sideways. And you’d think that both sides might have learned something about how to manage that sort of crisis. And at least sometimes they’ve managed to get the relationship back on track.
And we saw that with the TACO deal that resulted in Busan.
Has there been any learning? I mean, do you think that there’s enough sort of wisdom on either side to avoid that kind of scenario?
Well, I think that the key to avoiding that scenario is the two leaders. When things begin to veer off track, it’s the two leaders that usually put things back on track. And the challenge, the structural challenge, is that the Chinese traditionally, historically, are pretty reticent about requesting calls from President Xi to President Trump.
So if there is an incident, say an unplanned encounter between naval vessels or whatever it may be, and things begin to sort of go off the rails, pressure builds. We could have a spy balloon-like dynamic emerge inside the United States, where there is just boiling angst and anger about something that China has done that violates American airspace or hurts American sailors or whatever it may be.
When the Chinese do not appear to be reaching out to President Trump personally, we could find ourselves in a tough spot. And if the Chinese are perceived to be the instigator of this downward spiral and they don’t communicate directly with President Trump but try to operate through intermediaries, I think that President Trump could find himself both humiliated and offended in ways that could sort of compound the initial problem.
So that’s scenario two: one where there’s a hard split, not an optimal outcome at all, obviously.
You, fortunately, ultimately judged scenario three, which is about buying time and building insulation, as the most likely path. I would certainly concur. But what, in your mind, makes this outcome more resilient than the other two? I mean, because it seems sort of inherently unstable, right? It’s provisional. It’s about sort of just playing for time. And so it feels very impermanent.
But why do you think this is maybe more durable than the other two possible outcomes?
To me, Kaiser, and this is unscientific, this is just sort of a feel, it feels like the most realistic scenario. I don’t think that either of the two leaders is prepared to sort of make significant lasting concessions to the other. I don’t think that either country is prepared to accept a subordinate status to the other.
I think that both countries, in their own way, are able to tell themselves a story that time is on their side. And if they just regenerate or strengthen themselves, that they will be able to outlast and outpace the other.
And so this third scenario of sort of buying time and building insulation, it’s most appealing to me because it works for both leaders and how they describe their intentions and their goals.
This scenario allows President Trump to make directional progress on all of those goals.
Similarly, for President Xi, I think that there’s a fairly mirrored set of objectives.
President Xi is very committed to strengthening China’s self-reliance and moving down that path. He certainly, in my mind at least, does not seek a confrontation or conflict with the United States. But he also isn’t interested in making any significant gestures or major concessions to the United States either.
I think that the Chinese believe that they have momentum behind them. And the wave of leaders that have come to Beijing over recent weeks to visit President Xi, I think, have reinforced that perception.
So a core insight of your piece, Ryan, is that both sides are constrained by deep mutual dependencies. I think most people who are listening are aware of some of these and can rattle them off:
But what do you see as underappreciated vulnerabilities on each side that might reinforce this uneasy equilibrium? Are there things that we’re not talking about enough where there is mutual dependence?
Well, I’ll offer a few.
When I was in China last December, I was discomforted to be reminded in almost every meeting about America’s dependence upon active pharmaceutical ingredients, APIs, from China. And I don’t think that that was just sort of a stream-of-consciousness idea that bubbled into the minds of everyone we were sitting down with. It was a reminder that rare earths aren’t the only source of American dependence upon China.
Similarly, I think for China, they are painfully aware of their dependence upon the United States and the West for:
But also, at a more intangible level, access to America’s higher education system. This matters both for the students themselves and their future contributions to Chinese society; Chinese leaders’ ability to keep that door open for students, the children of their peers, is also critically important. And if the relationship were to deteriorate, we’ve already seen that this is something the Trump administration has considered using as a retaliatory tool.
Now he’s talking about 600,000 Chinese students in America. I guess maybe he thinks about them as a service export rather than as human beings who contribute to the flourishing of our academic community.
But whatever the case, I think that having Chinese students in the United States enhancing the education of classrooms that they’re a part of is a net benefit for the American people.
So, Ryan, in this scenario, you kind of suggest that the way we score this is by measuring who reduces dependence faster. I mean, if we look out five, ten years from now, which side do you think is better positioned to actually succeed in reducing those dependencies? I mean, who’s working hard at this?
But what’s your assessment of this?
Well, we have a tendency to swing from one extreme to the other in the way that we talk about this in Washington. A few years ago, Kaiser, you and I were talking about peak China, whether it’s a serious thing, how should we think about it? Everyone was focused on all of China’s weaknesses, vulnerabilities, and soft spots.
In recent months, it feels like the pendulum has swung to the other extreme, where China can make everything, China can do anything. Ten feet tall again, right?
The world is sort of gravitating towards China. The United States is in dire straits. I’m uncomfortable with either of those extremes.
I think that China does have profound challenges, but it also has immense strengths. Neither of those is going to go away anytime soon. We have to get comfortable looking at both of those side by side.
And the same can be said of the United States.
I will just make one observation that I hope is in service of answering your question, which is that I am deeply uncomfortable with the direction that our country is headed in certain respects. I think that right now the social fabric of the country is tearing, and national unity is the foundation of national strength.
No country can be stronger on the world stage than it is at home.
What we are watching in Minnesota and elsewhere is deeply troubling, both for me from a spiritual standpoint, but also just from a civic standpoint, and also in a measure of national power.
Secondly, I worry very much about America’s alliance network fraying and unraveling. Alliances traditionally have been a force multiplier of American influence on the world stage. Now, I think that our alliance network exists more in name than function.
This is going to be a long-term cost that the United States is going to pay for the moment that we find ourselves in.
But more fundamentally, and this I think, speaks most directly to the question that you’re asking, I worry that America’s economic competitiveness is eroding somewhat.
I just feel like, at a certain level, President Trump is pursuing a 19th-century strategy of assuming that control of natural resources will be the source of national power. But we find ourselves in a different world today.
I think that his resource obsession is a strategic distraction.
For me, the goal needs to be to stimulate growth.
Growth comes from productivity, and productivity comes from innovation and diffusion, which in turn depend on:
- Talent
- Ideas
- Efficient allocation of capital
- A transparent and predictable legal system
This is how America gains strength.
The further we turn from that, the more I fear we will lose our ability to achieve the sort of escape from dependence that your question was anchored in.

Yeah, I mean, it’s so frustrating. This is a man whose favorite metaphor is cards, you know, he’s always talking about who’s got the stronger hand, who holds more cards.
It feels like somebody’s got to be able to convince him that what he’s been doing by, like you say, turning away talent at the border, by destroying those things like predictability, rule of law, alliances, all these things, you know, that act as force multipliers for us.
He’s plucking valuable cards out of his hand and, you know, lighting them on fire to light his cigars. It’s just bizarre.
I mean, I feel like at this point, Beijing must look at, you know, the hands that each side holds and conclude that there’s some very pronounced asymmetry here.
I feel also like that could really make this equilibrium that you described in scenario three more fragile. I mean, if one side succeeds faster than the other in reducing vulnerability, and right now it looks like China’s succeeding faster in reducing vulnerability, that actually seems like it would destabilize this equilibrium.
I agree with you if the equilibrium is measured in bilateral terms only.
And I thought that Adam Tooze made a very important point in the interview that you flagged, the one he did with Ezra Klein after Davos, which is that if we think about the world as undergoing a power transition from the United States to China, it is going to trigger all the anxieties, insecurities, and antibodies in the United States about China’s rise and compel us to try to suppress it.
And if we instead think about what’s going on in the world not as a power transition but as a power diffusion, where the United States is not significantly declining but power is growing much more diffuse in the international system, which is splintering and growing more disordered,
then the nature of the challenge shifts, and the way that we think about, address, and respond to it also evolves.
I am much more inclined to the latter view, that we’re seeing a splintering and a diffusion of power rather than a transition in power. But this is going to be, I think, sort of a core aspect of the debate that will be underway about the way that America relates to the world for the next couple of years.
Yeah, it’s interesting. I seized on that metaphor that Tooze used, too.
And I started thinking about that kind of moral panic securitization that we’ve seen in this country as an autoimmune response.
“You’ve got to take some goddamn antihistamines and chill.”
I agree with you that this scenario, this third scenario that you describe, is probably the most likely.
Does this framework, just stepping back, suggest that we’ve entered a phase right now where U.S.-China relations are less about, you know, trying to build trust or establish shared norms and more just about engineering resilience under assumed conditions of enduring mistrust?
I mean, where each side has a hand on the other’s choke points.
It’s, you know, I guess it’s structurally analogous to, obviously not identical to, kind of, you know, mutual assured destruction during the Cold War.
If that’s right, how should it change the way policymakers even think about stability?
Well, it’s a great question. I am inclined to your second scenario that you just described. I do think that we’re both sort of holding each other’s oxygen tubes to a certain extent.
I don’t think that there’s any outbreak of goodwill or warm, fuzzy feelings towards each other right now. And I also think that we’re in a pretty fraught moment. Both countries believe that they are gaining a certain degree of advantage over the other or that they can do immense harm to the other.
And on top of that, if you look at social science work and some public polling data, this isn’t the time. We are not at a moment where there’s going to be some grand breakthrough in the relationship.
I think that if we manage it well through this coming period, we will have done a service as stewards of a long-term relationship rather than as authors of some concluding chapter to it.
Well put. Beautiful.
A final question to you. I mean, if listeners wanted to just cut through the rhetoric and only watch for just a handful of real concrete indicators over the next, say, 12 to 18 months, what would you tell them to focus on to assess which scenario we’re actually in or which we’re careening toward?
I would encourage people to watch the frequency of interaction between the two leaders,
- how often they talk on the phone,
- how often they acknowledge exchanging views through each other’s ambassadors or intermediaries.
I would pay attention to the degree to which both sides are preparing for engagements, direct face-to-face summits between the two leaders: whether this is a professional process or just a slapdash trip across the ocean. I would watch to see how well the United States is doing in terms of building out stockpiles and reducing its vulnerability to shocks in the industrial supply chain from China.
And similarly, I would watch to see the degree to which China is sort of making progress and innovating around some of the export controls and other obstacles that the United States has put in its development path.
So how important are atmospherics going to be around the April Trump visit to Beijing?

Well, I think it’ll be significant.
You know, it’s somewhat ironic, Kaiser, because traditionally, the United States trades form for substance. We negotiate away the various bells and whistles of a Chinese leader’s visit to the United States in exchange for substance, because we know that the Chinese leader cares deeply about the imagery that comes out of such engagements, because
it bestows respect and gives people inside China pride that their leader is being treated with dignity on the world stage.
Now, I think we’re in a moment where the roles are reversing: it’s President Trump who will be committed to the trappings of dignity and respect, and he’ll want something grander and more dramatic than what he experienced with the “state visit plus” in 2017 or ’18. (’17 it was, yeah.) I expect that he will probably go to a second city this time as part of his trip.
And so how he is received by the public, but also, you know, the imagery that comes out of that will be important to him. But ultimately, I think that the measure will be to what extent has his travel to China benefited the American worker and the American people. And, you know, we’ll have to see.
Well, I will be there on the ground in Beijing in April. I’m leaving very soon. In fact, just two weeks from now. And I will report faithfully. I’ll do a couple of shows about, you know, preparations for the Trump visit and see how that plays out. Because I think that is a very, very telling indicator.
And I think you’re absolutely right. We are in this world right now where the Trump presidency cares very much about all the symbolism, the pageantry, all the etiquette and the formalism of it. And I think Beijing knows that. Beijing knew that before November 2017, when he went. They turned the flatterometer up very, very high. They know how to do this.
Well, I will be listening carefully to your reporting from on the ground, Kaiser.
Well, thank you, Ryan. Make sure to read the piece, which is on the Brookings website, and everything else that Ryan writes, because it’s all super, super good.
Ryan, thank you so much for taking the time to chat with me. Let’s move on to paying it forward. Do you have a younger colleague or somebody who you’ve been working with who deserves a shout out here on the show?
I do this selfishly because, you know, I’m looking to cultivate new guests to bring on.

I would point to Audrye Wong, who is an incredibly thoughtful, talented researcher, writer, and public intellectual, who is doing tremendous work explaining China’s economic orientation to the world.
Fantastic.
And we can find her stuff on Brookings?
Audrye, I believe she’s at USC right now.
Oh, okay. Cool. Excellent. Audrye Wong. I will look out for her.
And what about recommendations? As you know, we do a recommendation every week. What do you have for us? You got a book or a film or some music, a travel destination, something that you want to recommend?
You know, Kaiser, I wish that I had something super cool to share. I’m going to just default to a book recommendation from Robert Suettinger. He wrote The Conscience of the Party, the biography of Hu Yaobang.
And it’s as much a gripping human story about Hu Yaobang, the last reformer in China, as it is an x-ray of the Chinese Communist Party system and the way that it operates. So for anyone who’s interested in the inner workings of the party, I think that Robert’s book is a tremendous starting point.
That’s been on my list for a while. I really need to finally get around to reading it.
That’s an excellent recommendation. Thanks, Ryan.
So I’ve got a book as well, plus a couple of China-related things. But my book is just for fun. I’ve been reading the long-lost final novel that Alexandre Dumas wrote. The English translation that I have is called The Last Cavalier, but it’s also known as The Knight of Sainte-Hermine. The French title is Le Chevalier de Sainte-Hermine.
But either way, it is a really fun bit of Napoleonic-era historical fiction in which Napoleon himself is actually a major character. And Dumas gives him a really believable personality, much better than Ridley Scott gave him in that lamentable film, which I hope none of you had to suffer through.
But there are loads of fascinating characters, many of them historical. It sent me scurrying to Wikipedia many a time just to look these people up. But it’s also got a ton of historical material mixed in: letters and decrees and courtroom proceedings, all jumbled into the fictional stuff.
I mean, the plot is a bit of a shaggy-dog story. Maybe 40% fewer tangential plot lines might have made this book a little more readable. But it’s still worthwhile if you’re interested.
Dumas actually writes himself, or his father, into the story. I mean, he does this breaking-the-fourth-wall thing where he suddenly starts talking in the first person and then talks about his father, who was a Napoleonic general also named Alexandre Dumas.
Anyway, it’s great stuff to take your mind off the world as it is. But still, you kind of get to scratch that itch for political turmoil and intrigue. If you’re listening to this show, you probably have such an itch.
For a couple of quick China-related recommendations: some really good sense-making of the Chinese economy has dropped just in the last couple of days as of the day we’re taping. Check out the Asia Society conversation led by Lizzi Lee, who listeners will know, of course, from her many appearances on the show.
She’s joined by two of my faves:
It’s about the challenges of rebalancing the Chinese economy, but it goes way beyond that, into the problems of the property market and much else. It’s as good as you would expect with these three all taking part.
Related to that is the latest outstanding Trivium China podcast, of course, which you can find on the Sinica Network. It’s hosted by Andrew Polk, and it is just a banger of an episode.
Joe Peissel, who heads macro research at Trivium, is the guest for the first half, and they do this thing that they’re going to be doing every month or so, looking at the macro numbers. But this one looks not just at the macro numbers for Q4, but for the whole year. And it’s a great survey.
The second half, though, features Dinny McMahon and Corey Combs, who are both absolutely brilliant. It’s on why China is facing headwinds in boosting capital expenditure, which, if you follow the Chinese economy, you’ve probably heard dropped really precipitously in the last quarter. So check out those shows.
I’m a neophyte when it comes to the Chinese economy, but I’m always interested in learning, and these guys have taught me enormous amounts.
Anyway, Ryan, great to have you on again, man. And this is going to be a very Brookings-heavy month, because I’m going to be talking to your colleagues Kyle Chan and Patty Kim about their recent work.
Delighted to hear it, and thanks for having me on, Kaiser.
Thank you. You’ve been listening to the Sinica Podcast. The show is produced, recorded, engineered, edited, and mastered by me, Kaiser Kuo. Support the show through Substack at SinicaPodcast.com, where you will find a growing offering of terrific original China-related writing and audio.
Email me at [email protected] if you’ve got ideas on how you can help out with the show. Do not forget to leave a review on Apple Podcasts.
Enormous gratitude to the University of Wisconsin-Madison Center for East Asian Studies for supporting the show this year. And, of course, huge thanks to my fabulous guest, Ryan Hass, who is always a favorite: a fan favorite, my favorite.
Thank you once again, Ryan. And thanks to all of you for listening. We’ll see you next week. Take care.