2026-03-22 20:31:51
Jessica Fain is a product leader at Webflow and former Chief of Staff to the CPO at Slack, where she worked alongside April Underwood and many past podcast guests including Stewart Butterfield, Annie Pearl, Tamar Yehoshua, and Noah Weiss. She’s spent her career learning how executives actually make decisions—and why most people completely misunderstand the process.
Listen on YouTube, Spotify, and Apple Podcasts
Why great ideas often don’t get buy-in
Why executive calendars are “like strobe lights” and why the first 30 seconds of a meeting matter so much
Why executives are usually optimizing for a global maximum while you are often optimizing locally
The best question Jessica uses when a leader says something that seems wrong: “That’s so interesting. What led you to believe that?”
Why you should go in to learn, not to convince
Why showing only one option is a mistake
Why AI will make influence more important, not less
Omni—AI analytics your customers can trust
Lovable—Build apps by simply chatting with AI
Vanta—Automate compliance, manage risk, and accelerate trust with AI
• LinkedIn: https://www.linkedin.com/in/jessica-fain-79b8989
• Box: https://www.box.com
• Slack: https://slack.com
• Brightwheel: https://mybrightwheel.com
• Webflow: https://webflow.com
• April Underwood on LinkedIn: https://www.linkedin.com/in/aprilunderwood
• Lessons in product leadership and AI strategy from Glean, Google, Amazon, and Slack | Tamar Yehoshua (Product at Glean, ex-Google and Slack): https://www.lennysnewsletter.com/p/you-dont-need-to-be-a-well-run-company-to-win-tamar-yehoshua
• Atlassian: https://www.atlassian.com
• Behind the scenes of Calendly’s rapid growth | Annie Pearl (CPO): https://www.lennysnewsletter.com/p/behind-the-scenes-of-calendlys-rapid
• Calendly: https://calendly.com
• Glassdoor: https://www.glassdoor.co.in/index.htm
• The 10 traits of great PMs, how AI will impact your product, and Slack’s product development process | Noah Weiss (Slack, Foursquare, Google): https://www.lennysnewsletter.com/p/the-10-traits-of-great-pms-how-ai
• Ethan Eismann on X: https://x.com/eeismann
• Slack founder: Mental models for building products people love ft. Stewart Butterfield: https://www.lennysnewsletter.com/p/slack-founder-stewart-butterfield
• Ilan Frank on LinkedIn: https://www.linkedin.com/in/ilanfrank
• Checkr: https://checkr.com
• Ali Rayl on LinkedIn: https://www.linkedin.com/in/alirayl
• Rachel Wolan on LinkedIn: https://www.linkedin.com/in/rachelwolan
• How Webflow’s CPO built an AI chief of staff to manage her calendar, prep for meetings, and drive AI adoption | Rachel Wolan: https://www.lennysnewsletter.com/p/how-webflows-cpo-built-an-ai-chief
• Barbara Minto’s website: https://www.barbaraminto.com
• How Slack invests in big little details through Customer Love Sprints: https://slack.design/articles/sweating-the-small-stuff
• Building product at Stripe: craft, metrics, and customer obsession | Jeff Weinstein (Product lead): https://www.lennysnewsletter.com/p/building-product-at-stripe-jeff-weinstein
• The Enneagram Institute: https://www.enneagraminstitute.com/type-descriptions
• The Pitt on Prime Video: https://www.amazon.com/The-Pitt-Season-1/dp/B0DNRR8QWD
• Towel warmer: https://www.amazon.com/FLYHIT-Large-Towel-Warmer-Bathroom/dp/B0CB5K34L2
• Casa: https://getcasa.com
• Jimi Hendrix: https://en.wikipedia.org/wiki/Jimi_Hendrix
• Greek Theatre: https://en.wikipedia.org/wiki/Greek_Theatre_(Los_Angeles)
• Pachinko: https://www.amazon.com/Pachinko-National-Book-Award-Finalist/dp/1455563927
• Homegoing: https://www.amazon.com/Homegoing-Yaa-Gyasi/dp/1101971061
• A History of Burning: https://www.amazon.com/History-Burning-Janika-Oza/dp/1538724243
• The Overstory: https://www.amazon.com/Overstory-Novel-Richard-Powers/dp/039335668X
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Lenny may be an investor in the companies discussed.
2026-03-22 01:19:36
👋 Hello and welcome to this week’s edition of ✨ Community Wisdom ✨, a subscriber-only email delivered every Saturday, highlighting the most helpful conversations in our members-only Slack community.
2026-03-18 23:27:01
If you’re a premium subscriber
Add the private feed to your podcast app at add.lennysreads.com
When I released my full podcast transcript archive, Ben Shih, a non-technical designer at Miro, built LennyRPG: a Pokémon-style RPG where you battle podcast guests with product trivia. In this episode, Ben walks through the exact six-step workflow he used to go from sketch to shipped game, including the tools, prompts, and decisions at every stage, along with the broader lessons he learned along the way.
In this episode, you’ll learn:
How to have AI interview you instead of writing the PRD yourself
When to pivot your framework, rather than force the wrong tool
How to systematically process 300+ transcripts into game data
Three core lessons Ben learned from building LennyRPG
References
2026-03-17 20:46:05
👋 Each week, I answer reader questions about building product, driving growth, and accelerating your career. For more: Lenny’s Podcast | Lennybot | How I AI | My favorite AI/PM courses, public speaking course, and interview prep copilot
P.S. Get a full free year of Lovable, Manus, Replit, Gamma, n8n, Canva, ElevenLabs, Amp, Factory, Devin, Bolt, Wispr Flow, Linear, PostHog, Framer, Railway, Granola, Warp, Perplexity, Magic Patterns, Mobbin, ChatPRD, and Stripe Atlas by becoming an Insider subscriber. Yes, this is for real.
A few months ago, I shared all of my podcast transcripts on socials on a whim, and holy sh*t, y’all found such incredibly creative ways to use this data: parenting wisdom rooted in PM advice, user research scripts, antimemes, an infographic for every episode, a “Learn from Lenny” Twitter bot, and at least 50 other amazing projects.
But my favorite project of all was by Ben Shih, a non-technical product designer at Miro, who created LennyRPG. I asked Ben to share the step-by-step journey behind this wildly fun, video-game-inspired project—how he built it and what he learned.
To let a thousand more flowers bloom, today I’m releasing my entire newsletter archive (and my podcast transcripts) in AI-friendly Markdown files. Also, an MCP server and a handy GitHub repo. Paid subscribers get all of the data (some 350 posts and 300 transcripts); free subscribers can access a subset. Grab the data here: LennysData.com.
I don’t think anyone’s ever done anything like this before, and I’m excited to give you this excuse to start playing with the latest and greatest AI tools.
Here’s my challenge to you: build something, and let me know about it. I’ll pick my favorite and give you a free 1-year subscription to the newsletter. Just post a link to your project in the comments below. If you’ve already built something, slurp in this new data and submit it, too. I’ll pick a winner on April 15th. Here’s the data. Let’s go.
Ben is a designer and product builder who enjoys creating small, fun, and thoughtful products that make the world a little better. He’s currently a growth designer at Miro. You can explore more of his work on his website or LinkedIn.
Also, a big thank-you to Tal Raviv, Claire Vo, and Este Lopez for helping me beta-test and improve LennysData.com (which I proudly “agentically engineered” with Codex and Claude Code 👌).
A couple months back, Lenny dropped something special. He made structured transcripts from all 300+ of his podcast episodes publicly available. As someone who’s listened to the podcast for years, I couldn’t stop thinking about what I could actually build with this.
Brian Balfour talks a lot about building at the right moment: if you get the timing right, you’ll find a real window of opportunity. This felt like one of those windows.
The first idea that popped into my mind was to make a Lenny interview app where you can practice job interviews with Lenny’s Podcast guests. However, the more I thought about that idea, the less excited I felt. Interview practice tools by nature feel stressful, and that’s the last type of product I wanted to create. I wanted to make something fun.
What if I turned Lenny’s Podcast into a small role-playing game (RPG)? A game where you explore a pixel world, meet guests from Lenny’s Podcast, compete with them to test your product knowledge, and even capture them like Pokémon when you win. That’s how LennyRPG was born.

When I build apps with AI, I usually follow a very simple flow:
Define the core idea: I start by clarifying what the app is. For visually heavy products, I sketch it out so the AI can better understand the requirements.
Create a product requirement document (PRD): I turn the core idea into a proper PRD with the AI. This becomes the single source of truth for the build.
Create a proof of concept: I use the PRD to plan implementation and build the core functionalities first.
Add remaining features: I finish the end-to-end flow, such as connecting the database and building out settings, profile pages, and other non-core features.
Polish: I go through the app end-to-end, fix UX/UI details, and do final code reviews to make sure everything is stable.
Ship it: I deploy, get feedback, and get it out into the world.
The process isn’t that different from before the AI era. But now I really make sure that I spend enough time on the first two steps to ensure that the AI gets all the context of what I want. In my experience, nailing the core idea and PRD determines 80% of how smooth the rest of the build will be.
Here are the main tools and technologies I used throughout the build:
Ideation and planning: Miro, ChatGPT
Coding: Claude Code, Codex, Cursor
Image generation: GPT Image Gen (gpt-image-1.5)
Quiz generation: GPT-4o
Game engine: Phaser 3
Database: Supabase
Deployment: Vercel
Now let’s walk through how I used this process to build my RPG game. I’ll share the exact prompts, tools, and decisions at each step so you can apply the same workflow to your own projects.
The core idea was simple: turn Lenny’s Podcast into a Pokémon-style RPG where players encounter podcast guests in the wild and battle them through product questions.
For many apps, text and a clear idea are enough to get started with. But for highly visual products like this game, spending some time on visualization can help you get a solid sense of how you want the game to look and feel. That makes a big difference later on when you ask the AI to build the UI and interactions.
For this game, I dropped a few Pokémon screenshots into a Miro board and put together a rough concept directly on the board. Nothing fancy, mostly text and boxes on top of screenshots. But it was enough to show how I imagined the map, the battle screen, and how the characters might look.
The goal was not to design the game exactly but to give the AI something concrete to read and reason about. Once the core idea was roughly visualized, the AI could read the visuals alongside the text, which led to a much stronger PRD in the next step.
As part of the visualization, I also created a few test avatars with ChatGPT to validate the content generation workflow. This helped me understand the prompt engineering needed for consistent pixel art style avatars.
The process was very simple. I dragged in images of the RPG characters I wanted the style to match, then added Lenny’s photo into ChatGPT to create a similar one.
Once I was happy with it, I asked ChatGPT to describe the tone, style, and design in detail so that I could reuse that as a prompt later.
Prompt I used:
Study and think through the styling, design, colors, proportions, and overall look in detail, then return only a polished image-generation prompt that will create a similar front-facing character based on the provided person’s photo, with a transparent background and no additional elements.
In my experience, the PRD is the most important document if you want the AI to execute your vision correctly. Just as it does for human teammates, a PRD gives the AI a baseline understanding of your app’s goal, problem statement, and core idea. Whenever the AI hits a wall or the context window runs out, you can refer it back to the PRD to realign. No matter what stage you’re at, the PRD makes sure everything the AI generates stays true to what you’re actually trying to build. That’s why I always invest time here.
That said, writing PRDs can sometimes drive you crazy. So instead of writing the PRD myself from scratch, I let the AI interview me. I pasted my core idea along with the visuals into ChatGPT and then asked it to ask me questions so that I could answer them one by one.
Prompt I used:
Ask me questions to help you put together a brief PRD for the following web game: I want to create a mini game that takes all the podcast episodes from Lenny’s Podcast, generates questions from each episode, and make it like a Pokémon RPG game, with similar visuals. What I am expecting is, for example, you found Elena in the wild, and you can compete with Elena on product questions, you get 5 questions, and you lose HP [hit points] when you lose the answer etc. We can randomly pick 50 guests from the podcast and get challenged. The entire theme/design of the game needs to be very Pokémon RPG style in the old day.

With the prompt, ChatGPT came back with 17 questions. I moved them into Miro to visualize them better and used Wispr Flow to quickly dictate my answers verbally.
Answering these questions also forced me to think through gaps and assumptions in my idea, while giving the AI much better context than a one-page description ever could.
Once I answered all of the questions the AI had for me, I chained the answers together with all the available artifacts in Miro and asked the AI to generate a comprehensive PRD.
With the PRD in place, I moved it over to Cursor as a Markdown file so I could start working on the POC.
For the actual development, my setup uses three tools: Claude Code, Codex, and Cursor’s Composer.
I treat Claude Code as my lead engineer. It helps draft the implementation plan, think through architecture, and reason about product and design constraints. I’ve also found it to be great at searching for solutions and open source libraries. Codex is mainly for executing tasks from the implementation plan. It’s very good at following instructions accurately, and comes with a more generous token limit. Composer is mostly for smaller tasks like formatting documents, JSON files, or writing simple scripts.
Using the PRD as input, I first asked Claude Code to search for any open source projects that could help me move faster. This is something I always do early on. Very often, people have already built something similar and made it open source on GitHub, which can help you set things up much faster.
One of the first libraries I landed on was RPG-JS. Thanks to the library, it took me around five minutes to get something running. I was able to quickly build out the essential game flow. The overworld map handled basic player movement, encounter zones, and simple UI elements.
But very quickly, I started hitting challenges.
Challenge #1: Hitting the limits of RPG-JS and pivoting
After a few iterations, it became clear that RPG-JS was not the right foundation. The framework is heavily designed around inventory systems and weapon-based combat. That worked against me, since my battles were quiz-based and logic-driven. The more I tried to bend it, the harder it became for the AI to reason about the system cleanly.
After talking it through with Claude Code, I decided to stop forcing it and pivot. The new framework that I decided to use is Phaser, a 2D game framework used for making HTML5 games for desktop and mobile.
Challenge #2: Getting the map running in Phaser
After switching to Phaser, things became much more flexible in terms of scenes, maps, and game logic. However, because everything is more customizable, even setting up a basic map took a lot more work.
Fortunately, using Claude Code, I found a Medium article from a while ago that included an open source, reusable map template. That helped me speed things up significantly and get back to focusing on the game itself.
Challenge #3: Polishing the details in Phaser
Phaser is a powerful but complex library with a lot of different features. Claude Code took some time to actually understand how it works, and I had to go through many iterations to get the details right. Things like importing the right fonts, making sure UI elements were positioned correctly, and editing everything within Phaser’s open canvas all required a lot of back and forth.
One tip for complicated tasks like this is to ask Claude Code to create a simple Markdown file to log everything it tries, so it can keep referring back and updating what works and what doesn’t.
This helps Claude Code, and AI tools in general, understand the codebase and framework better. It’s especially useful as your codebase grows, when even small things like font changes can become difficult for an AI to handle.
After working through all these challenges, the game finally reached a playable state. The starting screen, map, and battle screen were all working end-to-end.
Once the POC was ready, I shared it internally in the office to get a few people to try it out. At this stage, I wasn’t looking for polished feedback or detailed bug reports. I mostly wanted to see how people reacted when they opened the game for the first time. Did they immediately understand what to do? Did the core loop make sense? Most importantly—did it feel fun, or did it feel like work?
This kind of lightweight, informal testing gave me confidence that the core idea worked, and that it was worth investing more time to turn the POC into something more complete.
Once I got the app running correctly with the basics and got great feedback from folks in the office, I started following the plan to scale my POC into a proper game.
But the process was less straightforward than expected, mainly because there were a lot of podcast episodes to process. Scaling from a working POC to a full game turned out to be mostly about figuring out how to handle things systematically instead of manually.
Here are the main tools and decisions that helped me get there:
The transcript file provided by Lenny contained only raw text. To make it usable in the game, I first had to enrich the data with things like episode title, episode URL, and podcast cover.
To do this, I pulled in the podcast RSS feed with Cursor’s Composer and used it to attach the missing metadata to each transcript. This gave me a much more complete dataset that the game could actually use.

Then, using Claude Code, I asked it to create a simple CLI tool that could systematically generate quiz questions for each episode using the OpenAI API. Instead of doing this episode by episode, the tool processed everything in one go.
This step was as simple as typing in a prompt: “Create a CLI command tool that creates a simple way to read through all the transcripts in /transcript folder one by one, and for each, generate 5 questions following the requirements and JSON format: {Your requirements and JSON format}”
It took around 20 minutes to finish, and the output was a structured JSON file that I could plug directly into the game.
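As a rough illustration (not Ben’s actual code), a transcript-processing tool like the one described might look something like this in Python. The prompt wording, the question schema (`question`, `options`, `answer_index`), and the folder/file layout are all assumptions for the sketch; only the model name (GPT-4o) comes from the article:

```python
import json
from pathlib import Path

# Hypothetical schema: each quiz question is a dict with these keys.
REQUIRED_KEYS = {"question", "options", "answer_index"}

def validate_questions(raw: str) -> list[dict]:
    """Parse the model's JSON output and check each question's shape."""
    questions = json.loads(raw)
    if not isinstance(questions, list):
        raise ValueError("expected a JSON array of questions")
    for q in questions:
        missing = REQUIRED_KEYS - q.keys()
        if missing:
            raise ValueError(f"question missing keys: {missing}")
        if not 0 <= q["answer_index"] < len(q["options"]):
            raise ValueError("answer_index out of range")
    return questions

def generate_for_transcript(client, text: str) -> list[dict]:
    """Ask the model for 5 questions; retries and rate limiting omitted."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # the model Ben mentions using for quiz generation
        messages=[{
            "role": "user",
            "content": "Generate 5 multiple-choice questions as a JSON array "
                       "of {question, options, answer_index} objects, with no "
                       "other text, from this transcript:\n" + text[:20000],
        }],
    )
    return validate_questions(resp.choices[0].message.content)

if __name__ == "__main__":
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment
    client = OpenAI()
    output = {}
    for path in Path("transcript").glob("*.md"):
        output[path.stem] = generate_for_transcript(client, path.read_text())
    Path("questions.json").write_text(json.dumps(output, indent=2))
```

The validation step matters because a batch run over 300 transcripts will occasionally produce malformed JSON; failing loudly per episode beats discovering broken questions in-game.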

One of the hardest parts of building the game was creating over 250 RPG avatars in a consistent way. Each avatar needed a photo of the guest as input. Doing this manually by searching and downloading guest photos one by one would have taken forever.
Fortunately, every Lenny’s Podcast episode already includes an episode cover featuring the guest’s avatar. So I used Cursor’s Composer to pull the RSS feed again, extract the image URLs, download the covers locally, and use those as inputs for avatar generation.
That solved the sourcing problem but introduced another one: How do I make sure every avatar looks consistent in quality and style?
This is where I used the OpenAI Playground to repeatedly test and refine my prompt, and to figure out which model worked best for the task. I kept adjusting until every generated avatar followed the same style and looked like it belonged in the same game.
Once the prompt was stable, I used Claude Code again to write another CLI tool that could systematically generate all the RPG avatars from the episode covers. That turned a very painful manual task into a one-click process.
And of course, I had to check each output one by one to make sure the sizes and styling were consistent and matched how the guests looked in their podcast covers. This was one of the most interesting steps because of a few fun edge cases. For example, I didn’t know Adam Grenier actually has rabbit ears on top of his profile image in the original podcast cover—I almost deleted them. And some episodes have two people in the cover image, like Jake Knapp and John Zeratsky’s, so I had to tell the AI to generate a separate image for each person.
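The RSS step above can be sketched with the standard library alone. This is an illustrative guess at the workflow, not Ben’s code: it assumes each `<item>` in the feed carries per-episode artwork in the common `<itunes:image href="...">` convention, and the feed URL is a placeholder:

```python
import xml.etree.ElementTree as ET

# Namespace used by the common per-episode artwork tag (an assumption
# about this feed, though it is the standard podcast convention).
ITUNES_NS = "http://www.itunes.com/dtds/podcast-1.0.dtd"

def episode_cover_urls(rss_xml: str) -> dict[str, str]:
    """Map episode title -> cover-image URL from a podcast RSS feed."""
    root = ET.fromstring(rss_xml)
    covers = {}
    for item in root.iter("item"):
        title = item.findtext("title", default="").strip()
        image = item.find(f"{{{ITUNES_NS}}}image")
        if title and image is not None:
            covers[title] = image.get("href", "")
    return covers

if __name__ == "__main__":
    # Downloading the covers locally (placeholder URL; needs network access):
    import urllib.request
    rss = urllib.request.urlopen("https://example.com/podcast.rss").read()
    for title, url in episode_cover_urls(rss.decode()).items():
        safe = "".join(c if c.isalnum() else "_" for c in title)
        urllib.request.urlretrieve(url, f"covers/{safe}.jpg")
```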

Audio is a huge part of any game. Many successful gamified apps, like Duolingo, invest a lot of effort in sound design because it makes everything feel more alive.
At the same time, searching for the right background music and wiring it into the game usually takes a lot of time. So I went to Claude Code and simply said: “Search for me background music for each phase, with mute control.”
To my surprise, it was able to find OpenGameArt.org, an open source audio library for games, and wire it into the game correctly. When I wrote the prompt, I actually just wanted to add background music for when players are on the map, but it automatically added music for battle screens, victory screens, and defeat screens as well. I still had to adjust the timing and volume, but most of the heavy lifting was done automatically. That part genuinely felt like magic.
Defining the game mechanics was the most interesting part of the process. I wanted the game to be fun and low-stress but still competitive enough that people felt progression and stakes. I’ve studied game theory in the past, but for this project, most of it came down to common sense, playtesting, and iteration.
I started with a very simple rule set: Each opponent has three questions. Every correct answer gives XP (experience points). If you answer all three correctly, that counts as a perfect kill.
To keep things interesting, I added small variations. Occasionally, one of the three questions becomes a bonus question, which gives extra XP and a small HP boost. This introduces a bit of randomness without breaking balance.
Stage progression is based on XP thresholds. Once you reach the required XP, a new map unlocks with a new batch of guests. Defeated opponents disappear and get added to your collection, so you can’t farm the same ones repeatedly.
I worked through most of this logic on my own first and then verified with the AI to make sure there were no obvious bugs or edge cases. The AI sanity-checked numbers and flows, but the final calls on balance, pacing, and stress level were all manual.
The last piece of the game is the leaderboard: the competitive aspect, where players can see their rankings and compete with one another.
I knew I had to set up a database for this, so I started by setting up Supabase MCP in Claude Code. That means instead of manually setting up tables, APIs, and connections, all I had to do was describe to Claude Code that I wanted a leaderboard synced with Supabase.
Once I did that, it triggered Supabase MCP, which called tools like create_project and apply_migration to set up the project and tables automatically, including the database structure and the connection between the game and Supabase. This made the whole process much faster and removed a lot of setup work that would normally take much longer.
The result was a working leaderboard that synced player progress in real time, without my having to touch much backend code at all.
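For a sense of what Supabase MCP set up behind the scenes, here is a minimal hand-written sketch of a leaderboard sync using the supabase-py client. The table name and columns (`leaderboard`, `player`, `xp`) are invented; Ben’s actual schema was generated by the MCP tools and may differ:

```python
def rank_players(rows: list[dict]) -> list[dict]:
    """Order leaderboard rows by XP (highest first) and attach a 1-based rank."""
    ordered = sorted(rows, key=lambda r: r["xp"], reverse=True)
    return [{**r, "rank": i + 1} for i, r in enumerate(ordered)]

def sync_score(client, player: str, xp: int) -> None:
    """Upsert a player's score; `client` is a supabase-py Client."""
    client.table("leaderboard").upsert({"player": player, "xp": xp}).execute()

if __name__ == "__main__":
    import os
    from supabase import create_client  # pip install supabase
    client = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])
    sync_score(client, "ben", 120)
    rows = client.table("leaderboard").select("*").execute().data
    print(rank_players(rows))
```

Keeping the ranking logic client-side like this is the simplest version; a real-time leaderboard would instead subscribe to table changes via Supabase’s realtime API.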
Before shipping, I focused on final polish to make sure the app was stable, usable, and presentable enough for public launch. At this stage, the core gameplay was already working, so the goal was not to add new features but to reduce friction and obvious issues.
For this step, I downloaded the review skill from the Claude Code Awesome Skills marketplace and used it to review the entire codebase comprehensively.
This was especially helpful for catching things I would normally miss, such as state issues between scenes, missing error handling, and small logic bugs that only show up after multiple rounds of gameplay. I did not blindly accept everything it suggested, but it gave me a solid checklist to go through before shipping.
I went through the game end-to-end and logged all UI and UX inconsistencies in a Markdown file—things like spacing issues, text overflow, unclear labels, alignment problems, and visual hierarchy issues.
Once everything was written down, I let the AI work through the list and fix the issues one by one. This worked surprisingly well, especially when the issues were clearly described. It also made the process much more systematic than fixing things ad hoc while clicking around the app.
For SEO, I used Claude Code to help figure out the basics: page title, meta description, social preview, and basic indexing setup.
Since this was a game and not a content-heavy site, I did not go deep into SEO optimization. The main goal was to make sure the site was indexable, shareable, and looked good when people posted it on social media.
Once the game was deployed smoothly on Vercel, I reached out to Lenny in the community Slack to get a quick sanity check. I honestly wasn’t even expecting a direct reply given how busy he is—but to my surprise, I got a very kind and encouraging response from him.
That was the nudge I needed to just ship it.
2026-03-16 23:03:06
Brought to you by Optimizely—Your AI agent orchestration platform for marketing and digital teams
Gui Seiz (designer) and Alex Kern (engineer) from Figma show how to pull a live interface from production, staging, or localhost into Figma, turn it into editable design frames, explore variations collaboratively, and push changes back into code using Claude Code and MCPs—creating a continuous design ↔ code loop.
Listen now on YouTube | Spotify | Apple Podcasts
The design handoff is dead—replaced by continuous sync. Instead of designers creating comprehensive design packages with every state documented, AI enables bidirectional flow between Figma and code. Pull production code into Figma to see what actually exists, make changes in Figma, then push those changes directly back to code. No more version-final-final-v3.
Direct manipulation still beats prompting for precision. While AI can generate designs from prompts, dragging elements around in Figma remains the gold standard for fine-tuning. As Gui notes, “No one wants to prompt for the exact hex code or shade of yellow”—it’s just easier to use the color picker and eyeball it.
Use Figma’s MCP to keep design files current with production. The biggest problem in design-code workflows is drift—production gets ahead of Figma, or Figma contains dreams that never shipped. With MCP, you can programmatically pull any production state into Figma, ensuring that designers always work from what actually exists.
Turn your engineering wiki into executable skills. Every team has that onboarding page: “This is what you do before pushing a PR.” Alex built a /ship skill that automatically runs pre-flight checks, pushes to Git, monitors CI, and even fixes minor lint issues—up to five times with a one-hour timeout. Take every SOP and make it a skill.
Structure your codebase for AI assistance. Alex spends 20% to 30% of his time optimizing code structure so AI can accomplish more with less. This isn’t about writing better code for humans; it’s about making your codebase more legible to AI agents so each prompt delivers better results.
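The retry behavior attributed to the `/ship` skill above (up to five attempts within a one-hour budget) boils down to a small pattern. This toy Python sketch is purely illustrative of that pattern; the command names and the injectable `runner` are inventions for the example, not Figma’s implementation:

```python
import subprocess
import time

def run_with_retries(cmd: list[str], max_attempts: int = 5,
                     budget_seconds: float = 3600.0,
                     runner=subprocess.run) -> bool:
    """Re-run a check up to `max_attempts` times within a time budget,
    mirroring the 'retry up to five times with a one-hour timeout'
    behavior described above. `runner` is injectable for testing."""
    deadline = time.monotonic() + budget_seconds
    for attempt in range(1, max_attempts + 1):
        if time.monotonic() > deadline:
            return False  # out of time budget
        result = runner(cmd)
        if result.returncode == 0:
            return True
        # A real skill would attempt an auto-fix between attempts,
        # e.g. running the linter's --fix mode, before retrying.
    return False
```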
How Figma’s Team Syncs Design and Code with Claude Code and Codex: https://www.chatprd.ai/how-i-ai/how-figma-team-syncs-design-and-code-with-claude-code-and-codex
Automate Your Pre-Merge PR Checklist with a Custom AI `/ship` Skill: https://www.chatprd.ai/how-i-ai/workflows/automate-your-pre-merge-pr-checklist-with-a-custom-ai-ship-skill
Automate Design Documentation by Exporting All Component States from Code to Figma: https://www.chatprd.ai/how-i-ai/workflows/automate-design-documentation-by-exporting-all-component-states-from-code-to-figma
Create a Bidirectional Sync Between Production Code and Figma Designs with AI: https://www.chatprd.ai/how-i-ai/workflows/create-a-bidirectional-sync-between-production-code-and-figma-designs-with-ai
Brought to you by:
Daniel Roth (Editor in Chief and VP of Content Development at LinkedIn) shares how he builds and ships iOS apps to the App Store without writing code. He walks through the workflow he uses with Claude Code—including a dual-agent system where one AI writes code and another reviews it—along with how he plans features, manages development with branches, and turns ideas into working apps.
Listen now on YouTube | Spotify | Apple Podcasts
Create dueling AI agents to build better code. Daniel uses “Bob the Builder” to generate code and “Ray the Reviewer” to critique it for security and architecture issues. This two-agent system creates checks and balances similar to how engineering teams work, with Bob focusing on implementation and Ray ensuring quality. The friction between copying plans between agents also helps Daniel learn more about the code being generated.
Use AI to prevent dropping the ball at work. Daniel’s most valuable AI workflow isn’t for coding—it’s for managing his responsibilities as a leader of 400 people. He ends each day by asking Copilot, “What did I drop the ball on?” The AI scans his emails, Teams messages, and documents to identify unanswered messages and pending tasks. This 30-minute evening routine helps him wrap up loose ends before leaving work.
Build personalized apps that solve your own problems first. Daniel created “Commutely” to solve his specific problem of knowing whether to run for the New York subway. As he explains, “It was a perfect product-market fit because I was the entire product.”
Keep a running feature tracker with AI-powered prioritization. Daniel maintains a Claude chat that tracks all feature ideas with estimated build time and potential impact. His prompt instructs Claude to “keep track of ideas and offer guidance: time estimate to build, estimated back-and-forth hours, potential impact score on 1–3 scales for customer happiness and growth impact.” This creates a prioritized backlog he can pull from whenever he has time to build.
Document everything in Markdown files. Daniel saves all AI conversations as Markdown files, explaining, “Every time I’m working with Claude, I’m saying, ‘Write it into a file. Log everything.’” This solves two problems: Claude’s limited context window and his own memory limitations when returning to projects after breaks. This documentation habit creates a knowledge repository he can reference later.
How I AI: Daniel Roth’s Dueling Agent Workflow for Building iOS Apps: https://www.chatprd.ai/how-i-ai/daniel-roth-dueling-agent-workflow-for-building-ios-apps
Build iOS Apps with a Dueling AI Agent Workflow: https://www.chatprd.ai/how-i-ai/workflows/build-ios-apps-with-a-dueling-ai-agent-workflow
How to Use Claude for AI-Powered Feature Prioritization: https://www.chatprd.ai/how-i-ai/workflows/how-to-use-claude-for-ai-powered-feature-prioritization
How to Use a Simple Copilot Prompt to Never Drop the Ball Again: https://www.chatprd.ai/how-i-ai/workflows/how-to-use-a-simple-copilot-prompt-to-never-drop-the-ball-again
If you’re enjoying these episodes, reply and let me know what you’d love to learn more about: AI workflows, hiring, growth, product strategy—anything.
Catch you next week,
Lenny
P.S. Want every new episode delivered the moment it drops? Hit “Follow” on your favorite podcast app.
2026-03-16 20:04:08
Daniel Roth, editor in chief at LinkedIn, went from business writer to iOS app developer without ever learning to code. Using Claude Code, Daniel built and shipped multiple production-ready iOS apps to the App Store, including Commutely, a personalized train-tracking app for New York commuters.
Listen or watch on YouTube, Spotify, or Apple Podcasts
How to set up a dual-agent Claude Code system (builder + reviewer)
Why being a “picky customer” is the right mindset for non-technical builders
How Daniel prioritizes features using AI-ranked impact vs. build time
Why saving everything as Markdown files creates long-term context
The importance of branch-based development—even when AI writes the code
How Daniel ships to the App Store without formal engineering experience
His end-of-day “What did I drop the ball on?” Copilot workflow
WorkOS—Make your app enterprise-ready today
Vanta—Automate compliance and simplify security
(00:00) Introduction to Daniel Roth
(02:46) Daniel’s AI development workflow overview
(05:56) Using Claude to prioritize feature ideas
(08:58) Building vs. marketing
(09:47) Creating a retention plan for his app
(10:38) Introducing Bob the Builder and Ray the Reviewer
(13:50) How Bob and Ray work together to build features
(14:37) Why Daniel focuses on learning the process
(16:34) The importance of using branches for development
(17:39) Managing AI agents like managing a team
(21:12) Navigating the App Store
(23:06) Being a “picky customer” rather than a PM
(25:00) Testing in Xcode and shipping to the App Store
(28:14) Quick recap
(30:00) Creating terminal aliases with Claude
(31:38) Demo of his Commutely app
(32:10) Using Copilot to manage work responsibilities
(35:05) How Daniel talks to AI without personifying it
• Claude: https://claude.ai/
• Claude Code: https://claude.ai/code
• Cursor: https://cursor.sh/
• Xcode: https://developer.apple.com/xcode/
• Canva: https://www.canva.com/
• Microsoft Copilot: https://copilot.microsoft.com/
• Terminal: https://support.apple.com/guide/terminal/welcome/mac
• Obsidian: https://obsidian.md/
• Commutely (iOS app): https://apps.apple.com/us/app/commutely/id6755789873
LinkedIn: https://www.linkedin.com/in/danielroth1/
Newsletter: https://www.linkedin.com/newsletters/forward-deployed-editor-7378272989982683137/
ChatPRD: https://www.chatprd.ai/
Website: https://clairevo.com/
LinkedIn: https://www.linkedin.com/in/clairevo/
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].