2025-12-12 08:00:00
No, founders are not adopting Bryan Johnson’s regimen to reverse aging. Quite the opposite: the median age of founders raising capital increases by six months every year.1
I suspect founder age has been increasing steadily for three reasons. First, venture capital has shifted toward AI, which grew from roughly 14% of global venture investment in 2020 to 58% in early 2025.2 AI founders skew older. Many AI labs are started by PhDs who spent extended periods in school & often come from industry, commercializing initiatives from major labs or hyperscalers.
Second, the shift toward B2B rewards experience. B2B founders benefit from established relationships with potential team members, design partners & expertise selling to enterprises. These networks take years to build.
Third, press coverage distorts perception. Media tends to spotlight younger founders pursuing product-led growth or consumer strategies. The Cursor team, fresh from MIT, captures the zeitgeist. But there are many founders who grow up within an industry & then go out to upend it.
Perhaps venture capitalists should start funding reverse aging programs. If this trend holds, the typical founder will be a decade older in 20 years.
Methodology: Data collected through systematic web search & aggregation of publicly available founder information. Analysis focuses on median trends to reduce sensitivity to outliers. Time series analysis identifies cyclical patterns in the data. ↩︎
PitchBook Q1 2025 data shows AI captured 58% of global venture capital in Q1 2025, up from roughly 14% in 2020. ↩︎
2025-12-10 08:00:00
Enterprises learned a lesson from cloud data warehouses. They handed over both data & compute, then watched as the most strategic asset in their business, how they operate, became someone else’s leverage. That lesson created the opportunity for Iceberg.
Fool me once…
Leaders have recognized their companies need a new system of record for AI agents in the form of a context database. There are two different kinds of these context databases:
Operational context databases store standard operating procedures & institutional knowledge: when a customer calls about resetting a password, when legal reviews an NDA with a new prospect, when HR answers questions about options vesting for a new hire.
All of these processes represent trade secrets & intellectual property, which are key assets for a business. Capturing them from employees ensures continuity in processes & builds a sustainable, defensible asset.
Analytical context databases are the next evolution of semantic layers: they contain definitions & calculations for metrics like revenue or customer acquisition cost.
Semantic layers told AI what data meant. Analytical context databases teach AI how to reason about it.
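To make the distinction concrete, here is a hypothetical sketch of a single analytical context entry in Python: a semantic-layer-style metric definition extended with the reasoning guidance an agent would consult. Every field name & value is an illustrative assumption, not drawn from any specific product.

```python
# Hypothetical analytical context entry. A semantic layer stops at the
# definition & calculation; a context database adds reasoning guidance.
# All names & values here are illustrative assumptions.
cac = {
    "metric": "customer_acquisition_cost",
    "definition": "Sales & marketing spend divided by new customers acquired",
    "calculation": "SUM(sm_spend) / COUNT(DISTINCT new_customer_id)",
    "reasoning_hints": [
        "Exclude expansion-only customers from the denominator",
        "Compare against lifetime value before flagging CAC as high",
    ],
}
```

The first two fields are what a traditional semantic layer stores; the `reasoning_hints` field is the part that teaches an agent how to reason about the metric rather than merely what it means.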
Steven Talbot’s recent piece on Omni’s agentic analytics architecture describes:
a coordinator mechanism, which decides which tool to use next based on the question, the results, & what’s already been tried.
The key to both operational & analytical context databases isn’t the databases themselves. It’s the feedback loops within them.
Steven’s system adapts mid-flight, retries when things break, or stops when it has something useful to show. This creates an ever-improving cycle of accuracy. Accuracy creates trust. Trust creates adoption. Adoption creates more feedback. Companies that develop the best feedback loops will build the most valuable context databases.
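The coordinator pattern described above can be sketched in a few lines of Python. The control flow (choose the next tool based on what has been tried, retry when a tool breaks, stop once there is something useful to show) follows Steven’s description, but every name here is a hypothetical illustration, not Omni’s implementation.

```python
# Minimal sketch of a coordinator loop for agentic analytics.
# Tool selection here is naive (first untried tool); a real coordinator
# would rank candidates with a model call over the question & results.

def coordinate(question, tools, max_steps=5):
    """Pick the next tool based on the question, prior results, & what
    has already been tried; stop once something useful exists."""
    tried, results = [], []
    for _ in range(max_steps):
        candidates = [t for t in tools if t["name"] not in tried]
        if not candidates:
            break
        tool = candidates[0]
        tried.append(tool["name"])
        try:
            result = tool["run"](question)
        except Exception:
            continue  # a tool broke: adapt by moving to the next one
        results.append(result)
        if result.get("useful"):
            break  # stop when there is something useful to show
    return results
```

The feedback loop lives outside this function: each run’s successes & failures become training signal for the next round of tool selection, which is what compounds accuracy over time.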
Context databases enable the future of process automation, representing the real promise of AI within the workforce. It’s the evolution of RPA (robotic process automation), but it’s RPA & process discovery injected with non-determinism.
This non-determinism is essential for the success of AI agents. It allows for exception handling, forestalling one of the failure modes of the first generation of RPA. AI agents are excellent at ingesting large volumes of content & reasoning about them.
The move from manual context engineering to automated context platforms is inevitable. Context databases will be sold as standalone products & bundled. Enterprises will come out of this transformation for the better : with evolving systems that improve over time.
2025-12-09 08:00:00
A new study from OpenAI shows AI saves the average white-collar worker 54 minutes per day. Where does all that value go?
BLS data shows median weekly earnings for full-time workers hit $1,165 in Q3 2024, or $60,580 annually. Across 2,080 working hours, hourly compensation equals $29.13. Rounding the 54 minutes saved up to one hour per day across 250 working days yields 250 hours per year, or $7,282 in recovered productivity per seat.
Current AI pricing captures between 3% & 7% of this value:
| Platform | Price/Year | Value Captured |
|---|---|---|
| ChatGPT Plus | $240 | 3% |
| Microsoft 365 Copilot | $360 | 5% |
| Google Workspace AI | $288 | 4% |
| Gamma (presentations) | $480 | 7% |
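The arithmetic behind the table can be reproduced in a few lines of Python:

```python
# Reproduce the back-of-envelope value-capture math from the post.
weekly_earnings = 1_165                  # BLS median weekly earnings, Q3 2024
annual_earnings = weekly_earnings * 52   # $60,580
hourly = annual_earnings / 2_080         # $29.125, reported as $29.13
value_per_seat = hourly * 250            # ~1 hour/day over 250 workdays,
                                         # ~$7,281 ($7,282 with rounded hourly)

prices = {
    "ChatGPT Plus": 240,
    "Microsoft 365 Copilot": 360,
    "Google Workspace AI": 288,
    "Gamma": 480,
}
# Share of recovered productivity each platform captures, in percent
capture = {name: round(100 * price / value_per_seat)
           for name, price in prices.items()}
```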
Typically, vendors capture 10-15% of the value they create, leaving the remaining 85-90% with employers & employees.
This gap suggests significant pricing power remains untapped. Microsoft announced price increases last week, with Microsoft 365 E3 rising from $36 to $39 per user monthly starting July 2026, an 8.3% increase closer to a cost-of-living adjustment than to a value-based price increase.
Bundling complicates the picture.
Microsoft 365 Copilot & Google Workspace AI include access to email, spreadsheets, & presentation software. ChatGPT Plus does not. Will enterprises pay for standalone AI applications on top of their existing Copilot or Workspace subscriptions? Must standalone AI tools also bundle these features?
Unbundling is already happening in some categories too:
Gamma, a $2.1B unicorn building AI presentation software, charges $480 per seat annually, 33% more than Microsoft 365 Copilot, which already includes PowerPoint with AI features.
Gamma has 600,000+ paying subscribers, suggesting a market exists for best-in-class vertical tools even when bundled alternatives exist.
In the SaaS ecosystem, bundling & unbundling were both motifs. AI doesn’t seem any different, aside from the significant productivity gains that create room for both strategies to win.
Data sources: OpenAI productivity study (Dec 2025), Microsoft/Google public pricing, Bureau of Labor Statistics Q3 2024 median weekly earnings
2025-12-08 08:00:00
Streaming is the next category to consolidate within the modern data stack. IBM announced its intent to acquire Confluent.
The deal values Confluent at $11.1 billion, or 10.0x LTM revenue. Confluent commands more than 40% of the Fortune 500 as customers & has grown into a $1.1 billion revenue business.
The founders of Confluent, including CEO Jay Kreps, created Apache Kafka, a streaming technology built inside LinkedIn. Founded in 2014, Confluent commercializes Kafka, which now runs at more than 80% of the Fortune 100. Kafka powers real-time data pipelines & stream processing, updating data systems whenever a new event happens. When a taxi ride is booked, a credit card is swiped, or a user likes a comment, Kafka handles the data flow.
Confluent continues to grow nicely, with recent quarterly revenue of $298.5 million, up 19.3% year over year. Its gross margin of 74.1% is typical for software companies. Its operating margins remain negative, roughly -27%, reflecting heavy spending on sales & marketing.
The company’s sales efficiency sits at 0.38x: Q3’s marginal gross profit of $13.5M, annualized to $54M, divided by Q2’s selling & marketing expenses of $143.6M. For every dollar spent on S&M in one quarter, the company generates 38 cents of annualized incremental gross profit the following quarter.
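That calculation, spelled out:

```python
# Sales efficiency as defined in the post: next quarter's marginal gross
# profit, annualized, divided by the prior quarter's S&M spend.
q3_marginal_gross_profit = 13.5    # $M, incremental gross profit in Q3
q2_sales_and_marketing = 143.6     # $M, selling & marketing spend in Q2
sales_efficiency = (q3_marginal_gross_profit * 4) / q2_sales_and_marketing
# 54.0 / 143.6 ≈ 0.38x
```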
New customer acquisition reinforces the challenge.
| Price Point | Total Customers | Net New Q3 | % of Total |
|---|---|---|---|
| $20K+ ARR | 2,533 | +36 | 100% |
| $100K+ ARR | 1,487 | +48 | 59% |
| $1M+ ARR | 234 | +15 | 9% |
Confluent added 36 net new $20K+ customers in Q3. The $100K+ cohort represents the largest sequential increase in 2 years.
Growth comes primarily from expansion: net revenue retention sits at 114%, meaning existing customers increase spend by 14% annually. Gross retention hovers near 90%. The $100K+ customers account for more than 90% of ARR. Confluent’s challenge isn’t keeping customers. It’s acquiring new ones.
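A quick sketch of what 114% net revenue retention implies: even with zero new customer acquisition, an existing cohort’s revenue compounds year over year.

```python
# What 114% net revenue retention implies for an existing cohort:
# $100 of ARR today compounds without any new customer acquisition.
nrr = 1.14
cohort_arr = [round(100 * nrr ** year) for year in range(4)]
# years 0 through 3: [100, 114, 130, 148]
```

This is why expansion-driven businesses can keep growing for years despite weak new-logo adds, & also why the growth rate decays toward the NRR figure once new acquisition stalls.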
The most compelling trend: a 10-percentage-point YoY improvement in operating margins, suggesting a path to profitability within 12-18 months.
Is Confluent a bargain or appropriately priced?
The bull case points to category-defining technology in Apache Kafka, roughly 19% growth with improving margins, a clear path to profitability & critical infrastructure for AI & real-time applications. The bear case notes the company remains unprofitable, faces competitive threats from AWS Kinesis & Azure Event Hubs, competes with open-source Kafka alternatives & carries IBM integration execution risk.
| Company | Buyer / Market | Year | Revenue Growth | Revenue Multiple |
|---|---|---|---|---|
| Snowflake | Public | 2025 | 29% | 17.8x |
| MongoDB | Public | 2025 | 21% | 14.4x |
| Tableau | Salesforce | 2019 | 15% | 11.7x |
| Confluent | IBM | 2025 | 19% | 10.0x |
| Splunk | Cisco | 2023 | 16% | 5.3x |
A major question for Confluent over the last few quarters has been: what is the next adjacent product category to catapult another wave of growth? The AI surge has been a positive for the company.
Within IBM, Confluent may find its technology complementary to the raft of AI technologies demanded by large enterprises.
2025-12-05 08:00:00
Ever since I watched software engineers using AI, I’ve been jealous.
I’ve seen the most sophisticated software engineers assign 15 to 20 coding tasks in GitHub to an AI. They play foosball. They grab coffee. They return to evaluate the agent’s work.
The agent tackles the same task three different ways. Sometimes it nails the solution on the first try. Other times it needs more input. That engineer has parallelized her time by 10x to 15x.
Why can’t a business person do the same?
I read Twitter threads about developments in AI & have questions, but I don’t have time to delve deeper in the moment. I often pop out of meetings with five tasks to process, & I’d love to dictate my thoughts. I have an idea for a vibe-coded experiment to run internally.
So I hooked up the Gemini command line to my Asana. Now I can bark out orders like Gunnery Sergeant Hartman & have a team of agents respond in minutes, accessing all the tools I’ve built.
This is different from accessing AI in the browser. The agent has access to my systems: Asana, email, calendar, CRM. It can read my files, pull data from my tools, & act on my behalf. I’m not copying & pasting between browser tabs. The agent works within my environment.
I’ve gone from producing 10 to 31 tasks per day. Agents are working on my behalf while I am in meetings.
I had an idea this morning : run a statistical analysis on my Asana usage. An agent wrote a piece of code to hit the Asana API & save the data into a particular folder. An R script generated these two charts & many others. I ran a collection of different statistical analyses. We went back & forth in the Asana comments on which visualizations were the right ones. These aren’t simple tasks; many of them have 30 or 40 comments, representing iterations on the original idea.
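For illustration, here is a hedged sketch of the kind of script such an agent might write: pull tasks from the Asana REST API & tally them. The endpoint mentioned in the comment is from Asana’s public API, but the example runs offline against a sample payload; none of the IDs or values are the actual ones from my workspace.

```python
# Hedged sketch of summarizing Asana task data. The GIDs below are
# illustrative placeholders, not real project or task identifiers.
from collections import Counter

def summarize_tasks(payload: dict) -> Counter:
    """Count tasks by completion status from an Asana-style /tasks response."""
    counts = Counter()
    for task in payload.get("data", []):
        counts["completed" if task.get("completed") else "open"] += 1
    return counts

# Offline example with an Asana-shaped payload; a live version would call
# GET https://app.asana.com/api/1.0/projects/{gid}/tasks?opt_fields=completed
# with a personal access token in the Authorization header.
sample = {"data": [{"gid": "1", "completed": True},
                   {"gid": "2", "completed": False},
                   {"gid": "3", "completed": True}]}
print(summarize_tasks(sample))  # Counter({'completed': 2, 'open': 1})
```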
Then I created an outline in the SCQA format. Within five minutes, this post had been written. Another agent graded the post & ensured it matched my style.
This is the power of AI: parallel work while we’re passive (I was at the gym at the time).
When we give agents the right tools, the velocity of work changes instantly. Business people can now do what software engineers discovered months ago. We can spin up 15 parallel workstreams. We can evaluate results over coffee. We can 3x our daily output without working longer hours.
The Agent Inflection Point isn’t coming. It’s here.
We will be open-sourcing this next week.
2025-12-03 08:00:00
While OpenAI signed $1.15 trillion in compute contracts through 2035, DeepSeek trained a frontier model for $6 million. This was 2025’s central question: are we building on bedrock or quicksand?
The top 10 posts of 2025 examined some of these topics:
Are we in a bubble echoing the telecom crash, or building the next internet? Do traditional exit paths still work when secondaries dominate & IPOs vanish? How do you design tools when the user is AI, not human? 2025 forced a reckoning with reality.
How AI Tools Differ from Human Tools: I consolidated my 100+ AI tools into unified, parameter-rich interfaces based on Anthropic’s research. The counterintuitive finding: AI systems need complex tools with complete context, while humans need simple, chunked interfaces. Claude’s success rate approached 100% after the redesign.
Back to Text: How AI Might Reverse Web Design: I watched an open-source agent book flights by navigating airline websites, extracting data from visual chaos. If AI thrives on pure text, the future of the web might look exactly like it started: simple text, but for robots instead of humans. The better AI performs, the fewer websites we’ll visit.
Circular Financing: Does Nvidia’s $110B Bet Echo the Telecom Bubble?: Nvidia’s vendor financing totals $110B in direct investments plus $15B+ in GPU-backed debt, 2.8x larger relative to revenue than Lucent’s exposure in 2000. But unlike the telecom bubble, Nvidia’s top customers generated $451B in operating cash flow in 2024. The merry-go-round has paying riders.
The Scaling Wall Was A Mirage: Gemini 3 launched with the same parameter count as Gemini 2.5 but achieved massive performance improvements, breaking 1500 Elo on LMArena. Oriol Vinyals credited improving pre-training & post-training, with no walls in sight. Nvidia’s earnings confirmed Blackwell Ultra delivers 5x faster training than Hopper, translating scaling into capability.
The Complete Guide to SaaS Pricing Strategy: The only three pricing strategies that matter: Maximization (revenue growth), Penetration (market share) & Skimming (profit maximization). Usage-based pricing experienced 29% longer sales cycles in 2023, but companies like Twilio achieved 130%+ net dollar retention by deliberately underselling initial contracts & expanding naturally.
The Great Liquidity Shift: 71% of exit dollars in 2024 came from secondaries, not IPOs or M&A. With target ARR for IPO growing from $80M in 2008 to $250M today, secondaries have become a permanent fixture in venture capital markets. Venture is evolving to look more like private equity.
2025 Predictions: I predicted the IPO market would rip, voice would become a dominant AI interface & the first $100M ARR company with 30 or fewer employees would emerge. Data center spending by hyperscalers would eclipse $125B, stablecoin supply would hit $300B & consolidation would define the Modern Data Stack.
OpenAI’s $1 Trillion Infrastructure Spend: OpenAI committed $1.15T in infrastructure spending from 2025-2035 across Broadcom ($350B), Oracle ($300B), Microsoft ($250B), Nvidia ($100B), AMD ($90B), Amazon AWS ($38B) & CoreWeave ($22B). At OpenAI’s projected 70% gross margins, this implies nearly $1T in revenue by 2030.
The AI Cost Curve Just Collapsed Again: DeepSeek released two breakthroughs: V3 slashing training costs by 90%+ & R1 delivering top-tier performance at 1/40th the cost. The innovation? Simply asking AI to show its work. The net: powerful smaller models with 25-40x reduction in price, plus explainability that could satisfy GDPR & enterprise auditability requirements.
The Mirage in the Software Clouds: Public SaaS growth rates have halved from 36% to 17% since 2023. It’s not that software spending is slowing; it’s that high-growth companies aren’t going public. The IPO drought since 2022 means public SaaS analyses no longer reflect the true state of software.
I’m grateful for your readership & engagement throughout 2025. Thank you.