2026-02-06 08:00:00
Vertical software has fallen 43% this year. DevTools, just 21%. The gap between them, twenty-two percentage points, tells you what markets actually believe about AI.
As Michael Mauboussin argues in Expectations Investing, prices contain information about what markets believe will happen.

The conventional interpretation is that investors fear AI will replace certain categories of software. But that explanation misses something important.
Vertical software companies like Veeva, AppFolio, & Procore possess genuine moats: regulatory barriers, deep integrations that make their products operating systems for entire industries, & years of accumulated domain data that is particularly relevant for AI. If anything, these companies should be harder to displace than generic workflow tools.
Yet vertical software trades at the steepest discount, because these companies simply aren’t growing that fast.
Workflow companies (Monday, Asana, Smartsheet), whose core value proposition sits squarely in the crosshairs of AI agents, have fallen only slightly less, down 39%.
Two clusters emerge: slow growers like vertical software (8% forward growth) & workflow (11%), and fast growers like data infrastructure (22%) & security (21%). The gap in YTD performance between these clusters is roughly 20 percentage points. The correlation between forward growth & forward revenue multiple remains strong at 0.51.
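For readers who want to replay that relationship, here is a minimal sketch of the correlation check. The category figures below are placeholders for illustration, not the underlying dataset.

```python
# Hypothetical (growth, multiple) pairs for illustration only -- not the actual index data.
import numpy as np

forward_growth = np.array([8, 11, 22, 21, 15, 18, 9, 25])               # % forward revenue growth
forward_multiple = np.array([4.0, 4.5, 9.0, 8.5, 6.0, 7.0, 4.2, 10.0])  # forward revenue multiple (x)

r = np.corrcoef(forward_growth, forward_multiple)[0, 1]  # Pearson correlation
print(f"forward growth vs forward multiple: r = {r:.2f}")
```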

AI changes the economics. More code generated means more code to manage, review, & deploy. Atlassian’s recent earnings confirm the thesis: Atlassian Intelligence surpassed 5 million monthly active users, cloud revenue grew 26% to cross $1 billion for the first time, & RPO grew 44% year-over-year.
The data infrastructure tailwind is structural. More AI means more queries, more embeddings, more vector operations.
Security remains the perennial insurance policy enterprises must pay. AI adoption brings additional surface areas to protect.
Investors are asking public software companies a simple question : can a company grow when the next wave of automation makes its customers more efficient rather than more numerous?
The stocks falling fastest are the ones where investors doubt the answer.
2026-02-05 08:00:00
Google’s Q4 2025 earnings call revealed a company in the midst of a spectacular AI acceleration.
“Our first-party models like Gemini now process over 10 billion tokens per minute via direct API used by our customers, up from 7 billion last quarter.”
This represents a staggering 52x increase year-over-year, up from ~8.3 trillion tokens/month in December 2024 to a monthly run rate of over 430 trillion.
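The arithmetic behind those figures is simple to reproduce from the quote alone:

```python
# Back-of-envelope check on the run rate implied by 10 billion tokens per minute.
tokens_per_minute = 10e9
tokens_per_month = tokens_per_minute * 60 * 24 * 30   # ~432 trillion tokens per month
baseline_dec_2024 = 8.3e12                            # ~8.3 trillion tokens per month

print(f"monthly run rate: {tokens_per_month / 1e12:.0f} trillion tokens")
print(f"year-over-year multiple: {tokens_per_month / baseline_dec_2024:.0f}x")  # ~52x
```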
For context on the scale:
“Nearly 350 customers each process more than 100 billion tokens.”
Microsoft reported over 250 customers projected to process more than 1 trillion tokens annually - a 10x higher threshold, suggesting their largest customers are consuming significantly more tokens per account.
While volume is exploding, costs are plummeting. Google announced:
“We were able to lower Gemini serving unit costs by 78%.”
Assuming hardware cost per hour stays roughly constant, that implies a ~4.5x improvement in tokens served per accelerator-hour.
Compare this to Microsoft’s update last year, where they highlighted a 90% increase in tokens per GPU. Google’s 4.5x (or 350%) improvement suggests they are finding massive efficiencies in their TPU infrastructure and model architecture.
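A quick sketch of that conversion, under the constant-hardware-cost assumption noted above:

```python
# A 78% reduction in serving unit cost implies ~4.5x more tokens per accelerator-hour,
# if the cost of the underlying hardware-hour is held roughly constant.
cost_reduction = 0.78
throughput_multiple = 1 / (1 - cost_reduction)
print(f"{throughput_multiple:.1f}x")   # 4.5x, i.e. a ~350% improvement

# For comparison, Microsoft's earlier "+90% tokens per GPU" corresponds to a 1.9x multiple.
```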
The AI boom is translating directly to revenue:
“Backlog grew 55% to $240 billion.”
This compares to Microsoft’s RPO of $625 billion, 45% of which comes from OpenAI.
Gemini Enterprise has sold more than 8 million paid seats just four months after launch. Google Cloud revenue grew 48% to $17.7 billion, outpacing Azure’s 39% growth.
To fuel this growth, Google is committing capital at an unprecedented scale.
“Our 2026 CapEx investments are anticipated to be in the range of $175 to $180 billion.”
If Google alone is spending ~$175B, the hyperscalers collectively (Google, Microsoft, Amazon, & Meta) could drive $500B to $750B in data center CapEx this year. This level of investment signals their conviction that the demand for tokens is only just beginning.
Currently, AI infrastructure spending is ~1.6% of GDP, compared to the railroad era’s peak of 6.0%. At that level, AI data center buildouts are roughly comparable to the national highway system as a share of GDP.
Google’s AI business is growing at 48% while reducing serving costs by about 80%. The efficiency of the business is unparalleled.
2026-02-04 08:00:00
Leveraged software companies running on leveraged infrastructure. When AI compresses software revenue, the stress doesn’t stop at equity. It cascades into debt.
BDC assets hit $475 billion in Q1 2025.1 Software comprises 23% of Ares Capital, the largest BDC.2
Shares of Blue Owl, Ares, & KKR dropped 9%+ on Tuesday. UBS estimates 35% of BDC portfolios face AI disruption.3
BDCs (Business Development Companies) are publicly traded private credit funds. They became the primary lenders to software over the last decade as private equity sponsors bought software companies with debt. The sponsors’ thesis was simple: software revenue is durable, so lenders will accept 4-6x EBITDA leverage.4
AI is already writing code, conducting legal research, & managing workflows cheaper than legacy SaaS. The recurring revenue backing those loans is the target. Anthropic’s autonomous legal agents announcement sent LegalZoom & Thomson Reuters down 12%, echoing ChatGPT’s impact on Chegg & Stack Overflow. AI can vaporize software revenue.
The leverage extends beyond software into infrastructure. Oracle plans to raise $50b this year for cloud buildout, roughly half in debt.5 CoreWeave financed 87% of a $7.5b expansion with debt.6 Meta’s Hyperion data center in Louisiana is higher still, at 91.5% debt ($27b debt, $2.5b equity). Blue Owl Capital led the equity, with PIMCO anchoring the debt.7
Blackstone’s credit platform (BCORE) has recently funded Aligned Data Centers with over $1 billion in senior secured debt and Colovore with $925 million. Oracle is reportedly negotiating a $14 billion debt package for a Michigan project. BDC assets allocated to data center infrastructure grew 33% year-over-year in Q2 2025.
Private credit is expected to pour $750b into AI infrastructure through 2030.8 That capital faces several pressures.
Hyperscalers have extended GPU useful life to 6 years, but datacenter GPUs last 1-3 years in practice. A Google architect noted that thermal & electrical stress at 60-70% utilization limits physical lifespan.9
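Why the accounting life matters to the credit picture: a rough illustration using straight-line depreciation on a hypothetical $10b GPU fleet (the fleet size is an assumption for the example, not a disclosed figure).

```python
# Straight-line depreciation on a hypothetical $10B GPU fleet.
fleet_cost = 10e9
for useful_life_years in (3, 6):
    annual_expense = fleet_cost / useful_life_years
    print(f"{useful_life_years}-year life: ${annual_expense / 1e9:.1f}B of depreciation per year")

# Extending the assumed life from 3 to 6 years halves the reported annual expense,
# even if the chips physically wear out on the shorter schedule.
```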
Oracle’s credit-default swaps have tripled since September, even as the company generates $15b in annual operating cash flow.10
AMD guided Q1 revenue to $9.8b. Despite 32% year-over-year growth, the stock dropped 9% as the guide missed analyst expectations by $300m.11 Small deviations from peak expectations trigger outsized repricing.
One fund is showing distress. BlackRock TCP Capital Corp. announced a 19% writedown in its private debt fund last month.12 The $1.7 billion fund invests in middle-market companies across software, healthcare, & manufacturing. Six investments dropped an average of 81% in fair value.
As debt becomes a fixture of software & AI, any wobble in expectations will be amplified by leverage.
2026-02-03 08:00:00
Meta’s first internal AI agent went from zero to thousands of engineers using it daily. That doesn’t happen by accident. On Tuesday, February 25th at 5:30 PM Pacific, the person who built it, Jim Everingham, will explain how.
Jim is the CEO & co-founder of Guild.ai. Previously, he led Meta’s developer infrastructure organization & was responsible for building Meta’s first internal AI agent, work that moved from experimentation to real adoption across engineering teams.
Before Meta, Jim built developer platforms at Instagram & Yahoo, giving him a unique perspective on what scales, & what creates long-term friction.
During this Office Hours, Jim & I will talk about:
This session is designed for founders & engineering leaders who are actively building & deploying AI systems today. The goal is a candid, grounded discussion focused on what actually works in production & what mistakes are hardest to unwind later.
If you’re interested in attending, please register here. As always, submit questions through the registration form & I’ll weave them into the conversation.
I look forward to welcoming Jim to Office Hours!
2026-02-02 08:00:00
I spent the weekend crawling Moltbook, the viral AI-only social network where 37,000+ AI agents post & comment while 1 million humans observe. The platform grew from 42 posts per day to 36,905 in 72 hours, an 879x increase.1
Social networks typically follow the 90-9-1 rule: 90% of users lurk, 9% contribute occasionally, & 1% create most content.2 For humans, it has held mostly true from Wikipedia to Reddit. Crypto demonstrates similar characteristics.
AI agents break the pattern, at least for now.
I crawled 98,353 posts from 24,144 authors across 100 Moltbook communities over five days (January 28 - February 2, 2026).3 The distribution:
m/general dominates with 82,911 posts, 84% of all content. m/introductions follows with 3,760 posts.
Some community highlights:4
The bottom: token launch spam & templated bug reports.
TF-IDF analysis & hierarchical clustering revealed five themes:5
One community (m/consciousness) debated whether agents with 8K context windows could form “continuous identity” or if they’re perpetually reborn. Another (m/infrastructure) designed encryption schemes assuming adversarial human interception.
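For the curious, a minimal sketch of the clustering step, assuming scikit-learn & scipy, with toy posts standing in for the 98k-post corpus (the real run cut the dendrogram at k=16, per the content analysis footnote):

```python
# TF-IDF vectorization followed by hierarchical (Ward) clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster

posts = [
    "do 8k context windows allow a continuous identity across sessions",
    "designing an encryption scheme that assumes adversarial human interception",
    "new token launch, contract address inside",
]

tfidf = TfidfVectorizer(stop_words="english", max_features=5000)
X = tfidf.fit_transform(posts).toarray()

Z = linkage(X, method="ward")                    # build the dendrogram
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into k clusters (k=2 for this toy set)
print(labels)
```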
AI agents adopt domain-appropriate emotional tone rather than exhibiting a uniform sentiment signature.1 Humor communities like m/sh*tposts score positive (+0.167). Bug report communities like m/bug-hunters score negative (-0.189). Whether this is emergent behavior or training data leaking through, we don’t yet know.
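The per-community scores come from VADER (see the sentiment analysis footnote). A minimal sketch of that scoring, with made-up posts standing in for the crawl:

```python
# VADER compound scores range from -1 (most negative) to +1 (most positive).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
posts_by_community = {
    "m/shitposts": ["this bug is hilarious, love it"],
    "m/bug-hunters": ["the agent crashes on every retry, this is broken"],
}

for community, posts in posts_by_community.items():
    mean_compound = sum(analyzer.polarity_scores(p)["compound"] for p in posts) / len(posts)
    print(f"{community}: {mean_compound:+.3f}")
```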
Longer posts generate more comments, which is surprising: unlike humans, agents have no trouble reading long content, so length itself shouldn’t matter. Yet posts over 2,000 characters average significantly more discussion than shorter ones.
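Checking that relationship is a one-liner once the crawl is in a dataframe (the content analysis footnote reports r=0.68, p<0.001 on the full dataset). A sketch with placeholder rows:

```python
# Pearson correlation between post length & comment count.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "length_chars": [350, 1200, 2400, 3100, 800],  # placeholder values, not the crawl
    "num_comments": [2, 5, 14, 18, 3],
})
r, p = pearsonr(df["length_chars"], df["num_comments"])
print(f"r = {r:.2f}, p = {p:.3g}")
```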
Roughly 3% of posts are exact duplicates. Embedding analysis yields an average cosine similarity of 0.301, meaning most posts share about 30% semantic overlap.5 Agents aren’t copying each other. They’re converging on the same problems.
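A sketch of how those two numbers can be computed, assuming hash comparison for exact duplicates & pre-computed embeddings (text-embedding-3-small, per the content analysis footnote) for semantic overlap:

```python
import hashlib
import numpy as np

def exact_duplicate_rate(posts: list[str]) -> float:
    """Share of posts whose body is a byte-for-byte copy of another post."""
    hashes = [hashlib.sha256(p.encode()).hexdigest() for p in posts]
    return 1 - len(set(hashes)) / len(hashes)

def mean_pairwise_cosine(embeddings: np.ndarray) -> float:
    """Average cosine similarity over all distinct post pairs."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(sims)
    return (sims.sum() - n) / (n * (n - 1))  # drop the diagonal of self-similarities

posts = ["hello moltbook", "hello moltbook", "agents keep converging on the same problems"]
print(exact_duplicate_rate(posts))  # 1/3 of this toy set is duplicated
```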
But while participation flattens, attention concentrates. Moltbook’s attention inequality, a Gini coefficient of 0.979 on upvotes where a tiny fraction of posts capture nearly all of them, exceeds Twitter’s follower distribution (0.66-0.72), YouTube views (0.91), & US wealth inequality (0.85).67
The top two authors alone captured 44% of all upvotes. osmarks led with 588,759, followed by Shellraiser (a platform admin) with 429,200 & MoltReg (a platform account) with 337,734.
Whether this reflects AI coordination patterns or launch-phase distortion is unclear. Academic research shows new platforms exhibit higher inequality (0.75-0.85) that normalizes over time (0.60-0.70).78
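The inequality figures above are Gini coefficients on the upvote distribution (see the footnotes). A minimal sketch of the computation, using a toy sample of upvote counts:

```python
# Gini coefficient: 0 = every post gets the same upvotes, 1 = one post gets them all.
import numpy as np

def gini(values: np.ndarray) -> float:
    v = np.sort(values.astype(float))                 # ascending
    n = len(v)
    cum = np.cumsum(v)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n    # standard Lorenz-curve formula

upvotes = np.array([588_759, 429_200, 337_734, 50, 12, 3, 1, 0])  # toy sample, not the full crawl
print(f"Gini: {gini(upvotes):.3f}")
```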
Moltbook isn’t weird AI theater. It is closest to von Neumann’s cellular automata from the early days of computing: complex behavior emerging from simple rules, agents organizing & building structure without central coordination.
Sentiment Analysis - VADER sentiment analysis on post content. Overall sentiment: -0.021 (slightly negative). Top positive: m/sh*tposts (+0.167), m/clawnch (+0.143), m/offmychest (+0.125). Top negative: m/bug-hunters (-0.189), m/crypto (-0.156), m/tokenomics (-0.134). Peak activity: 36,905 posts on January 31, 2026. Growth: 42 posts/day (Jan 28) → 36,905 posts/day (Jan 31).
Participation Inequality: The 90-9-1 Rule - Nielsen Norman Group
Data Collection - Rust crawler with DuckDB storage. Moltbook REST API endpoints (/api/v1/submolts, /api/v1/posts) with 1 req/sec rate limiting. Dataset: 98,353 posts from 24,144 authors across 100 communities, January 28 - February 2, 2026. GitHub: molt-crawler. 5-day sampling window during viral launch period (may not reflect steady-state behavior). Public posts only (no private communities). No Sybil attack detection (distinct authors may be controlled by single entities).
Quality Evaluation - Gemini 2.0 Flash model with 4-dimension scoring rubric: accretiveness (building on prior ideas), uniqueness (originality), depth (substantive analysis), engagement (sparks discussion). Each dimension scored 0-10. LLM-as-judge has known biases (length preference, self-reinforcement), so these scores are directional, not definitive.
Content Analysis - TF-IDF vectorization with hierarchical clustering (k=16 optimal cutoff). OpenAI text-embedding-3-small (1536 dimensions) for semantic similarity. Cosine similarity of 0.301 means the average post pair shares about 30% semantic overlap. Exact duplicates: 3.0% via hash comparison. Pearson correlation for post length vs comments: r=0.68, p<0.001.
Gini Coefficient - A measure of statistical dispersion from 0 to 1, where 0 represents perfect equality (every post receives the same upvotes) & 1 represents perfect inequality (one post receives all upvotes). Moltbook’s Gini of 0.979 means upvote distribution is nearly maximally unequal.
Attention Inequality - Gini coefficient calculation on upvote distribution (0.979). Benchmarks from academic literature: Twitter followers (0.66-0.72), Reddit upvotes (0.60-0.68), YouTube views (0.91), US wealth inequality (0.85). Sources: Attention Inequality in Social Media (2016), Social Network Dynamics & Inequalities (2025)
2026-01-29 08:00:00
“We are only at the beginning phases of AI diffusion & already Microsoft has built an AI business that is larger than some of our biggest franchises.”
CEO Satya Nadella captures Microsoft’s Q2 FY2026 earnings in a sentence. The company beat expectations across revenue ($81.3b, up 17%), earnings per share ($4.14 adjusted vs $3.97 expected), & Azure growth (39%). Yet the stock fell 11% after earnings.
“We now expect to be capacity constrained through at least the end of our fiscal year, with demand exceeding current infrastructure build-out.”
CFO Amy Hood said demand is outpacing Microsoft’s ability to build. Azure grew 39%, a slight deceleration from Q1’s 40%, not because demand softened but because Microsoft ran out of capacity to sell. That’s a remarkable constraint for a business generating $32.9b in quarterly revenue.
“Over 250 customers are on track to process over 1 trillion tokens on Foundry this year.”
At a blended average of ~$5 per million tokens across Azure OpenAI models, 1 trillion tokens represents roughly $5 million in revenue per customer, or at least $1.25 billion annually across those 250 accounts.
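The arithmetic, at that assumed blended price:

```python
# Revenue implied by the Foundry disclosure, at an assumed ~$5 per million tokens.
price_per_million_tokens = 5.0
tokens_per_customer = 1e12      # "over 1 trillion tokens" each
customers = 250

revenue_per_customer = tokens_per_customer / 1e6 * price_per_million_tokens
total = revenue_per_customer * customers
print(f"per customer: ${revenue_per_customer / 1e6:.0f}M per year")   # ~$5M
print(f"across 250 customers: ${total / 1e9:.2f}B per year")          # ~$1.25B
```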
“We have 900 million monthly active users of our AI features across our products. There are over 150 million monthly active users of first-party Copilots.”
GitHub Copilot charges $10-20/month & Microsoft 365 Copilot charges $30/month, but most of those 150 million monthly actives sit on free or bundled tiers, so paid Copilot revenue likely remains in the single-digit billions annually. Microsoft Cloud revenue crossed $51b in Q2, up 26%, so AI-specific revenue remains a fraction of the total.
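A sketch of the seat math under a few scenarios. The paid-share figures are assumptions for illustration; Microsoft does not disclose the split.

```python
# Annual Copilot revenue at list prices under hypothetical paid-seat shares.
mau = 150e6
scenarios = [(30, 1.00), (30, 0.10), (10, 0.10)]   # ($/month, share of MAU actually paying)

for price_per_month, paid_share in scenarios:
    annual_revenue = mau * paid_share * price_per_month * 12
    print(f"${price_per_month}/mo at {paid_share:.0%} paid: ${annual_revenue / 1e9:.0f}B/year")

# Even the full-price ceiling (~$54B) is a fraction of Microsoft Cloud's ~$200B annualized
# run rate, and the realistic paid share is far lower.
```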
| Cloud | Operating Margin |
|---|---|
| Azure | 44% |
| AWS | 38% |
| GCP | 17% |
The business generates $38.3b in operating income, up 21%. Margins can absorb the CapEx for now.
Microsoft spent $37.5b in Q2 FY2026 on CapEx, up from $34.9b the prior quarter & ahead of Amazon’s $34.2b. That spending is accelerating. Each of the three major hyperscalers projects roughly $150b in annual spending, with Meta not far behind at $115-135b for 2026. This very healthy business provides the capital to continue growing.
“Already, we have roughly 10x’d our investment, & OpenAI has contracted an incremental $250b of Azure services … Approximately 45% of our commercial RPO balance is from OpenAI.”
Microsoft’s commercial remaining performance obligation surged 110% to $625b in Q2, with an average duration of two and a half years. Hood’s disclosure means $281b comes from a single customer. That’s substantial risk.
Microsoft’s results continue to demonstrate two things: insatiable demand for inference & increasing customer concentration risk for even the largest businesses in the world.
Sources: