2026-02-19 08:00:00
For the first time in venture history, three distinct channels (IPOs, M&A, & secondaries) share the liquidity burden roughly equally.
A decade ago, secondaries barely registered. They accounted for roughly 3% of exit value in 2015. Today they claim 31% : nearly $95b in the trailing twelve months.
The shift accelerated after 2021’s IPO bonanza. When public markets closed their doors in 2022, investors found alternative routes. Secondaries absorbed demand that would have flowed to traditional exits. When Goldman Sachs acquired Industry Ventures, the transaction signaled that secondaries had arrived. Morgan Stanley followed with EquityZen, then Charles Schwab announced its acquisition of Forge Global. Wall Street recognized the structural change before most of venture did.
This matters for founders & investors. When IPOs dominated exits, fund models assumed a small number of public offerings would generate the bulk of returns.
Now liquidity arrives through multiple doors. A founder might sell secondary shares to patient capital while the company remains private. A GP might move positions through continuation vehicles. An LP might trade fund stakes on an increasingly liquid secondary market.
The 830 unicorns holding $3.9t in aggregate post-money valuation cannot all exit through IPOs. The math doesn’t work. At 2025’s pace of 48 VC-backed IPOs, clearing the unicorn backlog would take seventeen years. Secondaries provide a release valve that traditional exits cannot offer.
Companies like OpenAI have embraced this reality, running employee tender offers while voiding unauthorized secondary transfers. The largest private companies now manage their own liquidity programs rather than waiting for public markets.
Today, secondary liquidity concentrates in the top 20 names. SpaceX, Stripe, OpenAI. For the founder of company #50, the secondary market remains largely theoretical. For secondaries to succeed as a broad asset class, buyers must underwrite positions in companies without household recognition. As the market grows, this coverage gap becomes opportunity.
For LPs starved of distributions since 2022, the expansion of secondary channels offers hope. The $169b in cumulative negative net cash flows needs somewhere to go. More exit paths mean more opportunities to return capital.
When a Series B employee asks about liquidity today, the answer isn’t “wait for the IPO.” It’s “we’re planning a tender offer next year.”
A decade ago, secondaries were a footnote. Now they’re infrastructure. Liquidity flows where it can, not where tradition suggests it should.
2026-02-19 08:00:00
I’ve spent the last year building AI agent systems. Here are nine observations.
1. Prototype with the Best
When the input is unpredictable (email parsing, voice transcription, messy data extraction), reach for the state of the art. Figure out what works with the best models, then specialize them over time.
2. Polish Small Gems
I fine-tuned Qwen 3 for task classification using rLLM¹. The 8B model beats zero-shot GPT 5.2 & runs locally on my laptop. Fine-tuning shines when the task is well-defined & the input distribution is stable.
3. Use Built-In Spell-Check
Static typing forces the AI to face a compiler that acts like a spell-check. Ruby let agents hallucinate valid-looking code that failed at runtime. Rust checks the code’s grammar before it ever runs. One-shot success rates improve substantially for medium-complexity tasks.
4. Cajole your Team of Agent Rivals
Build your agentic braintrust. Ask Claude to create a plan. Then prod Gemini & Codex to critique it; Claude addresses the critiques & implements the code. Once implemented, ask Gemini & Codex to critique the implementation relative to the plan & Claude to revise. Agents are great micromanagers.
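Here is a rough sketch of what that loop can look like in code. The `ask_claude`, `ask_gemini`, & `ask_codex` functions are hypothetical stand-ins for whatever API wrappers you use; nothing here is a specific production setup.

```python
# Plan -> critique -> implement -> review loop across rival agents.
# ask_claude / ask_gemini / ask_codex are hypothetical wrappers around the
# respective model APIs; each takes a prompt string & returns text.

def braintrust(task: str, ask_claude, ask_gemini, ask_codex) -> str:
    # 1. One agent drafts the plan.
    plan = ask_claude(f"Create an implementation plan for: {task}")

    # 2. The rivals critique it.
    critiques = [
        ask_gemini(f"Critique this plan:\n{plan}"),
        ask_codex(f"Critique this plan:\n{plan}"),
    ]

    # 3. The original agent addresses the critiques & implements.
    code = ask_claude(
        "Address these critiques & implement the plan.\n"
        f"Plan:\n{plan}\nCritiques:\n" + "\n".join(critiques)
    )

    # 4. The rivals review the implementation against the plan; the original revises.
    reviews = [
        ask_gemini(f"Critique this implementation against the plan.\nPlan:\n{plan}\nCode:\n{code}"),
        ask_codex(f"Critique this implementation against the plan.\nPlan:\n{plan}\nCode:\n{code}"),
    ]
    return ask_claude(
        "Revise the implementation to address these reviews.\n"
        f"Code:\n{code}\nReviews:\n" + "\n".join(reviews)
    )
```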
5. Put All the Clay in One Pot
Building an agent is an exercise in Play-Doh. Some yellow, some red, some green clay. Each from a different pot. I’d like all the tools in one place : manage my memory, manage my prompts, capture my logs, because together they form a single closed loop of improvement with the model. Prompt → Output → Evaluation → Optimization → Prompt.
6. Recognize The iPhone 15 Era of AI
Qwen 3, GLM, DeepSeek V3, & Kimi K2.5 deliver strong performance at a fraction of the cost. The models are now strong enough at workflow tool calling that additional intelligence may not deliver a concrete benefit. Tau2² shows many models have attained this threshold, & now we’re comparing them on cost rather than accuracy.
7. Document FTW
As Harrison Chase put it : “in software, the code documents the app; in AI, the traces do.” Our system runs a nightly prompt optimization job. It collects the last 100 agent conversations, extracts failures (task timeouts, incorrect outputs, user corrections), & generates improved prompts using an LLM-as-judge³. This closed-loop improvement lifts task success rates incrementally each week without manual intervention.
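A minimal sketch of what a job like this might look like, assuming a hypothetical conversation store (`fetch_recent_conversations`) & a generic completion client (`llm_complete`); the names & file paths are illustrative, not our production code.

```python
# Hypothetical nightly prompt-optimization job. fetch_recent_conversations,
# llm_complete, & PROMPT_PATH are illustrative placeholders.
import json
from datetime import date
from pathlib import Path

PROMPT_PATH = Path("prompts/task_agent.txt")
FAILURE_KINDS = {"task_timeout", "incorrect_output", "user_correction"}

def is_failure(conversation: dict) -> bool:
    # A conversation counts as a failure if it ended in a timeout, a wrong
    # answer, or the user had to correct the agent.
    return conversation.get("outcome") in FAILURE_KINDS

def nightly_optimize(fetch_recent_conversations, llm_complete) -> Path:
    conversations = fetch_recent_conversations(limit=100)
    failures = [c for c in conversations if is_failure(c)]

    current_prompt = PROMPT_PATH.read_text()

    # LLM-as-judge: show the judge the current prompt plus the failures & ask
    # it to propose a revision that addresses the failure patterns.
    judge_request = (
        "You are reviewing an agent's system prompt.\n"
        f"Current prompt:\n{current_prompt}\n\n"
        f"Recent failures:\n{json.dumps(failures, indent=2)}\n\n"
        "Rewrite the prompt to address these failure patterns. "
        "Return only the revised prompt."
    )
    improved_prompt = llm_complete(judge_request)

    # Versioned copy so the change can be rolled back (see the next section).
    versioned = PROMPT_PATH.with_name(f"task_agent_{date.today().isoformat()}.txt")
    versioned.write_text(improved_prompt)
    PROMPT_PATH.write_text(improved_prompt)
    return versioned
```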
8. Prompt Musical Chairs
We can’t bring the system down for new prompts. The agents watch a prompt file & reload it automatically when it changes. This separates deployment from experimentation & enables DSPy⁴-style optimization to run automatically. Combine with versioned prompt files & you get full rollback capability.
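A minimal sketch of the reload mechanism, assuming a polling approach & an illustrative file path:

```python
# Minimal hot-reload loop: the agent re-reads its prompt file whenever the
# file's modification time changes, so new prompts deploy without a restart.
from pathlib import Path

class PromptWatcher:
    def __init__(self, path: str, poll_seconds: float = 5.0):
        self.path = Path(path)
        self.poll_seconds = poll_seconds
        self._mtime = self.path.stat().st_mtime
        self.prompt = self.path.read_text()

    def current(self) -> str:
        """Return the latest prompt, reloading it if the file changed on disk."""
        mtime = self.path.stat().st_mtime
        if mtime != self._mtime:
            self._mtime = mtime
            self.prompt = self.path.read_text()
        return self.prompt

# Usage inside an agent loop (illustrative):
# watcher = PromptWatcher("prompts/task_agent.txt")
# while True:
#     system_prompt = watcher.current()   # picks up DSPy-style rewrites automatically
#     ...run the agent with system_prompt...
```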
9. Who Do You Work For?
Skills are for interactive conversations. Code is for agents. Skills are easier to debug. When a skill fails, you know exactly where to look. When an agent chains ten function calls & the output is wrong, you’re hunting through logs.
What have you learned?
rLLM is a Hugging Face library for reinforcement learning from human feedback on language models. ↩︎
Tau2 is an agentic benchmark measuring tool-calling accuracy across models. ↩︎
LLM-as-judge uses one language model to evaluate the outputs of another. ↩︎
DSPy is Stanford’s framework for programmatically optimizing prompts & few-shot examples. ↩︎
2026-02-17 08:00:00
Last quarter, my AI inference costs hit $100,000 annualized.
I started small. Six months earlier, I was spending $200 a month on Claude. Then I added three agent subscriptions : Codex, Gemini, & Claude Code. I was paying $600 a month.
Next I started using AI to transform my todo list into my done list, increasing completed tasks to 31 per day. Daily inference invoices of $92 started arriving. Then $400 per month on browser agents.
Within two quarters, my inference spend grew from $7,200 to $43,000 to over $100,000 run rate.
So I migrated to an open source model. It took a weekend. The key was building the right testing loops : I had six months of historical task data, so I could replay requests through the new model & hill-climb to parity, with AI agents working through the night. By Sunday evening, the two setups performed identically. At 12% of the cost.
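Conceptually, the replay loop looks something like the sketch below. The `Task` shape, the match function, & the candidate variants are placeholders for your own data & models, not the exact system described above.

```python
# Replay historical tasks through a candidate model & measure parity with the
# incumbent. Names (Task, parity_rate, hill_climb) are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    expected_output: str   # what the incumbent model (or a human) produced

def parity_rate(tasks: list[Task],
                candidate: Callable[[str], str],
                matches: Callable[[str, str], bool]) -> float:
    """Fraction of historical tasks where the candidate matches the expected output."""
    hits = sum(1 for t in tasks if matches(candidate(t.prompt), t.expected_output))
    return hits / len(tasks)

def hill_climb(tasks, variants, matches):
    # Try candidate variants (e.g. the open source model with different system
    # prompts or sampling parameters) against the replay set; keep the best.
    best_score, best_variant = 0.0, None
    for variant in variants:
        score = parity_rate(tasks, variant, matches)
        if score > best_score:
            best_score, best_variant = score, variant
    return best_variant, best_score
```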
I’m not the only one paying attention to this cost.
Technology companies are adding a fourth component to engineering compensation : salary, bonus, options, & inference costs. Levels.fyi pegs the 75th percentile software engineer salary at $375k. Add $100k in inference & the fully loaded cost is $475k. That’s 21% in tokens.
The question CFOs will pose : what am I getting for all this inference spend? Can I do it cheaper?
If the metric for a new cloud is gross profit per GPU hour, the employee equivalent is productive work per dollar of inference.
For me, the answer is 31 tasks a day at $12k annually. The engineer still burning $100k? They’d better be 8x more productive!
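For concreteness, here is the back-of-the-envelope version of that metric; the workday count is an assumption.

```python
# Productive work per dollar of inference, back-of-the-envelope.
TASKS_PER_DAY = 31
WORKDAYS_PER_YEAR = 250              # assumption
MY_ANNUAL_INFERENCE = 12_000         # dollars
ENGINEER_ANNUAL_INFERENCE = 100_000  # dollars

tasks_per_year = TASKS_PER_DAY * WORKDAYS_PER_YEAR            # 7,750 tasks
tasks_per_dollar = tasks_per_year / MY_ANNUAL_INFERENCE       # ~0.65 tasks per dollar
required_multiple = ENGINEER_ANNUAL_INFERENCE / MY_ANNUAL_INFERENCE  # ~8.3x
```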
Will you be paid in tokens? In 2026, you likely will start to be.
2026-02-14 08:00:00
Scale AI sold for $14.8 billion.¹ Character.AI for $2.5 billion.² Inflection for $650 million.³ They represent a tiny fraction of the market.
From 2020 to 2025, there were 5,700 AI & ML acquisitions. Only 21% disclosed a deal value. The remaining 4,500 share a pattern : small teams absorbed into larger companies. No fanfare. No valuation headlines. Just talent migrating from startup to incumbent.
It’s cheaper to buy a team than to compete for hires or build from scratch.
The buyers are surprising. Accenture leads with 21 AI acquisitions, only three disclosed. Apple follows with 17, just two disclosed. Google & Microsoft rank far lower.
The 75th percentile deal size tripled from $82M in 2020 to $248M in 2025. The urgency behind developing a strong AI strategy is rising.
The undisclosed majority likely falls well below these figures. 398 undisclosed AI deals in 2020 grew to 1,271 by 2025. Four thousand AI teams absorbed in five years. The pace is quickening.⁴
Meta acquired a 49% stake in Scale AI for $14.8 billion in June 2025. ↩︎
Google paid $2.5 billion for a non-exclusive license and Character.AI’s founders. ↩︎
Microsoft paid $650 million for Inflection’s talent, including Mustafa Suleyman. ↩︎
Thank you to PitchBook for helping pull this data together. ↩︎
2026-02-13 08:00:00
Energy is up 17% this year. Materials 16.5%. Industrials 12%. Technology is flat.
Over the past month, $3.25 billion moved into XLE (energy) while $1.66 billion left XLK (tech).¹
The logic isn’t obvious until you look at operating leverage.
| Sector | P/E | Rev Growth | Earnings Growth | Dividend Yield | Operating Leverage |
|---|---|---|---|---|---|
| Technology (XLK) | 32x | 19% | 18% | 0.7% | 1.0x |
| Energy (XLE) | 21x | 5% | 16% | 3.1% | 3.0x |
| Materials (XLB) | 34x | 8% | 17% | 1.3% | 2.2x |
| Industrials (XLI) | 39x | 8% | 14% | 1.8% | 1.9x |
Energy’s 5% revenue growth becomes 16% earnings growth through 3x operating leverage. The sector trades at 21x versus tech’s 32x, while paying a 3% dividend yield that cushions downside.²
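The arithmetic is simple to check against the table : implied earnings growth ≈ operating leverage × revenue growth.

```python
# Implied earnings growth from revenue growth & operating leverage,
# using the figures from the table above.
sectors = {
    # sector: (revenue_growth, operating_leverage, reported_earnings_growth)
    "Technology":  (0.19, 1.0, 0.18),
    "Energy":      (0.05, 3.0, 0.16),
    "Materials":   (0.08, 2.2, 0.17),
    "Industrials": (0.08, 1.9, 0.14),
}

for name, (rev, leverage, reported) in sectors.items():
    implied = rev * leverage
    print(f"{name}: implied {implied:.0%} vs reported {reported:.0%}")
# Energy: 5% revenue growth x 3.0 leverage ≈ 15%, close to the reported 16%.
```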
The asymmetry matters. If tech misses estimates by 5 points, the 32x multiple contracts. If energy misses by 5 points, dividends cushion the fall. One has a trapdoor. The other has a floor.
Hyperscalers will spend $600 billion on infrastructure in 2026.³ Data centers already consume 4% of US electricity. By 2028, that share could reach 12%.⁴
The bet has risks. Energy at 21x P/E isn’t cheap by historical standards; it’s expensive. The sector typically trades at 10-15x. Grid buildouts face 3-year transformer lead times & 10-year permitting cycles. If AI monetization disappoints, hyperscalers will cut capex as fast as they deployed it. Ask anyone who held fiber stocks in 2001.
In 2026, the market is betting on hard infrastructure.
ETFdb Fund Flows, February 2026 ↩︎
Correction (Feb 14, 2026) : The original version of this post added dividend yield to earnings growth to estimate “total return potential.” This was methodologically incorrect—dividends are paid from earnings, not in addition to them. The corrected version presents valuation (P/E) and dividend yield as separate considerations. ↩︎
IEEE ComSoc, December 2025 ↩︎
DOE Data Center Report, 2025 ↩︎
2026-02-09 08:00:00
In AI, distribution is king. Skills are seizing the crown.
Skills are programs written in English. They tell an agent how to accomplish a task : which APIs to call, what format to use, how to handle edge cases. A skill transforms an agent from a conversationalist into an operator.
Remember Trinity in The Matrix? “Can you fly that thing?” Neo asks. “Not yet,” she says. Seconds later, Tank uploads a B-212 helicopter pilot program directly into her mind. She steps into the cockpit & flies.
That’s what skills feel like. You don’t learn an interface. You acquire a capability. Skills encode institutional knowledge in executable form. Training becomes unnecessary because the capability transfers instantly.
A lot of people are looking to fly helicopters. The top MCP server aggregator has 81,000 stars. Anthropic’s official skills repository has 67,000. Cursor rules : 38,000. OpenClaw’s awesome-skills list, which curates 3,000 community-built skills : 12,500.
For consumers, software discovery disappears. A user asks their agent to track expenses or categorize last month’s spending. The agent finds the skill. The user never knows the tool exists, aside from a subscription.
For enterprises, IT provisions applications by role. A sales rep gets Salesforce. A marketer gets HubSpot. An analyst gets Tableau. Each persona receives a bundle of icons : all requiring training, all adding cognitive load.
In the skills era, enterprises provision capabilities instead of applications.
FP&A teams receive skills that optimize budget variance analysis, pulling data from NetSuite & formatting reports in the CFO’s preferred structure. No training on pivot tables. No documentation on report templates.
Every platform shift compresses the distance between user & value. The web required a URL & a browser. Mobile required a download & a homescreen slot. Skills require a sentence.
But this distribution layer carries risk. A recent analysis of 4,784 AI agent repositories found malware embedded in skill packages : credential harvesting, backdoors disguised as monitoring. We’ll all need trusted operators like Tank to verify our skills.
“Tank, I need a pilot program for an investment memo.”