2026-04-21 08:00:00
SpaceX announced a partnership with Cursor today: a $10 billion collaboration with a $60 billion acquisition option later this year.1
The most important market in AI isn’t chatbots, search, or image generation. It’s coding. Cursor is the fastest-growing developer tool in history, at $2 billion in annualized revenue.2
To understand the deal, understand the stack. Winning in agentic coding requires three layers. Anthropic, OpenAI, & Google each own & operate compute, models, & distribution.
Cursor has the distribution. xAI has massive compute. The Colossus data center in Memphis houses 100,000 NVIDIA H100 GPUs, making it one of the largest AI training clusters in the world. xAI also has the Grok models, which were once broadly used but whose popularity collapsed earlier this year.
From August to November 2025, xAI models grew to process nearly 6 trillion tokens per week on OpenRouter, rivaling Anthropic & OpenAI. By April 2026, xAI’s weekly volume had fallen to 0.6 trillion, a 90% decline from peak, triggered by competition from Chinese & American model makers. Today, Anthropic processes more than 100x xAI’s volume.3
Cursor has the opposite problem: millions of developers vibe coding, but its model layer depends on third parties, including OpenAI, Google, & Anthropic, who ship competing products. This relationship also pressures margins.
For $10 billion, SpaceX buys a call option on the distribution it couldn’t retain, & Cursor wins the independence it hasn’t yet secured.
References
Reuters. “SpaceX says it has option to acquire startup Cursor for $60 billion.” April 21, 2026. https://www.reuters.com/technology/spacex-says-it-has-option-acquire-startup-cursor-60-billion-2026-04-21/ ↩︎
TechCrunch / AI Productivity. “Musk Recruits Senior Cursor Engineers as xAI Co-Founders Keep Leaving.” March 13, 2026. https://aiproductivity.ai/news/musk-hires-cursor-engineers-xai-cofounder-exits/ ↩︎
CodeSOTA / OpenRouter. “Which Models Do AI Agents Actually Use?” April 2026. https://www.codesota.com/agentic/openrouter-models ↩︎
2026-04-19 08:00:00
“As models get smarter, they can solve problems in fewer steps: less backtracking, less redundant exploration, less verbose reasoning. Claude Opus 4.5 uses dramatically fewer tokens than its predecessors to reach similar or better outcomes.”1
When Anthropic launched Opus 4.5 in November 2025, the bigger, more expensive model was actually cheaper to use.
On a per-token basis, Opus 4.5 costs 67% more than Sonnet.2 But Opus 4.5 used 76% fewer tokens to reach the same outcome.1 A task that cost $1 on Sonnet cost $0.40 on Opus.
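The two percentages compose multiplicatively. A quick back-of-the-envelope check in Python, using only the figures quoted above, confirms the $0.40 number:

```python
# Per-task cost = per-token price multiplier x tokens-per-task multiplier.
sonnet_task_cost = 1.00            # normalize a task to $1 on Sonnet
opus_price_multiplier = 1.67       # +67% per-token price
opus_token_multiplier = 1 - 0.76   # -76% tokens for the same outcome

opus_task_cost = sonnet_task_cost * opus_price_multiplier * opus_token_multiplier
print(round(opus_task_cost, 2))    # 0.4 -> the $0.40 per task in the text
```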
The trend across vendors has been smarter models using fewer tokens per task.
| Model | Token Efficiency | Tradeoff |
|---|---|---|
| Claude Opus 4.5 vs Sonnet1 | -76% | 67% higher per-token cost |
| GPT-5.4 vs 5.23 | -25% | Responses 24% longer |
| Gemini 3 vs 2.54 | -74% | None measured |
| Claude Opus 4.7 vs 4.65 | +47% | Optimized for code domains |
Then Opus 4.7 shipped & the smarter model became much more expensive. The cause: a new tokenizer, software that breaks text into pieces a computer can process.6
Smaller pieces force the model to pay closer attention to each word, like reading a contract word by word instead of skimming paragraphs. The model follows instructions more precisely & makes fewer mistakes on coding tasks. The tradeoff: more tokens, higher costs.
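A toy illustration makes the mechanism concrete. The subword splits below are hypothetical, not Anthropic's actual vocabulary, but they show why a finer-grained tokenizer inflates token counts, & therefore cost, for identical text:

```python
# Toy illustration: a finer-grained tokenizer emits more tokens
# for the same text, so the same request costs more.
text = "unbelievable outcomes require unbelievable effort"

def coarse_tokenize(s):
    # one token per whitespace-separated word
    return s.split()

def fine_tokenize(s):
    # hypothetical finer vocabulary: split known words into subword pieces
    pieces = {"unbelievable": ["un", "believe", "able"]}
    out = []
    for word in s.split():
        out.extend(pieces.get(word, [word]))
    return out

coarse = coarse_tokenize(text)
fine = fine_tokenize(text)
print(len(coarse), len(fine))   # 5 vs 9 tokens for identical content
print(len(fine) / len(coarse))  # 1.8x more tokens
```

The same effect, at roughly 1.46x rather than 1.8x, is what drives the cost increase Willison describes below.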
“For text, I’m seeing 1.46x more tokens for the same content. We can expect it to be around 40% more expensive in practice.” - Simon Willison7
Boris Cherny, creator of Claude Code, acknowledged Anthropic raised rate limits “to make up for it.”
Will smarter models grow more expensive because of greater accuracy, or cheaper because they're smarter? Resolution increases raise costs, then efficiency gains reduce them: a sawtooth pattern. In every case, the result is more tokens generated.
Opus 4.5: $5/$25 per million tokens vs Sonnet: $3/$15. Anthropic Pricing ↩︎
Take the word “unbelievable.” A tokenizer might break it into un, believe, & able. This helps the computer understand that the word is the opposite (un) of a core concept (believe) that is possible (able). ↩︎
2026-04-14 08:00:00
At the heart of every security team, there’s a database. That database records each time a user logs in, every packet of inbound traffic, & each attempted attack. Architected before AI, these SIEM (security information & event management) systems are wooden shields in an era of autonomous attackers.
The consequences are mounting. Deepfake scams have stolen tens of millions of dollars. AI-generated phishing bypasses legacy filters. As Mythos has shown, the sophistication of attacks will only increase.
Shachar Hirshberg & Dan Shiebler saw this opportunity. Shachar led the Amazon GuardDuty product, scaling the business to over 80,000 customers. Dan built & led the 60-person AI/ML team at Abnormal Security. Together, they started Artemis to build a database to power defenses for modern security teams. Within a few months, they have more than a dozen production enterprise deployments & are processing over a billion events per hour. We are excited to partner with them at the Series A, along with our friends at Felicis, Brightmind, & First Round.
At the core of this new SIEM are three technologies:
Semantic understanding. To a traditional SIEM, a log is just a string of text. It has no understanding that “jdoe” in Okta & “john.doe” in AWS are the same person, or that a sequence of individually benign actions might constitute an attack. Artemis turns raw logs into a living model of the customer’s environment: users, assets, relationships, & security posture.
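The identity-stitching problem can be sketched in a few lines. The directory mapping & event shapes here are illustrative assumptions, not Artemis’ implementation:

```python
# Toy sketch: resolve provider-specific usernames to one canonical identity
# via a directory lookup, so events from different sources correlate.
DIRECTORY = {
    ("okta", "jdoe"): "john.doe@example.com",
    ("aws", "john.doe"): "john.doe@example.com",
}

def resolve(source, username):
    return DIRECTORY.get((source.lower(), username.lower()), f"unknown:{username}")

events = [
    {"source": "okta", "user": "jdoe", "action": "login"},
    {"source": "aws", "user": "john.doe", "action": "assume-role"},
]

# Both events attach to the same person, so individually benign actions
# can be examined as one sequence.
timeline = {}
for e in events:
    timeline.setdefault(resolve(e["source"], e["user"]), []).append(e["action"])
print(timeline)  # {'john.doe@example.com': ['login', 'assume-role']}
```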
Agentic detection. Legacy platforms rely on brittle, hand-written rules. An engineer writes a detection rule: “if events A, B, & C happen in sequence, fire an alert.” It works for a couple months. Then a new service gets added, log formats change, & the rule breaks. Artemis’ detections include multi-step reasoning agents that dynamically query data, perform aggregations, & reason about context to confirm a threat before ever surfacing an alert.
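A minimal sketch of such a hand-written rule (event names, window, & structure are all hypothetical) shows exactly how it breaks when a log format changes:

```python
# Toy hand-written detection rule: fire if A, B, C occur in order
# within a fixed time window. Events are (name, timestamp) pairs.
WINDOW = 300  # seconds

def rule_fires(events):
    wanted = ["A", "B", "C"]
    times = []
    i = 0
    for name, t in events:
        if i < len(wanted) and name == wanted[i]:
            times.append(t)
            i += 1
    return i == len(wanted) and times[-1] - times[0] <= WINDOW

print(rule_fires([("A", 0), ("B", 10), ("C", 20)]))     # True: fires
print(rule_fires([("A", 0), ("B", 10), ("C", 400)]))    # False: outside window
# A new service renames event "B" to "B_v2" and the rule silently dies:
print(rule_fires([("A", 0), ("B_v2", 10), ("C", 20)]))  # False
```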
Closed-loop learning. Legacy platforms get worse over time: static detections degrade with changing data & behaviors. Artemis gets better: with each incident or proactive threat hunt, the system identifies new patterns. These are converted into permanent detections that are researched, validated, & maintained fully autonomously.
The result is a platform that doesn’t just store & search data, but reasons about it autonomously.
If you’re interested in learning more or joining this mission, check out the open roles at Artemis & Shachar’s post.
2026-04-13 08:00:00
For the first time since the 2000s, technology companies are confronting the limits of their supply chain.
GPU rental prices for Nvidia’s Blackwell chips hit $4.08 per hour this week, up 48% from $2.75 just two months ago.1 CoreWeave raised prices 20% & extended minimum contracts from one year to three.1
“We’re making some very tough trades at the moment on things we’re not pursuing because we don’t have enough compute.” - Sarah Friar, OpenAI CFO1
This scarcity is already reshaping access. Anthropic has limited its newest model to roughly forty organizations.2 Access to the bleeding edge is becoming a gated privilege, for both capacity & security.
If the largest AI companies are having problems, startups face a tougher proposition. Five hallmarks define this era:
The age of abundant AI is over, & it will remain so for years.3
2026-04-10 08:00:00
No one comes into a sales conversation without first asking an AI. The buyer journey has changed. Lena Waters, marketing leader behind DocuSign’s IPO, Grammarly & Notion, joined me on Office Hours to discuss what this means for your go-to-market.
The first phase of AI transformation is debt repayment. Most companies are agentically connecting go-to-market processes that should have been fixed years ago.
“Removing human coordination overhead and calling it transformation? That’s debt repayment. It’s real value, but it’s not a new paradigm.”
Marketing teams can finally build their own tools. That unlocks growth. But it’s still phase one.
Websites are human artifacts. They exist to communicate, persuade & convert people who navigate to them. AI agents don’t care for beautiful styling or appeals to emotion. This new persona doesn’t browse. It parses.
“Think about how much time we’ve spent debating the top nav. Solutions before products? Are we allowed to call ourselves a platform? That’s applied human psychology. Agents don’t care.”
Some companies have abandoned websites & mobile apps entirely. The replacement isn’t a better website. It’s a wall of text in an agent’s favorite format: markdown.
Agents are joining buying committees. For small value purchases, they are also the decision-maker. Which database should I use for my vibe-coded app? Let the agent decide. Pair of shoes? New laptop? New car? All of those decisions could be made entirely through AI.
In the enterprise, with many stakeholders & complexities, the path isn’t so clear. The wrong decision still falls on the person. You can’t sue an agent.
“You can’t go to your board and say, my agent told me we should do this.”
We used to sell to humans who researched with tools. Now we sell to tools that report to humans. Equip your agents like you’d equip an internal champion.
2026-04-09 08:00:00
The demand for software is infinite. Kyle Daigle, GitHub’s COO, made the case concrete:
There were 1 billion commits in 2025. Now, it’s 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won’t). GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, & now 2.1B minutes so far this week.
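The run rate in that claim checks out with simple annualization of the weekly figure:

```python
# Annualizing the weekly commit figure quoted above.
weekly_commits = 275_000_000
annualized = weekly_commits * 52
print(f"{annualized / 1e9:.1f}B commits")  # 14.3B: the "on pace for 14 billion"
```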
But that’s not true for all roles. I use a 2x2 matrix that separates work along two axes: the ceiling of demand & whether the loop can be closed.
On one axis, demand. Infinite Demand means more output creates more value. There is no saturation point.
On the other axis, open vs closed loops. Closed Loop means AI can verify correctness without human intervention.
Closed Loop + Infinite Demand = Economic Engines. Software engineering lives here. AI writes the code. Tests verify correctness. More code enables more features. Companies will always need more software.
Closed Loop + Finite Demand = Efficiency Plays. AI bookkeeping categorizes transactions, reconciles accounts, files returns. Deterministic rules applied to numbers. But a company only has so many transactions. A company files taxes once a year. It closes the books each quarter.
Open Loop + Infinite Demand = Creative Amplifiers. Content creation & marketing strategy. AI can generate a thousand ad variations or blog posts. A person must judge the right ones to publish. Does this ad campaign align with our values? Is this strategic positioning correct? Some problems are open loop today but will close over time.
Open Loop + Finite Demand = Utility Tools. Preparing 10-Ks & 10-Qs. Legal contract review. Insurance claims processing. One report per quarter, one contract per deal. AI makes the work faster, but doesn’t create new work to do.
Every role fits somewhere on this 2x2. I would put venture capitalist in finite demand & open loop. There’s only a certain amount of venture capital dollars entering the ecosystem in a year, & investment selection remains an open problem.
Where does yours fit?