2026-04-24 08:00:00
Google commoditized its complements: free maps, free email, free browsers, a free mobile OS. It removed every toll booth between the user & search.
| Category | The Castle | The Complement | The Play |
|---|---|---|---|
| Search | Ad Revenue | Original Search Engine | The technical leap that started the feedback loop |
| Email | User Data | Gmail (2004) | Turned paid storage into a free utility; killed Hotmail |
| Maps | Local Ad Intent | Google Maps (2005) | Made GPS hardware & licensing free to own local search data |
| Mobile OS | Search Access | Android (2007) | Gave away an OS to prevent Apple/Microsoft from blocking search |
| Browsers | Search Speed | Chrome (2008) | Built a free, fast browser to increase total web usage |
Anthropic’s strategy parallels Google’s: a natural extension of the strength of its core product, the model.
| Product | Launch | Category Attacked | The Play |
|---|---|---|---|
| MCP1 | Nov 2024 | Data Integration | Open standard for AI-to-data connections; destroys walled garden lock-in |
| Claude Code Security2 | Feb 2026 | AppSec | AI-powered vulnerability scanning; found 500+ bugs in production OSS |
| Claude Cowork3 | Jan 2026 | File/Task Orchestration | Agentic file management without dedicated UI |
| Claude Design4 | Apr 2026 | UI/UX Design | Prompt-to-prototype; reads codebases & auto-generates design systems |
| Interactive Apps5 | Jan 2026 | Productivity Suite | Embeds Slack, Figma, Asana inside Claude |
For Anthropic, more usage across diverse tasks means more data, which produces a smarter model—just as more queries improved Google search.
The commoditization flywheel: both companies give away complements to drive usage of the core.
The risk of this strategy to the ecosystem is that it makes previously attractive categories no longer viable. Commoditizing the complement does not demand a best-in-class replacement. A free, good-enough product is enough to change market dynamics.
Some categories never developed a competitive response to this strategy: email, advertising infrastructure, user-generated video.
But plenty of categories survived through specialization or direct competition: cloud, travel, domain registration, social networking. Commoditizing complements doesn’t always work because focus is scarce, even for the largest, fastest-growing businesses.
A startup’s greatest advantage is that it can outfocus the giant. But it needs to pick the right place to apply pressure.
2026-04-23 08:00:00
“BI became dashboards. And now it is re-expanding to business intelligence.”
— Colin Zima, CEO & co-founder, Omni
Colin describes how AI fuses structured & unstructured data, and why the future of business intelligence isn’t a better dashboard.
A support leader at one of Omni’s customers went through 75 pages of conversation with an AI to identify 10 categories of rep mistakes. The system read support logs, cited specific examples per rep, & suggested concrete changes. Not just a dashboard.
This is intelligence about the business.
A bug-intake skill at Omni takes a Slack link & a description, then searches GitHub & support tickets. If the issue exists, the skill points to the thread. If the behavior is novel, it drafts a GitHub issue & outputs a prefilled link. The user clicks & submits.
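A minimal sketch of that intake flow. The function names, repository, & search helper are hypothetical; Omni’s actual implementation isn’t described in the post. (GitHub really does accept prefilled new-issue links via `title` & `body` query parameters.)

```python
from urllib.parse import urlencode

def search_existing_issues(description: str):
    """Placeholder: a real skill would query GitHub & support tickets."""
    return None  # pretend nothing matches

def intake_bug(slack_link: str, description: str) -> str:
    """Link to an existing thread, or emit a prefilled new-issue URL."""
    existing = search_existing_issues(description)
    if existing:
        return f"Known issue: {existing['url']}"
    # Novel behavior: prefill a GitHub issue for one-click submission.
    params = urlencode({
        "title": description[:80],
        "body": f"Reported via Slack: {slack_link}\n\n{description}",
    })
    return f"https://github.com/example/repo/issues/new?{params}"

url = intake_bug("https://example.slack.com/archives/C1/p2",
                 "Export fails on empty table")
print(url.startswith("https://github.com/example/repo/issues/new?"))  # → True
```

The interesting design choice is the last step: the skill never submits on the user’s behalf; it hands back a link, keeping a human in the loop.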
BambooHR launched Elite Analytics to 30,000 people in four months. Cribl consolidated legacy BI into Omni in three months, migrating 100 dashboards in five weeks.
Underneath, a semantic model stores definitions, logic, & permissions. These models know about data in many different places, both structured & unstructured. It powers dashboards, workbooks, spreadsheets, & AI queries.
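One way to picture that layer: every surface resolves metrics through the same definitions & permissions. The structure below is a generic sketch, not Omni’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    definition: str                        # the single shared definition
    allowed_roles: set = field(default_factory=set)

class SemanticModel:
    """One layer of definitions, logic, & permissions behind every surface."""
    def __init__(self):
        self.metrics = {}

    def define(self, metric: Metric):
        self.metrics[metric.name] = metric

    def resolve(self, name: str, role: str) -> str:
        # Dashboards, workbooks, spreadsheets, & AI queries all call this.
        metric = self.metrics[name]
        if role not in metric.allowed_roles:
            raise PermissionError(f"{role} may not query {name}")
        return metric.definition

model = SemanticModel()
model.define(Metric("arr", "SUM(contract_value)", {"finance", "exec"}))
print(model.resolve("arr", "finance"))  # → SUM(contract_value)
```

Because the AI queries the same resolver as the dashboards, it inherits both the business logic & the permission boundaries.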
“Omni makes all of our knowledge structured & durable for smarter AI.”
— Sarah Fischbach, Staff Analytics Engineer, Checkr
Congratulations to Omni on raising $120 million at a $1.5 billion valuation, led by our friends at ICONIQ. We are thrilled to continue supporting the team in their efforts to transform business intelligence as we have since inception.
2026-04-21 08:00:00
SpaceX announced a partnership with Cursor today: a $10 billion collaboration with a $60 billion acquisition option later this year.1
The most important market in AI isn’t chatbots, search, or image generation. It’s coding. Cursor is the fastest-growing developer tool in history, at $2 billion in annualized revenue.2
To understand the deal, understand the stack. Winning in agentic coding requires three layers. Anthropic, OpenAI, & Google each own & operate compute, models, & distribution.
Cursor has the distribution. xAI has massive compute: the Colossus data center in Memphis houses 100,000 NVIDIA H100 GPUs, making it one of the largest AI training clusters in the world. xAI also has the Grok models, which were broadly used until their popularity collapsed earlier this year.
From August to November 2025, xAI models grew to process nearly 6 trillion tokens per week on OpenRouter, rivaling Anthropic & OpenAI. By April 2026, xAI’s weekly volume had fallen to 0.6 trillion, a 90% decline from peak, triggered by competition from Chinese & American model makers. Today, Anthropic processes more than 100x xAI’s volume.3
Cursor has the opposite problem: millions of developers vibe coding, but its model layer depends on third parties, including OpenAI, Google, & Anthropic, all of which have competing products. This dependence also pressures margins.
For $10 billion, SpaceX buys a call option on the distribution it couldn’t retain, & Cursor wins the independence it hasn’t yet secured.
References
Reuters. “SpaceX says it has option to acquire startup Cursor for $60 billion.” April 21, 2026. https://www.reuters.com/technology/spacex-says-it-has-option-acquire-startup-cursor-60-billion-2026-04-21/ ↩︎
TechCrunch / AI Productivity. “Musk Recruits Senior Cursor Engineers as xAI Co-Founders Keep Leaving.” March 13, 2026. https://aiproductivity.ai/news/musk-hires-cursor-engineers-xai-cofounder-exits/ ↩︎
CodeSOTA / OpenRouter. “Which Models Do AI Agents Actually Use?” April 2026. https://www.codesota.com/agentic/openrouter-models ↩︎
2026-04-19 08:00:00
“As models get smarter, they can solve problems in fewer steps: less backtracking, less redundant exploration, less verbose reasoning. Claude Opus 4.5 uses dramatically fewer tokens than its predecessors to reach similar or better outcomes.”1
When Anthropic launched Opus 4.5 in November 2025, the bigger, more expensive model was actually cheaper to use.
On a per-token basis, Opus 4.5 costs 67% more than Sonnet.2 But Opus 4.5 used 76% fewer tokens to reach the same outcome.1 A task that cost $1 on Sonnet cost $0.40 on Opus.
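The arithmetic behind that $0.40, using the figures above:

```python
# Effective per-task cost when the pricier model uses fewer tokens.
per_token_premium = 1.67     # Opus 4.5: 67% more expensive per token
tokens_fraction = 1 - 0.76   # ...but 76% fewer tokens for the same task
sonnet_task_cost = 1.00      # baseline: a $1 task on Sonnet
opus_task_cost = sonnet_task_cost * per_token_premium * tokens_fraction
print(f"${opus_task_cost:.2f}")  # → $0.40
```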
The trend across vendors has been smarter models using fewer tokens per task.
| Model | Token Efficiency | Tradeoff |
|---|---|---|
| Claude Opus 4.5 vs Sonnet1 | -76% | 67% higher per-token cost |
| GPT-5.4 vs 5.23 | -25% | Responses 24% longer |
| Gemini 3 vs 2.54 | -74% | None measured |
| Claude Opus 4.7 vs 4.65 | +47% | Optimized for code domains |
Then Opus 4.7 shipped & the smarter model became much more expensive. The cause: a new tokenizer, the software that breaks text into pieces a computer understands.6
Smaller pieces force the model to pay closer attention to each word, like reading a contract word by word instead of skimming paragraphs. The model follows instructions more precisely & makes fewer mistakes on coding tasks. The tradeoff : more tokens, higher costs.
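A toy longest-match tokenizer makes the granularity tradeoff concrete. This is illustrative only: real tokenizers use learned byte-pair vocabularies, & note the surface string contains “believ”, not “believe”.

```python
def greedy_tokenize(text: str, vocab: set) -> list:
    """Longest-match tokenization; unknown spans fall back to single chars."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

coarse = {"unbelievable"}            # big vocabulary: whole word, one token
fine = {"un", "believ", "able"}      # finer vocabulary: three tokens
print(greedy_tokenize("unbelievable", coarse))  # → ['unbelievable']
print(greedy_tokenize("unbelievable", fine))    # → ['un', 'believ', 'able']
```

Finer pieces mean more tokens for the same text, which is exactly the 1.46x inflation measured below.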
“For text, I’m seeing 1.46x more tokens for the same content. We can expect it to be around 40% more expensive in practice.”
— Simon Willison7
Boris Cherny, creator of Claude Code, acknowledged Anthropic raised rate limits “to make up for it.”
Will smarter models be increasingly expensive because of greater accuracy, or less expensive because they’re smarter? Resolution increases make them more expensive, then efficiency gains reduce costs: a sawtooth pattern. Either way, the net result is more tokens generated.
Opus 4.5 : $5/$25 per million tokens vs Sonnet : $3/$15. Anthropic Pricing ↩︎
Take the word “unbelievable.” A tokenizer might break it into un, believe, & able. This helps the computer understand that the word is the opposite (un) of a core concept (believe) that is possible (able). ↩︎
2026-04-14 08:00:00
At the heart of every security team, there’s a database. That database records each time a user logs in, every packet of inbound traffic, & each attempted attack. Architected before AI, these SIEM (security information & event management) systems are wooden shields in an era of autonomous attackers.
The consequences are mounting. Deepfake scams have stolen tens of millions. AI-generated phishing bypasses legacy filters. As Mythos has shown, the sophistication of attacks will only increase.
Shachar Hirshberg & Dan Shiebler saw this opportunity. Shachar led the Amazon GuardDuty product, scaling the business to over 80,000 customers. Dan built & led the 60-person AI/ML team at Abnormal Security. Together, they started Artemis to build a database to power defenses for modern security teams. Within a few months, they have more than a dozen production enterprise deployments & are processing over a billion events per hour. We are excited to partner with them at the Series A, along with our friends at Felicis, Brightmind, & First Round.
At the core of this new SIEM are three technologies:
Semantic understanding. To a traditional SIEM, a log is just a string of text. It has no understanding that “jdoe” in Okta & “john.doe” in AWS are the same person, or that a sequence of individually benign actions might constitute an attack. Artemis turns raw logs into a living model of the customer’s environment: users, assets, relationships, & security posture.
Agentic detection. Legacy platforms rely on brittle, hand-written rules. An engineer writes a detection rule: “if events A, B, & C happen in sequence, fire an alert.” It works for a couple months. Then a new service gets added, log formats change, & the rule breaks. Artemis’ detections include multi-step reasoning agents that dynamically query data, perform aggregations, & reason about context to confirm a threat before ever surfacing an alert.
Closed-loop learning. Legacy platforms get worse over time : static detections degrade with changing data & behaviors. Artemis gets better : with each incident or proactive threat hunt, the system identifies new patterns. These are converted into permanent detections that are researched, validated, & maintained fully autonomously.
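A sketch of the legacy pattern being replaced, with the entity-resolution piece layered on. Event names, identifiers, & the rule itself are illustrative; Artemis’ internals aren’t public.

```python
from collections import deque

# Entity resolution: provider-specific IDs map to one canonical identity,
# so "jdoe" in Okta & "john.doe" in AWS resolve to the same person.
IDENTITY_MAP = {
    ("okta", "jdoe"): "john.doe",
    ("aws", "john.doe"): "john.doe",
}

def canonical_user(provider: str, raw_id: str) -> str:
    return IDENTITY_MAP.get((provider, raw_id), raw_id)

# A brittle, hand-written sequence rule of the kind legacy SIEMs rely on:
# fire if events A, B, & C happen in order within a sliding window.
SEQUENCE = ["failed_login", "mfa_reset", "privilege_grant"]

def make_detector(user: str, window: int = 10):
    recent = deque(maxlen=window)
    def on_event(provider, raw_id, event_type):
        if canonical_user(provider, raw_id) == user:
            recent.append(event_type)
        it = iter(recent)  # in-order subsequence check
        return all(step in it for step in SEQUENCE)
    return on_event

detect = make_detector("john.doe")
detect("okta", "jdoe", "failed_login")
detect("okta", "jdoe", "mfa_reset")
print(detect("aws", "john.doe", "privilege_grant"))  # → True
```

The rule only fires here because the identity map joins the Okta & AWS events; without that resolution step, each provider’s logs look individually benign, which is the post’s point.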
The result is a platform that doesn’t just store & search data, but reasons about it autonomously.
If you’re interested in learning more or joining this mission, check out the open roles at Artemis & Shachar’s post.
2026-04-13 08:00:00
For the first time since the 2000s, technology companies are confronting the limits of their supply chain.
GPU rental prices for Nvidia’s Blackwell chips hit $4.08 per hour this week, up 48% from $2.75 just two months ago.1 CoreWeave raised prices 20% & extended minimum contracts from one year to three.1
“We’re making some very tough trades at the moment on things we’re not pursuing because we don’t have enough compute.”
— Sarah Friar, OpenAI CFO1
This scarcity is already reshaping access. Anthropic has limited its newest model to roughly forty organizations.2 Access to the bleeding edge is becoming a gated privilege, for both capacity & security.
If the largest AI companies are having problems, startups face a tougher proposition. Five hallmarks define this era:
The age of abundant AI is over, & it will remain so for years.3