Blog of Matt Brown

Vertex 2026: notes from the vSaaS world

2026-04-21 00:33:41

400 vertical software and fintech founders, operators, and investors met in New York last week for Rainforest’s Vertex conference. Everyone from public-company fintech GMs to growth-stage operators to early-stage founders. Companies building for spaces as diverse as construction, field services, legal practices, dental offices, restaurants, and laundromats.

Despite how little vocabulary these industries share, I kept hearing the same language. Lots of talk about the SaaSpocalypse and why it’s overblown. How AI is an accelerant when you own the system of record. How much room embedded payments still have to run. And more. Here’s a recap of what I learned.

What SaaSpocalypse?

The AI discourse has been writing SaaS off for months. Seat compression. Zero-marginal-cost competitors. Vertical was supposed to be especially exposed because it’s “just” workflow software.

The strongest AI work in vertical isn’t being marketed as AI. A vertical ERP that parses every unstructured customer email into the CRM is sold as a better ERP. A field service platform that generates quotes from a voicemail is sold as a faster estimator. The software gets smarter around the customer, and the customer uses more of it, not less. Nobody I talked to was seeing AI drive churn, and many were seeing it drive acceleration.

Trust is the new taste

Horizontal software loves to talk about taste as the differentiator. In vertical, it’s trust. These customers have been sold software for years by vendors who over-promised and under-delivered, and the new wave of agentic point solutions is hitting the same wall of skepticism.

Trust looks different than taste. It’s the vertical-specific brand that tells an HVAC owner you understand HVAC, rather than a generic tech logo that happens to serve field service. It’s showing up at the industry’s trade shows, ride-alongs, and shop floors rather than waiting for customers to come to you. It’s training everyone in your company (especially customer support and sales reps) on the customer’s industry, not just on your product. Taste wins the demo. Trust wins the renewal, the referral, and the second product line.

Embedded payments is huge (and just getting started)

The conventional wisdom on embedded payments has hardened. Most platforms have shipped it, take rates are compressing, and the easy money is behind us. Vertex was a useful corrective on all three. The founders who have been at it longest were the most bullish, not the least. Payments has a lot of room left. Most platforms aren’t close to their ceiling.

Start with take rates. The “compressing to zero” narrative is overstated. Nearly half of the companies in Rainforest’s 2026 embedded payments benchmark report take rates above 90 bps. And when a platform grows from less than $50M in processing volume to $250M+, the median take rate doubles (0.46%-0.60% below, 0.91%-1.05% above). Payments is a game of scale, and the thresholds where the economics inflect are closer than most platforms realize.
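The take-rate arithmetic is worth making explicit. A minimal sketch, using approximate midpoints of the benchmark ranges quoted above (53 bps and 98 bps are my interpolations, not figures from the report):

```python
def payments_revenue(processing_volume: float, take_rate_bps: float) -> float:
    """Payments revenue at a given take rate (1 bps = 0.01%)."""
    return processing_volume * take_rate_bps / 10_000

# A sub-$50M platform near the lower cohort's midpoint (~53 bps)
small = payments_revenue(40_000_000, 53)    # $212,000
# A $250M+ platform near the upper cohort's midpoint (~98 bps)
large = payments_revenue(250_000_000, 98)   # $2,450,000
```

Scale compounds twice: more volume, and a higher rate earned on every dollar of it.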

Then look at how much room is left even at maturity. Median share of revenue from payments peaks at 3-4 years after launch (35%) and dips slightly at 5+ years (30%). The tempting read is natural maturation. But split the 5+ year cohort into “Scaling” versus “Optimized” and the dip is concentrated in the Scaling group. The Optimized cohort keeps compounding past year five. The long-tail dropoff isn’t gravity. It’s the companies that quietly reallocated attention somewhere else.

The leadership data tells the same story. Companies with a C-suite payments leader post almost 2x the take rates of those without one. Take rate, attach, and revenue share don’t drift upward on their own. They respond to scale, focus, and ownership. The platforms that keep treating payments as a serious product line will keep finding growth in it long after everyone else has moved on.

The winning AI is less agentic than you think

The AI discourse assumes the endpoint is full automation. The founders selling to contractors, clinics, and shop owners don’t agree. The teams winning here are designing for a spectrum. An AI voice answering service that routes to a human on the first hint of complexity. An agentic AP workflow that auto-posts low-risk vendor bills and kicks medium-risk ones to an accounting clerk. A scheduling tool that drafts the reply and makes the owner hit send.

What makes these products work isn’t how much they automate, but how cleanly they hand off when they shouldn’t. Customers choose their comfort level, and the trust compounds every time the product respects the choice. The calibration is doing the work, not the autonomy itself.
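The handoff design described above can be sketched as threshold-based routing. Everything here is hypothetical (the `Task` shape, the risk score, the threshold values); the point is that the customer, not the model, sets where automation stops:

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    risk: float  # model-estimated complexity/risk, 0.0-1.0 (invented score)

def route(task: Task, auto_below: float, draft_below: float) -> str:
    """Route along the autonomy spectrum using customer-chosen thresholds."""
    if task.risk < auto_below:
        return "auto"    # e.g. auto-post a low-risk vendor bill
    if task.risk < draft_below:
        return "draft"   # draft the reply; the owner hits send
    return "human"       # first hint of complexity: hand off cleanly

# A cautious owner starts with tight thresholds; trust widens them later.
print(route(Task("recurring utility bill", 0.05), 0.2, 0.6))    # auto
print(route(Task("new vendor, large invoice", 0.40), 0.2, 0.6))  # draft
print(route(Task("disputed charge", 0.90), 0.2, 0.6))            # human
```

The calibration lives in the thresholds, which is exactly why respecting them builds trust.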

Context graphs are the moat vertical software already built

Every AI product is chasing context, and vertical software has been quietly building vertical-specific context graphs for years. The schema of a restaurant’s menu, modifiers, kitchen stations, and labor model. The workflow of a law firm’s matter, phases, timekeepers, and trust ledger. The relationships on a construction project between job, cost codes, change orders, and retainage.

That structured data is the substrate AI needs to be useful, and vertical software already owns it. AI makes software itself cheaper to build, which pushes the moats toward the hard things AI can’t touch. Proprietary context. Regulatory relationships. Hardware integration. The messy physical world these companies live in.


Embedded grows up

A couple of years ago, the embedded narrative was peak hype. Platforms were integrating embedded providers left and right on the assumption that anything embedded would compound. That first wave underdelivered: integrations underperformed, embedded products were thinner than they looked, and many ended in awkward unwinds.

Platforms got pickier. The bar is higher, vetting is tighter, and second-tier partners are getting displaced. The embedded providers themselves have used those years to grow up. The new generation is more honest about what it does and doesn’t solve, and more valuable to platform customers. Established categories are maturing (e.g., accounting with Tight and Layer, lending with Kanmon, payroll with Check and Salsa), and emerging categories are getting serious heat (e.g., embedded marketing with Reach, embedded voice with Dialstack).

And the surface area is still largely unbuilt. 52% of vertical software companies in the study have no embedded fintech beyond payments at all. The ones that do cluster in capital/lending, payroll, and bill pay. Plenty of runway, just not for anyone showing up with a checklist.


Vertical software has developed its own gravity. The businesses differ wildly on the surface. Restaurants don’t look much like law firms, and neither looks much like a wellness studio or a construction GC. But the playbook is shared, and shared freely. At most conferences, every conversation is competitive. At Vertex, most conversations were comparative. The category is still early enough that the pie is growing faster than the fight for it.


My name is Matt Brown. I’m a partner at Matrix, where I invest in and help early-stage fintech and vertical software startups. Matrix is an early-stage VC that leads pre-seed to Series As from an $800M fund across AI, developer tools and infra, fintech, B2B software, healthcare, and more. If you’re building something interesting in fintech or vertical software, I’d love to chat: [email protected]

The 150-Pound Computer

2026-04-10 00:35:51

For decades, the operating assumption was simple: humans were the cheapest general-purpose computers available. Hire them, give them tools, let them process information and make decisions.

In the 1970s, an Air Force Colonel named John Boyd formalized what that 150-pound computer does. He called it the OODA loop: Observe, Orient, Decide, Act. A decision cycle built for fighter pilots. Cycle faster than your opponent and you win. It proved far more universal than Boyd intended.[1]

Every time you hire someone and hand them a set of tools and a set of problems, you’re paying a 150-pound computer to run an OODA loop.

The OODA loop is everywhere. But let’s take insurance underwriting as an example. Submissions land on an underwriter’s desk. She reads the submission, matches it against her carrier’s appetite, decides whether to bind or decline, and issues the quote. Observe, Orient, Decide, Act. Dozens of times a week.
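The underwriter’s cycle maps onto the loop almost mechanically. A toy sketch, with an invented submission/appetite shape standing in for a real intake system:

```python
def ooda_cycle(submission: dict, appetite: dict) -> str:
    """One pass of the underwriter's loop (illustrative, not a rating engine)."""
    risk_class = submission["class"]                   # Observe: read the submission
    in_appetite = appetite.get(risk_class, False)      # Orient: match carrier appetite
    decision = "quote" if in_appetite else "decline"   # Decide: bind or decline
    return decision                                    # Act: issue the outcome

appetite = {"mid_market_gl": True, "coastal_property": False}
print(ooda_cycle({"class": "mid_market_gl"}, appetite))     # quote
print(ooda_cycle({"class": "coastal_property"}, appetite))  # decline
```

Dozens of passes a week, each one a full loop.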

Software ate the edges. AI enters the middle.

Software made parts of that loop faster, particularly Observe and Act. Data warehouses and submission intake tools on the Observe side. Policy admin systems and workflow automation on the Act side. Rating engines and risk scores nibbled at Orient and Decide, but the core of both (judgment, context, the actual call) stayed human. You can only cycle as fast as the slowest step allows.

Now AI enters the middle of the loop. The prevailing assumption is that it completes the job software started: automate the whole cycle, remove the human entirely.

For some roles, that’s right. But for most of the interesting ones, it’s a fundamental misunderstanding of what the human was actually doing in Orient and Decide.

Orient and Decide are bundles, not functions

The underwriter wasn’t performing one function. Orient and Decide are each bundles of distinct sub-functions, packaged into a single role because they came free with every hire. We never had to decompose the bundle because there was no reason to. Now there is.

AI absorbs some of these functions today, will absorb others as it matures, and will probably never absorb the rest. Automate the wrong ones and you’ve built a system nobody trusts. Leave the wrong ones to humans and you’re paying for work a model should handle. Here are some of the critical parts of those bundles:

Orient: building the picture

The pattern matcher. Recognizing what kind of thing you’re looking at by matching against known categories. Enough examples exist that AI classifies faster and more consistently than any human. In underwriting, this can mean reading a submission and recognizing it as a standard mid-market GL risk the carrier has seen ten thousand times. Once matched, guidelines prescribe the response. This is the first function AI absorbs end to end.

The context adapter. Every organization accumulates unwritten rules and institutional knowledge that never make it into any system. This function translates that tacit knowledge into judgment on a specific case. In underwriting: knowing that a contractor class is fine in Texas but toxic in Florida, or that this agency’s “preferred” submissions mean the agent’s brother-in-law. LLMs with access to historical data already do this better than any individual. They don’t forget, and they don’t walk out the door with ten years of institutional knowledge.

The loop connector. Bridging decision loops the org chart never connected, i.e., someone in one process notices a signal that matters to a different process. The underwriter who flags a claims trend to the actuarial team. The valuable part isn’t the communication, but noticing it was relevant to someone else’s problem. AI today runs one loop well. Connecting loops means recognizing what matters outside your own process. This is still largely unsolved and enormously valuable.

Decide: committing to action

The circuit breaker. Human slowness was a feature. The underwriter who sleeps on a borderline case isn’t necessarily being slow; she’s running a second, slower loop. AI doesn’t have that instinct. The hard problem isn’t implementing confidence thresholds. It’s knowing which decisions need the overnight hold and which don’t. Fraud detection should run at machine speed, while underwriting authority on a novel risk class probably shouldn’t. Choosing the right clock speed is itself a judgment call about the nature of the decision.

The exception decider. This is the case that doesn’t fit the guidelines: the question isn’t how to decide within the rules but whether to change them. In underwriting: two carriers with identical models and data diverge entirely based on how they handle exceptions, because that’s where underwriting philosophy lives. AI optimizes for the objective you give it. It doesn’t question the objective.

The decision owner. Someone’s name is on the file, not just for blame, but for iteration: the person who feels feedback from outcomes and adjusts. Regulators require a human in the chain, reinsurers demand it, etc. This isn’t a cognitive limitation. It’s a structural requirement that changes at the speed of law, not tech.
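One way to see how these Decide sub-functions compose is a toy decision router. All names, labels, and thresholds here are invented; the interesting part is which branches return control to a named human:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    kind: str          # e.g. "fraud_check", "novel_risk_class" (invented labels)
    confidence: float  # model confidence, 0.0-1.0
    owner: str         # the human whose name is on the file

# Which decision kinds run at machine speed vs. get the overnight hold is
# itself a judgment call; this mapping is the part a human must author.
CLOCK_SPEED = {"fraud_check": "machine", "novel_risk_class": "overnight_hold"}

def decide(d: Decision, fits_guidelines: bool, min_confidence: float = 0.8) -> str:
    if not fits_guidelines:
        return f"escalate_to_{d.owner}"  # exception decider: changing rules is an act of authority
    if d.confidence < min_confidence:
        return f"escalate_to_{d.owner}"  # confidence threshold: the easy part
    if CLOCK_SPEED.get(d.kind) == "overnight_hold":
        return f"hold_for_{d.owner}"     # circuit breaker: the deliberately slower loop
    return "auto_execute"                # routine work at machine speed

print(decide(Decision("fraud_check", 0.99, "jane"), True))       # auto_execute
print(decide(Decision("novel_risk_class", 0.99, "jane"), True))  # hold_for_jane
print(decide(Decision("fraud_check", 0.50, "jane"), True))       # escalate_to_jane
```

Note that every non-routine branch carries an owner’s name: the social and legal prerequisite survives the automation.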

These aren’t the only functions in the bundle, but they illustrate the gradient. AI is already better at the first two. It will likely absorb the third and fourth as agentic systems mature. The last two are probably irreducible. Not because AI can’t get smart enough, but because exception-deciding is an act of authority, and decision-ownership is a social and legal prerequisite for the transaction to exist.

The same decomposition applies wherever humans orient and decide at scale. In contract law, AI agents already draft standard redlines and learn each client’s preferences, but “commercially reasonable” and “reasonable” look identical to a model and mean entirely different things in court. The lawyer decides whether to push back, and the lawyer’s name is on the opinion.

In procurement, an autopilot benchmarks supplier pricing and handles standard reorders. But staying with a vendor at 20% above market because they alone can deliver by your launch date? That’s exception-deciding. And when that vendor misses a delivery, the relationship that resolves it is human. The labels change but the decomposition doesn’t.

Where value accrues

If your product’s primary value is executing routine decisions, you’re competing with every AI wrapper that can ingest inputs and produce outputs. The moat moves in two directions.

  1. Down to the data layer: the system that has seen ten years of submissions, outcomes, and exception decisions has a better-oriented model than any new entrant. That advantage compounds with every cycle.

  2. Up to the exception-handling layer: the product that makes the exception decider and the decision owner more effective (better context, tighter feedback loops, smarter routing) owns the part of the workflow that can’t be commoditized.

The exception-to-routine flywheel

AI surfaces a genuinely novel case. A human makes a judgment call. That judgment gets fed back into the system so next time it’s routine, not exceptional. The exception pool shrinks, the system compounds, and a competitor starting fresh has to build that history one judgment call at a time.
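The flywheel reduces to a precedent cache that humans fill one judgment at a time. A minimal sketch with an invented case signature:

```python
class Flywheel:
    """Exception-to-routine loop: human judgments become machine precedents."""

    def __init__(self):
        self.precedents: dict[str, str] = {}  # case signature -> recorded ruling

    def handle(self, case: str, ask_human=None):
        if case in self.precedents:
            return "routine", self.precedents[case]  # absorbed on a prior cycle
        ruling = ask_human(case)          # genuinely novel: a human decides...
        self.precedents[case] = ruling    # ...and the judgment is fed back
        return "exception", ruling

fw = Flywheel()
status1, _ = fw.handle("roofing_contractor_FL", lambda c: "decline")
status2, ruling2 = fw.handle("roofing_contractor_FL")
# status1 == "exception", status2 == "routine": same case, now automated
```

A competitor starting fresh has to repopulate that cache one human judgment at a time.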

But the exception pool doesn’t shrink to zero. The frontier moves. Automate today’s exceptions and you can take on risk classes you couldn’t touch before. That generates new exceptions at a higher level of complexity. The human doesn’t become unnecessary. The human operates on increasingly valuable problems.

Building for the human functions

In practice this means surfacing exceptions with full context, not just “this case is unusual” but why it’s unusual, what similar cases looked like, and what happened when someone decided differently. Tightening the decision owner’s feedback loop means connecting outcomes back to the specific decisions that produced them, in real time, not in a quarterly report. The trick is engineering productive friction: confidence scores, mandatory holds on high-stakes decisions, escalation paths that route the right cases to the right humans.

Products that dump 500 AI-flagged items on a human are treating the human as a pattern matcher, a function AI should handle. Products that surface five genuine exceptions with full context are routing work to the exception decider and the decision owner, the functions where humans are irreplaceable. That’s the design problem worth solving and where the real value will accrue.

The unbundling

The 150-pound computer isn’t being replaced. It’s being unbundled. Every company that employs humans to orient and decide is now answering the same question, whether it knows it or not: which functions in the bundle are you absorbing, which are you augmenting, and which are you designing around as permanently human?

Get that decomposition wrong and you’ve built a fast system nobody trusts, or an expensive human doing work a model should handle. Get it right and you’ve found something better than automation: a system that converts today’s exceptions into tomorrow’s routine, freeing the 150-pound computer to work on problems it couldn’t reach before.



[1] The OODA loop directly inspired many of the frameworks that startups have used for years, from Steve Blank’s Customer Development methodology to Eric Ries’s Lean Startup concept.

Fintech's moats don't compile

2026-03-12 00:35:21

“Fintech” has long traded on the ambiguity in its name.

The “fin” implied lots of emails from .gov domains, months-long audits, compliance officers who know your SAR filing history better than your product roadmap, and mid-week flights to Charlotte or DC. The “tech” implied a slick mobile app, a 10x user experience, and investor coffees at Blue Bottle.

“Fin” and “tech” were always a spectrum, but the market generally rewarded fintechs for being as much “tech” as possible and as little “fin” as they could get away with.

And that’s understandable. In 2021, software had a gross profit pool of roughly $0.7 trillion, valued at a steep premium. Financial services had a gross profit pool an order of magnitude larger, valued far more conservatively.[1] Fintech let you arbitrage both: financial services economics with a software multiple.

That gap in profit pools also tells you where the real money is. Financial services generates more gross profit than any other sector globally. The “fin” side of fintech isn’t just more defensible. It’s a far larger market.

Then AI arrived and the arbitrage broke. Software valuations compressed as investors re-priced what code was worth in a world where code is getting cheaper. Fintechs got caught in the downdraft because the market had categorized them as software companies.

But the market has the category wrong. Fintech’s costs, and its moats, were never in the code, and look increasingly antifragile to AI disruption.

A tale of two cost structures

Software had one of the best business models in history: code was expensive to produce, but once written, it could be distributed for almost nothing. The gap between “expensive to build” and “free to distribute” was the margin. If you’re a SaaS company spending 22-25% of revenue on R&D, that spend is also your barrier to entry. Competitors couldn’t easily replicate what took years and tens of millions to build.

AI closes that gap from the top. If code is cheap to build and cheap to distribute, the margin compresses. The wall that kept competitors out gets shorter, more players get in, and pricing power erodes.

That’s a real problem if your business is software. But fintech’s expenses aren’t engineering expenses. Follow the money and the distinction gets obvious fast.

PayPal spends 9% of revenue on R&D. Block spends 12%. That’s not because fintech engineering doesn’t matter. Stripe’s engineering is world-class and a real competitive advantage. It’s that engineering isn’t where the majority of the money goes.

It goes to the fin. And unlike R&D spend, these costs don’t just produce a product, they produce moats:

Credit losses buy underwriting data.

Affirm spends 35% of revenue on credit losses and cost of capital, before a single engineer is paid. Every dollar lost to defaults is a dollar of repayment data a competitor doesn’t have. A new entrant training on synthetic data has no ground truth. You can’t build reliable loss history on synthetic data alone.

Compliance spend buys regulatory permission.

Wise dedicates a third of its workforce to compliance and financial crime prevention across 65+ regulatory licenses. Money transmission licenses across 50 states. BSA/AML programs. Bank charter requirements. These aren’t advantages you build. They’re permissions you earn, continuously. You can’t vibe-code a banking license.

Transaction volume buys proprietary data.

Toast’s payments segment runs at 22% gross margins versus 70% for its SaaS segment, yet generates nearly twice the gross profit. Those costs buy merchant-level transaction data that feeds Toast Capital, which has originated over $1 billion in loans. Adyen’s risk models are trained on transaction patterns across 30+ markets.

Fintech’s margins were never high in the first place, and that’s the point

A payments company runs at 20-50% gross margins, not 80%. But lower margins aren’t the same as weaker businesses. Fintech margins are lower because many of those costs generate compounding advantages. And even the ones that don’t still exist outside the blast radius of AI-driven cost compression.

And AI makes every one of these moats stronger. Better models tighten loss rates. Better fraud detection reduces chargebacks. Better compliance tooling lets smaller teams hold more licenses. AI doesn’t replace the moat. It rewards the companies that chose to build in the hard parts of fintech: money movement, risk, proprietary data, and regulation.

So the real argument is not just “AI helps fintech.” It’s that AI shifts value away from product surface area and toward proprietary data, risk-bearing capacity, regulatory permission, and distribution embedded in real money movement. If you’re building in those areas, AI is compounding in your favor. If your differentiation is in the code, it’s compounding against you.

And the demand side keeps growing. Every vibe-coded checkout is a new fraud vector. Every AI agent transacting autonomously is a chargeback risk. The more that gets built on top of fintech’s rails, the more essential the rails become.

Fin for the win

This realization is already forcing smart fintech founders to re-think where they sit along the “fin” and “tech” spectrum:

  • Do we underwrite and price risk ourselves, or pass it to a partner who keeps the margin?

  • Do we own the regulatory relationship, or rent it from someone who does?

  • Does every transaction make our risk models sharper, or are we training someone else’s?

  • Is our ledger the source of truth, or an imperfect mirror of someone else’s?

This distinction cuts the fintech landscape in two. The companies that own the regulatory relationship, eat the credit losses, and accumulate the transaction data are building moats that AI deepens. The ones that rent the fin, wrapping a partner bank’s license, a BaaS provider’s ledger, someone else’s risk models in a better UI, have exactly the same problem as SaaS companies. Their differentiation is in the code, and the code just got cheaper.

The old arbitrage of financial services economics with a software multiple is dead. The new one is simpler: own the fin.



[1] Gross profit pool estimates from Coatue’s 2022 fintech report. “Gross profit” is structurally cleaner in software than in financial services, where P&L structures vary across sub-sectors, so treat the comparison as directionally accurate rather than precise.

Vertical software already won the context graph

2026-01-23 04:46:55

Context graphs have become the new battleground in enterprise software. @JayaGup10 and @ashugarg argued that the next trillion-dollar platform opportunity isn’t in systems of record. It’s in capturing the decision traces that systems of record miss. That’s true, but it misses a key point: vertical software companies have been building these context graphs for over a decade.

What’s a context graph & why does it matter for agents?

A context graph is a queryable record of business logic: the reasoning, precedents, and decision traces that explain why things happened, not just what happened. Every company has one in theory. It’s the accumulated knowledge of how the business operates: the exceptions that get approved, the precedents that govern decisions, the tribal knowledge in people’s heads.

Agents need this to move from automation to autonomy. An agent can run a workflow, but it can’t handle exceptions or apply precedent without access to the reasoning behind past decisions. The context graph separates an agent that follows rules from one that exercises judgment.

In most companies and products, the context graph exists in theory but not in practice. It’s scattered, implicit, and inaccessible because:

  • No one logs the reasoning behind decisions. The VP approved the exception on a Zoom call but never recorded why.

  • Systems capture outcomes, not context. The CRM shows the final discount, not the service outages or churn threat behind it.

  • What context exists is siloed. It’s scattered across tools that don’t share a worldview.

These aren’t product bugs. They’re structural features of horizontal SaaS. Generic platforms use flexible abstractions that can model any business, which means they model no business precisely. Humans must bridge the gap between “what the system captures” and “how decisions get made.” But humans don’t leave audit trails.

The trillion-dollar question: who fixes this? The prevailing thesis is that agent startups have a structural advantage. They sit in the execution path at decision time, so they can capture context that systems of record never see. The assumption is that the context graph needs to be built from scratch. But that’s not entirely true.

Vertical software has been quietly building context graphs for decades

The debate has a blind spot: it focuses entirely on horizontal enterprise software.

Think about the workflows and software stack of a typical company. It’s a patchwork of horizontal tools, each built for a broad use case, and none designed to really work together. Sure, they have APIs and integrations, but even the best integrations drop tons of context as data and actions move through them. The context graph fragments because the tools don’t share a worldview.

But vertical software is different. The difference starts with something I’ve written about before: the data model.

Horizontal platforms like Salesforce use generic abstractions (“accounts,” “contacts,” “opportunities”) that can model almost any business. That flexibility helps breadth but hurts depth. The ontology doesn’t map to how any particular industry works. It’s a blank canvas customers must configure into meaning.

Vertical software starts from the opposite premise. The data model isn’t flexible. It’s opinionated, purpose-built, and mapped to that industry’s reality.

Look at Toast’s data model. You’ll see objects like Order, MenuItem, Customer, PrepStation, etc. These aren’t generic transactions dressed up with custom fields. They’re first-class entities with built-in relationships: orders connect to customers, menu items, locations, and payments as native concepts. The ontology is the domain.

Compare that to Salesforce’s data model: Account, Opportunity, Lead, Contact, etc. Powerful abstractions, but they describe a law firm, restaurant, or rocket manufacturer equally well. They describe none of them well. Configuration, integrations, and human memory must bridge the gap.
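The contrast between the two data models can be made concrete. The class names below echo the object names mentioned above but are illustrative, not Toast’s or Salesforce’s actual schemas:

```python
from dataclasses import dataclass, field

# Generic horizontal abstraction: meaning lives in configuration.
@dataclass
class GenericRecord:
    record_type: str                  # "Account"? "Opportunity"? You decide.
    custom_fields: dict = field(default_factory=dict)

# Opinionated vertical ontology: the domain is first-class and
# relationships between entities are native, not configured.
@dataclass
class MenuItem:
    name: str
    price: float
    prep_station: str

@dataclass
class Order:
    customer: str
    items: list[MenuItem]

    def total(self) -> float:
        return sum(i.price for i in self.items)

order = Order("table_12", [MenuItem("burger", 14.0, "grill"),
                           MenuItem("fries", 5.0, "fry")])
generic = GenericRecord("Opportunity", {"Order_Total__c": order.total()})
print(order.total())  # 19.0: the vertical model answers domain questions directly
```

The generic record can store the same total, but only as an opaque custom field; the vertical model knows what an order is.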

Opinionated data models bootstrap context graphs

In vertical software:

The reasoning gets recorded because the system is built around actual business decisions. When a construction platform tracks change orders, it captures why the change happened (weather delay, subcontractor default, scope change) as structured data, not notes in a generic “activity” field.

The context isn’t siloed because vertical platforms consolidate functions into one system. The construction platform isn’t just project management; it’s also scheduling, procurement, billing, and field operations. One system, one data model, one view of reality.

The full decision trace is reconstructible because the ontology supports it. When everything from estimate to invoice lives in a system built around menu items or job sites or patient treatments, you can trace how a price changed, why a timeline slipped, or what drove a discount.

Inferential density: a different way to capture “why”

There’s a deeper point about how context graphs work. The original context graph thesis assumes decision traces need explicit capture: an agent logs “I did X because of Y” at the moment of decision. But there’s another way the “why” becomes accessible: through inferential density. When the graph is sufficiently rich, the reasoning can be reconstructed from relationships between nodes.

Consider an order in Toast. Alone, it’s just a transaction with attributes. Connect it to inventory and you know which items are selling faster than expected and what needs restocking. Connect it to customer data and you know who your most valuable customers are, what they order, and when. Connect it to payments and you know revenue by payment method and which orders are outstanding.

Now when you see a spike in refunds, you don’t need someone to log what went wrong. You can see that the spike correlates with a specific menu item, the inventory system shows that ingredient was substituted due to a stockout, and the affected customers skew toward your highest-value segment. The reasoning is embedded in the relationships.

Each new function adds inferential power to everything already in the graph. Inventory data makes orders more meaningful. Customer data makes inventory decisions more meaningful. Payment data makes customer relationships more meaningful. The context graph doesn’t just get wider; it gets denser. Density makes the “why” reconstructible without explicit capture.
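The refund example can be sketched as a walk over linked records. The data and field names are invented; the point is that no one logged a “why,” yet the explanation falls out of the joins:

```python
# Toy linked records standing in for a vertical platform's graph.
orders = [
    {"id": 1, "item": "pad_thai", "customer": "vip_1", "refunded": True},
    {"id": 2, "item": "pad_thai", "customer": "vip_2", "refunded": True},
    {"id": 3, "item": "salad",    "customer": "new_9", "refunded": False},
]
inventory_events = {"pad_thai": "sauce_substituted_due_to_stockout"}
segments = {"vip_1": "high_value", "vip_2": "high_value", "new_9": "new"}

def explain_refund_spike(orders, inventory_events, segments) -> dict:
    """Reconstruct the 'why' from relationships: no decision log required."""
    refunded = [o for o in orders if o["refunded"]]
    items = [o["item"] for o in refunded]
    item = max(set(items), key=items.count)  # item driving the spike
    return {
        "item": item,
        "likely_cause": inventory_events.get(item),
        "affected_segments": {segments[o["customer"]] for o in refunded},
    }

why = explain_refund_spike(orders, inventory_events, segments)
# The cause and the high-value customer skew both emerge from the joins.
```

Each additional linked table makes every existing record more explainable, which is the density argument in miniature.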

This is the structural advantage vertical platforms carry into the age of agents.

How vertical software is extending the context graph at the edges

Vertical software companies didn’t build these context graphs by accident. They were forced to go deep because they couldn’t go broad. A construction platform or dental practice system faces a ceiling: there are only so many construction companies, only so many dental practices. If you can’t grow by adding customers across industries, you grow by capturing more value from each customer within your industry.

Embedded payments was the proof point. When a vertical platform embeds payments, such as with Rainforest, it isn’t just adding a monetization layer. It’s extending the context graph into new categories. The platform knows not just what transactions happened, but the full economic reality: payment velocity, cash flow patterns, default rates. Every transaction adds to the accumulated knowledge.

The same logic now drives the next wave. Vertical platforms have spent a decade consolidating the operational core. But activity at the edges (customer acquisition, purchasing, workforce management) still happens outside the platform. Marketing in Mailchimp. Procurement in email threads. Accounting in QuickBooks. Each represents a gap in the graph.

Why vertical platforms can now absorb more edge functions

The foundation exists. Vertical platforms have domain-specific ontologies, core operational modules, and deep customer relationships. They understand what a job site is, what a patient treatment plan looks like, what a menu item costs. This is the precise business logic edge functions need to plug into.

The financial layer is in place. Embedded payments gave vertical platforms visibility into the money moving through their customers’ businesses. That financial context is essential for procurement and accounting, which live at the intersection of operations and money.

AI changes the economics. Marketing, procurement, and bookkeeping require labor. They’re judgment-intensive and context-dependent. AI can now handle workflows that previously required humans. The vertical platform, with its deep context graph, is the ideal substrate.

A few categories we’re tracking here:

  • Marketing pulls customer acquisition into the graph. Reach routes customer communications and other marketing data through the vertical platform itself, connecting acquisition cost to the customers it produced.

  • Accounting pulls financial categorization into the graph. With Layer or Tight, transactions get categorized within the platform rather than exported to QuickBooks, closing the loop between operations and the books.

  • Procurement pulls purchasing into the graph. Companies like Sticker and Faliam bring supplier relationships out of email threads and into the system of record, making spend visible alongside operations.

Each follows the same pattern: absorb an adjacent function, blend context with financial data, deploy AI, and deepen the graph.

So what?

Horizontal SaaS will lose ground. These companies have brand equity, data gravity, and deep workflow integration. But they’ll lose share to agent-first startups smart about bootstrapping context graphs, and to vertical incumbents picking off customers tired of stitching together horizontal solutions.

Agent-first startups face a cold start problem. The context graph thesis is right that agents can capture decision traces. But vertical software already owns the workflow for millions of businesses. An agent startup building for restaurants competes with Toast’s distribution, data, and decade of accumulated context. A hard gap to close.

Vertical software incumbents are undervalued. The market prices vertical software on revenue and growth. It doesn’t price the context graph. As horizontal SaaS struggles and agent startups face distribution challenges, vertical platforms with dense context graphs have a structural advantage the market undervalues.

Embedded service providers are an emerging category worth watching. Not every vertical platform will build these graph extensions in house. Embedded solutions will often be more attractive than building marketing, procurement, and accounting internally. Their growth is complementary to vertical software’s expansion: as vertical platforms absorb more functions, the infrastructure providers that enable it become more valuable.

The trillion-dollar context graph opportunity is real. But it won’t be captured by whoever builds the best agent. It will be captured by whoever already has the deepest graph and knows how to extend it.


My name is Matt Brown. I’m a partner at Matrix, where I invest in and help early-stage fintech and vertical software startups. Matrix is an early-stage VC that leads pre-seed to Series As from an $800M fund across AI, developer tools and infra, fintech, B2B software, healthcare, and more. If you’re building something interesting in fintech or vertical software, I’d love to chat: [email protected]

The thermodynamics of risk

2026-01-13 03:15:44

Risk, like energy, cannot be destroyed: only transferred or transformed.1 Call it the thermodynamics of financial services.

Every financial transaction involves risk: uncertain exposure that someone must bear. A loan might not be repaid. A payment might be fraudulent. A counterparty might fail. Someone is always holding this exposure.

Risk is balanced by cost. Interest rates on loans. Interchange fees on payments. Spread on foreign exchange. You take the risk, you get paid for it.

Risk is also balanced by friction. The three-day settlement window. The paper application. The in-branch visit. These delays make risk more manageable. Friction is another form of cost (implicit rather than explicit, paid in time and conversion rather than dollars).

Fintechs reduce both cost and friction. Lower fees. Faster approvals. Seamless onboarding. The pitch is simple: same product, better experience.

But reducing cost and friction doesn’t reduce risk. That’s the thermodynamics at work. The risk remains, conserved in the system. The friction you removed was managing something. Where does that exposure land?

Risk takes many forms: credit risk, fraud and chargeback risk, operational risk, concentration risk, and more.

Each requires different capabilities to manage. Being good at credit risk doesn’t make you good at fraud risk. Operational excellence doesn’t help with concentration.

This creates an opportunity. A process change, a new data source, or a different product structure can shift risk from one form to another. A company with a specific edge in data, technology, or operational discipline can deliberately transform risk into a form they’re better equipped to manage. The goal isn’t to avoid risk. It’s to hold the right risk.

The difference between success and failure is whether this transformation is intentional. Reducing cost and friction without understanding the new risk you’re holding is how fintechs blow up. The ones who win recognize the transformation and build specific capabilities to handle it.

In each of these examples, notice the trade: what friction came out, what risk moved in, and what capability the winner built.

Square (underwriting risk → fraud and chargeback risk)

Traditional payment processors manage risk through friction: site visits, reserve accounts, manual underwriting. The merchant is vetted before they ever process a transaction. Square removed that friction, letting anyone accept payments instantly. But the risk didn’t disappear; it transformed. They won because they built real-time detection systems designed for fraud and chargebacks. They deliberately traded a risk managed through slow process for a risk they could manage through technology.

Toast (credit risk → operational and concentration risk)

Traditional lenders manage credit risk through friction and cost: lengthy applications, document collection, credit checks, high interest rates. Toast removes that friction for restaurants on their platform. A restaurant can get capital with minimal paperwork, underwritten on real-time transaction data flowing through its point-of-sale system. Concentration risk cuts both ways: their entire loan book is restaurants, so when COVID hit, the whole portfolio was correlated. But focusing on a single sector means deeper knowledge, purpose-built software, and better underwriting for that specific business. They win when depth of insight outweighs concentration exposure.

Buy now, pay later (consumer credit cost → provider credit risk)

Traditional card payments balance risk through cost: merchants pay interchange, consumers pay high APRs if they revolve. BNPL changes the equation. Merchants pay a higher fee (4-8% vs 1.5-3%) but shed chargeback exposure and gain conversion lift. Consumers get interest-free financing. The BNPL provider absorbs credit risk that was previously managed with high APRs. Companies like Affirm and Klarna are betting they can manage this through short durations, transaction-level underwriting, and merchant-funded economics. They win when underwriting keeps losses below the merchant fee. They lose when credit losses spike and the math stops working.
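The BNPL bet reduces to simple arithmetic: fee income per dollar of GMV has to cover credit losses plus funding. The rates below are illustrative assumptions, not Affirm’s or Klarna’s actual figures.

```python
# Back-of-the-envelope BNPL unit economics (all rates hypothetical).
def bnpl_margin(gmv, merchant_fee_rate, loss_rate, funding_cost_rate):
    """Net margin on a basket: merchant fee income minus credit
    losses and the cost of funding the short-duration receivable."""
    revenue = gmv * merchant_fee_rate
    losses = gmv * loss_rate
    funding = gmv * funding_cost_rate
    return revenue - losses - funding

# A $100 basket at a 5% merchant fee, 2% losses, 1% funding cost
# nets a positive margin; the same basket at 6% losses goes negative.
print(bnpl_margin(100, 0.05, 0.02, 0.01))
print(bnpl_margin(100, 0.05, 0.06, 0.01))
```

The whole model lives in the gap between the fee and the loss rate, which is why transaction-level underwriting matters so much: a few points of credit deterioration flips the sign.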

If you’re building a fintech that reduces cost or friction, ask yourself: what risk are you now holding? The friction you removed was managing something. Do you have data, technology, or expertise that makes you better at managing the new form? Or are you just hoping it doesn’t materialize?

Risk can’t be destroyed. But it can be transformed, and the best fintechs turn that transformation into their advantage.



1. For the physics-minded: diversification and better information can reduce risk at the portfolio level (that’s the point of insurance). But compression in one dimension creates exposure in another. Diversify across a thousand borrowers and you’ve traded idiosyncratic credit risk for systematic risk and operational risk. The risk isn’t gone; it’s reshaped. Squeeze it in one place and it bulges out somewhere else.

Your data model is your destiny

2025-10-15 00:49:21

Product market fit is the startup holy grail. “Product” and “market” are essential, but a startup’s data model is the dark matter that holds them together.

“Data model” refers to what a startup emphasizes in its product, i.e., which parts of reality matter most in how the product represents the world. It’s the core concepts or objects a startup prioritizes and builds around, the load-bearing assumptions at the heart of their strategy and worldview. It’s partially captured in the database architecture (hence the name), but it shapes everything from the UI/UX to the product marketing, pricing model, and GTM strategy.

This shows up differently depending on the layer. In the database, it’s which tables are central and how they relate. In the product, it’s which UI elements dominate and what actions are easiest. In pricing, it’s what you charge by. In GTM, it’s the workflow or pain point you lead with. But they all stem from the same choice about what deserves to be the center of gravity.

Every founder has a data model, whether they realize it or not. Either you choose it explicitly or it gets inherited from whatever you’re copying. Most founders never articulate it. By the time the architecture solidifies around these implicit choices, it’s nearly impossible to change.

And that’s generally fine, because most companies shouldn’t innovate on their data model. Customers have existing mental models and workflows built around incumbent tools. Fighting that is expensive and slow. But at the extreme ends of markets—where you’re toppling multi-billion-dollar incumbents or creating entirely new categories—a distinctive data model becomes a critical and non-obvious edge.

The biggest breakout companies of the last decade often trace their success to an early, non-obvious data model choice that seemed minor at the time but proved decisive. Consider:

Slack’s persistent channels vs 1:1/group messages: While Yammer and HipChat replicated email’s ephemeral group messages, Slack made persistent, searchable channels the atomic unit. This created organizational memory—every decision, discussion, and document lives forever in context. Incumbents couldn’t match this without rebuilding from scratch.

Toast’s menu-item-centric architecture vs generic POS SKUs: Toast makes menu items first-class objects with embedded restaurant logic—prep times, kitchen routing, and modifier hierarchies built in. Generic point-of-sale systems treat menu items as retail SKUs, requiring third-party integrations for kitchen workflows. Toast’s model enables native order routing and real-time kitchen management, plus natural extensions like ingredient-level inventory and prep-based labor scheduling—creating a locked-in ecosystem that becomes the restaurant’s operational backbone.

Notion’s blocks vs Google’s documents: Google Docs gives you documents; Notion gives you Lego blocks. Every piece of content can be rearranged, nested, or transformed into databases, kanban boards, or wikis. This modularity collapses entire tool categories into one system. Traditional tools can’t compete without abandoning their document-centric architecture.

Figma’s canvas vs files: Photoshop and Sketch are built on local files. Figma is built on a shared web canvas where everyone sees changes instantly. This eliminates version conflicts and “final_final_v2” chaos. Adobe couldn’t respond without deprecating their entire desktop-first ecosystem.

Rippling’s employee data model vs siloed tools: Rippling treats the employee record as the lynchpin connecting HR, IT, payroll, and finance. Not separate products sharing data, but one product with multiple views. Each new product module is automatically more powerful than standalone alternatives because it inherits full employee context. Competitors remain trapped in single categories or attempt inferior integrations.

Klaviyo’s order-centric data model vs email-centric tools: MailChimp optimizes for email campaigns. Klaviyo optimizes for customer lifetime value by making order data a first-class citizen alongside emails. This lets e-commerce brands segment by purchase behavior, not just email engagement. Generic email tools can’t match this without rebuilding for vertical-specific data.

ServiceNow’s connected services vs standalone tickets: Traditional help desks treat tickets like isolated emails. ServiceNow links every ticket to a service map—showing which system is down, who owns it, and what it affects downstream. This transforms IT from ticket-closing to problem-preventing, making ServiceNow irreplaceable once companies reorganize operations around this model.

Data models matter more than ever now

The importance of a differentiated data model is rising dramatically. AI is commoditizing code. Technical execution is table stakes rather than a competitive advantage. AI can generate code, but it can’t refactor the organizational reality customers have built around your architecture—the workflows, integrations, and institutional muscle memory that compound over time.

Meanwhile, many markets have become so crowded that single-product companies can’t survive. This is particularly true in vertical markets, where companies are expanding into adjacent software products, embedding payments and other financial products, and even competing with their customers’ labor and supply chains with AI and managed marketplaces.

This all points to the same conclusion: when code is cheap, competition is fierce, and vertical depth matters, your data model is the foundation of your moat. The companies that win won’t be those with the most or even the best features. AI will democratize those. The winners will be built on a data model that captures something true about their market, which in turn creates compounding advantages competitors can’t replicate.

Consider how this plays out. Rippling’s employee-centric model made it trivial to add payments, benefits, and spend management. Each new product inherits rich context, making it instantly more powerful than standalone alternatives. Toast’s menu-item architecture naturally extended to inventory, labor, and supplier management. The data model wasn’t just their first product decision. It was their platform destiny.

Designing the right data model

The path to a differentiated data model depends on your market. The more horizontal you go, the more your moat comes from technical and interface innovation. The more vertical you go, the more your moat comes from elevating the right domain objects with the right attributes.

Horizontal tools serve broad use cases where underlying concepts are already familiar. Leverage comes from changing how the product is built or experienced. Notion reimagined documents as composable blocks. Figma rebuilt the foundation entirely as a multiplayer web canvas.

Vertical tools serve specific industries with deep domain complexity. Leverage comes from what you choose to emphasize. Toast elevated menu items—not transactions—with prep times and kitchen routing as first-class data. Klaviyo promoted order data to equal status alongside email metrics.

A good place to start is by looking for model mismatches in existing successful products. Where are incumbent products forcing an incorrect or outdated model on their customers? Where are customers using workarounds—spreadsheets, low/no code tools, extensive in-product configuration—to make the product match how they think and work?

Despite all the emphasis on data models, start with the workflow, not the technical implementation. Don’t ask “what data do we need to store?” Ask “what’s the atomic unit of work in this domain?” For restaurants it’s the menu item. For design it’s the canvas. For employee operations it’s the human.

If you’ve already built a product, you can audit how powerful and correct your data model is. Open your database schema and see which table has the most foreign keys pointing to it. Is that the atomic unit your customers actually think in? List your product’s core actions. Do they all strengthen one central object, or are you building a feature buffet? What would break if you deleted your second-most important table? If the answer is “not much,” you probably have the wrong data model.
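The foreign-key audit above can be run mechanically. Here is a sketch against an in-memory SQLite database; the toy schema (menu items, orders, inventory, prep steps) is hypothetical, but the technique, counting inbound references via `PRAGMA foreign_key_list`, works on any SQLite schema.

```python
# Which table is the schema's center of gravity? Count how many
# foreign keys point at each table.
import sqlite3
from collections import Counter

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE menu_items (id INTEGER PRIMARY KEY);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     item_id INTEGER REFERENCES menu_items(id));
CREATE TABLE inventory (id INTEGER PRIMARY KEY,
                        item_id INTEGER REFERENCES menu_items(id));
CREATE TABLE prep_steps (id INTEGER PRIMARY KEY,
                         item_id INTEGER REFERENCES menu_items(id));
""")

def fk_gravity(conn):
    """Return a Counter of inbound foreign-key references per table."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    inbound = Counter()
    for t in tables:
        for fk in conn.execute(f"PRAGMA foreign_key_list({t})"):
            inbound[fk[2]] += 1  # column 2 is the referenced table
    return inbound

print(fk_gravity(conn).most_common(1))
```

If the table that wins this count isn’t the atomic unit your customers think in, that gap between schema gravity and customer mental model is the mismatch to fix.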

Test whether your data model creates compound advantages. When you add a new feature or product, does it automatically become more powerful because of data you’re already capturing? If your answer is “we’d need to build that feature from scratch with no inherited context,” you don’t have a compounding data model—you have a product suite. The right model creates natural expansion paths that feel obvious in retrospect but were invisible to competitors.

Conclusion

Your data model is your destiny. The paradox is that this choice happens when you know the least about your market, but that’s also why it’s so powerful when you get it right. Competitors who’ve already built on different foundations can’t simply copy your insight. They’d have to start over, and by then, you’ve compounded your advantage.

