
The Future of SaaS


Many think we are approaching the decline of software-as-a-service businesses as a whole. When Claude Cowork launched its 11 vertical-specific agents, the market reacted fast and hard: within the first trading week this February, the public market lost $1.2 trillion in value. So far, the market has been telling a simple tale: long the model providers, short the SaaS businesses. I don't think that is the right way to frame things. The real divide lies between probabilistic and deterministic systems, and opportunity surely presents itself in the public markets.

The Panic Narrative

Claude Cowork's plugins were read as a direct threat to application software, which is why SaaS stocks collapsed. Why? If AI agents can now execute legal, sales, marketing, finance, and other workflows, then the software layers built to support those workflows look redundant and relatively expensive. The consensus thus became: AI agents replace human seats => SaaS revenue is generally seat-based => SaaS revenue should compress => capital should move from application to model companies. This logic is coherent, but it is incomplete. Hiring freezes explicitly tied to automation reinforce the seat-destruction thesis, but treating this as a uniform extinction is exactly where I think opportunity can be found.

The category assumptions behind this "SaaS dies" thesis are that (i) all SaaS is primarily probabilistic workflow, (ii) all SaaS moats are feature-based, (iii) all switching costs collapse symmetrically, and (iv) the model layer retains durable pricing power. I don't think any of these assumptions hold universally. Some SaaS products are thin wrappers around structured, repetitive workflows. Others are deeply embedded systems of record with regulatory, audit, and integration entrenchment. Some revenue is tied to human seats, whereas other revenue is tied to institutional data.

Notably, the model layer is trending toward commoditization. API pricing is collapsing at unprecedented rates, market-share leadership among frontier models rotates every ~12 months, and model companies' gross margins are structurally lower than mature SaaS gross margins (not to mention insane capital intensity). Something I read recently framed it well: the market is selling 80% gross margin businesses to buy 40% gross margin businesses with far more volatile competitive positioning. In effect, the market is betting on the value stack inverting permanently.

This narrative ignores a critical asymmetry: AI does not eliminate the need for systems of record, it operates through them. An AI agent producing a contract still needs a definitive source of truth for the underlying data, and an agent resolving a support ticket still needs access to customer history, billing status, and compliance flags. The extinction story assumes that enterprises will rip out deeply integrated systems and replace them wholesale with probabilistic agents. Instead, it is more likely that (i) seat-based tools face compression, (ii) systems-of-record platforms adapt and embed AI, and (iii) AI-native startups build on top of model APIs (not enterprise infra).

The Model Layer's Fragility

Mature SaaS companies have 75–85% gross margins, 105%+ NRR, strong FCF conversion (80%+), and, importantly, an extremely low marginal cost of serving the nth customer. Foundation model companies look very different: heavy and recurring R&D spend, significant inference costs per interaction (albeit a declining rate), pricing under sustained competitive pressure, and large capital requirements for each new model generation and training run. For model companies, even as compute margins improve, total operating margins are heavily compressed by training, talent intensity, and ongoing infra commitments. Model providers, unlike SaaS companies, do not reflect a zero-marginal-cost software model; theirs is closer to infrastructure economics – high fixed costs, competitive pricing, and shaky leadership.

API pricing has seen orders-of-magnitude cost reductions in under three years. Open-source alternatives also deliver competitive performance at a fraction of closed-source costs. Admittedly, some of these open-source models have followed questionable methods to reach certain benchmarks: OpenAI recently called out DeepSeek for distilling US models, i.e., training its models on leading providers' outputs. But that is a tangent. Beyond pricing, switching costs for an end user across Claude Opus vs. GPT 5.2, for instance, are extremely low – all it requires is redirecting an API call.
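
To make that concrete, here is a minimal Python sketch of the switching-cost point. The endpoints, model IDs, and payload shape below are illustrative placeholders rather than exact vendor APIs; the idea is simply that everything provider-specific fits in one small config entry.

```python
# Illustrative only: endpoints, model IDs, and payload shapes are placeholders,
# not exact vendor APIs.

PROVIDERS = {
    "anthropic": {"url": "https://api.anthropic.example/v1/messages", "model": "claude-opus"},
    "openai": {"url": "https://api.openai.example/v1/chat/completions", "model": "gpt-5.2"},
}

def build_request(prompt: str, provider: str) -> dict:
    """Everything that differs between providers lives in one config entry."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["url"],
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Redirecting the call is a one-word change at the call site.
request_a = build_request("Summarize this contract.", provider="anthropic")
request_b = build_request("Summarize this contract.", provider="openai")
```

That is more or less the entire migration for a simple workload, which is why end-user switching costs at the model layer stay so low.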

The early bull thesis for these model providers was that capital intensity was a huge barrier to entry. Open-source acceleration certainly weakens that argument, meaning the moat for providers shifts from training budgets toward distribution, data, and integration. With this in mind, it is very logical that we see Anthropic moving into application-layer plugins; someone high up in the company seems to agree that selling model access alone may not sustain durable margins. As model quality and performance commoditize, value accrues to the application layer.

History validates this way of thinking. When cloud computing became ubiquitous and cheap, SaaS companies captured the margin. Likewise, when databases standardized, application companies stood out on workflow and domain depth. Value moved up!

Deterministic vs Probabilistic

LLMs are probabilistic engines: they take input tokens and predict the next token. Typically, they are reliable for summarizing, recognizing patterns, classifying, ideating, exploring, and surface-level analysis. Why? Because these are non-binary tasks, where being directionally correct is sufficient. Thus, horizontal SaaS (general-purpose tools useful across industries) can be replaced by AI.
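
A toy sketch of what "probabilistic" means in practice (the token distribution below is invented for illustration): the same context can yield different, mostly plausible continuations, which is fine for summarizing or ideating but not for ledger entries.

```python
import random

# Toy illustration of a probabilistic engine: the model samples the next token
# from a probability distribution, so repeated runs on the same context can
# diverge while staying directionally plausible. The distribution is made up.

NEXT_TOKEN_DIST = {"growth": 0.45, "revenue": 0.30, "churn": 0.15, "llamas": 0.10}

def sample_next_token(dist: dict[str, float]) -> str:
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# "Q3 results were driven by ..." -> usually a sensible continuation, occasionally not.
print([sample_next_token(NEXT_TOKEN_DIST) for _ in range(5)])
```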

Enterprise workflows, however, need deterministic systems. The edge cases where an LLM falls short of "good enough" are not tolerable here: invoicing systems cannot be mostly correct, compliance cannot be 95% executed, and so on. Systems of record will remain to enforce determinism – AI agents must operate within their guardrails. And so, an adept way of thinking about things is as follows: probabilistic systems replace probabilistic features, while deterministic systems absorb probabilistic enhancements. The closer a SaaS product is to probabilistic output, the more vulnerable it becomes. The closer it is to deterministic infrastructure, the more AI becomes complementary rather than competitive.
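
Here is a sketch of what "agents operate within guardrails" can look like (the invoice ledger, rules, and function names are invented for illustration, not any vendor's API): the probabilistic layer proposes, and the deterministic system of record validates and commits.

```python
from dataclasses import dataclass

@dataclass
class InvoiceAdjustment:
    invoice_id: str
    amount: float
    currency: str

# Deterministic guardrails: hard rules that a probabilistic agent's proposal
# must satisfy exactly before anything is written to the system of record.
APPROVED_CURRENCIES = {"USD", "EUR", "GBP"}
OPEN_INVOICES = {"INV-1001": 1200.00, "INV-1002": 540.50}  # illustrative ledger

def validate(proposal: InvoiceAdjustment) -> list[str]:
    errors = []
    if proposal.invoice_id not in OPEN_INVOICES:
        errors.append("unknown invoice id")
    if proposal.currency not in APPROVED_CURRENCIES:
        errors.append("unsupported currency")
    if not 0 < proposal.amount <= OPEN_INVOICES.get(proposal.invoice_id, 0):
        errors.append("amount exceeds invoice balance")
    return errors

def commit(proposal: InvoiceAdjustment) -> str:
    # The agent can draft; only the deterministic layer can commit.
    errors = validate(proposal)
    if errors:
        return "rejected: " + ", ".join(errors)
    OPEN_INVOICES[proposal.invoice_id] -= proposal.amount
    return "committed"

# A "mostly correct" agent proposal still gets a binary answer here.
print(commit(InvoiceAdjustment("INV-1001", 300.00, "USD")))  # committed
print(commit(InvoiceAdjustment("INV-9999", 300.00, "USD")))  # rejected
```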

Reviewing Which SaaS Remains

This part is largely inspired by an article I read, published by a few London equity analysts. There are four zones to put SaaS companies into: the dead zone, the compression zone, the adaptation zone, and the fortress zone.

The first, the dead zone, includes feature-layer SaaS companies with low workflow complexity, minimal regulatory entrenchment, etc. AI agents can natively automate much of what these businesses do. Here, I'm thinking of Asana (lightweight project management), Five9 (contact center automation), or Monday.com (workflow management).

The second, the compression zone, includes companies with low workflow complexity but some type of data moat. These companies own important data but also carry a meaningful layer of human-interaction seats. Some examples are HubSpot (valuable CRM data, but no need for all the marketing seats) or Salesforce (valuable as a system of record; its analytics add-ons are not). If they play their cards right, these companies will likely see a changed revenue mix but continue to survive as platforms.

The third, the adaptation zone, includes companies with high workflow complexity but a minimal data moat. Bluntly, these companies should be aggressively embedding AI faster than startups can replicate their workflows and win customers [over long sales cycles]. Examples are Adobe (deep workflows, but its creative tools face competition) and Intuit (similarly deep workflows, but AI-native fintech apps are emerging). Startups have the opportunity to compete against these types of companies, which lack institutionalized data moats.

The fourth, the fortress zone, includes companies with high workflow complexity and a meaningful data moat. Often, these are companies operating in regulated, compliance-heavy workflows, where AI is fully complementary. Veeva, Procore, and Snowflake are some examples: each provides, in some way, a deterministic system of record, has regulatory lock-in, and works with data that cannot be recreated cheaply.

I talked about these in increasing order of confidence.

Where Value Accrues

Anish Acharya, a GP at a16z, had a great interview with Harry Stebbings. His framing: incumbents tend to win in existing categories, while startups tend to win categories created by technological shifts. That pattern is historically consistent. If you can't tell, I like Howard Marks' explanation of history not necessarily repeating but rhyming. We certainly have some cases in point: Microsoft did not lose office productivity to the internet, and Google did not lose search to mobile. I would put money on AI following the same structure.

This means incumbents will improve existing workflows amidst this AI race – e.g., ServiceNow may build better workflow automation. These companies own distribution, customer trust, integrations, and procurement relationships. They just need to embed intelligence, not rebuild the stack. As Anish put it, you don't point the "innovation bazooka" at rebuilding payroll; you extend advantage where it matters. Startups win in domains that did not exist pre-AI: coding orchestration tools (Cursor), AI companions (Shizuku AI…very recent), AI-native creative workflows (RunwayML, ElevenLabs, etc.), AI-first legal drafting platforms (Harvey), and so on. Bluntly, these are new surfaces.

Anish also had an insightful comment on our multi-model ecosystem. Sure, foundation models are converging in baseline capability while differentiating at the margins, but that fragmentation increases the value of orchestration. Enterprises do not want to manage five model endpoints across departments; they want a platform layer that abstracts that complexity. That abstraction layer is far more likely to sit inside existing enterprise software than inside standalone model companies.
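
A hedged sketch of that orchestration layer (the routing policy, model names, and task categories are all assumptions for illustration): every department calls one interface, and the platform decides which model runs underneath and records the governance trail.

```python
# Illustrative orchestration layer: routing policy, model names, and task
# categories are invented, not a real product's API.

ROUTING_POLICY = {
    "summarize": "cheap-general-model",
    "draft_contract": "high-accuracy-model",
    "classify_ticket": "cheap-general-model",
}
DEFAULT_MODEL = "cheap-general-model"

def route(task_type: str, payload: str) -> dict:
    """The platform picks the model; departments never touch raw endpoints."""
    model = ROUTING_POLICY.get(task_type, DEFAULT_MODEL)
    return {
        "model": model,
        "input": payload,
        # Governance hook: every routed call carries an auditable policy record.
        "audit": {"task_type": task_type, "policy_version": "2026-02"},
    }

# One call site for every team, regardless of which model sits underneath.
print(route("draft_contract", "Renewal terms for ACME Corp")["model"])
```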

There is also a second-order dynamic that the market underestimates: ambition expands as intelligence becomes cheaper. Enterprises won't just eliminate seats as I may have unintentionally implied earlier in the article; rather, they can now attempt workflows that were previously uneconomical. AI thus has the capacity to increase the number of software-mediated interactions. I really do appreciate the perspectives venture/growth investors share on public markets.

A quick counterpoint to the "seat destruction = revenue destruction" narrative. Historically, power users did not pay meaningfully more than average users: the best Spotify subscriber might generate 5x the usage but still pays roughly the same subscription price. In AI-native products, that relationship is breaking. We are now seeing $200–300 per month subscriptions for ChatGPT Pro, Gemini Ultra, and Grok Heavy tiers. Power users are paying an order of magnitude more than casual users because the marginal value they extract from the product is dramatically higher. Just some food for thought.

To be concise, value accrues in layers. At the base, the model layer becomes an input. Above it sits orchestration, which can embed model-selection logic and guardrails. Above that sits the deterministic system of record, where AI lowers costs, increases workflow velocity, and deepens entrenchment. At the edge are the new surfaces – coding orchestration tools, AI companions, etc. For me, a standing question is whether AI will flow through existing systems of record or bypass them entirely. Regardless, value accrues where distribution, data, integration, and governance intersect with cheap intelligence.

Jevons' Paradox and Volume Expansion

Inference pricing is decreasing exponentially fast. Model quality improves while the cost per unit of intelligence falls. The instinctive conclusion is that if AI gets cheaper, software gets replaced.

That logic ignores a 19th-century economic principle, Jevons' Paradox: as coal-powered engines became more efficient, coal consumption increased rather than decreased. A cheaper input expanded usage – efficiency unlocked new applications. As intelligence becomes cheaper, more workflows become economically viable, more experimentation occurs inside enterprises, more processes become software-mediated, and more agents query and update enterprise systems.
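
To put rough numbers on that dynamic (every figure below is invented purely for illustration), a 20x drop in unit price can still grow total spend if cheap intelligence makes enough new workflows economical:

```python
# Illustrative arithmetic only: all numbers are invented.
old_price, new_price = 10.00, 0.50      # cost per automated workflow run
old_volume, new_volume = 1_000, 60_000  # runs that are economical at each price

print(old_price * old_volume)  # 10000.0 total spend before the price drop
print(new_price * new_volume)  # 30000.0 total spend after: unit price down 20x, spend up 3x
```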

The critical question is routing. If AI-driven volume flows through systems of record (CRM platforms, workflow engines, compliance systems, data warehouses) then incumbents benefit. Cheap intelligence lowers their cost structure and increases the frequency of interaction with their platforms. However, if AI-driven volume flows around them (replacing systems of record rather than operating inside them) then the disruption narrative accelerates.

So far, enterprise behavior suggests augmentation over complete replacement. Agents require guardrails, and guardrails require deterministic systems. Deterministic systems are entrenched.

Separation

The probable outcome is layered value capture: the model layer commoditizes, new AI-native surfaces grow, deterministic platforms absorb intelligence and deepen their moats, and feature-layer SaaS compresses. AI does not kill software – it stratifies it! To me, this is a clear opportunity to invest in many of the adaptation- and fortress-zone tech names that sold off almost indiscriminately.
