The Rule of 40 Is Dead

The last several years (at least in Canada) have been characterized by a reversal from a “growth at all costs” philosophy to one that tried to strike a better balance between growth and profitability. This marked a return to a familiar safety blanket: The Rule of 40.

For those who don’t know, The Rule of 40 is a blended growth-and-profitability metric primarily used to screen for high-performing SaaS companies. Add the revenue growth rate % (say 25) to the EBITDA margin % (say 15): in this case, 25 + 15 = 40. If the result is 40 or greater, the SaaS company being evaluated would generally command a premium revenue multiple (often 7x or more); if not, a 2-3x multiple would be more likely. It became a shorthand for “quality.” Investors, boards, LPs - the entire private and public markets ecosystem - accepted this metric, and it has been used to guide decision making at all levels.
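The screen described above is simple enough to express in a few lines of Python; the function name and the 40 threshold are illustrative, not an industry-standard implementation:

```python
def rule_of_40(growth_pct: float, ebitda_margin_pct: float) -> float:
    """Blended growth-plus-profitability score used to screen SaaS companies."""
    return growth_pct + ebitda_margin_pct

# The example from the text: 25% growth + 15% EBITDA margin.
score = rule_of_40(25, 15)
print(score)        # 40
print(score >= 40)  # True -> would historically command a premium multiple
```

Note that the screen treats a point of growth and a point of margin as perfectly interchangeable, which is exactly the assumption this piece questions.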

In our view, this has always been lazy thinking. In 2026, not only is it lazy but it is also dangerous. AI has completely changed the landscape for SaaS investing, and we firmly believe a new set of criteria will need to emerge.  

Where the Friction Starts: The Category Shift 

The issue is not just the metric itself; it is the evolution of how technology businesses are built. The Rule of 40 assumes a world where software businesses share a common structure: low marginal cost, asset-light scaling, and sales- and product-driven growth. That world is fragmenting. The emergence of AI has accelerated the already-blurring line between hardware and software, and the Rule of 40 fails to capture critical nuances of capital intensity and IP.

An increasing number of frontier technology companies no longer behave like pure software. Take OpenAI and Anthropic. Yes, they sell monthly subscriptions, but foundationally they combine software with infrastructure-heavy asset build-out and the associated massive compute costs. They scale through compute rather than through code and “go-to-market” motions, and their capital intensity is a feature of their moat.

Further, software is no longer confined to purely digital workflows. It is increasingly embedded in the physical economy: factories, defence, energy grids, and construction workflows. 

Companies like Anduril and Hadrian are proving that billion-dollar moats are being built in physical systems. Because they require heavy capex to scale and real investment in IP, these firms look ugly by old software standards. Lumpy margins. Messy revenue cycles. Operational drag. But they unlock structural economic value that pure SaaS wrappers never touch. Applying software-native heuristics in these contexts doesn’t help in the slightest.  

The Moat Paradox: AI as the Disruptor  

The second issue is even more structural: AI is actively compressing traditional software moats. A measured balance of growth and profitability in a world of AI is a recipe for disaster.

Yes, traditional SaaS companies have an install base advantage (and, yes, they can certainly deploy AI tools to take out a developer here or there), but there is little question that Claude Code, Cursor, and open-source alternatives are rapidly commoditizing core capabilities. Feature velocity is no longer a moat. Category leaders will need to invest aggressively in new architecture, capabilities, and data to compete. This may require stepping back from growth and sacrificing profitability in the near term to better position for the medium term. In this context, a fixation on Rule of 40-style balanced optimization could be precisely the wrong direction for a company to take. It’s not that growth and profitability stop interacting, but the relationship will likely become vastly more complex than a simple addition problem. Metrics built around a stable tradeoff lose precision when that underlying structure becomes more fluid.

The Bar is Higher 

Based on what is technically possible today (let alone a year from now), AI has likely raised the bar on speed and efficiency.   

Elite AI-native startups are capable of growing faster and more efficiently than previously thought possible for traditional SaaS. As such, why not the Rule of 60 or the Rule of 100? More importantly, does a blended metric even make sense anymore? 

Further, how should we think about the traditional “first mover” advantage that the SaaS model rewarded? How much of an advantage is being first if there is no defensible IP and applications can be “vibe-coded” in a matter of days or weeks? If there’s no proprietary data, model advantage or embedded infrastructure, then what does being “first” even mean? 

Given how quickly AI is evolving, it is worth asking whether any fixed metric remains relevant at all, or if the very idea of stable evaluation frameworks is breaking down as the ground beneath them shifts faster than they can be recalibrated. The issue is not just that the benchmarks may move; it’s that the underlying assumptions those benchmarks rely on may no longer hold.

Where Do We Go from Here?

The Rule of 40 is not entirely obsolete for mature SaaS companies, but at a minimum it is dangerously insufficient for the next era of technology. We cannot and should not evaluate 2026 businesses with 2016 heuristics.  

Forget chasing a blended 40. To evaluate capital efficiency and system-level tech today, here are some considerations we’re using:   

  • System Throughput Value: How does the economic value created for the customer compare to the total capital deployed by the startup? 

  • Defensible IP: If the “code” is commoditized, what is truly proprietary: models, data, or hardware integration?

  • Structural Data Advantage: Is there a feedback loop that compounds over time, or is the dataset easily replicable?

  • Return on Invested Capital: How effectively does the business convert upfront compute or physical capex into long-term operating leverage? 

  • Physical & Infrastructure Payback Periods: How fast does deployed hardware or compute infrastructure pay for itself compared to traditional software CAC? 

  • Post-Capex Gross Margins: What is the margin trajectory once the foundational infrastructure is built? 

  • Revenue per Employee vs. Compute Spend: Is the company achieving hyper-productivity by scaling through software and silicon rather than human headcount, and do those non-human infrastructure costs still allow for durable gross margins? 
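To make a couple of these criteria concrete, here is a minimal Python sketch of the infrastructure payback and revenue-per-employee calculations described above; the function names and all figures are hypothetical illustrations, not benchmarks:

```python
def infra_payback_months(deployed_capex: float, monthly_gross_profit: float) -> float:
    """Months for deployed hardware or compute infrastructure to pay for itself."""
    return deployed_capex / monthly_gross_profit

def revenue_per_employee(annual_revenue: float, headcount: int) -> float:
    """Rough productivity measure: annual revenue divided by headcount."""
    return annual_revenue / headcount

# Hypothetical AI-native company: $12M of compute capex generating
# $1.5M/month in gross profit pays back in 8 months.
print(infra_payback_months(12_000_000, 1_500_000))  # 8.0

# Hypothetical: $30M ARR with 40 employees -> $750k revenue per employee.
print(revenue_per_employee(30_000_000, 40))         # 750000.0
```

Unlike a blended score, each of these ratios answers a distinct question about how capital becomes value, which is the point of abandoning the single-number screen.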

The market no longer prizes growth at all costs, nor does it prize simplistic arithmetic. It prizes durable systems, absolute productivity, and capital-efficient execution. Using lazy evaluation criteria for investment decision making will lead to painful capital misallocation.

Scott Kaplanis