Silicon Valley finally admits SaaS is commoditising. The real money is in AI that sells finished outcomes. But you can't run outcome engines at enterprise scale if every prompt leaks to the cloud. Here's why private on-prem AI is now non-negotiable for UK enterprises.
On April 16, 2026, Portal AI CTO Hamudi Naanaa dropped a thread that crystallised what the entire AI industry has been avoiding saying out loud. Reacting to Sequoia’s “Services: The New Software” thesis, Naanaa offered not just another opinion piece but a reality check for anyone still betting on cloud-based AI for enterprise value.
Here’s what Silicon Valley is finally admitting: SaaS has commoditised software. The real money is in AI that sells finished outcomes—closed books, won grants, compliant filings, shipped stores—at software margins. But here’s the part nobody in the hype cycle wants to say: you can’t run outcome engines at enterprise scale if every prompt and artifact is leaking to OpenAI, Anthropic, or whoever owns the model this week.
The number that matters: 1,300+ AI agents created $32M in verifiable economic value in 21 days — measured, not marketed. Not promises. Not projections. Actual, auditable, invoice-ready value.
Why this hits hard for UK businesses: Law firms, financial services, charities, and anyone under SRA, FCA, or GDPR regulation can’t afford the next cloud privacy scandal just to chase productivity. The trade-off between capability and confidentiality is no longer theoretical. It’s career-limiting.
For those who missed it, the March 2026 Sequoia essay argued that for every £1 spent on software, £6 still goes to services. SaaS ate the software pound; AI now eats the services pound — but only if you sell work delivered, not another copilot.
Let me be blunt: copilots get crushed by every new model release. They’re commodities. But outcome engines (the AI accounting firm, the AI law firm, the AI compliance office) get better margins as models improve. This isn’t incremental change. This is a fundamental shift in how value is created and captured.
Here’s the provocative truth that keeps CTOs awake: most UK enterprises are still buying “AI assistants” that force them to choose between capability and confidentiality. That trade-off is about to become career-limiting.
The question isn’t if you’ll adopt AI. The question is: will you adopt an AI that exposes your data, or one that protects it while delivering measurable value?
Let me quote the tweet that started this conversation:
“you can’t build an outcome engine if you don’t know how to measure an outcome.”
This is the industry’s dirty secret. Everyone talks outcomes. Almost no one measures them properly.
The attribution problem: How do you know which agent actually delivered that £500,000 in cost savings?
The counterfactual problem: What would have happened without the AI?
The data sovereignty problem: Every trace ends up in someone else’s data centre.
Real enterprise buyers don’t want token counts or fuzzy ROI projections. They want pounds saved or revenue generated — and they want proof. Humans won’t adopt agents that feel like expensive chatbots.
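To make “pounds saved, with proof” concrete, here is a minimal sketch of outcome accounting that addresses the attribution and counterfactual problems above. Everything in it — the `ValueEpisode` schema, the field names, the figures — is our own illustration, not taken from the Outcome Primitives paper: delivered value is the counterfactual hours saved priced at a loaded rate, and an episode with no auditable artifact counts for nothing.

```python
from dataclasses import dataclass

@dataclass
class ValueEpisode:
    """One unit of claimed AI-delivered value (illustrative schema only)."""
    agent_id: str
    baseline_hours: float   # counterfactual: hours the task takes without AI
    actual_hours: float     # hours spent with the agent in the loop
    hourly_rate_gbp: float  # fully loaded cost of the human doing the work
    artifact_ref: str       # link to the filing/report/invoice that proves delivery

def delivered_value_gbp(ep: ValueEpisode) -> float:
    """Pounds saved, counted only when there is an artifact to audit."""
    if not ep.artifact_ref:
        return 0.0  # no evidence, no claim: unproven work is worth nothing to a buyer
    # Clamp at zero so an agent that slowed the work down never books positive value.
    return max(0.0, (ep.baseline_hours - ep.actual_hours) * ep.hourly_rate_gbp)

episode = ValueEpisode("grants-agent-07", baseline_hours=12.0,
                       actual_hours=1.5, hourly_rate_gbp=85.0,
                       artifact_ref="dms://filings/2026/grant-4411.pdf")
print(delivered_value_gbp(episode))  # prints 892.5
```

The design choice worth copying is the evidence gate: value is only booked against an artifact a regulator or auditor could inspect, which is exactly what token counts and ROI projections cannot offer.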
This isn’t a technical problem. It’s a governance problem. Cloud AI makes governance impossible because you can’t control where data flows, how long it persists, or who has access to it.
Portal AI and Everwhy AI published a paper in March 2026 that finally gives the industry a scoring system: “Outcome Primitives: A Framework for Measuring AGI Value in the World.”
This isn’t academic. It’s practical. The framework scores value along three dimensions that actually work, and they aren’t feature lists. They’re business outcomes.
This is how you price AI. Not by tokens. By hours saved.
Why this matters more than GPQA or ARC-AGI benchmarks: It measures what actually hits the balance sheet, not lab scores.
Let me give you the methodology so you know this isn’t marketing:
Persistent private platform, 17,921 value episodes auto-classified from artifacts (no raw chats, no self-reported metrics).
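As a rough illustration of what “auto-classified from artifacts” can look like in practice, the sketch below maps artifact types to primitive categories with simple rules. “Asset Production” and “Compliance Assurance” are category names used later in this post; the rule table and classifier function are a hypothetical stand-in of our own, not a reproduction of the paper’s methodology.

```python
from collections import Counter

# Illustrative mapping from artifact type to Outcome Primitive category.
# The keyword rules below are our own stand-in, not the paper's classifier.
PRIMITIVE_RULES = {
    "Asset Production":     ("report", "store", "filing_draft", "codebase"),
    "Compliance Assurance": ("audit_log", "kyc_check", "regulatory_filing"),
}

def classify(artifact_type: str) -> str:
    """Assign an artifact to a primitive, or flag it for human review."""
    for primitive, types in PRIMITIVE_RULES.items():
        if artifact_type in types:
            return primitive
    return "Unclassified"  # flagged for review rather than silently counted as value

artifacts = ["report", "kyc_check", "audit_log", "meme"]
tally = Counter(classify(a) for a in artifacts)
print(dict(tally))
```

Note what the fallback does: anything the rules cannot place is held out for review instead of inflating the headline number. Classifying from artifacts rather than raw chats is also what keeps this auditable — and keeps conversation content out of the metrics pipeline.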
Headline stats: 1,300+ agents, 17,921 auto-classified value episodes, $32M in verifiable economic value, 21 days.
The examples that matter double as an enterprise reality check: high-magnitude Asset Production and Compliance Assurance primitives are exactly where regulated UK firms live — and exactly where cloud AI becomes a liability.
You can’t run a compliance engine on OpenAI’s infrastructure. You can’t build a trading bot on a public cloud and expect regulatory approval. This isn’t paranoia. This is compliance.
Stop shipping copilots. Instrument every workflow with Outcome Primitives from day one and price on delivered value, not seats or tokens. If you can’t measure the outcome, you can’t sell it.
The next $1T winners won’t look like SaaS dashboards. They’ll look like vertical services firms running on invisible, private AI back-ends. Accounting, legal, and consulting firms that don’t become AI firms will simply disappear.
Most “AI productivity” noise is still vapourware. Real economic impact clusters in verifiable, high-magnitude primitives — and only organisations that keep the data inside their own walls can actually capture and prove it without triggering a regulator.
This isn’t anti-cloud. This is pro-compliance. There’s a difference.
Cloud outcome engines create an impossible bind: proving delivered value demands full telemetry on every workflow, while compliance demands that regulated data never leave your walls. On someone else’s infrastructure, you cannot satisfy both.
The hardware and sovereignty realities echo what we’ve been saying about the Memory Wall and capability sovereignty: bandwidth, cost, and compliance all point the same direction — private infrastructure.
Our on-premises AI platform lets UK enterprises run exactly these outcome engines inside their own walls.
This isn’t a feature. This is the foundation for everything that follows.
The tweet got it right: choose software margins (racing the labs to zero) or services margins (replacing the £6 trillion inefficient economy).
But for any UK business with regulated data, the only viable path is private AI.
Outcome Primitives finally give the industry a scoreboard. Download the paper, audit your current tools against the framework, then ask yourself: can you actually run this securely?
The AI revolution isn’t about smarter software in the cloud. It’s about replacing services with code — securely, measurably, and inside your own infrastructure.
If you’re a UK law firm, financial institution, or regulated organisation ready to move from copilots to outcome engines without compliance nightmares, get in touch.
No pitch. No pressure. Just a conversation about private AI that actually stays inside your walls.
This post is published for JD Fortress AI’s blog. For the full “Outcome Primitives” paper and technical appendix, contact us directly.
© 2026 JD Fortress AI. All rights reserved. This content may not be reproduced without permission.
If you're thinking about secure AI for your business, we'd love to have a conversation.
Get in Touch →