Happy Sunday and welcome to Investing in AI. Be sure to follow our AI in NYC podcast if you want the latest take on AI from the applied AI capital of the world. And if you want a weekly bottom-up AI tech analysis of individual stocks, upgrade to our paid version.
When I started Neurometric a little over a year ago, I was betting on something I saw as a very early trend: that model usage would soon fragment across many providers. The frontier labs have since been moving up the stack, which suggests they have seen it too. AI-mature companies (read this if you want to understand “AI maturity”) were already using multiple models for different tasks in late 2024. I looked around and knew we were moving to a world where systems mattered more than models.
For the past three years, the AI narrative has been dominated by model releases. GPT-4, Claude, Gemini, Llama — each new launch triggers a wave of benchmarks, hot takes, and breathless commentary about which model is “winning.” The implicit assumption has been that the best model wins the market. But our research at Neurometric showed that models vary significantly on a per-task basis.
The assumption that you need to use the benchmark-winning model continues to break down. We are entering a post-model world — one where the model represents a shrinking percentage of the total value created by AI, and the system built around it is what actually matters. The model is not irrelevant. But it is no longer the moat.
Models Are Commoditizing
Three forces are pushing foundation models toward commodity status faster than most people realize.
First, performance at the frontier is converging. The gap between the top foundation models — OpenAI, Anthropic, Google, Meta — has narrowed dramatically. On most practical enterprise tasks, the differences between them are marginal. When your customers can’t tell the difference between model A and model B in production, the model is not your differentiator.
Second, open-source is closing the gap from below. Llama, Mistral, Qwen, and a growing roster of open-weight models now deliver 90+ percent of frontier performance at a fraction of the cost. The “good enough” threshold keeps rising. For a large class of applications, an open model behind a well-designed system will outperform a frontier model dropped into a poorly designed one.
Third, switching costs are collapsing. Abstraction layers, standardized APIs, and multi-model routing architectures mean companies can swap models with minimal friction. When you can change your foundation model provider in an afternoon, the model is a component — not a competitive advantage.
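To make that concrete, here is a minimal sketch of what a routing abstraction looks like. The provider functions and task names are hypothetical stand-ins, not any particular vendor's SDK, but the shape is why swapping a model is now a config change rather than a rewrite.

```python
# Minimal sketch of a model-agnostic routing layer. call_provider_a / call_provider_b
# are illustrative stubs standing in for real API calls; the application codes
# against one interface and the underlying model becomes a swappable component.
from typing import Callable, Dict

def call_provider_a(prompt: str) -> str:
    return f"[provider-a response to: {prompt}]"   # stand-in for a real API call

def call_provider_b(prompt: str) -> str:
    return f"[provider-b response to: {prompt}]"   # stand-in for a real API call

class ModelRouter:
    """Routes each prompt to whichever provider is configured for its task type."""
    def __init__(self, routes: Dict[str, Callable[[str], str]], default: Callable[[str], str]):
        self.routes = routes
        self.default = default

    def complete(self, prompt: str, task: str = "general") -> str:
        handler = self.routes.get(task, self.default)
        return handler(prompt)

# Swapping providers is a one-line configuration change, not an application rewrite.
router = ModelRouter(
    routes={"summarize": call_provider_a, "extract": call_provider_b},
    default=call_provider_a,
)
print(router.complete("Summarize this filing.", task="summarize"))
```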
None of this means model research is slowing down or that frontier labs aren’t doing extraordinary work. They are. But the dynamics of commoditization are familiar to anyone who has watched the technology industry long enough. The raw capability layer always trends toward parity. Value migrates upward.
Where The Value Is Migrating
If the model is shrinking as a share of value, the obvious question is: where is that value going? The answer is into the system.
Consider the layers that surround a model in any production AI deployment. Data and context pipelines — RAG architectures, memory systems, knowledge graphs, and proprietary data integration — determine the quality of what goes into the model. In most real-world applications, the quality of context retrieval matters more than which model processes it.
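A toy example makes the point. The retriever below scores documents by naive word overlap, a deliberate simplification of real embedding-based search, and everything in it is illustrative rather than a description of any specific stack. It shows why the context the model sees can matter more than which model sees it.

```python
# Toy illustration of a context pipeline: rank documents, then assemble the
# prompt the model actually receives. Word-overlap scoring is a stand-in for
# real embedding retrieval; the documents are invented for the example.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the context window the model will see."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Q3 gross margin expanded to 61 percent on inference cost reductions.",
    "The company renewed its office lease in Austin.",
    "Inference costs fell 40 percent after moving to a smaller fine-tuned model.",
]
query = "What happened to inference costs?"
print(build_prompt(query, retrieve(query, docs)))
```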
Orchestration and agentic frameworks handle multi-step reasoning, tool use, planning, and coordination between multiple AI agents. The model is one node in a larger workflow, and the intelligence of that workflow is where the real leverage lives.
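Here is a rough sketch, with every step stubbed out, of what that looks like in practice: the model call is one function among several, and the workflow logic around it is where the engineering effort goes.

```python
# Sketch of a workflow where the model is one node among several:
# plan -> retrieve -> draft -> validate. All functions are stubs standing in
# for real components; the structure, not the content, is the point.
def plan(task: str) -> list[str]:
    return ["retrieve", "draft", "validate"]

def retrieve_context(task: str) -> str:
    return "relevant context for: " + task

def call_model(prompt: str) -> str:
    return "draft answer based on (" + prompt + ")"

def validate(answer: str) -> bool:
    return "context" in answer  # placeholder check; real systems run evals and guardrails here

def run_workflow(task: str) -> str:
    steps = plan(task)
    context = retrieve_context(task) if "retrieve" in steps else ""
    answer = call_model(context) if "draft" in steps else ""
    if "validate" in steps and not validate(answer):
        answer = call_model(context + " (retry with stricter instructions)")
    return answer

print(run_workflow("summarize the earnings call"))
```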
Evaluation and guardrails provide reliability, safety, compliance, and domain-specific accuracy. Production AI requires layers of trust infrastructure that the model alone cannot provide. Companies that solve evaluation at the system level build compounding advantages that are hard to replicate.
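One way to picture "evaluation at the system level" is a suite of checks run over every output before it ships. The specific checks below are invented for illustration; real systems run far larger suites and gate or re-route on failure.

```python
# Minimal sketch of an output-evaluation gate. The checks are illustrative
# assumptions, not any particular product's API; a production gate would block,
# retry, or escalate when a check fails.
from typing import Callable

def no_pii(text: str) -> bool:
    return "ssn" not in text.lower()

def cites_source(text: str) -> bool:
    return "[source" in text.lower()

def within_length(text: str) -> bool:
    return len(text.split()) <= 200

CHECKS: list[tuple[str, Callable[[str], bool]]] = [
    ("no_pii", no_pii),
    ("cites_source", cites_source),
    ("within_length", within_length),
]

def evaluate(output: str) -> dict[str, bool]:
    """Return a pass/fail map for one model output."""
    return {name: check(output) for name, check in CHECKS}

print(evaluate("Revenue grew 12 percent year over year. [source: 10-Q]"))
```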
Human-in-the-loop design governs the interface between AI and human decision-making. UX, feedback loops, and workflow integration are where adoption actually happens, and adoption is where value gets captured.
Finally, infrastructure and cost optimization — inference routing, caching, fine-tuning pipelines, edge deployment — are systems-level problems that determine whether AI is economically viable at scale.
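To illustrate two of those levers, the sketch below caches repeated prompts and routes short, simple requests to a cheaper model. The per-call prices and the model stubs are made up; the economics, not the numbers, are the point.

```python
# Sketch of two cost levers: response caching and cost-aware routing.
# Prices and model stubs are assumptions for illustration only.
from functools import lru_cache

COST = {"cheap": 0.0005, "frontier": 0.01}  # assumed dollars per call

def cheap_model(prompt: str) -> str:
    return "cheap answer"

def frontier_model(prompt: str) -> str:
    return "frontier answer"

@lru_cache(maxsize=1024)
def complete(prompt: str) -> tuple[str, float]:
    """Serve repeats from cache; send only long prompts to the expensive model."""
    if len(prompt.split()) < 30:                 # crude complexity heuristic
        return cheap_model(prompt), COST["cheap"]
    return frontier_model(prompt), COST["frontier"]

answer, cost = complete("Summarize the outlook section.")
print(answer, cost)                              # first call pays the model cost
print(complete("Summarize the outlook section."))  # repeat is served from the cache
```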
The useful analogy is this: the model is the engine, but customers buy the car. And increasingly, they’re buying access to the transportation network. Each layer up the stack captures more of the value and is harder to displace.
The AI Systems Era
The companies that win the next phase of AI will be systems companies, not model companies. This is not a prediction — it is a pattern.
We have seen this before. The database mattered enormously, but the ERP system built on it captured the enterprise value. The microprocessor was a breakthrough, but the device and its ecosystem won the consumer market. The cloud was transformational infrastructure, but the SaaS applications running on it built the durable businesses. Raw capability layers commoditize. System layers compound.
For investors, this has direct implications. Evaluating AI companies primarily on model access or model performance is increasingly a mistake. The questions that matter now are about proprietary data advantages, depth of workflow integration, switching costs at the system level, and whether the company has feedback loops that make its system smarter over time. The moat is not the model. The moat is the system that makes the model useful.
For builders, the strategic imperative is clear: move up the stack. Dependence on a single model provider is a fragile position. System-level differentiation — in data, orchestration, evaluation, and user experience — is where defensibility lives.
What Comes Next
Models will continue to improve, and they will continue to matter. But they are becoming the foundation, not the differentiator. The post-model world does not mean models are irrelevant. It means the locus of value creation has shifted.
The winners in this next era will be those who build intelligent systems that happen to use great models — not those who build great models and hope a system emerges around them. The model era gave us the raw capability. The systems era is where that capability becomes real-world value.
Thanks for reading.