Enterprise AI is already a massive market, estimated at roughly $40 billion and growing over 3x year-over-year.¹ Yet despite the explosion of AI tools, foundation models, and agentic platforms, the reality of what enterprises are actually doing with AI is surprisingly narrow.
Over the past few months, a wave of reports has attempted to capture the state of enterprise AI adoption at the end of 2025, from Menlo Ventures’ bottom-up spend analysis to OpenRouter’s study of nearly 100 trillion tokens. Each offers a different slice of the market, and at times their findings can seem contradictory. We wanted to step back and synthesize these reports into a coherent narrative about where enterprise AI stands today and, more importantly, where it’s headed.
The short answer? Enterprises are spending tens of billions on AI, but they’re primarily doing two things: coding and copilots. And of those two, only one has proven ROI.
What Enterprises Are Actually Doing with AI
Coding has emerged as the clear first ‘killer use case’ for generative AI. It now accounts for roughly 50% of all tokens processed through OpenRouter, and an estimated 85%+ of enterprise tokens (!!).² Menlo Ventures pegs coding at about 60% of departmental AI spend, growing at 8x year-over-year.³ This makes sense — programming is inherently a generative activity, and generative AI is a natural fit. Tools like Claude Code, GitHub Copilot, and Cursor are delivering measurable productivity gains, and we’re still in an early growth phase as these capabilities continue to improve rapidly.

Copilots are the second major category of spend — think ChatGPT Enterprise, Microsoft Copilot, and Claude for Work. Enterprises are paying for these tools at scale, but skepticism about their ROI remains. Usage has often fallen short of expectations, and the jury is still out on whether employees are finding meaningful value in them.
Beyond coding and copilots, adoption thins out quickly. Some large SaaS companies, like Salesforce, ServiceNow, and HubSpot, are embedding AI into their products with varying degrees of success, but most enterprises still aren’t deploying AI agents at scale. OpenRouter’s data shows that “tool invocation” (a proxy for agentic behavior where the AI calls other tools) remains flat and relatively low. Spend on agent platforms is still modest. As one Microsoft executive put it, AI agents should be viewed as an “R&D budget”⁴— not exactly a ringing endorsement for near-term enterprise adoption.
Proprietary vs. Open Source: Who’s Winning the Enterprise Stack
When it comes to the technology stack, proprietary models currently dominate, capturing roughly 70% of token share and an estimated 88% of enterprise API spend, according to OpenRouter’s study. But this headline masks an important nuance: proprietary models have won coding, not necessarily other enterprise use cases.

If you disentangle the data, coding skews heavily toward proprietary models — over 90% market share. For all other use cases (which admittedly remain smaller), the split is closer to 50–50 between proprietary and open-source models.⁵ No one approach has emerged as a standard.

There are strong leading indicators that open source will capture a significant share in non-coding enterprise use cases:
- Developer surveys show a clear preference for building with open-source models⁶
- Most enterprises already host open-source models in their cloud environments (usage that wouldn’t appear in API-based market data)
- Downloads of open-source models are skyrocketing⁷
- Open-source models are dramatically cheaper and catching up in capability
- 80% of a16z portfolio companies are already building on Chinese open-source models⁸

Cost matters. A lot. Some startups building on proprietary APIs are reportedly operating at negative gross margins because of model costs. As enterprises move beyond simple pilots to production-scale deployments, the economics will drive a multi-model approach that leans heavily on open-source capabilities.
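To make the margin problem concrete, here is a minimal back-of-the-envelope sketch. All numbers (per-request revenue, token counts, per-million-token prices) are hypothetical, chosen only to illustrate how inference costs on a long-context proprietary API can exceed what a single request earns:

```python
# Illustrative unit economics for a product built on a proprietary model API.
# Every figure below is a hypothetical placeholder, not real pricing.

def gross_margin(revenue_per_request, input_tokens, output_tokens,
                 price_in_per_m, price_out_per_m):
    """Gross margin per request, treating model inference as the only COGS."""
    model_cost = (input_tokens / 1e6) * price_in_per_m \
               + (output_tokens / 1e6) * price_out_per_m
    return revenue_per_request - model_cost

# A long-context agentic request: 200k tokens in, 5k tokens out,
# at $3 / $15 per million tokens (hypothetical proprietary pricing).
margin = gross_margin(revenue_per_request=0.50,
                      input_tokens=200_000, output_tokens=5_000,
                      price_in_per_m=3.0, price_out_per_m=15.0)
print(round(margin, 3))  # -0.175 — the model costs more than the request earns
```

Swap in a cheaper open-source model at, say, $0.60 per million input tokens and the same request flips back to positive margin, which is exactly the pressure driving multi-model strategies.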
What’s Holding Back Non-Coding Enterprise AI?
If the models are good enough and the market opportunity is massive, why hasn’t enterprise AI adoption taken substantial hold beyond coding?
First, a lack of predictable outcomes. Getting an LLM to do something consistently is surprisingly difficult. Programming is a naturally generative activity, so the probabilistic nature of LLMs is a feature, not a bug. But many enterprise workflows require a mix of flexibility and consistency. You might want the system to adapt its language or approach, but you still need it to escalate certain issues the same way every time or send a customer survey after every support interaction. Generative models often struggle with this kind of deterministic behavior, frustrating many early enterprise AI initiatives.
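The pattern described above — flexible language, rigid process — can be sketched as deterministic rails around a generative core. In this hypothetical support-ticket handler, `call_llm` is a stand-in for a real model call; the escalation rule and the post-interaction survey are plain code that behaves identically every time, regardless of what the model generates:

```python
# Sketch of "deterministic rails around a generative core."
# call_llm and the keyword policy are hypothetical placeholders.

ESCALATION_KEYWORDS = {"refund", "legal", "outage"}  # hypothetical policy

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned draft here.
    return f"Thanks for reaching out! Regarding: {prompt}"

def handle_ticket(ticket_text: str) -> dict:
    reply = call_llm(ticket_text)            # generative: wording may vary
    escalate = any(k in ticket_text.lower()  # deterministic: same rule, every time
                   for k in ESCALATION_KEYWORDS)
    return {
        "reply": reply,
        "escalated": escalate,
        "survey_sent": True,                 # deterministic: fires after every ticket
    }

result = handle_ticket("My service had an outage yesterday")
print(result["escalated"], result["survey_sent"])  # True True
```

The point of the design is that no amount of prompt drift can stop the escalation check or the survey from firing — those guarantees live outside the model.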
Second, customization and professional services are required to extract value. Foundation models don’t just “work” out of the box for many real-world use cases. They need integrations with multiple data sources, reliable connectors, and significant customization to fit into existing workflows. Even Microsoft has acknowledged that its connectors aren’t working as well as needed.⁹ The gap between a promising demo and a production-ready system is wide, and filling it requires hard work.
The Path Forward
So, with these challenges, what will the next generation of winning AI products look like? We have a few ideas:
- They will marry deterministic frameworks with generative capabilities, ensuring that the AI solution delivers the consistency needed while retaining the flexibility of generative AI.
- They will abstract away “model choice” from their customers, using a multi-model approach to balance cost vs capability.
- They will often customize or fine-tune models to specific customer use cases and data.
- Finally, they will handle the “last-mile” challenges of making AI work reliably in real production environments, including integrating with key data stores, adding observability, and connecting to other workflow tools.
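The second idea above — abstracting model choice behind the product — can be sketched as a simple cost-aware router. Model names, prices, and capability scores here are hypothetical placeholders, not real benchmarks:

```python
# Sketch of a multi-model router that hides "model choice" from the customer.
# All names, prices, and capability scores are hypothetical.

MODELS = {
    "proprietary-large": {"cost_per_m_tokens": 15.0, "capability": 0.95},
    "open-weights-mid":  {"cost_per_m_tokens": 0.60, "capability": 0.80},
}

def route(task_difficulty: float) -> str:
    """Pick the cheapest model whose capability covers the task;
    fall back to the most capable model when nothing qualifies."""
    eligible = [(name, cfg) for name, cfg in MODELS.items()
                if cfg["capability"] >= task_difficulty]
    if eligible:
        return min(eligible, key=lambda mc: mc[1]["cost_per_m_tokens"])[0]
    return max(MODELS, key=lambda m: MODELS[m]["capability"])

print(route(0.5))   # easy task -> cheap open-weights model
print(route(0.9))   # hard task -> proprietary model
```

In production the routing signal would be richer than a single difficulty score (latency budgets, data-residency rules, eval results per task type), but the economic logic is the same: spend on the expensive model only where its capability is actually needed.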
We are currently seeing entrepreneurs build both products and platforms with these approaches in mind, doing the hard work of turning the incredible power of generative AI into reliable, scalable solutions for enterprise use cases. If you’re building in this space, we’d love to hear from you.
Sources
1, 3: Menlo Ventures, “2025: The State of Generative AI in the Enterprise,” December 9, 2025, https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/#b80db139-f442-4176-8179-56de9fa744e8
2, 5: OpenRouter, “State of AI: An Empirical 100 Trillion Token Study with OpenRouter,” December 2025, https://openrouter.ai/state-of-ai
4: The Information, “Anthropic, AWS Give Customers of AI Agents a Helping Hand,” November 3, 2025, https://www.theinformation.com/articles/anthropic-aws-give-customers-ai-agents-helping-hand
6: Theory VC, “AI in Practice Survey 2025,” December 2025, https://survey.theoryvc.com/
7: NVIDIA, “GTC March 2025 Keynote with NVIDIA CEO Jensen Huang,” March 2025, https://www.youtube.com/watch?v=_waPvOwL9Z8
8: The Economist, “China is quietly upstaging America with its open models,” August 21, 2025, https://www.economist.com/business/2025/08/21/china-is-quietly-upstaging-america-with-its-open-models
9: The Information, “Microsoft’s Nadella Pressures Deputies to Accelerate Copilot Improvements,” December 22, 2025, https://www.theinformation.com/articles/microsofts-nadella-pressures-deputies-accelerate-copilot-improvements
Why Products, Not Models, Will Win Enterprise AI was originally published in G2 Insights on Medium, where people are continuing the conversation by highlighting and responding to this story.