Happy Sunday and welcome to Investing in AI. Check out the latest AI in NYC podcast on Spotify, where we interview Albert Chun from AI Circle. And if you are not familiar with my company, Neurometric, the tweet from the All-In podcast below describes the problem we solve. When your AI spend gets too high or your latency too long, we run your specific AI workloads on thousands of model and algorithm combinations to recommend an ensemble of models that optimizes time, money, and accuracy for your system. On average our customers see a 10x cost improvement and a 4x latency improvement at the same level of accuracy.

Now on to the main content of this week, which is about the limits of AI.

There’s a quiet assumption running beneath every AI pitch deck, every enterprise deployment, every breathless LinkedIn post about the future of intelligent automation. The assumption is this: given enough data and enough compute, AI can predict anything worth predicting.

It can’t. And the reason it can’t isn’t a limitation of current models. It’s a law of computation itself.

The Prediction Machine Meets Its Match

AI, at its core, is a prediction engine. Large language models predict the next token. Recommendation systems predict what you’ll click. Forecasting models predict demand, churn, revenue. The entire value proposition of modern AI rests on the idea that patterns exist in data and that sufficiently powerful models can find them and extrapolate forward.

This works beautifully for a large category of problems. But Stephen Wolfram identified something decades ago that the AI industry still hasn’t fully reckoned with: computational irreducibility.

The concept is straightforward. Some systems, even ones governed by simple rules, produce behavior that cannot be shortcut. There is no formula, no model, no algorithm that can tell you the outcome faster than simply running the process step by step. The only way to know what happens at step one million is to execute all 999,999 steps before it.

This isn’t a gap in our knowledge. It’s a proven property of certain computational systems. No amount of intelligence, artificial or otherwise, can compress the irreducible.
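Wolfram's canonical example of this is the Rule 30 cellular automaton: a one-line update rule whose long-run output is, as far as anyone knows, irreducible. A minimal sketch in Python (the finite width and zero-padded boundaries are simplifications of the infinite lattice):

```python
# Rule 30 cellular automaton: each cell's next state depends only on
# itself and its two neighbors, via new = left XOR (center OR right).
# Despite the trivial rule, no known shortcut predicts row N without
# computing rows 1..N-1 first.

def rule30_step(cells):
    """Apply one Rule 30 update to a row of 0/1 cells (edges padded with 0)."""
    padded = [0] + cells + [0]
    return [
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    ]

def run(steps, width=63):
    """Evolve a single seed cell for `steps` generations; return the final row."""
    row = [0] * width
    row[width // 2] = 1  # single seed in the middle
    for _ in range(steps):
        row = rule30_step(row)
    return row
```

Asking "what is the row at step one million?" has no answer cheaper than the loop itself: that is the claim of irreducibility in executable form.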

The Business Problem No One Talks About

Now ask yourself an uncomfortable question: how many of the business problems we desperately want AI to solve are computationally irreducible?

Consider market dynamics. A market is millions of agents making decisions based on other agents’ decisions, reacting to reactions, adapting to adaptations. We’d love AI to predict which product will win, which trend will dominate, which startup will break out. But markets may be fundamentally irreducible systems. The interplay of human behavior, competitive response, regulatory shifts, and cultural momentum doesn’t compress into a predictable trajectory. You can’t skip ahead. You have to play it out.
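The "reactions to reactions" structure can be made concrete with a toy simulation. This is not a real market model, and every parameter here (the 70/30 trend-follower split, the price impact factor) is an illustrative assumption; the point is only that each round's state depends on the previous round's, so there is nothing to do but run it forward:

```python
import random

# Toy feedback loop, not a market model: each agent buys (+1) or
# sells (-1) in reaction to what the majority did last round.
# Because every round depends on the one before it, round N can only
# be reached by simulating rounds 1..N-1 in order.

def simulate(n_agents=100, rounds=50, seed=0):
    """Return the price path of a crowd of majority-reactive agents."""
    rng = random.Random(seed)
    actions = [rng.choice([-1, 1]) for _ in range(n_agents)]
    prices = [100.0]
    for _ in range(rounds):
        majority = 1 if sum(actions) >= 0 else -1
        # 70% of agents follow the crowd; 30% bet against it (assumed split).
        actions = [
            majority if rng.random() < 0.7 else -majority
            for _ in range(n_agents)
        ]
        prices.append(prices[-1] + 0.1 * sum(actions))  # net demand moves price
    return prices
```

Even in this stripped-down system there is no closed-form expression for the price at round 50; real markets add millions of agents, shifting strategies, and outside shocks on top of the same feedback structure.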

Or take innovation itself. Which ideas will work? Which product configurations will resonate? Which business models will survive contact with reality? These aren’t questions with latent answers sitting in historical data. They emerge from complex interactions that have to actually happen before anyone, including an AI, can know the result.

This creates a fascinating paradox for the current AI narrative. We keep hearing that AI will make execution easy. Code generation, content creation, workflow automation — the cost of building and shipping is collapsing toward zero. Great. But if execution becomes trivially cheap, the entire game shifts to prediction. Knowing what to build becomes the only competitive advantage. And what if the things most worth predicting are exactly the things that can’t be predicted?

Where This Leaves Us

This doesn’t make AI useless. Far from it. Plenty of business problems are computationally reducible. Optimizing logistics, personalizing content, detecting fraud, automating routine analysis — these are pattern-rich domains where prediction works and AI delivers real value.

But it does mean the ceiling on AI’s strategic value might be lower than we think. The highest-value business questions — what market to enter, what product to build, what bet to make — may live in irreducible territory. AI can inform those decisions with better data and faster analysis, but it cannot resolve them. No model can, because the answer doesn’t exist until the system runs.

This has practical implications. Companies banking on AI to replace strategic judgment are likely to be disappointed. The organizations that win won’t be the ones with the best prediction models. They’ll be the ones that combine cheap execution with rapid iteration — running the irreducible computation faster than their competitors by actually doing things and learning from what happens.

In other words, the future might not belong to those who predict best. It might belong to those who experiment fastest.

AI makes the experiments cheaper. But nobody gets to skip them.

Thanks for reading.
