Happy Sunday and welcome to Investing in AI. Be sure to check out our AI in NYC podcast, and if you are interested in the world of small language models and what they can do, read my post on the Neurometric blog about how we distilled down a Qwen 4B model to beat frontier models on a real work task. Models this small are nearly free to run. Today I want to talk about what poker and AI have in common.
Most people think of artificial intelligence as a magic box. You put a question in, and an answer comes out. But that mental model is dangerously wrong, and it’s going to cost a lot of people a lot of money over the next decade. AI doesn’t deal in certainties. It deals in probabilities. And if you want to thrive in a world increasingly shaped by AI, you need to stop thinking like a chess player and start thinking like a poker player.
Chess is a game of perfect information. Every piece is visible. Every position is knowable. For decades, this was the metaphor we used for strategic thinking — study the board, find the optimal move, execute. But the AI-driven world doesn’t work like a chessboard. It works like a poker table. You’re making decisions with incomplete information, estimating the likelihood that a given action will pay off, and constantly recalculating as new cards are revealed.
This shift matters because AI models are, at their core, probability engines. When a large language model generates a response, it isn’t retrieving a fact from a filing cabinet. It’s predicting the most likely next token based on patterns in its training data. When a computer vision model identifies a tumor in a medical scan, it’s assigning a probability — say, 87% — that the mass is malignant. The outputs feel certain, but they never are. Every answer carries a confidence level, and understanding that distinction is the difference between using AI wisely and using it recklessly.
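To make the "probability engine" point concrete, here is a toy sketch of what sits behind a model's confident-looking answer. The tokens and probabilities are invented for illustration, not real model output:

```python
# Toy illustration: a language model's "answer" is really a probability
# distribution over candidate next tokens. All numbers are made up.
next_token_probs = {"Paris": 0.92, "Lyon": 0.04, "London": 0.03, "Rome": 0.01}

# The model emits the highest-probability token...
best = max(next_token_probs, key=next_token_probs.get)
print(f"Model says: {best} (confidence {next_token_probs[best]:.0%})")
# ...but the 92% behind it, not the word itself, is the real output.
```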
This has enormous implications for how businesses and individuals will spend money. In an AI-driven economy, nearly every decision becomes a bet. And as any good poker player will tell you, the goal isn’t to win every hand. The goal is to make positive expected-value decisions over time.
Consider a mid-size insurance company evaluating whether to deploy an AI claims-processing system. The vendor says the model can auto-approve straightforward claims with 94% accuracy. That sounds impressive, but the poker-minded executive asks the next question: what happens with the other 6%? If the average erroneous approval costs the company $12,000, and they process 50,000 claims a year, that 6% error rate represents $36 million in potential losses. Suddenly the decision isn’t about whether the AI is “good.” It’s about whether the expected savings from automation outweigh the expected cost of its mistakes. That’s a pot-odds calculation, not a technology evaluation.
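The back-of-envelope calculation above fits in a few lines. The figures are the article's illustrative numbers, not real vendor data:

```python
# Expected annual cost of errors for the hypothetical claims-processing AI.
# All figures come from the article's illustrative example.
claims_per_year = 50_000
error_rate = 0.06               # 1 - the vendor's advertised 94% accuracy
cost_per_bad_approval = 12_000  # average cost of one erroneous approval

expected_error_cost = claims_per_year * error_rate * cost_per_bad_approval
print(f"Expected annual error cost: ${expected_error_cost:,.0f}")
# Deploying is a positive-EV bet only if expected automation savings
# exceed this figure (plus integration and oversight costs).
```

The point of writing it down is that the comparison is between two expected values, not between "AI" and "no AI."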
Or take a marketing team deciding whether to use AI-generated content for a product launch campaign. An AI writing tool can produce 30 blog posts in a day at near-zero marginal cost. But suppose each post has a 15% chance of containing a factual error or an off-brand message that a human editor must catch and fix. If fixing each flagged post costs $200 in staff time, on top of the reputational risk of anything that slips through, the team has to weigh the cost of producing and reviewing 30 posts against the value those posts are expected to generate in traffic and conversions. The math might still favor the AI — but only if someone actually does the math. The teams that don’t will either overspend on bad content or miss the opportunity entirely because they were too afraid to play the hand.
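"Doing the math" here is a one-screen exercise. The post count, flag rate, and fix cost are the article's numbers; the per-post value is an assumed placeholder a real team would replace with its own estimate:

```python
# Expected-value math for the AI content example.
posts = 30
flag_rate = 0.15       # chance a post needs a human fix (from the article)
fix_cost = 200         # staff-time cost per flagged post (from the article)
value_per_post = 150   # ASSUMPTION: expected traffic/conversion value per post

expected_fix_cost = posts * flag_rate * fix_cost     # 30 * 0.15 * $200
expected_value = posts * value_per_post
print(f"Expected review cost: ${expected_fix_cost:,.0f}")
print(f"Net expected value:   ${expected_value - expected_fix_cost:,.0f}")
```

Under these assumed numbers the hand is worth playing; with a different `value_per_post` it might not be, which is exactly the point.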
This is the new literacy. Not prompt engineering. Not learning to code. The essential skill of the AI era is probabilistic thinking — the ability to assess confidence levels, estimate expected value, and make decisions under uncertainty. It’s knowing when to fold a losing hand and when to push your chips in.
The people who will struggle are the ones who demand certainty before acting. They’ll wait for AI to be “perfect” and miss the window. The people who will win are the ones who understand that perfection isn’t the standard — positive expected value is. They’ll make more bets, lose some, win more, and compound those gains over time.
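The compounding claim can be illustrated with a toy simulation: a long run of small bets, each with a modest positive edge, each risking a fixed fraction of a bankroll. Every parameter here is invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Toy simulation of repeated positive-EV bets (all parameters invented):
# risk 2% of the bankroll per bet, win with probability 55% at even payout.
bankroll = 100.0
for _ in range(1_000):
    stake = bankroll * 0.02
    if random.random() < 0.55:
        bankroll += stake   # won the hand
    else:
        bankroll -= stake   # lost the hand

print(f"Bankroll after 1,000 small positive-EV bets: {bankroll:,.2f}")
```

Any single bet is nearly a coin flip, and plenty of hands are lost along the way; the edge only shows up in the aggregate.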
So if you want to prepare for an AI-driven future, don’t just learn how the technology works. Learn how to think in probabilities. Read about Bayesian reasoning. Study decision theory. Or, frankly, just go play some poker. The table has more to teach you about the next economy than most business books ever will.
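As a taste of the Bayesian reasoning recommended above, here is a minimal update showing why a confident-sounding model score must be combined with a base rate. The sensitivity, false-positive rate, and prior are invented for illustration:

```python
# Minimal Bayes update (all numbers invented for illustration):
# a screening model flags 90% of true positives, wrongly flags 5% of
# negatives, and the condition's base rate is 1%.
prior = 0.01
sensitivity = 0.90
false_positive_rate = 0.05

p_flag = sensitivity * prior + false_positive_rate * (1 - prior)
posterior = sensitivity * prior / p_flag
print(f"P(condition | flagged) = {posterior:.1%}")
# Far below the model's raw 90% confidence: the base rate dominates.
```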
Thanks for reading.