Last week I was attending a tech and finance conference. During the day of panels and presentations, I noticed the young, early 20s analyst next to me was furiously typing notes on his laptop, anxiously attempting to ensure he captured every key word.

I was surprised. Didn’t he know there’s a plethora of free, highly accurate, easy-to-use notetaking tools out there like Read.ai, Granola, Otter, Fireflies, and Notion?

So of course, after the session, I asked him about it. “I noticed you’re hand typing all your notes. Have you tried using an AI notetaking tool?”

His answer: “Sure I’ve heard of them. But it’s my job to take notes in these meetings. I’m responsible for this output.”

His answer was telling. He didn’t question the tools’ accuracy. He didn’t cite data security concerns. He just felt like he needed to do the work. Taking notes was how he processed the information. It was how he proved – to himself, to his firm, maybe to some internalized version of a demanding boss – that he was adding value.

It was a really interesting response, because it reveals something important about where we are on the AI adoption curve – and where the next wave of durable companies will be built.

And it’s a behavior Betty Crocker figured out 70 years ago…



Just Add an Egg

In the 1950s, General Mills launched a line of Betty Crocker cake mixes that included everything – flour, sugar, and even powdered eggs. All you had to do was add water, stir, and bake. It was an engineering marvel of convenience. It also didn’t sell.

The canonical version of this story, popularized by consumer psychologist Ernest Dichter, goes like this: housewives felt guilty. The mix was too easy. It felt like cheating. Baking a cake was supposed to be an act of care and effort, and a product that reduced it to adding water stripped away the meaning of the gesture. So General Mills pulled the powdered eggs from the formula and told customers to crack in a fresh one themselves. Sales took off.

Now, the truth is more nuanced than the legend. Historians like Laura Shapiro have pointed out that fresh eggs also just made better cakes. Pillsbury kept selling complete mixes and did fine. The real marketing innovation Dichter sparked may have been less about the egg itself and more about repositioning the cake as a canvas – encouraging women to express creativity through frosting, decoration, and presentation rather than the batter.

But here’s why the myth endures, and why it matters for AI: the psychological core is real. People value what they put effort into. They trust outcomes more when they’ve had a hand in creating them. Behavioral economists call this the IKEA Effect: the well-documented finding that people assign significantly higher value to things they’ve partially built themselves, even when the result is objectively no better than a store-bought alternative. A 2012 study by Norton, Mochon, and Ariely at Harvard showed that participants valued their own amateurish origami creations nearly as much as expert-made ones. A 2025 meta-analysis across 55 studies confirmed a statistically significant and robust effect.

So what’s the take-home message? Humans are not rational optimizers. Full automation isn’t always the optimal end goal. We derive meaning, identity, satisfaction, and trust from the act of doing.

The Knowledge Worker’s Identity Crisis

This matters enormously for AI because knowledge work is where identity and output are most tightly fused. A consultant’s value is the deck. A lawyer’s value is the brief. An analyst’s value is the model. When AI can produce a passable version of any of these in seconds, it doesn’t just change the workflow, it threatens the psychological contract these professionals have with their own careers. And yes, we recognize the irony even as we use AI to help with our own work (this post included).

While AI adoption is more rapid in certain technical fields like coding, it tends to lag in prestige-driven industries. It’s not a capabilities gap. It’s an identity gap. The analyst at the conference wasn’t being irrational. He was protecting something real: his sense that the work matters because he did it.

This matters in a world where AI is rapidly coming for all knowledge work. Anthropic recently announced tools for investment banking, equity research, private equity, and more. A viral blog post wiped $800B off the stock market. The future is coming.

We see this pattern constantly across the portfolio and in diligence conversations. Teams that adopt AI tools for back-office automation move fast. Teams asked to adopt AI tools that replace their core craft — the thing they were hired and trained to do — resist, even when the output quality is equivalent or better.

So what can we do to make this transition easier?

The Labor Illusion as a Design Principle

UX designers have understood a version of this for years. There’s a concept called the “labor illusion” – the finding that people actually trust and value results more when they see evidence of effort being expended, even if that effort is artificial. Kayak popularized this with its flight search: instead of returning instant results, it shows you a progress bar ticking through hundreds of airline websites. The results are identical either way, but users rate the delayed experience as more trustworthy and thorough.

Many great AI products are already applying a version of this. GitHub Copilot doesn’t write your code — it suggests the next line while you drive. Midjourney doesn’t create art — it responds to your prompts and requires your curation and iteration. Figma’s AI features augment your design process rather than replacing it. The human stays in the loop not because the AI can’t do more, but because the human needs to be there for the output to feel legitimate, at least today.

We would argue the products that get this balance right will be the ones that endure. And the ones that optimize purely for automation — that try to make the metaphorical cake mix so easy you don’t even need to add water — will face persistent adoption headwinds that have nothing to do with product quality.

Where This Creates Opportunity

So what does this all mean for startups and the broader AI ecosystem? There’s likely going to be a messy, somewhat bumpy transition on the way to a fully automated, agentic future (if that future ever arrives).

Here are a few trends we expect to play out.

Users hang on to co-pilot modes before going full autopilot. Despite fully agentic options, there will be a set of users who continue to prefer AI co-pilots, particularly in categories of work where identity is at stake. The most adoptable AI products will be the ones that make you feel like a better version of yourself, not a redundant one. Think of it this way: the winning product isn’t “AI writes your investment memo.” It’s “AI does the 4 hours of data gathering so you can spend your time on the synthesis that actually requires judgment.”

The apprenticeship model is under real stress. For decades, the rite of passage in professional services was doing the grunt work. Junior analysts built models cell by cell. Junior associates marked up documents line by line. Junior consultants assembled decks slide by slide. That labor wasn’t just about output – it was how you learned the craft and earned credibility. AI disrupts this entire pipeline. If the grunt work disappears, how do you train the next generation? Companies that figure out new apprenticeship models for the AI era will have a major talent advantage.

“Human-in-the-loop by design” is an underexplored product category. Most AI product pitches we see are racing toward full automation. We believe there’s an equally interesting category of products designed to keep humans meaningfully involved – not as a limitation, but as a feature. Products that handle the tedious data gathering but leave the pattern recognition to you, or that draft options but require your judgment call – where AI automates much of the labor and hands the higher-level judgment back to the human user.

The transition will take time. The analyst typing notes at the conference grew up in a world where that behavior was how you demonstrated competence and built knowledge. His instinct isn’t wrong; it’s adapted to a different environment. Over time, as AI-native professionals enter the workforce, the relationship between effort and perceived value will shift. But “over time” can mean years, not a single product cycle. Companies building for the long term should design for the messy middle. And that messiness can also mean major layoffs, like those we recently saw at Block.

Conclusion

We’re in an awkward, “messy middle” period. The tools are ahead of the culture. AI can do things that humans aren’t yet psychologically ready to let it do – not because they don’t trust the technology, but because they haven’t renegotiated their relationship with work itself.

The Betty Crocker story, mythologized as it is, captures something timeless: when you make things too easy, you don’t just remove friction — you remove meaning. The companies that win the AI era won’t be the ones that automate the most. They’ll be the ones that understand where effort matters to the humans in the loop and design accordingly.

The egg was never really about the egg. It was about the baker’s need to feel like a baker.

And right now, every knowledge worker using AI is trying to figure out: what’s my egg?

Thanks for reading Aspiring for Intelligence! Subscribe for free to receive new posts and support our work.
