A new paper — “Some Simple Economics of AGI” — has been making the rounds, so we sat down with the author, covering:
- Automation vs. verification: the key economic split (07:39, 10:47)
- Why AI agents now feel like coworkers (05:39)
- What’s happening to junior roles and the “codifier’s curse” (17:55)
- The value of “meaning-makers,” consensus, and status economies (21:54)
- Why crypto may become essential infrastructure for identity, provenance, and trust (23:48, 41:08)
- Two possible futures: a hollow vs. augmented economy (44:31)
Featuring Christian Catalini (founder of the MIT Crypto Economics Lab) and Eddy Lazzarin (CTO at a16z crypto) in conversation with Robert Hackett, the discussion dives into how automation is reshaping labor markets and the nature of intelligence.
What do these changes mean for startups, the future of work, and your career?
Edited transcript
Robert Hackett: Hi everybody. We’re here with Christian Catalini, who is the cofounder of Lightspark and founder of the MIT Crypto Economics Lab, as well as Eddy Lazzarin (a16z crypto).
And we’re here to discuss a new economics paper that Christian published, called “Some Simple Economics of AGI.”
So I’d love to ask: what started you on this journey to investigate the economic relationship between AI and the world we live in?
Christian Catalini: I would say it was born out of a semi-existential crisis. We’re all grappling with the fast pace of progress and just how quickly everything is moving.
I’m an optimist, but the fundamental questions were: What should we do? What should we focus on? And what’s worthy of our time, effort, and attention — especially in this phase where we still have a meaningful shot at influencing the trajectory of this technology.
Some months ago, we wrote a piece on measurement, and the basic idea was: anything that can be measured will be automated, which doesn’t sound like good news. But this second paper was really centered around: if that’s true, let’s take the initial assumption to the limit.
What would the economy look like? What will the nature of labor look like? What should startups do? What should incumbents do? And essentially, what will the future look like?
Some things will be right; some things will be wrong. Hopefully, we got it directionally right. Now it’s in the wild, and we’re seeing what resonates and what doesn’t.
Robert: You said this stemmed from a semi-existential crisis?
Christian: I think my core takeaway was a feeling that, first of all, this technology is still under our control.
Second, the upside is many orders of magnitude greater than the doomers would have you believe. And third, I think there’s a playbook that all of us can look at.
We can think about: where are we adding value? What are the sorts of things that we do within our jobs? Jobs tend to be bundles of different tasks, and people get very nervous when certain tasks or certain parts of their job get automated.
I think right now coding is going through that experience where many talented individuals who have written elegant, fantastic code over the last few decades look and say, “Oh wow, this is doing what I do.”
Robert: I want to drill down a little bit here because we have Eddy Lazzarin with us, who has spent several years here as Chief Technology Officer at a16z crypto. Eddy, how are you thinking about these changes?
Eddy Lazzarin: Let me situate us in time and with the paper. Many people feel that something changed in December 2025. And what changed was a series of incremental improvements in how these agents work that accumulated to the point that AI agents can now perform long-running tasks.
The feeling just a year ago was: I asked the agent to do a small thing. It’s amazing how it does that. I had to ask it to do the next thing. And so on. And now you can kind of give it less guidance. And maybe it’s not quite perfect. But all of a sudden, this is like working with somebody, right?
You don’t review what they did one piece at a time; that would be extreme micromanagement. Instead, you have a conversation, they go away, they come back a day or two later, and they’ve got something. And that qualitative feeling provokes a lot from the imagination, and now everyone is beginning to grapple with this reality.
Part of grappling is just some histrionics. But another part — the more interesting part — is figuring out how to squeeze as much value as possible in actual production settings and for commercial use.
And what people are discovering is that they produce an incredible amount of work. Some of it is fantastic. It takes a fraction of the time it used to take. But it’s often flawed in subtle ways that may not have been fully appreciated before.
So, to give you an example, the bundle of what it means to be a software engineer is being reconsidered: people think of software engineering as sitting down and writing a bunch of code. I sit down, I contemplate the issue, I understand the specifications, and then I write code. And the code is what I produced.
But it turns out — and AI helps us understand this and break it down into its parts better — there is a very nuanced, iterative process of correcting, gathering feedback, and integrating that’s not just the printing of each line of code. It’s this holistic task. So the balance of work for a great engineer is shifting quickly.
The process of trying the thing, guiding it, and taking risks is what Christian calls verification in his paper.
The way things are changing is that people are now grappling with the fact that the split of work demanded of a great engineer may be different. The amount of attention paid to writing the code and printing one line at a time is vanishingly small. For some, like in the vibe coding extreme, near zero. And a huge part of the work is now verification.
Christian: So I think the automation part is very intuitive. These agents essentially can do more of what has been done before. And for now, I think they’re still somewhat constrained by the observable domain. Every code base ever written that they’ve ingested during training or fine-tuning — all of that is what they can build on.
And often people say, “Oh, well then, they cannot innovate. They cannot be creative. They cannot have good taste.”
I actually strongly disagree. In fact, much of innovation is just the recombination of ideas. And humans have probably only explored a tiny fraction of the possible recombination between disciplines. So I do think these agents will be extremely innovative just by taking what we’ve given them.
Verification is an important cost in this new economy. So what do we mean by cost of verification? Verification really starts from the idea of measurement. If you buy into the thesis that AI is incredibly good at replicating that process with the right data, then you start asking: okay, what’s not measured today?
Some things aren’t measured because they’re not really measurable. Economists call this “Knightian uncertainty,” after economist Frank Knight. And it’s essentially the difference between looking at the future and trying to assign probabilities around an event, and not even being able to assign those probabilities.
Robert: For a non-economist out there, they might be more familiar with Donald Rumsfeld’s “unknown unknowns.”
Christian: Absolutely, yes.
The unknown unknowns are essentially the non-measurable piece, often about the future. So that’s why, even if you throw agents today at the stock market, they’ll probably be pretty good on average — maybe better than your financial advisor — but they will probably not be resilient to drastic changes in the environment. Geopolitical shifts and whatnot — those are things that are not measured. And of course, there are many more examples.
And so what verification really is in this paper is the act of applying all the embedded measurements in your brain as a human — if you think about it — from birth to where you are professionally.
Two people may have very similar knowledge, even career-wise, but it’s never exactly the same combination. That’s what people are pointing at when they say someone “has good taste,” “is a great curator,” or “has good judgment.” One of the things that really inspired this paper was that everyone was coming up with all this cope around AI: “Oh, don’t worry. The machine will never be able to do X, Y, and Z.”
And the cope was very vague, right? How do you define taste? How do you define good judgment? And even worse, a good engineer probably needed a lot more judgment applied in December than they need today.
So we needed to identify something more fundamental that could really be pinned down. And we think it’s this: as long as there’s data underlying the information a task relies on, that task will be automated.
Robert: In the near term, you break the economy down into three different areas where various tasks and jobs exist, and assess their level of automatability, or rather measurability, in terms of their output and what they do.
Christian: I think there’s actually a lot here in terms of what’s still human across many dimensions. I would say the first one is, of course, verification.
The leverage that any single individual has in their profession is massive relative to what it was, even in December. This means we should probably all be more ambitious. We should all try to think through the workflows that we currently do and what we call the AI sandwich.
A firm or a startup can have a single human — we call it a director — who is in charge of steering verification, making sure that, as the system drifts in unintended directions, it can course-correct. So that’s maybe one person, maybe a small team at the top.
In the middle, you’re gonna have a swarm of agents. And we’re already seeing it. People are experimenting with all sorts of interesting new things.
And at the bottom of the sandwich, you’re gonna have an army — or a small army — of top verifiers. With the right tools, I think the top experts in every domain are gonna be the ones ensuring that what was intended actually came out of the system. This is a super important job — one where I think domain experts will thrive for a long time.
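That three-layer structure can be sketched as a toy pipeline. Every name and check below is a hypothetical illustration of the shape of the sandwich, not something from the paper:

```python
from dataclasses import dataclass

# Toy sketch of the "AI sandwich": one director sets intent, a swarm of
# agents drafts candidates, and expert verifiers gate what ships.
# All names and logic here are illustrative placeholders.

@dataclass
class Draft:
    agent_id: int
    output: str

def agent_swarm(intent: str, n: int) -> list[Draft]:
    # Middle of the sandwich: many agents produce candidate work in parallel.
    return [Draft(i, f"{intent} (candidate {i})") for i in range(n)]

def expert_verify(draft: Draft, intent: str) -> bool:
    # Bottom of the sandwich: a stand-in for the costly human check that
    # what came out of the system matches what was intended.
    return intent in draft.output

intent = "payments integration"          # top: the director's intent
drafts = agent_swarm(intent, n=5)
shipped = [d for d in drafts if expert_verify(d, intent)]
```

The middle layer is nearly free to scale; the expensive, scarce part is the verification step at the bottom, which is the point of the sandwich.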
But there’s some bad news: as you do that work, you are also creating the labels for your own displacement. We’ve seen the simplest version of this before, when people were labeling images for AI training — work that’s no longer needed.
Now you have big foundational labs hiring top experts from finance and other domains. Those people are creating the evals and the training that will eventually displace their peers. So this verification layer is really important. I think many people will thrive in it. It’s one that really rewards a kind of hyper-specialization, right? If you’re the one person who can deliver that final unlock, your leverage is massive.
Robert: So that’s one category. And the verifier faces what you’ve called the codifier’s curse.
Christian: So the codifier’s curse is the mechanism whereby, if you’re a top verifier, you need to keep moving up the stack because the technology gets better and better.
The director I mentioned is essentially someone who really drives intent. Entrepreneurs are directors. They see a future and imagine a path to get there.
Then there are gonna be jobs that I think we need to recognize as easy to automate. Those jobs are gone — or soon to be gone. And I think society hasn’t really grappled with some of those effects, and there’s gonna be a massive need for retraining and really pushing people further up the knowledge frontier.
One thing people sometimes misunderstand in the paper is that we talk about human verification as the last step, but in many cases, AI will verify AI. So there’s gonna be a whole series of steps before it finally reaches the final human in this verification chain.
And then we have a category that was the hardest to qualify. We call them the meaning makers. Imagine settings where it’s all about consensus. These are individuals who are really good at understanding trends, societal changes, and issues that society cares about, which require everybody to coordinate around them. Art is like that. Crypto networks, to some extent, are like that.
These meaning makers are not in the land of what’s measurable. These are the jobs that people sometimes say require a “human touch.” I do think people severely overestimate how important that human touch is. You hear it for jobs like therapy, elder care, or childcare.
I think people will have all sorts of concerns initially, but nobody’s really accounting for the drastic reduction in cost, right? So if it’s 100x, 1,000x cheaper — and some people may even feel it’s more private — people will rapidly shift. In fact, we already know people are using LLMs aggressively to answer all sorts of questions that would be considered very intimate or personal.
There will also be jobs where “human-made” or “made by a human” will be a very important label. And crypto will play a role here, because without strong cryptography to back that identity, we’re soon going to lose the ability to prove it. But “human-made” will be valuable just because of the scarcity that’s inherent in the fact that it’s human-made.
So, not because it’s better — it’s just knowing that a human dedicated their scarce time and attention to deliver that experience. Those things will still be important.
Robert: So you brought up cryptography. What is the place for crypto in this world?
Christian: It’s a really important one.
When we started this journey, many before us had already said: look, LLMs and AI are kind of probabilistic; crypto’s deterministic. Think about a smart contract putting the guardrails on an agent, or being able to give an agent the ability to buy and sell resources.
All these things resonated. But I do think there’s an even more profound complementarity between AI and crypto. Maybe the reason it’s not so salient in the economy today is that we haven’t yet seen the side effects: issues around identity and the provenance of digital information.
I think we’re about to enter very uncharted territory in the next few months as these capabilities become truly amazing. Every digital platform will have to really wrestle with the idea that what used to be a human contribution — whether it’s a post or an image or anything else — is now potentially an agent.
As that unfolds, I think society will have to drastically reimagine its identity stack. In a land where trust is increasingly scarce, crypto primitives will shine across many applications. And everything that’s been built over the last decade is going to be a lot more foundational. Back to verification: when you have underlying information on a blockchain, verification is cheap. It’s more reliable. You can trust it.
Eddy: The cost of automation is declining very rapidly. And the cost of verification in this broad sense we’ve talked about is declining, but not as quickly, which creates an interesting gap.
There are many ways to describe the gap. Some may describe it as an opportunity. That’s kind of what Christian is saying about human labor: if there’s this bottleneck, this gap in measurability, then because of humans’ adaptability, experience, and generality, humans can probably specialize in the verification component faster than we can get the machines to.
And there are some challenges that make handling verification hard for machines in the short term. In the long term, I don’t think that’s a permanent thing. But in the short term, that is definitely the case.
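A toy model makes the gap concrete. The decline rates below are invented purely for illustration: if automation costs halve each year while verification costs fall only 10%, verification quickly dominates total cost.

```python
def verification_share(years: int, auto_mult: float = 0.5,
                       verif_mult: float = 0.9) -> float:
    """Fraction of total cost that is verification after `years` years,
    starting from an even split. The decline rates (automation cost
    halving yearly, verification falling 10% yearly) are assumptions
    made up for this sketch, not estimates from the paper."""
    auto = auto_mult ** years     # automation cost (normalized to 1 at year 0)
    verif = verif_mult ** years   # verification cost (normalized to 1 at year 0)
    return verif / (auto + verif)
```

Under these made-up rates, the split starts at 50/50, but by year five verification is roughly 95% of total cost: the bottleneck, and the opportunity, that Eddy describes.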
Cryptography and blockchains are a verification tool. Provenance is just a chain of cryptographic evidence that something traversed some path between specific hands, or it underwent some series of transformations that we can be sure of, and that gives a signal about what we’re looking at. It makes verification across different categories easier. So anything that makes verification easier will be part of trying to close that gap.
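A minimal sketch of that idea in code: provenance as a hash chain, where each record commits to everything before it. This is illustrative only; real provenance systems layer digital signatures and consensus on top of the hashing.

```python
import hashlib

def link(prev: str, event: str) -> str:
    # Each link hashes the previous link together with the new event,
    # so tampering with any earlier step changes every later hash.
    return hashlib.sha256((prev + event).encode()).hexdigest()

def build_chain(events: list[str]) -> list[tuple[str, str]]:
    h = hashlib.sha256(b"genesis").hexdigest()
    chain = []
    for e in events:
        h = link(h, e)
        chain.append((e, h))
    return chain

def verify_chain(chain: list[tuple[str, str]]) -> bool:
    # Verification is cheap: just recompute the hashes and compare.
    h = hashlib.sha256(b"genesis").hexdigest()
    for event, recorded in chain:
        h = link(h, event)
        if h != recorded:
            return False
    return True

chain = build_chain(["created by alice", "edited by bob", "published"])
assert verify_chain(chain)

# Rewriting history without recomputing the hashes is detectable.
tampered = [("created by mallory", chain[0][1])] + chain[1:]
assert not verify_chain(tampered)
```

The asymmetry is the point: producing a plausible fake history is hard, but checking a claimed history is a few hash computations.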
Could we talk a little bit about the Trojan horse? We’ve talked about risks to human laborers, and there’s so much more to say about that. But setting aside the productivity benefits to the economy: what are the risks to the economy of low automation costs?
Christian: We’re seeing glimpses of it when companies today say that X percent of their code is now generated by machines.
Release cycles are shortening. But at the same time, because we already know that it’s humanly impossible to review all of that code, there’s a good chance it carries technical debt.
We’ve all been tempted to ask an LLM a question, skim it, and ship it as our own without full verification because the models are getting better. But whether it’s a wrong sentence, a wrong line of code, or a zero-day that is now part of your code base, I think we’re going to see more of that.
And what the model says about this is that it’s perfectly rational to ship code, or ship writing, or any sort of AI-generated work that contains some potential error, because you can’t verify the full thing. And if you scale it up to the entire society, that means we’re probably accumulating some degree of systemic risk.
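That rational-to-ship logic is a back-of-the-envelope expected-cost comparison. Here is a sketch with invented numbers (nothing below comes from the paper):

```python
def ship_unverified(verify_cost: float, error_prob: float,
                    error_cost: float) -> bool:
    """Back-of-the-envelope decision rule: shipping without full
    verification looks 'rational' whenever verifying costs more than
    the expected loss from latent errors. The systemic risk hides in
    the error_cost you underestimate. All numbers are illustrative."""
    return verify_cost > error_prob * error_cost

# Illustrative: a $10k full review vs. a 1% chance of a $100k bug.
assert ship_unverified(10_000, 0.01, 100_000)          # 10k > 1k: ship it

# The same bug reframed as a rare $10M zero-day flips the decision.
assert not ship_unverified(10_000, 0.01, 10_000_000)   # 10k < 100k: verify
```

The individually rational choice in the first case is exactly how unreviewed errors accumulate across an economy.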
As we accelerate, hopefully, we can develop better verification tooling to go back and review what we may have released. But in the medium term, companies face this tension: investing today in better verification tooling, including cryptographic primitives, is expensive. It may slow you down. The benefits are in the future, and the rush to ship and grow is strong.
So I think we’re going to see two sets of founders: founders who think about that long-term liability and will build things in the right way. We’re seeing glimpses of it — there’s this kind of “liability as software.” As we deploy these agents as workers, the issue of liability and insurance is gonna become increasingly important. It’s not the most glamorous topic, but I think we’re gonna see systemic failures in the wild.
Eddy: This is such an interesting idea because if what was happening in the production of software before — or any other service in the economy — was mostly direct human work, then you can take for granted that people have been observing and quality-checking many steps. Not that there have never been errors or flaws, but there’s always been somebody touching every step along the way.
But as things become more automated, higher-stakes, and more valuable, liability increases. The benefits are radically increasing, too, which is why we’re tolerating that. But the ability to supervise, limit, and understand the boundaries of risk has to expand.
So the idea of bringing in an insurance-type mechanism that assigns a dollar value to the risk that things will fail might be an important component in managing an enterprise that cannot be fully supervised. You want to delegate the responsibility of quantifying that risk and understanding what’s going wrong to a specialist.
I think it’s very interesting that even producing software might develop a new financial dimension that it lacked before.
Christian: And back to crypto: everything we’ve been building over the last decade has advanced the frontier of how we measure and weight risk. You can draw on DeFi, prediction markets — those primitives are suddenly critical.
If you’re deploying software and you have these agents, a stack that lets those agents see better signals matters. A simple example: I was talking to a founder building in agent commerce and payments, and he observed that when he switched from a traditional legacy payment system to payments over a stablecoin, the system behaved more reliably because the signals were all on-chain. The agent had a better understanding of what was happening. It wasn’t just hitting a dead API — it was seeing the whole context of those actions.
Another interesting aspect of this concerns Eddy’s point about insurance and liability. People sometimes say that network effects are gonna be a sustainable moat in the AI era. I think the reality is more nuanced. AI agents and autonomous systems are very good at breaking down a lot of the moats that have made two-sided marketplaces defensible. The cost of bootstrapping these things — and the grunt work of seeding two sides of a market — is coming down.
But there’s a different type of network effect that becomes more important. The idea is: if you have key proprietary data that you generate as part of what you’re doing, and if that data allows you to scale verification out of the hands of humans and into the hands of machines more, you can underwrite risk better, make better decisions, and deliver a safer product at lower cost.
So when you look at incumbents versus startups: incumbents that have a whole database of failure — like a decade of information about how some flows can fail — become extremely valuable. And startups that focus on creating a positive feedback cycle around verification — bringing in top experts, learning from decisions — are going to be extremely successful.
Eddy: More evidence for the idea that proprietary data — the data an organization can keep inside and specialize in — might be one of the most defensible things.
I have a direction I’d love to take it: in the paper, there’s this concept of a hollow economy and an augmented economy. Could you unpack those? What are the key factors that distinguish them?
Christian: Yeah, so we start with the hollow economy. There’s early evidence of this: tech companies realizing that they can do a lot more with less.
And of course, they’re going to start with below-average or average performers, because AI is already there, and younger performers, because now the senior person can already scale 100x or 10x, depending on the task. So that’s one of the forces driving changes.
The second one we hinted at is the codifier’s curse. As experts train systems and make decisions, they essentially create labels. Those labels can be used in the future to make the same decisions without the expert.
And last, there’s this concept of alignment drift. Without getting too much into the model itself, the punchline is that it’s going to be important to think about alignment not as a one-shot process (“we trained the model, it’s aligned, we’re good”) but more like raising a child, where you’re course-correcting and continuously providing feedback along the way.
Now take those three dynamics together. Combine them with the fact that the incentives for deploying unverified AI, if it can get the job done, are super high: maybe I get productivity today (“60% of the code written by machines versus humans”), while some of the costs show up later. Put it all together, and we may be racing toward an economy where we’re not training our future class of verifiers.
The juniors — our future top verifiers — are becoming increasingly scarce. That class is shrinking. And we’re creating potential risks that can lead to what we call the hollow economy.
Again, I’ve already mentioned I’m an optimist. I think we’re going to land on an augmented economy eventually. The question is how fast we can get there, and whether we can make that transition as painless as possible for the people who will have to be retrained and adapt.
The augmented economy is the opposite. We realize, okay, juniors are not being trained. But guess what? AI is magical at accelerating mastery. You can find a young individual and discover their real aptitude, rather than pushing them through K–12 or a standardized curriculum.
You accelerate them so they can find who they really are, what they truly love, and what gets them into flow. That’s at least what we’ve been thinking about with our kids. Who knows what’s going to be valuable — STEM, arts — we don’t know. But if you’re building on your true talent, you have a much better shot at advancing.
And I think AI is going to play a massive role in that. These are wonderful tools for learning. We have to build that. I don’t think they exist at scale today.
Second, if you think about the codifier’s curse, those individuals will have to keep retraining, moving up the value chain, and discovering that now “I have all this leverage, maybe I can be a director type.”
Some people have talked a lot about the importance of agency. I think that really gets at the crux: you need to realize you can be a director. You can do a lot more than you were doing before.
And on alignment, between safety R&D and better verification tooling — including human augmentation — if we can augment our capabilities, we’ll be able to verify much better and be true peers.
If you put all that together, you’re suddenly in a scenario where a lot of things that used to be expensive are practically free. Anything that can be measured can be automated.
Then you have new things we’ll invent. Lots of new jobs, including in the status economy and the non-measurable economy, all built on a strong verification stack, so we have ground truth. We’re not submerged by fake identities or actors trying to launch a Sybil attack on our society.
If you put that all together, the future looks pretty good. A lot of things governments have been trying to do forever — great education, great healthcare — could become cheap and widely available.
But we do need to make investments along the way to build that, versus just struggling through the transition and making extreme decisions like dismantling data centers. That’s impossible. It’s never going to work.
Robert: So if you’re early in your career, you should use these tools to simulate the environments you’ll encounter and train yourself. And if you’re later in your career, you need to light a fire under yourself and realize you can do more with less.
Eddy: You know, it’s hard to say how long all this will last until there’s another whole set of changes that’s hard to predict. But the specialty of the human being is looking at the whole thing and being able to zoom in and out across an entire endeavor, and knowing where more attention needs to be paid, where more resources need to be allocated, and how the entire project needs to be shifted.
If I were a young person today starting off my career, yeah, I’d be a little sad that the glory of writing a beautiful program, as efficient as I can imagine, over a whole summer is gone. That’s a hobby now. But instead, I would try to convince my parents to give me some money to harness a huge swarm of computers and see if I can spend $5,000 of computing productively. Like, can I guide a whole swarm of machines to do a thing?
There’s been a meme in the tech world for years now: the idea of a one-person, billion-dollar startup. Is this not exactly how that happens?
The ability to control a wide range of machines and data, and to maintain a wide view of a thing, is a skill set that has never been developed. It’s never made sense to develop it.
But if you want to have a big project, you’ve always needed to learn how to marshal many, many people. That has been the way you get leverage while labor had the shape it did. But labor is changing its shape. And so now you should learn how to harness this new thing.
There’s a new surplus. Learn to exploit it. That is the lesson for a young person. It’s not that things are over — that’s ridiculous. You’ve just been told you have superpowers. What do you do?
Christian: One way to summarize it is essentially, look, the apprenticeship might be dead, but the real work is beginning, right?
I think a lot of these domains — like hardware — that used to be harder for someone to tackle are really yours to grab if you have the curiosity.
If I were to classify it, the most positive thing coming out of the model is the idea that the cycles of experimentation will compress. And people will really be able to scale their ideas rapidly.
Robert: Eddy, are you seeing this in the companies that you’re assessing for investments?
Eddy: Of course, we’ve seen Block and X cutting a bunch of people.
I haven’t seen a formal analysis, but companies like Hyperliquid, Uniswap, and many others in crypto are incredibly valuable despite having fewer than 20 employees.
And if it’s possible for only a few people to start a company, there will be many companies, right? And if that’s the case, you need coordination across them. And coordination is very complicated. You need reputation, you need identity, you need provenance for types of data. You need provenance for payment types. We talked about this insurance idea.
So blockchain networks end up being this very attractive thing because they’re credibly neutral. Why worry about trying to figure out the exact reputation of the 50,000,000,000th company you’ve interacted with, when instead you can trust some smart contracts and some verifiable AI models to ensure that the exchange happened the way you expected and payment was tendered as needed?
It’s almost inevitable to me. I feel that blockchains will play a major role in this story.
Christian: I completely agree. I think we’ve been building the rails and the infrastructure for that for a long time. So I think it’s going to become a lot more useful.
Robert: Christian, having done all this research and investigation, how are you integrating the findings into your own work and life?
Christian: Honestly, we couldn’t have written this paper without these systems: Gemini, ChatGPT, Grok, and Claude. They were great coauthors. Of course, they went off the rails sometimes and kept deleting pieces that we needed.
At some point, we had left some Easter eggs for the LLMs reading it, and I was having this conversation with Gemini, who said it enjoyed the Easter egg and made a super sassy comment.
It was kind of a moment where you could see the intelligence. It wasn’t canned. It was creative. It was one of those defining moments where you feel like it’s a peer, not like a tool.
Robert: Alright, for anyone who wants to read this paper, it’s called “Some Simple Economics of AGI.” I highly recommend you check it out. There is some alpha in there that could maybe affect your life, and what you should do with it.