Why the Smartest AI Bet Right Now Has Nothing to Do With AI
The Bottleneck Economy Thesis
Edited Transcript — Original Source: Video Presentation
The Abundance Narrative at Davos
A week or so ago in Davos, Switzerland, Elon Musk told the World Economic Forum that we're approaching "abundance for all." Ubiquitous AI, ubiquitous robotics, everything's going to be great—an explosion in the global economy "truly beyond all precedent." He recommended we not save for retirement. Meanwhile, Dario Amodei predicted half of white-collar jobs would disappear, but apparently that's good because the abundance is just going to be everywhere.
The abundance narrative was everywhere at Davos. It echoed through every panel, every fireside chat, every op-ed, every private conversation. But I want to suggest that the abundance economy is probably the wrong frame for most of us to think about the next few years. Instead, we should think about the bottleneck economy. It's much more practical, much more likely to get you employed, and much more likely to help you as a builder or company leader find ways to succeed in the AI economy.
The $4.5 Trillion Asterisk
Cognizant released telling research on AI, claiming that it could—and "could" is the keyword—unlock four and a half trillion dollars in US labor productivity. But there was a massive caveat that no one paid attention to: the value will only materialize if, quote, "businesses can implement it effectively."
That is the biggest asterisk I've ever seen.
Most businesses, according to Cognizant CEO Ravi Kumar, have not yet done the hard work. That's the gap between the abundance narrative that sounds so good in Switzerland and the reality. It's not about the capability of models—it's about implementation. It's about value capture. The AI already exists, but the trillion-dollar value that people like to talk about doesn't just show up and flow automatically. This is not the fountain of youth.
This is the story everyone is missing when they debate AGI narratives. The interesting question is really not whether AI creates abundance—it does. The interesting question is: where are the bottlenecks? Because that's where value concentrates.
Understanding Bottlenecks
AI is creating an unprecedented abundance of intelligence, but that just means the bottleneck flows downstream. That's where the leverage lives, and that's where fortunes will be made or lost in the next decade. Abundance is super hand-wavy. I'm not interested in hand-wavy. Bottlenecks are specific, and specificity is where strategy happens, where careers happen, and where companies happen.
A bottleneck is the binding constraint in a system. It's not just any constraint—it is the high-leverage binding constraint, the one that determines actual throughput in the system. If you improve anything else, you've accomplished nothing because you didn't improve the bottleneck. But if you improve the bottleneck just a little bit, everything will move.
This is basic systems thinking, and it's also something that most people ignore. They optimize for whatever's visible, whatever's comfortable, whatever they're already good at. They work harder instead of differently. They add capacity where there's already lots of capacity in the system, and they ignore the choke point because that's been really painful to view and consider and address.
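The throughput logic behind this can be made concrete. A minimal sketch in Python (the stage names and capacity numbers are illustrative, not from the talk): model a pipeline as a set of stage capacities. System throughput is the minimum capacity, so adding capacity anywhere except the bottleneck changes nothing, while a small improvement at the bottleneck moves the whole system.

```python
def throughput(stages: dict[str, int]) -> int:
    """Units per hour the whole system can deliver: the minimum stage capacity."""
    return min(stages.values())

def bottleneck(stages: dict[str, int]) -> str:
    """The stage that currently binds throughput."""
    return min(stages, key=stages.get)

# Illustrative capacities, in units per hour.
stages = {"design": 120, "build": 40, "review": 90, "ship": 70}

assert bottleneck(stages) == "build"
assert throughput(stages) == 40

# Doubling a non-bottleneck stage accomplishes nothing...
stages["review"] = 180
assert throughput(stages) == 40

# ...while a modest improvement at the bottleneck lifts the whole system.
stages["build"] = 55
assert throughput(stages) == 55
```

The point of the sketch is the asymmetry: the "review" upgrade is wasted effort, while the smaller "build" upgrade raises system output by nearly 40 percent.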
Historical Precedent: Corporations as Bottleneck Dissolvers
The history of the corporation illustrates this perfectly. Every dominant organizational form emerged to dissolve a specific bottleneck.
The Dutch East India Company solved the capital lockup problem of multi-year oceanic voyages. Railroads cracked the energy constraint on overland transport. Banks emerged to allocate capital across time. Stock exchanges aggregated capital at scales that exceeded any private fortune. Walmart solved the information bottleneck in retail supply chains—just knowing what was selling where and getting it there before stockouts.
The pattern is consistent: whoever solves the binding constraint captures disproportionate value. Everybody else participates in the abundance that's created.
The AI era absolutely has its own bottlenecks, and they're not the ones most people are watching.
Bottleneck One: Physical Infrastructure
The binding constraint on AI capability is increasingly atoms, not bits.
Jensen Huang told Davos that AI needs more energy, more land, more power, and more trade-skilled workers. Contemporary hyperscale data centers consume 100-plus megawatts. Training a single frontier model can require sustained exaflops of compute for weeks. The electricity demands are approaching those of small nations in some cases.
This matters because physical infrastructure operates on very different timelines than software. You can ship a new model in months if you have the compute, but building a data center to run it at scale—that takes moving atoms around, that takes time. Permitting alone can take years in some cases. Expanding grid capacity is even harder. Google recently shared that they are bottlenecking on the ability to establish connections to the grid.
This is not the only bottleneck in the system, but it's a good example of the specific upstream bottlenecks constraining hyperscalers' ability to build right now. The result is a structural wedge between what's technically possible and what is deployable today. Capability sprints ahead while infrastructure plods.
We're seeing this also with the memory crisis, where DRAM prices are skyrocketing because there's not enough memory to go around. A model can exist in potential, but the physical substrate to run it at scale is what's required to deliver value.
Who captures value from this gap? The joke is, it's always Jensen and Nvidia, and that's not entirely wrong. But it's also more than that. It's whoever can navigate the physical constraints faster—who can pick the better site, who can get faster permitting, who can do more efficient construction, who can do smarter energy sourcing.
This is not a temporary bottleneck. This is structural. The companies that understand this are securing power purchase agreements, advanced memory purchase agreements, locking up construction capacity, and building relationships with utilities years in advance. The companies that don't are assuming compute will magically appear.
The chip supply chain is even more constrained. TSMC and a handful of other fabs control the production of advanced semiconductors. Packaging, testing, and high-bandwidth memory all have their own separate bottlenecks. Nvidia's market position isn't really about better chips—it's about having chips at all when everyone else is capacity-constrained. The hardware advantage compounds because access to compute determines who gets to train the next generation of models, who gets a seat at the table.
The physical layer creates an opportunity for an entirely different kind of company—one we normally don't think of as an AI business. Someone has to build these facilities, someone has to provision the power, someone has to manufacture the cooling systems, install the racks, connect the fiber. This is what Jensen is calling "high-quality jobs," because he can't fill enough of them, and neither can any of the other hyperscalers. He says salaries for trade-craft jobs in these spaces have nearly doubled, and I'm not at all surprised.
The abundance of AI at the application layer depends on scarcity being resolved at the physical layer, and that resolution means people.
The geographic distribution matters too. Data centers need stable grids, friendly permitting environments, and access to cooling—whether through climate or water. This means certain regions effectively become strategic assets. Local politics become unexpectedly relevant to the trajectory of AI. The infrastructure to build AI—the AI that we have in our pocket and assume is global—that infrastructure lives locally.
Bottleneck Two: The Trust Deficit
When Demis Hassabis spoke at Davos, his biggest concern wasn't technical. It was "the loss of meaning and purpose in a world where productivity is no longer the priority." He also worried that we lack "institutional reflection" about AI.
What he's really saying is these are coordination problems, and coordination runs on trust. He's worried about trust.
Consider what happens when anyone can get sophisticated AI and generate whatever they want at the touch of a button. Text, images, video, code—all become cheap to produce. The cost of generation collapses, but the cost of trust doesn't get cheaper. If anything, trust gets harder because synthetic and authentic are becoming indistinguishable.
Every piece of content could be fabricated. Every credential could be gamed. Every piece of information might be generated to manipulate you. When you can't distinguish the signal from the noise, you're overwhelmed as a human, and you look for someone to trust.
Trust is the infrastructure of coordination. When I trust that a counterparty will honor a commitment, I don't need to write every contingency into legal language. When I trust that a credential signals competence, I don't need to administer all of my own tests. When I trust that published information is accurate, I don't need to verify it independently. Trust reduces transaction costs. It's the trust in the system that makes coordination possible.
Now imagine that trust degrading. You don't have to imagine it—you see it and feel it. Transaction costs tend to rise across the entire economy. Deals take longer, verification layers multiply, everything gets harder.
Who captures value here? Whoever can mediate trust. The institutions that can verify, authenticate, and certify. The platforms that develop reputations for signal in a world of noise. The networks where track records are visible and accountability actually exists.
We're kind of looking for trust banks in the 21st century—essential infrastructure that everyone can rely on, controlling a scarce resource that must be accumulated over time and can be allocated across different uses. The parallels between trust and capital are definitely thought-provoking.
Bottleneck Three: The Integration Gap
Cognizant's research points to something specific: the value is conditional on implementation. Four and a half trillion dollars sitting there, chained up because organizations can't figure out how to use AI effectively.
This is the integration bottleneck. AI has general capability but no specific context. And after a couple of years of corporate implementation, we know the pattern: a general capability works well as a tool for individuals, but without deliberate work on the company's part, adoption dies at the team level. It does not go anywhere.
A general AI can write code, but it doesn't know your codebase. A general AI can draft strategy, but it doesn't know your competitive dynamics. It can talk about board politics generally, but it doesn't know your board. It can talk about product strategy for someone in your category, but it doesn't know you.
The gap between "AI can do this" and "AI does this usefully right here" is four and a half trillion dollars.
Bridging it requires context that is often tacit: context embedded in practices and relationships, not just in documents. The person who's been at the company for 20 years knows things that aren't written down anywhere. The AI doesn't. That knowledge is not promptable.
The interface between general AI capability and specific organizational reality is where value gets lost or captured. Some companies are going to figure out how to solve this integration problem and unlock massive productivity gains by tying AI into their workflows. Others are going to deploy AI tools on the side that sit unused or, worse, get actively misused, generating outputs that look deceptively productive but don't connect to anything that matters.
The difference isn't the AI or the tool—the AI is increasingly a commodity. The difference is the organizational capacity to integrate.
Who builds that capacity? That's not obvious. Maybe it's a new category of consultancy that specializes in AI-org fit. Maybe it's internal roles that don't exist yet—people whose job it is to translate between what the business needs and what AI can do. Maybe it's software that encodes organizational context in ways that make AI outputs more relevant. Whatever the form, this is a bottleneck, and bottlenecks are where value concentrates.
Bottleneck Four: The Coordination Problem
The coordination problem is broader than trust. AI doesn't magically dissolve the challenge of getting humans to work together. It doesn't make them align magically. It might make coordination even harder.
When anyone can generate sophisticated arguments for any position, groups have even more trouble reaching consensus or alignment. Larry Fink's warning at Davos was pointed: "If AI does to white-collar workers what globalization did to blue-collar workers, we need to confront that reality directly."
It's comforting for him to say that, sitting in his chair in Davos. But he's describing a coordination problem: how do we actually share the gains from AI in ways that don't trigger social disruption? That's a question of human alignment. And really, no one at Davos has those answers—everyone just wanted to talk about it over cocktails.
The IMF managing director said a tsunami was hitting the labor market and 40% of jobs globally would be affected, and "we don't know how to make it inclusive."
The people who are closest to knowing how to put AI and jobs together aren't the ones going to Davos. They're the ones actually building workflows where AI and people work together. They don't get those invitations.
Individual Bottlenecks: The Fractal Principle
Everything above applies to individuals too. The bottleneck principle is a fractal principle. You are also a system with binding constraints. Your output, your impact, and your leverage are functions of which bottleneck you're solving and whether you're optimizing the right constraint.
The old individual bottlenecks are dissolving. Access to information is abundant. Access to tools is cheap. Skill acquisition is rapidly getting easier. It used to take five years or more to become a proficient programmer; AI compresses or eliminates those runways.
Dario Amodei noted at Davos that his own engineers no longer program from scratch—they supervise and edit the work of models. This is something that's come out of OpenAI as well, and we're hearing it over and over again from extremely experienced engineers who are now saying they don't really touch code.
This is disorienting if your identity was built around skills that are commoditizing, like programming. But disorientation is not a strategy—not for your career or mine.
The question is: where are the new individual bottlenecks?
I wasn't happy with what the Davos participants said. They asked lots of questions and didn't have answers. Hassabis's advice to young people was to "become incredibly competent with AI tools." That's a throwaway line, and it's not a very good one.
New Individual Bottlenecks
Taste and Judgment
Tool fluency is table stakes. The constraint shifts to what you do with those tools. Taste and judgment become really critical. When generation is cheap because people have all those tools, the curation of what's good is expensive. Knowing what to make, when to stop, what's good enough versus what's actually good—these are capacities that still take a lot of time to learn.
The AI can generate a hundred options, but knowing which option is right is still human terrain.
The challenge is that taste develops slowly while AI devalues output. If you spend three years developing good taste in design and AI makes okay design a commodity before you can capitalize on your extra 10% or 20% of taste, you end up losing a race you didn't know you were running.
I feel and hear that frustration from a lot of early-career folks right now. The window to develop good taste is getting narrower, and the people who are surviving and thriving are narrowing their focus earlier. It used to be that you developed taste by starting broad and discovering how to narrow over time as you learned. These days, the folks I see with extraordinary taste are diving in deeply on one thing and rapidly pushing past the edge of where "AI good enough" is acceptable.
We all know AI can solve front-end design in many ways, but if you want extraordinary design, people are still turning to humans who have extraordinary taste. That kind of dynamic is going to persist in a lot of different corners of the economy and supply a lot of different jobs.
Problem Finding
Problem finding eclipses problem solving. AI solves well-specified problems with increasing fluency. But specifying the right problem and framing it right—that remains very, very human.
What should we build? What is wrong here? Have I had time to think about it? What question, if answered, would unlock everything else?
Our education system has largely optimized for problem solving, and the market is increasingly rewarding problem finding. The skill increasing in value is not execution—it's direction-setting. It's a management skill.
Institutional Knowledge
Context and institutional knowledge are becoming moats for individuals in the way that data is becoming a moat for companies. AI is general; usefulness is specific.
The person who understands why the organization really operates the way it does, what the stakeholder actually wants beneath what they're saying—that tacit knowledge is very hard to replicate and increasingly valuable.
This creates a strange dynamic. Juniors who would historically have accumulated context through years of apprenticeship now face a compressed path. Why spend five years learning how the organization works when AI can help you skip the grunt work? But the grunt work was also where that context got absorbed. The implicit knowledge that made senior people really valuable often came from thousands of little exposures that never happen if AI handles all the tasks.
How do you develop institutional knowledge without that slow accumulation? I think it still takes slow accumulation, and people are trying to speed-run it and they're going to learn that the hard way. No one has a better answer yet. There is no fast-forward to 20 years of deep experience in a domain.
Execution and Follow-Through
Execution and follow-through are emerging as a binding constraint for many. I said that solving problems was going out of style and finding problems was in style. But there's an element of follow-through that we still see as a bottleneck.
AI can generate a lot of plans. It can generate a workout plan for me tomorrow, but I have to show up to the gym. Turning any of these AI-generated plans into reality requires a human to decide and commit and persist and navigate politics, to hold people accountable, to keep going when things get hard.
Execution has always been underrated because it's much less legible than ideation. People love to ask about Steve Jobs's brilliant mind when he created the iPhone; they don't ask about his relentless execution, like calling Corning and pushing them to produce the glass he knew the iPhone needed. A brilliant strategy document is visible. It might get you a promotion in some companies. But the grinding work of implementation, like Jobs calling Google to say the yellow in Google's "o" looked terrible on the iPhone and that his engineers would be at their door to fix it: that is not a strategy document.
Tolerance for Ambiguity
Tolerance for ambiguity separates those who thrive from those who freeze. The environment is shifting really fast. Best practices are shifting all the time. People are desperate for stable ground in that world.
The constraint you face is actually your ability to metabolize change. How much uncertainty can you hold in a rapidly changing world without freezing, while continuing to execute and follow through on a longer-term perspective? People who master that balancing act are in huge demand.
The Leverage Shift
All of this adds up to a leverage shift. The old model of talent development was super linear: you acquired skills, traded your time for money, let it accumulate slowly.
The new model has a really different shape. Some individuals are discovering next-level leverage through AI augmentation—not because they work harder, but because they've identified their bottleneck and directly dissolved it. Maybe a developer was bottlenecked on boilerplate. Maybe a strategist was limited by analysis bandwidth. Whatever it is, they found the constraint and removed it and unlocked capacity that was latent.
Most of us are not finding that leverage for ourselves because we are optimizing against the old pre-AI constraints. We're still trying to prove we have the skills when the skills are commoditizing.
The diagnostic question for each of us is deeply personal: What is constraining my output right now? What is the actual binding constraint today?
For some of us, it is tool fluency because we haven't genuinely integrated AI into a workflow. For others, it's taste. Maybe it's problem finding for you. The bottleneck is going to be specific to you. Solving it requires first honesty about what's actually holding you back.
Conclusion: The Real Question
I keep going back to Davos and the abundance narrative that dominated there. It feels clangy, out of touch.
The conditional is doing a lot of work in these predictions. Yes, the capability might exist—I increasingly don't doubt that. But the value captured depends on solving bottlenecks that are organizational, institutional, physical, and social—not technical. And that is hard work. That is hard human work.
The businesses and people that are going to thrive in the next 10 years are going to be the ones that correctly identify where scarcity has migrated to—into physical infrastructure, into trust, into integration, into coordination—and build systems and careers out of addressing those constraints.
Intelligence is getting cheaper. The promise of abundance is absolutely real. AI is going to keep getting smarter. Cognitive output is going to keep getting easier to produce every single month.
But abundance doesn't eliminate scarcity. Abundance shifts where scarcity lives, and we haven't been honest about that.
The question isn't whether to believe in the coming abundance as an article of faith. No, no, no. The question is: where are the bottlenecks, and are you positioning yourself and your business to solve them?
That's really the only question that matters, and it doesn't get enough airtime.
End of Transcript