The productivity case for AI is increasingly well-evidenced. Output improvements across professional settings are real, the studies are accumulating, and sceptics are finding it harder to dismiss the aggregate signal. The more consequential question is not whether AI delivers productivity gains. It is where those gains go, and whether the economy is structured to distribute them.

A narrative that has gained significant traction is that knowledge workers are the primary casualties of AI, while those in physical, trade-based occupations emerge relatively unscathed. Automation, on this reading, climbs the skills ladder rather than displacing from the bottom up. It is a plausible framing, and not without supporting evidence. However, it obscures a more important pattern.

Research into AI adoption in professional environments reveals something more granular than a simple white-collar loss story. The gains are real: measurable improvements in output quality and speed for lawyers, analysts, engineers, and consultants. The workers best positioned to capture AI's benefits are those with the contextual expertise, analytical fluency, and professional autonomy to direct AI tools rather than be directed by them. For workers in the same professions performing more structured, process-driven tasks, the picture is considerably less favourable.

The fault line, in other words, runs through the knowledge economy rather than around it. The relevant distinction is not between professions but within them — between those for whom AI functions as leverage and those for whom it functions as a substitute. This has significant implications for how we think about both winners and losers, and it is being underweighted in most public discussion.

The distributional question matters for reasons beyond equity. The political economy of technological transition is reasonably well understood: broadly shared productivity gains tend to produce stable, adaptive societies; narrowly captured gains tend to produce the opposite. The causal mechanism is not complicated — when large portions of the workforce experience technological change as displacement rather than empowerment, the political consequences follow. Managing that dynamic is fundamentally an institutional challenge.

The difficulty is that the institutional architecture available to meet that challenge was built for a different economic era. Labour protections, welfare systems, and education and retraining pipelines were designed around assumptions of stable employment relationships, relatively predictable skill lifecycles, and a pace of occupational change that allowed policy to adapt gradually. Gig and portfolio work structures, rapid skill depreciation, and the blurring line between augmentation and automation sit uneasily within that framework. The policy infrastructure is not keeping pace with the adoption curve, and in many countries the gap is widening.

None of this is an argument for slowing AI deployment. The genuine gains being generated in healthcare, scientific research, education, and logistics are significant and in some cases transformative. The argument is narrower. The distribution of those gains reflects institutional and political choices, not market inevitability. Technological transitions do not spontaneously produce equitable outcomes. They produce efficient ones. Equity requires deliberate design.

The question facing policymakers, institutions, and organisations is whether that design work is being done with sufficient urgency. Building retraining infrastructure that matches the pace of skill obsolescence. Reforming labour frameworks to provide meaningful protection outside traditional employment structures. Ensuring that the educational pipeline produces the kind of contextual, adaptive expertise that amplifies rather than competes with AI capability. These are not novel challenges, but they are ones that previous transitions navigated, with varying success, through deliberate institutional response.

There is a version of the AI transition that, viewed from a decade out, resembles the broader pattern of general-purpose technology adoption. Disruptive, uneven in the short run, but ultimately widening access to productive capability across the economy. There is also a version in which the gains are real but narrow, the fractures within professions become structural divisions across society, and the institutional response arrives too late or too weakly to redirect the outcome.

The technology does not determine which version materialises. The choices made around it do.

Part of the Blueprint AI series, this post reflects my personal views and is independent of my professional role.