In 1943, Thomas Watson, chairman of IBM, allegedly predicted a world market for perhaps five computers. He almost certainly never said it, but the line persists because it captures something true about technological prediction: it is very often wrong, and it tends to be wrong in ways that reflect the predictor's present rather than the technology's future.
The AI prediction industry has been considerably more prolific than Watson, and not conspicuously more accurate. In 2016, a prominent researcher predicted that AI would replace radiologists within five years. Radiologists are still employed. In 2018, multiple reports predicted that autonomous vehicles would dominate urban transport by the early 2020s. They have not. The list of missed predictions is long enough to constitute a literature in its own right.
This is not a counsel of despair about AI's transformative potential. The prediction failures do not mean that AI is less significant than its proponents claim. They mean something more interesting: that the path from capability to consequence is not linear, not smooth, and not predictable from the technology alone. It runs through economic incentives, regulatory environments, social norms, political decisions, and the behaviour of millions of individual actors, all interacting in ways that produce emergent outcomes no single model can capture.
Economists have a term for this kind of system: complex. A complex system is not merely complicated: a jet engine is complicated, but its behaviour is in principle fully predictable from its design. A complex system is one whose behaviour arises from the interaction of many components in ways that cannot be reduced to the properties of any individual part. Markets are complex systems. Political systems are complex systems. The economy-wide diffusion of a general-purpose technology like AI is, emphatically, a complex system.
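The unpredictability is easy to demonstrate. In the classic threshold model of collective behaviour (Granovetter, 1978), each actor adopts a practice once the fraction of prior adopters reaches that actor's personal threshold. The short Python sketch below is offered purely as an illustration of emergence, not as a model of AI diffusion: two populations whose individual rules differ in a single actor's threshold produce aggregate outcomes that could hardly be more different.

```python
# Threshold-cascade model after Granovetter (1978): each actor adopts once
# the fraction of the population that has already adopted reaches that
# actor's personal threshold. Thresholds are fractions in [0, 1].

def cascade(thresholds):
    """Return the final number of adopters (the fixed point of the dynamic)."""
    n = len(thresholds)
    adopters = sum(t <= 0 for t in thresholds)  # zero-threshold actors start it
    while True:
        nxt = sum(t <= adopters / n for t in thresholds)
        if nxt == adopters:  # no one new adopts: the cascade has stopped
            return adopters
        adopters = nxt

# Two populations of 100 actors, identical except for one threshold
# (the second actor's, nudged from 0.01 to 0.02).
uniform = [i / 100 for i in range(100)]
perturbed = uniform.copy()
perturbed[1] = 0.02

print(cascade(uniform))    # 100: a complete cascade
print(cascade(perturbed))  # 1: the cascade never gets going
```

No property of any individual actor predicts the divergence; it lives entirely in the interactions. That is precisely why reasoning from the technology alone, or from any single actor's incentives, is so unreliable a guide to aggregate consequences.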
The implications for governance are significant. If the consequences of AI adoption cannot be reliably predicted, then a governance strategy built on prediction (wait to see what happens, then regulate) will always be lagging. It will regulate the last crisis while the next one is already developing. This is not a hypothetical concern. It describes, with some accuracy, the trajectory of digital platform governance over the past two decades. The harms of social media (to democracy, to mental health, to public discourse) were extensively documented years before meaningful regulatory action was taken, precisely because the dominant posture was reactive.
The alternative, designing for uncertainty rather than predicting our way through it, has a venerable intellectual lineage. John Rawls, in constructing his theory of justice, asked people to reason about social arrangements from behind a 'veil of ignorance': without knowing what position they would occupy in the society they were designing (Rawls, 1971). The thought experiment was not a prediction tool; it was a design tool. It helped to identify the principles that rational people would endorse if they did not know whether they would be advantaged or disadvantaged by particular arrangements. Applied to AI, the question becomes: what kind of AI-enabled society would we design if we did not know whether we would be a high-skill professional who benefits from AI augmentation, or a lower-skill worker whose job is automated away?
There is a practical objection: surely we need to know something about what AI will do before we can design responses to it? This is true. The point is not to ignore evidence or pretend that technological trajectories are irrelevant. It is to use evidence as input to design rather than as the basis for prediction.
In 1987, Robert Solow famously observed that you could see the computer age everywhere but in the productivity statistics. Our equivalent risk is that we will see the consequences of AI everywhere but in our governance frameworks: always arriving too late, always surprised by what we failed to anticipate. Stop predicting. Start designing.