
“Over the next decade, AI systems will play a significantly larger role – but with far more continuity than rupture. The most illuminating historical analogue is not a particular prior technology, but the long arc from oral culture to written culture, to print, to near-universal literacy – and then, more recently, to computing. AI fits naturally as the next phase in that trajectory. Literacy dramatically changed what people could know, how knowledge could be stored and transmitted, who could participate in public life, and how institutions functioned. It enabled abstraction, coordination across time and space, and the accumulation of durable legal, scientific and administrative systems. Yet literacy did not ‘take over’ most human decisions. Instead, it became an ambient capability: indispensable in some domains, largely irrelevant in others, and unevenly distributed for a very long time. Its effects were profound but rarely felt as coercive or centrally managed. I expect AI to follow a similar pattern. Within the next 10 years, AI systems will influence a meaningful but minority share of daily decisions for most people.
“Their influence will often be indirect and infrastructural – helping draft, summarize, recommend, flag, optimize and predict – rather than directly controlling outcomes. As with literacy, the most important change will not be that machines decide for people, but that they reshape what people can reasonably know, evaluate and attempt.
“Seen this way, much of the anxiety around ‘keeping up with AI’ reflects a category error. Humans have always extended cognition beyond the individual mind: first through language, then writing, institutions, bureaucracies and computing systems. AI accelerates and thickens this extended mind, but it does not fundamentally alter the underlying pattern. It is therefore extremely likely that many people will experience genuine cognitive gains from AI – not because AI replaces thinking, but because it changes the cost structure of reasoning, synthesis and exploration.
“This perspective also explains why I am skeptical of attempts to quantify ‘resilience’ in isolation from institutional context. Asking what percentage of people will master various resilience dimensions raises a prior question: relative to what baseline, and under what policy regime?
“Literacy itself did not produce resilience automatically. It interacted with education systems, economic structures, political inclusion and public goods. Where those institutions were inclusive and well-functioning, literacy was broadly empowering. Where they were extractive or exclusionary, literacy often amplified inequality.
“The same will be true for AI. The cognitive and emotional capacities people need – judgment, skepticism, responsibility, agency – are not fundamentally new. Knowing when to interrogate AI is not categorically different from knowing when to interrogate bureaucracies, markets or expert systems. What matters most is whether these systems expand or constrain the real capabilities of the people and institutions using them.
“This leads to what I see as the most underappreciated point in current debates: The policies that best support resilience in an AI-rich world are largely AI-invariant. Economic efficiency, inclusive institutions, broad access to education, investment in public goods and governance structures that distribute power rather than concentrate it were good policy before AI and remain good policy regardless of how AI progresses.
“There is no special ‘AI resilience lever’ that substitutes for these fundamentals.
“AI’s most novel risks do not primarily come from misuse by governments or corporations, which – however imperfectly – remain subject to law, public pressure and accountability. The sharper risk is that AI dramatically lowers the cost of scale for organized criminal and adversarial actors, who operate outside those constraints.
“In that sense, AI does not so much introduce a new policy problem as radically intensify an old one: Societies that fail to suppress organized crime will see that failure amplified. The resulting harms are therefore not chiefly problems of individual over-reliance or cognitive weakness, but collective-action and governance failures – demanding institutional capacity, enforcement and international coordination, not moral exhortation.
“Finally, policy ambition matters. We should be bullish on AI as a complement to human labor and creativity and as an accelerant for innovation that can improve living standards and help address planetary-scale challenges. But current policy choices do not reliably incentivize that outcome. In particular, tax systems that heavily tax labor while favoring capital investment and labor-substituting automation risk pushing AI development in a direction that undermines broad-based resilience.
“Shifting taxation away from labor (below generous thresholds) and toward inelastic goods such as land, along with environmental and social externalities, would better align incentives with human flourishing – AI or no AI. This kind of reform is often dismissed as politically infeasible, but low expectations are themselves a source of fragility. The same society capable of deploying transformative technologies at scale should be capable of updating the policy frameworks that govern them.
“If we resist framing AI as either an existential rupture or a purely technical problem, a clearer picture emerges. AI is best understood as part of a long epistemic and institutional evolution, akin to literacy: uneven, powerful, imperfect and deeply shaped by policy choices. Whether AI ultimately expands or constrains human agency will depend less on the technology itself than on the quality of the institutions we build around it.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”