
“AI systems will unquestionably play a far more significant role in shaping decisions, work and daily lives. The question is not whether this transformation occurs, but how it is governed – and who benefits. Over the next decade, people will not respond uniformly. Some will embrace AI as a tool for augmentation, using it to extend human cognitive capacity while retaining agency over goals and values. Others will resist, perceiving AI as a threat to employment, autonomy and meaning. And most will oscillate – welcoming convenience while fearing displacement. The deeper struggle is structural. As AI automates cognitive and coordination tasks, the economic surplus shifts from labor to capital ownership. If AI remains concentrated in the hands of a few corporations or states, we risk what Luke Drago and Rudolf Laine have described as the ‘intelligence curse’: a condition in which elites no longer require the consent or the productivity of the majority. This is not a technical problem – it is a governance problem. The question becomes: Can we design systems in which AI-generated value flows back to citizens, not just to owners and shareholders?
The Capacities We Must Cultivate
“Cognitive: Citizens must develop AI literacy – not to become engineers, but to understand what AI can and cannot do, where it errs and how it can be questioned. Critical reasoning about algorithmic outputs becomes as essential as reading.
“Emotional: Resilience requires confronting uncertainty. AI disrupts identity tied to work. We need emotional frameworks that decouple self-worth from employment and cultivate meaning through contribution, creativity, presence and care.
“Social: The capacity for collective deliberation becomes paramount. If AI concentrates power, only organized, informed publics can counterbalance it. Sortition-based assemblies, participatory governance, and structured dialogue are not luxuries – they are infrastructure for human survival.
“Ethical: We must cultivate the habit of asking: Who benefits? Who is harmed? Who decides? These questions must be embedded in institutions, not left to individual conscience.
Practices and Resources for Resilience
- Deliberative institutions that give citizens binding decision rights over AI deployment, not just advisory input
- Distributed ownership models where AI-generated surplus flows to commons, cooperatives, or universal basic income – not exclusively to shareholders
- Transparency infrastructure requiring open audits of algorithmic systems affecting public life
- Education systems that prioritize adaptability, collaboration, and ethical reasoning over narrow technical skills
Actions Required Now
- “Experiment with alternative governance architectures before path dependencies lock in. Once AI systems are embedded in infrastructure, retrofitting democratic oversight becomes exponentially harder.
- “Build prototypes that integrate production, governance, and value distribution by design – proving that coordination-based models can work under real economic conditions.
- “Create new institutions for citizen oversight of AI, drawing on proven deliberative methods (citizens’ assemblies, participatory budgeting) and adapting them to the speed and complexity of AI decision-making.
- “Resist the narrative that AI governance is purely technical. Alignment is not just a machine learning problem – it is a political problem requiring democratic input on values, priorities, and trade-offs.
New Vulnerabilities and Coping Strategies
“AI-powered manipulation of information ecosystems – deepfakes, synthetic media, personalized persuasion – threatens the epistemic foundations of democracy. Coping: Invest in verification infrastructure, media literacy, and institutional trust anchors.
“Rapid displacement without transition pathways creates social instability. Coping: Proactive distribution mechanisms (UBI, profit-sharing, retraining) embedded in production systems, not added as afterthoughts.
“Governance capture – those who control AI shaping the rules that govern AI. Coping: Sortition and deliberative processes that resist elite capture; decision rights held by randomly selected citizens rather than self-selected stakeholders.
“Loss of agency and meaning as AI handles more cognitive tasks. Coping: Reframe AI as a tool that handles drudgery, freeing humans for creativity, care, and governance. Cultivate identities rooted in contribution, not just productivity.
“The future is not determined by AI’s capabilities – it is determined by the structures we build around it. The risk is not that AI becomes too powerful, but that we fail to organize ourselves to govern it. The opportunity is that – for the first time – we now have tools capable of generating abundance – IF we design systems so they distribute it.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”