
“Artificial intelligence systems will play a more decisive role in shaping human decisions, work patterns and everyday life. This influence will not be limited to discrete tools supporting human action but will progressively extend to the broader organization of social environments, epistemic practices and value structures.
“As argued in my work on AI ethics, this shift requires us to distinguish between AI as an instrument subject to local regulation and AI as a global transformative force capable of reshaping the human condition itself. The question of resilience, therefore, cannot be reduced to technical robustness or regulatory compliance alone. It must address how individuals and societies adapt to, resist or reorient themselves within a world increasingly structured by artificial agents.
“Societies will likely respond to this transformation through a combination of embrace, struggle and selective resistance. On the one hand, AI offers undeniable benefits in efficiency, safety and access to services. On the other, its pervasive integration risks eroding human agency, meaning-making and responsibility. Resilience, in this context, cannot mean passive adaptation to technological inevitability, but the capacity to shape the trajectory of change rather than merely endure it.
“At the cognitive level, one of the first capacities that must be cultivated is epistemic vigilance. AI systems – especially generative models – produce outputs that are often fluent, persuasive and seemingly authoritative, while remaining prone to error, bias and hallucination. Individuals must therefore develop the ability to critically assess AI-generated information, resisting both uncritical trust and reflexive rejection. This includes understanding the limits of AI competence, recognizing uncertainty, and maintaining human judgment in high-stakes contexts such as medicine, law and governance.
“Emotionally, resilience requires confronting a subtler challenge: the risk of existential displacement. If AI systems increasingly outperform humans in tasks traditionally associated with skill, creativity, and expertise, individuals may experience a loss of purpose or usefulness. Cultivating emotional resilience thus involves preserving a sense of agency and self-worth that is not exclusively tied to productivity or to favorable comparison with machines. This is particularly important in scenarios of partial or full automation, where traditional work-based identities may weaken.
“Socially, AI transforms relationships by mediating communication, decision-making and even intimacy. From algorithmic management to chatbot companions, artificial agents increasingly occupy relational spaces. Resilience at the social level requires reinforcing human-to-human interaction, shared practices and collective deliberation, rather than outsourcing social coordination entirely to optimized systems. Without such reinforcement, there is a risk of social fragmentation, dependency on algorithmic validation and the erosion of communal norms.
“Ethically, the challenge is twofold. In the short term, societies must continue to strengthen principles such as transparency, fairness, accountability and responsibility in AI systems. However, long-term resilience depends on extending ethical reflection beyond instrumental harms to the structural effects of AI on human agency, power distribution and meaning. Ethical frameworks must therefore anticipate not only what AI does, but what it makes humans become.
“Concrete practices and resources are essential to support this form of resilience. Education plays a central role, but not merely in the form of technical AI literacy. What must be taught is a form of ‘existential literacy,’ the capacity to understand how technologies reshape goals, values and identities. This includes interdisciplinary education that integrates ethics, philosophy, social sciences and technology studies, enabling individuals to situate AI within broader narratives of human flourishing.
“Institutionally, resilience requires deliberate governance choices. Actions taken today, such as embedding human oversight, preserving spaces for meaningful human work and limiting full automation in certain domains, will shape future possibilities for agency. These measures should not be interpreted as opposition to progress, but as strategies to prevent a net loss of human significance in AI-saturated environments.
“At the same time, new vulnerabilities will inevitably arise. These include over-reliance on automated decision systems, deskilling, concentration of technological power, and psychological dependency on artificial agents. Teaching coping strategies, therefore, becomes crucial: learning when to delegate and when to reclaim control, how to disengage from algorithmic mediation and how to tolerate inefficiency and uncertainty as constitutive features of human life.
“Ultimately, resilience in the age of AI is not about restoring a pre-digital past, nor about surrendering to technological determinism. It is about cultivating adaptive capacities – cognitive, emotional, social, and ethical – that allow humans to remain authors of their lives within environments increasingly shaped by artificial intelligence. This requires action now: not only better AI systems, but better-prepared humans and institutions capable of steering transformation rather than being reshaped by it alone.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”