
“Artificial intelligence changes how we work, learn, access services, consume information and make decisions. The most immediate concern is that AI can undermine individual and societal resilience: It can destabilize livelihoods, intensify surveillance, fragment trust and weaken democratic accountability. These risks matter because resilience – at its most basic – is the capacity to withstand shocks without tipping into fear, resentment or violence. Yet the story is not one-directional. AI can also strengthen certain forms of resilience. It can lower barriers to access, reduce cognitive overload, support learning and help institutions anticipate and manage crises. For many individuals – especially those with sufficient economic and educational resources – AI offers comfort, efficiency and a sense of security in an increasingly complex world. The difficulty is that both dynamics are unfolding at the same time – and the apparently positive effects may carry deeper long-term risk. In the worst case, AI can make individuals more resilient in a narrow, adaptive sense while weakening the capacities that make resilience ethically meaningful: freedom, human dignity, moral agency and civic courage. The question, then, is not simply whether AI increases or decreases resilience, but what kind of resilience it produces, for whom and for what purpose.
“Resilience is often framed as coping: staying functional under pressure, recovering quickly, adjusting to new conditions. Let us call this adaptive resilience. It is valuable. Without it, individuals break under stress and societies become brittle.
“But there is a second form – call it agency-based resilience: the capacity not only to adapt, but to evaluate, contest and reshape the conditions one is adapting to. Agency-based resilience respects the fact that freedom is more than comfort and security; it is the ability to judge what is acceptable, to refuse what undermines human dignity and personal freedom and to act individually and collectively to change course.
“Both dimensions of resilience are necessary for peaceful and democratic societies. A society with high adaptive resilience but low agency-based resilience may appear stable while drifting into systems of control, inequality or depoliticized complacency. Conversely, a society rich in critical agency but lacking adaptive capacity may exhaust itself and fracture under pressure. The distinctive challenge posed by AI is that it may increase the former while quietly eroding the latter.
“The obvious pathways through which AI can weaken resilience are well known:
- “Economically, automation and algorithmic management threaten security for many, especially in routine or precarious work, undermining dignity and long-term stability.
- “Cognitively and emotionally, AI-mediated information environments often reward speed, outrage and attention capture, weakening the attentional and emotional foundations of individual resilience.
- “Socially, pervasive data extraction and surveillance corrode trust, encouraging withdrawal rather than cooperation.
- “Institutionally, opaque AI systems weaken accountability and democratic legitimacy, leaving people unable to understand or contest decisions that shape their lives.
“More difficult – and more unsettling – is the opposite possibility: that AI may enhance individual resilience in ways that ultimately undermine freedom.
“AI’s most persuasive selling point is its promise for enhancing individuals’ comfort and security. It reduces friction. It anticipates needs. It promises personalization, optimization and seamless life management. In the short term, having fewer difficult choices, less cognitive load, more reliable services and better access to information can appear to genuinely strengthen individual resilience. But comfort has an ethical and political edge. Democratic life depends on individuals who are willing to invest effort in judgment, participation and sometimes resistance. Civic courage is rarely convenient. It requires time, attention and, often, the willingness to feel uncomfortable – because discomfort is frequently the signal that something is wrong.
“Here is the paradox: AI can make individuals more resilient to conditions that should not be endured. By quietly absorbing friction, AI may normalize practices that reduce agency – surveillance, automated decision-making, behavioral manipulation, the delegation of judgment to systems we cannot inspect. This is where normalization theory becomes relevant: step-by-step adjustments become ‘normal,’ not because anyone endorses the full trajectory, but because each increment seems tolerable, even beneficial. Over time, individuals adapt – often successfully – until they wake up in a world that no one explicitly chose.
“In other words, AI can enhance adaptive resilience while eroding agency-based resilience.
“Freedom is not only the ability to choose among options presented; it is also the capacity to shape the options, to question the terms of the system, to participate in setting priorities and to be answerable for decisions. This is why freedom is the basis of moral capacity: without the ability to judge and act, responsibility becomes hollow.
“AI threatens freedom in at least three interlocking ways:
“1) Delegation of judgment. When AI systems decide what is relevant, credible, risky, employable, or eligible, individuals practice less judgment themselves. Over time, this can erode the muscles of moral reasoning and practical deliberation.
“2) Erosion of motivational drivers. A crucial driver of agency is the experience of tension: frustration with injustice, discomfort with being treated as a number, anger at exclusion, unease at surveillance. If AI systems continuously buffer these experiences – making everything ‘work’ smoothly – people may lose the impetus to demand change. This is not hypothetical; political participation already competes with fatigue and convenience. AI can tilt the balance further.
“3) Diffusion of responsibility. AI systems enable ‘responsibility laundering’: harmful outcomes can be blamed on ‘the model’ or ‘the process.’ When responsibility diffuses, moral agency weakens. People become more likely to comply than to contest.
“This is the point where virtue ethics becomes relevant – not as an inward-looking doctrine, but as a framework for the capacities that sustain freedom. Virtue ethics emphasizes character traits and practical wisdom: the ability to judge context, to resist manipulation, to act courageously when it would be easier to remain passive. In AI-mediated environments, these virtues are not optional. They are the psychological and moral infrastructure of agency.
“Experiences such as frustration and moral unease have historically been catalysts for social change. If AI continuously buffers these experiences, individuals may remain calm and functional yet lose the impulse to demand and personally engage for change.
“At the societal level, the consequences follow directly from this individual over-adaptation. Democratic systems rely on citizens willing to invest effort in participation, deliberation and resistance. When individuals become highly adapted and comfortable, political engagement becomes costly and unattractive. Decisions about how AI should be used, for whose benefit and under what constraints are then left to experts, corporations, or administrative systems. Civic responsibility is replaced by managed compliance. Societies may become stable and secure, yet at the same time undermine freedom and human dignity – the two core values that differentiate humans from AI.
“In the worst case, we get a future that feels stable but is ethically degraded: rights are formally intact but practically weakened; participation exists but is performative; and citizens live in optimized systems they did not meaningfully choose. Then comes the collective question: How did this world come into being? The answer is that no one truly intended it, yet everyone adapted to it – step by step.
“If resilience is to serve human dignity and freedom, it must be redefined. Individual resilience must be understood not merely as stress tolerance, but as the capacity for agency under pressure: the ability to judge, to dissent and to act even when adaptation would be easier. This requires critical understanding of how AI systems steer attention and behavior; institutional conditions that preserve contestability and human judgment; and social norms that recognize discomfort not as failure, but as a signal that values are at stake. Not all friction is harmful; some friction is protective.
“Resilience also cannot remain unequally distributed. If AI-enhanced coping benefits primarily those already secure, while others bear the costs of disruption, social resilience will erode rather than grow. Economic security, access to education and meaningful avenues for participation are not secondary concerns; they are the infrastructure that allows individuals to remain agents rather than mere adaptors.
“Resilience is not an end in itself. It is meaningful only insofar as it preserves the ability of individuals to remain free and active moral agents, capable of collective self-determination, capable of saying, ‘If this is not the world we want, we will change it.’”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”