
“AI already plays a significant role in shaping human decisions, work and daily life. The real question is not whether further such transformation will occur, but how unequally, silently and normatively it will unfold, and whether human resilience will be cultivated or eroded in the process. AI systems transform decision-making environments. They filter information, prioritize options, configure – so to speak – incentives, and they increasingly function as what could be called our ‘cognitive prostheses.’
“Most people will adapt functionally, but not necessarily resiliently, because as this mediation deepens over the next decade, adaptation should not be confused with resilience. The latter requires agency, reflection and ethical orientation; the former is merely accommodative.
“At the individual level, responses to AI-driven change will likely follow three general patterns: acceptance, resistance and passive dependence. A minority will actively adopt AI as a tool for cognitive extension, deliberately cultivating co-intelligence and using systems to deepen reasoning rather than replace it. Another minority will resist, whether for ethical, psychological, or cultural reasons, attempting to preserve autonomy by minimizing exposure, or simply because they will not have the access that others have. The majority, however, will fall into passive dependence, externalizing judgment, memory and even moral evaluation to systems they do not fully understand, systems that may come to replace even their basic reasoning functions.
“I believe this asymmetry constitutes the main risk to resilience. AI amplifies existing inequalities in education, critical literacy and emotional regulation. Those who already possess solid cognitive and ethical frameworks will tend to benefit – e.g., the generation born before computers and the internet. Those who know only the digital world will become increasingly dependent. The result is not a collapse of human agency, but its stratification. Just imagine children whose entire education and life will be mediated by LLMs and AI.
“Cognitively, resilience in an AI-saturated environment requires more than digital literacy; it requires epistemic vigilance: the ability to question outputs, recognize uncertainty and maintain independent judgment under conditions of persuasive automation. If we as parents and educators do not succeed in explicitly cultivating these skills, convenience will dominate cognition. Hybrid intelligence will exist, but possibly in a superficial form – that is, efficient, but fragile.
“Emotionally, the challenge is more subtle, since AI systems reduce friction but also increase existential ambiguity. As work identities change and human singularity becomes less evident, anxiety, loss of purpose and diminished self-efficacy are likely to increase. The sense of achievement can become atomized or simply lost in rapid results without cognitive effort and lacking meaning. In this way, emotional resilience will depend on the ability to tolerate uncertainty without succumbing to technophilia or technophobia. This capacity is learned; it is not automatic.
“Socially, AI reconfigures cooperation by mediating trust, as algorithmic systems increasingly decide who is visible, credible, or worthy of attention. While they can improve coordination, they can also fragment shared reality, and in this case, resilience depends on maintaining human-centered institutions (education, deliberative spaces, professional standards) that preserve collective understanding beyond algorithmic optimization.
“Ethically, the greatest vulnerability is moral deskilling. When systems recommend actions regarded as neutral or optimal, responsibility shifts away from human agents. Ethical imagination and moral courage – already scarce – risk becoming even scarcer if they are not deliberately reinforced. Resilience requires resisting the normalization of moral abdication. Human beings must remain responsible even when decisions are partially delegated.
“What practices and resources can foster resilience? First, educational systems must prioritize metacognition, ethics and critical thinking alongside technical competence. Second, institutions must design AI systems that preserve contestability and explanation rather than opacity and behavioral nudging. Third, societies must normalize periods of disconnection and cognitive autonomy, treating attention as a finite human resource rather than an extractable good.
“Waiting for disruption to fully manifest guarantees reactive and inequitable responses. We must teach how to use AI, but at the same time also how to disagree with it, how to distance ourselves from it and how to govern it collectively. Otherwise, resilience will be framed as an individual coping strategy rather than a systemic responsibility.
“New vulnerabilities will emerge, of course: excessive dependence, attentional fragmentation and, I fear, the erosion of moral autonomy, although I hope to be mistaken. Therefore, coping strategies must include ethical reflection, emotional grounding and collective governance, not only personal productivity hacks.
“AI will not eliminate human resilience. But it will expose its limits. Whether resilience becomes a widely shared capacity or a privilege of a few depends less on technological progress than on the normative decisions we make now.
“Ultimately, the question of resilience in an AI-mediated world will not be technological, but ethical, since the systems we build will increasingly determine what we will be able to do and what we will come to expect of ourselves. If resilience is reduced to mere adaptability, humans will adjust, but at the cost of autonomy, depth and responsibility. If, instead, resilience is understood as the sustained capacity to think, feel, judge and act with integrity under conditions of uncertainty, AI may become an ally rather than a substitute.
“The future will not be determined by the development of machines, but by whether humans will be willing to cultivate the cognitive, emotional, social and moral capacities that no system will be able to meaningfully replace. Therefore, the work of resilience will have to begin now, as a deliberate commitment to preserving human agency in an era of delegated intelligence, not when it will have already become an ethical, epistemic and even ontological crisis.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”