
“Over the past year, my understanding of resilience in the age of artificial intelligence shifted from theory to lived experience. Not because of one breakthrough model or report, but because of what happens when AI stops being a future topic and starts shaping how people study, work and make decisions every day. I spent 2025 building a youth-led AI community connected to the global AI for Good network. What emerged quickly was a pattern I did not expect: People are not primarily afraid of AI. They feel extremely uncertain about their own place within systems that evolve faster than institutions, curricula and norms.
“AI is becoming infrastructure. It is embedded into productivity tools, education platforms, hiring workflows, customer service and public administration. AI is becoming an invisible part of the interface, shaping human behavior quietly through defaults, rankings, recommendations and automation.
“I saw this clearly at our flagship event ‘War for AI Talent,’ where students, researchers, founders and senior leaders from technology and consulting came together to discuss Europe’s AI skills gap. The dominant emotion was not fear of job loss. It was uncertainty. Students asked how to stay relevant when tools evolve faster than curricula. Employers asked how to hire for skills that barely exist yet. Everyone assumed AI would be present. The real question is whether humans will remain in control of how it is used.
“That pattern repeats in every venue where we meet the public. Staff and students at the AI literacy workshops we deliver at schools are enthusiastic and curious. They quickly learn how powerful AI tools are. But many struggle with a more difficult question: When should they not rely on them? Teaching prompt engineering is easy. Teaching judgment, verification and restraint is harder. This is where resilience begins to matter.
“Most people will embrace AI where it reduces friction. That is already visible. Writing assistance, translation, tutoring, planning and ideation tools are normalized because they are convenient and accessible. In the hackathons we co-organize, teams naturally lean on AI to move faster. Some use it as a thinking partner, questioning outputs and validating assumptions. Others treat it as a shortcut generator and struggle when systems hallucinate or miss context. The difference is not technical skill. It is the ability to stay resilient under uncertainty.
“Resistance will grow where AI feels imposed rather than chosen. This shows up most clearly around hiring, education and public services. In discussions we facilitated with students, companies and public-sector partners, concerns about opaque AI-based decision-making that affects individuals’ lives surfaced repeatedly. People are willing to use AI. They are far less willing to be silently evaluated by it. Resistance is rarely ideological. It emerges when agency feels threatened.
“Most people, however, will neither fully embrace nor actively resist. They will cope. AI becomes ‘how things work now,’ even if discomfort remains. This quiet adaptation explains why future satisfaction is likely to be mixed. Convenience increases. Trust lags behind.
“In an AI-saturated world, resilience is not about speed or toughness. It is about maintaining agency in environments shaped by probabilistic, opaque and always-on systems.
“Cognitive resilience is foundational. Over the past year, I repeatedly saw how quickly people outsource judgment once an AI system sounds confident. Resilience means knowing how to verify, contextualize and override AI outputs. It also means staying comfortable with uncertainty rather than treating AI as an authority.
“Emotional resilience is tested by acceleration. AI makes productivity look effortless and constant, raising expectations and fueling comparison. In mentoring conversations, anxiety about keeping up was often more present than excitement. Emotional steadiness requires practices that anchor self-worth beyond output and efficiency.
“Social resilience depends on human connection. AI can support coordination, but trust, belonging and accountability remain human achievements. One of the most valuable outcomes of building Young AI Leaders Linz was the community itself. People need spaces to compare experiences, voice doubts and develop shared norms for responsible use.
“Ethical resilience is the rarest capacity. It appears when someone asks not only ‘Can we build this?’ but ‘Should we?’ In the national and international AI governance discussions we join, ethical courage often comes from individuals who are willing to slow things down or push back. Those voices remain a minority, but they often help to shape better long-term outcomes.
“Resilience does not emerge without effort. AI literacy must focus on agency, not just on tool use. Human-in-the-loop practices must be protected in high-stakes contexts. Active human leadership and activism in communities and institutions matter because individuals adapt best when they work together to improve systems, not in isolation.
“There are new vulnerabilities to deal with in the age of AI. Over-reliance on systems that fail silently. Deskilling in reasoning and communication. Manipulation through hyper-personalized synthetic media. Emotional attachment to agents that simulate care without responsibility. Coping strategies include practicing ‘intentional friction,’ deliberately pausing before delegating judgment to a system, and sustained investment in core human practices such as deep reading, independent reasoning and real-world relationships.
“My main takeaway from the past year is simple. AI will reshape human life whether we are ready or not. Resilience in an AI-saturated world is not about resisting technology. It is about preserving agency, dignity and collective responsibility as we adapt. The future will be defined not by how capable AI becomes, but by whether humans retain the ability to steer it toward public benefit rather than quietly live inside its outcomes.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”