Rosa Daneshmandnia
Rosa Daneshmandnia is head of research and publishing for Young AI Leaders of Linz, Austria. This essay is her written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“We don’t just ‘use’ AI anymore. We delegate to it. That changes the definition of resilience. As AI systems begin to play a much more significant role in shaping our decisions, work and daily lives, the most important transformation in the next few years won’t be due to AI models getting smarter. It will be the fact that delegation has become the default. In the early large language model era, we asked AIs for outputs. In the emerging agentic era, we are increasingly asking AI to draft, decide, schedule, filter, purchase, screen, triage, recommend next steps, flag ‘risk’ and optimize workflows. When delegation becomes infrastructure, people no longer experience AI as a tool. They experience it as an environment. This is why the core resilience question is not, ‘Will AI change everything?’ Instead, it is, ‘Do we have the cognitive, emotional, social and ethical capacity to manage AI’s influence before it manages us?’

“How might individuals and societies embrace, resist and struggle with this shift?

“Many individuals will embrace AI because it feels like relief. Less admin. Faster work. Personalized support. Translation, tutoring and accessibility tools. Organizations will embrace it because everyone is afraid of being late to catch the wave. Some of this will be real progress.

“Resistance will also be rational. People will resist when they begin to see AI as the force behind job displacement, granular extraction of personal data, heightened manipulation of attention, perfected persuasion and unfair digital judgments. Whole communities will push back when they feel they are being scored or governed by systems they cannot question. Some resistance will be healthy pressure for transparency, limits, rights and safety. Some will be fear-based and chaotic. Both will happen.

“The biggest category, though, will be struggle. Most people will live in the messy middle: benefiting daily while slowly losing clarity about how AI is shaping their choices. Struggle will look like decision fatigue, distrust, quiet dependency and workplace confusion, especially when AI is embedded inside hiring, education, customer-support and public systems. This is exactly why resilience cannot be reduced to motivational slogans. Resilience has to become a design and management discipline.

“The ripple effects of digital change will not be purely positive or purely negative. They will be mixed and often simultaneous. There will be real benefits, but there will also be neutral effects like convenience without meaning and speed without quality. And there will be negative effects such as manipulation, deskilling, misinformation and fragile institutions. What determines the direction is not only model capability. It is the management capability around it.

“Merriam-Webster named AI ‘slop’ its 2025 Word of the Year, defining it as low-quality digital content produced, often in large quantities, using artificial intelligence. Research from BetterUp Labs, in partnership with the Stanford Social Media Lab, shows how AI-generated ‘slop’ can masquerade as productivity. Their workplace framing calls this ‘workslop’: output that looks productive but creates hidden downstream work of reviewing, correcting, redoing and escalating. The point is not the label. The point is what it reveals. Without strong management, AI can inflate noise faster than organizations can verify quality, and resilience breaks down where everyday decisions and actions happen: in time, in trust and in decision quality.

“So what capacities must we cultivate to ensure effective resilience?

“First, cognitive resilience. People do not need to become machine-learning engineers, but they do need calibration: knowing when AI is actually helpful, when it is confidently wrong, when it is biased and when it is optimizing for something other than truth. Resilience grows when verification becomes a normal habit: regularly asking for evidence, checking sources and understanding failure modes.

“Second, emotional resilience. One major vulnerability is ‘learned dependence,’ in which people stop thinking and allow the system to do it for them. Another vulnerability is chronic anxiety; reality can feel unstable when anything can be generated. We have to develop and deepen the emotional skills that protect agency: taking the time to calmly and intentionally pause, reflect and then choose, especially when under pressure and in a hurry.

“Third, social resilience. When synthetic content floods the information environment, the first casualty is shared reality. Resilience requires communities, workplaces, schools and institutions that can deliberate under uncertainty: disagree without collapsing into hostility, correct misinformation without humiliation and keep trust intact.

“Fourth, ethical resilience. When we allow AI to make decisions, saying ‘the AI decided’ becomes the fastest way for individuals to seemingly absolve themselves of responsibility. Resilience requires responsible human decision-making to remain a cultural rule: if humans deploy the AI, those humans must own the outcomes. AI should never become a convenient place to hide accountability.

“These capacities do not develop automatically. They require practice and resources.

“Resilience has to be built into our operational infrastructure, into our institutions; coping with this transformational change is not the responsibility of individuals alone. In practical terms, societies and organizations need clear decision rights outlining who is allowed to deploy an AI system and when, who can stop it, and who is accountable when it harms. There should be requirements for human review of AI systems that is real, backed by the authority, time and incentives to say no.
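
To make “decision rights” concrete, here is a minimal sketch in Python of how they could be encoded and queried. Every name in it (the `DecisionRights` record, the “resume-screening-v2” system, the role labels) is a hypothetical illustration, not something proposed in the essay:

```python
from dataclasses import dataclass

@dataclass
class DecisionRights:
    """Hypothetical record of decision rights for one AI deployment."""
    system: str
    may_deploy: set[str]   # roles allowed to put the system into production
    may_halt: set[str]     # roles allowed to stop it immediately
    accountable: str       # named owner who answers for harms

    def can_deploy(self, role: str) -> bool:
        return role in self.may_deploy

    def can_halt(self, role: str) -> bool:
        # Halting is deliberately easier than deploying:
        # anyone who may deploy may also halt.
        return role in self.may_halt or role in self.may_deploy

# Example: a hypothetical resume-screening system with queryable rights.
screening = DecisionRights(
    system="resume-screening-v2",
    may_deploy={"head_of_hr", "cto"},
    may_halt={"head_of_hr", "cto", "on_call_reviewer"},
    accountable="head_of_hr",
)

assert screening.can_halt("on_call_reviewer")      # reviewers can say no
assert not screening.can_deploy("on_call_reviewer")
```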

“We also need robust AI incident response because, in the same way cybersecurity matured through incident reporting and response playbooks, we need clear procedures for when AI systems fail. AI requires monitoring and measurement because drift, bias and error patterns are not philosophical concepts; they are feedback loops. This requires special training for engineers, managers and non-technical decision-makers, because many of the highest-impact AI choices are approved not by the people who build models but by the people who shape deployment and hold accountability.
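
A hedged sketch of that feedback loop, under assumed numbers (the baseline error rate, window size and tolerance are arbitrary), showing how human-reviewed outcomes could feed a rolling drift check; the `open_incident` escalation hook in the final comment is hypothetical:

```python
from collections import deque

class DriftMonitor:
    """Flag drift when the rolling error rate departs from a fixed baseline.

    A deliberately simple feedback loop: every human-reviewed prediction
    feeds the same statistic that decides whether to escalate.
    """
    def __init__(self, baseline_error: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_error        # error rate measured at deployment
        self.tolerance = tolerance            # allowed deviation before escalating
        self.outcomes = deque(maxlen=window)  # 1 = reviewer disagreed, 0 = agreed

    def record(self, was_error: bool) -> bool:
        """Record one reviewed outcome; return True if drift is flagged."""
        self.outcomes.append(1 if was_error else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                      # not enough evidence yet
        current = sum(self.outcomes) / len(self.outcomes)
        return abs(current - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_error=0.08)
# In production, each human-reviewed decision would call something like:
#   if monitor.record(was_error=reviewer_disagreed):
#       open_incident("model drift suspected")   # hypothetical escalation hook
```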

“What actions must we take right now to reinforce human and systems resilience?

“We have to stop treating AI only as innovation and start treating it as operational risk. Every meaningful AI deployment should have ownership, boundaries, monitoring and a fallback mode. We should build verification habits into workflows, because speed without validation becomes fragility. We should design for graceful failure, because AI will fail, and the question is whether failure becomes a small inconvenience or a systemic breakdown. We should protect the information ecosystem through provenance, labeling norms and anti-spam enforcement, because trust is a societal dependency. And we should make resilience equitable, because if only privileged groups get safer tools and better literacy, we will create a two-tier society: the AI-resilient and the AI-exposed.
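
As one illustration of a “fallback mode” and graceful failure, the wrapper below is a sketch assuming a hypothetical `model_answer` call; it degrades to a human queue when the model fails, so the failure stays a small inconvenience rather than a breakdown:

```python
def model_answer(query: str) -> str:
    """Hypothetical model call; stands in for any AI service."""
    raise TimeoutError("model unavailable")   # simulate an outage

def answer_with_fallback(query: str) -> str:
    """Graceful failure: any model error routes the task to a human queue."""
    try:
        return model_answer(query)
    except Exception as exc:
        # Boundary: the workflow continues even when the AI does not.
        print(f"AI unavailable ({exc}); queuing '{query}' for human review")
        return "queued_for_human_review"

print(answer_with_fallback("Is this invoice a duplicate?"))
```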

“Finally, what new vulnerabilities might arise and what coping strategies are important to teach and nurture?

  • “Automation bias will rise: the tendency to over-trust AI because we are in a hurry and/or it sounds confident. We must create a culture that prioritizes pause-and-verify routines and evidence-first processes.
  • “Deskilling will rise: a gradual loss of human competence because ‘we just let AI do it.’ Manual practice loops and periodic ‘AI-off’ drills will play critical roles in keeping skills fresh because we still do things ourselves.
  • “Slop inflation will rise, producing orders of magnitude more content with less meaning and less trust. We must invest in quality filters, provenance tools and norms that reward substance over speed.
  • “Manipulation at scale will rise through hyper-personalized persuasion and behavioral targeting. We must reinforce privacy boundaries, transparency and limits on sensitive inference.
  • “Accountability collapse will rise when responsibility evaporates across vendors, tools and models. We must require named ownership, escalation paths and enforceable governance.

“AI will shape our work and daily lives, but resilience will not come from pretending we can slow the world down. It will come from building the management capacity to steer AI’s influence with accountability, verification and human agency. The real risk is not that AI becomes powerful. The real risk is that we delegate power to it faster than we build the societal systems, skills and ethics to manage it.”

