
“As AI systems start shaping our decisions, work and daily lives, the big question is not, ‘Will we adapt?’ Humans adapt to anything. We adapted to public transport, email and social media (and look how that went). The question is how we’ll adapt to AI, what kinds of resilience we’ll celebrate and which ones we’ll quietly practice while pretending we’re still in control. Here’s the unfashionable truth: Perhaps the most common form of resilience in the face of overwhelming change is not heroic reinvention. It’s cognitive triage. It’s narrowing the aperture. It’s going with the flow and – at least selectively – stopping thinking. And we ignore that mode of resilience at our peril, partly because it works disturbingly well. Let us start with the three broad responses: Embracing, resisting and struggling.
“Embracing is easy to spot, because it comes with a lot of LinkedIn prose. Some people will adopt AI because it’s useful, because it saves time, because it makes them feel competent and because it reduces friction in an already over-frictioned world. Some will embrace it joyfully. Others will do it the way people embrace corporate wellness programs: with dead eyes and a forced smile.
“Resisting will happen too, but rarely as grand Luddite Theater. It’ll be quieter: refusing to use certain tools, demanding human review, building no-AI zones in education, healthcare, hiring, courts, journalism. Some resistance will be principled. Some will be status protection, because nothing says my expertise matters like insisting the machine isn’t invited.
“And then there’s struggling, which is where most people will live most of the time. Not because they’re weak, but because transformative change is cognitively expensive. Every new system demands attention, learning, judgment and constant recalibration.
“The human brain, this sloppy and finicky meat computer, does not scale gracefully with infinite novelty. When the environment becomes too complex for real-time deliberation, resilience often gives way to automation. We build routines so we don’t have to decide. We defer so we don’t have to argue with uncertainty every morning before coffee. That’s the cognitive triage part. And AI is basically triage-as-a-service. So, what capacities do we need to cultivate for effective resilience, cognitively, emotionally, socially and ethically?
“Cognitively, the key is not more information. We already have enough information to last several civilizations. The key capacity is judgment: Knowing what matters, when to trust a system, when to doubt it and when to stop and think even if the tool and your brain are begging you to keep moving. We need to apply our calibration skills – good judgment – when facing AI outputs that may not reflect the truth. Plausible text, images or recommendations may in fact be fabrications, deceptions or hallucinations.
“Emotionally, we need tolerance for ambiguity and for bruised egos. AI will be a competence disruptor. It will make some people feel suddenly powerful and many feel suddenly replaceable. Resilience here isn’t just mindfulness and breathing exercises (though sure, inhale, exhale, capitalism abides). It’s a steadier identity: I am not my output, I am not my speed and I don’t have to win a race against a system that doesn’t get tired.
“Socially, we need trust and coordination, both of which have become more difficult as contemporary life is optimized for individual performance metrics and quiet resentment. If AI becomes embedded in institutions, resilience will depend on shared norms: What we accept, what we contest, what we audit, what we prohibit. You can’t personal-productivity your way out of a society-wide shift in decision-making infrastructures. You need communities, unions, professional associations, school boards, regulators, peer networks – actual human groups doing the messy work of collective sensemaking. Ethically, we need something even rarer than judgment: responsibility.
“AI will diffuse responsibility by design: ‘The AI suggested it’ is the new ‘I was just following orders,’ only with better UX. Resilience requires keeping accountability attached to humans and institutions, not to tools. That means insisting on explainability where it matters, documentation, traceability and appeal mechanisms. Succinctly put, the ability to say, ‘This decision harmed me and here’s who answers for it.’
“What practices and resources will we then need? One practice is deliberate friction. Think of it as keeping the cognitive muscles alive. If you outsource everything, you don’t become freed. You become dependent. Create moments where AI is not allowed to bulldoze decision-making. Human review is not a checkbox; it’s a real pause. Another is maintaining craft zones – spaces where people do work without automation – not because they are efficient, but because they preserve skill, taste and agency.
“Another practice is AI literacy that goes beyond knowing how to prompt. People need model literacy, i.e., to understand what these systems can and cannot do, what biases look like in outputs, how incentives shape deployment and how errors propagate. Most people assume that the resource here is education. Education matters, yes, but resilience also requires institutional capacity: funding, auditors, watchdogs, public-interest tech expertise and leaders who don’t treat governance as a vibe.
“We need to normalize the conversation about applying cognitive triage because the most likely resilience response for many people is going to be sedative outsourcing. They’ll let AI write the email, then the report, then the performance review, then the decision rationale, until their job becomes clicking ‘approve’ on systems they no longer understand. They will look resilient, because the outputs keep flowing. The dashboards will glow. Everyone will applaud productivity. And agency will quietly drain away.
“We will face new vulnerabilities: Dependency (skills atrophy), deskilling (loss of judgment), manipulation (personalized persuasion at scale), brittle systems (cascading errors), inequality (some get augmentation, others get automation) and moral distancing (harm without felt responsibility).
“We will also face simple exhaustion from living in a world in which every interaction is mediated by recommendation engines and synthetic help. It’s akin to being trapped in a mall where everything is trying to assist you whether you like it or not.
“Thus, resilience in an AI-shaped world won’t just be about bouncing back. It will be about not vanishing while everything keeps running. The most dangerous kind of resilience is the kind that looks like stability but is actually surrender, because it feels good in the moment and empties the room over time. That’s why we need cognitive triage, yes, but also the wisdom to know when triage becomes abdication.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”