
“What skills or practices will help us stay resilient as AI reshapes work and life? Maybe people will look to algorithms to optimize everything – including just how much fat we need in a system to reach a desired level of redundancy. AI will do this – deploying its probabilistic genius and maybe replacing us due to our inability to deal with probabilities. This sounds reasonable until you realize that optimizing for resilience metrics isn’t the same as building actual resilience. You can hit every measurable target – backup systems in place, redundant pathways established, risk scores minimized – while still being fundamentally fragile because you’ve optimized for the wrong things. The metrics capture what’s easy to measure, not necessarily what matters when systems actually fail. So, I wonder whether resilience is something we can train or optimize at all. It might be closer to a philosophical stance: the capacity to care about things that resist codification.
“A conventional approach might treat resilience as capabilities to develop – adaptability, learning agility, emotional intelligence. But those are just more things to quantify and optimize. AI could get good at those, too. An alternative view is that resilience is the capacity to keep caring about things that can’t be captured in data or resolved through optimization; the ability to operate in genuine uncertainty rather than accepting AI’s often-false certainty.
“AI’s core promise is reducing uncertainty. It offers optimal decisions, maximum expected value. But it smuggles in a dangerous assumption: that uncertainty is always a problem to be solved rather than a condition to be navigated. Some questions don’t work that way. What career should I pursue? How should I raise my children? These aren’t optimization problems. They’re questions on which reasonable people will always disagree because the disagreement is about values, not facts.
“Ambiguity is where human agency lives. When something can be fully specified and measured, it can be automated. When it remains irreducibly uncertain, when multiple frameworks give different answers, when context matters in ways that can’t be standardized – that’s where humans still have meaningful work to do.
“AI offers people what appears to be an escape from uncertainty. They use AI to make decisions less ambiguous. They let it quantify what matters. They accept its simplified metrics as proxies for the messy, complicated values we actually care about. My watch tells me I’ve closed my exercise rings, so I feel accomplished. This seems much simpler than grappling with what living well means for me specifically. Using the proxy is easy. And, if I’m not careful, I’ll organize my life around closing rings rather than around the value the rings were supposed to represent.
“Scale this up to AI making recommendations about what job to take, what neighborhood to live in, who to maintain relationships with. The recommendations will be data-driven and probably pretty good on average. But ‘pretty good on average’ isn’t the same as right for you specifically, given values that can’t be fully articulated even to yourself. Personal AI assistants and bots will promise that you are special – as you indeed are – but they will be limited in their ability to escape the average, just as you will be limited in your ability to escape their sycophancy.
“The real vulnerability isn’t that AI will give bad advice. It’s that the advice seems good enough that we stop doing the hard work of figuring out what we really need to know or what we really care about. Students use AI to outsource the process of discovering what they should know, what they should think. But when you struggle to articulate an argument, to figure out what evidence matters and why, that struggle is how you discover your own intellectual stance. Skip it, and you skip the self-discovery.
“So, resilience in the AI age might be the capacity to resist value capture at scale. To keep grappling with questions that don’t have clear answers even when AI offers to resolve them. When AI suggests a decision path based on optimizing measurable outcomes, you need the capacity to ask: What am I losing by reducing this to frictionless optimization? What values am I implicitly accepting?
“These questions have no algorithmic answers. They require judgment that can’t be codified because the judgment is about what should be codified in the first place.
“The people who stay resilient won’t be the ones who get best at working with AI tools. They’ll be the ones who can tell when a question shouldn’t be fully resolved, when ambiguity serves a purpose, when optimization would destroy the thing being optimized. There’s a timing issue too – the more we lean on AI to handle uncertainty, the less practice we get operating in genuinely ambiguous situations. By the time we encounter something AI can’t help with, we might have lost the ability to navigate without algorithmic guidance.
“Being resilient might require deliberately choosing uncertainty, choosing to care about things that resist measurement. Not because it’s more efficient, but because that’s where values live. And values – the real ones, not their algorithmic proxies – are what make decisions meaningful rather than just optimal.
“So, what does this actually look like in practice? In education, it means protecting the struggle – letting students wrestle with problems before offering AI assistance, creating spaces where the friction of figuring things out is the point rather than an inefficiency to eliminate. In organizations, it means consciously choosing not to optimize certain decisions even when you could, recognizing that some ambiguity serves a purpose and some context can’t be standardized without destroying what makes the work valuable.
“Personally, it means maintaining parallel systems of thinking – your own notes alongside AI outputs, your own frameworks even when AI provides better ones – not because it’s efficient but because it’s insurance against a dependency you won’t notice until you’ve lost the capacity to think independently. These are small choices to keep practicing capabilities we might not need today but can’t rebuild once they’ve atrophied. But the pull toward convenience is strong and the costs of optimization won’t be obvious until we’re already locked in. If resilience is the capacity to care about things that resist measurement, then it starts with the deliberate, inefficient choice to keep caring anyway.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”