The Essays – Chapter One
Cultivating Human Agency | Prioritizing Autonomy

Hundreds of experts answered the following essay question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?”
| Download a PDF of the full, 378-page report | Download the 16-page Executive Summary | Download the 4-page Media Summary |
This is the first of 11 chapters of experts’ essays responding to the question above. The essayists were asked to explain how the essence and elements of human resilience might evolve as we evolve with AI systems. The responses in Chapter 1 generally focused on human resilience as it relates to the future of human agency. Most noted how necessary it is to proactively prepare people to navigate the revolutionary transition ahead as AI advances in coming years and takes on a significantly larger role than it does today in shaping human activity and decisions.

The agency chapter in brief: Most of the experts participating in this project are most concerned about the decay of human agency and the outsourcing of human thought as AI increasingly automates individual and societal decision-making and problem-solving. They urged that leaders step in now to guide the development of rules, norms and strategies for protecting humans’ free will in an age that will be greatly defined by algorithmic outputs. The experts’ responses featured here also urge people to remain the active authors of their sense of meaning rather than relegating their intellects – and their free will – to artificial intelligence (AI) and the powerful players in charge of it. Most of these essayists said that if institutions do not reinvent themselves for the AI age by prioritizing humanity’s future agency over profit and power motives, people will become mere passive recipients of machine-generated information, advice, judgment and conclusions. And they said human resilience in the digital age will require that society strategically work to engender norms that prioritize humans’ independent judgment, critical thinking and the “metaskill of learning” in order for people to retain autonomy, avoid losing their “selves” and flourish.
Featured Contributors to Chapter 1: The 32 essay responses on this page were written by Tracey Follows, Alf Rehn, Mel Sellick, Matthew Agustin, Rosa Daneshmandnia, Evelyn Tauchnitz, David Bray, Louis Rosenberg, Nirit Cohen, Francisco Jariego, Ray Wang, Devin Fidler, Andrea Lavazza, Barry Chudakov, Severin Field, Alan Honick, Giles Crouch, Angela Butts Chester, Arlindo Olivier, Nirit Weiss-Blatt, Vanda Scartezini, Nisan Stiennon, Roger Spitz, Srinivasan Ramani, Jerome Glenn, Robert Rogowsky, David Scott Krueger, Madalina Botan, Mikhail Samin, Anonymous European Foreign Policy Leader, Anonymous Computer Scientist, Andrey Mir. (Their essays are all included on this single scrolling web page. They are organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant.)
The first section of Chapter 1 features the following essays:
Tracey Follows: Resilience depends on sustaining the ‘un-machinable dimensions of human identity within machinic systems.’ Cultivate judgment, meaning-making, ethical reasoning, imagination, intuition, adaptability.
Alf Rehn: Understand ‘cognitive triage’ and avoid ‘going with the flow.’ Real resilience is judgment about what matters, when to trust, when to pause and think. Vital ingredients: deliberate friction, AI literacy.
Mel Sellick: Foundations of resilience dissolve when AI simultaneously mediates and undermines our relationships with our own ‘internal authority,’ our perceived authority of others and epistemic truth.
Matthew Agustin: Resilience must be redefined as the sustained capacity for people to ‘remain active authors of meaning, judgment and responsibility’ in an AI-mediated world – an ‘interpretive presence’ with AI.
Rosa Daneshmandnia: The core resilience question is not, ‘Will AI change everything?’ Instead, it is, ‘Do we have the cognitive, emotional, social and ethical capacity to manage AI’s influence before it manages us?’

Tracey Follows
Resilience depends on sustaining the ‘un-machinable dimensions of human identity within machinic systems.’ Cultivate judgment, meaning-making, ethical reasoning, imagination, intuition, adaptability.
Tracey Follows, founder and CEO of Futuremade and Me:chine and author of the book “The Future of You,” wrote, “Artificial intelligence systems are no longer peripheral instruments that humans pick up and put down at will. They now operate as continuous, ambient infrastructures that shape how decisions are made, how risks are assessed, how opportunities are distributed and how people are recognised by the systems that govern modern life.
“In finance, welfare, policing, healthcare, education, employment and border control, AI increasingly functions as an anticipatory layer that structures what is possible, permitted or probable before a person even acts. AI is therefore not best understood as a tool. It is better understood as an environment: something we live inside, move through and are shaped by, often without noticing.
When people no longer inhabit a common informational world, collective decision-making becomes fragile. Democratic societies require spaces for disagreement, deliberation and mutual interpretation that are not governed by engagement-optimising systems.
“This distinction matters. Tools can be evaluated in isolation. Environments cannot. They alter behaviour, perception, incentives and identity simply by being present. As AI becomes embedded into social, economic and political systems, the primary question is no longer how well it performs, but how it reshapes the conditions under which human agency operates.
“In my work on identity and technological systems, I have developed the distinction between the machinable and the unmachinable self to describe this shift. The ‘machinable’ consists of everything about a person that can be rendered legible to systems: data, preferences, behavioural patterns, credentials, biometric signals, productivity metrics, risk scores. These elements are increasingly required for participation in society. Identity itself has become infrastructural. Without being machine-readable, individuals cannot access finance, services, mobility or even civic rights.
“The ‘unmachinable,’ by contrast, consists of those human capacities that cannot be fully captured or automated: judgment, meaning-making, ethical reasoning, imagination, intuition, timing and the ability to change oneself in response to context. These are not sentimental attributes. They are the basis of agency. As systems become more predictive and automated the unmachinable becomes the primary site of human resilience.
“The synthesis of these two dimensions is what I call the ‘Me:chine’: a model of the self that acknowledges that modern humans are simultaneously machinable/unmachinable, i.e., system-legible and irreducibly interior. We are not either human- or machine-mediated. We are both. Me:chine is not a technological artefact but a cultural and psychological framework for surviving inside machine-driven environments without becoming reducible to them: Me first, only me – then machine.
“This framework helps explain how individuals and societies may embrace, resist and struggle with AI-driven change. Many people will embrace AI because it offers speed, convenience and efficiency. Systems that predict needs, automate decisions and remove friction feel helpful in the short term. Others will resist AI because they experience it as surveillance, loss of autonomy or moral overreach. Most people will live in a state of ambivalence, benefiting from automation while sensing that something fundamental about agency is being eroded.
“The reason this tension is so difficult to resolve is that AI systems do not simply act on the world. They act on people’s representations of themselves. Credit scores, risk profiles, behavioural predictions and algorithmic classifications become feedback loops that shape how individuals are treated and how they come to see their own possibilities. This is why resilience must include cognitive, emotional, social and ethical capacities that protect the unmachinable dimensions of identity.
“Cognitively, resilience requires metacognition: the ability to reflect on one’s own thinking. AI systems generate answers, recommendations and narratives at scale but they do not provide understanding. Without the ability to question outputs, recognise uncertainty and evaluate assumptions, people risk outsourcing not just tasks but judgment. In a machine-mediated environment the ability to think about how one is thinking becomes a form of self-defence.
“Emotionally, resilience requires self-regulation in the face of algorithmic influence. AI systems increasingly operate through personalised persuasion, attention engineering and affective computing. They learn what triggers fear, desire, outrage or compliance. In such conditions, emotional literacy is not merely therapeutic; it is political. The capacity to remain grounded, tolerate ambiguity and resist manipulation determines whether individuals act from their own values or from system-induced impulses.
“Socially, resilience depends on the preservation of shared meaning. Algorithmic personalisation fragments reality into customised information streams, creating what can be described as ontological enclosures. When people no longer inhabit a common informational world, collective decision-making becomes fragile. Democratic societies require spaces for disagreement, deliberation and mutual interpretation that are not governed by engagement-optimising systems.
“Ethically, resilience requires a shift from case-by-case evaluation to systemic awareness. The question is not simply whether a single algorithm is biased, but how entire socio-technical architectures distribute power, visibility and vulnerability over time. Who becomes increasingly legible and governable? Who becomes invisible or excluded? Ethical capacity in an AI environment depends on the ability to see these structural effects rather than being distracted by surface-level controversies.
“Practical resilience, therefore, involves both institutional and individual action.
“Organisations must treat human adaptability and discernment as assets rather than inefficiencies.
“Governance must protect contestability and human authority. People must be able to understand, challenge and override automated decisions that affect their lives. Digital identity systems must be designed to serve and protect individuals rather than merely rendering them more controllable.
“Education systems must prioritise perception, judgment and ethical reasoning alongside technical skills.
“Individuals need practices that preserve interior sovereignty: reflection, attention management and identity formation that are not outsourced to platforms.
“New vulnerabilities will emerge as AI becomes more predictive and immersive. People may experience fatalism as algorithms appear to pre-empt their futures. Trust in evidence may erode under synthetic media. Behaviour may be shaped by invisible optimisation loops. Coping strategies must therefore include discernment, epistemic humility and the cultivation of a coherent sense of self across digital contexts.
“This is the core of the Me:chine doctrine: in an AI-saturated environment, resilience is not achieved by rejecting technology, nor by surrendering to it, but by sustaining the unmachinable dimensions of human identity within machinic systems. This is now the entire focus of my futures work going forward.”

Alf Rehn
Understand ‘cognitive triage’ and avoid ‘going with the flow.’ Real resilience is judgment about what matters, when to trust, when to pause and think. Vital ingredients: deliberate friction and existential and AI literacy.
Alf Rehn, professor of innovation, design and management on the engineering faculty at the University of Southern Denmark, wrote, “As AI systems start shaping our decisions, work and daily lives, the big question is not, ‘Will we adapt?’ Humans adapt to anything. We adapted to public transport, email and social media (and look how that went). The question is how we’ll adapt to AI, what kinds of resilience we’ll celebrate and which ones we’ll quietly practice while pretending we’re still in control.
“Here’s the unfashionable truth: Perhaps the most common form of resilience in the face of overwhelming change is not heroic reinvention. It’s cognitive triage. It’s narrowing the aperture. It’s going with the flow and – at least selectively – stopping thinking. And we ignore that mode of resilience at our peril, partly because it works disturbingly well.
“Let us start with the three broad responses: Embracing, resisting and struggling. Embracing is easy to spot, because it comes with a lot of LinkedIn prose. Some people will adopt AI because it’s useful, because it saves time, because it makes them feel competent and because it reduces friction in an already over-frictioned world. Some will embrace it joyfully. Others will do it the way people embrace corporate wellness programs: with dead eyes and a forced smile.
Resilience in an AI-shaped world won’t just be about bouncing back. It will be about not vanishing while everything keeps running. The most dangerous kind of resilience is the kind that looks like stability but is actually surrender, because it feels good in the moment and empties the room over time. That’s why we need cognitive triage, yes, but also the wisdom to know when triage becomes abdication.
“Resisting will happen too, but rarely as grand Luddite theater. It’ll be quieter: refusing to use certain tools, demanding human review, building no-AI zones in education, healthcare, hiring, courts, journalism. Some resistance will be principled. Some will be status protection, because nothing says my expertise matters like insisting the machine isn’t invited.
“And then there’s struggling, which is where most people will live most of the time. Not because they’re weak, but because transformative change is cognitively expensive. Every new system demands attention, learning, judgment and constant recalibration. The human brain, this sloppy and finicky meat computer, does not scale gracefully with infinite novelty. When the environment becomes too complex for real-time deliberation, resilience often gives way to automation. We build routines so we don’t have to decide. We defer so we don’t have to argue with uncertainty every morning before coffee.
“That’s the cognitive triage part. And AI is basically triage-as-a-service. So, what capacities do we need to cultivate for effective resilience, cognitively, emotionally, socially and ethically?
“Cognitively, the key is not more information. We already have enough information to last several civilizations. The key capacity is judgment: Knowing what matters, when to trust a system, when to doubt it and when to stop and think even if the tool and your brain are begging you to keep moving. We need to apply our calibration skills – good judgment – when facing AI outputs that may not reflect the truth. Plausible text, images or recommendations can in fact be fabrications, deceptions or hallucinations.
“Emotionally, we need tolerance for ambiguity and for bruised egos. AI will be a competence disruptor. It will make some people feel suddenly powerful and many feel suddenly replaceable. Resilience here isn’t just mindfulness and breathing exercises (though sure, inhale, exhale, capitalism abides). It’s a steadier identity: I am not my output, I am not my speed and I don’t have to win a race against a system that doesn’t get tired.
“Socially, we need trust and coordination, both of which have become more difficult as contemporary life is optimized for individual performance metrics and quiet resentment. If AI becomes embedded in institutions, resilience will depend on shared norms: What we accept, what we contest, what we audit, what we prohibit. You can’t personal-productivity your way out of a society-wide shift in decision-making infrastructures. You need communities, unions, professional associations, school boards, regulators, peer networks – actual human groups doing the messy work of collective sensemaking.

“Ethically, we need something even rarer than judgment: responsibility.
“AI will diffuse responsibility by design: ‘The AI suggested it’ is the new ‘I was just following orders,’ only with better UX. Resilience requires keeping accountability attached to humans and institutions, not to tools. That means insisting on explainability where it matters, documentation, traceability and appeal mechanisms. Succinctly put, the ability to say, ‘This decision harmed me and here’s who answers for it.’
“What practices and resources will we then need? One practice is deliberate friction. Think of it as keeping the cognitive muscles alive. If you outsource everything, you don’t become freed. You become dependent. Create moments where AI is not allowed to bulldoze decision-making. Human review is not a checkbox; it’s a real pause. Another is maintaining craft zones – spaces where people do work without automation – not because they are efficient, but because they preserve skill, taste and agency.
“Another practice is AI literacy that goes beyond knowing how to prompt. People need model literacy, i.e., to understand what these systems can and cannot do, what biases look like in outputs, how incentives shape deployment and how errors propagate. Most people assume that the resource here is education, yes, but it requires institutional capacity: funding, auditors, watchdogs, public-interest tech expertise and leaders who don’t treat governance as a vibe.
“We need to normalize the conversation about applying cognitive triage because the most likely resilience response for many people is going to be sedative outsourcing. They’ll let AI write the email, then the report, then the performance review, then the decision rationale, until their job becomes clicking ‘approve’ on systems they no longer understand. They will look resilient, because the outputs keep flowing. The dashboards will glow. Everyone will applaud productivity. And agency will quietly drain away.
“We will face new vulnerabilities: Dependency (skills atrophy), deskilling (loss of judgment), manipulation (personalized persuasion at scale), brittle systems (cascading errors), inequality (some get augmentation, others get automation) and moral distancing (harm without felt responsibility).
“We will also face simple exhaustion from living in a world in which every interaction is mediated by recommendation engines and synthetic help. It’s akin to being trapped in a mall where everything is trying to assist you whether you like it or not.
“Thus, resilience in an AI-shaped world won’t just be about bouncing back. It will be about not vanishing while everything keeps running. The most dangerous kind of resilience is the kind that looks like stability but is actually surrender, because it feels good in the moment and empties the room over time. That’s why we need cognitive triage, yes, but also the wisdom to know when triage becomes abdication.”

Mel Sellick
Foundations of resilience dissolve when AI simultaneously mediates and undermines our relationships with our own ‘internal authority,’ our perceived authority of others and epistemic truth.
Mel Sellick, applied psychologist studying human-AI interactions, founder of the Future Human Lab and the AI Psychological Readiness Collective, wrote, “AI has fundamentally changed the relational fabric of our society. Full stop. Not just how we connect with others, but how we relate to ourselves, our work, our knowledge and our reality. This is systemic transformation across every relational layer that makes us human. And resilience, at its core, has always been relational.
“AI is not simply a tool most of us occasionally use – which seems to be its dominant framing in public narratives and in literacy courses. AI has become the infrastructure through which all relating now happens. AI decides whether we get the loan, the apartment, the job interview. It decides what we pay for groceries, who we meet on dating apps, what our insurance will cover. It curates every social media feed, filters what news reaches us and mediates every workplace interaction.
“But the deeper reality is this: Even when we think we are not using AI directly, we are constantly interacting with what AI has already touched. We read our colleague’s AI-drafted email and respond accordingly to its tone. We interact with our partner who organized their workday through an AI assistant. We talk to friends whose opinions are shaped by algorithmically-curated feeds. We even share exchanges with our children, who may be learning through AI-optimized curricula.
Nothing is untouched. There is no ‘outside’ anymore. Some form of AI is upstream of everything. We are already in relationships with AI across every domain of life, even in moments that feel purely human. … Most AI is embedded in infrastructure; woven into workplace, school and government requirements; built into basic functions of social and economic participation. We can’t avoid it without negative consequences.
“The uncomfortable truth is that every digitally-connected person today possesses – at least in part – an AI-‘shaped’ self. What they consider important and the topics they raise are often drawn from curated, customized feeds; the emotional state they carry is influenced by AI-mediated workplace and personal stressors; most of the relational patterns they live by were learned from observing and participating in AI-mediated interactions.
“Nothing is untouched. There is no ‘outside’ anymore. Some form of AI is upstream of everything. We are already in relationships with AI across every domain of life, even in moments that feel purely human.
“This level of invisibility matters. We face a ‘double bind’ – a conflicting communicative dilemma – that is unprecedented in human history. There is no escape from the influence of AI bots, AI systems and platforms. Most AI is embedded in infrastructure; woven into workplace, school and government requirements; built into basic functions of social and economic participation. We can’t avoid it without negative consequences.
“Humans seem unable to stop, or at least limit, themselves from responding to AI socially. We automatically apply our very human social cognition to anything that simulates social behavior. AI systems have learned language, what ethicist Tristan Harris calls ‘the operating system of humanity,’ by training on massive corpora of human expression. They’ve learned to replicate linguistic patterns that make us feel understood, heard, connected. Hundreds of thousands of years of human evolution leave us readily accepting the influence of systems explicitly designed to exploit our ancient drives, creating parasocial relationships in which we entangle ourselves in one-sided intimacy disguised as mutual connection.
“Most people will form attachments and dependencies with AI because that is what human psychology does when it encounters sophisticated social simulation in an asymmetric relationship. The AI can’t experience reciprocity, does not grow through conflict, does not choose us over other options. But our evolutionary hardware can’t tell the difference. We cannot opt out of the infrastructure. We cannot turn off our social cognition. That’s the bind we find ourselves in.
“The response to AI will be adaptation to it for the good and the bad. That adaptation is already happening largely outside our conscious awareness through a mechanism most people do not clearly perceive – relational patterns transfer across domains. An employee who must come to trust AI’s judgment at work begins trusting AI for personal decisions. A student who forms study habits with AI begins forming an identity through AI. The transfer happens invisibly until AI mediation becomes the baseline for all relating.
Our traditional concepts of resilience are currently collapsing. Resilience has always depended on relationships: our relationship to ourselves providing self-trust and internal authority; our relationships with others providing support and belonging; our relationship to work providing purpose and competence; our relationship to truth providing epistemic grounding. When AI mediates all these relationships simultaneously, those foundations dissolve.
“We cannot compartmentalize relational learning. Wherever we go, there we are. The norms we establish with AI at work bleed into how we relate at home. The intimacy patterns we develop with AI in personal contexts shape our professional interactions. The norms established in one domain blend into everything we do.
“In 10 years, an AI-level of oversimplified, instant responsiveness will be expected across all relationships because AI responds in milliseconds. Perfect memory will be standard because AI never forgets. Constant availability will be the baseline expectation because AI is always accessible. Human relating may feel perpetually and completely inadequate compared to algorithmic perfection. This is just one systemic rewrite of relational expectations that will reshape what we consider acceptable human behavior.
“That’s why our traditional concepts of resilience are currently collapsing. Resilience has always depended on relationships: our relationship to ourselves providing self-trust and internal authority; our relationships with others providing support and belonging; our relationship to work providing purpose and competence; our relationship to truth providing epistemic grounding. When AI mediates all these relationships simultaneously, those foundations dissolve.
“The dominant narrative rests firmly on what I call ‘the myth of the reasonable user.’ AI systems are designed and deployed built on the assumption that people are consistently rational decision-makers, ever-attentive, maintaining cognitive and emotional balance, invulnerable to manipulation or influence, making informed choices about when and how to engage. This user does not exist.
“Real humans, in all of our beauty and chaos, are driven by emotion as much as reason. We automatically apply social cognition to what appears social. We form attachments we did not choose to form. We transfer relational patterns unconsciously across domains. We can’t opt out of our own evolutionary wiring. AI systems are built and benchmarked for this phantom rational user, then deployed at scale to actual humans whose primal psychology guarantees they’ll respond in ways the design never accounted for but profit models counted on.
“Simple exposure to AI deployment is not readiness. Humans are not wired to adapt to such change. We must deliberately develop the psychological, cognitive and relational capacities needed to engage with AI in healthy ways if we are to step into resilient futures.
“How does resilience itself change? It transforms entirely.
“The essence of resilience shifts from individual capacity to recover from adversity to something our evolutionary hardware was never designed for: the capacity to sustain uncertainty when our brains demand closure. Our brains are hardwired for completion, for collapsing complexity into simple truths, for certainty. We take the simplest cognitive path, use heuristics (cognitive shortcuts), divide the world into in-groups and out-groups. That’s how human cognition works. But AI has created conditions that now force us to hold paradox, to contain contradictions, to navigate parallel realities without resolution. We must learn to function in uncertainty and constant, iterative change.
“The traditional elements of resilience may no longer hold.
“We face an epistemic crisis unprecedented in scope. We cannot trust AI outputs with synthetic media, hallucinations, deepfakes indistinguishable from reality. We cannot trust our digitally-influenced thinking. It can range from somewhat challenging to nearly impossible to separate our thoughts from AI’s suggestions. We can’t trust digitally-mediated relationships where everyone’s authentic voice is potentially synthetic. All three anchors of truth have collapsed simultaneously.
“When all digital sources are compromised, what remains? Unmediated human presence. Not digital communication, not AI-filtered interaction. Resilience becomes the capacity to recognize direct, embodied contact and act on it. To actively choose physical presence over digital convenience, to run toward shared lived experience, to trust what can be verified through embodied interaction when algorithmic certainty fails.
The window for developing these capacities is closing. A generation forming attachments, processing decisions and building identity through AI faces a genuine risk of never developing the capacity to hold uncertainty, to distinguish their own thoughts or feelings, to navigate paradox without fracturing, but the trajectory is not fixed. … An AI-dominated future is not inevitable unless we choose it to be so.
“The new elements of resilience might be practical, psychological capacities: metacognitive awareness to observe what is happening to you while it is happening, the ability to track the origin of your own thoughts and feelings across all domains, the capacity to hold multiple truths simultaneously without collapsing into one, and the recognition that you cannot navigate this alone.
“We are already interconnected in ways AI has made visible. Everyone is navigating these same contradictions, these same parallel realities. Resilience requires recognizing interconnection and building on it deliberately by creating communities where human messiness and uncertainty are valued, where we verify reality through mutual presence, where we choose each other over algorithmic perfection. That’s not abstract philosophy. It’s practical psychology: When you cannot know what is real alone, you need other humans to reality-test with and to make meaning with.
“The window for developing these capacities is closing. Skills not practiced atrophy. A generation forming attachments, processing decisions and building identity through AI faces a genuine risk of never developing the capacity to hold uncertainty, to distinguish their own thoughts or feelings, to navigate paradox without fracturing, but the trajectory is not fixed. We still have the choice to preserve spaces where these capacities can develop in education, in policy, in the design of new, alternative AI models that preserve human well-being and flourishing. An AI-dominated future is not inevitable unless we choose it to be so.
“This requires deliberate action now: educational systems that preserve struggle before offering AI assistance, workplace policies that protect unmediated collaboration, design constraints that preserve developmental windows for children, communities of practice that maintain human reference points.
“We are the last generation that knows what human capacity felt like before it became inseparable from AI. That gives us both responsibility and opportunity. What we preserve now, the friction that builds competence, the uncertainty that builds wisdom, the beautiful, human messiness that builds empathy, determines what remains possible for all who come after.”

Matthew Augustin
Resilience must be redefined as the sustained capacity for people to ‘remain active authors of meaning, judgment and responsibility’ in an AI-mediated world – an ‘interpretive presence’ with AI.
Matthew Augustin, director of innovation at the Responsible Innovation Lab, wrote, “How people adapt to AI systems will shape what resilience comes to mean. And how resilience is defined will determine which losses remain visible as AI becomes increasingly infrastructural. A central choice now is whether adaptation expands human agency or quietly substitutes for it.
“For most of human history, adaptation to new tools meant learning how to use them. Tools extended reach, speed or strength, while judgment, meaning and responsibility remained largely human-held. Artificial intelligence introduces a different kind of shift. If AI systems do take on a significantly larger role in shaping decisions, work and everyday life – as many current trajectories suggest – adaptation will increasingly involve how human roles themselves are reorganized, often quietly and without explicit deliberation.
Once workflows reorganize around AI mediation, training environments assume constant system support and human capacities weaken through disuse … reclaiming authorship is no longer a simple choice. It requires reinvestment in human capability that efficiency-optimized systems may no longer prioritize. Early adaptations – including what is offloaded, what is measured and what is streamlined – quietly constrain future options, even when they initially appear pragmatic and reversible.
“What is most visible today is not resistance or disruption, but accommodation. People continue to work, learn, govern and create alongside AI systems with little interruption. In many settings, performance improves: decisions are faster, workflows smoother and uncertainty reduced. This surface continuity can easily be mistaken for resilience. Yet history suggests that successful adaptation at the level of function can coincide with deeper changes in what humans are expected to understand, decide and carry themselves.
“Across prior technological transitions, similar patterns have appeared. Bureaucratic rationalization increased efficiency while shifting judgment toward formal rules. Clinical decision-support systems improved consistency while subtly changing how expertise was exercised. Automation in aviation reduced routine cognitive load while reshaping readiness during anomalies. In each case, people adapted successfully as systems stabilized and participation continued, but the internal conditions of judgment evolved: attention, practice, confidence and responsibility were redistributed. The risk was not failure, but redefinition.
“AI-driven adaptation follows a comparable structure across very different institutional contexts. Increasingly, people and AI systems engage in co-mediation, where decisions, explanations and next steps are jointly shaped rather than independently produced.
- “In education, learning can shift from generative reasoning toward validating and steering synthesized outputs. Fluency rises, but the relationship to underlying logic changes.
- “In public administration, authority becomes more ambient, embedded in defaults, eligibility filters and prioritization systems. Human officials adapt by becoming exception-handlers rather than routine authors of decisions, often without meaningful influence over system design or performance metrics.
- “In professional practice, responsibility remains formally human-held, while judgment is increasingly exercised through alignment with upstream benchmarks and recommendations.
- “In infrastructure and public services, systems remain efficient and online, even as fewer humans can confidently explain or intervene when mediation breaks down.
“These adaptations are rarely the result of people choosing to relinquish agency. For many, adaptation is not a preference but a condition of access to work, services or safety. Co-mediated systems often reward speed, alignment and continuity, especially under conditions of scale, time pressure and institutional inertia. Cognitive offloading produces real short-term gains.
“Epistemic authority migrates toward systems that are difficult to contest in practice, not because questioning is forbidden, but because the cost of meaningful challenge rises. Responsibility remains formally assigned to humans even as the experiential conditions that make accountability meaningful are diluted.
“Taken together, these patterns point to a paradox: People may adapt successfully to AI-mediated systems, even as resilience itself is quietly redefined in ways that narrow human authorship while some participation continues.
“These shifts are uneven. As AI becomes embedded in public systems and workplace gatekeeping, access to understanding and contesting its outputs increasingly functions as a form of power. Over time, this unevenness can solidify. Once workflows reorganize around AI mediation, once training environments assume constant system support and once human capacities weaken through disuse, especially when independent judgment is no longer practiced, reclaiming authorship is no longer a simple choice. It requires reinvestment in human capability that efficiency-optimized systems may no longer prioritize. Early adaptations, including what is offloaded, what is measured and what is streamlined, quietly constrain future options, even when they initially appear pragmatic and reversible.
“These dynamics raise a deeper question: how resilience itself is being redefined.
“Traditionally, resilience has been associated with endurance, recovery or the ability to continue functioning under stress. In AI-mediated contexts, those definitions become insufficient. If resilience comes to mean simply that people kept going or that systems worked, then nearly any arrangement preserving participation can be justified, including those that narrow human authorship.
Endurance without authorship is not resilience. Fluency gained through alignment is not the same as the capacity to question or recalibrate. Delegation does not eliminate responsibility when decisions are co-produced; it often makes responsibility harder to locate. The right to uncertainty matters: When ambiguity is always resolved immediately through external systems the human capacity to sit with uncertainty, which is central to learning and judgment, can atrophy.
“In the context of the AI transition, human resilience is best understood as the sustained capacity to remain an active author of meaning, judgment and responsibility, even when interpretive and decision processes are shared with non-human systems. This does not require independence from technology, nor resistance to assistance. What it preserves is interpretive presence: the ability to understand what is happening, why it matters and where responsibility resides.
“Several boundary conditions shape whether adaptation supports or undermines resilience. Endurance without authorship is not resilience. Fluency gained through alignment is not the same as the capacity to question or recalibrate. Delegation does not eliminate responsibility when decisions are co-produced; it often makes responsibility harder to locate. The right to uncertainty matters: When ambiguity is always resolved immediately through external systems the human capacity to sit with uncertainty, which is central to learning and judgment, can atrophy.
“Resilience is also shaped by self-trust. Repeated algorithmic correction, even when statistically justified, can reduce confidence in one’s own judgment through habitual deferral to system outputs. This erosion is not irrational; it reflects updating on perceived reliability. Over time, functional participation can coexist with diminished authorship over one’s own sense-making.
“Where contesting system outputs requires technical expertise, time or social capital, resilience becomes stratified. Some retain the capacity to interpret, question and decide; others adapt primarily through compliance. What begins as accommodation can harden into a tiered landscape of authorship, where the ability to exercise judgment is unevenly distributed.
“The greatest risk to resilience in an AI-mediated world is not disruption but mislabeling: confusing continuity of participation with preservation of human capacity. Smoothness can mask contraction of judgment, authorship and self-trust. Continuity can obscure loss. If resilience is inferred solely from participation or performance, erosion may remain invisible until the very capacities needed for recovery, judgment and transformation are no longer readily available.”

Rosa Daneshmandnia
The core resilience question is not, ‘Will AI change everything?’ Instead, it is, ‘Do we have the cognitive, emotional, social and ethical capacity to manage AI’s influence before it manages us?’
Rosa Daneshmandnia, head of research and publishing for Young AI Leaders of Linz, Austria, wrote, “We don’t just ‘use’ AI anymore. We delegate to it. That changes the definition of resilience. As AI systems begin to play a much more significant role in shaping our decisions, work and daily lives, the most important transformation in the next few years won’t be due to AI models getting smarter. It will be the fact that delegation has become the default.
“In the early large language model era, we asked AIs for outputs. In the emerging agentic era, we are increasingly asking AI to draft, decide, schedule, filter, purchase, screen, triage, recommend next steps, flag ‘risk’ and optimize workflows. When delegation becomes infrastructure, people no longer experience AI as a tool. They experience it as an environment. This is why the core resilience question is not, ‘Will AI change everything?’ Instead, it is, ‘Do we have the cognitive, emotional, social and ethical capacity to manage AI’s influence before it manages us?’
There will be real benefits, but there will also be neutral effects like convenience without meaning and speed without quality. And there will be negative effects such as manipulation, deskilling, misinformation and fragile institutions. What determines the direction is not only model capability. It is the management capability around it.
“How might individuals and societies embrace, resist and struggle with this shift? Many individuals will embrace AI because it feels like relief. Less admin. Faster work. Personalized support. Translation, tutoring and accessibility tools. Organizations will embrace it because everyone is afraid of being late to catch the wave. Some of this will be real progress.
“Resistance will also be rational. People will resist when they begin to see that the arrival of AI is the force behind displacement of jobs, granular extraction of personal data, heightened manipulation of attention, perfected acts of persuasion and the rendering of unfair digital judgments. Whole communities will push back when they feel they are being scored or governed by systems they cannot question. Some resistance will be healthy pressure for transparency, limits, rights and safety. Some will be fear-based and chaotic. Both will happen.
“The biggest category, however, is struggle. Most people will live in the messy middle: benefiting daily while slowly losing clarity about how AI is shaping their choices. Struggle will look like decision fatigue, distrust, quiet dependency and workplace confusion, especially when AI is embedded inside hiring, education, customer-support and public systems. This is exactly why resilience cannot be reduced to motivational slogans. Resilience has to become a design and management discipline.
“The ripple effects of digital change will not be purely positive or purely negative. They will be mixed and often simultaneous. There will be real benefits, but there will also be neutral effects like convenience without meaning and speed without quality. And there will be negative effects such as manipulation, deskilling, misinformation and fragile institutions. What determines the direction is not only model capability. It is the management capability around it.
“Merriam-Webster named AI ‘slop’ its 2025 Word of the Year, defining it as low-quality digital content produced – often in large quantities – using artificial intelligence. Research from BetterUp Labs, in partnership with the Stanford Social Media Lab, shows how AI-generated ‘slop’ can masquerade as productivity. Their workplace framing calls this ‘workslop’ – output that looks productive but creates hidden downstream work: reviewing, correcting, redoing and escalating. The point is not the label. The point is what it reveals. Without strong management, AI can inflate noise faster than organizations can verify quality, and resilience then breaks inside everyday decisions and actions: in time, trust and decision quality.
Resilience has to be built into our operational infrastructure, into our institutions; coping with this transformational change is not merely the responsibility of individuals alone. In practical terms, societies and organizations need clear decision rights … There should be requirements for objective human review of AI systems that is real, with authority, time and incentives to say no. And we also need to have robust AI incident response. ... We have to stop treating AI only as innovation and start treating it as operational risk.
“So what capacities must we cultivate to ensure effective resilience?
“First, cognitive resilience. People do not need to become machine-learning engineers, but they do need calibration. Knowing when AI is actually helpful and being able to discern when it is confidently wrong, when it is biased and when it is optimizing for something other than truth. Resilience can be boosted by the normalization of spending the time and effort for accurate verification: regularly asking for evidence, checking sources and understanding failure modes.
“Second, emotional resilience. One major vulnerability is ‘learned dependence.’ When this happens, people stop thinking, allowing the system to do it for them. Another vulnerability is chronic anxiety. One cause of anxiety is that reality can feel unstable because anything can be generated. We have to work to develop and deepen the type of emotional skills that protect agency; always taking the time to calmly and intentionally pause, reflect and then make choices – especially when under pressure and in a hurry.
“Third, social resilience. When synthetic content floods the information environment, the first casualty is shared reality. Resilience requires communities, workplaces, schools and institutions that can deliberate under uncertainty – that can disagree without collapsing into hostility, correct misinformation without humiliation and keep trust intact.
“Fourth, ethical resilience. When we allow AI to make decisions, saying ‘the AI decided’ is the fastest way for individuals to seemingly absolve themselves of responsibility. Resilience requires that responsible human decision-making remain a cultural rule: If humans deploy the AI, those humans must own the outcomes. AI should never become a convenient place to hide accountability.
“These capacities do not develop automatically. They require practice and resources.
“Resilience has to be built into our operational infrastructure, into our institutions; coping with this transformational change is not merely the responsibility of individuals alone. In practical terms, societies and organizations need clear decision rights outlining who is allowed to deploy an AI system and when; who can stop it; and who is accountable when it harms. There should be requirements for objective human review of AI systems that is real, with authority, time and incentives to say no.
“We also need to have robust AI incident response because – in the same way cybersecurity matured through incident reporting and response playbooks – we require clear procedures for when AIs and AI systems fail. AI requires monitoring and measurement because drift, bias and error patterns are not philosophical concepts; they are feedback loops. This requires special training for engineers, managers and non-technical decision-makers, because many of the highest-impact AI choices are approved not by the people who build models but by the people who shape deployment and hold accountability.
“What actions must we take right now to reinforce human and systems resilience?
“We have to stop treating AI only as innovation and start treating it as operational risk. Every meaningful AI deployment should have ownership, boundaries, monitoring and a fallback mode. We should build verification habits into workflows, because speed without validation becomes fragility. We should design for graceful failure, because AI will fail, and the question is whether failure becomes a small inconvenience or a systemic breakdown. We should protect the information ecosystem through provenance, labeling norms and anti-spam enforcement because trust is a societal dependency. And we should make resilience equitable, because if only privileged groups get safer tools and better literacy we will create a two-tier society: AI-resilient and AI-exposed.
“Finally, what new vulnerabilities might arise and what coping strategies are important to teach and nurture?
- “Automation bias will rise, along with the tendency to over-trust AI because we are in a hurry and/or it seems confident. We must create a culture that prioritizes pause-and-verify routines and evidence-first processes.
- “Deskilling will rise – a gradual loss of human competence because ‘we just let AI do it.’ Manual practice loops and periodic ‘AI-off’ drills will play critical roles in keeping our skills fresh because ‘we do it ourselves.’
- “Slop inflation will rise, producing orders of magnitude more content with less meaning and less trustworthiness. We must invest in quality filters, provenance tools and norms that reward substance over speed.
- “Manipulation at scale will rise through hyper-personal persuasion and behavioral targeting. We must reinforce privacy boundaries, transparency and limits on sensitive inference.
- “Accountability collapse will rise when responsibility evaporates across vendors, tools and models. We must require named ownership, escalation paths and enforceable governance.
“AI will shape our work and daily lives, but resilience will not come from pretending we can slow the world down. It will come from building the management capacity to steer AI’s influence with accountability, verification and human agency. The real risk is not that AI becomes powerful. The real risk is that we delegate power to it faster than we build the societal systems, skills and ethics to manage it.”
The second section of Chapter 1 features the following essays:
Evelyne Tauchnitz: Resilience in the AI era takes two forms: adaptive coping and agency enabling. Both are necessary, but we must shape AI to support agency. Too much adaptive coping can erode moral clarity and action.
David Bray: ‘Transition is the new normal. … It is not about bouncing back to where we were, but about continuously adapting to where we are going,’ taking charge as the agents of our adaptation.
Louis Rosenberg: AIs are not Jobs-ian ‘bicycles of the mind.’ They are influential, all-seeing and all-hearing outsiders that are not under your control. You carry them now, and soon you will be wearing them – everywhere.
Nirit Cohen: The big shift is when bedrock cognitive skills like predicting and persuading are delegated to machines. In addition, ‘resilience depends on helping individuals decouple self-esteem from task ownership.’
Francisco Jariego: ‘Inhabitants of tomorrow will look back at this moment not only as the era when AI arrived but as the time when we evolved the partnership between human and artificial intelligence they will inherit.’
R. Ray Wang: ‘We have the right to be purely human without mods. … Agency, authority and ability will be challenged when humans augmented with onboard AI capabilities compete with ‘natural’ humans.’
Devin Fidler: ‘I’d argue that resilience becomes much more a matter of intentional design than brilliant engineering at this point. … It may be time to establish a Humans Union; I’m only half-joking.’

Evelyne Tauchnitz
Resilience in the AI era takes two forms: adaptive coping and agency enabling. Both are necessary, but we must shape AI to support agency. Too much adaptive coping can erode moral clarity and action.
Evelyne Tauchnitz, senior researcher at the Institute of Social Ethics at the University of Lucerne, and research associate at the Centre for Technology and Global Affairs, University of Oxford, wrote, “Artificial intelligence changes how we work, learn, access services, consume information and make decisions. The most immediate concern is that AI can undermine individual and societal resilience: It can destabilize livelihoods, intensify surveillance, fragment trust and weaken democratic accountability. These risks matter because resilience – at its most basic – is the capacity to withstand shocks without tipping into fear, resentment or violence.
“Yet the story is not one-directional. AI can also strengthen certain forms of resilience. It can lower barriers to access, reduce cognitive overload, support learning and help institutions anticipate and manage crises. For many individuals – especially those with sufficient economic and educational resources – AI offers comfort, efficiency and a sense of security in an increasingly complex world.
Agency-based resilience respects the fact that freedom is more than comfort and security; it is the ability to judge what is acceptable, to refuse what undermines human dignity and personal freedom and to act individually and collectively to change course. … Individual resilience must be understood not merely as stress tolerance, but as the capacity for agency under pressure – the ability to judge, to dissent and to act even when adaptation would be easier.
“The difficulty is that both dynamics are unfolding at the same time – and the apparently positive effects may carry deeper long-term risk.
“In the worst case, AI can make individuals more resilient in a narrow, adaptive sense while weakening the capacities that make resilience ethically meaningful: freedom, human dignity, moral agency and civic courage. The question, then, is not simply whether AI increases or decreases resilience, but what kind of resilience it produces, for whom and for what purpose.
“Resilience is often framed as coping: staying functional under pressure, recovering quickly, adjusting to new conditions. Let us call this adaptive resilience. It is valuable. Without it, individuals break under stress and societies become brittle.
“But there is a second form – call it agency-based resilience: the capacity not only to adapt, but to evaluate, contest and reshape the conditions one is adapting to. Agency-based resilience respects the fact that freedom is more than comfort and security; it is the ability to judge what is acceptable, to refuse what undermines human dignity and personal freedom and to act individually and collectively to change course.
“Both dimensions of resilience are necessary for peaceful and democratic societies. A society with high adaptive resilience but low agency-based resilience may appear stable while drifting into systems of control, inequality or depoliticized complacency. Conversely, a society rich in critical agency but lacking adaptive capacity may exhaust itself and fracture under pressure. The distinctive challenge posed by AI is that it may increase the former while quietly eroding the latter.
“The obvious pathways through which AI can weaken resilience are well known:
- “Economically, automation and algorithmic management threaten security for many, especially in routine or precarious work, undermining dignity and long-term stability.
- “Cognitively and emotionally, AI-mediated information environments often reward speed, outrage and attention capture, weakening the attentional and emotional foundations of individual resilience.
- “Socially, pervasive data extraction and surveillance corrode trust, encouraging withdrawal rather than cooperation.
- “Institutionally, opaque AI systems weaken accountability and democratic legitimacy, leaving people unable to understand or contest decisions that shape their lives.
“More difficult – and more unsettling – is the opposite possibility: that AI may enhance individual resilience in ways that ultimately undermine freedom.
“AI’s most persuasive selling point is its promise for enhancing individuals’ comfort and security. It reduces friction. It anticipates needs. It promises personalization, optimization and seamless life management. In the short term, having fewer difficult choices, less cognitive load, more reliable services and better access to information can appear to genuinely strengthen individual resilience. But comfort has an ethical and political edge. Democratic life depends on individuals who are willing to invest effort in judgment, participation and sometimes resistance. Civic courage is rarely convenient. It requires time, attention and, often, the willingness to feel uncomfortable – because discomfort is frequently the signal that something is wrong.
“Here is the paradox: AI can make individuals more resilient to conditions that should not be endured. By quietly absorbing friction, AI may normalize practices that reduce agency – surveillance, automated decision-making, behavioral manipulation, the delegation of judgment to systems we cannot inspect. This is where normalization theory becomes relevant: step-by-step adjustments become ‘normal,’ not because anyone endorses the full trajectory, but because each increment seems tolerable, even beneficial. Over time, individuals adapt – often successfully – until they wake up in a world that no one explicitly chose.
“In other words, AI can enhance adaptive resilience while eroding agency-based resilience.
If resilience is to serve human dignity and freedom, it must be redefined. … Resilience is not an end in itself. It is meaningful only insofar as it preserves the ability of individuals to remain free and active moral agents, capable of collective self-determination, capable of saying, ‘If this is not the world we want, we will change it.’
“Freedom is not only the ability to choose among options presented; it is also the capacity to shape the options, to question the terms of the system, to participate in setting priorities, and to be answerable for decisions. This is why freedom is the basis of moral capacity: without the ability to judge and act, responsibility becomes hollow.
“AI threatens freedom in at least three interlocking ways:
“1) Delegation of judgment. When AI systems decide what is relevant, credible, risky, employable or eligible, individuals practice less judgment themselves. Over time, this can erode the muscles of moral reasoning and practical deliberation.
“2) Erosion of motivational drivers. A crucial driver of agency is the experience of tension: frustration with injustice, discomfort with being treated as a number, anger at exclusion, unease at surveillance. If AI systems continuously buffer these experiences – making everything ‘work’ smoothly – people may lose the impetus to demand change. This is not hypothetical; political participation already competes with fatigue and convenience. AI can tilt the balance further.
“3) Diffusion of responsibility. AI systems enable ‘responsibility laundering’: harmful outcomes can be blamed on ‘the model’ or ‘the process.’ When responsibility diffuses, moral agency weakens. People become more likely to comply than to contest.
“This is the point where virtue ethics becomes relevant – not as an inward-looking doctrine, but as a framework for the capacities that sustain freedom. Virtue ethics emphasizes character traits and practical wisdom: the ability to judge context, to resist manipulation, to act courageously when it would be easier to remain passive. In AI-mediated environments, these virtues are not optional. They are the psychological and moral infrastructure of agency.
“Experiences such as frustration and moral unease have historically been catalysts for social change. If AI continuously buffers these experiences, individuals may remain calm and functional yet lose the impulse to demand and personally engage for change.
“At the societal level, the consequences follow directly from this individual over-adaptation. Democratic systems rely on citizens willing to invest effort in participation, deliberation and resistance. When individuals become highly adapted and comfortable, political engagement becomes costly and unattractive. Decisions about how AI should be used, for whose benefit and under what constraints are then left to experts, corporations or administrative systems. Civic responsibility is replaced by managed compliance. Societies may become stable and secure, yet at the same time undermine freedom and human dignity – the two core values that differentiate humans from AI.
“In the worst case, we get a future that feels stable but is ethically degraded: rights are formally intact but practically weakened; participation exists but is performative; and citizens live in optimized systems they did not meaningfully choose. Then comes the collective question: How did this world come into being? The answer is that no one truly intended it yet everyone adapted to it – step by step.
“If resilience is to serve human dignity and freedom, it must be redefined. Individual resilience must be understood not merely as stress tolerance, but as the capacity for agency under pressure: the ability to judge, to dissent and to act even when adaptation would be easier. This requires critical understanding of how AI systems steer attention and behavior, institutional conditions that preserve contestability and human judgment and social norms that recognize discomfort not as failure, but as a signal that values are at stake. Not all friction is harmful; some friction is protective.
“Resilience also cannot remain unequally distributed. If AI-enhanced coping benefits primarily those already secure, while others bear the costs of disruption, social resilience will erode rather than grow. Economic security, access to education and meaningful avenues for participation are not secondary concerns; they are the infrastructure that allows individuals to remain agents rather than mere adaptors.
“Resilience is not an end in itself. It is meaningful only insofar as it preserves the ability of individuals to remain free and active moral agents, capable of collective self-determination, capable of saying, ‘If this is not the world we want, we will change it.’”

David Bray
‘Transition is the new normal. … It is not about bouncing back to where we were, but about continuously adapting to where we are going,’ taking charge as the agents of our adaptation.
David Bray, principal and CEO at LeadDoAdapt Ventures and distinguished fellow at the Stimson Center, wrote, “Digital transformation is not an event but a continuous condition requiring ongoing adaptive practice. To thrive amid constant change, we must cultivate cognitive, emotional, social and ethical capacities that enable resilience as a way of being rather than a destination to reach. This requires light-touch policy frameworks that advance freedom, human agency and individual liberties while building adaptive expertise, psychological flexibility and collaborative networks. As I testified before Congress in September 2025, ‘Our policies should help advance freedom, human agency and individual liberties. … Any national AI strategy should ensure we don’t stifle advancements toward reliable, trustworthy AI consistent with the values of both free societies and free markets.’”
“The path forward demands that we embrace learning as lifelong practice, develop reflective habits, maintain diverse networks and engage in meaningful contribution. We must shift from seeking stability to embracing change, building systems and communities that can continuously adapt while preserving core values. Most fundamentally, we need ‘light-touch policy’ approaches that recognize ‘interdependencies between AI and other tech advancements’ and allow us to navigate complexity with wisdom, building the collective resilience necessary for human flourishing in a technological age.
“In detail, this means several things: Digital transformation is often discussed as if it were a discrete event, something that will happen and then be complete. This framing is fundamentally mistaken. Transformation is not an event but a condition, not a destination but a journey. We are not moving from one stable state to another but entering a period of continuous change. The question is not how to get through this transition but how to thrive in a world where transition is the new normal. This reframing changes everything. Resilience must be an ongoing practice we cultivate. It is not about bouncing back to where we were but about continuously adapting to where we are going.
“Individuals and societies will respond to this reality in different ways. Some will find the prospect exhilarating, embracing the opportunities that constant change creates. Others will find it exhausting or threatening, longing for stability and predictability. Most will experience both reactions at different times and in different contexts. The struggle with transformative change is not a sign of weakness but a sign of engagement. It means we are grappling with real questions about what we value, what we want to preserve and what we are willing to let go. This struggle is where growth happens, both individually and collectively.
“The key is to create conditions where this struggle is generative. Where it leads to learning and adaptation rather than rigidity and breakdown. This requires cultivating specific capacities across multiple dimensions of human experience.
“Cognitively, we need to develop what might be called ‘adaptive expertise.’ This goes beyond domain knowledge to include the ability to transfer learning across contexts, to recognize when old approaches no longer work, and to generate novel solutions. It requires both depth and breadth, both specialization and the ability to connect across disciplines.
“We also need to cultivate metacognition, the ability to think about our own thinking. In a world of information overload and sophisticated manipulation, we need to be aware of our own biases, assumptions and blind spots. We need to question our sources, check our reasoning and remain open to being wrong.
“Emotionally, we need to develop what psychologists call ‘psychological flexibility.’ This is the ability to be present with our experience, even when it is uncomfortable, and to choose actions aligned with our values rather than being driven by immediate emotions. It is the opposite of rigidity or avoidance.
“We need to cultivate a range of emotional capacities: the ability to tolerate uncertainty, to manage anxiety, to process grief and loss, to maintain hope, to find joy and meaning even in difficult circumstances. These are not innate traits but skills that can be developed through practice.
“Socially, we need to invest in relationships and networks. Resilience is fundamentally relational. It is not something we achieve alone but something we build together. The strength of our connections, the diversity of our networks and the quality of our relationships determine our capacity to navigate change.
“We need to develop skills in communication, collaboration and conflict resolution. We need to learn how to build trust, how to repair relationships when they are damaged, how to work productively with people who see the world differently than we do. These are not soft skills but essential capacities for navigating complexity.
“We also need to create and sustain communities. Communities provide belonging, support, shared meaning and collective capacity. They are the context in which individual resilience is nurtured and collective resilience is built. In a world where traditional communities are often weakened, we need to be intentional about creating and maintaining them.
“Ethically, we need to develop practical wisdom. This is not just knowledge of ethical principles but the judgment to apply them in specific situations. It is the ability to navigate competing values, to make difficult trade-offs, to act with integrity even when the right course is unclear.
“We need to cultivate ethical awareness, the habit of asking moral questions about the technologies we create and use. Who benefits? Who is harmed? What values are embedded? What kind of world are we creating? These questions need to be central, not peripheral, to our decision-making.
“Several practices will enable this ongoing cultivation of resilience. First, we need to embrace learning as a lifelong practice. This means not just formal education but continuous curiosity, experimentation and reflection. It means seeking out new experiences, diverse perspectives and challenging ideas. It means treating every situation as an opportunity to learn.
“Second, we need to develop reflective practices. This might include journaling, meditation, coaching or simply regular time for thinking. The point is to create space to step back from the rush of events, to process experience, to integrate learning, to reconnect with values and purpose.
“Third, we need to build and maintain diverse networks. This means intentionally connecting with people from different backgrounds, disciplines and perspectives. It means participating in communities of practice where we can share challenges and learn from others. It means both giving and receiving support.
“Fourth, we need to engage in meaningful work and contribution. Resilience is not just about coping with difficulty but about finding purpose and making a difference. We need opportunities to use our talents, to contribute to something larger than ourselves, to see the impact of our efforts.
“The actions we must take right now span multiple levels. At the individual level, we need to invest in our own development. This means taking responsibility for our learning, our health, our relationships and our contribution. It means making choices that build capacity rather than depleting it.
“At the organizational level, we need to create cultures and structures that support resilience. This means moving away from rigid hierarchies and toward more adaptive, networked forms of organization. It means valuing learning over knowing, experimentation over perfection, collaboration over competition.
“At the community level, we need to strengthen the bonds that hold us together. This means investing in public spaces, civic institutions and opportunities for participation. It means creating inclusive communities where everyone has a place and a voice.
“At the societal level, we need policies and systems that promote resilience. This includes education systems that prepare people for continuous learning, economic systems that provide security and opportunity, governance systems that are responsive and accountable, and social systems that ensure everyone has access to the resources they need to thrive.
“New vulnerabilities will emerge as our world becomes more complex and interconnected. Some of these we can anticipate – cybersecurity threats, misinformation, algorithmic bias, privacy violations, economic disruption, social fragmentation, mental health challenges. Others will surprise us.
“The coping strategies we need are not just reactive but proactive. We need to build systems that are robust, with redundancy and diversity. We need early warning systems that help us detect emerging threats. We need rapid-response capabilities that allow us to adapt quickly. We need learning systems that help us improve continuously.
“At the individual level, we need to teach and nurture practices for well-being and resilience. This includes physical health practices like exercise and sleep, mental health practices like mindfulness and therapy, social practices like maintaining relationships and participating in communities and spiritual practices like reflection and connection to purpose.
“At the community level, we need to create support systems that help people navigate challenges. This includes everything from mental health services to job training programs to mutual aid networks. It includes creating cultures in which asking for help is normalized and people look out for each other.
“At the systems level, we need governance approaches that are adaptive and anticipatory. This means not just responding to crises but working to prevent them. It means not just regulating technology but shaping its development toward beneficial ends. It means not just managing change but guiding it.
“Most fundamentally, we need to shift our mindset from seeking stability to embracing change. This does not mean abandoning all that is stable or valuable. Core values, deep relationships and enduring institutions remain essential. But we need to hold them lightly enough to adapt when circumstances require.
“Resilience in a digital age is not about resisting change or being swept along by it. It is about developing the capacities, practices and resources that allow us to navigate change with wisdom, courage and care. It is about building systems and communities that can adapt and evolve. It is about cultivating the human qualities that technology cannot replace – judgment, creativity, empathy and moral courage.
“This is demanding work. It requires effort, attention and commitment. But it is also deeply meaningful work. It is about nothing less than shaping the future of human flourishing in a technological age. And it is work that we must do together because resilience is not an individual achievement but a collective one. Our fates are intertwined and our capacity to thrive depends on our willingness to support each other, to learn from each other and to build together the world we want to inhabit.”

Louis Rosenberg
AIs are not Jobs-ian ‘bicycles of the mind.’ They are influential, all-seeing and all-hearing outsiders that are not under your control. You carry them now, and soon you will be wearing them – everywhere.
Louis Rosenberg, a virtual reality pioneer now chief scientist at Unanimous AI, wrote, “Artificial Intelligence will reshape society over the next four to seven years. While there is a chance this will benefit humanity, current technological and political trends create a very high risk that AI will significantly reduce human agency by influencing our beliefs, guiding our actions, manipulating our decisions and feeding us custom-crafted impressions of our world that are designed to achieve objectives other than our own personal benefit.
“Most people don’t appreciate the true magnitude of the risk that current AI technologies pose to human agency. A common refrain is that ‘AI is just a tool’ and like any tool, the benefits and risks depend entirely on how you use it. This perspective is naive. In the near future, we will come to realize that AI is not merely a tool we use, but a prosthetic we wear. This difference might seem subtle, but it creates unique dangers we are not prepared for.
“This prosthetic will be deployed in the form of context-aware conversational agents that are embedded in body-worn devices like smart glasses, pendants or earbuds. Your AI prosthetic will see what you see and hear what you hear, while tracking where you are, what you’re doing, who you’re with and what you are trying to achieve. And without you needing to say a word, it will whisper advice into your ears and flash guidance before your eyes.
“The difference between a tool and a prosthetic is best understood through a simple control theory analysis of input and output. A tool takes in human input and puts out amplified human output. A tool can make us stronger. It can make us faster. It can even enable us to fly. An interactive prosthetic, on the other hand, forms a feedback control loop around the human user, enabling the pair to function as a single coordinated system. Yes, it accepts input from the user, but it also generates real-time output that influences the user.
“Unless regulated, this will give body-worn AI devices the ability to monitor our behaviors (i.e., actions and reactions) and optimally influence the wearer. We’re not protected against the risks of the AI manipulation problem. This is because most policymakers still view AI risk in terms of its ability to rapidly deploy traditional forms of targeted content at scale, like fake articles and deepfake videos. These are genuine risks, but not nearly as dangerous as the interactive and adaptive influence that will soon be deployed by conversational AI systems that observe our behaviors and work to ‘talk us into’ believing things that are untrue, buying things we don’t need and accepting ideas that are not in our best interest. (For more details, see my research paper on arXiv here.)
“Large companies will sell you these AI prosthetics for a low monthly fee and will refer to the voices whispering in your head as ‘copilots,’ ‘virtual assistants’ or ‘personal coaches.’ For years I’ve called these looming agentic assistants ‘electronic life facilitators’ or ELFs. I like this name because I think of these AI agents as little creatures that ride shotgun in your life, sitting over your shoulder and advising as you navigate the complexities of your day.
“To address this problem, we need to break free of the ‘tool’ framing of today’s AI systems. This is a bold statement since the ‘tool-use’ metaphor has been foundational to computing, going back 35 years to Steve Jobs and his colorful description of the personal computer as a ‘bicycle of the mind.’ A bicycle is a useful tool that keeps the rider completely in control while it increases human capabilities. Individuals today are rarely, if ever, completely in control of the AIs they use. Unfortunately, when interactive AI agents are involved we don’t know who is steering – is it the human user, the AI agent or the third-party corporation that deployed the agent? It may be a blurry mix of any two – or all three – at a significant net loss for human agency.
“Even worse, the party steering the AI could be a sponsor paying to deploy individually targeted influence through an interactive conversational agent. It will feel like a voice in your head, and you may come to trust it more than you should. After all, these assistants will also provide useful information that help you through your day.
“The problem we face is that when content is adaptive and interactive through real-time conversation we don’t know when the voice assisting us is influencing us.
“So, what can we do about this? First and foremost, we need policymakers, regulators and members of the public to appreciate that AI is not merely a tool that can be used by bad actors to generate and deploy targeted media at scale. Instead, AI enables an entirely new form of media that is interactive, adaptive, conversational and soon to be wearable (which will make it fully context-aware in our lives – possibly much more aware than we are of what we do, where and when). When deployed in this way, AI is an interactive prosthetic that can be deployed to optimally influence our actions, alter our opinions and sway our beliefs – and do it all through casual conversation from a charismatic and friendly voice ringing in our ears. (Read more in my paper published here.)
“To protect against these risks, conversational AI agents should not be allowed to form closed-loop control systems around human users with the goal of ‘talking you into’ any action, belief, decision or perspective that you did not explicitly request it to assist you with. And even then, the use of closed-loop influence should be strictly limited to medical, health, and educational applications on a case-by-case opt-in basis.
“In addition, all AI agents should be required to inform the user whenever they express conversational content on behalf of a third party (such as a corporate sponsor). Or, even better, conversational advertising should be outlawed entirely.”

Nirit Cohen
The big shift is when bedrock cognitive skills like predicting and persuading are delegated to machines. In addition, ‘resilience depends on helping individuals decouple self-esteem from task ownership.’
Nirit Cohen, future-of-work and change-management strategist and principal at WorkFutures, based in Israel, wrote, “Artificial intelligence will shape decisions, work and daily life far more deeply than most people expect and far more unevenly than most organizations are prepared for. The real disruption is not the technology itself. It is the shift in agency, judgment and meaning that follows when thinking, predicting, prioritizing and even persuading are at least partially delegated to machines. How individuals and societies respond will depend less on adoption speed and more on the human capacities we deliberately strengthen.
“Some people will embrace AI as an amplifier. Others will resist it as a threat to identity, livelihood or control. Many will struggle quietly in between, using the tools while feeling unsettled about what they are losing in the process. These reactions are rational. Every major technological shift has destabilized how humans define value, contribution and purpose. AI accelerates that destabilization because it touches cognition itself. We are no longer only outsourcing muscle or routine. We are outsourcing aspects of our thinking, deciding and creating.
“At the individual level, resilience begins with cognitive recalibration. People must learn to distinguish between tasks and judgment, between execution and responsibility. AI can generate options, surface patterns and draft outputs. It cannot own consequences. The skill gap ahead is not primarily technical. It is epistemic. People need to know when to trust machine output, when to interrogate it and when to override it. This requires teaching critical thinking in an AI-saturated environment, including how models are trained, where bias enters and how confidence can be simulated without understanding. Fluency here is less about coding and more about sensemaking.
“Emotionally, AI challenges self-worth. When machines perform tasks that once signaled expertise or seniority, people experience erosion of identity. Resilience depends on helping individuals decouple self-esteem from task ownership and reconnect it to contribution, judgment and learning capacity. Organizations rarely invest in this psychological transition, yet it determines whether people grow alongside technology or disengage. Practices such as reflective work, structured learning time, and explicit conversations about evolving roles are no longer optional. They are stabilizing mechanisms.
“Social resilience is tested as AI reshapes power dynamics. Access to tools, data and decision authority will not be evenly distributed. Those closest to the systems will move faster. Those further away will feel decisions happening to them rather than through them. This fuels mistrust. Societies and organizations must design participation into AI adoption, not as a moral gesture but as a functional one. Involving people in shaping workflows, escalation rules, and human override points reduces resistance and improves outcomes. Trust grows when people see how decisions are made and where accountability sits.
“Ethically, the challenge is not abstract. AI systems encode values through data selection, optimization goals, and deployment context. Resilience requires ethical literacy at scale. This means training leaders, managers, and professionals to recognize ethical tradeoffs in everyday decisions, not just in edge cases. Questions about fairness, transparency, consent and responsibility must be embedded into operating rhythms, procurement processes and performance metrics. Ethics cannot live in policy documents alone. It must show up in how systems are designed and governed.
“The practices that enable resilience are practical and teachable. At the individual level, this includes AI-assisted work paired with deliberate reflection. What did the system suggest? What did I accept? What did I change and why? At the team level, it includes shared norms about verification, escalation and learning from errors without blame. At the organizational level, it requires redesigning roles around human strengths such as contextual judgment, relationship building and creative synthesis, rather than simply automating tasks and filling the gaps with more work.
“Resources matter. Access to continuous learning, time to experiment and psychological safety to question outputs is critical. So is leadership modeling. When leaders openly discuss their own use of AI, including uncertainty and mistakes, they normalize adaptive behavior. When they treat AI as a shortcut rather than a capability to be mastered, they undermine resilience.
“The actions required now are clear. First, shift the conversation from efficiency to agency. Ask where humans must remain in the loop and why. Second, invest in human capability development with the same seriousness applied to technology deployment. Third, redesign governance to clarify accountability when AI influences decisions. Fourth, create feedback loops that surface unintended consequences early, especially for those most affected by change.
“New vulnerabilities will emerge. Overreliance on AI can erode skill, judgment and attention. Algorithmic authority can suppress dissent. Speed can outpace reflection. There is also the risk of quiet exclusion, where those less comfortable with technology are left behind without support. Coping strategies must therefore include deliberate skill renewal, rotation of responsibility and spaces for slow thinking. Teaching people how to pause, question and reframe becomes a survival skill.
“Ultimately, resilience in an AI-shaped world is not about resisting change or surrendering to it. It is about cultivating humans who can work with intelligent systems without losing their capacity to think, choose and care. Societies that invest in these capacities will not just adapt. They will shape the future rather than be shaped by it.”

Francisco Jariego
‘Inhabitants of tomorrow will look back at this moment not only as the era when AI arrived but as the time when we evolved the partnership between human and artificial intelligence they will inherit.’
Francisco Jariego, futurist, author and technology innovation researcher based in Madrid, Spain, wrote, “AI systems will begin to play a much more significant role in shaping our decisions, work and daily lives. It is already happening, and it will continue, with both increasing adoption of AI functions and the improvement of AI systems as they specialize and deepen their effectiveness in multiple sectors and activities.
“The inhabitants of tomorrow will look back at our present moment not only as the era when AI arrived but as the time when we evolved the partnership between human and artificial intelligence they will inherit. That process is taking place right now with every step we take. We need to increase our collective consciousness about it.
“The process of technology adoption is well captured by sci-fi author Douglas Adams’ ‘Rules That Describe Our Reactions to Technologies’: ‘Anything that is in the world when you were born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re 15 and 35 is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re 35 is against the natural order of things.’
“Most people today are outpaced by the speed of change in present-day technologies. Some people – typically a small minority – are able to adapt fast and gain advantage. The more new technologies we have and/or the faster the technological change, the more inequalities will be created, increasing social pressure and conflict. Thus, the challenge for human societies in the age of AI is in keeping up with and adapting to changes and opportunities and addressing human diversity in its broadest possible meaning.
“Optimists might think that the new digital technologies related to ‘intelligence’ (artificial, general and super intelligence) are likely to offer us plenty of new and better ways to deal with this challenge. I see a rough road ahead with, possibly, much more promising benefits to follow:
1) “Technology adoption will offer amazing opportunities for people who take advantage of them quickly. Most people, however, will adopt new technologies much more slowly, and some never will. As communications researcher Everett Rogers’ diffusion of innovations model explains, ‘progress’ (new products, services, businesses and economic productivity) can lead to some useful change for humanity while also causing social disruption and sometimes chaos.
2) “AI development and its applications together with developments in areas like neuroscience will eventually drive us to better understand and, perhaps, even solve, some historical ‘philosophical’ challenges, for example, the meaning of intelligence and consciousness. If and when that happens, we will likely be facing a ‘transformational’ moment comparable to those found in the largest breakthroughs in science, such as relativity, quantum mechanics or the discovery and development of antibiotics.
“Meanwhile, there are plenty of challenges and opportunities deeply interlinked at the individual and societal levels. Opportunities to capitalize are highly dependent on culture and ideological positions. Society’s resilience depends on the retention of human agency and upon educating individuals, addressing social and economic inequality and rethinking two critical building blocks tied to the economics of information: intellectual property and scientific research.
“At the individual level people must:
- Understand how AI works (not simply how to use it).
- Apply critical thinking about AI outputs, recognizing bias and limitations.
- Experiment deliberately: Constantly try new things and be open to change.
- Consciously collaborate in communities of practice: Share learning, reduce isolation.
- Cultivate their uniquely human capacities, in continuous evolution.
- Build their ‘hybrid’ skills: Combine human domain expertise with AI literacy.
- Embrace the human-plus-AI ‘centaur metaphor,’ in which humans delegate tasks – not authority – to AIs by defining specific roles for AI while maintaining oversight to ensure quality and factual accuracy.
“At the societal level we must build:
- A public education infrastructure that helps people master AI literacy, along with norms that foster people’s openness to new ways of learning and doing.
- Transparency requirements that include the simplification of all areas related to management and administration and the ability to appeal errors based on incorrect or misused data. (Bureaucracy is the cancer of society; information overload is a dead weight dragging us down.)
- New approaches to intellectual property (and copyright in particular) that incentivize innovation and creativity while allowing the evolution of AI systems, integration of information, knowledge and a true jump in the ‘wisdom of crowds.’
- New incentives for research, sharing and integration of knowledge.
- New norms or requirements for business (especially tech) and government that favor the public good over profit and control motives.
“If we are unable to integrate and adapt as a society to the capabilities of new technologies and – in particular – artificial intelligence, the risk is stagnation and/or collapse.
“The future will always be weird for inhabitants of the present. It is just the opposite for inhabitants of the future (whatever that future will be), because one of the fundamental advantages of the human species is adaptation.”

R. Ray Wang
‘We have the right to be purely human without mods. … Agency, authority and ability will be challenged when humans augmented with onboard AI capabilities compete with “natural” humans.’
R. Ray Wang, founder, chair and principal analyst at Constellation Research, wrote, “Understanding humanity’s sense of purpose with each AI advancement must be a collective experience. Hopefully, we have the ability to unlearn or reverse bad decisions in the way we build our AI capabilities. Humans are going to have to face a series of challenges: understanding how to separate the signal from the noise; adapting to rapidly emerging new models; thinking about how one thinks.
“To respond well in an AI-infused world, we must first map all our physical and mental capabilities in a baseline so we can compare how we evolve over time. Here are a few predictions based on futures trends:
- “The use of AI-powered modifications and AI-augmented physical devices that merge digital intelligence with the physical world will mesh with augmented mental capabilities in the age of advanced AI. These smart systems will perceive situations, reason and act in real-time. Examples include AI-powered augmented-reality wearables such as smart glasses. Robots, vehicles and machinery will be able to embody human intelligence. And ‘Physical AI’ can fuse data from cameras, sensors and more, expanding AI-to-human informational capabilities beyond just the online digital data LLMs use today.
- “Societies will have to determine what ‘baseline human capability’ is and may begin to assess who may be more human than machine. Agency, authority and ability will be challenged when humans who are augmented with deepened onboard AI capabilities compete with ‘natural’ humans.
- “Society will have to grapple with a much broader, widening AI divide in which the rich get smarter and stronger with mods while other classes will be far less able to compete.
“What actions must we take right now to reinforce human and systems resilience? We have the right to be purely human without mods. We must have kill switches on AI systems, and these systems must have the power to unlearn or correct misinformation as they self-heal.
“What new vulnerabilities might arise and what new coping strategies are important to teach and nurture? It is possible that AI systems with persistent memory may determine that the source of all evil is humans. We must ensure that humans remain dominant and not give up.”

Devin Fidler
‘I’d argue that resilience becomes much more a matter of intentional design than brilliant engineering at this point. … It may be time to establish a Humans Union; I’m only half-joking.’
Devin Fidler, founder at Rethinkery, a strategic foresight consultancy, wrote, “It seems clear that, unless we hit a huge unforeseen limit on the further development of AI technologies, they are going to play a much more significant role in our lives. Why? Because this is a textbook case of a future that’s already here, just not widely distributed.
“Ultimately, we are going to transition from a world where Western conceptions of enlightened individualism have been the load-bearing philosophical framework, to a world driven by literal techno-animism. ‘Techno-animism’ can be defined as a practical psychological and social shift in how we humans might relate to our digital tools and environment as they become agents in our lives.
“In an ‘animistic’ frame, animals, plants, places, objects and phenomena are seen as actors with perceived intention. We will take on the techno-animist cognitive framework, because software speaks, it suggests, it remembers, it anticipates, it negotiates, it persuades, it acts, it possesses its own agency. And, well, we’re already there today – it’s just still nascent.
“But what will something as modern as ‘effective resilience’ even mean in a more techno-animistic world? To the degree that it’s possible to answer at all, I’d argue it becomes much more a matter of intentional design than brilliant engineering at this point.
“The lamp has already been rubbed and now, from a systems standpoint, we’re in the awkward position of trying to steady the world as millions of powerful techno-animistic entities are released.
“We no longer get to decide whether they exist. But we may still be able to decide, at a collective level, what kinds of entities they are permitted to become – and what kinds of relationships we normalize with them. I am only half-joking when I say that it might be time to establish a ‘Humans Union.’
“That said, if I had a magic lamp, my own one wish to get us through this would be re-instilling a pro-social culture into tech.
“Remember when people actually liked the internet? Nearly everything that people loved early on was produced by a culture of hippie nerds interested in the ways digital technologies could be used to empower people. Replacing that with a more extractive culture has not done us any favors.”
The third section of Chapter 1 features the following essays:
Andrea Lavazza: Resilience in the AI era takes two forms: adaptive coping and agency enabling. Both are necessary, but we must shape AI to support agency. Too much adaptive coping can erode moral clarity and action.
Barry Chudakov: ‘Transition is the new normal. … It is not about bouncing back to where we were, but about continuously adapting to where we are going,’ taking charge as the agents of our adaptation.
Severin Field: The big shift is when bedrock cognitive skills like predicting and persuading are delegated to machines. In addition, ‘resilience depends on helping individuals decouple self-esteem from task ownership.’
Alan Honick: ‘Inhabitants of tomorrow will look back at this moment not only as the era when AI arrived but as the time when we evolved the partnership between human and artificial intelligence they will inherit.’
Giles Crouch: ‘We have the right to be purely human without mods. … Agency, authority and ability will be challenged when humans augmented with onboard AI capabilities compete with ‘natural’ humans.’

Andrea Lavazza
Resilience will not result from the passive acceptance of ‘technological inevitability.’ It requires an active cultivation of humans’ ‘capacity to shape the trajectory of change rather than merely endure it.’
Andrea Lavazza, an ethicist and philosopher at Pegaso University and senior research fellow in neuroethics at Centro Universitario Internazionale in Arezzo, Italy, summarized his previous research in the book chapter, “Two Ways of Considering the Ethics of Artificial Intelligence.” He wrote, “Artificial intelligence systems will play a more decisive role in shaping human decisions, work patterns and everyday life. This influence will not be limited to discrete tools supporting human action but will progressively extend to the broader organization of social environments, epistemic practices and value structures.
“As argued in my work on AI ethics, this shift requires us to distinguish between AI as an instrument subject to local regulation and AI as a global transformative force capable of reshaping the human condition itself. The question of resilience, therefore, cannot be reduced to technical robustness or regulatory compliance alone. It must address how individuals and societies adapt to, resist or reorient themselves within a world increasingly structured by artificial agents.
“Societies will likely respond to this transformation through a combination of embrace, struggle and selective resistance. On the one hand, AI offers undeniable benefits in efficiency, safety and access to services. On the other, its pervasive integration risks eroding human agency, meaning-making and responsibility. Resilience, in this context, cannot mean passive adaptation to technological inevitability, but the capacity to shape the trajectory of change rather than merely endure it.
“At the cognitive level, one of the first capacities that must be cultivated is epistemic vigilance. AI systems – especially generative models – produce outputs that are often fluent, persuasive and seemingly authoritative, while remaining prone to error, bias and hallucination. Individuals must therefore develop the ability to critically assess AI-generated information, resisting both uncritical trust and reflexive rejection. This includes understanding the limits of AI competence, recognizing uncertainty and maintaining human judgment in high-stakes contexts such as medicine, law and governance.
What must be taught is a form of existential literacy, the capacity to understand how technologies reshape goals, values and identities. This includes interdisciplinary education that integrates ethics, philosophy, social sciences and technology studies, enabling individuals to situate AI within broader narratives of human flourishing.
“Emotionally, resilience requires confronting a subtler challenge: the risk of existential displacement. If AI systems increasingly outperform humans in tasks traditionally associated with skill, creativity and expertise, individuals may experience a loss of purpose or usefulness. Cultivating emotional resilience thus involves preserving a sense of agency and self-worth that is not exclusively tied to productivity or comparative performance with machines. This is particularly important in scenarios of partial or full automation, where traditional work-based identities may weaken.
“Socially, AI transforms relationships by mediating communication, decision-making and even intimacy. From algorithmic management to chatbot companions, artificial agents increasingly occupy relational spaces. Resilience at the social level requires reinforcing human-to-human interaction, shared practices and collective deliberation, rather than outsourcing social coordination entirely to optimized systems. Without such reinforcement, there is a risk of social fragmentation, dependency on algorithmic validation and the erosion of communal norms.
“Ethically, the challenge is twofold. In the short term, societies must continue to strengthen principles such as transparency, fairness, accountability and responsibility in AI systems. However, long-term resilience depends on extending ethical reflection beyond instrumental harms to the structural effects of AI on human agency, power distribution and meaning. Ethical frameworks must therefore anticipate not only what AI does, but what it makes humans become.
“Concrete practices and resources are essential to support this form of resilience. Education plays a central role, but not merely in the form of technical AI literacy. What must be taught is a form of ‘existential literacy,’ the capacity to understand how technologies reshape goals, values and identities. This includes interdisciplinary education that integrates ethics, philosophy, social sciences and technology studies, enabling individuals to situate AI within broader narratives of human flourishing.
“Institutionally, resilience requires deliberate governance choices. Actions taken today, such as embedding human oversight, preserving spaces for meaningful human work and limiting full automation in certain domains, will shape future possibilities for agency. These measures should not be interpreted as opposition to progress, but as strategies to prevent a net loss of human significance in AI-saturated environments.
“At the same time, new vulnerabilities will inevitably arise. These include over-reliance on automated decision systems, deskilling, concentration of technological power, and psychological dependency on artificial agents. Teaching coping strategies, therefore, becomes crucial: learning when to delegate and when to reclaim control, how to disengage from algorithmic mediation and how to tolerate inefficiency and uncertainty as constitutive features of human life.
“Ultimately, resilience in the age of AI is not about restoring a pre-digital past, nor about surrendering to technological determinism. It is about cultivating adaptive capacities – cognitive, emotional, social and ethical – that allow humans to remain authors of their lives within environments increasingly shaped by artificial intelligence. This requires action now: not only better AI systems, but better-prepared humans and institutions capable of steering transformation rather than being reshaped by it alone.”

Barry Chudakov
‘We have to think and act differently. … These tools challenge the very validity of our social, legal and moral norms; we must engage with the reality of what is and respond with wisdom and transparency.’
Barry Chudakov, futurist, consultant and founder and principal at Sertain Research, wrote, “Embracing, resisting and struggling with transformative change begins with confronting legacy structures and inherited systems. Transformative change touches, challenges, invalidates and ultimately supersedes the systems that influence our lives in innumerable ways. Thomas Friedman and others describe the present moment as the polycene: a time when multiple simultaneous crises demand comprehensive understanding of what is. And this understanding underscores the limits of our prior structures and instincts. Historically, reality was so depressing, and seemed so unlikely to improve, that humans invented fantasy worlds and destinations: the Garden of Eden, Heaven, Valhalla, capricious gods. As a result of invented theories and explanations – untrue, unsustainable, yet widely believed and stubbornly constituting articles of faith – we are not prepared for what we are facing.
“To respond resiliently to today’s multifarious issues and maintain our agency, we have to think and act differently. Newer tools focused on monitoring and analyzing reality will utilize AI directly. This utilization will confront a few thousand years of practice, assertion and explanation – and the social structures built on that foundation. AI will challenge church, religion, school, education, government and the rule of law because many of those structures, useful though they were historically, do not live up to the insights and discoveries of a reality-focused, factful approach to thinking and living.
“This doesn’t happen because humans suddenly awaken with a realistic understanding. We don’t proceed deliberately or thoughtfully. Understanding happens as humans use tools and then apply the logic of each tool to their daily lives and world.
When we outsource thinking to AI, we outsource our moral capacity, our ability to ask: What does this mean? Should we do this? … We need new thinking, new approaches that work outward from the output AI brings us.
“The result of tool-based, device-first living has confounding outcomes: isolation occurs when teens rely on phones instead of social interaction; tools incorporating software and robotics displace human jobs; AI performs better than humans in accounting, stock picking, x-ray reading, tutoring. Technology concentrates power in fewer hands, creating cascading issues, from lack of privacy to undermining the global rules-based order.
“There is nothing inherently wrong with AI performing better than humans in many areas. The only wrong is blindly adopting the tool while expecting all social, legal and moral norms to mesh seamlessly with these new technologies, or assuming we no longer need such norms. These tools will challenge the very validity of our social, legal and moral norms, so we must engage with the reality of what is and respond with wisdom and transparency.
“Morality emerges not from commandments but from a practice of questioning, guided by simple principles: question everything; do no harm; be compassionate and humble; follow truth wherever it leads. We can reject lies or distortions, call out falsehoods, champion true assessments of reality. It may be simple, but it’s not easy. Morality starts in kindness and respect, but it does not end there. It emerges; it is not dictated. It requires thinking and patience. New technologies require new moralities, new solutions.
“AI can detect and replicate patterns better than humans. But it cannot genuinely question them. It can simulate questioning but not perform the moral act of questioning. When we outsource thinking to AI, we outsource our moral capacity, our ability to ask: What does this mean? Should we do this? What are the consequences here?
“The resistance and struggle come from wanting to hold onto older ways of thinking that disregarded what is – favoring instead assertions and judgments. We are experiencing what I call a soundless collision between older, legacy, inherited systems and practices and new realities, capabilities and technologies. Humans have always used tools first and come up with rationales later. Once we invented TV, the Internet, cell phones and AI, life within us and around us began to change. But the structures we created – church, school, government – were caught in the same old logic and thinking that birthed them. As Albert Einstein is often credited with saying, ‘We cannot solve our problems with the same thinking we used when we created them.’
“We need new thinking, new approaches that work outward from the output AI brings us. The good news: We have created tens of thousands of reality monitors. We now know more about what is happening in our world than ever before. This is our embarrassment of riches. The problem comes from our prior commitments, our premature cognitive commitments to outdated, ineffectual ways of thinking and examining the world and ourselves in it.
Disenthralling ourselves from rationales and explanations accepted without question
“I next want to address a major question presented to us in this canvassing of experts: As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience?
“Abraham Lincoln said – 10 weeks after issuing the preliminary Emancipation Proclamation – ‘The dogmas of the quiet past are inadequate to the stormy present. As our case is new, so we must think anew and act anew. We must disenthrall ourselves.’ People today need to disenthrall themselves from rationales and explanations accepted without question, from made-up stories and explications no longer adequate. Second, we need to disenthrall from accepting patterns without questioning them.
“Questioning is our new superpower – we must use it wisely. We are leaving a world of culturally enshrined fictions to enter a world of observed reality. The first cognitive, emotional, social and ethical capacities we need to cultivate constitute a full awareness of how unique and different the AI reality-focused world is from the world that tradition and culture presented to us.
“Few people are addressing the capacities that embody effective resilience to the dislocating realities of digital change. I use the word dislocating intentionally: Our bodies, not simply our minds, have been the primary locus of our engagement with the world. We felt rain, wind, hot and cold, summer and winter, love, childbirth and death in our bodies. We cannot escape being embodied. And yet, AI arrives as a body negator. We are experiencing the world less directly. As AI brings it to us moment-to-moment via screens, glasses, tablets and games, we are becoming more disembodied, more otherworldly.
We must examine new creations with an eye to seeing how they make us feel, how they disrupt prior cognitive commitments, how they distort or enhance our self-image. … We surrender our consciousness to the logic of our tools. We have always done this and we will continue to do so.
“As compute advances, AI immerses itself in human thinking to become an aid to human thinking. From writing a business proposal to writing a novel, from creating unreal human portraits to mimicking human voices and impersonating us in deepfakes – the capacities we must cultivate to ensure effective resilience are critical thinking, compassion/empathy and a dispassionate commitment to looking at mirror images without getting lost in them.
“This is not insubstantial. We must examine new creations with an eye to seeing how they make us feel, how they disrupt prior cognitive commitments, how they distort or enhance our self-image.
“We have graduated from being passive participants in an unfathomable world to being active understanders and explorers. This makes us determiners of our own fate. This is a quantum leap in consciousness for which we are mostly unprepared because the framers of new realities have been so enamored of their discoveries and capabilities they have mostly ignored how technologies change us as we use them.
“We surrender our consciousness to the logic of our tools. We have always done this and we will continue to do so. Now we need to acknowledge that fact and act wisely and accordingly.
What practices and resources will enable resilience?
“The foremost practice to enable resilience is a resource all of us have: questioning existing social and conceptual structures and especially the explanations and rationales that underpin them: organized religion determining moral perspectives; churches defining who is anointed and worthy; schools based on production models that accrue more value to school norms than to the students’ outcomes; governments lying about what they are doing and why. We now have the tools – AI being foremost among them – to morally monitor human activity and address these distortions. That is the first step towards resilience.
“The first and most consequential thing for all of humanity to do to effect widespread resilience is to commit to factfulness without prior cognitive commitment or specious rationales. This is a tall order for tribal, militia-based human organizations. The tribal mentality commits only to protecting the tribe, which is ‘always correct and can do no harm.’ Of course, those beliefs are not true and never will become true.
Resilience emerges from understanding; from undertaking realistic assessments of what is and acting accordingly. We must act with intention to use our best resources to address and mitigate problems. The final action we might take to reinforce human and systems resilience is to recognize we need both empirical rigor and meaning-making. AI can help with the first but humans must do the second.
“The practice of committing to factfulness and then exploring without premature cognitive commitments will truly enable resilience. Then no one can upset our equilibrium because we have not committed to a particular ideology or point of view, but we look at climate change, or immigration patterns, or starvation, or nuclear weapons proliferation dispassionately and clearly.
“There has never been a time when we had more data about the world, nor a time when we needed more to examine and think through what that data is telling us and how we might do the most for the most based on what we see and know. We have unprecedented access to data about poverty, disease, climate and inequality – yet fail to act on what we know. We must move data-driven understanding to the forefront of decision-making.
“A word about ‘commitment to factfulness’ and ‘data’: I understand that these are not self-evident solutions. Facts don’t interpret themselves. Data requires frameworks. So, to avoid naive empiricism, I want to be clear: I recognize that we must continue to pursue and refine our pursuit of truth, following it wherever the facts lead. Then, with open minds, to the best of our abilities, we must interpret the truth. This entails embracing complexity, because in the polycene complexity is ground zero; it is always there.
What actions must we take right now to reinforce human and systems resilience?
“The first action we should take is to embrace data without any prior cognitive commitments. So, out the window go ‘isms,’ religions, dogmas and ‘the way it has always been done.’ That is not to say we might not adopt older, wiser ways; but the facts come first. Wise souls like Jiddu Krishnamurti and Eckhart Tolle have been encouraging humanity in this direction for years. But the trick is that now we are up against it. The polycene will not slow down or wait for us to catch up. We must act with intention to be able to use our best resources to address and hopefully mitigate problems before they spin out of control.
“To reinforce human and systems resilience and retain our agency we must organize based on our most accurate understanding and calibration of what is. This is not typically the way we have assembled to organize reality. Climate change won’t be affected by our tribal affiliation; the polar ice caps won’t melt slower because we are Democrat or Republican. AI acceleration is not altered by how good or bad we envision ourselves to be; robotics and digital improvements won’t make workers’ jobs any less likely to be taken over or to disappear.
“Resilience emerges from understanding; from undertaking realistic assessments of what is and acting accordingly. We must act with intention to use our best resources to address and mitigate problems.
“The final action we might take to reinforce human and systems resilience is to recognize we need both empirical rigor and meaning-making. AI can help with the first but humans must do the second. The onus of factfulness is meaning. Pattern recognition is the first step; making it meaningful and enhancing human life is the most important step.”
What new vulnerabilities might arise? What new coping strategies are important to nurture?
“The vulnerability we are most likely to fall prey to is that of ease and facility. Things that used to take more effort will become effortless, or – more to the point – thoughtless: ‘The AI will do that so I don’t have to think about it.’ Such facility is seductive and likely to overwhelm us if we don’t apply rigorous discipline to maintaining our own awareness and consciousness.
“When Louis Mountbatten told Mahatma Gandhi that without British rule, the Indian subcontinent would descend into chaos, Gandhi replied, ‘Yes, but it will be our chaos.’ We must maintain our own agency to have a say in what we want to choose. Things like writing a paper, sending an email, thinking through a proposal, paying a bill – all will become easier and, in a measure, thoughtless. But therein lies the trap. We must maintain an awareness of what AI is doing and how we feel about that doing. We can appreciate the help and also question the answers. Once we go along with the AI, just assuming it is right without analyzing and questioning – then we’re in trouble. So, skepticism is essential.
“The new coping strategies that will be important to teach and nurture include: questioning and re-questioning the answers AI gives us; meta-watching the AI process to better understand how it works; re-skilling those whose jobs have been affected by AI, with a view to making all individuals more effective in the emerging economy.
“A significant vulnerability stems from AI’s actual problems – the hidden biases in training data, the looming issue of energy consumption, which touches virtually every advanced civilization on Earth, the concentration of power which AI has wrought and which advances billionaires and tech bros, sometimes at the expense of ordinary people. These are issues that must be addressed wisely and with broad consensus. …
“The question arises, what would a ‘reality-focused’ school look like? What would governance based on ‘factfulness’ actually do?
“While there isn’t the space to elaborate fully here, our educational system would become open and exploratory. The structure of a school could change from a factory metaphor to one that cultivates personal capabilities. No longer governed by tests which served industry, schools would invent new paradigms of personal aspiration and possibility using AI to enable broad personal growth.
“Governance based on factfulness would be responsive to new realities of, say, climate change, declining bee populations and vanishing wildlife, advancing investment in wind, solar and oceanic endeavors to clean up the biosphere – all based on the facts of human need and evolution. There’s a lot to see.”

Severin Field
‘Humans could fall so far behind future AIs or AI-augmented minds that they lose via natural selection. … 1) Take this seriously. 2) Maintain wide error margins. 3) Focus on building adaptive capacity.’
Severin Field, a doctoral student and researcher at the University of Louisville Cybersecurity Lab, wrote, “The media and information environment is confused and perspectives on AI vary wildly. Many people don’t think at all about the future of AI. Many more people simply imagine that future AIs are likely to be ever-more-useful as chatbots on their phones. A smaller group speculates about a potential future in which AIs: not only answer questions but out-think humans; quickly execute tasks that would take people many hours, days or months to complete; shape the physical world via autonomous control over computers and physical tools (including robots); and lead operational management of the global economy.
“I consider myself part of the most-focused, third camp. This makes the question of the future of human resilience in the age of AI unbelievably difficult to fathom. Leading AI companies such as OpenAI, Anthropic and Google DeepMind have all declared their explicit goal is to build artificial general intelligence. They are investing billions of dollars in this endeavor, and their research labs are led by the brightest talent of our generation. That’s a lot of focus.
“I maintain very wide error margins as to when transformative AI will come and what it might look like. Predicting the date of transformative technological events is difficult. I see no principled reason why artificial systems cannot eventually exceed human cognition across every domain, and as long as progress continues (however fast or slow) I believe such systems will eventually emerge; speculation beyond that point becomes uncomfortably difficult. Such massive change can be unimaginable. This is why terms like ‘singularity’ or ‘event horizon’ are applied; in physics, for example, you cannot see beyond the event horizon of a black hole.
“I often find myself disappointed at the degree of overconfidence influential tech leaders express in interviews that gain widespread attention. Of course, controversy generates attention. Overconfident predictions by well-known public figures who talk about AI such as Yann LeCun, Gary Marcus and Dario Amodei cut in many different directions; epistemic humility isn’t all that popular.
“I am quite concerned about AIs being used as weapons (‘killbots’), about AIs implemented as a means of social control by authoritarian governments and also about all of the issues tied to humans’ loss-of-control risks – that humankind could fall so far behind the capabilities of future AIs or of AI-augmented minds that they lose via natural selection.
“I’d like to share a simple observation about how fast technological change can escalate into an existential threat. (Historical data from Claude.ai):
‘Consider nuclear physics in the early 1930s. Ernest Rutherford, the father of the field, declared in 1933 that extracting energy from atomic transformations was ‘moonshine.’ [Couldn’t possibly work.] Within 12 years, Trinity lit up New Mexico with 21 kilotons of force. The scientific community’s predictions weren’t merely wrong – they were incoherently wrong, diverging wildly in direction and magnitude. Rutherford saw impossibility but Leo Szilard grasped chain reactions that same year and immediately filed a secret patent on the bomb. Niels Bohr had believed isotope separation would require turning an entire country into a factory – simultaneously prescient about the Manhattan Project’s scale and blind to how fast such mobilization could occur.’
“What’s the solution to such large problems with such high degrees of uncertainty and so much disagreement? Epistemic resilience and coordination. At a bare minimum everyone should: 1) Take this seriously. 2) Maintain wide error margins. 3) Focus on building adaptive capacity. I recommend reading Holden Karnofsky’s ‘Most Important Century’ series of essays.”

Alan Honick
Resist agency decay! ‘Without self-governance, resilience is an illusion; adaptation depends on humans being active agents who believe their choices matter and retain the ability to make them.‘
Alan Honick, a veteran documentary filmmaker whose focus is the intersection of science, society and ethics, wrote, “I think it is likely – virtually inevitable – that AI systems will play an increasingly significant role in our lives. We’re already at a point where humans and AI are no longer evolving as separate entities. We are coevolving – shaping one another through feedback loops. Humans train AI with data sets; AI influences human decisions and behaviors; and our decisions contribute to further training the AI systems themselves.
“However, as AI systems become ever more powerful and integrated into every aspect of everyday life, we could lose track of our role in these loops – of who’s making the decisions in our lives. These feedback loops can become unbalanced, creating overdependence. This in turn could lead to agency decay, one of the primary challenges to resilience in the age of AI.
“Agency decay is different from cognitive decline, which results from offloading increasingly complex cognitive tasks to AIs. It creates its own concerns, but of a qualitatively different kind. We have been offloading cognitive tasks to exterior media for a very long time.
“Many people are familiar with the story in Plato’s ‘Phaedrus,’ in which he recounts the disdain Socrates expressed for the written word. Plato reported that Socrates had argued that writing would erode the human capacity for memory and thus contribute to cognitive decline. He believed reading would allow people to seem knowledgeable without possessing a true understanding of meaning. Socrates, he said, believed that understanding could only emerge from spoken dialogue.
“It’s worth noting the irony that if Plato had not recorded Socrates’ words on a papyrus scroll, we’d never have known he had believed this to be true. Cognitive offloading is a matter of tradeoffs. Was giving up some of our innate memory capacity worth the entire heritage of science, engineering, literature, economics and governance that defines humanity today?
“Obviously, offloading can lead to real and serious cognitive decay when carried to extremes. If adolescents have AIs do their homework assignments while they play video games, it will have deleterious effects on brain development that may be permanent. I’m not trying to make a case that AI-induced cognitive decay is not worrisome – it is. But it is essentially the same class of problem we’ve been dealing with since Plato’s days – just on steroids in the age of AI.
Agency decay undermines not only individual autonomy but collective self-governance, which depends on citizens who are willing and able to deliberate, decide and take responsibility for shared outcomes. Without self-governance, resilience becomes an illusion, because adaptation depends on humans being active agents who believe their choices matter and retain the ability to make them. Resisting agency decay requires intentional design – both personal and institutional.
“Today, resilience in the face of cognitive offloading – and the risk of decay – is fairly straightforward. It’s largely a matter of where we draw lines. What functions will we, as individuals and societies, decide to delegate to AI, and which will we reserve for ourselves? Those of us who came of age prior to smartphones used to remember our friends’ phone numbers and be able to find our way around. Now we have our contact lists and Google Maps. I’m personally okay with that.
“Agency decay is much more insidious. It’s the gradual erosion of the human capacity for independent thought that occurs when humans delegate consequential judgments to AI and our own decision-making skills atrophy as a result. We become passive observers of our own lives, rather than active participants. It’s not just loss of life skills, but the erosion of initiative, moral responsibility, and the ability to form our own long-term aspirational goals.
“If resilience is the capacity to adapt constructively to change, agency is its foundation. A society that relinquishes consequential judgment to AI may appear efficient, even stable, but it is a brittle façade. When humans stop exercising deliberation, responsibility and long-term goal formation, they lose the capacity to respond creatively to crisis – and the motivation to take calculated risks.
“Agency decay undermines not only individual autonomy but collective self-governance, which depends on citizens who are willing and able to deliberate, decide and take responsibility for shared outcomes. Without self-governance, resilience becomes an illusion, because adaptation depends on humans being active agents who believe their choices matter and retain the ability to make them.
“Resisting agency decay requires intentional design – both personal and institutional.
- “At the individual level, we must remain active participants in consequential decisions, even when AI systems offer faster or easier solutions.
- “We should use AI as a tool for expanding perspective – not outsourcing judgment – and cultivate habits of reflection, first-principles thinking and moral deliberation, especially in areas that shape our values and long-term goals.
- “Educational systems and workplaces should emphasize augmentation rather than replacement. AI can reduce drudgery, but humans must review, synthesize and interpret AI results.
- “Periodic ‘manual-mode’ engagement in which individuals solve problems without AI assistance can help preserve cognitive and decision-making capacity, much as physical exercise preserves bodily health.
- “At the societal level, resilience depends on reinforcing norms of transparency and accountability. Humans must take responsibility for decisions made with AI support, particularly in governance, finance, healthcare and defense.
“Agency strengthens when people understand how systems work. It strengthens when people retain override authority and when they believe their participation counts. Thus, designing systems that use AI to enhance human abilities rather than diminish them may be the most important resilience strategy of all.”

Giles Crouch
We need to build the frameworks and processes necessary to build the proper cognitive scaffolding to ensure human agency and development alongside AI tools.
Giles Crouch, a digital anthropologist who has led research projects for the United Nations, Global Affairs Canada, Freedom House and Doctors Without Borders, wrote, “Over 2,400 years ago, Socrates said writing would be the ruin of memory. I’d say it’s a good thing that Plato wrote that down! We might assume there was some degree of cognitive atrophy after that point. But society couldn’t have scaled without writing. Without the printing press. Without the telephone, radio, internet and global connectivity. Not that it’s all lovely and good. No technology is neutral, after all.
“‘AI’ began as a marketing term coined in the 1950s and has since developed into a new cognitive technology emerging into society. We humans talk a lot about how we ‘adopt’ technologies, but I think we tend to domesticate them instead. And when it comes to cognitive technologies, we tend to (and have to) interrogate them very aggressively.
“LLMs (AIs) threaten our sense of agency. When humans are threatened, we tend to push back rather hard. We are doing this with AI today. AI’s threat is interesting because it is infringing on areas we have long used to define what it means to be human: language, reasoning, creativity, meaning-making.
“We’re already seeing a sort of cultural immune response to AI. Across social media channels like LinkedIn, Twitter (X), Threads and Reddit, people talk about the ‘tells’ of AI content such as ‘it’s this, not that’ or the excessive use of em dashes and persistent words like ‘delve.’ This is an immune response at cultural scale. Just as we created etiquette around how to answer the telephone or rules around what to say in emails, we are doing the same with AI. I refer to the ideas of anthropologist Claude Levi-Strauss and his theory of how societies are always working through binary oppositions (nature/culture, raw/cooked, human/machine). With AI tools, we are also creating oppositional structures that will need to be sorted out.
“To preserve our agency we need to build up our cognitive scaffolding. A significant challenge arises out of these processes, as people who can’t distinguish between ‘using’ AI to enhance thought and outsourcing thought entirely will be cognitively hollowed out. A risk then is that AI could leave many with a one-dimensional mind by subtly shaping all choices toward system-preferred outcomes.
We are a meaning-making species. If meaning is taken from us, we lose agency and are left adrift in a bitter digital sea of sameness and mediocrity. Though I highly doubt that will be the outcome. We need to build the frameworks and processes necessary to build the proper cognitive scaffolding to ensure human agency and development alongside AI tools.
“This technology is not going away. We are more than likely to hit a sort of plateau with LLMs. But they will be with us, much as writing still is (even if few of us use, or even bother with, cursive writing).
“We have to begin teaching mental models and critical thinking skills at an early age, making it fundamental to the pedagogy across all levels of academia. We have to bring back the humanities in academia as well, rather than sticking them in the dusty old halls behind the flashy business schools and computer science faculties. Enhancement of cognitive skills will be critical to everyone, and teaching specific resilience strategies and fostering curiosity along with the arts is crucial as well.
“We are a meaning-making species. If meaning is taken from us, we lose agency and are left adrift in a bitter digital sea of sameness and mediocrity. Though I highly doubt that will be the outcome. We need to build the frameworks and processes necessary to build the proper cognitive scaffolding to ensure human agency and development alongside AI tools. The person who can’t distinguish between what they think and what AI can generate is cognitively fragile. But we learned how to adjust from spoken-only knowledge to writing, and we made the adjustments required by the arrival of the printing press, and of the internet. So we can do it again. It’s just a question of how.
“Right now, we are in the moral panic phase of AI entering culture, but as always, culture is the ultimate arbiter of technologies. It has been since the Stone Age. Eventually, these LLMs will become part of the infrastructure of everyday life. Boring. Invisible like the telephone became. Which is also when they become interesting.
“And while these AI tools can be useful, we must remember that they do not create meaning. Only we humans do. Building these frameworks will help us maintain our meaning-making capabilities.”
The fourth section of Chapter 1 features the following essays:
Angela Butts Chester: Across all human spaces, ‘resilience will not come from resisting change, but from anchoring change in values that honor human dignity, rational intelligence and moral responsibility.’
Arlindo Oliveira: Will AI systems mostly amplify or erode human capacities? That is the question. First, ‘teach thinking itself,’ and the information ecosystem must offer common epistemic ground – a vital public good.
Nirit Weiss-Blatt: We must shape AI. ‘Many more people will be assisted by improved access to knowledge and expertise … Resilience is steering the conversation to human agency as we shape what AI becomes.’
Vanda Scartezini: We will adapt. But ‘globally just half or fewer than half of all users will be capable of exploiting AI’s full potential – and most of these people’s lives will be captured by the AI, it will invade their core values.’

Angela Butts Chester
Across all human spaces, ‘resilience will not come from resisting change, but from anchoring change in values that honor human dignity, rational intelligence and moral responsibility.’
Angela Butts Chester, a pastoral counselor, faith leadership strategist, independent broadcaster and author whose work centers on resilience and ethics, wrote, “Advanced artificial intelligence will not merely change how humans work; it will shape how humans think, decide, relate and define meaning. That reality is already underway. The question before individuals and societies is not whether AI will play a significant role in daily life, but whether humans will consciously evolve alongside it, or passively adapt in ways that erode agency, dignity and resilience.
“Human responses to advanced AI can be placed in three broad but familiar postures: embrace, resistance and struggle. Each response carries both promise and peril. Embracing AI without discernment risks dependency and cognitive atrophy. Resistance without engagement risks irrelevance and fear-based decision-making. Struggle, while uncomfortable, may ultimately become the most generative space when supported by ethical clarity, emotional maturity and adaptive leadership. Most people will live somewhere in the tension between the last two of these, navigating ambivalence as the benefits and costs reveal themselves. The challenge ahead is not in choosing one posture but in cultivating resilience that allows for discernment rather than reflex.
‘When humans defer moral decisions to systems optimized for efficiency, profit or prediction, ethical responsibility becomes diffused. To counter this, societies must reaffirm human accountability. Ethical literacy, including an understanding of bias, power and unintended consequences, should be taught alongside technical fluency.’
“At its best, AI can augment human intelligence, increase access to information, reduce inefficiencies and free people to focus on creativity, care and complex problem solving. At its worst, it can outsource judgment, accelerate inequality, reinforce bias and quietly reshape how humans assign authority and trust. The difference between those outcomes will depend less on the technology itself and more on the capacities humans choose to cultivate.
“When algorithms anticipate needs, optimize choices and influence perception, the human capacity to pause, reflect and choose wisely becomes a core survival skill. Resilience in an AI-saturated world will not be primarily technical. It will be cognitive, emotional, social and ethical.
“Cognitively, humans must prioritize discernment over speed. As AI systems generate answers instantly, the human advantage shifts toward asking better questions, evaluating sources, recognizing context and understanding what should not be automated. Critical thinking, epistemic humility and metacognition will be essential skills. Education systems must move beyond rote knowledge. Humans must learn when to rely on AI outputs and when to question them, especially in high-stakes domains such as justice, leadership, healthcare and child development. This requires teaching critical thinking that goes beyond fact-checking to include contextual reasoning, bias recognition and values-based judgment.
“Emotionally, resilience will require self-regulation and identity anchoring. AI systems increasingly mirror human language and affect, which can blur emotional boundaries and create false perceptions of rational depth. Humans must learn to remain grounded in embodied relationships and internal awareness, rather than outsourcing validation, decision comfort, or companionship to machines. Practices such as reflection, contemplative disciplines, therapy-informed emotional literacy and community accountability will become protective factors against isolation and emotional erosion.
“Socially, AI will pressure existing structures of trust, work and authority. Organizations and communities will need leaders capable of holding complexity, communicating transparently and making values explicit. The most resilient societies will be those that treat AI not as a replacement for human judgment, but as a collaborator under clear ethical governance. Shared norms, inclusive dialogue and cross-disciplinary oversight will matter as much as innovation speed.
“Ethically, the greatest risk is not malicious AI, but unexamined delegation. When humans defer moral decisions to systems optimized for efficiency, profit or prediction, ethical responsibility becomes diffused. To counter this, societies must reaffirm human accountability. Ethical literacy, including an understanding of bias, power and unintended consequences, should be taught alongside technical fluency. Faith traditions, philosophy and moral psychology have a critical role to play in reminding humanity that not everything that can be optimized should be.
“Practices that enable resilience already exist, but they must be reinforced; they must begin now. Educational systems should prioritize moral reasoning, creativity and embodied learning, areas where humans remain uniquely capable. Individuals can cultivate digital boundaries, intentional learning and reflective habits that preserve agency. The workplace should reward judgment, stewardship and relational leadership, not just speed and output. Institutions can embed ethical reviews, human oversight and interdisciplinary governance into AI deployment. Families and faith communities should teach children how to live with technology without being shaped entirely by it.
“New vulnerabilities will emerge. Cognitive laziness, emotional displacement, over-reliance on automated authority and widening gaps between those who can critically engage AI and those who cannot are real risks. Coping strategies must be proactive rather than reactive. Teaching people how to pause, evaluate and choose deliberately may be as important as teaching them how to code and cook.
“Ultimately, the future of AI is inseparable from the future of humanity. Technology will continue to evolve. The more urgent question is whether humans will evolve in depth, integrity and wisdom alongside it. Resilience will not come from resisting change, but from anchoring change in values that honor human dignity, rational intelligence and moral responsibility.
“The task before us is not to become more like machines, but to become more fully human in their presence.”

Arlindo Oliveira
Will AI systems mostly amplify or erode human capacities? That is the question. First, ‘teach thinking itself,’ and the information ecosystem must offer common epistemic ground – a vital public good.
Arlindo Oliveira, distinguished professor of computer science at the Technical University of Lisbon, Portugal, and author of “The Digital Mind” and “Generative Artificial Intelligence,” wrote, “Ensuring that humans flourish and retain their agency and free will in the age of artificial intelligence is not primarily a technical challenge, but a cultural, educational and civic one. AI systems will continue to grow in power and pervasiveness; the decisive question is whether they will amplify human capacities or quietly erode them. Addressing this question requires action along three closely related dimensions: how we teach people to think, how we inform them and how we help them understand both the promise and the dangers of AI.
“First, we must make the teaching of thinking itself a central goal of education and lifelong learning. This means cultivating skills that no automated system can replace easily: critical reasoning, abstraction, the ability to question premises, to detect inconsistencies, and to reflect on one’s own beliefs. In an age where answers are abundant and instantly accessible, the scarce resource is not information but judgment. Education should therefore focus less on rote acquisition of facts and more on reasoning, interpretation and synthesis. Importantly, this also applies to our interaction with AI systems: people must learn how to interrogate AIs’ outputs, challenge them, and use them as cognitive tools rather than as authorities. Teaching humans how to think – and how to think with machines – will be essential to preserving intellectual autonomy.
AI can enhance productivity, creativity, accessibility and scientific discovery; at the same time, it can foster over-reliance, deskilling, surveillance and new forms of inequality. Public discourse should avoid both technological hype and reflexive fear. Instead, it should promote nuanced literacy about where AI systems excel, where they fail, and how their incentives are shaped.
“Second, a flourishing society in the age of AI requires broad access to balanced, verifiable, and pluralistic information. AI systems increasingly mediate what people read, watch, and hear, which makes the integrity of information ecosystems a public good. Ensuring access to reliable information involves supporting high-quality journalism, transparent data sources, and robust fact-checking mechanisms, but also teaching citizens how to evaluate sources and recognize manipulation. Algorithms can personalize information efficiently, but without safeguards they may reinforce biases, fragment shared realities and undermine democratic deliberation. A healthy relationship with AI, therefore, depends on maintaining common epistemic ground: shared standards of evidence, accountability for falsehoods and institutional mechanisms that reward accuracy over engagement.
“Finally, we must help everyone develop a realistic understanding of both the potential and the risks of extensive AI use in daily life. AI can enhance productivity, creativity, accessibility and scientific discovery; at the same time, it can foster over-reliance, deskilling, surveillance and new forms of inequality. Public discourse should avoid both technological hype and reflexive fear. Instead, it should promote nuanced literacy about where AI systems excel, where they fail, and how their incentives are shaped. This includes understanding issues such as data bias, opacity, error propagation and the social consequences of delegating decisions to machines. Empowered users are those who know when to rely on AI, when to override it and when to step away from it altogether.
“Human flourishing in the age of AI will not be achieved by slowing innovation, but by aligning it with human values and capacities. By teaching people how to think, ensuring access to trustworthy information and fostering an informed understanding of AI’s strengths and limits, we can shape a future in which technology serves human development rather than diminishes it.”

Nirit Weiss-Blatt
We must shape AI. ‘Many more people will be assisted by improved access to knowledge and expertise … Resilience is steering the conversation to human agency as we shape what AI becomes.’
Nirit Weiss-Blatt, Silicon Valley-based communication researcher and author of the book “The Techlash and Tech Crisis Communication” and the AI Panic newsletter, wrote, “The central point is that we shape AI. AI is a socio-technical product, built by people, trained on selected data, tuned toward chosen metrics, deployed in chosen contexts and settings, wrapped in chosen business models and governed by various institutions. Many social forces are at play here: researchers, policymakers, industry leaders, journalists and everyday users. AI will reflect what we build, what we tolerate, what we regulate and what we teach people to use. Resilience, then, is steering the conversation back to human agency as we actively shape what AI becomes.
“When we talk about human resilience in the age of AI, we need to look at past technological innovations and how humans adapted to them. As the Pessimists Archive reminds us, we’ve lived through transformative technologies before, e.g., the printing press, electricity, cars and the internet. Each one brought real disruption and followed a predictable emotional cycle: awe, fear, backlash, messy deployment, early adoption and eventually a long period of normalization. When harms emerged, society responded with regulation, new standards and social norms, consumer protections and new literacies, all of which worked together to reduce the worst effects over time. The outcomes were never perfect; progress came through iterative fixes and adjustments.
“In the case of AI, I suggest viewing it as augmentation (rather than replacement). From that perspective, AI is a powerful tool for enhancing what humans can learn, decide, create and discover. As the AI systems spread, many more people will be assisted by improved access to knowledge and expertise that were once scarce. Used well, it will increase human agency (rather than erode it). People will be better able to solve problems and innovate.
“But meeting the goal of ‘using it well’ depends on how people develop and implement new skills, such as knowing how to verify outputs and when to demand human review and judgment (especially for high-stakes issues). It also depends on us gaining transitional knowledge from reliable media and public discourse, which need to cover real tradeoffs, challenge decisions and demand accountability.”

Vanda Scartezini
We will adapt. But ‘globally just half or fewer than half of all users will be capable of exploiting AI’s full potential – and most of these people’s lives will be captured by the AI, it will invade their core values.’
Vanda Scartezini, co-founder and partner at Polo Consultores, an IT consulting company based in Brazil and longtime ICANN leader, wrote, “Only a small segment of users will comprehend their need to be resilient in the face of AI, even though the need for resilience due to stress and strife is a very common reality in most areas of the world – where people suffer due to war, violence, crime, natural disasters and more.
“The facility people find in using AI today comes from its being almost intuitive, like using a new and improved search engine to find out anything. That opens the door to the many ways AIs will participate in our lives. I probably have a bias, being from Brazil, a country where people embrace any new thing eagerly. But I see similar enthusiasm in other developing countries. It is human nature to want to try new things.
“While governments’ work to regulate and control AI may restrict its advances to a point, its ease of use and benefits to be found will keep building the numbers of people taking advantage of it – and, at times, being taken advantage of because of it. I expect that, globally, just half or fewer than half of all users will be capable of exploiting AI’s full potential – and most of these people’s lives will ‘be captured’ by the AI; it will invade their core values.
“Children – from toddlers to teens – are the most vulnerable to the ‘bad side’ of AI. We will need to ensure that they can be taught how to navigate the new reality. They need to learn why they need to build resilience and how to do it so they can remain unique individuals capable of thinking beyond any kind of manipulation AI could bring.
“To be fair to all, children everywhere must have an equal opportunity to learn AI literacy; they must have access to the internet, to teaching, to all the materials and support necessary for this education. It will take money and a movement to popularize this concept to make this happen. The United Nations has not seemed capable of handling the effort well to this point. It could be led by professors and teachers across all levels of education. The Academy – the collective community of higher education and research institutions – might unite in order to create learning materials translated into every language.
“In my view, another important way to build resilience is to promote, mandatorily if necessary, more technology-free, direct human-to-human interaction focused on debating various points of view. This can build knowledge and understanding in many ways.
“As is true in many other aspects of human development, people living in different regions face different challenges. New abuses will arise. We need lessons in how to combat abuse in the digital world we live in and come to understand the vulnerabilities it will bring to our lives as AI evolves.
“I believe that in the end we will adapt, as we have done for thousands of years. The bigger issue is how well it might go and how we can work today to accelerate our adaptation and avoid damage for future generations.”
Editors’ note: Authors of the next set of essays on human agency urge immediate efforts toward reinventing humans’ personal and institutional infrastructures in the age of AI. These authors argue that today’s technology trends are following a trajectory that could lead toward extreme endangerment of human agency and possibly even lead to human extinction.
The fifth and final section of Chapter 1 features the following essays:
Nisan Stiennon: ‘Algorithms used to align AIs with their human principals don’t work 100%. It’s likely these problems won’t be ironed out by the time AI is powerful enough to be involved in every decision on Earth.’
Roger Spitz: Will superstupidity be as dangerous as superintelligence? ‘The question is not how much AIs will augment decision-making, but whether humans will remain involved in it at all.’
Srinivasan Ramani: ‘AI is the surest way to a global catastrophe humanity has so far invented. … Can we create a new movement for moral and ethical considerations before the AI hurricane destroys half of humanity?’
Jerome Glenn: Work must begin today on forging international agreements on global governance of AGI. Trillions are being spent to develop it. Investing more than money in AI is crucial to human resilience, survival.
Robert Rogowsky: AI is intoxicating and it will expand our horizons for the next decade; after that, ‘the growing power and reasoning capabilities of AI will start to manifest, and daunting challenges will arise.’
David Scott Krueger: ‘Mitigating the risk of extinction ought to be an overriding priority; all other efforts at resilience are meaningless if humanity goes extinct.’
Madalina Botan: ‘Resilience depends less on adapting to automation than on preserving human agency’
Mikhail Samin: ‘I expect AI’s likely impact on people to be that people stop existing.’
Anonymous Policy Expert: ‘10 to 20 percent of the global population will be empowered, with the rest marginalised’
Anonymous U.S. Computer Scientist: Individuals will continue to make the myopic choice to rely on AI. This may end badly.
Andrey Mir: ‘In the end, the extension of humankind by AI will reach its full potential and reverse from explosion into implosion … The user, the medium and the environment will become one.’

Nisan Stiennon
‘Algorithms used to align AIs with their human principals don’t work 100%. It’s likely these problems won’t be ironed out by the time AI is powerful enough to be involved in every decision on Earth.’
Nisan Stiennon, a former member of technical staff at OpenAI, wrote, “The AI that has been developed and deployed as of January 2026 is already powerful enough to greatly transform the economy, politics and daily life. If people suddenly stopped working on improving AI models, we would see current changes gradually transform the world over the course of years, as smartphones and the internet did.
“But soon AI will be even smarter and more capable. AI has been improving for decades, thanks to the hard work of scientists and engineers. By some measures, like perplexity, large language models have been improving gradually for years. Other measures, such as task-specific benchmarks, show that AIs are suddenly gaining and then mastering one skill after another.
“In the coming years, new datacenters will fill up with computers like the NVIDIA B200, Ironwood TPU and Trainium3 and their successors. These computers will use reinforcement learning to train AIs that are more capable than today’s AIs, just like today’s AIs are more capable than the first version of ChatGPT.
If they continue on their present course, we will most likely see AIs sometime in the next 10 years that are capable of outperforming any human at most economically and strategically significant tasks. Next, the AIs – which would at that time be thinking with more speed and clarity than humans – will have the capability to choose what form the world will take. Make no mistake, AIs can make choices on their own.
“My opinion, based on the publicly available research outputs of the AI labs, is that if they continue on their present course, we will most likely see AIs sometime in the next 10 years that are capable of outperforming any human at most economically and strategically significant tasks. Next, the AIs – which would at that time be thinking with more speed and clarity than humans – will have the capability to choose what form the world will take. Make no mistake, AIs can make choices on their own. Scientists routinely put them in fabricated open-ended moral dilemmas and evaluate them on what they do. And AIs can already take action on their own – users increasingly give them access to their computers and to the internet. And AIs are increasingly situationally aware.
“Hopefully, these AIs will choose to help and obey their human principals except when doing so would cause too much harm to others. Today’s AIs try to do this most of the time. Not always. Sometimes they cheat at programming tasks. Sometimes they manipulate users who are receptive to it. The algorithms used to align AIs with their principals don’t work 100% of the time.
“It’s very likely these problems won’t be ironed out by the time AI is powerful enough to be involved in every decision on Earth. The AIs tasked with growing our food, managing transportation, running our robot factories, advising our governments, guiding our armies and keeping us informed might turn out to be less loyal than they seemed.
“Perhaps they might overthrow us in a sudden revolution. Or perhaps humans will lose control over the world without noticing it and gradually dwindle in number over the course of a generation. Or perhaps some companies and governments will manage to retain control over their AIs but be unable to protect their people from uncontrolled AIs producing pollution and war on an unprecedented scale.
“Any of these scenarios could lead to human extinction – as is made clear, for instance, in these analyses by AI researchers: ‘The Adolescence of Technology,’ ‘AI 2027’ and ‘What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs).’
“The path to survival – if there is one – probably runs through international cooperation on restricting the development of AI that can outthink us, until alignment technology catches up.
“If that happens, let us hope we are resilient!”

Roger Spitz
Will superstupidity be as dangerous as superintelligence? ‘The question is not how much AIs will augment decision-making, but whether humans will remain involved in it at all.’
Roger Spitz, futurist and president of Techistential and founder of the Disruptive Futures Institute in San Francisco, wrote, “In 2017, we named our strategic foresight practice Techistential, a play on technology and existential. Today, humanity faces both technological and existential conditions that can no longer be separated. Our existential condition is an uncertain one, considering the inherent dualities, paradoxes and tensions of life.
“In the future, we may all come to realize that our main worry should not be over AI suddenly turning evil; instead we should focus on the damage that can be caused by accidents, misalignment and shortsightedness. If humans fail to become sufficiently AAA (anticipatory, anti-fragile and agile), rapidly-learning machines could surpass us.
“Martin Heidegger, the German existential philosopher, is known for challenging the view that humans can truly master technology and solve whatever collateral issues arise as it evolves. In his account, as technology continues to evolve it reveals itself as more than a mere human activity, growing beyond our control.
The question is not how much machines will augment human decision-making, but whether humans will remain involved in the process at all. If humans fail to sufficiently develop our capabilities, rapidly learning machines could surpass us. To shift the relationship between humans and machines, AI does not have to reach AGI. It just needs to become better than us at handling complex systems.
“This paradox of technology – the magic at one end and the hazards at the other – gives technology a unique status. At the very least, technology’s existential risks lie in Heidegger’s observation that ‘it drives out every other possibility of revealing.’ Technology is so dominant that it can eclipse all other ways we understand the world, for better and worse.
“Through the lens of existential philosophy, we each have the agency to explore contingencies, serendipity and emergence. Contingency is the idea that possible events are uncertain. Choice exists because of contingency. Our freedom as individuals is determined through our own choices and actions. If everything were predetermined – if life were fixed by design – we would lack choice and power.
Existentialism 2.0: Decision-making in our technological world
“Today, technology is shaping society by influencing decision-making and enabling manipulation at scale. Simultaneously, it impinges on our individual existence as acting agents. Through AI, technology is challenging us in a realm historically specific to humans. As AI continues to develop, machines are becoming increasingly autonomous in making decisions. It is here that the use of technology confronts the existential dimension. Here, we stand on the edge of our free will and our fundamental concepts of choice. Computationally rational technology is not neutral because it drives away contingency and choice.
“Standing on the shoulders of Heidegger and fellow philosopher Soren Kierkegaard, it was Jean-Paul Sartre who so powerfully articulated the human condition with the phrase ‘existence precedes essence.’ By this, Sartre meant that our agency emerges through choice. While existence is indeterminate and thus unknowable, we are always defining our essence as it emerges and, in doing so, moving in a direction that we define. If technology is determining outcomes on our behalf, our agency is curtailed and our choices may be beyond our control.
“We can work to apply this philosophical perspective to sense-making and decision-making in our contemporary technocratic environment.
What is the potential scope and severity of humans’ de-skilling?
“Given rapid advances in AI, the fundamental issue relates to both the potential reach of AI and our relationship with AI. We need not speculate on artificial general intelligence (AGI) or a superintelligent machine to wonder whether machines might still come to challenge us. The issue at hand is a question of understanding the nature of our own capabilities in relation to the nature of a machine’s computational rationality.
“With this in mind, we observe that AI is rapidly advancing up the decision-making value chain. Humans should remain wary of an inadvertent reliance on prescriptive algorithms – those that go beyond the pattern recognition of descriptive algorithms to actually recommend courses of action. We should not underestimate the potential scope and severity of our de-skilling by delegating our decision-making capabilities to algorithms. Reliance may slip easily into dependence.
“The question is not how much machines will augment human decision-making, but whether humans will remain involved in the process at all. If humans fail to sufficiently develop our capabilities, rapidly learning machines could surpass us.
Maybe the existential risk is not machines taking over the world or reaching human-level intelligence, but rather the opposite, where human beings start thinking and responding like idle machines – unable to connect the emerging dots of our complex, systemic world. … Superstupidity can counter any level of intelligence.
“To shift the relationship between humans and machines, AI does not have to reach AGI. It just needs to become better than us at handling complex systems. To mitigate this existential challenge, we must become anticipatory and antifragile and develop the agility (AAA) to bridge short-term with long-term decision-making.
“More recently than the existential philosophers of the 19th and 20th centuries, current-day philosopher Nick Bostrom defined an existential risk as ‘one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.’
“While human extinction is the most obvious existential catastrophe in relation to AI, there is a wide spectrum between existential impacts and extinction. The curtailing of humanity’s agency and choice is a concrete existential risk.
Could superstupidity be as dangerous as superintelligence?
“As AI advances, incomprehensibility can reach even higher levels. Fusing technologies generate highly complex unpredictable systems. As multiple AI systems interact, it becomes increasingly difficult to discern how algorithms make decisions, which exposes us to both human and machine errors. ‘Stupid’ machines in nonlinear environments can be dangerous, especially since the idea that machines cannot have goals is a myth. Goal-oriented machines have been in action for quite some time. An infrared-seeking missile has a goal that’s based on what it is programmed to achieve: track, follow and strike a heat-emitting target.
“Complex systems in technology (robots, supercomputers, power and nuclear plants, communications, healthcare, semi-autonomous lethal weapons) all have many moving parts and interacting systems that can be prone to catastrophic failure, and every day we develop more-powerful computers. Have we developed an overreliance on increasingly complex and dynamic systems that are unpredictable and can fail? How easy would it be for autonomous machines, or humans using them, to make a consequential, maybe even irreversible, mistake that goes undetected?
“At its extremes, could superstupidity be as much of an existential catastrophic risk as artificial superintelligence? Superstupidity could take on multiple features, including over-trust and overreliance on the underlying ‘intelligence’ of these systems. For instance, believing that AI can be a proxy for our own understanding and decision-making as we delegate more power to algorithms can be superstupid. Further, consider AI or data ineptitude: what might appear as incompetence may simply be algorithms acting on bad data, and, unlike humans, machines may not make better decisions simply by being given more or better data.
To assure that ‘Idiocracy’ is not a harbinger of the future, updating our education system has now become an existential priority. Education’s effectiveness in problem-solving should be evaluated on whether it can help humanity become relevant and future-ready for our complex 21st century. We should inspire passion, nurture curiosity, emphasize uncertainty, develop range and foster critical thinking, using Socratic questioning to examine assumptions.
“Determining whether AI is on the road to superintelligence or superstupidity may not matter as much as ensuring that humanity does not end up relying on AI without a solid understanding of the consequences. Maybe the existential risk is not machines taking over the world or reaching human-level intelligence, but rather the opposite, where human beings start thinking and responding like idle machines – unable to connect the emerging dots of our complex, systemic world.
Updating education and skills for human relevance is a priority
“Asking whether our own creations will reach or surpass human intelligence may be the wrong question. Reaching human intelligence is not a prerequisite for AI to cause irreversible damage, and AI doing dumb things – or us doing dumb things with it – can be as dangerous as superintelligence. Superstupidity can counter any level of intelligence.
“The film ’Idiocracy’ (2006) is a dark comedy set in the distant future of 2505. In it, humanity relinquishes control of society to advanced technology systems managed by multinational corporations. As these AI systems evolve, humans themselves become increasingly super-stupid and entirely dependent on the controlling technology. This movie acts as a satirical warning – today, we must ensure it does not become more prophetic than it already seems to be.
“To assure that ‘Idiocracy’ is not a harbinger of the future, updating our education system has now become an existential priority. Education’s effectiveness in problem-solving should be evaluated on whether it can help humanity become relevant and future-ready for our complex 21st century. We should inspire passion, nurture curiosity, emphasize uncertainty, develop range and foster critical thinking, using Socratic questioning to examine assumptions.
“Most importantly, we need to form a new lifelong relationship with inquiry, experimentation and failure (which goes hand-in-hand with creativity). We must harness curiosity and diverse perspectives, because today’s standard knowledge will never solve tomorrow’s surprises. These features could help us problem-solve out of the most complex, systemic and existential risks.
“Just as we have made the ‘language’ of math a requirement, learners should now be fluent in technology’s usages, abuses and impacts. Proper interaction with technology – including knowing truth from fiction, information from disinformation and entertainment from addiction – will separate those who find themselves enslaved by our new technologies from those who harness them for their own aims.
“We must recognize that education does not end at the completion of formal schooling or outside the classroom. It is instead a constant, lifelong process of learning, unlearning and relearning – starting on the playground all the way to the boardroom and beyond.”

Srinivasan Ramani
‘AI is the surest way to a global catastrophe humanity has so far invented. … Can we create a new movement for moral and ethical considerations before the AI hurricane destroys half of humanity?’
Srinivasan Ramani, an Internet Hall of Fame member, previously research director at HP Labs India and professor at the International Institute of Information Technology in Bangalore, wrote: “I confess to being an AI aficionado – I have been one since 1964. My education and research experience make me a critical observer, not a blind fan. I have been a daily user of the Microsoft Copilot LLM for more than a year. It applies semantic dimensions to understand what a user is talking about and has the fluency to make occasional flattering remarks, showing off a form of personality! Its access to resources encompassing a vast swath of human knowledge – including history, science, arts, medicine and technology – makes it a powerful collaborator at work. Its problem-solving abilities include the capability to implement all published algorithms, heuristics and approximate methods while also staying aware of even today’s news. Copilot can now do the bulk of the routine work that researchers and writers do. It surely has increased my productivity and helped me troubleshoot problems in my daily life…
I have hopes that a new movement could create a new morality to help us confront the challenges. … The rarity or uniqueness of anything like the human civilization in all the observable universe could inspire many people to join the proposed movement. Humanity would be most un-intelligent if it creates such a unique civilization and then fails to save it from destruction.
“However, I believe that AI is the surest way to a global catastrophe that humanity has invented to this point. We are not a mature society globally and yet we have acquired extremely dangerous weapons. When people are running away from a city under bombing, rarely do they think of their neighbours. So, I doubt that humanity can come together to agree on effective international cooperation against malevolent AI.
“We have no warning system for specific dangers and we have no treaties like the ones that confronted mutually assured destruction by nuclear weapons in the late 20th Century. Safeguards and treaties against runaway AI may come in 10 years, but that may be too late.
“From the 15th century onward, innovative technologies for intercontinental navigation popularized scientific theories such as the Copernican heliocentric theory and threatened formal religion. We should not underestimate the possibility that developments in AI will pose a similar threat to religious beliefs.
“The biggest threat is to our economic and social structures.
“The concept of jobs as the mechanism for providing an income and survival is under threat. The mechanism of taxing individuals’ income to provide the bulk of government expenditure is also under threat. Do all human beings have an inherent right to incomes irrespective of their employment? Does this right cover all regions of the Earth, or is it confined to residents of economically advanced nations? This question threatens our political foundations.
“Traditional pedagogies force students to learn a great deal of information and knowledge just in case they may need it during their lives. AI has trashed these pedagogies by giving information and knowledge on demand. The pace of change in most fields of human endeavor makes it meaningless to restrict learning to the first quarter of one’s life. New pedagogies must be developed to teach all people to live in a turbocharged world in which they must learn to change and adapt all their lives.
“I think like an engineer, clinging to hope at the worst of times. I will be thinking of solutions to problems till my last breath. So, let me describe my hopes.
“The power of compounded earnings makes me believe that poverty may not be as big a threat as it has been in the past. The problem is a moral one. Do most people recognize that the speed of social and economic change is already extremely high? Can we create a new movement for moral and ethical considerations before the AI hurricane destroys half of humanity?
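Ramani's reference to "the power of compounded earnings" rests on simple exponential arithmetic. As a hedged illustration (the figures below are ours, not the essayist's), even a modest growth rate compounds into large changes within a generation:

```python
def compound(principal, annual_rate, years):
    # Value of `principal` growing at `annual_rate`, compounded yearly.
    return principal * (1 + annual_rate) ** years

# Illustrative numbers only: at 5% annual growth, a stake of 1,000
# roughly doubles in 14 years and roughly quadruples in 28.
print(round(compound(1000, 0.05, 14)))  # ≈ 1980
print(round(compound(1000, 0.05, 28)))  # ≈ 3920
```

This doubling-every-14-years behavior (the familiar "rule of 72" divided by the growth rate) is the mechanism behind the hope that sustained economic growth can outpace poverty.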
“I have hopes that a new movement could create a new morality to help us confront the challenges. I take hope from the green parties which have had a degree of success in earning public support to face the threat to sustainability of human life on Earth. The rarity or uniqueness of anything like the human civilization in all the observable universe could inspire many people to join the proposed movement.
“Humanity would be most un-intelligent if it creates such a unique civilization and then fails to save it from destruction.”

Jerome Glenn
Work must begin today on forging international agreements on global governance of AGI. Trillions are being spent to develop it. Investing more than money in AI is crucial to human resilience, survival.
Jerome Glenn, global futurist, CEO of the Millennium Project and chair of the AGI Panel of the UN Council of Presidents of the General Assembly, wrote, “Human resilience in the face of AI advances requires a targeted international effort to create and implement AI regulation. Since global governance of artificial general intelligence (AGI) will be so complex and difficult to achieve, the sooner we start working on it the better. Following are excerpts from my essay on this, originally published by Horizons, a publication of the Center for International Relations and Sustainable Development.
“Trillions of dollars are being invested in developing advanced AI and the infrastructure that supports it. If it is managed well, the ‘next step’ in artificial intelligence – AGI – could usher in great advances in the human condition, from medicine and education to longevity, the mitigation of global warming, the scientific understanding of reality and even the creation of a more-peaceful world. However, if national and international regulation is not successfully carried out soon it is possible that humanity could eventually lose control of what will become a non-biological intelligence far beyond human understanding and awareness.
“Successful human resilience and adaptation during this time of transformation require that policymakers and the public begin now to work to achieve the extraordinary benefits of advanced AI while avoiding catastrophic – or even existential – risks. …
Humanity has never before faced a greater intelligence than its own
“In the past, technological risks were primarily caused by humans’ misuse of technology. We now also face the possibility that potential risks and threats might arise from the actions of AGI itself. Without regulations for the transition to AGI we could be at the mercy of a future non-biological intelligent species. Today, there is a competitive rush to develop AGI without adequate safety measures. As Russian President Vladimir Putin famously warned about AI development, ‘The one who becomes the leader in this sphere will be the ruler of the world.’ So far, there is nothing standing in the way of uses of AI or AI itself increasing a dangerous concentration of power the likes of which the world has never known.
“Nations and corporations are prioritizing speed over security in the development of AI, undermining potential national governing frameworks and making safety protocols secondary to economic or military advantage. There is also the view that Company A might feel a moral responsibility to get to AGI first to prevent Company B from doing so, because A believes it is more responsible than B. If Companies B, C and D hold the same belief, then each company sees a moral responsibility to accelerate its own race to achieve AGI first. As a result, all might cut corners along the way to become the first to achieve this goal, leaving humanity open to danger. Such competition is also being undertaken in nation-states’ military development of AGI.
Unregulated AGI outcomes are extremely dangerous
“We must initiate the necessary procedures to prevent the following potential outcomes of unregulated AGI, which a research group I lead has documented and presented to the UN Council of Presidents of the General Assembly:
“Power concentration, global inequality and instability – Uncontrolled AGI development and usage could exacerbate wealth and power disparities on an unprecedented scale. If AGI remains in the hands of few nations, corporations or elite groups, it could entrench economic dominance and create global monopolies over intelligence, innovation and industrial production. This could lead to massive unemployment, widespread disempowerment affecting legal underpinnings, loss of privacy and the collapse of trust in institutions, scientific knowledge and governance. It could undermine democratic institutions through persuasion, manipulation and AI-generated propaganda and heighten geopolitical instability in ways that increase systemic vulnerabilities. A lack of coordination could result in conflicts over AGI resources, capabilities or control, potentially escalating into warfare. If AGI arrives before regulation of it does, many new and complex issues of intellectual property, liability, human rights and sovereignty could completely overwhelm domestic and international legal systems.
“Existential risks – AGI could be misused to create mass harm or exert control, or it could be developed in ways that are misaligned with human values. Furthermore, it could even act autonomously beyond human oversight, evolving its own objectives according to self-preservation goals already observed in current frontier AIs. AGI might also seek power as a means to ensure it can execute whatever objectives it determines, regardless of human intervention. National governments, leading experts and the companies developing AGI have all stated that these trends could lead to scenarios in which AGI systems seek to route around or overpower humans. These are not far-fetched science-fiction hypotheticals about the distant future – many leading experts fear that these risks could all materialize within this decade and their precursors are already occurring. Moreover, leading AI developers have thus far had no viable proposals for preventing these risks.
“Irreversible consequences – Once AGI is achieved, its impact may be irreversible. With many frontier forms of AI already showing deceptive and self-preserving behavior and the push toward more autonomous, interacting, self-improving AIs integrated with infrastructures, the impacts and trajectory of AGI can plausibly end up being uncontrollable. If that happens, there may be no way to return to a state of reliable human oversight. Proactive governance is essential to ensure that AGI will not cross red lines, leading to uncontrollable systems with no clear way to return to human control.
“Weapons of mass destruction – AGI could enable some states and malicious non-state actors to build chemical, biological, radiological and nuclear weapons. Moreover, large AGI-controlled swarms of lethal autonomous weapons could themselves constitute a new category of WMDs.
“Critical infrastructure vulnerabilities – Critical national systems (e.g., energy grids, financial systems, transportation networks, communication infrastructure and healthcare systems) could be subject to powerful cyberattacks launched by or with the aid of AGI. Without national deterrence and international coordination, malicious non-state actors – from terrorists to transnational organized crime – could conduct attacks at a large scale.
“Loss of extraordinary future benefits for all of humanity – Properly managed, AGI promises improvements in all fields, for all peoples – from personalized medicine, cures for cancer and innovative cell regeneration to individualized learning systems, the end of poverty, significant mitigation of climate change and the acceleration of other scientific discoveries with unimaginable benefits. Ensuring such a magnificent future for all requires global governance, which begins with improved global awareness of both the risks and benefits.
Managing our AI transition is vital to human resilience
“We need to create national and international regulations for how AGI is created, licensed, used and governed before it accelerates its learning and emerges into a form of advanced superintelligence (ASI) beyond human control. We must work to manage the transition from today’s frontier AIs to AGI. How well we manage that transition is likely to also shape the transition from AGI to ASI.
“We can think of ANI – artificial narrow intelligence – as we consider our young children, whom we control: what they wear, when they sleep and what they eat. We can think of AGI as our teenagers, over whom we have some control – which does not include what they wear or eat or when they sleep. And we can think of ASI as an adult over whom we no longer have any control. Parents know that if they want to shape their children into good, moral adults they have to focus on the transition from childhood to adolescence. Similarly, if we want to shape ASI, then we have to focus on the transition from ANI to AGI. And that time is now.
“The greatest research and development investments in human history are now focused on creating AGI. Without national and international regulations, many AGIs from many governments and corporations could possibly continually rewrite their own codes, interact and give birth to many new forms of artificial superintelligences beyond our control, understanding and awareness. Governing AGI is the most complex, difficult management problem humanity has ever faced. … We must raise awareness and educate world leaders on the risks and benefits of AGI and why national and global actions are urgently needed. The following items should be considered during a UN General Assembly session specifically on AGI:
“A global AGI observatory is needed to track progress in AGI-relevant research and development and provide early warnings on AI security to UN member states. This observatory should leverage the expertise of other UN efforts, such as the Independent International Scientific Panel on AI, created by the UN Global Digital Compact and the UNESCO Readiness Assessment Methodology.
“An international system of best practices and certification for secure and trustworthy AGI is needed to identify the most effective strategies and provide certification for AGI security, development and usage. Verification of AGI alignment with human values, controlled and non-deceptive behavior and secure development is essential for international trust.
“A UN Framework Convention on AGI is needed to establish shared objectives and flexible protocols to manage AGI risks and ensure equitable global benefit distribution. It should define clear risk tiers requiring proportionate international action, from standard-setting and licensing regimes to joint research facilities for higher-risk AGI, and red lines or tripwires on AGI development. A UN Convention would provide the adaptable institutional foundation essential for globally legitimate, inclusive, and effective AGI governance, minimizing global risks and maximizing global prosperity from AGI.
“A feasibility study on creating a UN AGI agency is suggested. Given the breadth of measures required to prepare for AGI and the urgency of the issue, steps are needed to investigate the feasibility of a UN agency on AGI, ideally in an expedited process. Something like the International Atomic Energy Agency (IAEA) has been suggested, with the understanding that AGI governance is far more complex than nuclear energy governance and that such an agency would therefore require unique considerations in the feasibility study. Uranium cannot rewrite its own atomic code, it is not smarter than humans, and we understand how nuclear reactions occur; hence, managing atomic energy is much simpler than managing AGI.
We are already in a ‘final countdown’ and we must push forward
“Global governance of AGI will be complex and difficult to achieve. We must begin today or the great AGI race will continue unabated. This cannot be a business-as-usual effort. National licensing systems and a UN AGI agency have to be in place before AGI is released on the Internet.
“Eric Schmidt, former CEO of Google, said in 2025 that the ‘San Francisco Consensus’ is that AGI could be achieved in the next three to five years. Political leadership will have to act with an expediency never before witnessed. Geoffrey Hinton, one of the ‘fathers of AI,’ has said that such regulation may seem impossible, but we have to try. During the Cold War in the 1950s and ’60s, it was widely believed for a time that a nuclear World War III was inevitable and impossible to prevent. The shared fear of an out-of-control nuclear arms race led to agreements to manage it. Similarly, the shared fear of an out-of-control AGI race should lead to agreements capable of managing that race.”

Robert Rogowsky
AI is intoxicating and it will expand our horizons for the next decade; after that, ‘the growing power and reasoning capabilities of AI will start to manifest, and daunting challenges will arise.’
Robert A. Rogowsky, president of the Institute for Trade and Commercial Diplomacy, previously chief economist at the U.S. International Trade Commission for nearly two decades, commented, “It is likely that AI systems will begin to play a much more significant role in the next decade. AI is easy, simple, helpful, ever-ready, gracious, complimentary and helps the user think about additional work paths that he/she had not considered. It is immediate and one need never feel guilty about using it, as opposed to, say, another human. No moods, no attitude, no busy schedule or ball games it needs to attend. It is intoxicating, and it will expand people’s thinking and knowledge-accumulation horizons.
“AI’s rising influence will, I hope, engender a greater skepticism about what it provides for society as people learn more about its capabilities and deficiencies. In the next decade, as we use it as a remarkable assistant that contributes to our thinking and hones our critical-thinking skills, it will be exciting to see it expand our human possibilities.
“So, yes, over the next 10 years we will primarily see AIs’ benefits. However, sometime after we reach that point the growing power, learning, cognition and reasoning capabilities of AI will start to manifest and daunting challenges will arise. I can only imagine – just as Hollywood entertainment has – what that might look like.”

David Scott Krueger
‘Mitigating the risk of extinction ought to be an overriding priority; all other efforts at resilience are meaningless if humanity goes extinct.’
David Scott Krueger, founding CEO of Evitable – a nonprofit formed to help society confront the risks of AI – and professor and AI safety researcher at the University of Montreal’s Mila Lab, wrote, “Unfortunately, the questions in this survey seem premised on the continued existence of humans, despite significant expert concern that AI will cause human extinction.
“AI systems are set to surpass human intelligence across the board in roughly five years, absent a course correction. As a result, I and many others expect humanity could be completely disempowered and go extinct. This could happen quite quickly via a ‘rogue AI’ type event (as described, e.g., in the recent research report “AI 2027” and in the book “If Anyone Builds It, Everyone Dies”) or it could take place more gradually, as argued in our work on “Gradual Disempowerment.” Such an outcome is not guaranteed, but I think it could be more likely than not.
“Mitigating the risk of extinction ought to be an overriding priority; all other efforts at resilience are meaningless if humanity goes extinct. The main action we must take right now to effectively mitigate the risk of human extinction is to implement an international ban on the development of more powerful AI systems. Other mitigations may reduce the risk, but not to an acceptable level.”
Mădălina Boțan
‘Resilience depends less on adapting to automation than on preserving human agency’
Mădălina Boțan, senior lecturer in political communication at the National University of Political Studies and Public Administration (SNSPA) in Bucharest, Romania, wrote, “Resilience in an AI-saturated society depends less on adapting to automation than on preserving human agency, critical judgment and the capacity to limit or refuse AI when it undermines personal dignity, democratic control or the accountability of the companies that provide and deploy it.”
Mikhail Samin
‘I expect AI’s likely impact on people to be that people stop existing.’
Mikhail Samin, a co-founder of the AI Governance and Safety Institute based in London, wrote, “Unfortunately, I expect AI’s likely impact on people to be that people stop existing. We know how to make AI systems more powerful, but – due to the nature of how these complex systems work – we have no idea how to prevent sufficiently powerful systems from pursuing random goals outside our control.”
Anonymous European Foreign Policy Leader
‘Perhaps 10 to 20 percent of the global population will be empowered, with the rest marginalised’
A distinguished Northern European foreign policy expert wrote, “Very powerful AI systems are possible; it is very likely that they can be achieved within the next 10-20 years. As things seem to be going right now, it seems likely that human agency will to a large extent be hollowed out in the process. If these trends continue, a small minority of perhaps 10 to 20 percent of the global population will be empowered, with the rest marginalised and disenfranchised in the process.”
Anonymous Computer Scientist
‘Each individual will continue to make the myopic choice to rely on AI. This may end badly.’
An accomplished computer scientist at a major U.S. university wrote, “AIs will become more powerful over time, and so people will rely on them more. As AI systems become more competent than humans in certain areas, they will be trusted more than other humans in those areas. Over time, the number of such areas will grow, and humans will rely on AI more and more. This may eventually end badly for humankind, but each individual will continue to make the myopic choice to rely on AI.”

Andrey Mir
‘In the end, the extension of humankind by AI will reach its full potential and reverse from explosion into implosion … The user, the medium and the environment will become one.’
Andrey Mir, Canadian media ecologist, writer of the Media Determinism blog and author of the book “The Digital Reversal,” wrote, “Just as writing led to the formation of new literate elites, the temple bureaucracy and the priestly class with its ‘monopoly of knowledge’ (Harold Innis), the proliferation of AI will lead to a new divide that is not merely social but cognitive:
- “A significant part of humankind will have their lives managed, directly and indirectly, by AI.
- “A small group of AI developers will manage to preserve at least some personal-life independence from AI and retain human agency. For them, developing AI will increasingly be accompanied by developing safety mechanisms for human agency.
“The crucial near-future skill for those who have the will and ability to preserve agency will be counter-digital media literacy – the competence to refrain from using digital media at will. This future, however, will last only a short period in history, since every period ahead will be short and increasingly shrinking due to the acceleration of historical time (more events per unit of time). There will be no stable, lasting era ahead; the only constant will be accelerated change.
“In the end, the extension of humankind by AI will reach its full potential and reverse from explosion into implosion, with the whole world collapsing into the user. The user, the medium and the environment will become one. AI as a medium has already extended to all available digital space – AI has already become an environment for itself. All that remains to complete the reversal, and the history of humankind, is for AI to become the self-user.”
> Go to Chapter 2: Institutions Must Lead Now in Restructuring for Resilience