Joel Christoph
Joel Christoph is an economist and political scientist researching AI governance, global coordination and political economy. He is a Technology and Human Rights Fellow at the Harvard Kennedy School and founder of 10billion.org. This essay is his written response, submitted in January 2026, to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“AI systems will play a much more significant role in shaping our decisions, work and daily lives, not because ‘AI takes over,’ but because institutions will embed AI into the plumbing of society. Search and discovery, hiring and credit, education and health triage, compliance and procurement, content visibility and enforcement will increasingly run through AI-mediated pipelines. Most people will not experience this as a single rupture. They will experience it as many small defaults that quietly reallocate agency. That creates a paradox for resilience. AI can increase individual capability by helping people learn faster, plan better, communicate across languages, access expert knowledge and coordinate with others. At the same time, it can make societies more brittle by concentrating power in opaque systems, accelerating manipulation, eroding shared reality and encouraging cognitive dependence. The central question is not whether humans adapt, but what kind of adaptation becomes normal: adaptation that expands human agency and dignity, or adaptation that trains people to cope inside systems they no longer understand.

“Most people will embrace AI in domains where it reduces friction, such as drafting and research, navigating bureaucracy, health and life administration, translation, tutoring and creative support. This will feel like an extended mind, a practical cognitive prosthesis. For many, it will be the first time high-quality guidance is always available. In places with weak institutions or scarce professional support, AI may become the default layer for education, legal triage and mental health coaching.


“Resistance will take several forms. Some will be cultural and professional, with communities defending human judgment, craftsmanship and authenticity in teaching, journalism, art, medicine and public service. Other resistance will be political, driven by backlash against surveillance, discrimination, automated denial of services and the sense that no one is accountable. The struggle will be sharpest where AI functions as a gatekeeper for benefits eligibility, policing risk scoring, insurance, credit, hiring and content moderation, because errors and bias in these contexts are not merely inconvenient. They can reshape life chances.

“Resilience in an AI-saturated world is not mainly individual grit. It is epistemic resilience that preserves shared reality; true agency resilience that protects the ability to choose and contest; and institutional resilience that ensures systems fail safely and correct quickly.

“At the individual level, the most important cognitive capacity is independent judgment under uncertainty. People will need to ask good questions, notice contradictions, check sources and understand the incentives behind recommendations.

“Emotional resilience will include identity security that is not tied solely to marketable cognitive output and habits that resist persuasive or addictive interfaces.

“Social resilience will depend on sustaining human trust networks, relationships and communities that are not fully mediated by ranking algorithms and synthetic personas.

“Ethically, we must preserve responsibility for delegation. As AI systems recommend actions, individuals and institutions must remain accountable for outcomes and ‘the model suggested it’ cannot become a moral alibi.

“The most practical resilience resource is contestability, the ability to appeal high-stakes AI-mediated decisions and obtain meaningful explanations and correction. A society without contestability will teach people resignation rather than resilience.


“Resilience also requires authenticity infrastructure, including tools and standards that help people distinguish verified information, real identities and traceable media from synthetic or manipulated content. Without this, public life becomes vulnerable to scaled deception and people retreat into tribal epistemologies.

“Resilience further depends on redundancy, because critical services should not rely on a single model, vendor or automated pipeline. AI should be treated like other critical infrastructure, with audits, monitoring and design that degrades gracefully under failure.

“Education also matters, but AI literacy should be civic rather than technical. People should understand where AI is used in their lives, how optimization can conflict with human goals and what rights and recourse they have.

“What we do now will shape whether adaptation is empowering or corrosive. Accountability must be built into deployments through clear liability for harms, documented model use and auditable decision trails in high-stakes settings. Due process must be protected through appeals, meaningful human review and transparent criteria when AI influences access to jobs, credit, housing, healthcare or justice. Incentives must shift away from extraction, because tools optimized for engagement, persuasion or data harvesting will undermine autonomy and social trust.


“Public-interest information institutions and authenticity standards should be strengthened so that shared reality is not at the mercy of commercial platform dynamics. Education systems should preserve minimum viable independence by explicitly teaching critical reading, numeracy, argumentation and long-form reasoning – skills that reduce total cognitive offloading and keep people capable of independent judgment.

“New vulnerabilities will emerge even in generally positive trajectories. A major risk is loss of agency through defaults, with people nudged, ranked and filtered into choices without noticing. Another risk is epistemic fragmentation, as AI-tailored persuasion and synthetic content dissolve common ground. A third risk is automation complacency, where fluent and confident systems are over-trusted. Coping strategies should include deliberate practice of core skills without assistance, routine verification habits, community-based sensemaking and normalized use of appeals mechanisms when systems fail. At the societal level, coping means treating AI not as a gadget, but as governance.

“Most people will adapt enough to function. Whether they adapt in a way that preserves freedom, fairness and shared reality depends on choices made now about accountability, contestability, authenticity, incentives and education. Resilience in the AI age is not only the capacity to endure change. It is the capacity to shape it.”


This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”