
“AI will almost certainly play a far more significant role in shaping our decisions, work and daily lives, not because it is ‘intelligent’ in a human sense, but because it is becoming ambient infrastructure. Once AI is embedded into workflows, interfaces and institutions, it stops feeling like a tool and starts behaving like an environment. The key shift is not that machines will think, but that organisations will increasingly act as if machine outputs are reliable inputs for decisions because they are fast, cheap and scalable. This will bring real benefits such as productivity gains, accessibility and new capabilities in education, healthcare, public administration and creative work.
“But it will also reshape trust, authority and agency. AI does not simply automate tasks; it changes how people form beliefs, how institutions allocate resources and how societies coordinate. The most likely risk is gradual overreliance, where plausible outputs are treated as truth and accountability becomes blurred.
“Individuals and organisations will adopt AI first where it reduces friction: drafting, searching, summarising, customer service, analytics, compliance triage, software development and decision support. Governments will adopt it where it appears to expand capacity.
“Resistance will also be rational. Some will resist due to job displacement and the feeling of being managed by opaque systems. Others will resist because AI-mediated media erodes shared reality as deepfakes, synthetic text and automated persuasion make truth feel negotiable. Many will struggle less from ideology than from fatigue and cognitive overload in a world of accelerating change and contradictory signals.
“The deepest challenge is institutional. Many institutions were built for a slower tempo: policies that take years, education systems that update slowly, legal processes that assume stable facts and governance structures that treat technology as an IT issue rather than a strategic and ethical one. AI accelerates feedback loops and amplifies second-order effects. It does not fit neatly inside yesterday’s playbook.
“Resilience, therefore, must become a core capability.
“Cognitively, we need practical AI literacy: understanding where AI is strong, where it fails, what hallucination looks like and why fluent language is not grounded truth. The norm must shift from accepting outputs to treating them as hypotheses to verify.
“Emotionally, we need better self-regulation in an attention economy increasingly optimised by AI, otherwise manipulation, polarisation and helplessness become easier to scale.
“Socially, we need systems of trust, not just individual critical thinking: provenance, transparency, contestability and clear human recourse when AI influences outcomes.
“Ethically, we must move from principles to operational choices:
- What may be automated?
- What must remain meaningfully human?
- Who carries risk?
- And how do we prevent the quiet normalisation of surveillance and widening inequality?
“Actions to take now are straightforward and urgent:
- Treat AI as governance, not just adoption.
- Require clear accountability for AI-influenced decisions; basic quality assurance and verification practices; and risk management that covers dependency, concentration, reputation and workforce impacts.
- Invest in public and organisational infrastructure for trust, including authentication and provenance norms, and in education that strengthens sensemaking and media literacy.
“If AI is new infrastructure, resilience must become a shared literacy built deliberately before convenience hardens into dependency.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”