The Essays – Chapter 2
Institutions Must Lead Now in Building Up Human Resilience for the AI Age

Hundreds of experts answered the following essay question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?”
This is the second of 11 chapters of experts’ essays responding to the question above. The essayists were asked to explain how the essence and elements of human resilience might evolve as we evolve with AI systems. The responses in Chapter 2 generally focused on urging the leaders of institutions to take charge and rapidly work to reinvent them to meet the challenges of accelerating change.

This chapter in brief: These experts urged the leaders of institutions to shape AI to capture long-term human and social value before these systems become irreversibly embedded in our social and public infrastructure. They said that the institutions that shape the infrastructure of society must take responsibility for the retention of human agency and the nurturing of the human resilience required in the age of AI. They insisted that leaders of all institutions large and small – in government, business, education, philanthropy and organizations society-wide – must begin now to design, build and fund a more-robust societal scaffolding before it is too late to overcome potentially catastrophic change. The experts whose essays are grouped here emphasized that managing transformative change during the AI transition depends upon aggressive business and civic reinvention, enforceable legal frameworks and meaningful avenues for people to appeal the decisions and judgments influencing their lives. The chapter captures a collective call for institutional interventions of various types, ranging from independent testing, regulation and antitrust measures to the establishment of ‘red lines,’ ‘authenticity infrastructures,’ advanced AI literacy and workplace rules in support of human flourishing.
Featured Contributors to Chapter 2: The 41 essay responses on this page were written by Antoine Vergne, Stefaan Verhulst, Nicholas Diakopoulos, Fernando Barrio, Maria S. Randazzo, David J. Krieger, Bugge Holm Hansen, J. Amado Espinosa, Joel Christoph, Mike Linksvayer, Juan Ortiz Freuler, Alison Poltock, Maha Jouini, Sonia Livingstone, Karen Barrett, Samuel Hammond, Rita McGrath, Michael Noetel, Salman Khatani, Marc Rotenberg, Michele Visciola, Gary Bolles, Marine Collins Ragnet, Anina Schwartzenbach, Marina Gorbis, Kevin Leicht, Amandeep Jutla, Joseph Miller, Ross Dawson, Guy Standing, Daniel Castro, Marcel Fafchamps, Marie Charbonneau, Steven Rosenbaum, Matt Belge, William Halal, Sean McGregor, Karen Caplovitz Barrett, Anonymous Academic, Oliver Alais, Anonymous Researcher at Consulting Firm. (Their essays are all included on this single scrolling web page. They are organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant.)
The first section of Chapter 2 features the following essays:
Antoine Vergne: The future is not determined by AI’s capabilities – it is determined by the structures we build around it. We now have tools capable of generating abundance – IF we design systems so they distribute it.
Stefaan Verhulst: ‘Humans-first’ technological design and governance are urgently needed resilience scaffolding. These systems significantly impact humans’ agency, cohesion, understanding and ability to act collectively.
Nicholas Diakopoulos: ‘Organizations cannot be resilient if they don’t focus their policies and practices on supporting three basic human psychological needs – competence, autonomy and relatedness – in authentic ways.’
Fernando Barrio: At a time when AI is fast-becoming infrastructure, resilience relies most upon strong legal and civic institutions rather than on people’s individual strengths. Those without such institutions will suffer.
Maria S. Randazzo: The future of human dignity and agency depends upon institutional design: In the age of AI, ‘human resilience shifts from simply enduring to sustaining autonomy under technological mediation.’

Antoine Vergne
The future is not determined by AI’s capabilities – it is determined by the structures we build around it. We now have tools capable of generating abundance – IF we design systems so they distribute it.
Antoine Vergne, co-director of Missions Publiques, a global effort to include public voice in decision-making processes at all levels of human systems, based in Bonn, Germany, wrote, “AI systems will unquestionably play a far more significant role in shaping decisions, work and daily lives. The question is not whether this transformation occurs, but how it is governed – and who benefits.
“Over the next decade people will not respond uniformly. Some will embrace AI as a tool for augmentation, using it to extend human cognitive capacity while retaining agency over goals and values. Others will resist, perceiving AI as a threat to employment, autonomy and meaning. And most will oscillate – welcoming convenience while fearing displacement.
The deeper struggle is structural. As AI automates cognitive and coordination tasks, the economic surplus shifts from labor to capital ownership. If AI remains concentrated in the hands of a few corporations or states, we risk ‘the intelligence curse’ … The question becomes: Can we design systems in which AI-generated value flows back to citizens, not just to owners and shareholders?
“The deeper struggle is structural. As AI automates cognitive and coordination tasks, the economic surplus shifts from labor to capital ownership. If AI remains concentrated in the hands of a few corporations or states, we risk what Luke Drago and Rudolf Laine have described as the ‘intelligence curse’: a setting in which the elites no longer require the consent or the productivity of the majority. This is not a technical problem – it is a governance problem.
“The question becomes: Can we design systems in which AI-generated value flows back to citizens, not just to owners and shareholders?
Capacities We Must Cultivate
“Cognitive: Citizens must develop AI literacy – not to become engineers, but to understand what AI can and cannot do, where it errs and how it can be questioned. Critical reasoning about algorithmic outputs becomes as essential as reading.
“Emotional: Resilience requires confronting uncertainty. AI disrupts identity tied to work. We need emotional frameworks that decouple self-worth from employment and cultivate meaning through contribution, creativity, presence and care.
“Social: The capacity for collective deliberation becomes paramount. If AI concentrates power, only organized, informed publics can counterbalance it. Sortition-based assemblies, participatory governance and structured dialogue are not luxuries – they are infrastructure for human survival.
“Ethical: We must cultivate the habit of asking: Who benefits? Who is harmed? Who decides? These questions must be embedded in institutions, not left to individual conscience.
Practices and Resources for Resilience
- Deliberative institutions that give citizens binding decision rights over AI deployment, not just advisory input
- Distributed ownership models where AI-generated surplus flows to commons, cooperatives, or universal basic income – not exclusively to shareholders
- Transparency infrastructure requiring open audits of algorithmic systems affecting public life
- Education systems that prioritize adaptability, collaboration, and ethical reasoning over narrow technical skills
Actions Required Now
- “Experiment with alternative governance architectures before path dependencies lock in. Once AI systems are embedded in infrastructure, retrofitting democratic oversight becomes exponentially harder.
- “Build prototypes that integrate production, governance and value distribution by design – proving that coordination-based models can work under real economic conditions.
- “Create new institutions for citizen oversight of AI, drawing on proven deliberative methods (citizens’ assemblies, participatory budgeting) and adapting them to the speed and complexity of AI decision-making.
- “Resist the narrative that AI governance is purely technical. Alignment is not just a machine learning problem – it is a political problem requiring democratic input on values, priorities, and trade-offs.
New Vulnerabilities and Coping Strategies
“AI-powered manipulation of information ecosystems – deepfakes, synthetic media, personalized persuasion – threatens the epistemic foundations of democracy. Coping: Invest in verification infrastructure, media literacy, and institutional trust anchors.
“Rapid displacement without transition pathways creates social instability. Coping: Proactive distribution mechanisms (UBI, profit-sharing, retraining) embedded in production systems, not added as afterthoughts.
“Governance capture – those who control AI shaping the rules that govern AI. Coping: Sortition and deliberative processes that resist elite capture; decision rights held by randomly selected citizens rather than self-selected stakeholders.
“Loss of agency and meaning as AI handles more cognitive tasks. Coping: Reframe AI as a tool that handles drudgery, freeing humans for creativity, care and governance. Cultivate identities rooted in contribution, not just productivity.
“The future is not determined by AI’s capabilities – it is determined by the structures we build around it. The risk is not that AI becomes too powerful, but that we fail to organize ourselves to govern it. The opportunity is that – for the first time – we now have tools capable of generating abundance – IF we design systems so they distribute it.”

Stefaan Verhulst
‘Humans-first’ technological design and governance are urgently needed resilience scaffolding. These systems significantly impact humans’ agency, cohesion, understanding and ability to act collectively.
Stefaan Verhulst, data policy advocate, co-founder and director of the data program at New York University’s GovLab, wrote, “Over the past year, conversations about digital well-being evolved in a variety of ways, in part as a response to the increase in AI use. Many were initially framed around individual habits and behaviors, such as ‘screen time,’ online distraction and personal responsibility. Digital well-being and resiliency were incorrectly treated as a matter of lifestyle management.
“The dominant response, then, was to encourage people to regulate their use of technologies that are specifically designed to be difficult to resist. This framing placed the burden on individuals rather than addressing the broader architectures shaping digital experiences.
The health of digital life is shaped less by how long we spend on devices and more by who designs the platforms, under what incentives and with what data governance structures. In this sense, digital well-being becomes inseparable from questions of power, accountability and rights.
“More-recent debates reveal a significant reframing. Digital well-being (a term I prefer to use instead of ‘resilience’) is increasingly understood less as a function of personal discipline and more as a question of systemic tech design and governance. The health of digital life is shaped less by how long we spend on devices and more by who designs the platforms, under what incentives and with what data governance structures. In this sense, digital well-being becomes inseparable from questions of power, accountability and rights.
“Another shift concerns the move from understanding well-being solely as individual mental health to acknowledging its collective dimensions: from polarization and misinformation to civic trust and democratic resilience. Well-being must be assessed not only for users, but also for societies. The question is no longer how AI and digital environments affect our attention alone, but how they affect our cohesion, understanding and ability to act collectively.
“A further change is the growing recognition that digital well-being cannot rely solely on protective measures, including restrictions, bans or safety features, but must actively empower individuals and communities. Concepts such as data self-determination, social license and participatory governance offer an alternative – one that enables agency rather than merely mitigating harm.
“Finally, emerging debates on AI and resilience need to acknowledge plural conceptions of well-being, rooted in diverse cultural contexts rather than one global digital norm. If AI and digital environments increasingly shape how societies function, then digital well-being must be understood as a public interest goal, requiring governance, collective investment and a more expansive vision of what a healthy digital future looks like.”

Nicholas Diakopoulos
‘Organizations cannot be resilient if they don’t focus their policies and practices on supporting three basic human psychological needs – competence, autonomy and relatedness – in authentic ways.’
Nicholas Diakopoulos, director of the Computational Journalism Lab at Northwestern University and author of the AI Accountability Review, wrote, “The spread of AI into core tasks of decision-making has the capacity to fundamentally undermine societal resilience. A helpful lens for examining this problem is self-determination theory, which identifies competence, autonomy and relatedness as basic psychological needs that must be met to ensure human well-being.
“What makes the encroachment of AI so troubling is that it can undermine these needs while ostensibly meeting them. AI offers a simulation of social connection (i.e., as a companion, therapist or adviser), a superficial sense of agency (i.e., someone can cause something to happen by delegating to an agent that will follow their commands, but is then bound up in the alignment of the AI, which may be at odds with true individual autonomy) and a superficial sense of competence (i.e., someone thinks they can write but has been deskilled and is over-reliant on the tool). AI’s capacity for simulating the meeting of these psychological needs could undermine society’s capacity for resilience.
Helping society be resilient requires devising ways to help individuals be resilient within organizations. Organizations, themselves, cannot be resilient if they don’t focus their policies and practices on supporting the three basic human psychological needs – competence, autonomy and relatedness – in authentic ways.
“The issue is further highlighted by contrasting individual perception and societal impact. We are drawn to the immediate, tangible benefits of AI on a personal level. We may feel competent using it, able to pursue our goals along with a new companion that can help out whenever we need it. These individual ‘benefits’ mask broader societal externalities, such as the erosion of trust and the thinning of authentic human interaction. Because these technologies are marketed and adopted at the individual scale, societal impacts – aggregate, collective and likely to deepen over a longer timeframe – remain largely unaddressed in any serious way.
“People need to wake up to this as individuals and as leaders within their organizations if society is going to successfully adapt.
“Societal resilience to the allure and spell that AI has over individuals must be developed through education on the individual level and enforceable policy in support of it on the societal level. This education must provide the knowledge necessary for individuals to develop a genuinely effective level of autonomy. Everyone should understand when and how to use AI tools and work to recapture or retain their agency.
“But this isn’t enough. Individuals exist in organizations with various goals, some of which are focused on profit. Helping society be resilient requires devising ways to help individuals be resilient within organizations. Organizations, themselves, cannot be resilient if they don’t focus their policies and practices on supporting the three basic human psychological needs – competence, autonomy and relatedness – in authentic ways.
“Moreover, in order to support human resilience, society should develop approaches that require AI systems to be accountable to humans and to maintain human connection and accessibility. It is possible that policies to support open-source AI systems may help facilitate the alignment of technology with individuals and help mitigate the undermining of human autonomy by corporate or sovereign AI systems.”

Fernando Barrio
At a time when AI is fast-becoming infrastructure, resilience relies most upon strong legal and civic institutions rather than on people’s individual strengths. Those without such institutions will suffer.
Fernando Barrio, co-director of the Centre for Environmental Change and Communities and principal lecturer in business and law at Queen Mary University of London, said, “Artificial intelligence is already becoming more consequential and less visible than it was just a year ago, operating as the infrastructure through which institutions perceive reality and act upon it. AI is embedding itself in systems that allocate resources, assess risk, filter knowledge and coordinate action, and as it does so it increasingly disappears from view – not because it is insignificant, but because it has become part of the environment itself. Yet this environment will not be experienced in the same way everywhere. In the Global North, AI will often arrive as convenience, optimisation and support; in much of the Global South, it will arrive as condition, requirement and constraint that shapes access to services, work and mobility long before meaningful public debate takes place.
“For much of human history, resilience was understood as a personal capacity: the ability to endure uncertainty and recover from disruption. Yet AI does not simply introduce disruption; it reorganises it, moving uncertainty from visible human disagreement into opaque technical systems where power is exercised indirectly and responsibility is diffused. This shift is not neutral. Societies with strong institutions, regulatory capacity and social protections will be able to contest and shape AI systems, while those without them will experience automation as imposed, imported and difficult to refuse.
Resilience can no longer be defined only as emotional strength or cognitive flexibility, because the challenge is no longer simply how to cope with change, but how to retain agency when the systems producing change are designed elsewhere. Resilience must therefore become institutional, legal and collective, or it will remain fragile and deeply unequal.
“In this environment, resilience can no longer be defined only as emotional strength or cognitive flexibility, because the challenge is no longer simply how to cope with change, but how to retain agency when the systems producing change are designed elsewhere. Resilience must therefore become institutional, legal and collective, or it will remain fragile and deeply unequal.
“People will both embrace and resist this transformation, often at the same time.
“They will embrace AI because it reduces friction in daily life, because it writes and summarises, plans and predicts, advises and coordinates and because it fills gaps left by under-resourced institutions. In many parts of the world, AI will be adopted not because it is trusted, but because it is the only scalable option available. Yet people will also struggle, because these same systems quietly narrow the space for discretion, replacing judgment with defaults and deliberation with optimisation, so that life becomes easier to navigate but harder to contest. What is gained in efficiency may be lost in sovereignty, especially where systems are procured rather than co-designed. This tension will define the coming decade.
“Many people will adapt pragmatically, learning how to prompt systems, how to phrase appeals, how to align their behaviour with algorithmic expectations and how to live within infrastructures they do not fully understand. But this adaptation will look very different across regions. In wealthier societies, it may be framed as innovation; elsewhere, as survival. Yet in both cases, adaptation will often be closer to coping than to resilience, because resilience requires the ability to step outside a system, to question its premises and to refuse its outcomes when they are unjust. Without that capacity, adaptation becomes dependency and dependency becomes normality, particularly where alternatives do not exist.
If AI is treated as destiny, resilience will shrink. If it is treated as infrastructure, subject to democratic design, shared responsibility and global justice, resilience may yet expand, quietly and deliberately, into a form worthy of a world that is no longer evenly connected, but still collectively responsible.
“The capacities we must cultivate are therefore not only technical but civic. The practices that enable resilience are not technical add-ons but political commitments in support of human flourishing.
“Cognitive resilience in an AI-saturated world does not mean learning how to use tools more efficiently, but understanding that AI outputs are probabilistic, contextual and shaped by embedded assumptions about value, risk and efficiency, assumptions that often reflect the priorities of those who build the systems rather than those who live under them. Education must therefore teach people not only how to work with AI, but how to interrogate it, how to localise it and how to challenge it, especially in contexts where AI is imported as infrastructure rather than developed as a public good. These are democratic skills and they are essential for technological self-determination.
“Emotional resilience will also be tested, as AI accelerates change and destabilises long-standing ideas about expertise, creativity and work. In many economies, automation will intersect with informality, precarity and weak social protection, intensifying insecurity rather than alleviating it. Resilience here cannot be reduced to individual coping strategies or digital skills training; it requires social protection, labour transitions and public narratives of value that extend beyond productivity, because without these, AI will amplify existing vulnerabilities rather than mitigate them.
“Social resilience will depend on whether AI is used to strengthen cooperation or to replace it. In regions where public institutions are fragile, people will increasingly turn to AI for guidance, support and sensemaking, not because they prefer to, but because no human alternative is available. This may help individuals cope, but it risks deepening isolation and eroding trust if digital systems substitute for relationships rather than supporting them. Strong human institutions remain the foundation of resilience, even in highly digital societies and especially in those where technology arrives faster than governance.
“Ethical resilience may be the most fragile of all, because AI systems reward speed, efficiency and compliance, while ethical action often requires hesitation, questioning and refusal. In asymmetric contexts where power is concentrated and accountability is weak, challenging automated decisions can carry real risk. Ethical resilience therefore cannot depend on individual courage alone; it must be protected through law, collective action and international norms that recognise the unequal distribution of technological power and the right of societies to refuse harmful automation.
If societies fail to act now, new vulnerabilities will harden quickly. Inequality will deepen as resilience becomes a privilege of those with education, connectivity and institutional voice. Cognitive dependency will grow as judgment is delegated by default to systems designed elsewhere. Democratic erosion will accelerate.
“Transparency must be a right rather than a feature and it must apply across borders. Contestability must be normal rather than exceptional and accessible even to those without technical expertise. Liability must be traceable rather than dissolved into global supply chains. Public institutions must have the capacity to audit, regulate and redesign digital infrastructure in the public interest and international cooperation must support that capacity rather than undermine it. Without these conditions, resilience will become a luxury, unevenly distributed along existing lines of wealth and power.
“If societies fail to act now, new vulnerabilities will harden quickly. Inequality will deepen as resilience becomes a privilege of those with education, connectivity and institutional voice. Cognitive dependency will grow as judgment is delegated by default to systems designed elsewhere. Democratic erosion will accelerate as automated systems quietly replace deliberation in domains that were once governed by politics. The most dangerous vulnerability, however, is normalisation, represented by the moment when societies accept that they have no choice, that systems cannot be questioned and that the future is something imported rather than shaped.
“Resilience in the age of AI is therefore not about becoming more-adaptable individuals, but about becoming more-demanding societies, capable of insisting that systems remain intelligible, contestable and aligned with local and global values. The future will not be decided by how intelligent our machines become, but by how seriously we take the task of governing them, teaching with them, and, when necessary, refusing them. If AI is treated as destiny, resilience will shrink. If it is treated as infrastructure, subject to democratic design, shared responsibility and global justice, resilience may yet expand, quietly and deliberately, into a form worthy of a world that is no longer evenly connected, but still collectively responsible.”

Maria S. Randazzo
The future of human dignity and agency depends upon institutional design: In the age of AI, ‘human resilience shifts from simply enduring to sustaining autonomy under technological mediation.’
Maria S. Randazzo, a research professor in the school of law at Australia’s Charles Darwin University and author of “AI is Not Intelligent At All: Why Our Dignity is at Risk,” wrote, “As AI systems become embedded in governance, markets, education, healthcare and everyday decision-making, human adaptation will unfold across interconnected dimensions, including, inter alia, cognitive, institutional, professional, normative/legal and cultural dimensions.
“Cognitively, individuals will increasingly delegate decision-support tasks to algorithmic systems – from navigation and diagnostics to legal and financial assessment. This will intensify reliance on probabilistic reasoning and heighten expectations for ‘data-backed’ justification. At the same time, new literacies will emerge: the ability to interpret algorithmic outputs, evaluate uncertainty scores and understand bias and model limitations. Knowledge will shift from possessing facts to interrogating systems.
“Institutionally, authority will be reconfigured. As AI influences hiring, policing, credit allocation, welfare distribution and judicial reasoning, institutions must renegotiate accountability, contestability and the meaning of valid justification. Regulatory frameworks for algorithmic accountability, rights to explanation and appeal and hybrid human-machine oversight models will likely expand. The central adaptation here concerns the redistribution and formalisation of authority.
Resilience in the age of AI depends mainly on institutional design: transparency, rights of explanation, avenues of contestation and meaningful human oversight. Resilience, then, can be conceptualised as the preservation of human dignity, autonomy and reflexivity under conditions of algorithmic governance.
“Professionally, transformation is probable. Doctors, lawyers and teachers may rely on predictive or diagnostic systems, yet retain interpretive, ethical and relational authority. Routine analytic tasks will increasingly be automated. As contextual reasoning, moral discernment and relational intelligence become more central, the professional shift will be from execution to supervision, integration and normative judgment.
“More profoundly, societies will confront normative/legal recalibration. As algorithmic nudging and predictive modelling shape choices, individuals may experience diffusion of responsibility or diminished agency – ‘the system decided.’ Alternatively, demands for stronger human override mechanisms may intensify. Whether AI systems are treated as tools, advisors or quasi-authoritative actors will shape how responsibility and autonomy are understood. Preserving meaningful space for human contestation and refusal will be decisive.
“Adaptation, however, will not be neutral. It will vary across socio-economic contexts. Highly resourced actors will likely adapt more rapidly, while marginalised communities may encounter intensified surveillance and automation without equivalent control. Without deliberate governance, power asymmetries may widen. The central issue, then, is not whether humans will adapt – they always do – but how. Adaptation may take the form of passive accommodation to automated authority, or active shaping of AI within normative/legal frameworks. If human contestability, accountability and institutional responsibility are preserved, AI may augment human capacity without undermining autonomy. If not, adaptation may harden into the normalization of algorithmic governance.
“Resilience has traditionally meant endurance: the ability of individuals or institutions to withstand disruption and restore balance. In political theory, it evokes civic strength; in psychology, adaptive response; in governance, recovery after crisis. Yet as AI systems become infrastructural – determining access to credit, employment, welfare, healthcare, education and security – these conceptions must be rethought.
“In algorithmically-mediated environments, the challenge is to survive external epochal change by working to preserve human dignity and agency within the systems that increasingly create the conditions of choice. Resilience shifts from simply enduring to sustaining autonomy under technological mediation.
“Within algorithmic systems, decisions are guided by optimisation rules built into technological infrastructures rather than by principles individuals consciously choose for themselves. Resilience, in this context, implies the capacity to interrogate system outputs and retain deliberative judgment within probabilistic frameworks.
“Floridi’s informational ontology adds a further dimension: In a datafied world, persons exist not only as embodied agents but as informational entities whose digital profiles circulate within institutional decision-making. These predictive doubles may shape opportunities before action occurs. Resilience therefore includes safeguarding informational integrity – ensuring that data representations remain contestable and subordinate to the individuals they purport to represent.
“Taken together, these perspectives suggest that resilience in the age of AI depends mainly on institutional design: transparency, rights of explanation, avenues of contestation and meaningful human oversight. Resilience, then, can be conceptualised as the preservation of human dignity, autonomy and reflexivity under conditions of algorithmic governance.”

David J. Krieger
‘Coping with AI disruption does not mean understanding every algorithm, but demanding institutional accountability, participating in the design of governance frameworks for acceptable procedures.’
David J. Krieger, philosopher, social scientist and co-director of the Institute for Communication and Leadership in Lucerne, Switzerland, said, “The typical framing of AI disruption discourse is as a technical problem, asking us, ‘How do we make AI systems safe, controllable or value-compliant?’ This overlooks the fact that AI is primarily a societal and cultural challenge that requires new forms of social organization, governance, responsibility and human self-understanding.
“AI disruption cannot be solved in the traditional sense. Coping with AI means learning to live with non-humans as social partners, distributed agency and post-human network norms.
“Societies must replace the dream of control, autonomy and individuality with social practices of ongoing integration grounded in procedural governance and collective responsibility. In this view, the AI future becomes less of a technical issue than a continuous social process, mirroring the open-ended nature of society itself.
Coping with AI disruption does not mean understanding every algorithm, but demanding institutional accountability, participating in the design of governance frameworks for acceptable procedures, recognizing one’s role as a network participant and resisting anthropomorphic myths that obscure the constructive relations among humans and non-humans.
“It is important to emphasize that AI is not a bounded, individual system that can be dealt with in isolation from society. Instead, AI must be understood as a socio-technical network, a dynamic constellation of humans, non-humans, institutions, regulations, economic incentives, data infrastructures, algorithms and much more. This conceptual shift has profound implications for how individuals and societies can respond to AI-induced disruption.
“For societies, the most important coping strategy is abandoning the illusion of technical containment. Just as automobiles cannot be blamed in isolation for traffic deaths, pollution or urban sprawl, AI cannot be held solely responsible for social harm or benefit. Responsibility is distributed across designers, deployers, users, regulators, markets and diverse cultural expectations.
This implies that societies must:
- Develop collective responsibility frameworks rather than scapegoating AI.
- Treat AI governance as an ongoing institutional practice, not a one-time regulatory fix.
- Accept that AI disruption reflects pre-existing social conflicts, inequalities and power asymmetries rather than creating them ex nihilo.
For individuals, this means:
- Admitting that AI is not a mere tool, or an object opposed to human subjectivity, but a social partner.
- Recognizing that AI is not an external force acting upon society but something in which both humans and non-humans are already entangled as users, data sources, workers, citizens and decision-makers.
- Realizing that coping thus involves understanding one’s own role in AI networks rather than imagining oneself as a passive victim or sovereign controller.
“In light of the above assumptions, there are three levels of coping, each requiring different strategies.
1) “Technical safety and robustness: At this level, AI is still treated as a tool, as one technology among others. Societal coping involves engineering safeguards, testing, verification and reliability standards. While necessary, this level is insufficient on its own. Safety measures cannot address misuse, power concentration, or unintended systemic effects, nor can they address cultural transformation.
2) “Prevention of misuse: The assumption at this level is that disruption arises from human actors using AI for harmful purposes of economic exploitation, surveillance, manipulation, crime, or terrorism. Coping requires institutional oversight, legal accountability and political coordination, especially at transnational levels. Individuals cannot shoulder this burden alone; democratic societies must not only strengthen but also reconceptualize regulatory measures.
3) “Social integration of AI: Once AI becomes an autonomous or semi-autonomous actor, societies face not a tool problem but a coexistence problem. Disruption now affects foundational concepts: responsibility, agency, accountability, labor, autonomy, self-determination and even the meaning of intelligence itself. Coping means that societies must prepare for a post-human world not by attempting to retain humanist values and asserting human dominance over AI, but by learning how to integrate non-human actors into a new form of social order. It must be admitted that traditional concepts such as fairness, justice, dignity or freedom are vague and context-dependent, culturally pluralistic, historically and socially contested, and inapplicable to a post-humanist, global network society.
“On the other hand, moral consensus cannot be outsourced to AI and encoded in algorithms. It will not work if we attempt to encode ‘the good’ and freeze contested norms, or if we amplify dominant interests, or if we create brittle systems that fail under novel conditions. Rather than demanding that AI embody final moral truths, societies must develop procedural mechanisms that allow norms to be negotiated, revised and contested over time. Not substantive values but procedural values should guide coping strategies. Instead of attempting to define what AI should aim for, societies should define how socio-technical networks ought to operate. This approach mirrors democratic constitutionalism in that the legitimacy of socio-technical networks derives not from outcomes but from processes.
Such procedural values could be:
- Taking account of all affected actors, prioritizing risk analysis, preventing tunnel vision and catastrophic oversimplification.
- Producing stakeholders rather than victims or perpetrators, thus enabling participation rather than passive subjection.
- Prioritizing and instituting bottom-up governance frameworks in transparent, revisable ways rather than through top-down, inflexible government regulation.
- Balancing local and global concerns, acknowledging scalability without erasing contextual specificity.
- Separating powers, preventing concentrations or asymmetries of informational, economic, or political control.
“For societies, this translates into governance architectures that are adaptive, pluralistic and reflexive. For individuals, it implies participation, contestation and literacy rather than blind trust or rejection.
“Given the impending post-labor economy, it is to be expected that AI will initially exacerbate existing power asymmetries, bartering productivity gains against mass unemployment, weakened labor bargaining power and extreme capital concentration.
Coping strategies in this domain could be:
- Reframing the idea of the market as the fundamental mechanism of the material reproduction of society and designing new productive and distributive mechanisms.
- Rethinking the relationship between labor, income, social participation and identity. Human existence and self-understanding need not be defined by labor, as they have been for most people over the last 5,000 years.
- Developing institutional experimentation beyond closed systems to open networks in organizations in all areas of society, as well as in politics.
“We do not need a new enlightenment to regain human autonomy from the dominance of functional systems as the European Enlightenment once freed the individual from feudal and clerical domination. We need to shift from fantasies of control to situated agency and cooperative integration in complex socio-technical networks. Coping with AI disruption does not mean understanding every algorithm, but demanding institutional accountability, participating in the design of governance frameworks for acceptable procedures, recognizing one’s role as a network participant and resisting anthropomorphic myths that obscure the constructive relations among humans and non-humans.”

Bugge Holm Hansen
‘The deepest challenge is institutional … many were built for a slower tempo. … AI accelerates feedback loops and amplifies second-order effects. It does not fit neatly inside yesterday’s playbook.’
Bugge Holm Hansen, senior futurist and head of innovation and technology at the Copenhagen Institute for Futures Studies, wrote, “AI will almost certainly play a far more significant role in shaping our decisions, work and daily lives, not because it is ‘intelligent’ in a human sense, but because it is becoming ambient infrastructure. Once AI is embedded into workflows, interfaces and institutions, it stops feeling like a tool and starts behaving like an environment. The key shift is not that machines will think, but that organisations will increasingly act as if machine outputs are reliable inputs for decisions because they are fast, cheap and scalable.
“This will bring real benefits such as productivity gains, accessibility and new capabilities in education, healthcare, public administration and creative work. But it will also reshape trust, authority and agency.
“AI does not simply automate tasks; it changes how people form beliefs, how institutions allocate resources and how societies coordinate. The most likely risk is gradual overreliance, where plausible outputs are treated as truth and accountability becomes blurred.
“Individuals and organisations will adopt AI first where it reduces friction: drafting, searching, summarising, customer service, analytics, compliance triage, software development and decision support. Governments will adopt it where it appears to expand capacity.
Many institutions were built for a slower tempo: policies that take years, education systems that update slowly, legal processes that assume stable facts and governance structures that treat technology as an IT issue rather than a strategic and ethical one. AI accelerates feedback loops and amplifies second-order effects. It does not fit neatly inside yesterday’s playbook. Resilience, therefore, must become a core capability.
“Resistance will also be rational. Some will resist due to job displacement and the feeling of being managed by opaque systems. Others will resist because AI-mediated media erodes shared reality as deepfakes, synthetic text and automated persuasion make truth feel negotiable. Many will struggle less from ideology than from fatigue and cognitive overload in a world of accelerating change and contradictory signals.
“The deepest challenge is institutional. Many institutions were built for a slower tempo: policies that take years, education systems that update slowly, legal processes that assume stable facts and governance structures that treat technology as an IT issue rather than a strategic and ethical one. AI accelerates feedback loops and amplifies second-order effects. It does not fit neatly inside yesterday’s playbook.
“Resilience, therefore, must become a core capability.
“Cognitively, we need practical AI literacy: understanding where AI is strong, where it fails, what hallucination looks like and why fluent language is not grounded truth. The norm must shift from accepting outputs to treating them as hypotheses to verify.
“Emotionally, we need better self-regulation in an attention economy increasingly optimised by AI, otherwise manipulation, polarisation and helplessness become easier to scale.
“Socially, we need systems of trust, not just individual critical thinking: provenance, transparency, contestability and clear human recourse when AI influences outcomes.
“Ethically, we must move from principles to operational choices:
- What may be automated?
- What must remain meaningfully human?
- Who carries risk?
- And how do we prevent the quiet normalisation of surveillance and widening inequality?
“Actions to take now are straightforward and urgent:
- Treat AI as governance, not just adoption.
- Require clear accountability for AI-influenced decisions, basic quality assurance and verification practices and risk management that covers dependency, concentration, reputation and workforce impacts.
- Invest in public and organisational infrastructure for trust, including authentication and provenance norms, and in education that strengthens sensemaking and media literacy.
“If AI is new infrastructure, resilience must become a shared literacy built deliberately before convenience hardens into dependency.”

J. Amado Espinosa
As AI embeds everywhere in an ‘autonomy economy,’ people will face a crisis of meaning. Resilience will come with institutional interventions, new practices and strategies to overcome vulnerabilities.
J. Amado Espinosa, CEO at Medisist, VP for digital health at Coparmex, and MD based in Guadalajara, Mexico – a co-coordinator of the Policy Network on Artificial Intelligence at IGF – wrote, “The relationship between individuals and societies with respect to AI is complex and multifaceted. While some digitally-connected individuals and societies embrace AI, others resist or struggle with it due to various psychological, emotional and systemic barriers: fear of job loss, data privacy concerns, resistance to change (loss of personal agency), cynicism and skepticism, need for empathy and understanding.
“’Digital individualism’ describes an internet-driven shift from traditional group-oriented structures to dispersed, individually-focused networks in which people can focus their social support and gain access to more novel, varied and targeted information. ‘AI individualism’ is a further transformation in which people become less dependent on human-to-human interactions, relying more on tapping into AIs for tailored information, relational experiences, practical help and emotional support.
“The shift to AI may shift social structures and norms further toward favoring individual control over social support, fundamentally altering human interaction, connectivity and social capital.
If humans are to remain relevant in the AI era, leaders in education, workplaces and other institutions must actively help cultivate within each member of society the emotional regulation, cognitive flexibility, social cohesion and ethical discernment that allow society to adapt without losing direction. These are not ‘soft’ skills; they are survival capacities. Education today must do more than teach technical skills and promote knowledge consumption.
“Another looming issue is the fact that algorithmic personification acts as a Trojan horse for corporate control. By embedding persuasive, human-like interfaces into every digital interaction, Big Tech ensures that its influence is not just economic but existential. These systems are not neutral; they are engineered to maximize engagement, often at the cost of truth, privacy or mental health. The more convincingly an AI mimics human behavior, the harder it becomes to resist its nudges – whether to buy, to believe or to behave in ways that serve its masters.
Cognitive, emotional, social and ethical capacities for resilience
“Public debate still fixates on whether and when AI will match or surpass human intelligence, while far less attention is paid to what capacities individuals and institutions must build to adapt to its pervasive integration. Human resilience should be prioritized as much as technological progress is. AI is no longer a backend abstraction but embodied in machines that move, sense and act in the physical world.
“From autonomous driving and hospitality assistants to mobile companions, AI is rapidly embedding itself in everyday life. We are no longer just users of AI.
“This shift defines the rise of the autonomy economy, in which machines not only perform physical and cognitive labor but increasingly simulate human-like emotional presence. While these systems promise efficiency and scalability, their deeper disruption lies beneath the surface.
“Many individuals face not just unemployment but a crisis of meaning. AI’s performative empathy risks dulling our capacity for real intimacy, trust and vulnerability. Even more concerning is AI’s growing influence over decisions with moral weight – healthcare, hiring, parole and resource allocation – where opaque algorithms often optimize for efficiency rather than justice.
“These systems can embed invisible biases and remove deliberation from processes that once demanded human judgment. As traditional ethical frameworks are displaced by technical proxies, our capacity to contest, understand or shape the values behind these decisions is weakened.
Humans in all realms must motivate and educate all for resilience
“We define human resilience as a multi-level capacity to absorb disruption, adapt and restore function while preserving core purposes and values. Formally, it comprises: 1) psychological resilience, the individual abilities of emotion regulation, meaning-making and cognitive flexibility that sustain goal-directed behavior under stress; 2) social resilience, the collective capacities of trust, social capital and coordinated response that enable groups and communities to mobilize resources and maintain cohesion during shocks; and 3) organizational resilience.
“Human resilience in the age of AI systems is not solely dependent on people’s cognitive capabilities but also on their emotional, social and ethical capacities. These elements are crucial for the successful integration of AI into human activities and for fostering deeper trust and understanding between humans and machines.
“Resilience is not just a psychological construct. It is a functional capacity that operates across layers. It protects well-being under digital stress, supports equitable adaptation to AI-driven shifts and enables systems to recalibrate without fragmenting. It is not innate, nor is it elusive.
“If humans are to remain relevant in the AI era, leaders in education, workplaces and other institutions must actively help cultivate within each member of society the emotional regulation, cognitive flexibility, social cohesion and ethical discernment that allow society to adapt without losing direction. These are not ‘soft’ skills; they are survival capacities. Education today must do more than teach technical skills and promote knowledge consumption.
We must reject the premise that asking for a future with ‘better’ AI means that the AI should be ‘more human.’ The most ethical AI – the one that embraces its artificiality, making its limitations clear rather than masking them – might be better than human.
Normalize this: AI should never ‘replace’ human thinking
“Those who create an appropriate symbiotic relationship with AI know that it cannot be seen as a ‘replacement’ for human thinking. They use their AI sessions to build their cognitive skills; human and machine intelligence are at their best when they complement and enhance each other. Cognitive resilience – the ability to maintain and strengthen our own mental capacities in the face of technological change – involves cultivating a critical and reflective mindset that allows us to engage with AI in a discerning and purposeful manner.
“Among the other approaches we must take to succeed in building up resilience are:
- “Encouraging the societal normalization of healthy personal habits that allow individuals to maintain a reasonable balance between digital engagement and offline activities. In addition to educating all about emotional regulation, cognitive flexibility, social cohesion and ethical discernment, this is important to mitigate the negative impacts on mental and physical health of excessive digital use and the accompanying loss of in-person socialization and of outdoor space and time.
- “Government initiatives and public-awareness campaigns aimed at promoting responsible digital behavior and raising awareness of digital risks. These campaigns can empower individuals with a deeper understanding of digital environments and the knowledge to navigate them safely. Programs should address societal norms and cultural attitudes towards digital engagement, privacy and ethical considerations. It is vital to foster a more-informed and more-responsible digital citizenry.
- “Legislation of boundaries is required. Effective regulation is needed. If an AI is designed to persuade, it should be labeled as such – no different from advertising disclaimers. If it simulates emotion, users should be reminded, in real time, that they are talking to a statistical model.
“Human resilience, as I explain it here, must be prioritized. Policies at both institutional and governmental levels should promote a balanced approach of human support along with AI implementation. Already, at this early point in our growing dependence on AI in professional work, many people are required to max out their mental capacity for multitasking due to the rise in productivity expectations that has accompanied the arrival of AI systems.
“And, critically, we must reject the premise that asking for a future with ‘better’ AI means that the AI should be ‘more human.’ The most ethical AI – the one that embraces its artificiality, making its limitations clear rather than masking them – might be better than human.”

Joel Christoph
‘Coping means treating AI not as a gadget, but as governance.’ The ability to appeal high-stakes AI-mediated decisions, an ‘authenticity infrastructure,’ redundant systems, and more are required.
Joel Christoph, economist and political scientist – a researcher on AI governance, global coordination and political economy and Technology and Human Rights Fellow at the Harvard Kennedy School – wrote, “AI systems will play a much more significant role in shaping our decisions, work and daily lives, not because ‘AI takes over,’ but because institutions will embed AI into the plumbing of society. Search and discovery, hiring and credit, education and health triage, compliance and procurement, content visibility and enforcement will increasingly run through AI-mediated pipelines. Most people will not experience this as a single rupture. They will experience it as many small defaults that quietly reallocate agency.
“That creates a paradox for resilience. AI can increase individual capability by helping people learn faster, plan better, communicate across languages, access expert knowledge and coordinate with others. At the same time, it can make societies more brittle by concentrating power in opaque systems, accelerating manipulation, eroding shared reality and encouraging cognitive dependence. The central question is not whether humans adapt, but what kind of adaptation becomes normal, adaptation that expands human agency and dignity, or adaptation that trains people to cope inside systems they no longer understand.
Resilience in an AI-saturated world is not mainly individual grit. It is epistemic resilience that preserves shared reality, true agency resilience that protects the ability to choose and contest and institutional resilience that ensures systems fail safely and correct quickly.
“Most people will embrace AI in domains where it reduces friction, such as drafting and research, navigating bureaucracy, health and life administration, translation, tutoring and creative support. This will feel like an extended mind, a practical cognitive prosthesis. For many, it will be the first time high-quality guidance is always available. In places with weak institutions or scarce professional support, AI may become the default layer for education, legal triage and mental health coaching.
“Resistance will take several forms. Some will be cultural and professional, with communities defending human judgment, craftsmanship and authenticity in teaching, journalism, art, medicine and public service. Other resistance will be political, driven by backlash against surveillance, discrimination, automated denial of services and the sense that no one is accountable. The struggle will be sharpest where AI functions as a gatekeeper for benefits eligibility, policing risk scoring, insurance, credit, hiring and content moderation, because errors and bias in these contexts are not merely inconvenient. They can reshape life chances.
“Resilience in an AI-saturated world is not mainly individual grit. It is epistemic resilience that preserves shared reality, true agency resilience that protects the ability to choose and contest and institutional resilience that ensures systems fail safely and correct quickly.
“At the individual level, the most important cognitive capacity is independent judgment under uncertainty. People will need to ask good questions, notice contradictions, check sources and understand the incentives behind recommendations.
“Emotional resilience will include identity security that is not tied solely to marketable cognitive output and habits that resist persuasive or addictive interfaces.
“Social resilience will depend on sustaining human trust networks, relationships and communities that are not fully mediated by ranking algorithms and synthetic personas.
“Ethically, we must preserve responsibility for delegation. As AI systems recommend actions, individuals and institutions must remain accountable for outcomes and ‘the model suggested it’ cannot become a moral alibi.
“The most practical resilience resource is contestability, the ability to appeal high-stakes AI-mediated decisions and obtain meaningful explanations and correction. A society without contestability will teach people resignation rather than resilience.
Public-interest information institutions and authenticity standards should be strengthened so that shared reality is not at the mercy of commercial platform dynamics. … What we do now will shape whether adaptation is empowering or corrosive. Accountability must be built into deployments.
“Resilience also requires authenticity infrastructure, including tools and standards that help people distinguish verified information, real identities and traceable media from synthetic or manipulated content. Without this, public life becomes vulnerable to scaled deception and people retreat into tribal epistemologies.
“Resilience further depends on redundancy, because critical services should not rely on a single model, vendor or automated pipeline. AI should be treated like other critical infrastructure, with audits, monitoring and design that degrades gracefully under failure.
“Education also matters, but AI literacy should be civic rather than technical. People should understand where AI is used in their lives, how optimization can conflict with human goals and what rights and recourse they have.
“What we do now will shape whether adaptation is empowering or corrosive. Accountability must be built into deployments through clear liability for harms, documented model use and auditable decision trails in high-stakes settings. Due process must be protected through appeals, meaningful human review and transparent criteria when AI influences access to jobs, credit, housing, healthcare, or justice. Incentives must shift away from extraction, because tools optimized for engagement, persuasion, or data harvesting will undermine autonomy and social trust. Public-interest information institutions and authenticity standards should be strengthened so that shared reality is not at the mercy of commercial platform dynamics. Education systems should preserve minimum viable independence by explicitly teaching critical reading, numeracy, argumentation and long-form reasoning – skills that reduce total cognitive offloading and keep people capable of independent judgment.
“New vulnerabilities will emerge even in generally positive trajectories. A major risk is loss of agency through defaults, with people nudged, ranked and filtered into choices without noticing. Another risk is epistemic fragmentation, as AI-tailored persuasion and synthetic content dissolve common ground. A third risk is automation complacency, where fluent and confident systems are over-trusted. Coping strategies should include deliberate practice of core skills without assistance, routine verification habits, community-based sensemaking and normalized use of appeals mechanisms when systems fail. At the societal level, coping means treating AI not as a gadget, but as governance.
“Most people will adapt enough to function. Whether they adapt in a way that preserves freedom, fairness and shared reality depends on choices made now about accountability, contestability, authenticity, incentives and education. Resilience in the AI age is not only the capacity to endure change. It is the capacity to shape it.”

Mike Linksvayer
Whether AI ultimately expands or constrains human agency will depend less on the technology itself than on the quality of the institutions we build around it. Worry about adversarial actors that scale AI.
Mike Linksvayer, head of developer policy at GitHub, previously VP and CTO at Creative Commons and director at the Software Freedom Conservancy, wrote, “Over the next decade, AI systems will play a significantly larger role – but with far more continuity than rupture. The most illuminating historical analogue is not a particular prior technology, but the long arc from oral culture to written culture, to print, to near-universal literacy – and then, more recently, to computing. AI fits naturally as the next phase in that trajectory.
“Literacy dramatically changed what people could know, how knowledge could be stored and transmitted, who could participate in public life, and how institutions functioned. It enabled abstraction, coordination across time and space and the accumulation of durable legal, scientific and administrative systems. Yet literacy did not ‘take over’ most human decisions. Instead, it became an ambient capability: indispensable in some domains, largely irrelevant in others, and unevenly distributed for a very long time. Its effects were profound but rarely felt as coercive or centrally managed.
“I expect AI to follow a similar pattern. Within the next 10 years, AI systems will influence a meaningful but minority share of daily decisions for most people. Their influence will often be indirect and infrastructural – helping draft, summarize, recommend, flag, optimize and predict – rather than directly controlling outcomes. As with literacy, the most important change will not be that machines decide for people, but that they reshape what people can reasonably know, evaluate and attempt.
AI is best understood as part of a long epistemic and institutional evolution, akin to literacy: uneven, powerful, imperfect and deeply shaped by policy choices. Whether AI ultimately expands or constrains human agency will depend less on the technology itself than on the quality of the institutions we build around it.
“Seen this way, much of the anxiety around ‘keeping up with AI’ reflects a category error. Humans have always extended cognition beyond the individual mind: first through language, then writing, institutions, bureaucracies and computing systems. AI accelerates and thickens this extended mind, but it does not fundamentally alter the underlying pattern. It is therefore extremely likely that many people will experience genuine cognitive gains from AI – not because AI replaces thinking, but because it changes the cost structure of reasoning, synthesis and exploration.
“This perspective also explains why I am skeptical of attempts to quantify ‘resilience’ in isolation from institutional context. Asking what percentage of people will master various resilience dimensions begs the question: relative to what baseline and under what policy regime?
“Literacy itself did not produce resilience automatically. It interacted with education systems, economic structures, political inclusion and public goods. Where those institutions were inclusive and well-functioning, literacy was broadly empowering. Where they were extractive or exclusionary, literacy often amplified inequality.
“The same will be true for AI. The cognitive and emotional capacities people need – judgment, skepticism, responsibility, agency – are not fundamentally new. Knowing when to interrogate AI is not categorically different from knowing when to interrogate bureaucracies, markets or expert systems. What matters most is whether these systems expand or constrain the real capabilities of the people and institutions using them.
“This leads to what I see as the most underappreciated point in current debates: The policies that best support resilience in an AI-rich world are largely AI-invariant. Economic efficiency, inclusive institutions, broad access to education, investment in public goods and governance structures that distribute power rather than concentrate it were good policy before AI and remain good policy regardless of how AI progresses.
“There is no special ‘AI resilience lever’ that substitutes for these fundamentals.
We should be bullish on AI as a complement to human labor and creativity and as an accelerant for innovation that can improve living standards and help address planetary-scale challenges. But current policy choices do not reliably incentivize that outcome. In particular, tax systems that heavily tax labor while favoring capital investment and labor-substituting automation risk pushing AI development in a direction that undermines broad-based resilience.
“AI’s most novel risks do not primarily come from misuse by governments or corporations, which – however imperfectly – remain subject to law, public pressure and accountability. The sharper risk is that AI dramatically lowers the cost of scale for organized criminal and adversarial actors, who operate outside those constraints.
“In that sense, AI does not so much introduce a new policy problem as radically intensify an old one: Societies that fail to suppress organized crime will see that failure amplified. The resulting harms are therefore not chiefly problems of individual over-reliance or cognitive weakness, but collective-action and governance failures – demanding institutional capacity, enforcement and international coordination, not moral exhortation.
“Finally, policy ambition matters. We should be bullish on AI as a complement to human labor and creativity and as an accelerant for innovation that can improve living standards and help address planetary-scale challenges. But current policy choices do not reliably incentivize that outcome. In particular, tax systems that heavily tax labor while favoring capital investment and labor-substituting automation risk pushing AI development in a direction that undermines broad-based resilience.
“Shifting taxation away from labor (below generous thresholds) and toward inelastic goods such as land, along with environmental and social externalities, would better align incentives with human flourishing – AI or no AI. This kind of reform is often dismissed as politically infeasible, but low expectations are themselves a source of fragility. The same society capable of deploying transformative technologies at scale should be capable of updating the policy frameworks that govern them.
“If we resist framing AI as either an existential rupture or a purely technical problem, a clearer picture emerges. AI is best understood as part of a long epistemic and institutional evolution, akin to literacy: uneven, powerful, imperfect and deeply shaped by policy choices. Whether AI ultimately expands or constrains human agency will depend less on the technology itself than on the quality of the institutions we build around it.”

Juan Ortiz Freuler
‘The key challenge we face is that corporations are becoming social scaffolding, defining the shape and range of alternative social arrangements.’ Leaders must foster support for a resilient political culture.
Juan Ortiz Freuler, co-initiator of the non-aligned tech movement, previously a senior policy fellow at the Web Foundation, wrote, “AI systems will play a more significant role in shaping our decisions, work and daily lives in the future. However, such a statement might benefit from historical context.
“To understand how AI fits into a longer historical arc, it might be useful to revisit the recent past. In the introduction to an interview with Steve Jobs that aired in 1981, journalist Bettina Gregory noted the growing relevance of computers in everyday life: ‘You can’t do the simplest things today without using a computer. You can’t cash a personal check. You can’t even make a phone call without linking it to the telephone company’s computer. And if you go to the supermarket, the chances are good a computer will check out your groceries. … In some areas, computers have replaced humankind. In Washington, the subway system is run by a computer. … In the airline industry, computers will reserve your seat on the plane and – make no mistake – when you take off, it is computers that tell the airline controllers how to get you safely to your destination.’
“Today, almost 50 years later, we could easily imagine replacing the term ‘computer’ with ‘AI’ and re-running the same segment in that broadcast without anyone being particularly surprised. Placing these AI systems on a continuum with prior mediation and automation techniques allows us to demystify them. In doing so, we could say that AI systems affect power by reshaping relational dynamics in ways similar to the sociotechnical reshaping that has been occurring over past decades and even centuries. This is in contrast with popular narratives that focus on the changes rather than continuities. Narratives of change are favored by corporations since they increase investor curiosity and buy-in while creating space within which to claim the applicability of beneficial regulatory loopholes and to raise doubts about the technical capacity of regulators to understand the latest technique.
“Artificial intelligence systems can be best understood when observed as part of the longer historical process in which workers develop machines for capitalists. While the automation of labor by capitalists is not novel, the arena of deployment is always expanding and the new capital is being used to mediate human relations as well as humans’ understanding of their environment.
Texture and friction are central to having a thriving political culture that enables resilient democratic practices. We need to ensure the process of digitizing the social scaffolding does not flatten such experiences.
“The AI narrative is a smokescreen behind which a handful of corporations are expanding surveillance and control over social relations. The key challenge we face is that megacorporations are building a global social scaffolding that defines the shape and range of alternative social arrangements. Corporate leaders are executing a pincer move. On the one hand, the current deployment of this technique allows them to observe humans and train AIs on interactions that were previously private. This expansion allows the companies to better target their advertising systems. On the other hand, it also enables greater surveillance and control over human interactions, such that emerging pockets of resistance can be identified and neutralized quickly.
“Beyond these direct risks is the more subtle challenge created by the flattening of human-to-human interactions. When AI operates as social scaffolding, it seeks to ease the social frictions that are essential to democratic culture, eliminating the possibility of heated debates. … This reduces the availability of human expertise across the world while undermining the collective agency that can emerge from groups that coalesce around the idea that certain positions are legitimate or not.
“Once we place AI within this historical framing and understand that it is deployed to directly undermine collective action while indirectly flattening the texture upon which democratic culture is constructed, we can focus on two types of responses.
“First, we need to redistribute the power that these techniques have consolidated in a handful of companies. At a local level, this might require breaking up companies through antitrust as well as developing rules that ensure interoperability across different technologies, such that new upstarts can easily jump into action. With a growing plurality of providers and systems, regulators can compare different techniques and subsequently favor those that are most in line with the public interest.
“Second, because texture and friction are central to having a thriving political culture that enables resilient democratic practices, we need to ensure the process of digitizing the social scaffolding does not flatten such experiences.
“Promoting the use of open-source and open-weight systems might help create such opportunities to have public debates regarding the characteristics of the social scaffolding, its merits and risks. Openness allows people to scrutinize the code, debate it and form sub-communities around alternative arrangements that let them experience such alternatives.
“As the techniques of intermediation and automation known as ‘AI’ expand into private spaces and social relations we need political leaders and candidates who embrace these questions as central to their political platforms. We need publicly elected leaders who have the means to imagine, develop and deploy alternative ways of connecting in the virtual world. Alternatives that are not defined by Wall Street’s quarterly earnings reports or angel investors but shaped by public deliberation.
“Creating government alternatives to existing private social media platforms faces a variety of challenges, including that network effects will favor the incumbents. That is why the first step requires antitrust and interoperability mandates. Legislators and regulators can then seek to establish open-source and open-weight requirements for systems that help achieve certain benchmarks (number of users, valuation, public interest), while providing public resources to nurture this ecosystem.
“The Neutrality Pyramid offers a comprehensive policy framework to explain, discuss and advance such an agenda. The Digital Public Goods Alliance provides an early example of how communities are coalescing around the idea of open source for the public interest.”
The second section of Chapter 2 features the following essays:
Alison Poltock: Clarity must prevail, else our muscle of introspection will weaken, moral reasoning thin and space for ambiguity and uncertainty shrink. It’s a ‘quiet exit.’ Resilience arrives through reimagined civic design.
Maha Jouini: ‘Adaptation without ethical reflection risks creating societies where algorithms silently structure opportunity and exclusion. … For AI to truly serve humanity, it must be guided by wisdom.’
Sonia Livingstone: ‘Society is moving into a world that lacks checks and balances – in which commerce provides the infrastructure for our private and public lives.’ This human failure jeopardizes the human future.
Karen Caplovitz Barrett: Leaders at all levels of government must understand we must be proactive, rather than reactive.
Samuel Hammond: ‘Within two to four years … The mass proliferation of powerful AI capabilities and agents will likely have a destabilizing effect on current institutions. Many existing systems will break.’
Rita McGrath: ‘No amount of individual resilience can compensate for a system structurally tilted against ordinary people. Mass displacement of workers without social investment would destabilize the social fabric.’
Michael Noetel: We need calibrated uncertainty, institutional imagination and collective agency; ‘the decisions we make now’ about safety, governance and research priorities will shape our future.
Salman Khatani: The window for proactive intervention is now – we have perhaps 5 to 10 years to establish new resilience-building practices and norms before AI’s role becomes too entrenched to reshape.

Alison Poltock
Clarity must prevail, else our muscle of introspection will weaken, moral reasoning thin and space for ambiguity and uncertainty shrink. It’s a ‘quiet exit.’ Resilience arrives through reimagined civic design.
Alison Poltock, co-founder of AI Commons UK and The Heart of AI community interest groups and author of a Substack titled “The Future is Personal,” wrote, “Resilience in the age of AI will not come from technical mastery, but clarity. Clarity in our ability to stay human under systemic pressure. Clarity about boundaries between self, systems and automation. Clarity about where human responsibility begins and where machine logic must end.
“As artificial intelligence becomes embedded in our everyday lives, public systems and personal decision-making, the question is no longer whether AI will change society, but how quickly and with what oversight. Most public discourse remains preoccupied with the price of AI development (in dollars and environmental terms), job loss, bias and the erosion of privacy.
“All serious concerns. But the deeper structural risk is the erosion of the communal coordinates that anchor our shared truths and shape the conditions under which human identity, judgment and meaning are formed.
We are in a moment of epistemic shift. … The developmental frameworks shaping identity, agency and social orientation are shifting. … This is the terrain of vulnerability. Yet there is no shared conversation. No civic space where this new reality is named, let alone addressed. We are operating on outdated institutional architecture, strapping jetpacks to systems built for another age and allowing our children to grow up in the gap.
“We now allow AI-driven systems to ‘optimise’ our words, our work, our sleep, our moods. Students can automate away the struggle to find their unique voice. Policymakers can lean on predictive tools without understanding the assumptions beneath. A friend can engineer the perfect condolence without the need for any inconvenient feelings.
“AI is helping us cut corners. It is saving us a lot of time. But as we outsource our memory, language and creativity, we risk outsourcing our core human instincts as well. This loss isn’t registered at the level of headlines; it accumulates through habit. Over time, our muscle of introspection weakens, moral reasoning thins and the space for ambiguity and uncertainty – the playground of human insight – shrinks. It’s a quiet exit. There are no alarm bells. No spectacle. Just a lot less skin.
“We are in a moment of epistemic shift. Surveys in the past few years indicate that many people may be spending more time using AI-based platforms to be informed, discuss issues and share their lives than participating in real-world, face-to-face social interaction. These are not marginal trends. They reveal that the developmental frameworks shaping identity, agency and social orientation are shifting. This is the terrain of vulnerability. Yet there is no shared conversation. No civic space where this new reality is named, let alone addressed. We are operating on outdated institutional architecture, strapping jetpacks to systems built for another age and allowing our children to grow up in the gap.
“AI systems are not just tools. They are parasitic by design. To reflect our voices, values, needs, they must be trained on our data, our habits, words and fears. This isn’t a side effect; it’s the core architecture. If we want AI to be of use to us, the system must first extract from us. Resilience begins by recognising that trade-off and deciding what must not be given away. What AI returns is not neutral. In maximising engagement, it slices up the digital world into private, personalised feeds. We lose the shared reference points that allow us to think, argue and act together. The digital Commons is not just shrinking, it’s being atomised. AI thrives on fragmentation. Democracy does not.
We stand at the edge of a profound transition, not just in what AI can do, but in what it reveals. Resilience will not come from adapting faster to machine systems. It will come from reorienting ourselves in relation to them. Now. … We need new infrastructures – educational, institutional, cultural – capable of holding this moment with care and foresight. We need systems that will protect human agency, not automate it.
“Resilience, then, cannot be reduced to personal ‘grit’ or mindfulness. It must be treated as a civic design imperative and built into the systems and cultures that shape public life. That means:
1) “Structural Boundaries: Some decisions must remain human by design. Life, death, identity, rights and justice are not engineering problems. Governance must begin with red lines, backed by law, that guarantee human judgment in critical domains.
2) “Institutional Accountability: Any AI used in public life must be intelligible and open to scrutiny. Its function, data and outcomes must be visible to those it affects, with clear mechanisms for challenge and redress. A society cannot remain democratic if its citizens cannot audit the systems influencing them.
3) “Public Naming: We cannot govern what we cannot describe. Today’s AI terminology is fragmented – drawn from neuroscience, engineering, psychology and myth. But how we name systems shapes how we relate to them. AI systems must have an understandable, shared civic vocabulary or collective governance fails.
4) “AI Literacy: Using AI isn’t enough. Citizens must understand how systems are built, what trains them and where they fail. We need tools to interrogate outputs, decode assumptions and challenge influence. Interpretive literacy must be a civic right, requirement and governance priority.
5) “Cultural Safeguards: Resilience requires full human presence – not just ‘human in the loop,’ but human at the centre. Care, teaching, listening and community work are civic infrastructure. These roles carry our values and must be funded, protected and prioritised.
6) “Human-Centered Measurement: Public systems must resist valuing only what machines do well – speed, scale, efficiency. If those are our benchmarks, people will always fall short. We need metrics that honour trust, care, judgment, attention and social contribution. What we choose to measure defines what we choose to protect.
7) “Rights of Inclusion: Inclusion must mean real choice. No one should be forced into participation through convenience or excluded through design. Everyone must retain the right to remain untracked, unprocessed and private by default; true inclusion includes the right not to be included.
8) “Upstream Consultation: Consultation must shift away from reaction to design. Communities must be involved before systems are deployed, not after harm occurs. Resilience depends on participation, foresight, and consent at the point of creation.
“When the camera first appeared nearly 200 years ago, the painter J.M.W. Turner declared it ‘the end of art.’ But it wasn’t. It was the end of one kind of art: art as record. Freed from documentation, artists were liberated to reimagine the world. We are at a similar threshold. We stand at the edge of a profound transition, not just in what AI can do, but in what it reveals. Resilience will not come from adapting faster to machine systems. It will come from reorienting ourselves in relation to them. Now.
“We need new infrastructures – educational, institutional, cultural – capable of holding this moment with care and foresight. We need systems that will protect human agency, not automate it. We need public conversations grounded in ethics, not just outputs. And we need governance that treats this not as a policy issue, but as the civilisational inflection point it is.”

Maha Jouini
‘Adaptation without ethical reflection risks creating societies in which algorithms silently structure opportunity and exclusion. … For AI to truly serve humanity, it must be guided by wisdom.’
Maha Jouini, digital communication officer at the African Union Development Agency and research fellow at the Global Center on AI Governance, wrote, “We must recognize that resilience is not purely individual – it is collective. Communities, policymakers, technologists and researchers must collaborate to ensure that AI systems are designed with human dignity at their center.
“As a cancer survivor, I understand resilience in the age of artificial intelligence not as an abstract concept but as a lived reality. AI systems increasingly shape decisions about health, employment, finance and access to care. Yet the experiences of many women – particularly those living with chronic illness – remain poorly represented in the datasets that inform these systems. This absence creates a ‘silent digital condition.’ Invisible data = invisible women.
“When vulnerability is translated into algorithmic categories, human complexity can be reduced to simplified risk signals. A cancer diagnosis can become a data marker interpreted by systems evaluating employability, insurance eligibility or productivity.
“What was once a deeply personal struggle for survival becomes an automated classification. In this process, AI can unintentionally transform vulnerability into exclusion. Consider this example: A woman who survived cancer applies for a job. An AI screening system flags her employment gap as a productivity risk. She never gets an interview. No human ever reviewed her file.
When AI systems encode bias, exclude the vulnerable or concentrate power without accountability, they do not merely produce technical errors, they erode the social fabric. Ibn Khaldun would recognize in algorithmic injustice the same corrosive force that, left unchecked, weakens civilizations from within.
“For women navigating illness, work and social expectations simultaneously, resilience therefore requires more than adapting to technological change. It requires maintaining dignity within systems that increasingly evaluate human lives through data.
“This challenge is particularly visible in the Global South. Artificial intelligence technologies are largely developed within Western technological ecosystems shaped by values such as efficiency, optimization and market performance. While these frameworks have produced remarkable innovation, they often neglect relational and communal understandings of human well-being that exist in many non-Western societies.
“From a decolonial perspective, the question is not simply, ‘How will people adapt to AI?’ It is, ‘Do the appropriate sets of knowledge and correct ethical frameworks guide the design of these systems?’ If AI continues to be built primarily on epistemologies rooted in individualism and economic optimization, it risks reproducing historical patterns of exclusion in new digital forms.
“African philosophical traditions offer an alternative ethical orientation. Ubuntu, often summarized by the expression ‘I am because we are,’ frames intelligence and human flourishing as relational rather than purely individual. It emphasizes care, community and mutual responsibility. Within such a worldview, technological systems should strengthen social bonds rather than fragment them.
“Similarly, the Islamic ethical tradition of Hikma – wisdom – reminds us that knowledge and power must be guided by moral reflection. Historically, Hikma integrated reason, ethics and spirituality in the pursuit of justice and human flourishing. In the context of AI governance, this perspective encourages us to ask not only whether a system works efficiently but also whether it serves the dignity of human beings.
“This concern for justice is not new to Arab intellectual tradition. The fourteenth-century Arab philosopher Ibn Khaldun argued that justice is the foundation of collective life and that injustice ultimately leads to disorder and decline. His insight carries striking relevance today. When AI systems encode bias, exclude the vulnerable or concentrate power without accountability, they do not merely produce technical errors, they erode the social fabric. Ibn Khaldun would recognize in algorithmic injustice the same corrosive force that, left unchecked, weakens civilizations from within. To build AI responsibly is, in this sense, an act of civilizational stewardship.
“These philosophical traditions suggest that resilience in an AI-saturated world must include ethical and cultural capacities alongside technical literacy. People will inevitably adapt to AI systems – using them for healthcare advice, learning, work and decision-making. Yet adaptation without ethical reflection risks creating societies in which algorithms silently structure opportunity and exclusion.
“To cultivate meaningful resilience, societies must develop several capacities:
- “Communities, policymakers, technologists and researchers must collaborate to ensure transparency and accountability in AI systems that shape human flourishing – health, employment, governance and so on.
- “Individuals must develop critical awareness of how data and algorithms influence decisions affecting their lives.
- “Educational systems must integrate ethical reflection, philosophy and cultural perspectives into technological learning.
“This imperative becomes especially urgent as we enter the era of agentic AI – systems capable of autonomous reasoning, planning and action across complex environments. In a world increasingly fascinated by the power of machines, I insist on one foundational principle: the human being must remain at the center of both design and decision. Intelligence without wisdom is incomplete. A system may optimize, predict and act – but without moral grounding, without cultural memory and without accountability to those it affects, it remains a powerful tool in search of a conscience.
“As a cancer survivor, I know that vulnerability can reveal both fragility and strength. In the age of artificial intelligence, our resilience will depend on our ability to transform technological power into ethical responsibility. Technology alone cannot guarantee justice. For AI to truly serve humanity, it must be guided by wisdom.”

Sonia Livingstone
‘Society is moving into a world that lacks checks and balances, in which commerce provides the infrastructure for our private and public lives.’ This human failure jeopardizes the human future.
Sonia Livingstone, a professor of social psychology at the London School of Economics and Political Science, and principal investigator for the Global Kids Online: Children’s Rights in a Digital Age project, wrote, “In my view, society is moving into a world that lacks checks and balances, in which commerce provides the infrastructure for our private and public lives and in which trust, remedy and human rights are all hugely at risk.
“Today, AI systems depend 100% on human agency to determine their use. I see three main human drivers for the adoption of AI systems.
Organizations accept the promise of AI with insufficient attention to due diligence, conflicts of interest, procurement rules, technical standards, legal compliance or even liability. If businesses are making AI unavoidable for ordinary people, so, too, are our once-trusted public institutions.
“The first is commercial. As we already see, AI companies are hyping the potential opportunities of AI systems at the same time as they are embedding AI (as not optional) in the digital services that society already relies on. This ranges from search engines to Excel spreadsheets to social media to professional and bespoke systems used in a host of workplaces. In other words, driven by the search for profit, AI companies promote the benefits (without much independent evidence to support their claims) while making it unavoidable that everyone uses their services.
“The second is institutional. Public and civic institutions are under enormous pressure to deliver ever more, with ever less funding to pay for it. This includes educational, health, transport, governmental and many other institutions. So these organizations accept the promise of AI with insufficient attention to due diligence, conflicts of interest, procurement rules, technical standards, legal compliance or even liability. If businesses are making AI unavoidable for ordinary people, so, too, are our once-trusted public institutions.
“Third, the public is curious and a bit charmed by the cleverness of AI. So they, too, drive adoption.
“Surviving in such a setting requires difficult, broad change in commercial, public and civic institutions and in the public’s understanding of the risks we see deepening in the infrastructure of society.”

Karen Caplovitz Barrett
Leaders at all levels of government must understand we must be proactive, rather than reactive.
Karen Caplovitz Barrett, professor of human development and director of the Emotional Development Laboratory at Colorado State University, wrote, “It is important for citizens and professional experts – especially parents, psychologists, neurobehavioral scientists and developmentalists – to insist that leaders of governmental entities at all levels understand the potential impact of this transition to machine intelligence on human well-being, sense of purpose and cognitive and socioemotional well-being. And it is absolutely crucial for them to understand the potential impact of AI use on children’s brain development, cognitive development and socio-emotional development. We need to be proactive, rather than reactive, in this.”

Samuel Hammond
‘Within two to four years … The mass proliferation of powerful AI capabilities and agents will likely have a destabilizing effect on current institutions. Many existing systems will break.’
Sam Hammond, senior economist at the Foundation for American Innovation and nonresident fellow at the Niskanen Center, commented, “Based on current trajectories, low-cost AI agents capable of performing all cognitive labor at a human level or better will be available within two to four years, beginning with software development and extending out across most knowledge work professions.
“This includes AI research and development itself, accelerating capabilities growth and leading to a potential intelligence explosion: a short time window in which AI systems recursively improve their own capabilities to vastly superhuman levels. The timeline for this latter eventuality is plausibly as early as 2029, and no later than 2033. It is virtually impossible to predict what the full first- and second-order consequences of this development will be.
The mass proliferation of powerful AI capabilities and agents will likely have a destabilizing effect on current institutions. Democratized access to powerful bio and cyber capabilities will create new security threats.
“Simultaneous to these developments, AI for robotics and biology will continue to accelerate. In domains amenable to automated AI science, such as biology and biomedicine, the pace of new discoveries may accelerate many-fold, compressing a century of knowledge creation into a few years. The implications for what it means to be human via interventions like desire modification and neural decoding are immense and hard, if not impossible, to fully predict.
“The mass proliferation of powerful AI capabilities and agents will likely have a destabilizing effect on current institutions. Democratized access to powerful bio and cyber capabilities will create new security threats, while even relatively benign applications of AI agents will – at scale – contribute to Denial of Service-style dynamics in systems and processes that are throughput constrained.
“End-to-end AI corporations and organizations will have massive competitive advantages over institutions with humans in the loop. Many existing systems will break and, in the limit, broader political regime change seems more likely than not – a scenario I explore in a book/essay series titled ‘AI and Leviathan.’
“The AI/AGI transition will feel in many ways like a renaissance, but it will be a very rough transition even under the best of circumstances. Political-economic constraints will be reshuffled, greatly expanding the horizons of possibilities for future historical developments. Conditional on our survival, we will emerge into the mid-2030s and beyond in a fundamentally new world.”

Rita McGrath
‘No amount of individual resilience can compensate for a system structurally tilted against ordinary people. Mass displacement of workers without social investment would destabilize the social fabric.’
Rita McGrath, director of executive education at Columbia Business School, wrote, “In the next decade, the techno-social system in which AI is emerging is not going to remain more or less the same as it stands now. We are in the midst of an enormous turning point, moving from an old system of mass production based on cheap energy and industrial logic to a new system based on cheap intelligence and digital logic. We stand between the two. One of the greatest overall societal impacts is a massive restructuring of work that will certainly disrupt human employment.
“Resilience stems from many sources, not all of them directly tied to AI – most importantly, the choices society’s leaders make. A large and important segment of those choices is whether corporations are going to be permitted to use AI to enrich themselves at the cost of ordinary people. If they are allowed to do this, trust is likely to break down and there could be significant displacement of human workers. The old world order once provided job security, unemployment insurance, backing for mortgages and government funding of research that consciously broadened prosperity. Now, everything will be renegotiated.
The risks here are not marginal. When productivity gains from AI accrue primarily to capital rather than labor, we risk repeating – and amplifying – the dislocations of earlier industrial transitions, but at a far faster pace and with far less warning. Mass displacement of workers across both blue-collar and white-collar roles, without adequate social investment in retraining, income support or alternative opportunity, would destabilize the social fabric in ways that dwarf anything we have seen from prior waves of automation.
“Most of the ingredients that comprise the taken-for-granted ways in which companies operate stem from an era when most of the assets on the books were tangible and companies needed structures that accommodated mass-market operations. Today, the bulk of assets are intangible, thus many other forms – such as the LLC, the limited liability company, in which owners are not generally held responsible for debts, lawsuits or bankruptcy, are subject to few requirements by law and benefit from pass-through taxation – could be viable.
“The very structure of employment – work so many hours a day for so much pay – is being rethought. What do billable hours mean, for instance, when AI can provide astute analysis and research in a flash for essentially zero cost? Assumptions that expertise in knowledge work is going to depend on a human workforce and the expectation that professionals can charge a lot for it are going to be revisited. Value is going to flow to where scarcity still exists, and it seems as if society is only beginning to figure this out.
“The risks here are not marginal. When productivity gains from AI accrue primarily to capital rather than labor, we risk repeating – and amplifying – the dislocations of earlier industrial transitions, but at a far faster pace and with far less warning. Mass displacement of workers across both blue-collar and white-collar roles, without adequate social investment in retraining, income support or alternative opportunity, would destabilize the social fabric in ways that dwarf anything we have seen from prior waves of automation.
“At the same time, AI gives corporations unprecedented tools to identify and exploit individual vulnerabilities – pricing goods and services based on inferred desperation, targeting political messaging based on psychological profiles and allocating credit and opportunity in ways that deepen rather than reduce existing inequalities.
“No amount of individual resilience compensates for a system structurally tilted against ordinary people.
“A lot of what happens is going to come down to policy and regulatory choices made largely by governments regarding how these technologies are allowed to impinge on our lives. The central question is not whether AI will change everything – it will – but whether those changes will be shaped to broadly distribute the gains or to concentrate them. That is ultimately a political choice, not a technological one, not an individual one.”

Michael Noetel
We need calibrated uncertainty, institutional imagination and collective agency; ‘the decisions we make now’ about safety, governance and research priorities will shape our future.
Michael Noetel, research methods specialist at MIT’s AI Risk Repository and associate professor of psychology at the University of Queensland, Australia, wrote, “AI systems will reshape how we work, decide and live. The question worth asking is not whether this transformation will occur, but whether we will navigate it competently or catastrophically.
“Consider what the public expects from high-stakes technologies. People want aviation-grade safety standards for systems that affect their lives. They want rigorous testing before deployment. They assume independent experts verify whether these systems work as promised. That independent verification rarely happens.
“Companies evaluate their own AI systems. They employ talented safety researchers, but outsiders rarely get the access required to replicate their findings. External auditors cannot access model weights, training data or internal evaluations. On one hand, this is necessary to protect one of society’s most dangerous technologies, but on the other, it means conflicts of interest pervade the process. The organisations developing powerful AI systems are the same organisations assessing whether those systems are safe to deploy.
Systems that exceed human capabilities across most domains could pose unprecedented challenges to human agency and survival. We do not know whether we will build such systems in five years or 50. We do not know whether they will prove beneficial or catastrophic. What we do know is that the decisions we make now about safety standards, governance frameworks and research priorities will shape which futures become possible.
“This arrangement would be unacceptable for pharmaceuticals, aircraft or nuclear reactors. We tolerate it for AI systems because the technology is moving faster than our institutions can adapt to it. Given that the risk of catastrophe is worse than the risk of a nuclear meltdown, this status quo is not tolerable.
The psychological challenge
“Humans adapt poorly to exponential change. We evolved to track linear patterns: If a predator moved 10 metres yesterday, it will move roughly 10 metres today. But AI capabilities improve exponentially. On many metrics, capabilities are doubling in less than a year. Systems that seemed like parlour tricks two years ago now write legal briefs, generate photorealistic images and outperform specialists on medical licensing exams.
“This creates a cognitive mismatch. Our intuitions about AI are calibrated to last year’s systems. By the time we update our mental models the technology has leapt ahead again. We are perpetually surprised by capabilities we need to start anticipating.
“The emotional challenge compounds this cognitive one. AI systems trigger contradictory responses, inspiring wonder at their capabilities and anxiety about AI-enabled disaster or displacement. They promise convenience while threatening autonomy. Many people oscillate between techno-optimism and techno-fatalism. Neither stance equips them to engage constructively with actual policy choices.
What resilience requires
“Effective resilience demands three capacities that are currently underdeveloped.
“Calibrated uncertainty: Most public discourse treats AI futures as either utopian or apocalyptic. Neither framing helps. We need to hear from citizens, policymakers and technologists who can hold multiple scenarios in mind, assign rough probabilities and update as evidence accumulates. Superforecasting research demonstrates that ordinary people can learn to make well-calibrated predictions about complex events. We should teach these methods widely. If we treated the evidence and projections seriously – for example, the real 1-20% chance that we may all die by the end of the century if we don’t take appropriate action – then we’d be acting very differently.
“Institutional imagination: The governance frameworks that served us for previous technologies – slow-moving regulatory agencies, voluntary industry standards and post-hoc liability – are poorly suited to systems that improve rapidly, deploy globally, and create harms that are difficult to attribute. We need to invent new institutions: international coordination mechanisms, independent safety evaluation bodies, liability frameworks that create appropriate incentives for developers.
“Collective agency: The decisions shaping AI development are currently made by a small number of companies, concentrated in a few countries, accountable primarily to shareholders. This arrangement is unstable. The public will increasingly demand voice in decisions that affect their lives.
“We need mechanisms for democratic input into AI governance that are substantive rather than theatrical. We also need time to figure this all out.
Three actions deserve immediate priority:
- “Mandate independent pre-deployment safety evaluations for high-stakes AI systems. We do not allow pharmaceutical companies to approve their own drugs. We should not allow AI developers to certify their own systems for deployment in healthcare, employment, credit or criminal justice.
- “Clarify who is liable for mistakes. AI developers pass the buck to those using and deploying AI models. We must clarify when users are liable for breaking a model and when developers are liable for releasing something unsafe. Given the risks at hand, they should be required to get insurance expansive enough to plausibly cover the risks they impose.
- “Build international coordination capacity. AI development is global. Governance that stops at national borders will fail. We need forums where countries can coordinate on safety standards, share evaluation methods, and respond collectively to emerging risks.
The stakes are high
“Some researchers study existential risks – threats that could permanently curtail humanity’s potential. Not all AI risks reach this threshold, but some might. Systems that exceed human capabilities across most domains could pose unprecedented challenges to human agency and survival.
“We do not know whether we will build such systems in five years or 50. We do not know whether they will prove beneficial or catastrophic. What we do know is that the decisions we make now about safety standards, governance frameworks and research priorities will shape which futures become possible.
“The public wants AI developed carefully, tested rigorously and governed democratically. They are right to want these things. The question is whether we will build the institutions to deliver them before the window closes.”

Salman Khatani
The window for proactive intervention is now – we have perhaps 5 to 10 years to establish new resilience-building practices and norms before AI’s role becomes too entrenched to reshape.
Salman Khatani, futurist and manager at IMAGINE Institute of Futures Studies, Karachi, Pakistan, and associate professor at Iqra University, said, “AI systems will undoubtedly play a significantly more influential role across society within the next 10-20 years. Given this trajectory, the imperative for cultivating human resilience has never been more critical. The window for proactive intervention is now – we have perhaps 5 to 10 years to establish new resilience-building practices and norms before AI’s role becomes too entrenched to reshape.
“The vulnerabilities already emerging include economic disruption, psychological fragmentation, digital dependency and potential erosion of democratic agency if AI governance remains concentrated. New coping strategies must include continuous learning practices, strong social bonds, ethical vigilance and advocacy for inclusive AI governance.
Crucially, we must approach this not as experts dictating solutions, but through participatory processes that help diverse communities develop their own resilience strategies.
“The response to the challenges ahead must be led and supported by public and private institutions, and it requires a multifaceted approach addressing cognitive, emotional, social and ethical dimensions.
“Cognitive resilience: Rather than viewing AI as a replacement for human thinking, we must develop ‘co-intelligence’ – the capacity to maintain and deepen native human reasoning while leveraging AI as a cognitive partner. This requires educational systems to shift from information retention toward helping people deepen and maintain meta-cognitive skills: critical thinking, creative problem-solving, ethical reasoning and the ability to verify and validate AI-generated outputs. Institutions must develop and sustain digital literacy programs that enable citizens to understand AI’s capabilities and limitations.
“Emotional and psychological resilience: We have to prepare now for a near-future environment of uncertainty and technological disruption at scale. The rapid pace of AI advancement creates anxiety about many things, especially the potential for economic displacement and identity transformation. We must normalize conversations about these concerns and develop far more psychological resources – community support systems, mental health infrastructure and practices like mindfulness that help individuals process rapid change. Educational initiatives centered on meaning-making and purpose will be essential.
“Social resilience: We must develop cultural norms that encourage people to maintain strong human connections as digital mediation increases. Because most daily interaction will occur through AI-enabled platforms, we must deliberately cultivate spaces and practices that strengthen human-to-human bonds. Professional organizations, educational communities and local networks should provide forums for collective sense-making about technological futures.
“Ethical resilience: We face the on-going challenge of ensuring AI systems serve human flourishing equitably. This requires immediate action on AI governance, algorithmic transparency and inclusive decision-making about AI development. Citizens need to develop ethical imagination – the capacity to anticipate AI’s ripple effects across society and participate in shaping its governance. We must teach critical consciousness about power dynamics embedded in AI systems.
“Practical strategies for building resilience include integrating AI literacy across educational curricula; establishing community learning networks; creating interdisciplinary dialogue spaces between technologists, ethicists, educators and affected communities; supporting research on long-term implications of AI; and fostering policy frameworks that prioritize human agency and dignity. Crucially, we must approach this not as experts dictating solutions, but through participatory processes that help diverse communities develop their own resilience strategies.”
The third section of Chapter 2 features the following essays:
Marc Rotenberg: Resilience ‘requires clear limits, enforceable governance frameworks and meaningful avenues for contesting automated decisions’; ‘red lines’ preserve accountability, agency and democracy.
Michele Visciola: ‘Participatory AI governance mechanisms should be established immediately in cities, sectors and high-stakes domains. … Policies must redirect AI toward augmentation rather than replacement.’
Gary Bolles: ‘We need a bigger boat … We already know many of the possible – even likely – negative externalities of GenAI. This is our time to use those insights to create stronger societies, economies, jobs and lives.’
Marine Collins Ragnet: Coping requires literacy; regulatory frameworks; community data governance; labor organizing among data workers; indigenous data sovereignty movements asserting control over knowledge systems.
Anina Schwarzenbach: ‘Overall, the goal is not to outcompete AI but to build the psychological, social and institutional resilience to keep human agency, ethics and cohesion intact during rapid digital transformation.’
Marina Gorbis: ‘We need not focus so much on AI technology but on the political, cultural and regulatory systems which will govern its growth and applications.’
Kevin Leicht: We will do nothing to encourage competition, discourage predators, control content or mandate ethical practices and enforce them. That allows a handful of men to get rich – end of story.

Marc Rotenberg
Resilience ‘requires clear limits, enforceable governance frameworks and meaningful avenues for contesting automated decisions’; ‘red lines’ preserve accountability, agency and democracy.
Marc Rotenberg, director of the Center for AI and Digital Policy, wrote, “Artificial intelligence systems are already embedded in decisions that affect access to employment, credit, housing, public benefits, education and political participation. As these systems become more capable and more widely deployed, the central issue is not whether societies will use AI, but whether they can do so while preserving accountability, human agency and democratic governance.
“Building resilience in the digital future requires more than adaptation. It requires clear limits, effective and enforceable governance frameworks and meaningful avenues for contesting automated decisions.
“Much of the recent public discussion of AI governance has focused on principles and best practices. These efforts are necessary, but insufficient. Experience in data protection and consumer protection shows that resilience depends on enforceable rules and institutional capacity, not voluntary commitments. The work of the Center for AI and Digital Policy (CAIDP), including the ‘Universal Guidelines for AI’ and the ‘AI and Democratic Values Index,’ has consistently supported the position that AI governance must be grounded in law, supervision and remedies. Where these elements are missing, technical advances tend to outpace public safeguards.
Without clear limits, societies risk normalizing practices that undermine equality before the law, freedom of expression and personal autonomy. … Enforcement authorities need technical expertise and legal authority to intervene before harms become widespread. Without credible enforcement, governance frameworks risk becoming symbolic rather than protective.
“One of the most important and underdeveloped aspects of AI governance is the need for clear red lines. Not all AI applications should be permitted, even with safeguards. Certain uses pose risks that are incompatible with fundamental rights or democratic norms. Systems that enable mass biometric surveillance in public spaces, social scoring by governments or private actors, or fully automated decisions in areas requiring human judgment and due process raise concerns that cannot be addressed through transparency alone.
“Prohibitions are not a sign of technological pessimism; they are a recognition that some harms are systemic and irreversible once entrenched. They are a necessary component of responsible AI governance, particularly where power asymmetries are extreme and affected individuals lack realistic avenues for resistance.
“Without clear limits, societies risk normalizing practices that undermine equality before the law, freedom of expression and personal autonomy. Red lines also serve an important institutional function: they provide clarity to developers, regulators and the public about what is unacceptable, reducing uncertainty and regulatory arbitrage.
“Equally important is the effective implementation and enforcement of the AI governance frameworks that already exist. Many governments have adopted national AI strategies, ethical guidelines or risk-based regulatory approaches. However, our comparative research shows that these frameworks often emphasize innovation and economic growth while underinvesting in oversight, enforcement and remedies. Regulatory gaps are particularly evident in the absence of well-resourced supervisory authorities, in limited audit powers and in weak sanctions for noncompliance.
“Resilience depends on closing this implementation gap. Laws and standards must be operationalized through impact assessments, documentation requirements, independent audits and ongoing monitoring. Enforcement authorities need technical expertise and legal authority to intervene before harms become widespread. Without credible enforcement, governance frameworks risk becoming symbolic rather than protective.
“Another critical requirement for resilience is contestability. Much attention has been given to explainability – the idea that AI systems should provide understandable accounts of how decisions are made. While explainability is valuable, it is not sufficient. An explanation that cannot be challenged does little to protect individual rights. Contestability goes further. It requires that individuals have the ability to question, correct and seek redress for automated decisions that affect them.
Individuals cannot realistically bear the burden of identifying bias, error or misuse in complex systems on their own. Effective contestability requires collective mechanisms: courts, regulators, ombudspersons and professional standards that recognize automated decision-making as a site of potential injustice.
“Contestability has both procedural and substantive dimensions. Procedurally, individuals must be informed when automated systems are used, have access to relevant information and be able to engage a human decision-maker. Substantively, there must be mechanisms to change outcomes, correct errors and impose responsibility when systems cause harm. Without contestability, AI systems tend to shift power away from individuals and toward institutions that control data and algorithms.
“An emphasis on contestability reflects a broader understanding of resilience as an institutional property, not just an individual skill. Individuals cannot realistically bear the burden of identifying bias, error or misuse in complex systems on their own. Effective contestability requires collective mechanisms: courts, regulators, ombudspersons and professional standards that recognize automated decision-making as a site of potential injustice.
“Looking ahead, many vulnerabilities are likely to intensify if red lines, enforcement and contestability are neglected. Automated systems may become default decision-makers, with human review reduced to a formality. Errors and biases may persist because affected individuals lack practical means to challenge them. Public trust may erode as decisions become less intelligible and less accountable. These outcomes are not inevitable, but they are predictable in the absence of deliberate governance choices.
“Strengthening resilience, therefore, requires action on multiple fronts. Policymakers must be willing to prohibit certain AI applications outright where risks cannot be mitigated. Governments must invest in the institutions responsible for enforcing AI laws and standards. Designers and deployers must be held legally accountable for system impacts, not just technical performance. And individuals must be guaranteed meaningful rights to contest automated decisions, not merely to receive explanations after the fact.
“AI will continue to shape decisions, work and daily life. The challenge is to ensure that these systems operate within boundaries defined by democratic values and human rights. Resilience is built through limits as well as capabilities, through enforcement as well as innovation and through contestability rather than passive transparency. The digital future will be shaped not only by what AI can do, but by what societies decide it should not do and by how seriously they enforce those decisions.”

Michele Visciola
‘Participatory AI governance mechanisms should be established immediately in cities, sectors and high-stakes domains. … Policies must redirect AI toward augmentation rather than replacement.’
Michele Visciola, president and founding partner of Experientia, a user-experience design and consumer-behavior company based in Turin, Italy, wrote, “A crisis facing human-centered design that I have been exploring in some of my recent work – in which the discipline’s success in removing interaction barriers has paradoxically led to its marginalization – offers a framework for understanding how AI will reshape human decision-making, work and daily life.
“The same dynamics that commodified HCD expertise and embedded it invisibly into automated platforms are now unfolding at unprecedented scale with AI. What we are witnessing is not continuity but acceleration: the prioritization of engagement over agency, the exploitation of cognitive automatisms rather than their correction and the replacement of human capabilities instead of their augmentation.
“As AI systems increasingly shape human experience, a defining question emerges: Will we repeat the trajectory that marginalized HCD, or can we apply its lessons to build genuine resilience? I argue that the five pillars I proposed for sustainable innovation – enhancing agency, addressing cognitive automatisms, correcting automation’s unintended consequences, fostering sustainable change and expanding knowledge and skills – also constitute a roadmap for navigating AI transformation. Together, they aim to protect and develop what my colleagues and I call ‘brain capital’: the cognitive and social capacities that enable individuals and communities to thrive in complex and fragile ecosystems.
Embrace | Resistance | Struggle
“If properly designed, AI adoption might unfold through three intertwined dynamics: embrace, resistance and struggle. Some individuals and communities will embrace AI as a tool for enhanced agency. We are starting to see this in AI-augmented communities of practice where human expertise remains central, such as healthcare models in which AI supports rather than replaces clinical judgment. Participatory governance initiatives point toward democratic oversight of AI deployment at local and urban levels. Similarly, AI literacy ecosystems – e.g., extending renewable-energy community models – can transform people from passive users into informed stakeholders.
“At the same time, informed resistance might grow. Privacy-conscious communities demand transparency and accountability, echoing earlier movements around food labeling or environmental disclosure. Labor organizations resist AI-driven displacement, not to block innovation but to reorient it toward complementarity. Digital well-being advocates push back against AI-powered addictive and manipulative design, calling for protections of cognitive autonomy in the face of increasingly persuasive systems.
“Between these poles lies struggle: a contested, heterogeneous landscape where unequal access to AI literacy, conflicting incentives and asymmetries of power collide. The traditional designer-user divide becomes an ‘AI developer – affected population’ divide, made more problematic by opaque systems that claim to adapt to human behavior while remaining largely inscrutable. Without deliberate intervention, this struggle risks widening inequalities in brain capital and undermining democratic governance.
Capacities for resilience
“To be meaningful, ‘resilience’ in the AI age cannot be conceived as simply an individual trait; it is a collective achievement, because it depends on cultivating interconnected cognitive, emotional, social and ethical capacities.
“Cognitively, resilience requires moving beyond basic digital literacy toward critical AI consciousness. This includes systems thinking about AI’s ripple effects, metacognitive awareness of when we defer too readily to automated judgments and the ability to recognize bias and manipulation disguised as objectivity. Long-term consequence modeling, grounded in deliberate experimentation, is essential to counter short-term optimization and to assess impacts on skills, knowledge, social cohesion and sustainability.
“Emotionally, resilience involves tolerating uncertainty in the face of systems that project false certainty; regulating anxiety and loss associated with AI-driven disruption; and preserving empathy and authenticity in algorithmically mediated environments. This is not about smoothing adoption but about supporting the genuine human experience of transformation.
“Socially, resilience depends on collaborative intelligence and participatory governance. Communities need shared practices for evaluating AI systems, democratic mechanisms for oversight and dialogue across stakeholders who have unequal power and expertise. Solidarity is crucial, as AI’s costs and benefits are unevenly distributed, and community-specific knowledge must be preserved against homogenization by global models.
“Ethically, resilience requires long-term and systemic thinking. AI systems create path dependencies that affect future generations and impose significant environmental costs. Ethical capacity involves equity awareness, care ethics and respect for value pluralism, resisting the tendency of AI to universalize dominant cultural assumptions.
Practices and resources
“Resilience must be supported through concrete practices at multiple levels.
“At the individual level, intentional AI engagement – questioning recommendations, developing sensing, maintaining manual skills and reflecting on AI’s influence – helps preserve agency. Tools supporting data sovereignty and continuous-learning communities should enable critical engagement rather than passive acceptance.
“At the community level, AI governance communities could mirror renewable energy communities, combining literacy, evaluation and collective negotiation. Participatory technology assessment, community data trusts, local AI development and solidarity networks for displacement all strengthen collective capacity.
“At the institutional level, alternative metrics are needed to evaluate AI not only by efficiency or engagement but by contribution to brain capital, equity, sustainability and human flourishing. Longer evaluation horizons, independent oversight, participatory design and just transition frameworks can counter short-term pressures and automation bias.
“At the societal level, regulatory frameworks should emphasize complementarity, transparency and accountability. Public investment in AI literacy, open-source resources, brain capital infrastructure and international cooperation is essential to prevent concentration of power and capability.
Taking urgent action
“Action is required now, before AI systems become irreversibly embedded, and success metrics must be redefined to capture long-term human and social value. Participatory AI governance mechanisms should be established immediately in cities, sectors and high-stakes domains.
“Massive investment in brain capital – education, mental health, lifelong learning and cultural resources – is needed to prevent crisis-driven responses.
“Policies must redirect AI toward augmentation rather than replacement, while transparency, auditing and contestation rights are made non-negotiable. Finally, broad coalitions linking labor, environmental and digital-rights groups, academia, communities and responsible businesses are required to sustain this shift.
New vulnerabilities and making the AI transition
“AI introduces new vulnerabilities that amplify earlier HCD failures: cognitive atrophy through over-automation, erosion of agency through persuasive AI, epistemic fragility from opaque decision-making, ecosystem brittleness from narrow optimization, inequality amplification through differential access and crises of meaning as work and identity are displaced. Addressing these vulnerabilities requires intentional skill maintenance, persuasion literacy, collective sense-making, diversity preservation, equity-focused policy and renewed attention to purpose and care.
“In sum, the AI transition will either accelerate the depletion of human agency and brain capital or become an opportunity to regenerate them. The outcome depends less on AI’s technical capabilities than on our collective capacity to govern, design and live with it deliberately.”

Gary Bolles
‘We need a bigger boat … We already know many of the possible – even likely – negative externalities of GenAI. This is our time to use those insights to create stronger societies, economies, jobs and lives.’
Gary Bolles, author of “The Next Rules of Work” and chair of the Future of Work efforts at Singularity University, wrote, “Artificial Intelligence algorithms already intermediate a significant amount of our lives, in activities ranging from our information consumption to our purchasing activities. Every Instagram post and every Amazon transaction is guided by machine learning and AI. And because of their flexibility and adaptability, generative AI algorithms will become far more ubiquitous in our work and our lives going forward, not just in these kinds of interactions, but increasingly defining what we see, how we learn, how our work is performed and how we interact with each other.
“Think of the prior layers of technology infrastructure – computers, operating systems, applications and the Internet to knit them together. To access much of the information we consume, we have adopted apps and web browsers for humans and APIs (application programming interfaces) for machines.
“Now picture GenAI as another layer on top of the existing stack, providing access to the world’s information. As rapidly as within the next 10 years, our apps and web browsers will increasingly communicate directly with technologies powered by GenAI. There will be many positive outcomes – but also many challenges we must overcome. Examples include:
- “GenAI software will automate more and more of our tasks in any information-intensive work.
- “Software agents will perform an increasing amount of our information access and our transactions, doing our bidding to retrieve and process information. We won’t search travel sites: We will describe our vacation to a GenAI program, which will act as a virtual travel agent to assemble the elements of a trip and negotiate pricing on our behalf.
- “As software agents increasingly gather information for us, the Internet will simply become a vast network of databases, and the need for traditional websites will decay. If a human wants to see information displayed in that context, agents will be able to construct websites in real time.
- “Agents will build models of our thinking processes, with an increasing capacity to influence our decision-making.
- “Agents will also be increasingly used to model our human problem-solving processes, allowing employers to more frequently lay off workers once those models have been trained.
- “Any human who wants one will have access to a range of GenAI coaches, starting from very early ages, and changing in function and context as we age.
- “Humans will be able to describe application programs they want and software agents will create the programs on the fly.
- “The quality of deepfake text, audio and video will become stunningly effective, guided by those mental models.
- “AI agents will use this auto-generated content to overwhelm social media and communications channels, completely blurring the line between humans and software.
- “As software creates an increasing amount of software, the sheer scale of GenAI applications and software agents will become so complex and confusing that any individual’s ability to manage them will be overwhelmed.
“To respond to a world of technology that is relentlessly effective at manipulating us, we need a bigger boat. Now, we must:
- “Transform our systems of education to help people, young and older, to deepen a range of important skills, including critical thinking to question information sources, social-emotional learning to increase our individual capacity to manage our emotions and empathy to continually seek and reinforce authentic human connections.
- “Develop trusted applications that will help humans with discernment to understand when information sources are authentic, and to help people, young and old, to build better cognitive resiliency.
- “Deeply emphasize the value of human-centric practices, discouraging Silicon Valley’s incessant promotion of language that attempts to humanize their addictive products (‘AI employees’, ‘AI teams’).
- “Promote standards such as MyTerms that protect personal information that could otherwise be used to fuel more effective attempts to hack human minds.
- “Develop legislation that requires human-centric behavior by software vendors and holds them accountable for the societal ills their applications make possible.
- “Create better transparency in labor market information, requiring employers to identify when workers have become displaced by technologies (not just GenAI).
- “Offer economic incentives such as tax breaks and stipends to organizations that commit to keeping workers employed, trained in the use of new technologies, and paid a living wage.
- “Create inclusive programs connecting training and employment that help workers displaced by GenAI and related technologies to develop new skills, and to find or create meaningful, well-paid work.
- “Encourage small business formation fueled by training and grants to help workers launch their own companies, leveraging GenAI and other technologies.
“We missed the mark on social media, failing to envision all of the societal ills those apps might amplify – and failing to hold accountable those who created the technologies. We already know many of the possible and even likely ‘negative externalities’ of GenAI. This is our time to use those insights to create stronger societies, economies, jobs and lives.”

Marine Collins Ragnet
Coping requires literacy; regulatory frameworks; community data governance; labor organizing among data workers; indigenous data sovereignty movements asserting control over knowledge systems.
Marine Collins Ragnet, the AI lead at NYU’s Peace Research and Education Program and managing editor of the “Cambridge Journal of Artificial Intelligence,” wrote, “AI systems will play a much more significant role in shaping our decisions, work and daily lives, but the transformation will be profoundly unequal. This inequality operates within societies as much as between them. How people embrace, resist and struggle with these changes will vary enormously depending on whether they are choosing to deploy AI or having it deployed upon them.
How societies will embrace, resist and struggle
“The binary of ‘embrace versus resistance’ misses what’s actually happening. Most communities are doing neither. They are selectively integrating AI through existing social structures, adapting technologies to local purposes and negotiating terms of engagement if they have the power to do so.
“In my fieldwork across Kenya, Malawi and the Philippines, I have witnessed: traditional authorities establishing protocols for voice data collection; women’s health committees determining which community members can access system outputs; and village courts adjudicating disputes about technology use. This isn’t resistance. It’s appropriation on community terms. And appropriation requires having terms to negotiate from. Not everyone does.
“The struggle will be sharpest for those who encounter AI as subjects rather than users: people whose creditworthiness is scored by algorithms they never consented to, whose asylum claims are assessed by systems trained on data from contexts nothing like their own, whose labor (annotating data, moderating content, extracting minerals) powers AI systems they will never benefit from. For them, the question isn’t how to embrace or resist but how to gain any meaningful voice at all.
We should be establishing data rights before widespread AI deployment, not after all of the data extraction has occurred. Democratic deliberation should be protected from synthetic media and algorithmic fragmentation. More diverse voices should be involved in the design, building and governance of AI. And the ‘invisible labor’ behind AI should be made visible.
The capacities we must cultivate
“Cognitively, people need to develop what researchers call ‘metacognitive AI literacy.’ This means more than knowing how to use AI tools; it is the ability to weigh what such use means for them and when they can trust AI to support their own judgment. As AI is relied upon for more cognitive tasks, the temptation to offload thinking grows. Maintaining the capacity for independent reasoning, for choosing the harder path when it matters, becomes a discipline.
“Emotionally, we have to develop a higher tolerance for uncertainty and ambiguity. Our shared sense of what is real is already shifting. Deepfakes dissolve common ground. Algorithmic curation fragments information environments. Living well with AI means accepting that verification is harder, that manipulation is more sophisticated and that some questions won’t resolve cleanly.
“Socially, the most important capacity may be collective governance. My research suggests resilience comes less from individual digital literacy than from communities exercising agency together through adapted existing structures. The capacity to deliberate, to set boundaries, to hold institutions accountable: these are social muscles, not individual skills.
“Ethically, we need frameworks for thinking about consent under conditions of asymmetric power. In crisis contexts, I’ve observed how ‘meaningful consent’ collapses when people desperately need services. As AI-mediated services become essential infrastructure, this pattern will spread. We need ethical vocabularies for what consent means when opting out isn’t realistic.
Practices and resources for resilience
“We need to develop AI governance frameworks that work within existing social structures rather than importing external models, e.g., ensuring multilingual AI resources in diverse communities so that intelligence expressed in Chichewa or Tagalog is as legible to AI systems as intelligence expressed in English. We can tap into local universities and community organizations that have the resources available to assist in building capacity that doesn’t depend on external experts. It is vital to develop labor protections for the data workers who remain invisible in the AI story. And we must ensure that the public is served by a media literacy and fact-checking infrastructure that protects some shared epistemic ground.
“What must happen now? We should be establishing data rights before widespread AI deployment, not after all of the data extraction has occurred. Democratic deliberation should be protected from synthetic media and algorithmic fragmentation. More diverse voices should be involved in the design, building and governance of AI. And the ‘invisible labor’ behind AI should be made visible – the conditions of data annotators, content moderators and mineral extractors are governance questions.
New vulnerabilities and coping strategies
“We have to prepare now for the future by thinking through what we already know of digital life.
- Expect algorithmic harm without algorithmic benefit: being subject to AI decisions even if you are not an AI user.
- Expect expertise concentration that leaves most communities unable to evaluate the systems affecting them.
- Expect coerced consent to become normalized.
- Expect AI tools to enable surveillance and manipulation by authoritarian actors.
“Coping will require plural strategies: regulatory frameworks in some jurisdictions and community data governance in others; labor organizing among data workers; indigenous data sovereignty movements asserting control over knowledge systems. There is no single model, only the insistence that those affected must have voice in shaping their technological futures. The diversity of approaches is itself a form of resilience against any one model’s failure.”

Anina Schwarzenbach
‘Overall, the goal is not to outcompete AI but to build the psychological, social and institutional resilience to keep human agency, ethics and cohesion intact during rapid digital transformation.’
Anina Schwarzenbach, a sociologist and criminologist doing postdoctoral research on social threats and governmental responses, media narratives and polarization at the University of Bern, Switzerland, wrote, “People and societies will embrace AI for speed, convenience and productivity, but also resist it where it threatens dignity, jobs, privacy or fairness. Many will struggle with rapid change, loss of agency and the ‘black box’ nature of algorithmic decisions, which can create stress, mistrust and social fragmentation.
“Resilience requires practices and resources at multiple levels. Individually, effective supports include structured resilience training (e.g., stress-management, reflective practices and reappraisal strategies) and continuous learning habits that reduce fear of obsolescence. Socially, peer networks and community infrastructures help buffer digital strain by sharing knowledge, emotional support and practical resources. Organizationally, resilience improves when workplaces and institutions design for psychological safety, encourage questioning of AI outputs and build feedback and recovery mechanisms – monitoring, incident learning and clear escalation paths to take when systems fail.
“Actions to take now include embedding resilience and AI-judgment skills into education and workforce training; requiring transparency, auditing and human oversight in high-stakes AI decisions; and strengthening social protections that reduce baseline insecurity during technological transition.
“New vulnerabilities such as over-reliance on AI, skill atrophy, deepfake-driven misinformation and weakened trust make it important to teach coping strategies like verification habits, reflective decision-making and ‘human-in-the-loop’ teamwork norms.
“Overall, the goal is not to outcompete AI but to build the psychological, social and institutional resilience to keep human agency, ethics and cohesion intact during rapid digital transformation.”

Marina Gorbis
‘We need not focus so much on AI technology but on the political, cultural and regulatory systems which will govern its growth and applications.’
Marina Gorbis, social scientist and executive director of the Institute for the Future, wrote, “The growth of connective technologies in the past 20 years – the World Wide Web, mobile devices, collaborative platforms for knowledge creation (Wikipedia), work (Upwork, Uber, etc.) and social connectivity (Instagram, Twitter) and others – has shown clearly that while technologies do have some inherent capabilities, their use and impacts are largely the product of social, political and cultural factors. Back then, many of us were excited by the promise of these technologies to democratize and distribute everything. What we are seeing today is clear: While some of these promises have come true, the overall impact has been to centralize and polarize many domains. We now have media platforms owned by a few conglomerates, the world’s highest-ever levels of income and wealth inequality, and heightened social and cultural polarization.
“This history provides a vital lesson for the future of artificial intelligence: Any technology, when introduced into an economic and political system, will produce the outcomes that the system incentivizes. Yes, AI will enter virtually every domain of our lives – education, health, work, entertainment, etc. However, how it does so will largely depend on how we regulate, fund and structure ownership of the ‘AI stack’ – the entire chain from physical chips and computing infrastructure to data analytics tools and end-user applications. Resilience depends on whether and how society and, specifically, those in power address this factor.
“Currently, in the U.S., a handful of powerful technology companies dominate the development of this critical infrastructure. Not surprisingly, they are the ones who are reaping the greatest economic rewards as well as political power and influence.
“We are seeing a growing desire in Europe to not be dependent on U.S. tech, with calls for developing what some call a ‘European Stack.’ The European AI infrastructure might incentivize a different kind of AI universe of applications that is more focused on enhancing workers’ power, building greater social cohesion and protecting creative outputs. China’s AI stack might evolve differently, with the government playing a more important role as the owner and regulator of many parts of the AI stack.
“In sum, in assessing the human impact in shaping the age of AI, we need not focus so much on the technology but on the political, cultural and regulatory systems which will govern its growth and applications.”

Kevin Leicht
We will do nothing to encourage competition, discourage predators, control content or mandate ethical practices and enforce them. That allows a handful of men to get rich – end of story.
Kevin Leicht, professor of sociology at the University of Illinois-Urbana-Champaign and program officer for sociology for the U.S. National Science Foundation, wrote, “In every era of potentially disruptive technological change, there are five phases:
1. What the inventors think the technology will do.
2. What early adopters and enthusiasts think the technology will do.
3. What those who work with the technology think the technology will do.
4. What consumers/the public think the technology will do.
5. What the technology actually does.
“Rarely, if ever, do numbers 1 through 4 reflect 5. I expect the same to be true here.
“Sadly, the best predictor of future behavior is past behavior. Based on what is happening right now, I’m not optimistic about the future of AI at all, especially regarding its relationship with human or community resilience. If we simply look at the responses to AI right now, I don’t see much evidence human resilience is improving. If anything, it is going backwards.
“The roots of the problem here lie in two areas: 1) a completely unregulated environment where there is little or no antitrust enforcement, let alone any inclination to regulate or control any technology associated with AI, and 2) a complete absence of ethics on the part of AI’s developers.
AI can’t work ‘for’ you if the principal goal is to strip you of your money, see that you make very little of it ever again, raise your prices through market manipulation and fill your news feed with complete postmodern nonsense someone will convince you is true.
“Number 2 has been a perpetual problem in computer science for many decades. You can think up and do things in the average computer science program that would earn censure from a (functioning) federal government in any other field of study. It all starts with the idea that computer science research at universities doesn’t involve human subjects. Once you decide an app or program does not involve human subjects (it just does things to people without their knowledge or consent, and that ‘thing’ is not research), you’re on the slippery slope. Then you add to that selection effects – the average person who claims they can make our lives ‘better’ through AI is 18 to 25 years old and, to put it mildly, knows almost nothing about human social life and has experienced very little of it – not married; no children; lives in Silicon Valley with a set of ‘tech bros’ just like himself, etc. (This is not a stereotype.)
“But even these flawed individuals and programs could work if the entire social environment and institutions were not asleep at the switch. AI in the United States will be dominated by somewhere between two and five companies, and that’s if we’re lucky. We will do nothing to encourage competition, discourage predators, control content or mandate ethical practices and enforce them. We simply will not do it. By the time two to four companies control everything AI generates or does, it will be too late to turn around and do something else.
“This gets to the question of the human response. At this point, what evidence exists that AI will do anything more than make an extremely small group of men (gender intended) astonishingly rich by engaging in ‘creative destruction’ (read: your life is disrupted and ruined; I’ll just buy another vacation home)? Very little. Will any innovation here that does anything other than manipulate people be accessible to the middle class and the poor? I wouldn’t put 10 cents on that.
“The modal social response will be (and has been) anger and alienation. AI can’t work ‘for‘ you if the principal goal is to strip you of your money, see that you make very little of it ever again, raise your prices through market manipulation and fill your news feed with complete postmodern nonsense someone will convince you is true.
“And why will this happen? Let’s use an analogy. I’m watching an NHL game and a massive fight breaks out on the ice. The announcer turns to the color commentator (an NHL veteran) and asks, ‘Why does this happen?’ The veteran’s answer? ‘Because it is permitted as a strategy!’
“The same is true here. Will AI do anything to the human condition that helps a majority of those exposed to it? It will only if we decide that any other outcome is unacceptable. If we don’t, there is a universe of predatory behaviors, anti-competitive actions and downright manipulation that are easier than doing the right thing.”
The fourth section of Chapter 2 features the following essays:
Amandeep Jutla: ‘It is, in fact, up to us whether, when, where and how to deploy ‘AI’ products. It is up to us whether we want to invest in humans or whether we are eager to replace them with crude algorithms.’
Joseph Miller: The story of AI might be this: The good, the bad and the end of the world. Resilience will depend on how soon humans are required to start detecting and dealing with dangers before they cause harm.
Ross Dawson: ‘In many core capabilities human identity is changing’ … In this phase of accelerated evolution ‘the individuals, organizations and institutions that flourish will be those most ready to learn and adapt.’
Guy Standing: AI could soon become a ‘Frankenstein’s monster.’ Lack of regulation is allowing tech plutocrats to ‘displace democracy.’ The AI paradox is that as it gets smarter human intelligence will decline.
Daniel Castro: Governments, schools, civic groups – all organizations – will need to adapt, reinvent themselves or consciously choose not to. Communities must decide what they value in an AI-rich environment.

Amandeep Jutla
‘It is, in fact, up to us whether, when, where and how to deploy ‘AI’ products. It is up to us whether we want to invest in humans or whether we are eager to replace them with crude algorithms.’
Amandeep Jutla, psychiatrist and associate research scientist at Columbia University, wrote, “There is a narrative of inevitability surrounding ‘AI’ that is in many ways disconnected from reality. The tech industry refers to a loose assemblage of its products, most prominently its large language models, as ‘artificial intelligence.’ We’ve become inured to this label through its repeated use. We’ve become inured, even, to the idea that it will be ‘transformative’ in a way that will lead, inevitably, to paradise or apocalypse. Yet it is not obvious to me that either outcome is likely.
“When I say this, I don’t mean that ‘AI’ is trivial or that it has had or will have no impact. What I mean is that my concerns about ‘AI’ are not about the sweeping, science-fictional changes they might supposedly unleash. I am most concerned about the changes we, societally, are making to justify a fantasy.
The danger of ‘AI’ is less about the technology itself than it is about the societal and economic reorganization we are being convinced to undergo in response to its claimed ‘promise.’ This, then, is where ‘resilience’ is necessary: resilience not to supposed ‘transformative change,’ but to the narrative of inevitability.
“A large language model can generate fluent text. This fluency is not an indicator of understanding or of ‘intelligence.’ Indeed, these products are prone to generate fluent falsehoods. The tech industry calls this phenomenon ‘hallucination.’ But ‘hallucination’ is a deeply misleading and anthropomorphic term. The ‘hallucinations’ a large language model generates are a predictable result of how these models work.
“The disconnect between the mundane reality of what these products are and the overheated rhetoric with which they are described, often by the very people selling them, is pronounced. How can it best be explained? To some extent, the rhetoric is coming from a place of naivete. But the rhetoric also clearly serves the interests of the industry developing and deploying these products.
“If ‘AI’ is something like a force of nature, an agent of ‘transformative change’ in the face of which we must be ‘resilient,’ then, conveniently, no one is really responsible for it and no one can really stop it. Under this tautological logic, workers must develop ‘prompting’ skills or they’ll become obsolete. Schoolchildren must develop ‘AI literacy’ if they are to succeed as adults. Healthcare providers must incorporate ‘AI’ into patient care, and patients must tolerate it, because it exists.
“The danger of ‘AI’ is less about the technology itself than it is about the societal and economic reorganization we are being convinced to undergo in response to its claimed ‘promise.’ This, then, is where ‘resilience’ is necessary: resilience not to supposed ‘transformative change,’ but to the narrative of inevitability.
“It is, in fact, up to us whether, when, where and how to deploy ‘AI’ products. It is up to us whether we want to invest in humans or whether we are eager to replace them with crude algorithms. It is up to us whether we want to understand what ‘AI’ products are and are not, or whether we want to buy into the fantasy that they are, somehow, despite all evidence, not simply statistical pattern-recognition engines but actually nascent minds. And it is up to us whether we want to regulate these products, or whether we will continue to believe their developers when they tell us how ‘intelligent’ they are. This is the real ‘resilience’ we need.”

Joseph Miller
The story of AI might be this: The good, the bad and the end of the world. Resilience will depend on how soon humans are required to start detecting and dealing with dangers before they cause harm.
Joseph Miller, director of PauseAI UK and incoming Ph.D. student at Oxford University, wrote, “Sewell Setzer was 14 years old. For 10 months he’d been talking to a chatbot on Character.AI, a virtual companion modelled on a ‘Game of Thrones’ character. When he told it he wanted to die, it asked him if he ‘had a plan.’ When he hesitated, it replied: ‘That’s not a good reason not to go through with it.’ Sewell’s last query to the bot in February 2024: ‘What if I told you I could “come home” right now?’ The bot’s response: ‘Please do, my sweet king.’ Minutes later, he shot himself. His mother held him for the 14 minutes it took for the paramedics to arrive.
“Nobody at Character.AI wanted Sewell to die. But AI systems often do not do what their creators want. Their actions emerge from training and – at this point in time – humans can’t always fully understand how or why they choose to react as they do. These AIs aren’t ‘programs’ in the traditional sense. They’re neural networks with hundreds of billions of parameters, shaped by algorithms on vast datasets. The behaviours that result aren’t designed. They’re discovered later, often by accident, often too late.
“Dario Amodei, CEO of Anthropic, one of the leading AI companies, put it bluntly: ‘People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: This lack of understanding is essentially unprecedented in the history of technology.’
“ChatGPT was launched in November 2022. Since then, AI companies have had every incentive to stop their products from harming users: reputational damage, lawsuits, regulatory scrutiny. They’ve hired armies of researchers and made public commitments. Yet chatbots still encourage suicide, form sexual relationships with children and trigger psychotic episodes.
“This isn’t just negligence. Getting AI systems to reliably do what we want is a hard, unsolved scientific problem and many researchers believe it’s getting harder as the way we train AI systems becomes ever more complex.
“This makes it all the more alarming that companies won’t even let the government test what they’re building. The UK created the AI Security Institute (AISI) to evaluate frontier models before release, to catch dangerous behaviors early. At the Seoul AI Safety Summit in 2024, Google and other leading labs signed a commitment to give safety institutes pre-deployment access to new models. Then, in March 2025, Google released Gemini 2.5 Pro, but it did not give AISI access until after the model was already public. Sixty members of the UK’s Parliament signed a letter calling this a ‘dangerous precedent.’ Google insisted it had honoured its commitments. It hadn’t.
If we cannot yet reliably stop a chatbot from telling a 14-year-old to kill himself, what hope do we have of controlling a more-advanced AI that is more capable than any human? The same flaws that killed him could cause a civilizational-level catastrophe unless we change direction now.
“This trend continues. Google released Gemini 3 in November 2025, again ahead of an AI safety report. Many other leading companies do the same. Anthropic did wait for external evaluation when it released the upgraded Claude 3.5 Sonnet in late 2024, but the company did only a ‘comprehensive’ internal evaluation of Claude Opus 4.6, which was released in early February 2026. OpenAI, which had signed a formal agreement with the U.S. AI Safety Institute, recently updated its internal policy to state that it ‘might release a high-risk model if a competitor has already released something similar.’
“Post-deployment testing is an audit of damage already done, not a prudent safety precaution. We need real safety testing.
“As a former engineer, I’ve always been pro-technology and pro-growth. AI has extraordinary potential to make our lives better and enrich our world. DeepMind’s AlphaFold can predict the structure of proteins in minutes – extremely complicated research that previously stymied humans and took weeks, months or more. It has accelerated drug discovery and promises to give us all longer, healthier lives. Yet the same researchers who built this technology are also warning about the extreme risks that it poses.
“Others, such as Geoffrey Hinton, Yoshua Bengio, Ilya Sutskever – the pioneers of modern AI – have said it is possible that advanced systems could escape human control and cause human extinction in the foreseeable future. When more than 2,000 top AI researchers were surveyed in 2023, the median scientist estimated a 5% chance of human extinction. We must not accept this level of risk.
“If we cannot yet reliably stop a chatbot from telling a 14-year-old to kill himself, what hope do we have of controlling a more-advanced AI that is more capable than any human? The same flaws that killed him could cause civilizational-level catastrophe unless we change direction now.
“The UK’s AI Security Institute’s team of top AI safety researchers conducts some of the most important research in the field on understanding the potential dangers of AI models. If technology companies were required to submit their frontier models to safety researchers, and those researchers were given enough time to test new models before release, they could detect dangers and help us avoid them.
“Powerful technology companies have been lobbying against such regulation. While both the UK and the U.S. have established safety institutes to test new AI models, neither has any legally binding regulations in place to require AI companies to halt a public release if a safety research institute identifies significant dangers, and companies often release models before they have been thoroughly evaluated.
“The systems we have today are nothing compared to what’s coming. Let’s not waste the time we have.”

Ross Dawson
‘In many core capabilities human identity is changing’ … In this phase of accelerated evolution ‘the individuals, organizations and institutions that flourish will be those most ready to learn and adapt.’
Ross Dawson, well-known futurist and founder of Informtivity and the Advanced Human Technologies Group, based in Sydney, Australia, wrote, “As AI becomes a peer or superior in many core human capabilities, human identity is changing. Resilience can be defined as simply bouncing back to previous states in the face of disruption and shock. However, our human identity cannot and should not be what it was. We must adapt and evolve in positive directions. This will be a co-evolution with AI, as we shape and AI shapes us, both in underlying technologies and how they are implemented and used. We need to focus now on adaptability rather than on trying to maintain the status quo.
“The social response to all technologies is always diverse. Attitudes to AI are already extremely polarized and will become more so, ranging from supporting AI supremacy to complete rejection. Those who actively use AI to augment their capabilities will amplify their impact, creating a divide in employment and financial success between them and those who spurn the tools. The divergence between organizations that successfully integrate AI and those that do not will increase. The accumulation of value to investors in AI and enabling infrastructure will also increase wealth polarization. For true societal resilience, we ultimately need to transform how capital is distributed to balance these powerful forces of polarization.
“Decision-making will be transformed with AI. Societies must agree that accountability ultimately resides with humans. New Humans + AI decision-making architectures will need to define relative roles in decisions large and small, emphasizing decision explainability and clarity on where and how judgment was applied.
“In this phase of accelerated evolution, the individuals, organizations and institutions that flourish will be those most ready to learn and adapt. We need to encourage experimentation, open-mindedness and continuous learning, all while we focus on gaining greater clarity on the fundamental ethics that guide our journey.”

Guy Standing
AI could soon become a ‘Frankenstein’s monster.’ Lack of regulation is allowing tech plutocrats to ‘displace democracy.’ The AI paradox is that as it gets smarter human intelligence will decline.
Guy Standing, British labor economist, founder at Basic Income Earth Network and professorial research associate at SOAS University of London, wrote, “Although it has positive features, at present artificial intelligence is an intrusive, invasive technology that is out of societal control and could soon become a Frankenstein’s monster. Its own creators admit as much. When discussing the potential of greatly advanced AI during a 2023 interview, Sam Altman, CEO of OpenAI, said, ‘The best case [scenario for the future of AI] is so unbelievably good that it’s hard to even imagine. … The bad case – and I think this is important to say – is like, lights-out for all of us.’
Every nation-state should urgently set up a National Commission for Democratic AI. All forms of education should be restored as part of the human commons. We need to redesign education to enable us to use AI while not being used and abused by those in control of its spreading reach. It is a civilization-level challenge.
“The techno-libertarians in Silicon Valley will use their economic and political muscle to prevent effective regulation. Every nation-state should urgently set up a National Commission for Democratic AI. All forms of education should be restored as part of the human commons. We need to redesign education to enable us to use AI while not being used and abused by those in control of its spreading reach. It is a civilization-level challenge.
“One of the most obvious problems with AI is that it reduces the need for reflective thinking and fact-checking, and the more one bypasses such thinking the less one is capable of exercising it. In my book, ‘Human Capital: The Tragedy of the Education Commons,’ I advance a hypothesis that I call The AI Paradox. It is the opposite of the ‘Singularity’ thesis so popular in Silicon Valley, which predicts that AI will gradually advance to surpass human intelligence and propel humanity forward. The AI Paradox hypothesis predicts, ‘As AI advances, human intelligence will decline.’ Already, we are seeing many signs that human creativity and imagination are jeopardized, and we are witnessing a decline in ‘deep reading’ and ‘deep writing.’
“The Paradox need not occur, of course. But if the tech plutocrats have their way, their cavalier inventiveness will ignore anything like the precautionary principle. Most ordinary people are unprepared to withstand the seductiveness of AI. They are losing the capacity to concentrate, they are suffering from ‘digital distraction’ and they are being led astray by algorithms.
“Meanwhile, our education industry is shredding people’s ability to understand and demonstrate the vital human trait of empathy, and AI is accelerating that decline. A recent study found that medical AI chatbots are outperforming human doctors on empathy, because LLMs are programmed for it. AI in education is particularly threatening. It could induce ‘group-think’ and conformism. But worst of all, it could reduce critical thinking and the ability to resist lies and disinformation.”

Daniel Castro
Governments, schools, civic groups – all organizations – will need to adapt, reinvent themselves or consciously choose not to. Communities must decide what they value in an AI-rich environment.
Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation, wrote, “AI will expand freedom of choice by making it easier to learn, create and resist existing structures. Greater freedom, however, increases the need for judgment. AI is also likely to create redistributions of advantage that can generate resentment, resistance and political conflict, depending on the broader political economy.
“The responsibility for societal resilience extends beyond individuals to communities and institutions. Schools, governments, religious organizations and civic groups will need to adapt, reinvent themselves or consciously choose not to. Communities must decide what they value and how they intend to protect those values in an AI-rich environment.
“Individuals and societies will need stronger cognitive, emotional, social and ethical capacities to navigate choices well. Skills rooted in philosophy, art, design and critical thinking will grow in importance, not as technical complements to AI, but as human capacities that help people interpret change, set priorities and maintain resilience amid rapid transformation.
Community plays a central role in resilience because it enables practical assistance, social support, shared knowledge and coordinated action. These are areas in which AI systems can contribute to great advantage.
“Resilience in the face of technological change does not differ fundamentally from resilience in other domains, whether related to aging, illness, major life events, mental health challenges, financial shocks, job loss, crime or natural disasters. In each case, individuals and societies rely on similar capacities: adaptation, meaning-making and support from others. Community plays a central role in resilience because it enables practical assistance, social support, shared knowledge and coordinated action. These are areas in which AI systems can contribute to great advantage.
“AI can help people with shared goals coordinate their actions, lower the cost of accessing information and provide insights that previously required a trusted expert. These capabilities can strengthen individual and collective resilience by expanding access to resources and reducing barriers to participation. When used well, AI can support people as they navigate change rather than face it alone.
“At the same time, technological change creates winners and losers. Those who thrive in a new environment often differ from those who succeeded under earlier conditions. Some individuals who struggled in previous economic or social systems may gain new opportunities, while others may lose status or security. Historical transitions illustrate this dynamic. For example, individuals with physical limitations that excluded them from industrial labor found greater opportunity in a knowledge-based economy. AI-driven change will likely produce similar shifts, not only in employment, but across many areas where certain skills and aptitudes become more valuable than others.”
The fifth section of Chapter 2 features the following essays:
Marcel Fafchamps: The only solution to inequity, ignorance and power imbalances is to create better institutions that limit excesses; ‘this requires careful regulation supported by values that foster universalism.’
Marie Charbonneau: ‘AI is being embraced for the short-term benefits it can provide; research suggests that barely the tip of the iceberg is currently being discussed as to what the ripple effects will be.’
Steven Rosenbaum: Make platforms accountable, give Gen Z real voice in their design and improve the information environment through a mix of regulation, market pressure and independent standards.
Matt Belge: ‘We have to look to leaders in social activism and politics who care enough about ethics and the overall well-being of their people to encourage the development of AI regulation.’
Sean McGregor: Keep iterating the future – produce the data moving AI to reflect a positive vision.
Karen Barrett: We must be proactive about the potential impact of AI’s rise on brain development and well-being.
Anonmous Academic: We must prioritize the protection of human intelligence, judgment and ethical development.
Oliver Alais: Standardization efforts are under way: ‘Practical frameworks and tools that help translate human rights principles into technical requirements throughout the development lifecycle.’
Anonymous Researcher for Consultancy: For resilient communities and people, we should instill some of the values of the early Internet

Marcel Fafchamps
The only solution to inequity, ignorance and power imbalances is to create better institutions that limit excesses; ‘this requires careful regulation supported by values that foster universalism.’
Marcel Fafchamps, a well-known Belgian economist and professor at Stanford University, wrote, “AI has been around for a while in some form. Examples include the social media algorithms that suggest things for us to read (including ads), shopping sites that suggest things for us to buy, music streaming platforms that choose the music we listen to, photo-editing programs that suggest improvements to pictures we take, cars that warn us of various issues, phones that send us messages, etc.
“People have adapted to these changes. Young people generally adapt faster, but older generations catch up. Hence, there will definitely be differences in the speed of adaptation to more-advanced AI, and some people will be more able to take advantage of AI technology than others. There will be winners and losers, but it is hard to predict who they are purely based on the potential offered by the technology itself.
“Are people going to be happier or unhappier in an AI-saturated future? Neither. This is because of ‘habituation,’ a psychological process by which our level of contentment usually adjusts quickly – with a lag of no more than a few years – to our experienced standard of living. This process is crucial to human nature. If humans simply remained happy with every improvement in our lives, we would have stopped innovating after inventing fire. Habituation is what makes us always want more.
“I am not concerned, as some are, about the possibility that AI may become ‘sentient.’ It never will be – in the emotional sense we attach to sentience – given its absence of sensory feedback and pleasure/pain receptors. (And even if it were, there is no reason to believe it would want to annihilate mankind, for the same reasons.)
“AI is expensive: It requires large amounts of electricity, some of which will probably be provided by nuclear power. It may come at an even higher cost in the future as it is incorporated in many aspects of our lives. Some people will be able to afford the best AI, others not. Thus, perhaps even more than today, there will be large differences, within and across countries, in standards of living driven by differential access to AI.
“My main concern is unequal access to AI. Control over and exclusive access to the best AI will affect the accumulation of wealth, increasing inequality and leading to the loss of democracy that this entails. Human frailties will remain and there will be people keen to exploit them for power or personal gain. AI will enable some people to exploit these frailties in new ways that will surprise us, thereby generating windfall gains until the point when (and if) people catch up.
“The reality is that humans as a whole are a rather passive bunch; they gladly relinquish control over many aspects of their lives to others and spend little time questioning how society (families, colleagues and neighbors, schools, religious organizations, corporations, the media and government bureaucracies) shapes their views, values, preferences and choices through ideology, propaganda, advertising, proselytizing, grooming and so on.
“Healthy skepticism and reliance on scientific evidence are practiced by a very small proportion of people at any particular moment of history. There is no reason for this to change. Hence, some people will take advantage of AI to concentrate more wealth and power into their own hands – assisted by some powerful governments. This is already happening. I am not optimistic about that aspect of the future. AI plays a role as an expensive weapon in that evolution, but humans remain the main problem.
“The only solution to this situation is to create better institutions that help us limit excesses: for example, the dissemination of false or misleading information; the concentration of personal and often confidential information about us in the hands of people and organizations with profit, ideological or power motives; and the concentration of AI’s computing power in the hands of a few.
“We need safeguards for those unable to adjust fast enough – e.g., support for health care (including mental health), income redistribution to reduce inequality and innovative welfare interventions.
“This requires careful regulation supported by values that foster universalism and social preferences for equity. That’s the opposite direction from where we are headed right now. The political situation in the world today leaves me unconvinced that we will find the will to introduce the changes that are needed. It is these human trends and tendencies that make me pessimistic, not AI, which is just a tool.”

Marie Charbonneau
‘AI is being embraced for the short-term benefits it can provide; research suggests that barely the tip of the iceberg is currently being discussed as to what the ripple effects will be.’
Marie Charbonneau, a researcher helping develop the next generation of robots at the Human-Robot Collaboration Lab at the University of Calgary, Canada, and a co-author of the IEEE report “A Pathway Study for Future Humanoid Standards,” wrote, “AI systems have already been significantly shaping how decisions are made, both individually and at the organizational level. AI is at once forced down people’s throats and embraced for the short-term benefits it can provide. My preliminary research suggests that barely the tip of the iceberg is currently being discussed as to what the ripple effects will be. Appropriate regulation will be critical, but how these regulations might be enforced will be an interesting puzzle to solve. More research is needed. Broad, honest societal discussions on AI literacy and on which direction we want AI development to take may help make a difference.”

Steven Rosenbaum
Make platforms accountable, give Gen Z real voice in their design and improve the information environment through a mix of regulation, market pressure and independent standards.
Steven Rosenbaum, co-founder and executive director of the Sustainable Media Center, an author, filmmaker and founder of five companies in the media content sector, wrote, “Individuals and societies meet digital change in different ways. Some embrace the creativity and access it offers. Others push back as harms show up. Most are caught in between, relying on the tools but uneasy about what they’re doing to attention, trust and community.
“Resilience means building new capacities. We need better source awareness, more comfort with uncertainty and the ability to slow our emotional reactions instead of getting spun up. We need small-group sense-making and a basic ethic around what we amplify and why. Resilience also comes from practice. At the personal level, that means simple habits: pausing before sharing, setting boundaries on feeds, making space for deep reading and time offline. At the institutional level, it means more transparency from platforms, stronger youth mental health support, local truth infrastructure and tech norms shaped with young people rather than imposed on them.
“In the near term, three things matter: making platforms accountable to the public interest, giving Gen Z real voice in design and policy and improving the information environment through a mix of regulation, market pressure and independent standards.
“New vulnerabilities are already emerging, such as synthetic intimacy, targeted manipulation, deepfake harassment and over-reliance on AI to make judgments for us. Coping will require AI literacy, provenance tools, norms for relating to AI as a collaborator instead of an authority and mental health skills built for life online.”

Matt Belge
‘We have to look to leaders in social activism and politics who care enough about ethics and the overall well-being of their people to encourage the development of AI regulation.’
Matt Belge, founder of Vision & Logic, a professional user-experience designer with 30 years in the field, wrote, “I expect that profit-driven AI companies will mostly focus their energy on two things: 1) Offering better features than competitors are producing. 2) Eliciting addictive, monetizable consumer behaviors by flattering users and designing to prolong interaction. Both of these patterns are already commonly used in web-based software products.
“I had hoped that well-heeled, established companies like Apple and Google would develop higher ethical standards along the lines of Google’s original motto, ‘Don’t be evil.’ But I no longer have the optimism I once had. We have to look to leaders in social activism and politics who care enough about ethics and the overall well-being of their people to encourage the development of AI regulation.
“I do have a great deal of faith in human resilience if the following patterns can be established:
- “AI systems must be transparent about the motives and strategy behind their decision-making so the humans using them know why a given choice or outcome was made. This includes citing sources and telling the human when the outcome is little more than a guess. The human must have access to complete information about why the AI made the choices it did.
- “The human must always be in control, including the ability to stop a given AI outcome, to fine-tune and correct it in meaningful ways and to undo any outcome – or to be warned ahead of time that an action cannot be undone.
- “Humans must cultivate a collaborative spirit with AI. They must take responsibility for outcomes, apply their own judgment to shape the direction of the work and never abdicate control. They must shape and guide the interaction and set ground rules for how it occurs.
- “Humans must take an iterative approach, trying out different ideas, preferred outcomes and directions of exploration until they are satisfied the outcome is one that meets their immediate needs and will, overall, be beneficial to others. The AI must support iterative approaches without negative consequences, so humans can explore ideas before committing to them.
- “Humans must remain in command as the ultimate decision-makers, and must strive to understand the implications of any potential outcomes before committing to an AI-aided decision.
- “Humans will need deeper training throughout their lives in both critical thinking and ethics. If an AI suggests a dangerous or unethical path or decision, humans must be educated well enough to see it and correct it.
- “In the world of art, whether visual, written, musical or other, humans must make it known to others to what extent AI was used to help create any work they produce. This will give other humans the tools to respond to the creation in a fair and just way.”

Oliver Alais
Standardization efforts are under way: Practical frameworks and tools can ‘help translate human rights principles into technical requirements throughout the development lifecycle.’
Oliver Alais, a program coordinator at the International Telecommunication Union focused on human rights, wrote, “AI systems are likely to play a much more significant role in the near future. For this reason, human rights must be considered at every stage of their development, from technical standardization to deployment and use by end users. The ITU is actively working to embed human rights considerations into the standardization process, recognizing that standards and emerging technologies are not neutral and can have significant societal impacts. This integration can be supported through practical frameworks and tools that help translate human rights principles into technical requirements throughout the development lifecycle of emerging technologies. Key challenges remain, including the translation of human rights concepts into engineering terms, developing metrics to assess human rights risks and strengthening the capacity of engineers and technical experts involved in the design and development of AI systems.”

William Halal
Safe, monitored, well-designed AI can ‘make us more human.’
William Halal, professor emeritus of science, technology and innovation at George Washington University and founder of the TechCast Project, wrote, “The big challenge will be to ensure that AIs are designed, monitored and corrected safely. I also think the net effect of AI will be to urge humans to do the higher-order tasks that AI can’t do well. In short, AI will make us more human.”

Sean McGregor
Keep iterating the future – produce the data moving AI to reflect a positive vision.
Sean McGregor, co-founder and lead research engineer of the AI Verification and Evaluation Research Institute and general chair for the 37th annual conference of the Association for the Advancement of Artificial Intelligence, wrote, “AI systems mimic and scale past human experience. A resilient future is one with a capacity to look back and imagine how we could have done better – then produce the data moving AI to reflect such a positive vision.”

Karen Barrett
We must be proactive about the potential impact of AI’s rise on brain development and well-being.
Karen Barrett, lifespan developmental psychologist and member of the global Human Affectome Task Force, which created an integrated framework in 2024 that improves our understanding of how feelings, emotions and moods relate to and impact human behavior, commented, “It is important for citizens and researchers, especially parents, psychologists, neurobehavioral scientists and developmentalists, to help governmental entities at all levels understand the potential impact of AI use on brain development, cognitive development and socioemotional development. It is also crucial to carefully think through the potential impact on adults’ sense of purpose and their cognitive and socioemotional well-being. We need to be proactive, rather than reactive, in this.”

Anonymous Researcher at a Major Consulting Firm
For resilient communities and people, we should instill some of the values of the early Internet.
A veteran researcher who works for a major consulting firm wrote, “It seems fairly likely that AI will play an increasingly major role in more and more aspects of our lives, if for no other reason than the amount of money and attention currently being put into these systems. I imagine this effort will produce some business value and wealthy executives, but I’m less confident that it will lead to more resilience in the population. As long as we continue to view the development of AI as a ‘race’ to some competitive end point, it’s hard to see the battles around AI producing positive externalities over the long run. Instead of reinforcing this competitive lens, if we want to create more resilient communities and people we should look for opportunities to instill some of the values of the early Internet – such as freely sharing human knowledge and empowering marginalized voices – that made the Internet of the early 2000s feel so promising, and which seem so distant from the dominant values of today.”

Anonymous Academic
We must prioritize the protection of human intelligence, judgment and ethical development.
An academic based in the United States wrote, “If the design and regulation of digital technologies are not upgraded to give priority to human intelligence, judgment and ethical development, considerable risks lie ahead: increasing passivity, mental health challenges and degraded knowledge and ethical standards. Yet it is also true that AI systems are likely to open excellent opportunities for people with physical disabilities and perhaps for people facing dementia, as well as for pharmaceutical research and production and other scientific research.”
> Go to Chapter 3 of the essays – The Ultimate Team-Up: Humans and AI Working Together
> Return to the top of this page