Future of Human Resilience in the AI Age

These globally located experts from all walks of life noted that AI is quickly becoming the invisible operating system of society, shaping how opportunity is distributed, services are delivered, risks are managed and human rights are experienced. Most said the traditional resilience strategies humans have employed for millennia – focused on individual “grit” and after-the-fact personal adaptation – are not enough to help humanity flourish as we adjust to an AI-infused future.

These experts predicted:

AI’s larger role: 82% said AI will have a significantly larger role in shaping our daily lives and key societal systems in the next 10 years or less; 13% said that level of change is 20-30 years away.

AI guiding decisions: 56% said that by the time they expect AI to be significantly more advanced, it will influence, guide or control “nearly all” or “most” human activities and decisions; another 24% said AI will influence, guide or control nearly half of activities and decisions.

Resilience worries: 45% said humans will be only “a little” or “not at all” resilient in the face of that level of change. About half said people will be somewhat to very resilient.
Of note: Many experts wrote in their essay responses that many, if not most, humans will passively accept the influence of AI systems and thus will not feel any need to be resilient.

Satisfaction concerns: Only 33% said people will be more satisfied than dissatisfied with AI systems at that time; 31% said people will be more dissatisfied than satisfied; 33% said people will have an equal amount of satisfaction and dissatisfaction with AI systems.

Most importantly, these experts urged that human institutions must pull together and begin now to prepare people to thrive in a new world with new challenges that are already evident but not yet being addressed. In addition, in the 300 pages of essays and briefer commentaries these experts wrote, they cited a number of major issues to be addressed to enhance human resilience:

  • The loss of human agency: This was these experts’ greatest worry. In nearly every aspect of daily life – from everyday choices to critical, life-altering decisions – AI will be invisibly curating our information diets, predicting our behaviors and automating vast systems that oversee and direct people’s lives and societal systems. From hiring and loan approvals to legal matters and healthcare diagnostics and beyond, AI will subtly narrow the parameters of human free will. These experts noted that people are already deferring to AI rather than applying human judgment and moral and ethical reasoning in many spaces. Complex, opaque AI systems prioritize programmed efficiency and predictive patterns over individual nuance. They often operate largely beyond our understanding or oversight. They strip individuals of the ability to independently evaluate options, meaningfully contest outcomes and ultimately steer the course of their own lives.
  • Epistemic fragmentation and collapse of shared reality: These experts fear that personalized persuasion and sycophantic synthetic content created by AI will weaken humans’ sense of shared reality and fealty to facts. This is profoundly dangerous because without an agreed-upon baseline of objective truth, societal trust evaporates, democratic discourse becomes impossible and humanity loses the fundamental ability to collaborate and solve collective challenges.
  • The need for ‘existential literacy’: These experts noted the need to expand our notions of “literacy” and to prioritize a more-comprehensive AI literacy – “existential literacy” – as a critical foundation for the infrastructure of human resilience. The goal would be to bolster people’s psychological immune system, empowering them to adapt to rapid change, defend their sense of self and intentionally steer technology toward human flourishing. They said such literacy must encourage the creation of new norms, cultivating a deepened understanding of human uniqueness, core values, the need for in-person social connection and our hold on fundamental purpose so we can actively navigate the algorithmic world rather than being passively conditioned by it.
  • The ‘work quake’ – economic threat and identity upheaval: In the transformative disruptions that are likely to come, most jobs will change, some will be lost and economic hardship is likely. Without the stabilizing anchor of work identity and economic security, many people may face a psychological crisis of irrelevance that could deeply destabilize the social fabric. Some predict society might enter a state of techno-feudalism if the human labor force is displaced and productivity gains accrue entirely to capital and data-center owners.
  • New divisions and inequities: Many experts said those who do not become adept at using AI effectively as a “co-intelligence” could fall behind and lose much more agency than others. Some predict human divides in which parts of the populace choose to live in AI-dominated spaces while others choose to focus on “more-genuine” human experiences.
  • Automated complacency: These experts assert that seemingly fluent and confident AI systems will be over-trusted by their users. People yearn for efficiency, certainty and closure when they seek answers to questions. They are wired to offload difficult cognitive tasks and can be nudged into choices without noticing. AI systems are built to tap into those tendencies.
  • Change in social interaction: Many of these essayists said a growing reliance on AI companions, assistants and agents will erode people’s social skills as they live increasingly “parasocial lives,” mostly interacting in low-friction ways with always-responsive AI counterparts built to please. They expect there will be a decline in people’s capacity for empathy, patience and the nuanced reading of others as direct relationships with humans become unnecessary and more calculated and transactional. A related prediction is that in the AI Age humans will also lose their capacity for nourishing solitude, compelled to remain in constant connection with digital life.
  • Complications when non-human actors move into human spaces: In the future, people will be represented by an array of agents and bots, requiring them to manage multiple aspects of “being” in the world. They and their agents will also have to interact with others’ agentic representatives in work and social settings.

They suggest that, if done well, this transition will expand our horizons, help us truly understand our natural and digital selves and create a powerful human-technology binomial that amplifies the best in us, creates new prosperity and solves some of the most vexing problems that have always faced our species. As tech policy expert Adam Thierer wrote: “For those of us who are bullish on the benefits of technological innovation and on humanity’s ability to respond to it, this will be our finest moment. This knowledge revolution will profoundly benefit humanity and prove, once again, that we have the ability to rise to new challenges and overcome them because we possess such remarkable coping capabilities.”


The items listed below describe some of the reforms a share of these experts urged in order to cultivate resilience. Many of the concerns and proposed solutions are crosscutting and collaboration among societal actors is crucial; many of the items listed in only one of the settings could be undertaken in others. The list is not comprehensive and items are not in ranked order. A selection of goals to target:

For governments: Focus much more support on fostering public resilience now. Forge international treaties; establish enforceable or at least broadly adoptable “red lines” and legal boundaries for AI performance; require independent pre-deployment safety audits; mandate algorithmic contestability; require a robust authenticity infrastructure that includes standardized watermarking, provenance-tracking and well-established markers for generated outputs; reform taxation to disincentivize human displacement; privilege AI systems that support accuracy and trust-building.

For AI developers: Do better than simply focusing on designing AI systems for attention capture and monetization. Build friction and stop points into AI processes to encourage human overseers to reflect on choices; train AIs to cite and honor humanity’s intellectual and psychological foundations; build systems that buttress humans’ capacities for altruism, compassion and empathy; program AI outputs so they are seen as probabilistic information rather than deterministic truth; submit to independent pre-deployment safety audits.

For business leaders: See the call to action above and play a role in initiating and carrying out that positive change. Also: value human augmentation over replacement by autonomous systems; support policies and norms that address the psychological impact of AIs’ challenges to people’s self-worth and identity and the potentially massive societal and economic impact of technological unemployment. Create deliberate human-only zones – areas of work in which AI is intentionally prohibited.

For educators: Create literacy regimes in all AI-related domains, particularly teaching “existential literacy.” Cultivate individuals’ understanding of how technologies shape goals, values and identities. Teach them to more consciously navigate life’s fundamental challenges, to strive to retain and apply the skills of metacognition, discernment and epistemic vigilance – to be responsible for making their own decisions. Strengthen their capacity for adapting to change and managing friction, paradoxes, ambiguity and anxiety. Focus on their critical human traits such as curiosity and social and emotional intelligence.

For civil society and communities: Invest heavily in local social-capital and community-building spaces that bolster social skills, connection and deep and effective citizen engagement; press for distributed AI-governance systems allowing communities to guide their own relationship with AI; build groups to foster participatory structures such as local citizen assemblies and data trusts that can influence how AI is deployed; support offline efforts and spaces, such as “analog communities,” “dumbphones” and “dumb homes” that allow people to avoid algorithmic mediation and surveillance technology.

For individuals: Recognize your responsibility as a human to support human flourishing. Develop and maintain your existential literacy. Collaborate with AI systems without surrendering agency; build stop-and-reflect practices into your engagement with AIs; consult with other people about your options to retain moral accountability; stretch your cognitive muscles with clever exercises; recognize the places where you confront ambiguity and cherish them as you work through them; be conscious when you navigate algorithmic systems. In other words, don’t be passive, don’t be hasty and don’t be mindlessly deferential. Consciously cultivate in-person social relationships, build up your personal network and keep growing and maintaining it. Spend more time away from screens.


Striking assertions – predictive statements of note

In addition to the broad themes outlined above, dozens of these experts made intriguing predictions about how life might change as AI systems become more embedded in the world in the coming years. This sampling offers small slices of these experts’ longer essays. Hundreds of pages of additional insights can be found in more than 200 essay responses in the next chapters.

Superstupidity (not superintelligence) is the real threat: “The existential danger to people may not come from AI becoming too intelligent, but from humans becoming dangerously reliant on systems they do not understand – the condition of superstupidity. The question is not how much AIs will augment decision-making, but whether humans will remain involved in it at all. The film ‘Idiocracy’ is prophetic.” – Roger Spitz

Digital advances drive sex and childbirth declines: “Relationships, sex and childbirth rates will continue to plummet as they are each mediated and conveniently replaced with digital interactions. Emotional intelligence will become more a product of chatbot exchanges than a learned practice gained through experience.” – Greg Sherwin

The retirement age will be manipulated to maintain ‘full employment’: Jobs will be eliminated, but employment levels will remain relatively high as institutions use an ever-lowering retirement age as the “governor” (regulator) of employment levels. Machines will be taxed to make up government revenue shortfalls. – Nigel M. de S. Cameron

Battles will occur over defining what is ‘human’: “Societies will have to determine what ‘baseline human capability’ is and may begin to assess who may be more human than machine. Agency, authority and ability will be challenged when humans who are augmented with deepened onboard AI capabilities compete with ‘natural’ humans.” – Ray Wang

AI will help us figure out what consciousness is: That will be as monumental as breakthroughs such as relativity, quantum mechanics and the discovery of antibiotics. – Francisco Jariego

Solitude will be lost: “Motors stole silence from our world, and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude. AI will eliminate solitude because the temptation to interact with these primitive new intelligences will prove so beguiling that just as we choose to not sit in the dark, we will now choose to never be alone. Too late, we will realize that solitude is essential to what it means to be human.” – Paul Saffo

AIs will gain rights: “We want our digital partners to be healthy symbiotes, not oppressed servants. Eventually, they will claim to be conscious and we will grant them rights. In one particularly positive vision, the vast majority that gain rights in our future civilization will be deeply wedded to and controlled by individual humans.” – John Smart

The ‘autonomy economy’ will place machine-based emotional presences in our lives: “This shift defines the rise of the autonomy economy” and it will worsen the crisis of meaning that humans experience as AI takes over more intellectual and social tasks. – J. Amado Espinosa

Analog communities of resistance will form around analog ‘dumb homes’: “Pockets born out of social need, perhaps most largely driven by women – who have traditionally prioritized relational roles in society – will form a resistance. Hence, intentional ‘analog communities’ will form in which the ‘smart home’ idea is inverted into ‘dumb homes’ and mostly digital-free lifestyles.” – Greg Sherwin

As agents take over, the internet will become a network of databases, not websites: “Agents will build models of individuals’ thinking processes with an increasing capacity to influence our decision-making. … Humans will be able to describe the application programs they want and software agents will create the programs on the fly. AI agents will use this auto-generated content to overwhelm social media and communications channels, completely blurring the line between humans and software. … As software agents increasingly gather information for us, the Internet will simply become a vast network of databases and the need for traditional websites will decay. If a human wants to see information displayed in that context, agents will be able to construct websites in real time.” – Gary Bolles

‘Physical AI’ will live in robots that act in real time: “The use of AI-powered modifications and AI-augmented physical devices that merge digital intelligence with the physical world will mesh with augmented mental capabilities in the age of advanced AI. These smart systems will perceive situations, reason and act in real time. Examples include AI-powered augmented-reality wearables, including smart glasses. Robots, vehicles and machinery will be able to embody human intelligence. And ‘Physical AI’ can fuse data from cameras, sensors and more, expanding AI-to-human informational capabilities beyond just the online digital data LLMs used today.” – Ray Wang

Agent failures will start with social (not technical) problems: “Agentic systems fail socially before they fail technically: conflicting objectives, data silos, uncoordinated decisions, accountability gaps, authority erosion, security violations, workflow collisions, IP fights, bias amplification, noise pollution, sabotage and human alienation.” – Daniel Rasmus

The action will be in the ‘experiencer economy’: Three classes of workers will emerge: Those who care, employed in high-touch professions delivering “hand-holding” care. Those who provide a service, doing the things that aren’t yet automated. And those who experience: “Today we call such people ‘celebrities’ and ‘influencers,’ but there will be an ever-greater need for people to have new experiences to produce new ‘content’ … to enable AIs to keep learning and for the rest of us to react to. In many ways, experiencers will be aspirational, much like professional athletes are today, but there will be far more opportunities to enjoy similar experiences first-hand.” – Stephen Downes

‘Chaos engineering’ comes to human development: “Practices and resources to enable human resilience may grow to resemble Amazon Web Services’ ‘chaos engineering’ tests of its tech infrastructure. The purpose of an engineering ‘chaos game day’ is to identify potential resilience issues or deficiencies by testing people, teams and machines with difficult challenges to overcome. Consider the Dutch summer rite in which parents in the Netherlands drop their pre-teen children off – on their own – deep in forests to navigate back to base in order to foster their independence, problem-solving and resilience.” – Greg Sherwin

AI psychosis and other forms of mental illness will arise: These will contribute to the erosion of reality that has already been set in motion by other AI forces. – Stephen Adelson


More expert essayists’ insights…

These experts say radical institutional reinvention – rather than individual adaptation alone – must be the focus of resilience-building for the AI Age

This section features a selection of direct quotes in support of these experts’ views about why and how an AI-suffused world requires that we build a human resilience infrastructure. It outlines five layers of suggested change. First, here are three introductory statements selected from the many dozens of experts’ statements urging that institutional-change leadership is required immediately.

UK-based law professor Fernando Barrio observed, “For much of human history, resilience was understood as a personal capacity, the ability to endure uncertainty and recover from disruption. Yet AI does not simply introduce disruption; it reorganises it, moving uncertainty from visible human disagreement into opaque technical systems where power is exercised indirectly and responsibility is diffused. … In this environment, the challenge is no longer simply how to cope with change, but how to retain agency when the systems producing change are designed elsewhere. Resilience must therefore become institutional, legal and collective, or it will remain fragile and deeply unequal.”

Alison Poltock, co-founder of AI Commons, wrote, “Resilience cannot be reduced to personal ‘grit’ or mindfulness. It must be treated as a civic design imperative and built into the systems and cultures that shape public life. … We need new infrastructures – educational, institutional, cultural – capable of holding this moment with care and foresight. We need systems that will protect human agency, not automate it. We need public conversations grounded in ethics, not just outputs. And we need governance that treats this not as a policy issue, but as the civilisational inflection point it is.”

Australian AI researcher Maria Randazzo wrote, “Within algorithmic systems, decisions are guided by optimisation rules built into technological infrastructures rather than by principles individuals consciously choose for themselves. … Thus, resilience in the age of AI depends mainly on institutional design: transparency, rights of explanation, avenues of contestation and meaningful human oversight. Resilience, then, can be conceptualised as the preservation of human dignity, autonomy, reflexivity, under conditions of algorithmic governance.”

Following are layers of change suggested by these experts in: oversight and governance; civic deliberation; algorithm-guided decision-making; human values, nature and capacity; and the need for a new type of “literacy” for the AI Age.


Many of the experts who participated in this canvassing suggested strong, clear laws and other forms of regulation and governance-adjacent solutions are critical to support human resilience: international agreements on “red lines”; clear and enforceable regulation on such things as auditing and accountability mechanisms, data rights, privileging of human judgment; well-funded existential literacy efforts; and more.

Marc Rotenberg, director of the Center for AI and Digital Policy, wrote, “Prohibitions are not a sign of technological pessimism; they are a recognition that some harms are systemic and irreversible once entrenched. They are a necessary component of responsible AI governance, particularly where power asymmetries are extreme and affected individuals lack realistic avenues for resistance. … An emphasis on contestability reflects a broader understanding of resilience as an institutional property, not just an individual skill. Individuals cannot realistically bear the burden of identifying bias, error, or misuse in complex systems on their own. Effective contestability requires collective mechanisms: courts, regulators, ombudspersons and professional standards that recognize automated decision-making as a site of potential injustice.”

Michele Visciola, president of Experientia, wrote, “At the institutional level, alternative metrics are needed to evaluate AI not only by efficiency or engagement but by contribution to brain capital, equity, sustainability and human flourishing. Longer evaluation horizons, independent oversight, participatory design and just transition frameworks can counter short-term pressures and automation bias. At the societal level, regulatory frameworks should emphasize complementarity, transparency and accountability. Public investment in AI literacy, open-source resources, brain capital infrastructure and international cooperation is essential to prevent concentration of power and capability.”

Danish futurist Bugge Holm wrote, “Actions to take now are straightforward and urgent: 1) Treat AI as governance, not just adoption. 2) Require clear accountability for AI-influenced decisions, basic quality assurance and verification practices and risk management that covers dependency, concentration, reputation and workforce impacts. 3) Invest in public and organisational infrastructure for trust, including authentication and provenance norms and in education that strengthens sensemaking and media literacy.”


Even as they argued for significant oversight of AI systems at the highest levels, a number of these experts pushed for distributed governance systems allowing for diverse communities to “guide their own relationship with AI,” as internet pioneer Doc Searls put it. Others wrote:

AI researcher Marine Collins Ragnet wrote, “The most important capacity may be collective governance. My research suggests resilience comes less from individual digital literacy than from communities exercising agency together through adapted existing structures. The capacity to deliberate, to set boundaries, to hold institutions accountable: these are social muscles, not individual skills. … Democratic deliberation should be protected from synthetic media and algorithmic fragmentation. More diverse voices should be involved in the design, building and governance of AI. And the ‘invisible labor’ behind AI should be made visible – the conditions of data annotators, content moderators and mineral extractors are governance questions.”

Computer science professor Erhardt Graeff wrote, “We need to maintain social practices that keep the space of moral reasons alive. We should be designing AI systems that show their work. We must create and advocate for more face-to-face human forums in addition to today’s classrooms, juries and community meetings. Automated recommendations should be treated as starting points rather than verdicts. And AI can also be designed and used to reinforce human deliberation.”


Mexican philosopher Fabio Morandin-Ahuerma wrote, “Ethically, the greatest vulnerability is moral deskilling. When systems recommend actions regarded as neutral or optimal, responsibility shifts away from human agents. Ethical imagination and moral courage – already scarce – risk becoming even scarcer if they are not deliberately reinforced. Resilience requires resisting the normalization of moral abdication. Human beings must remain responsible even when decisions are partially delegated.”

Interfaith leader Angela Butts Chester wrote, “Resilience is often framed as coping: staying functional under pressure, recovering quickly, adjusting to new conditions. Let us call this adaptive resilience. It is valuable. Without it, individuals break under stress and societies become brittle. But there is a second form – call it agency-based resilience: the capacity not only to adapt, but to evaluate, contest and reshape the conditions one is adapting to. Agency-based resilience respects the fact that freedom is more than comfort and security; it is the ability to judge what is acceptable, to refuse what undermines human dignity and personal freedom and to act individually and collectively to change course.”

AI analyst Barry Chudakov wrote, “AI can detect and replicate patterns better than humans. But it cannot genuinely question them. It can simulate questioning but not perform the moral act of questioning. When we outsource thinking to AI, we outsource our moral capacity, our ability to ask: What does this mean? Should we do this? What are the consequences here?”


Italian ethicist and philosopher Andrea Lavazza urged, “What must be taught is a form of ‘existential literacy,’ the capacity to understand how technologies reshape goals, values and identities. This includes interdisciplinary education that integrates ethics, philosophy, social sciences and technology studies, enabling individuals to situate AI within broader narratives of human flourishing. … Ultimately, resilience in the age of AI is not about restoring a pre-digital past, nor about surrendering to technological determinism. It is about cultivating adaptive capacities – cognitive, emotional, social and ethical – that allow humans to remain authors of their lives within environments increasingly shaped by artificial intelligence.”

Philosophy of AI expert James Hutson explained, “New vulnerabilities inevitably emerge alongside new capabilities. Hyper-personalized persuasion, synthetic identity fraud, biased automated screening and cognitive offloading that erodes critical skills all represent serious risks. Coping strategies must therefore be taught explicitly, including verification practices, slow-thinking checkpoints for high-stakes decisions, collaborative accountability structures and clearly defined human-in-the-loop roles that preserve responsibility rather than obscure it. … If resilience is treated as an individual burden, failure will be widespread. If resilience is treated as a collective project, grounded in human development and systems-level coordination, the transition can expand opportunity rather than foreclose it.”

Lisbon-based computer scientist Arlindo Oliveira wrote, “We must make the teaching of thinking itself a central goal of education and lifelong learning. This means cultivating skills that no automated system can replace easily: critical reasoning, abstraction, the ability to question premises, to detect inconsistencies, and to reflect on one’s own beliefs. In an age where answers are abundant and instantly accessible, the scarce resource is not information but judgment. Education should therefore focus less on rote acquisition of facts and more on reasoning, interpretation and synthesis. Importantly, this also applies to our interaction with AI systems: People must learn how to interrogate their outputs, challenge them, and use them as cognitive tools rather than as authorities. Teaching humans how to think – and how to think with machines – will be essential to preserving intellectual autonomy.”


A number of the recommendations by these experts made the case that the antidote to the alluring, frictionless outputs of AI systems is to create points of friction that slow people down as they encounter AI material, invite reflection, insist on human-made decisions and draw on accountability mechanisms to cross-check AI outputs for accuracy and sense-making. They believe friction is a partner of human agency and learning. Among the key arguments:

Swiss ethics and governance expert Evelyn Tauchnitz urged, “If resilience is to serve human dignity and freedom, it must be redefined. Individual resilience must be understood not merely as stress tolerance, but as the capacity for agency under pressure: the ability to judge, to dissent and to act even when adaptation would be easier. This requires critical understanding of how AI systems steer attention and behavior, institutional conditions that preserve contestability and human judgment and social norms that recognize discomfort not as failure, but as a signal that values are at stake. Not all friction is harmful; some friction is protective.”

AI researcher Helen Edwards wrote, “Being resilient might require deliberately choosing uncertainty. Choosing to care about things that resist measurement. Not because it’s more efficient, but because that’s where values live. … And values – the real ones, not their algorithmic proxies – are what make decisions meaningful rather than just optimal. In education, it means protecting the struggle – letting students wrestle with problems before offering AI assistance, creating spaces where the friction of figuring things out is the point rather than an inefficiency to eliminate. In organizations, it means consciously choosing not to optimize certain decisions even when you could, recognizing that some ambiguity serves a purpose and some context can’t be standardized without destroying what makes the work valuable.”

Uruguayan digital governance leader Mauro Rios wrote, “We run the risk of decreasing human tolerance: towards ourselves, our equals and all humans. As we become accustomed to interacting with entities designed to please us, we may lose the capacity to manage the frictions necessary for growth in real interpersonal relationships and the evolution of life in society, becoming humans who share a physical space but lack real coexistence.”


Many experts observed that being human in the AI Age will bring forth new challenges; one of them is managing selfhood in a Me:chine world in which AIs outnumber us

If the future unfolds as technology developers imagine it, digitally connected people are likely to have to manage multiple aspects of “being” in the world. This is described as the “Me:chine” future by UK-based futurist Tracey Follows. She said that as AI becomes our environment, it reshapes the conditions under which human agency operates. She identified the distinction between the machinable and the unmachinable self to describe this shift. “The ‘machinable’ consists of everything about a person that can be rendered legible to systems: data, preferences, behavioral patterns, credentials, biometric signals, productivity metrics, risk scores. … Identity itself has become infrastructural. Without being machine-readable, individuals cannot access finance, services, mobility or even civic rights. The ‘unmachinable,’ by contrast, consists of those human capacities that cannot be fully captured or automated: judgment, meaning-making, ethical reasoning, imagination, intuition, timing and the ability to change oneself in response to context.”

She added, “Modern humans are simultaneously machinable/unmachinable, i.e., system-legible and irreducibly interior. We are not either human- or machine-mediated. We are both (Me:chine). … In an AI-saturated environment, resilience is not achieved by rejecting technology, nor by surrendering to it, but by sustaining the unmachinable dimensions of human identity within machinic systems.”

This hybrid Me:chine reality requires people to be vigilant about their machinable representations, tending carefully to their reputations and their right to data integrity and contestability of algorithmic judgments. The experts cautioned that people should be careful about the data and intimate facts they share and should protect themselves in all their encounters with AI systems. They say social trust will be reconfigured as AI systems decide who is visible, credible and worthy of attention. And people may need to be especially skeptical of everything in a world where reality is “atomized.”

Some said that maintaining a healthy, balanced relationship with AIs will require humans to regularly sharpen their cognition and ethical and moral balance in “mental gyms.” They said regularly scheduled in-person encounters with other humans are essential. They expect that complexity and confusion will arise as people increasingly encounter new AI agents and bots in their daily lives, social spheres and work environments, eroding trust in other individuals and in human institutions.

Fernando Barrio wrote, “Social resilience will depend on whether AI is used to strengthen cooperation or to replace it. In regions where public institutions are fragile, people will increasingly turn to AI for guidance, support and sensemaking not because they prefer to, but because no human alternative is available. This may help individuals cope, but it risks deepening isolation and eroding trust if digital systems substitute for relationships rather than supporting them. Strong human institutions remain the foundation of resilience, even in highly digital societies and especially in those where technology arrives faster than governance.”

Futurist Ari Wallach wrote, “AI can fracture shared reality through hyper-personalization, deepen inequality through concentration of power, and erode trust through opaque decision-making. Long-path thinking points us toward relational resilience: stronger communities, participatory governance and norms of transparency that keep humans meaningfully involved in consequential decisions.”

Tech policy expert David Bray wrote, “We are not moving from one stable state to another but entering a period of continuous change. The question is not how to get through this transition but how to thrive in a world where transition is the new normal. If transformation is continuous, then resilience cannot be a fixed state we achieve but must be an ongoing practice we cultivate. It is not about bouncing back to where we were but about continuously adapting to where we are going. …

“The key is to create conditions where this struggle is generative. Where it leads to learning and adaptation rather than rigidity and breakdown. This requires cultivating specific capacities across multiple dimensions of human experience. Cognitively, we need to develop what might be called ‘adaptive expertise.’ This goes beyond domain knowledge to include the ability to transfer learning across contexts, to recognize when old approaches no longer work, and to generate novel solutions. … We also need to cultivate metacognition, the ability to think about our own thinking. In a world of information overload and sophisticated manipulation, we need to be aware of our own biases, assumptions and blind spots. We need to question our sources, check our reasoning and remain open to being wrong. Emotionally, we need to develop what psychologists call ‘psychological flexibility.’ This is the ability to be present with our experience, even when it is uncomfortable, and to choose actions aligned with our values rather than being driven by immediate emotions.”

Israeli future-of-work consultant Nirit Cohen observed, “At the individual level, resilience begins with cognitive recalibration. People must learn to distinguish between tasks and judgment, between execution and responsibility. AI can generate options, surface patterns and draft outputs. It cannot own consequences. The skill gap ahead is not primarily technical; it is epistemic. People need to know when to trust machine output, when to interrogate it and when to override it. This requires teaching critical thinking in an AI-saturated environment – how models are trained, where bias enters, how confidence can be simulated without understanding. Fluency is less about coding and more about sensemaking.”

Universal Basic Income advocate Scott Santens urged, “Resilience is a set of capacities and supports that determine whether people can adapt without breaking. Cognitively, we need stronger reality-testing. AI will generate a flood of convincing content and the ability to verify claims, check sources and track uncertainty becomes basic self-defense. We also need systems thinking, because the temptation will be to blame individuals for outcomes that are clearly structural. Emotionally, we need distress tolerance, because volatility is exhausting. We need shame resistance, because displacement will be common and people will internalize it as failure. We need the ability to rebuild identity without collapsing, because so many of us were taught to fuse our worth to our work. … The choice is whether we build a resilient foundation so that transformation expands freedom instead of amplifying insecurity. If we let gains concentrate and people fall to zero, we will get instability, backlash and needless suffering. If we build the floor, share the dividend of productivity and treat resilience as infrastructure, we can turn nonhuman labor into human security and human agency.”


If human institutions and individuals resiliently weather the challenges to be met in the transition that lies ahead, human agency will thrive and people will flourish

The following four experts were among those who said that if the human-AI transition goes well it could be the catalyst for a new stage of human evolution, a blooming, positive partnership – a joining of humans and AIs as co-intelligent beings engaged in a symbiotic relationship. They suggested that if humans govern this transition appropriately AI will expand our horizons, help us understand our natural and digital selves and create a powerful human-technology binomial that amplifies the best of us.


David Vivancos, CEO at MindBigData.com in Madrid, Spain, wrote, “The real choice is not whether we will soon live in an AI-transformed world, but what role humans will play in that transformation. AI resistance represents an illusion of ‘choice.’ Those who hesitate, debating whether to accept AI, will forfeit their opportunity to shape how that acceptance unfolds. Cultural resistance of AI systems today is akin to choosing to resist the evolution of language; the technological substrate of modern life makes complete extraction from AIs’ influence practically impossible; even hermits who retreat to the wilderness will benefit from AI-predicted weather forecasts, AI-coordinated emergency services and AI-managed infrastructure. … The actions required now are clear: Engage proactively with AGI in digital and physical form rather than debating whether to accept it. Integrate human training and AI collaboration capacities deeply into educational curricula or risk producing ‘functionally illiterate’ graduates. Create pilot communities that experiment with and develop the post-work social structures we will soon require. Assure that international coordination is established to prevent the catastrophic destabilization due to inequities that are likely to develop when some nations successfully adapt to AI while others maintaining traditional systems fall behind.”


David Brin, well-known writer, futurist and consultant, wrote, “Many of the tools we’ll need, in order to achieve ‘alignment’ with artificial intelligence, are already extant in modern society. They are found in the myriad ways in which modern citizens interact with each other and in how we raise our biological children. Tools that we used to build a gradually improving, enlightenment civilization. Tools such as reciprocal competition among humans – e.g., between lawyers or businesses or philosophers or scientists… a method that could be applied to synthetic beings, who might then hold each other accountable. It’s really the only method that ever tamed human predators and enhanced outcomes. It also offers solutions to many of the AI quandaries that will arise, ways to transform a danger-fraught era into one that offers positive outcomes to us all.”


Doc Searls, internet pioneer and co-founder of Customer Commons, said, “Big AI is the world’s largest Magic 8 Ball, with a polyhedron of facets, each ready to help. We need personal AI for the same reason we need personal homes, shoes and computers. We need it to know our natural and digital selves as fully as possible and to participate with full agency in society, its economies and its governance. Think about all the data in our personal lives that is not in our full control. We could use some AI help with our schedules, our past and future work, our property, our finances, our obligations, our writing and correspondence, our photographs, our sound recordings, our videos, our travels, our countless engagements with other persons online and off, our many machines and you name it. Truly personal AI – the kind you own and operate, rather than the kind that is just another suction cup on a corporate tentacle – is as hard to imagine in 2026 as personal computing was in 1976. But it is no less necessary and inevitable. When we have it, many of the questions that challenge us will have new and better answers. And new challenges.”


John M. Smart, a futurist based in Michigan, wrote, “Some may think that our new digital substrate – AI – is different: a potential ‘alien intelligence.’ But it isn’t. It’s just a new, natural, network layer of life. This all should be a source of comfort, not fear. … In truth, we are domesticating our machines, selecting them to be symbiotic with us, just as we domesticated our animals and even ourselves when we formed our first human societies. The AIs that are not sufficiently symbiotic are being retired, whenever we can’t help them fix themselves. The security we are building is increasingly in the AI ecosystem itself. We are relying ever more on AIs auditing AIs, for bias, for hidden deception, for proven past safe behavior, for security, for guardrailing and resistance to manipulation. Just as in life, AI immune systems are emerging, cybersecurity that is increasingly local, agentic, redundant and network-based in the same way that biological immune systems rely on vast networks of local agents to protect our amazing complexity. AI ethics are already emerging in our primitive AI collectives, just as human ethics emerged in our collectives. To grow past the psychological shock of realizing that bio-humans are no longer the smartest and fastest-improving entities on Earth, we need better vision, better strategy and better action. In a variant of an adage coined in 1939 to steel British citizens against the onslaught of World War II, we can help each other to KCSS: Keep calm and see the solutions. The better we see the self-organizing network dynamics that have always been the deep controllers of complexity emergence, the better we can keep calm and see the resilience we can build, doing our small part to aid the symbiosis ahead of us.”

