
“The relationship between individuals and societies with respect to AI is complex and multifaceted. While some digitally-connected individuals and societies embrace AI, others resist or struggle with it due to various psychological, emotional and systemic barriers: fear of job loss, data privacy concerns, resistance to change (loss of personal agency), cynicism and skepticism, and the need for empathy and understanding. ‘Digital individualism’ describes an internet-driven shift from traditional group-oriented structures to dispersed, individually-focused networks in which people can focus their social support and gain access to more novel, varied and targeted information. ‘AI individualism’ is a further transformation in which people become less dependent on human-to-human interactions, relying more on tapping into AIs for tailored information, relational experiences, practical help and emotional support. The move to AI may push social structures and norms further toward favoring individual control over social support, fundamentally altering human interaction, connectivity and social capital.
“Another looming issue is the fact that algorithmic personification acts as a Trojan horse for corporate control. By embedding persuasive, human-like interfaces into every digital interaction, Big Tech ensures that its influence is not just economic but existential. These systems are not neutral; they are engineered to maximize engagement, often at the cost of truth, privacy or mental health. The more convincingly an AI mimics human behavior, the harder it becomes to resist its nudges – whether to buy, to believe or to behave in ways that serve its masters.
Cognitive, emotional, social and ethical capacities for resilience
“Public debate still fixates on whether and when AI will match or surpass human intelligence, while far less attention is paid to what capacities individuals and institutions must build to adapt to its pervasive integration. Human resilience should be prioritized as much as technological progress is. AI is no longer a backend abstraction but embodied in machines that move, sense and act in the physical world.
“From autonomous driving and hospitality assistants to mobile companions, AI is rapidly embedding itself in everyday life. We are no longer just users of AI.
“This shift defines the rise of the autonomy economy, in which machines not only perform physical and cognitive labor but increasingly simulate human-like emotional presence. While these systems promise efficiency and scalability, their deeper disruption lies beneath the surface.
“Many individuals face not just unemployment but a crisis of meaning. AI-performative empathy risks dulling our capacity for real intimacy, trust and vulnerability. Even more concerning is AI’s growing influence over decisions with moral weight – healthcare, hiring, parole and resource allocation – where opaque algorithms often optimize for efficiency rather than justice.
“These systems can embed invisible biases and remove deliberation from processes that once demanded human judgment. As traditional ethical frameworks are displaced by technical proxies, our capacity to contest, understand or shape the values behind these decisions is weakened.
Humans in all realms must motivate and educate all for resilience
“We define human resilience as a multi-level capacity to absorb disruption, adapt and restore function while preserving core purposes and values. Formally, it comprises: 1) psychological resilience, the individual abilities of emotion regulation, meaning-making and cognitive flexibility that sustain goal-directed behavior under stress; 2) social resilience, the collective capacities of trust, social capital and coordinated response that enable groups and communities to mobilize resources and maintain cohesion during shocks; and 3) organizational resilience.
“Human resilience in the age of AI systems is not solely dependent on people’s cognitive capabilities but also on their emotional, social and ethical capacities. These elements are crucial for the successful integration of AI into human activities and for fostering deeper trust and understanding between humans and machines.
“Resilience is not just a psychological construct. It is a functional capacity that operates across layers. It protects well-being under digital stress, supports equitable adaptation to AI-driven shifts and enables systems to recalibrate without fragmenting. It is not innate, nor is it elusive.
“If humans are to remain relevant in the AI era, leaders in education, workplaces and other institutions must actively help cultivate within each member of society the emotional regulation, cognitive flexibility, social cohesion and ethical discernment that allow society to adapt without losing direction. These are not ‘soft’ skills; they are survival capacities. Education today must do more than teach technical skills and promote knowledge consumption.
Normalize this: AI should never ‘replace’ human thinking
“Those who create an appropriate symbiotic relationship with AI know that it cannot be seen as a ‘replacement’ for human thinking. They use their AI sessions to build their cognitive skills; human and machine intelligence are at their best when they complement and enhance each other. Cognitive resilience – the ability to maintain and strengthen our own mental capacities in the face of technological change – involves cultivating a critical and reflective mindset that allows us to engage with AI in a discerning and purposeful manner.
“Among the other approaches we must take to build resilience are:
- “Encouraging the societal normalization of healthy personal habits that allow individuals to maintain a reasonable balance between digital engagement and offline activities. In addition to educating all about emotional regulation, cognitive flexibility, social cohesion and ethical discernment, this is important to mitigate the negative impacts on mental and physical health of excessive digital use and the accompanying decline in in-person socialization and in time spent outdoors.
- “Government initiatives and public-awareness campaigns aimed at promoting responsible digital behavior and raising awareness of digital risks. These campaigns can empower individuals with a deeper understanding of digital environments and the knowledge to navigate them safely. Programs should address societal norms and cultural attitudes toward digital engagement, privacy and ethical considerations. It is vital to foster a more informed and more responsible digital citizenry.
- “Legislating boundaries through effective regulation. If an AI is designed to persuade, it should be labeled as such – no different from advertising disclaimers. If it simulates emotion, users should be reminded, in real time, that they are talking to a statistical model.
“Human resilience, as I explain it here, must be prioritized. Policies at both institutional and governmental levels should promote a balanced approach of human support alongside AI implementation. Even at this early stage of our growing dependence on AI in professional work, many people are required to max out their mental capacity for multitasking because productivity expectations have risen with the arrival of AI systems.
“And, critically, we must reject the premise that asking for a future with ‘better’ AI means that the AI should be ‘more human.’ The most ethical AI – the one that embraces its artificiality, making its limitations clear rather than masking them – might be better than human.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”