
“Understanding the context of AI is everything. It is the difference between lacking an awareness of our vulnerabilities – such as cognitive offloading, motivation erosion and dependency – and capturing AI’s potential and documented benefits. Whether AI systems’ impact is positive or negative depends on us and our ability to prepare and adapt. The role of AI in the ‘dimensions of resilience’ is influenced more by the socio-economic and political environment than by the technology itself. We are simultaneously navigating multiple stressors beyond rapid technological change, including deepening political polarization and COVID’s lingering effects on our bodies, our development and our trust in institutions. Eric Kandel’s research shows that chronic stress rewires neurons, leaving brains hypersensitive to threat. We have collective PTSD, and the stress of the past 15-plus years shapes how we respond to change and will influence how we embrace or resist the ‘idea of AI.’
“AI has been in the works for decades, but ChatGPT and TikTok’s algorithms made it feel like a sudden development. The perceived suddenness, arriving after a pandemic amid political chaos, creates conditions that historically produce backlash, conflict and tribalism. This argues against our society’s ability to integrate AI thoughtfully, depriving us of many benefits.
“The benefits of AI – if it is applied appropriately – include enhanced coping with stress and adversity, reduced distress through accessible support and emotional disclosure, increased self-efficacy and sense of control, and improved problem-solving and productivity.
“We’re already seeing fear-driven legislative responses that suppress technology ‘for protection’ despite thin empirical support. Most problematic is that, in recent years, restricting technology has become a substitute for education and preparation. We’re now in danger of doing it again with AI, defaulting to control, however implausible, instead of building users’ competencies around how it works and how to use it effectively.
“The resistance is understandable. Disruptive technologies accelerate structural change, leaving lasting imprints on social trust and identity. Rethinking role-based identities, such as who we are in relation to our work and expertise, is threatening, especially when we’re already stressed. But resistance won’t work because AI is already woven into our environment in many ways we don’t even notice.
“Positive outcomes from AI depend on institutionalizing digital literacy capacities that aren’t currently widely taught, making those benefits conditional rather than automatic. Skills needed:
- Critical thinking to evaluate AI outputs rather than accept them reflexively.
- Co-creation skills, using AI as a thinking partner.
- Stress tolerance for navigating uncertainty and recognizing when anxiety is driving technology use.
- Collaborative problem-solving for human-AI teams.
- Ability to maintain meaningful human connections despite algorithmically mediated interactions.
- Knowing when to trust AI, when to verify and when to override.
- Understanding of AI’s limitations and biases and how design choices encode values.
“The human-AI relationship is reciprocal. Our AI systems structure what information we see and which behaviors get rewarded, which can shape how we perceive our competence and our emotional responses. Passively, our behaviors provide feedback, further training the system. Actively, we can make decisions to influence the structure of AI systems and our use of them.
- Build digital literacy now, learning how AI works conceptually, so we can practice critical evaluation.
- Shift from requiring restrictions to requiring skills training, teaching people to recognize the potential positives and negatives of the AI systems operating in the background of their lives and to evaluate their influence and outputs critically.
- Make intentional decisions about transparency and architecture and test for impacts on engagement and well-being.
- Invest in digital literacy infrastructure and require transparency in AI deployment.
“We must work to avoid vulnerabilities such as cognitive offloading, motivation erosion and dependency, and we must work consciously to capture AI’s potential and documented benefits. These benefits – if AI is applied appropriately – include enhanced coping with stress and adversity, reduced distress through accessible support and emotional disclosure, increased self-efficacy and sense of control, improved problem-solving and productivity, enhanced individual creativity, broadened idea generation and personalized learning.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”