
“Artificial intelligence is no longer a distant possibility. It is shaping how we work, decide, learn and relate to one another today. The real question before us is not whether AI will soon play a larger role in our lives, but whether we will allow that role to be defined by short-term efficiency or long-term human flourishing.
“‘Long-path’ thinking asks us to widen our time horizon. It reminds us that the most consequential technologies in history, from the printing press to industrialization to the internet, did not simply change tools. They changed values, institutions and how people understood their place in the world. Those transitions were rarely smooth. They involved resistance, overreach, fear and repair. AI represents a similar inflection point, but one that operates at the level of cognition itself, accelerating change while compressing the time available for reflection.
“Unsurprisingly, responses to AI are polarized. Some embrace it as a source of productivity and problem-solving. Others resist, fearing job displacement, surveillance, or the loss of meaning. Many experience both at once. From a long-path perspective, this tension is not a flaw in the system. It is the work. Societies grow not by avoiding struggle, but by learning how to move through it without abandoning their core commitments.
“Resilience in an AI-shaped world will require new capacities. Cognitively, we must strengthen sensemaking. As algorithmic outputs grow more fluent and authoritative, the human task shifts from producing answers to interpreting them. This means understanding where AI systems are useful, where they are biased, and where they should not be trusted at all. It also requires epistemic humility, the discipline of recognizing that speed and confidence are not the same as wisdom.
“Emotionally, AI challenges our sense of worth. In a world optimized for comparison and performance, resilience depends on sustaining intrinsic motivation and dignity beyond metrics. Practices that slow us down, such as reflection, ritual and time in community, become essential infrastructure, not luxuries.
“Socially, the risks are collective. AI can fracture shared reality through hyper-personalization, deepen inequality through concentration of power, and erode trust through opaque decision-making. Long-path thinking points us toward relational resilience: stronger communities, participatory governance and norms of transparency that keep humans meaningfully involved in consequential decisions.
“Ethically, the challenge is to become great ancestors. This means developing the moral imagination to anticipate downstream effects, including impacts on people who are not yet born. It means setting boundaries around uses of AI that undermine dignity or agency, even when those uses promise short-term gains. Becoming good ancestors requires courage, restraint and a willingness to prioritize long-term resilience over immediate advantage.
“The actions we take now matter. Education systems must prioritize lifelong learning, critical thinking and human capabilities such as care, judgment, creativity and wisdom. Workplaces must redesign roles so humans remain stewards of context and values, not just supervisors of automation. Governments and institutions must adopt anticipatory governance tools, including foresight and scenario planning to act before harms become entrenched.
“New vulnerabilities will emerge, including over-reliance on algorithmic judgment, skill erosion, manipulation at scale and a subtle loss of agency. Our coping strategies must therefore focus on discernment, connection and time horizon expansion.
“The task before us is not to outrun AI. It is to outgrow our short-termism. If we succeed, we can ensure that these systems serve the long arc of human and planetary flourishing, and that those who come after us will look back and recognize that we chose to become the great ancestors our futures needed.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”