
“The massive integration of artificial intelligence into the fabric of contemporary civilization should not be understood simply as a new technological revolution, but as a process of perpetual co-evolution between it and us. In this paradigm, human beings and algorithms no longer operate in separate spheres. On the contrary, they influence each other – no longer only because we are the builders, the creators, but because a constant feedback loop now binds us.
“For example, we no longer speak only of the information that humans provide to AI for its training, but rather of AI using its own information generated by our prompts as new input for its own training. Meanwhile, human decisions, based on data, are being influenced by the information generated by AI. And all of this is shaping our preferences, behaviours and social structures. This transition, fraught with both optimistic promises and structural risks, demands a profound reconfiguration of our social, labour, educational and recreational lives.
“Human decision-making, which will determine the future world, continues to be based on education; this is at the centre of the transformation. AI allows the classroom to be reimagined, liberating the potential of teachers by taking on administrative and routine management tasks in education. All of this allows greater weight to be given to pedagogy, which must be rewritten. We are facing a great opportunity for educational adaptation in the broadest sense; it is not only formal classroom education that is shifting, but lifelong learning itself is changing.
“This requires an adaptation unlike anything humanity has ever faced before. This adaptation surely resides in ‘cognitive offloading,’ since by using AI as a functional structure that assumes low-level tasks, students can free up mental resources to focus on critical thinking, computational and strategic thinking and deep creativity.
“However, there is a risk of cognitive atrophy. If we uncritically delegate our capacity for independent analysis, if we delegate our reason, we lose something that algorithms can never simulate; we will lose common sense, a sense of empathy and even love. We run the risk of eroding fundamental faculties that make us human. We can lose long-term memory and the mental discipline necessary to detect flawed logic. Therefore, we must maintain a focus on learning to formulate complex problems, exploring autonomously and maintaining unshakeable critical judgment.
“In the labour field, significant impact is imminent. The need for adaptation cannot wait. The inevitable displacement of workers and the reconversion of skills does not necessarily imply the end of work as such, but a metamorphosis. While mechanical roles disappear, new essential functions emerge in areas such as data analysis, cybersecurity, education, automated industry and, of course, in the ethical management of AI. The key to navigating this transition is permanent ‘reskilling.’
“The workforce must incorporate new skills, reconvert skills that have become obsolete and enhance human competencies that algorithms and computers cannot yet successfully replicate: empathy, creativity and the resolution of ethical problems.
“Despite AI’s potential for cognitive enhancement, the risks of labour precariousness cannot be ignored. Phenomena such as the so-called ‘uberisation’ of various professions and constant algorithmic surveillance can undermine worker security and autonomy. To avoid this, responsible regulation and the protection of professional identity are essential so that efficiency does not erode the ethical agency of the individual.
“Beyond the office and the classroom, AI is altering the very structure of our social interactions. On the one hand, it offers the possibility of substantial improvements in the manufacture of products, services, management and logistics. But, on the other hand, we should be concerned by the massive accumulation of personal data that jeopardizes privacy and facilitates the dissemination of algorithmic biases or misinformation.
“A critical aspect of this new reality is the appearance of ‘artificial intimacy.’ Links with AI agents can alleviate loneliness for many, but they also pose a social risk. We run the risk of decreasing human tolerance: towards ourselves, our equals, and all humans. As we become accustomed to interacting with entities designed to please us, we may lose the capacity to manage the frictions necessary for growth in real interpersonal relationships and the evolution of life in society, becoming humans who share a physical space but lack real coexistence.
“Artificial intelligence must become a complement, a companion or even a peer to humans, but never a total replacement for humans in our lives. Successful human-AI integration will depend on our ability to maintain balance at perhaps the most important point in the history of humanity. We have to find a way to maintain dominance over evolution, not only of ourselves, but over that of the world as a whole. We must decide to steer society in a direction in which technology acts as a catalyst for human existence and excellence, not as a veil that opaquely masks our capacity to think, to feel and to maintain our autonomy to self-determine our future.
“As we evolve with these systems, how might the essence and elements of human resilience change? In the dizzying scenario of digital transformation our understanding of our strengths as a species is undergoing a profound change. Traditionally, resilience has been defined as a static personality trait, one that is not always present in all of us – as an individual ‘shield’ that allows a person to recover their original state after a crisis. Today, in the context of our coexistence with intelligent systems, this definition has become too small. Today, resilience is evolving towards a dynamic and multi-level social capacity. It is no longer just about resisting impact, but about cultivating an aptitude to absorb disturbances and transform positively into an active component of the human-technology binomial.
“This new resilience, which we are still defining, does not occur in a vacuum; it unfolds in three interconnected dimensions: psychological, social and organizational. Accelerated technological transformation has given rise to a new stress, new phobias and some resistance to change due to new fears. At an individual level, resilience is now manifested through cognitive flexibility and emotional regulation. The modern worker must possess a high degree of self-control so as not to be overwhelmed. In this sense, artificial intelligence presents a fascinating duality.
“On the one hand, AI can act as a cognitive coach. Studies show that the appropriate use of language models can help humans to reformulate complex objectives and explore alternatives that were previously invisible, thus strengthening their capacity for adaptation. It works as an amplifier that provides real-time emotional support and tools for self-reflection. However, this advantage carries an important warning: the risk of dependence. If humans rely excessively on what algorithms produce to manage their stress or make decisions, they could weaken their independent psychological immunity. The challenge consists in using AI to enhance our faculties, not to atrophy them, ensuring that we remain equipped to act when the technology is not available.
“Such resilience is not a solitary effort. In its social dimension, it is nourished by support networks, trust and shared social norms. Social support is often the best regulator of ‘digital overload.’ People who tap into trusted collaborative communities that share resources and knowledge in the face of technological disruptions are much more robust than isolated individuals. AI can be a conduit for collective knowledge in such groups. Technology allows group wisdom to flow more efficiently. Social resilience can become a flow of cognitive cooperation in which the machine facilitates coordination and empathy, alongside ethical and societal responsibility, under the guidance of human judgment. Social cohesion is thus strengthened and the digital transition does not fragment the community but unites it.
“For resilience to be sustainable, it must be integrated into the DNA of the organizational structures of society. Resilient organizations are those that cultivate psychologically safe environments and practice compassionate leadership. These elements are fundamental for maintaining well-being during technological and industrial revolutions, which often tend to be traumatic.
“The synthesis of all these considerations is ‘intelligent resilience.’ This concept integrates ethical wisdom and human empathy with the analytical power of machines. Its objective is not only efficiency, but the prevention of systemic failures and the preservation of human agency in an automated world.
“Although AI offers unprecedented opportunities for social, work and educational progress, its success depends on our ability to adapt proactively. The ultimate challenge is not to compete with the machine, but to strengthen that which no AI can replicate: to be fully resilient humans.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”