Francisco Jariego
Francisco Jariego is a futurist, author and technology innovation researcher based in Madrid, Spain. This essay is his written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“AI systems will begin to play a much more significant role in shaping our decisions, work and daily lives. It is already happening, and it will continue, with both increasing adoption of AI functions and the improvement of AI systems as they specialize and deepen their effectiveness in multiple sectors and activities. The inhabitants of tomorrow will look back at our present moment not only as the era when AI arrived but as the time when we evolved the partnership between human and artificial intelligence they will inherit. That process is taking place right now with every step we take. We need to increase our collective consciousness about it. The process of technology adoption is well captured by sci-fi author Douglas Adams’ ‘Rules That Describe Our Reactions to Technologies’: ‘Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re 15 and 35 is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re 35 is against the natural order of things.’

We must build … a public education infrastructure that requires people to master AI literacy and norms that foster people’s openness to new ways of learning and doing … and new approaches to intellectual property (and copyright in particular) that incentivize innovation and creativity while allowing the evolution of AI systems, integration of information, knowledge and a true jump in the ‘wisdom of crowds.’

“Most people today are outpaced by the speed of change in present-day technologies. Some people – typically a small minority – are able to adapt fast and gain advantage. The more new technologies we have and/or the faster the technological change, the more inequalities will be created, increasing social pressure and conflict. Thus, the challenge for human societies in the age of AI is in keeping up with and adapting to changes and opportunities and addressing human diversity in its broadest possible meaning.

“Optimists might think that the new digital technologies related to ‘intelligence’ (artificial, general and super intelligence) are likely to offer us plenty of new and better ways to deal with this challenge. I see a rough road ahead with, possibly, much more promising benefits to follow:

1) “Technology adoption will offer amazing and incredible opportunities for people who take advantage of them quickly. Most people, however, will adopt new technologies much more slowly, and some will never adopt them. As communications researcher Everett Rogers’ famous diffusion of innovations model explains, ‘progress’ (new products, services, businesses and economic productivity) can lead to some useful change for humanity while it will also lead to social disruption and sometimes to chaos.

2) “AI development and its applications together with developments in areas like neuroscience will eventually drive us to better understand and, perhaps, even solve, some historical ‘philosophical’ challenges, for example, the meaning of intelligence and consciousness. If and when that happens, we will likely be facing a ‘transformational’ moment comparable to those found in the largest breakthroughs in science, such as relativity, quantum mechanics or the discovery and development of antibiotics.

“Meanwhile, there are plenty of challenges and opportunities deeply interlinked at the individual and societal levels. Opportunities to capitalize are highly dependent on culture and ideological positions. Society’s resilience depends on the retention of human agency and upon educating individuals, addressing social and economic inequality and rethinking two critical building blocks tied to the economics of information: intellectual property and scientific research.

“At the individual level people must:

  • Understand how AI works (not simply how to use it).
  • Apply critical thinking about AI outputs, recognizing bias and limitations.
  • Experiment deliberately: Constantly try new things and be open to change.
  • Consciously collaborate in communities of practice: Share learning, reduce isolation.
  • Cultivate their uniquely human capacities, in continuous evolution.
  • Build their ‘hybrid’ skills: Combine human domain expertise with AI literacy.
  • Embrace the human-plus-AI ‘centaur metaphor,’ in which humans delegate tasks – not authority – to AIs by defining specific roles for AI while maintaining oversight to ensure quality and factual accuracy.

“At the societal level we must build:

  • A public education infrastructure that requires people to master AI literacy, along with norms that foster people’s openness to new ways of learning and doing.
  • Transparency requirements that include the simplification of all areas related to management and administration and the ability to appeal errors based on incorrect or misused data. (Bureaucracy is the cancer of society; information overload is a dead weight dragging us down.)
  • New approaches to intellectual property (and copyright in particular) that incentivize innovation and creativity while allowing the evolution of AI systems, integration of information, knowledge and a true jump in the ‘wisdom of crowds.’
  • New incentives for research, sharing and integration of knowledge.
  • New norms or requirements for business (especially tech) and government that favor the public good over profit and control motives.

“If we are unable to integrate and adapt as a society to the capabilities of new technologies and – in particular – artificial intelligence, the risk is stagnation and/or collapse.

“The future will always be weird for inhabitants of the present. It is just the opposite for inhabitants of the future (whatever that future will be), because one of the fundamental advantages of the human species is adaptation.”


This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”