Arlindo Oliveira
Arlindo Oliveira is a distinguished professor of computer science at the Technical University of Lisbon, Portugal, and author of “The Digital Mind” and “Generative Artificial Intelligence.” This essay is his written response, in January 2026, to a question about how individuals and societies might build resilience in the AI Age; the full question is reproduced at the end of this essay. It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“Ensuring that humans flourish and retain their agency and free will in the age of artificial intelligence is not primarily a technical challenge, but a cultural, educational and civic one. AI systems will continue to grow in power and pervasiveness; the decisive question is whether they will amplify human capacities or quietly erode them. Addressing this question requires action along three closely related dimensions: how we teach people to think, how we inform them and how we help them understand both the promise and the dangers of AI. First, we must make the teaching of thinking itself a central goal of education and lifelong learning. This means cultivating skills that no automated system can easily replace: critical reasoning, abstraction, and the ability to question premises, detect inconsistencies and reflect on one’s own beliefs.

“In an age where answers are abundant and instantly accessible, the scarce resource is not information but judgment. Education should therefore focus less on rote acquisition of facts and more on reasoning, interpretation and synthesis. Importantly, this also applies to our interaction with AI systems: people must learn how to interrogate AIs’ outputs, challenge them, and use them as cognitive tools rather than as authorities. Teaching humans how to think – and how to think with machines – will be essential to preserving intellectual autonomy.



“Second, a flourishing society in the age of AI requires broad access to balanced, verifiable, and pluralistic information. AI systems increasingly mediate what people read, watch, and hear, which makes the integrity of information ecosystems a public good. Ensuring access to reliable information involves supporting high-quality journalism, transparent data sources, and robust fact-checking mechanisms, but also teaching citizens how to evaluate sources and recognize manipulation. Algorithms can personalize information efficiently, but without safeguards they may reinforce biases, fragment shared realities and undermine democratic deliberation. A healthy relationship with AI, therefore, depends on maintaining common epistemic ground: shared standards of evidence, accountability for falsehoods and institutional mechanisms that reward accuracy over engagement.

“Finally, we must help everyone develop a realistic understanding of both the potential and the risks of extensive AI use in daily life. AI can enhance productivity, creativity, accessibility and scientific discovery; at the same time, it can foster over-reliance, deskilling, surveillance and new forms of inequality. Public discourse should avoid both technological hype and reflexive fear. Instead, it should promote nuanced literacy about where AI systems excel, where they fail, and how their incentives are shaped. This includes understanding issues such as data bias, opacity, error propagation and the social consequences of delegating decisions to machines. Empowered users are those who know when to rely on AI, when to override it and when to step away from it altogether.

“Human flourishing in the age of AI will not be achieved by slowing innovation, but by aligning it with human values and capacities. By teaching people how to think, ensuring access to trustworthy information and fostering an informed understanding of AI’s strengths and limits, we can shape a future in which technology serves human development rather than diminishing it.”


This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”