Haruki Ueno
Haruki Ueno is a distinguished global expert on AI and knowledge engineering, professor emeritus of the National Institute of Informatics of Japan and deputy editor of the journal CAAI AI Research. This essay is his written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“The reality of generative AI: Although only five years have passed since the emergence of LLM-based Generative AI, it has already become an indispensable tool in social activities. Yes, it will permeate even more areas of society in the future. But we must maintain a correct understanding and a calm approach to its use.

“These systems do not possess human-like intelligence. They are merely statistical learning and probabilistic generation systems dependent on training data. Currently, few users recognize these characteristics, and many mistake the fluency of the responses for genuine ‘intelligence.’ I can confidently assert that artificial general intelligence does not lie beyond the current path of Generative AI.


“The future of human resilience in light of change due to AI depends on several factors.

Ethical requirements for AI developers: AI is a technology that should bring prosperity to all of humanity. If its development is left solely to AI researchers working for company leaders interested only in capability and performance, it could lead to the ruin of humankind. AI researchers and their employers must be held to high ethical standards, and a legal system is necessary to support and enforce this.

AI literacy education: It is vital to provide AI literacy education starting from at least the middle and high school levels. Students should be required to understand the principles, utility and limitations of AI, as well as its differences from humans’ capabilities. Through appropriate practical experience, education should focus on humans’ successful coexistence with AI, specifically emphasizing that humans and AI are fundamentally different.

Tackling short-term and long-term challenges: In the short term, hybrid models that combine LLMs with knowledge-based AI are likely to be effective in countering hallucinations. In the long term, we require research into human cognitive mechanisms and AI development based on those findings, alongside the realization of innovative neural network models based on neuroscience. (Even if these efforts toward perfecting AI are successful, I still believe it is impossible to grant machines a human-like mind or consciousness.)

“I do believe we will soon see significant benefits in fields such as autonomous vehicles and autonomous caregiving robots. In all cases, a ‘human-centric’ philosophy must be maintained.

International cooperation: Views on the present and future of humans and AI can vary greatly from culture to culture. International collaboration among experts who share these concerns can propose effective approaches and frameworks for global AI governance. This is a task well-suited for the activities of the United Nations. A larger issue with global differences is the rapid advancement of autonomous weapons systems.

The crisis of asymmetric governance in AWS and LAWs: In the realm of dual-use AI, the rapid advancement of AWS (Autonomous Weapon Systems) is fundamentally altering the nature of warfare. This creates a dangerous ‘governance gap’ between global political systems:

  • “Democratic nations: In these societies, the development of LAWs (Lethal Autonomous Weapons) is met with significant internal opposition based on human rights, accountability and ethical red lines. Democratic governance naturally imposes constraints that prioritize moral responsibility.
  • “Autocratic/dictatorship states: These regimes are largely immune to such ethical governance or domestic pressure. For autocratic states, the strategic advantage of AI-driven warfare outweighs moral considerations, allowing them to pursue these technologies without the ‘ethical drag’ found in democracies.

“Unfortunately, the gap between ethical ideals and geopolitical reality in the realm of AI in warfare is likely to persist. While it is a grave concern that dictatorships may gain a tactical edge by ignoring AI ethics, there is a distant, albeit cynical, hope: that warfare might eventually shift into a conflict strictly between machines, potentially sparing human life on the battlefield.”


This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”