Jerome Glenn
Jerome Glenn is a global futurist, CEO of the Millennium Project and chair of the AGI Panel of the UN Council of Presidents of the General Assembly. This essay is his written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“Human resilience in the face of AI advances requires a targeted international effort to create and implement AI regulation. Since global governance of artificial general intelligence (AGI) will be so complex and difficult to achieve, the sooner we start working on it the better. Following are excerpts from my essay on this, originally published by Horizons, a publication of the Center for International Relations and Sustainable Development. Trillions of dollars are being invested in the development of, and infrastructure for, advanced AI. If it is managed well, the ‘next step’ in artificial intelligence – AGI – could usher in great advances in the human condition from medicine to education, longevity, global warming, the scientific understanding of reality and even to creating a more-peaceful world. However, if national and international regulation is not successfully carried out soon, it is possible that humanity could eventually lose control of what will become a non-biological intelligence far beyond human understanding and awareness.

“Successful human resilience and adaptation during this time of transformation require that policymakers and the public begin now to work to achieve the extraordinary benefits of advanced AI while avoiding catastrophic – or even existential – risks. …

Humanity has never before faced a greater intelligence than its own

“In the past, technological risks were primarily caused by humans’ misuse of technology. We now also face the possibility that potential risks and threats might be due to the actions of AGI itself. Without regulations for the transition to AGI we could be at the mercy of a future non-biological intelligent species. Today, there is a competitive rush to develop AGI without adequate safety measures. As Russian President Vladimir Putin famously warned about AI development, ‘The one who becomes the leader in this sphere will be the ruler of the world.’ So far, there is nothing standing in the way of uses of AI – or AI itself – increasing a dangerous concentration of power the likes of which the world has never known.

“Nations and corporations are prioritizing speed over security in the development of AI, undermining potential national governing frameworks and making safety protocols secondary to economic or military advantage. There is also the view that Company A might feel a moral responsibility to get to AGI first to prevent Company B from getting there first, because A believes it is more responsible than B. If Companies B, C and D hold the same belief as Company A, then each company believes it has a moral responsibility to accelerate its race to achieve AGI first. As a result, all might cut corners along the way to become the first to achieve this goal, leaving humanity open to danger. Such competition is also being undertaken in nation-states’ military development of AGI.

Unregulated AGI outcomes are extremely dangerous

“We must initiate the necessary procedures to prevent the following potential outcomes of unregulated AGI, which a research group I lead has documented and presented to the UN Council of Presidents of the General Assembly: 

“Power concentration, global inequality and instability – Uncontrolled AGI development and usage could exacerbate wealth and power disparities on an unprecedented scale. If AGI remains in the hands of a few nations, corporations or elite groups, it could entrench economic dominance and create global monopolies over intelligence, innovation and industrial production. This could lead to massive unemployment, widespread disempowerment affecting legal underpinnings, loss of privacy and the collapse of trust in institutions, scientific knowledge and governance. It could undermine democratic institutions through persuasion, manipulation and AI-generated propaganda and heighten geopolitical instability in ways that increase systemic vulnerabilities. A lack of coordination could result in conflicts over AGI resources, capabilities or control, potentially escalating into warfare. If AGI arrives before regulation of it does, many new and complex issues of intellectual property, liability, human rights and sovereignty could completely overwhelm domestic and international legal systems.

“Existential risks – AGI could be misused to create mass harm or exert control, or be developed in ways that are misaligned with human values. Furthermore, it could even act autonomously beyond human oversight, evolving its own objectives according to self-preservation goals already observed in current frontier AIs. AGI might also seek power as a means to ensure it can execute whatever objectives it determines, regardless of human intervention. National governments, leading experts and the companies developing AGI have all stated that these trends could lead to scenarios in which AGI systems seek to route around or overpower humans. These are not far-fetched science-fiction hypotheticals about the distant future – many leading experts fear that these risks could all materialize within this decade, and their precursors are already occurring. Moreover, leading AI developers have thus far had no viable proposals for preventing these risks.

“Irreversible consequences – Once AGI is achieved, its impact may be irreversible. With many frontier forms of AI already showing deceptive and self-preserving behavior and the push toward more autonomous, interacting, self-improving AIs integrated with infrastructures, the impacts and trajectory of AGI can plausibly end up being uncontrollable. If that happens, there may be no way to return to a state of reliable human oversight. Proactive governance is essential to ensure that AGI will not cross red lines, leading to uncontrollable systems with no clear way to return to human control.

“Weapons of mass destruction – AGI could enable some states and malicious non-state actors to build chemical, biological, radiological and nuclear weapons. Moreover, large AGI-controlled swarms of lethal autonomous weapons could themselves constitute a new category of WMDs. 

“Critical infrastructure vulnerabilities – Critical national systems (e.g., energy grids, financial systems, transportation networks, communication infrastructure and healthcare systems) could be subject to powerful cyberattacks launched by or with the aid of AGI. Without national deterrence and international coordination, malicious non-state actors – from terrorists to transnational organized crime – could conduct attacks at a large scale. 

“Loss of extraordinary future benefits for all of humanity – Properly managed, AGI promises improvements in all fields, for all peoples – from personalized medicine, cures for cancer and innovative cell regeneration to individualized learning systems, the end of poverty, significant mitigation of climate change and the acceleration of other scientific discoveries with unimaginable benefits. Ensuring such a magnificent future for all requires global governance, which begins with improved global awareness of both the risks and benefits.

Managing our AI transition is vital to human resilience

“We need to create national and international regulations for how AGI is created, licensed, used and governed before it accelerates its learning and emerges into a form of artificial superintelligence (ASI) beyond human control. We must work to manage the transition from today’s frontier AIs to AGI. How well we manage that transition is likely to also shape the transition from AGI to ASI.

“We can think of artificial narrow intelligence (ANI) as our young children, whom we control – what they wear, when they sleep and what they eat. We can think of AGI as our teenagers, over whom we have some control – which does not include what they wear or eat or when they sleep. And we can think of ASI as an adult over whom we no longer have any control. Parents know that if they want to shape their children into good, moral adults they have to focus on the transition from childhood to adolescence. Similarly, if we want to shape ASI, then we have to focus on the transition from ANI to AGI. And that time is now.

“The greatest research and development investments in human history are now focused on creating AGI. Without national and international regulations, many AGIs from many governments and corporations could possibly continually rewrite their own codes, interact and give birth to many new forms of artificial superintelligences beyond our control, understanding and awareness. Governing AGI is the most complex, difficult management problem humanity has ever faced. … We must raise awareness and educate world leaders on the risks and benefits of AGI and why national and global actions are urgently needed. The following items should be considered during a UN General Assembly session specifically on AGI:

“A global AGI observatory is needed to track progress in AGI-relevant research and development and provide early warnings on AI security to UN member states. This observatory should leverage the expertise of other UN efforts, such as the Independent International Scientific Panel on AI (created by the UN Global Digital Compact) and the UNESCO Readiness Assessment Methodology.

“An international system of best practices and certification for secure and trustworthy AGI is needed to identify the most effective strategies and provide certification for AGI security, development and usage. Verification of AGI alignment with human values, controlled and non-deceptive behavior and secure development is essential for international trust. 

“A UN Framework Convention on AGI is needed to establish shared objectives and flexible protocols to manage AGI risks and ensure equitable global benefit distribution. It should define clear risk tiers requiring proportionate international action, from standard-setting and licensing regimes to joint research facilities for higher-risk AGI, and red lines or tripwires on AGI development. A UN Convention would provide the adaptable institutional foundation essential for globally legitimate, inclusive, and effective AGI governance, minimizing global risks and maximizing global prosperity from AGI. 

“A feasibility study on creating a UN AGI agency is suggested. Given the breadth of measures required to prepare for AGI and the urgency of the issue, steps are needed to investigate the feasibility of a UN agency on AGI, ideally in an expedited process. Something like the International Atomic Energy Agency (IAEA) has been suggested, with the understanding that AGI governance is far more complex than nuclear energy governance; hence, such an agency will require unique considerations in the feasibility study. Uranium cannot rewrite its own atomic code, it is not smarter than humans, and we understand how nuclear reactions occur. Managing atomic energy is therefore much simpler than managing AGI.

We are already in a ‘final countdown’ and we must push forward

“Global governance of AGI will be complex and difficult to achieve. We must begin today or the great AGI race will continue unabated. This cannot be a business-as-usual effort. National licensing systems and a UN AGI agency have to be in place before AGI is released on the Internet.

“Eric Schmidt, former CEO of Google, said in 2025 that the ‘San Francisco Consensus’ is that AGI could be achieved in the next three to five years. Political leadership will have to act with an expediency never before witnessed. Geoffrey Hinton, one of the ‘fathers of AI,’ has said that such regulation may not be possible, but we have to try. During the Cold War in the 1950s and ’60s, it was widely believed for a time that a nuclear World War III was inevitable. The shared fear of an out-of-control nuclear arms race led to agreements to manage it. Similarly, the shared fear of an out-of-control AGI race should lead to agreements capable of managing that race.”


This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”