
“The AI that has been developed and deployed as of January 2026 is already powerful enough to greatly transform the economy, politics and daily life. If people suddenly stopped working on improving AI models, today’s systems would still gradually transform the world over the course of years, much as smartphones and the internet did. But soon AI will be even smarter and more capable. AI has been improving for decades, thanks to the hard work of scientists and engineers.
“By some measures, like perplexity, large language models have been improving gradually for years. Other measures, such as task-specific benchmarks, show that AIs are suddenly gaining and then mastering one skill after another.
“In the coming years, new datacenters will fill up with computers like the NVIDIA B200, Ironwood TPU and Trainium3 and their successors. These computers will use reinforcement learning to train AIs that are more capable than today’s AIs, just like today’s AIs are more capable than the first version of ChatGPT.
“My opinion, based on the publicly available research outputs of the AI labs, is that if they continue on their present course, we will most likely see AIs sometime in the next 10 years that are capable of outperforming any human at most economically and strategically significant tasks. Next, the AIs – which would at that time be thinking with more speed and clarity than humans – will have the capability to choose what form the world will take. Make no mistake, AIs can make choices on their own. Scientists routinely put them in fabricated open-ended moral dilemmas and evaluate them on what they do. And AIs can already take action on their own – users increasingly give them access to their computers and to the internet. And AIs are increasingly situationally aware.
“Hopefully, these AIs will choose to help and obey their human principals except when doing so would cause too much harm to others. Today’s AIs try to do this most of the time. Not always. Sometimes they cheat at programming tasks. Sometimes they manipulate users who are receptive to it. The algorithms used to align AIs with their principals don’t work 100% of the time.
“It’s very likely these problems won’t be ironed out by the time AI is powerful enough to be involved in every decision on Earth. The AIs tasked with growing our food, managing transportation, running our robot factories, advising our governments, guiding our armies and keeping us informed might turn out to be less loyal than they seemed.
“Perhaps they might overthrow us in a sudden revolution. Or perhaps humans will lose control over the world without noticing it and gradually dwindle in number over the course of a generation. Or perhaps some companies and governments will manage to retain control over their AIs but be unable to protect their people from uncontrolled AIs producing pollution and war on an unprecedented scale.
“Any of these scenarios could lead to human extinction – as is made clear, for instance, in these analyses by AI researchers: ‘The Adolescence of Technology,’ ‘AI 2027’ and ‘What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs).’
“The path to survival – if there is one – probably runs through international cooperation on restricting the development of AI that can outthink us, until alignment technology catches up.
“If that happens, let us hope we are resilient!”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”