Severin Field
Severin Field is a doctoral student and researcher at the University of Louisville Cybersecurity Lab. He has co-authored work with researchers such as David Krueger, examining the divergence of opinion among experts regarding AI risks. This essay is his written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“The media and information environment is confused and perspectives on AI vary wildly. Many people don’t think at all about the future of AI. Many more people simply imagine that future AIs are likely to be ever-more-useful as chatbots on their phones. A smaller group speculates about a potential future in which AIs: not only answer questions but out-think humans; quickly execute tasks that would take people many hours, days or months to complete; shape the physical world via autonomous control over computers and physical tools (including robots); and lead operational management of the global economy. I consider myself part of the most-focused, third camp. This makes the question of the future of human resilience in the age of AI unbelievably difficult to fathom.

“Leading AI companies such as OpenAI, Anthropic and Google DeepMind have all declared that their explicit goal is to build artificial general intelligence. They are investing billions of dollars in this endeavor, and their research labs are led by the brightest talent of our generation. That’s a lot of focus.

“I see no principled reason why artificial systems cannot eventually exceed human cognition across every domain. If progress continues, such systems will eventually emerge, and speculating about what comes after becomes uncomfortably difficult. Change of that magnitude can be unimaginable. This is why terms like ‘singularity’ or ‘event horizon’ are applied: in physics, you cannot, for example, see beyond the event horizon of a black hole.

“In my own thinking, I maintain very wide error margins as to when transformative AI will come and what it might look like. Predicting the date of transformative technological events is difficult. While I do not know when it will arrive, as long as progress continues (however fast or slow), I believe it eventually will.

“I often find myself disappointed at the degree of overconfidence influential tech leaders express in interviews that gain widespread attention. Of course, controversy generates attention. Overconfident predictions by well-known public figures who talk about AI such as Yann LeCun, Gary Marcus and Dario Amodei cut in many different directions; epistemic humility isn’t all that popular.

“I am quite concerned about AIs being used as weapons (‘killbots’), about AIs implemented as a means of social control by authoritarian governments, and also about all of the issues tied to loss-of-control risks – that humankind could fall so far behind the capabilities of future AIs, or of AI-augmented minds, that it loses out via natural selection.

“I’d like to share a simple observation about how fast technological change can advance to being an existential threat. (Historical data from Claude.ai):

‘Consider nuclear physics in the early 1930s. Ernest Rutherford, the father of the field, declared in 1933 that extracting energy from atomic transformations was ‘moonshine.’ [Couldn’t possibly work.] Within 12 years, Trinity lit up New Mexico with 21 kilotons of force. The scientific community’s predictions weren’t merely wrong – they were incoherently wrong, diverging wildly in direction and magnitude. Rutherford saw impossibility but Leo Szilard grasped chain reactions that same year and immediately filed a secret patent on the bomb. Niels Bohr had believed isotope separation would require turning an entire country into a factory – simultaneously prescient about the Manhattan Project’s scale and blind to how fast such mobilization could occur.’

“What’s the solution to such large problems with such high degrees of uncertainty and so much disagreement? Epistemic resilience and coordination. At a bare minimum everyone should: 1) Take this seriously. 2) Maintain wide error margins. 3) Focus on building adaptive capacity. I recommend reading Holden Karnofsky’s ‘Most Important Century’ series of essays.”


This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”