
“In 2017, we named our strategic foresight practice Techistential, a play on technology and existential. Today, humanity faces both technological and existential conditions that can no longer be separated. Our existential condition is an uncertain one, considering the inherent dualities, paradoxes and tensions of life.
“In the future, we may all come to realize that our main worry should not be AI suddenly turning evil, but rather the damage that can be caused by accidents, misalignment and shortsightedness. If humans fail to become sufficiently AAA (anticipatory, antifragile and agile), rapidly learning machines could surpass us.
“Martin Heidegger, the German existential philosopher, is known for challenging the view that humans can actually master technology and solve whatever collateral issues arise as it evolves. As technology continues to develop, it may reveal itself to be beyond our involvement; as it grows beyond our control, it is no longer merely a human activity. This paradox of technology – the magic at one end and the hazards at the other – gives technology a unique status. At the very least, technology’s existential risks lie in Heidegger’s observation that it ‘drives out every other possibility of revealing.’ Technology is so dominant that it can eclipse all other ways we understand the world, for better and worse.
“Through the lens of existential philosophy, we each have the agency to explore contingencies, serendipity and emergence. Contingency is the idea that events are possible but not certain. Choice exists because of contingency. Our freedom as individuals is determined through our own choices and actions. If everything were predetermined – if life were fixed by design – we would lack choice and power.
Existentialism 2.0: Decision-making in our technological world
“Today, technology is shaping society by influencing decision-making and enabling manipulation at scale. Simultaneously, it impinges upon our individual existence as acting agents. Through AI, technology is challenging us in a realm historically specific to humans. As AI continues to develop, machines are becoming increasingly autonomous in making decisions. It is here that the use of technology confronts the existential dimension; here, we stand on the edge of our free will and our fundamental concepts of choice. Computationally rational technology is not neutral, because it drives away contingency and choice.
“Standing on the shoulders of Heidegger and fellow philosopher Søren Kierkegaard, it was Jean-Paul Sartre who so powerfully articulated the human condition with the phrase ‘existence precedes essence.’ By this, Sartre meant that our agency emerges through choice. While existence is indeterminate and thus unknowable, we are always defining our essence as it emerges and, in doing so, moving in a direction that we define. If technology is determining outcomes on our behalf, our agency is curtailed and our choices may slip beyond our control.
“We can work to apply this philosophical perspective to sense-making and decision-making in our contemporary technocratic environment.
What is the potential scope and severity of humans’ de-skilling?
“Given rapid advances in AI, the fundamental issue relates to both the potential reach of AI and our relationship with AI. We need not speculate on artificial general intelligence (AGI) or a superintelligent machine to wonder whether machines might still come to challenge us. The issue at hand is a question of understanding the nature of our own capabilities in relation to the nature of a machine’s computational rationality.
“With this in mind, we observe that AI is rapidly advancing up the decision-making value chain. Humans should remain wary of an inadvertent reliance on prescriptive algorithms – those that go beyond the pattern recognition of descriptive algorithms to actually recommend courses of action. We should not underestimate the potential scope and severity of our de-skilling by delegating our decision-making capabilities to algorithms. Reliance may slip easily into dependence.
“The question is not how much machines will augment human decision-making, but whether humans will remain involved in the process at all. If humans fail to sufficiently develop our capabilities, rapidly learning machines could surpass us.
“To shift the relationship between humans and machines, AI does not have to reach AGI. It just needs to become better than us at handling complex systems. To mitigate this existential challenge, we must become anticipatory, antifragile and agile (AAA), bridging short-term with long-term decision-making.
“More recently than the existential philosophers of the 19th and 20th centuries, present-day philosopher Nick Bostrom defined an existential risk as ‘one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.’
“While human extinction is the most obvious existential catastrophe in relation to AI, there is a wide spectrum of existential impacts short of extinction. The curtailing of humanity’s agency and choice is a concrete existential risk.
Could superstupidity be as dangerous as superintelligence?
“As AI advances, incomprehensibility can reach even higher levels. The fusion of technologies generates highly complex, unpredictable systems. As multiple AI systems interact, it becomes increasingly difficult to discern how algorithms make decisions, which exposes us to both human and machine errors. ‘Stupid’ machines in nonlinear environments can be dangerous, especially since the idea that machines cannot have goals is a myth. Goal-oriented machines have been in action for quite some time. An infrared-seeking missile has a goal based on what it is programmed to achieve: track, follow and strike a heat-emitting target.
“Complex systems in technology (robots, supercomputers, power and nuclear plants, communications, healthcare, semi-autonomous lethal weapons) all have many moving parts and interacting systems that can be prone to catastrophic failure, and every day we develop more-powerful computers. Have we developed an overreliance on increasingly complex and dynamic systems that are unpredictable and can fail? How easy would it be for autonomous machines, or humans using them, to make a consequential, maybe even irreversible, mistake that goes undetected?
“At its extremes, could superstupidity be as much of an existential catastrophic risk as artificial superintelligence? Superstupidity could take on multiple features, including over-trust in and overreliance on the underlying ‘intelligence’ of these systems. For instance, believing that AI can be a proxy for our own understanding and decision-making as we delegate more power to algorithms can be superstupid. Further, consider AI or data ineptitude: what might appear as incompetence may simply be algorithms acting on bad data, and more or better data may not help machines make improved decisions the way it can for humans.
“Determining whether AI is on the road to superintelligence or superstupidity may not matter as much as ensuring that humanity does not end up relying on AI without a solid understanding of the consequences. Maybe the existential risk is not machines taking over the world or reaching human-level intelligence, but rather the opposite, where human beings start thinking and responding like idle machines – unable to connect the emerging dots of our complex, systemic world.
Updating education and skills for human relevance is a priority
“Asking whether our own creations will reach or surpass human intelligence may be the wrong question. Reaching human intelligence is not a prerequisite for AI to cause irreversible damage, and AI doing dumb things – or us doing dumb things with it – can be as dangerous as superintelligence. Superstupidity can counter any level of intelligence.
“The film ‘Idiocracy’ (2006) is a dark comedy set in the distant future of 2505. In it, humanity relinquishes control of society to advanced technology systems managed by multinational corporations. As these AI systems evolve, humans themselves become increasingly super-stupid and entirely dependent on the controlling technology. This movie acts as a satirical warning – today, we must ensure it does not become more prophetic than it already seems to be.
“To ensure that ‘Idiocracy’ is not a harbinger of the future, updating our education system has now become an existential priority. Education’s effectiveness in problem-solving should be evaluated on whether it can help humanity become relevant and future-ready for our complex 21st century. We should inspire passion, nurture curiosity, emphasize uncertainty, develop range and foster critical thinking, using Socratic questioning to examine assumptions.
“Most importantly, we need to form a new lifelong relationship with inquiry, experimentation and failure (which goes hand-in-hand with creativity). We must harness curiosity and diverse perspectives, because today’s standard knowledge will never solve tomorrow’s surprises. These features could help us problem-solve out of the most complex, systemic and existential risks.
“Just as we have made the ‘language’ of math a requirement, learners should now be fluent in technology’s usages, abuses and impacts. Proper interaction with technology – including knowing truth from fiction, information from disinformation and entertainment from addiction – will separate those who find themselves enslaved by our new technologies from those who harness them for their own aims.
“We must recognize that education does not end with the completion of formal schooling, nor is it confined to the classroom. It is instead a constant, lifelong process of learning, unlearning and relearning – extending from the playground to the boardroom and beyond.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”