Jamais Cascio
Jamais Cascio is a well-known futurist and lead author of “Navigating the Age of Chaos: A Sense-Making Guide to a BANI World That Doesn’t Make Sense.” This essay is his written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“Here’s the dilemma: It’s highly likely that AI systems will play a much more significant role in shaping our decisions, work and daily lives over the next few decades, but they will likely do so in a way that undermines our personal, cultural and social resilience. Resilience requires that people can recognize their own preferences and needs and can act on them. It relies on people having the knowledge of how something works and how it might fail. Resilience requires that people think critically, pay attention and recognize problems. Basic resilience depends on the ability to develop and maintain backup capacities and the emotional and economic resources that allow for continued action in a period of system failure. Ideally, it necessitates that people be able to freely communicate and share ideas with each other.

“It’s entirely possible for machine-substrate ‘minds’ to support and strengthen each of these measures of resilience. But that’s not what we have now. Instead, we have technology pundits saying, ‘This technology will take your jobs (and might even kill you), and we’re going to put it in everything,’ and tech companies saying, ‘It will lie to you and it might advise you to kill yourself, but please don’t call it slop.’


“The current form of AI can actively weaken every characteristic of human resilience; in some cases, it seems intentionally designed to do so.

“The ongoing wave of generative machine learning technology has a wide array of drawbacks. Some are ethical, such as the plagiarism at the heart of most LLMs, the environmental footprint (especially concerning water) and the battles over restrictions and regulations. Some are economic, with the spiraling amount of investment meeting a persistent lack of actual profit. Some are technical, as it becomes increasingly clear that the ‘hallucination’/confabulation problem is intrinsic to the generative language model structure, and that the outputs of this wave of AI technology can simply never be 100% trusted. And a great many of the drawbacks are cultural, from sycophancy to suicide encouragement to the measurable decline in critical thinking skills arising from LLM use.

“Unfortunately, none of this means that the generative AI wave is going to fall apart any time soon. The people at the forefront of the ethical concerns – creatives, environmentalists, regulators – have very little power. The mass of money tied up in the technology may make the whole thing ‘too big to fail’; even in a ‘bubble’ scenario the sheer size of the main players means that they’ll likely survive, even as startups and innovators get swallowed up or disappear.

“Hallucinations may become a non-issue, whether by brute-force correction algorithms, human software ‘janitors’ responsible for cleaning up code, or simple acceptance (whether through exhaustion or the previously mentioned decline in critical thinking). We’ll probably see the emergence of sufficiently functional tools to block or otherwise push aside AI for the more knowledgeable skeptics, paralleling the advertisements/ad-blocking paradigm. (Actually, internet advertising may be an interesting parallel here: ubiquitous, irritating, highly intrusive, barely functional – and the whole internet economy depends upon its continuance. Most people just put up with it, but a subset use tools to block it for themselves, even as tech companies try to get around those tools.)

“What we are headed for amounts to a world of getting by. There will be enough distracting entertainment and enough quick-turnaround of AI change with just-good-enough results to have people mostly accept it and go on with their lives. The distressing and the uncomfortable can quickly become the familiar and the banal.

“The people with power over these systems aren’t evil, for the most part; they are just focused on immediate returns. They’ll tell us that the next iteration of the AI will surely be the one to solve all of our problems. Undoubtedly, the Singularity will be a nifty sustainability strategy.

“In the meantime, companies and institutions focused on surveillance, face detection, thought policing and media control will eagerly continue to broadly apply these tools, as the drawbacks to all of this pale in comparison to the power offered by the present approach to AI.

“Although this all seems likely to me, it’s by no means inevitable. The cultural drawbacks mentioned earlier offer an important wild card in all of this. It is possible that the insults of the current AI paradigm – the sycophancy, the ‘AI girlfriends,’ the clear damage to cognitive capacities – may prove enough to trigger a backlash that incites action. The intrusive organizations may overplay their hand, generating enough bad publicity to limit cash flow.

“But one hard lesson I’ve learned over the 30-odd years of doing foresight work is that social transformation that depends upon changes to human nature is rare and highly unlikely. Probably the most likely catalyst for moving away from the distressing form of this future is the emergence of tools that offer most of the benefits with far fewer of the drawbacks. In other words, it may well be that the best hope for getting through the era of bad AI is for someone to finally develop good AI.

“As this should illustrate, I’m in no way anti-artificial intelligence, broadly conceived. I strongly suspect that the latter half of the century will be highly dependent upon advanced machine-substrate minds and better off for it. Looking at the broad spectrum of non-generative or only partially generative technologies – brain emulation, non-generative machine learning, regression analysis systems and similar narrowly task-focused but potentially highly efficient tools – there’s real potential for transformative developments. But that’s not where we are today and not where we’ll likely be for the next couple of decades.

“Resilience requires agency, the ability to recognize danger and act accordingly. The ‘AI’ tools our society and economy want to give us now actively undermine that process. Welcome to the Slop Future.”


This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”