
“AI hype has often been followed by sobering AI winters, so it’s impossible to precisely predict the impact of artificial intelligence on humanity in the next decade and beyond. Yet both current and historic technology adoption trends suggest that people will continue to avidly embrace AI and that this transformation may come with steep costs. The biggest danger in the coming years will be human complacency. Our species has a natural, innate yearning for effortless flow and ease of life to save energy and boost survival. Tech companies, too, have designed for frictionless user interaction in order to heighten engagement and profit. Just one example of built-in seamlessness: Some popular LLMs in 2025 were 50% more sycophantic than humans, according to research from Stanford and Carnegie Mellon.
“One upshot of humans’ tendency to take shortcuts when using LLMs can be an alarming level of automation bias, or deference to technology. A highly cited 2025 MIT study led by Nataliya Kosmyna showed that students’ use of LLMs resulted in homogeneous, middle-of-the-road prose that they didn’t really remember or value. Humans tend to rush to agreement as they defer to models. And frequent AI users often scored lower on tests of critical thinking, i.e., the cognitive skills that fuel independence of mind.
“In the social arena, people who consult sycophantic models on interpersonal conflicts become less willing to repair the bonds in question and more convinced of their own rightness, all while trusting pandering models more than neutral ones.
“Unthinking adoption is commonplace in the first years after any technology’s release. Only later do public conversations about tech’s impact mature and users grow more intentional. It’s encouraging, then, that signs of resistance to AI complacency are already emerging.
“For instance, the idea of building friction into tech is slowly gaining traction as a way to slow users’ snap judgments and curb incivility. (In one study, new users preferred a meditation app with built-in friction in the form of mandatory beginner tutorials over a seamless, just-start-meditating version.) Universities are moving to oral or pen-and-paper exams. I even see the rise of cringe comedy and public fascination with awkwardness as signs of a collective yearning for the life-friction that is, after all, the main driver of human growth and achievement.
“Resisting complacency in interacting with AI will likely also bolster the resilience needed to contend with an era of rising unknowns. Resilience is bendability, a capacity to adapt to change and recover from setbacks. This capability stems from gaining skill in meeting life’s errors, detours, difficulties and frustrations. Deferring to friction-free AI stokes the fallacy that life can be smooth, easy and predictable. By resisting this illusion, we can better design AI and better confront the complex challenges of our day.
“To be clear, I don’t oppose the wonders of an extended mind. As many note, humans have long used cognitive prosthetics, from stone tablets to smartphones. But let’s always remember that questions of value and benefit in tool use are nuanced, not zero-sum, and that no technological outcome is inevitable. Augmentation should always be complemented with human doubt, questioning and resistance. We only flourish when we confront, not avoid, life’s complexities, on- and offline.
“AI will help and hinder humanity. It will succeed and fail in spectacular and trivial ways. Unless we resist AI’s siren call of complacency and cultivate resilience born of fully contending with life, both our species and our own brief, fragile time on Earth will be diminished.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”