The Essays – Chapter 8
Overcoming Complacency and the Lure of Convenience

Future of Human Resilience in the AI Age

Featured Contributors to Chapter 8: The 15 essay responses on this page were written by Rosalie Day, Maggie Jackson, Jamais Cascio, Daniel Rasmus, Naomi Baron, Frank Kaufmann, Jon Lebkowsky, Adam Clayton Powell III, Alan Inouye, Glenn Ricart, Kevin Taglang, Ken Rogerson, a law professor who preferred to remain anonymous, Bronwyn Williams and Larissa May. (Their essays are all included on this single long-scrolling web page. They are organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant.)


The first section of Chapter 8 features the following essays:

Rosalie Day: ‘Future generations may accept displacement by AI as their lot in life.’ Due to humans’ tendency to ‘take shortcuts that serve immediate needs, most will respond with a despondent shrug.’

Maggie Jackson: Muster agency; avoid complacency. ‘Resilience stems from gaining skill in meeting life’s errors, detours, difficulties and frustrations.’ … Don’t defer to ‘friction-free’ AI; it leads to loss.

Jamais Cascio: ‘The current form of AI can actively weaken every characteristic of human resilience; in some cases, it seems intentionally designed to do so.’ Welcome to the Slop Future.

Daniel Rasmus: AI is stealthily sliding into everything we do, suggesting, summarizing, drafting, routing and efficiently becoming a default source of decision-making and ‘truth’ even though nobody really agreed to let it.

Naomi Baron: We must think carefully about ‘how resolute our willpower to resist negative aspects of AI is and how strongly we value understanding the technology – and its potential consequences.’

Frank Kaufmann: Compare AI’s arrival to pouring water into a vessel. It takes the shape of the vessel. Human action causes human change and … ‘the vast majority of people will unconsciously lemming along.’

Jon Lebkowsky: AI may follow the path of impact described in a sci-fi story in which explorers find a world that seems primitive, but in the end discover the tech is so deeply embedded that it is invisible.

Adam Clayton Powell III: Fast-paced digital life had already dialed down most humans’ willingness to focus on getting the facts from reliable sources the right way. Unless they wise up, their AI use will magnify the damage.



Rosalie Day
‘Future generations may accept displacement by AI as their lot in life.’ Due to humans’ tendency to ‘take shortcuts that serve immediate needs, most will respond with a despondent shrug.’

Rosalie R. Day, a co-founder of Blomma, a platform providing digital solutions to clinical research studies, wrote, “Responses to a larger role played by more-advanced AI in human activity will be shaped according to cultures, attentions and abilities. In an individualistic society like the one seen today in digitally-connected spaces, stratification will increase with AI power.

“Perceptions of unfairness are disruptive and discourage appropriate adaptation. How people respond often depends on whether they perceive AI systems to be fair and/or on how beneficially the systems fit their needs and beliefs. AI is biased by its model design and training data and, as such, ‘fair treatment’ is in the eye of the beholder and can vary from human to human. Because of this, humans’ choices based on the outputs they receive from AI systems can be sources of conflict.

“One of our most human attributes is our desire to be treated fairly. We inherently dislike biases, and cognitive dissonance makes us uncomfortable. In addition, humans are comforted by being on the bandwagon, finding agreeable groupthink and just plain ‘belonging.’ Those who find a like-minded group are likely to adopt its own set of biases: confirmation bias, anchoring bias and availability bias (the habit of taking mental shortcuts that estimate probabilities based only on how easily they come to mind). Unfortunately, monetized algorithms exacerbate this human tendency.

If we continue on the current trajectory, future generations may accept displacement by AI as their lot in life. … Societies with pervasively embedded AI are going to fundamentally change the interdependence of government and business to advantage the controllers of AI. Accordingly, the values which drive cultural norms will evolve.

“Because the general population has so little grasp of information technology, the self-declared progress of the developers of AI systems is shaping the overarching political economy, deepening the interdependencies of government and economic frameworks.

“If we continue on the current trajectory, future generations may accept displacement by AI as their lot in life. Because humans’ attentions are limited and constrained and because humans tend to take short-cuts that serve their immediate needs, most of the population will respond with a despondent shrug.

“Over the past two decades, businesses have been using automated online systems that analyze word frequencies to rank job applicants. To think of this software as equivalent to a toddler’s shape-sorter toy would not be far off. Executives and human resources departments turned to it to more easily handle the process. In the past few years we discovered that the software was systemically embedding bias toward linear careers. The systems did not do as promised – they did not optimize hiring on an individual basis. Yes, they cut process costs, but they favored the job applicants who most resembled the hiring managers themselves. No human ever questioned how the software accomplished this work – that would be tantamount to second-guessing the technology, that emblem of progress and ‘problem solver.’

“Responses to AI are mostly following a path similar to our adoption of mobile phones: passive acceptance. The betterment of human communications allowed by the cell phone was followed by a more-advanced networked technology. The smartphone enabled social networks to proliferate, misinformation to go viral, the emergence of FOMO (fear of missing out) and an influencer economy. Thanks to smartphones, map-reading skills and more are obsolete. The ensuing splintering of human discourse ushered in the post-fact era, brain rot and AI-generated slop. Most of us have adapted to it – but at an inestimable social cost.

“Whether decisions are made by the AI itself or by the governments, employers or social influencers who adopt AI, a majority of people will not be attentive enough to weigh the relevant factors as this next ‘more-advanced’ technology takes its place. As one example, AI-advantaged populations will abdicate decisions to AI, rationalizing the hand-off as ‘fairness.’ Humans fail to grasp that not all technological change is progress.

Pulling oneself up by one’s bootstraps – by education or grit – will cease to be valued. To me, this has been the definition of individual resilience, the survival instinct, ingenuity, the persistent elements of humanity. However, if human adaptation to AI results in aggregations of individuals who think alike, then any outliers who display more acute survival instincts may not be tolerated.

“As we evolve with these systems, how might the essence and elements of human resilience change? What it means to be human will not be changed by AI, therefore the ‘essence’ will not evolve. We will remain social animals. The characteristics of specific cultures will evolve their values in response to their respective AI-adapted political-economic frameworks. Societies with pervasively embedded AI are going to fundamentally change the interdependence of government and business to advantage the controllers of AI. Accordingly, the values which drive cultural norms will evolve.

“If the prevalent societal message is that these AI systems are going to replace you, the work that you do or the creativity you bring, then it signals to human beings – social animals – that you do not matter. Collectivistic societies, characteristically exhibiting concern for the good of their group, will be more resilient and, counterintuitively, protective of a variety of human attributes.

“Buy-in to Adam Smith’s ‘invisible hand’ did not inevitably lead to the current form of U.S. capitalism. Controllers of AI, in their myopic quest for efficiency in the guise of fiduciary responsibility, will finally rupture the intended libertarian social contract.

“Pulling oneself up by one’s bootstraps – by education or grit – will cease to be valued. To me, this has been the definition of individual resilience, the survival instinct, ingenuity, the persistent elements of humanity.

“However, if human adaptation to AI results in aggregations of individuals who think alike, then any outliers who display more acute survival instincts may not be tolerated. In individualistic cultures in which the societal power controls AI, evolved values and social norms may further the hazards of group think and going along to get along.”



Maggie Jackson
Muster agency; avoid complacency. ‘Resilience stems from gaining skill in meeting life’s errors, detours, difficulties and frustrations.’ … Don’t defer to ‘friction-free’ AI; it leads to loss.

Maggie Jackson, award-winning author of “Distracted: Reclaiming Our Focus in a World of Lost Attention” and “Uncertain: The Wisdom and Wonder of Being Unsure,” wrote, “AI hype has often been followed by sobering AI winters, so it’s impossible to precisely predict the impact of artificial intelligence on humanity in the next decade and beyond. Yet both current and historic technology adoption trends suggest that people will continue to avidly embrace AI and that this transformation may come with steep costs.

“The biggest danger in the coming years will be human complacency. Our species has a natural and innate yearning for effortless flow and ease of life to save energy and boost survival. As well, tech companies have designed for frictionless user interaction in order to heighten engagement and profit. Just one example of built-in seamlessness: Some popular LLMs in 2025 were 50% more sycophantic than humans, according to research from Stanford and Carnegie Mellon.

AI will help and hinder humanity. It will succeed and fail in spectacular and trivial ways. Unless we resist AI’s siren call of complacency and cultivate resilience born of fully contending with life, both our species and our own brief, fragile time on Earth will be diminished.

“One upshot of humans mostly choosing to take shortcuts when using LLMs is an alarming level of automation bias, or deference to technology. A highly cited 2025 MIT study led by Nataliya Kosmyna showed that students’ use of LLMs resulted in homogenous, middle-of-the-road prose that they didn’t really remember or value. Humans tend to rush to agreement as they defer to models. And frequent AI users often scored lower on tests of critical thinking, i.e., the cognitive skills that fuel independence of mind.

“In the social arena, people who consult sycophantic models on interpersonal conflicts become less willing to repair the bonds in question and more convinced of their own rightness, all while trusting pandering models more than neutral ones.

“Unthinking adoption is commonplace in the first years after any technology’s release. Only later do public conversations about tech’s impact mature and users grow more intentional. It’s encouraging, then, that signs of resistance to AI complacency are already emerging.

“For instance, the idea of building friction into tech is slowly gaining traction in order to slow user snap judgment and curb incivility. (In one study, new users preferred a meditation app with built-in friction in the form of mandatory beginner tutorials over a seamless, just-start-meditating version.) Universities are moving to oral or pen-and-paper exams. I even see the rise of cringe comedy and public fascination with awkwardness as a collective yearning for experiencing the life-friction that is, after all, the main driver of human growth and achievement.

“Resisting complacency in interacting with AI will likely also bolster the resilience needed to contend with an era of rising unknowns. Resilience is bendability, a capacity to adapt to change and recover from setbacks. This capability stems from gaining skill in meeting life’s errors, detours, difficulties and frustrations. Deferring to friction-free AI stokes the fallacy that life can be smooth, easy and predictable. By resisting this illusion, we can better design AI and better confront the complex challenges of our day.

“To be clear, I don’t oppose the wonders of an extended mind. As many note, humans long have used cognitive prosthetics from stone tablets to smartphones. But let’s always remember that questions of value and benefit in tool use are nuanced, not zero-sum, and that no technological outcome is inevitable. Augmentation should always be complemented with human doubt, questioning and resistance. We only flourish when we confront, not avoid, life’s complexities, on- and offline.

“AI will help and hinder humanity. It will succeed and fail in spectacular and trivial ways. Unless we resist AI’s siren call of complacency and cultivate resilience born of fully contending with life, both our species and our own brief, fragile time on Earth will be diminished.”



Jamais Cascio
‘The current form of AI can actively weaken every characteristic of human resilience; in some cases, it seems intentionally designed to do so.’ Welcome to the Slop Future.

Jamais Cascio, well-known futurist, speaker, and lead author of “Navigating the Age of Chaos: A Sense-Making Guide to a BANI World That Doesn’t Make Sense,” wrote, “Here’s the dilemma: It’s highly likely that AI systems will play a much more significant role in shaping our decisions, work and daily lives over the next few decades, but they will likely do so in a way that undermines our personal, cultural and social resilience.

“Resilience requires that people can recognize their own preferences and needs and can act on them. It relies on people having the knowledge of how something works and how it might fail. Resilience requires that people think critically, pay attention and recognize problems. Basic resilience depends on the ability to develop and maintain backup capacities and the emotional and economic resources that allow for continued action in a period of system failure. Ideally, it necessitates that people be able to freely communicate and share ideas with each other.

“It’s entirely possible for machine-substrate ‘minds’ to support and strengthen each of these measures of resilience. But that’s not what we have now. Instead, we have technology pundits saying, ‘This technology will take your jobs (and might even kill you), and we’re going to put it in everything,’ and tech companies saying, ‘It will lie to you and it might advise you to kill yourself, but please don’t call it slop.’

“The current form of AI can actively weaken every characteristic of human resilience; in some cases, it seems intentionally designed to do so.

What we are headed for amounts to a world of getting by. There will be enough distracting entertainment and enough quick-turnaround of AI change with just-good-enough results to have people mostly accept it and go on with their lives. The distressing and the uncomfortable can quickly become the familiar and the banal.… Resilience requires agency, the ability to recognize danger and act accordingly. The ‘AI’ tools our society and economy want to give us now actively undermine that process.

“The ongoing wave of generative machine learning technology has a wide array of drawbacks. Some are ethical, such as the plagiarism at the heart of most LLMs, the environmental footprint (especially concerning water) and the battles over restrictions and regulations. Some are economic, with the spiraling amount of investment meeting a persistent lack of actual profit. Some are technical, as it becomes increasingly clear that the ‘hallucination’/confabulation problem is intrinsic to the generative language model structure and that the outputs of this wave of AI technology can simply never be 100% trusted. And a great many of the drawbacks are cultural, from sycophancy to suicide encouragement to the measurable decline in critical thinking skills arising from LLM use.

“Unfortunately, none of this means that the generative AI wave is going to fall apart any time soon. The people at the forefront of the ethical concerns – creatives, environmentalists, regulators – have very little power. The mass of money tied up in the technology may make the whole thing ‘too big to fail;’ even in a ‘bubble’ scenario the sheer size of the main players means that they’ll likely survive, even as startups and innovators get swallowed up or disappear.

“Hallucinations may become a non-issue, whether by brute-force correction algorithms, human software ‘janitors’ responsible for cleaning up code, or simple acceptance (whether through exhaustion or the previously mentioned decline in critical thinking). We’ll probably see the emergence of sufficiently-functional tools to block or otherwise push aside AI for the more knowledgeable skeptics, paralleling the advertisements/ad-blocking paradigm. (Actually, internet advertising may be an interesting parallel here: ubiquitous, irritating, highly intrusive, barely functional and the whole internet economy depends upon its continuance. Most people just put up with it, but a subset use tools to block it for themselves, even as tech companies try to get around those tools.)

“What we are headed for amounts to a world of getting by. There will be enough distracting entertainment and enough quick-turnaround of AI change with just-good-enough results to have people mostly accept it and go on with their lives. The distressing and the uncomfortable can quickly become the familiar and the banal.

“The people with power over these systems aren’t evil, for the most part; they are just focused on immediate returns. They’ll tell us that the next iteration of the AI will surely be the one to solve all of our problems. Undoubtedly, the Singularity will be a nifty sustainability strategy.

“In the meantime, companies and institutions focused on surveillance, face detection, thought policing and media control will eagerly continue to broadly apply these tools, as the drawbacks to all of this pale in comparison to the power offered by the present approach to AI.

“Although this all seems likely to me, it’s by no means inevitable. The cultural drawbacks mentioned earlier offer an important wild card in all of this. It is possible that the insults of the current AI paradigm – the sycophancy, the ‘AI girlfriends,’ the clear damage to cognitive capacities – may prove enough to trigger a backlash that incites action. The intrusive organizations may overplay their hand, generating enough bad publicity to limit cash flow.

“But one hard lesson I’ve learned over the 30-odd years of doing foresight work is that social transformation that depends upon changes to human nature is rare and highly unlikely. Probably the most likely catalyst for moving away from the distressing form of this future is the emergence of tools that offer most of the benefits with far fewer of the drawbacks. In other words, it may well be that the best hope for getting through the era of bad AI is for someone to finally develop good AI.

“As this should illustrate, I’m in no way anti-artificial intelligence, broadly conceived. I strongly suspect that the latter half of the century will be highly dependent upon advanced machine-substrate minds and better off for it. Looking at the broad spectrum of non- or only partially-generative technologies, such as brain emulation, non-generative machine learning, regression analysis systems or similar, narrowly task-focused but potentially highly efficient tools, there’s real potential for transformative developments. But that’s not where we are today and not where we’ll likely be for the next couple of decades.

“Resilience requires agency, the ability to recognize danger and act accordingly. The ‘AI’ tools our society and economy want to give us now actively undermine that process. Welcome to the Slop Future.”



Daniel Rasmus
AI is stealthily sliding into everything: suggesting, summarizing, drafting, routing and efficiently becoming a default source of decision-making and ‘truth’ even though nobody really agreed to let it.

Daniel Rasmus, founder and principal analyst at Serious Insights, based in Seattle, previously a director at Microsoft and VP at Forrester Research, said, “Instead of writing an essay, I created a summary outline here of AI insights I have shared on these topics at SeriousInsights.net.

DIKW: Data, Information, Knowledge and Wisdom

“AI is sliding into the background as ‘ambient features’ that suggest, summarize, draft, route, flag and smooth workflow edges. That’s the point – and the trap. Invisible AI quietly becomes a default source of truth even when nobody agreed to promote it.

People and societies will:

  • “Embrace convenience, speed and the small compounding wins that don’t show up neatly in accounting – for instance by rewriting paragraphs that clarify intent, creating summaries that prevent duplication, surfacing tiny moments that spark ideas.
  • “Resist, driven by distrust (especially as synthetic content pollutes channels), fear of de-skilling and the sense that their autonomy is being swapped for auto-complete.
  • “Struggle due to mismatched expectations – treating probabilistic output like deterministic truth; deploying agents without shared intent; mistaking ‘worked in a pilot’ for ‘safe at scale.’

Opportunities worth protecting (and expanding)

  • “Serendipity at scale: AI can widen the ‘bandwidth for serendipity’ by increasing the reach of networks and the exchange of ideas … if designs favor connection over pure efficiency theater.
  • “Knowledge Management revival with teeth: AI can force clarity about knowledge types (explicit, implicit, tacit; declarative, embedded, procedural, contextual) and turn KM from ‘nice to have’ into operational scaffolding.
  • “Better work design: AI can help treat the worker experience as a first-class design surface – balance, simplicity, integrity – this matters more when machines can optimize the wrong things at machine speed.

The predictable failure modes

“Agentic systems fail socially before they fail technically: conflicting objectives, data silos, uncoordinated decisions, accountability gaps, authority erosion, security violations, workflow collisions, IP fights, bias amplification, noise pollution, sabotage and human alienation. Zooming out, trust also becomes a ‘stack problem’ as synthetic media drives attribution failures, consent problems, brand damage and slow erosion of default trust.

“And energy/compute stops being slideware: efficiency becomes a survival tactic; smaller and more specialized models become a resilience play (cost, controllability, fewer nasty surprises).

Capacities to cultivate for resilience

Cognitive

  • Data Information Knowledge Wisdom (DIKW) literacy (epistemic hygiene): AI outputs should be treated as information – a momentary extraction of structure from data that modifies perspective – rather than as truth. Human (and organizational) knowledge and wisdom remain the mechanisms that transform data into information and decide what constraints matter.
  • Intent expression: specifying goals, constraints and acceptable risk – because ‘assertive vs. conservative automation’ becomes a meaningful preference, not a UI flourish.
  • Systems thinking: seeing where agents sit in workflows, where handoffs fail and where governance needs overrides.

Emotional

  • Tolerance for ambiguity without surrender: keeping judgment engaged when the system is confident-but-wrong.
  • Agency maintenance: resisting learned helplessness and the quiet shame spiral of ‘the machine knows better,’ especially in knowledge work.

Social

  • Trust building with receipts: shared standards for attribution, disclosure, escalation and accountability – so teams don’t devolve into ‘competing agents’ and competing truths.
  • Relationship preservation: explicit human-to-human moments where empathy and context live, because outsourcing interpersonal work to agents corrodes culture.

Ethical

  • Consent and provenance discipline: what data can be used, under what terms, what requires disclosure, what requires consent and where human authorship is non-negotiable.
  • Operational accountability: governance anchored in knowledge types (not just abstract principles), with traceability, drift detection and auditable artifacts.

Practices and resources that enable resilience

  • Make AI visible where it matters.
  • Clear markers for generated outputs.
  • Short explanations in human terms.
  • Escalation paths.
  • Controls aligned to meaningful preferences.

Treating AI ‘knowledge’ as governed assets

  • Version prompts, configurations and model metadata.
  • Build test harnesses to surface implicit behaviors and detect drift.
  • Create communities of practice to externalize tacit orchestration skill.
  • Put rollback/override mechanisms into agent workflows.
  • Use knowledge management as the operating system for adoption.
  • Design environments for knowledge creation, capture, sharing and utilization – with culture and trust doing the heavy lifting, not just tools.
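The “governed assets” practices above – versioning prompts with model metadata and building harnesses to detect drift – can be sketched in miniature. This is an illustrative sketch only, assuming a Python setting; `PromptAsset`, `record_baseline` and `check_drift` are hypothetical names, not a real library, and real drift detection would compare semantics rather than exact output hashes.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class PromptAsset:
    """A versioned prompt plus the model metadata it was validated against."""
    name: str
    version: int
    template: str
    model_metadata: dict = field(default_factory=dict)
    # Maps a known test input to a hash of the output observed at validation time.
    baseline_hashes: dict = field(default_factory=dict)

    def record_baseline(self, test_input: str, output: str) -> None:
        """Store a fingerprint of the output this prompt produced for a test input."""
        self.baseline_hashes[test_input] = hashlib.sha256(output.encode()).hexdigest()

    def check_drift(self, test_input: str, new_output: str) -> bool:
        """True if a known test input now yields a different output than baseline."""
        expected = self.baseline_hashes.get(test_input)
        actual = hashlib.sha256(new_output.encode()).hexdigest()
        return expected is not None and expected != actual

# Usage sketch: version the prompt and its configuration together,
# then re-run the harness after any model or prompt change.
asset = PromptAsset(
    name="summarize-meeting", version=2,
    template="Summarize the following notes: {notes}",
    model_metadata={"model": "example-model", "temperature": 0},
)
asset.record_baseline("standup notes", "Team discussed release.")
print(asset.check_drift("standup notes", "Team discussed release."))  # False: stable
print(asset.check_drift("standup notes", "Totally different answer"))  # True: drift
```

The point is less the mechanism than the habit: prompts, configurations and their validated behavior become auditable artifacts rather than invisible defaults.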

Actions to take now

  • Define accountability before autonomy: who owns an agent, who approves scope, who audits outcomes, who is on the hook when it fails
  • Standardize ‘receipts’ as a norm: source verification, confidence signaling and lifecycle management for declarative knowledge so systems don’t hallucinate with confidence
  • Engineer for resilience, not hype: smaller/specialized models, routing and efficiency as a governance and cost-control story
  • Protect serendipity: measure value beyond productivity and design networks for discovery, not just throughput

New vulnerabilities to anticipate

  • Default-truth drift: ambient AI becoming authoritative by repetition
  • Noise-as-output: agents flooding organizations with low-signal updates until attention collapses
  • Weaponized agentics: passive-aggressive sabotage, biased micro-policies encoded into agents and workflow interference
  • Trust collapse via synthetic content: attribution and consent failures scaling faster than institutions can respond

Coping strategies to teach and nurture

  • ‘Receipts-first’ thinking: verify sources, track provenance, triangulate before acting
  • Deliberate friction: a taught pause between suggestion and action – especially for high-impact decisions – so reflection exists in the loop
  • Role clarity: humans own intent, constraints and accountability; machines provide candidate moves
  • Serendipity practice: structured exposure to diverse inputs and people to prevent personalization from narrowing the world into a smooth, dull corridor.”


Naomi Baron
We must think carefully about ‘how resolute our willpower to resist negative aspects of AI is and how strongly we value understanding the technology – and its potential consequences.’

Naomi S. Baron, professor emerita of linguistics at American University and author of “Reader Bot: What Happens When AI Reads and Why It Matters,” wrote, “The pace at which AI is reshaping our lives will only accelerate. It’s important to distinguish between impacts resulting from conscious decisions regarding the technology and changes acting upon us, which we take in stride. In unpacking the distinction, we need to be clear about the varieties of AI at stake, plus remind ourselves of some fundamental aspects of human behavior.

The AI Picture: “Modern AI is barely a decade old. The transformer model dates back only to 2017. OpenAI’s first generative pre-trained transformer appeared in 2018. Over the past seven years, we’ve witnessed a tsunami of developments, from large language models (think of ChatGPT, 2022) to foundation models (add in images, sound and other non-language functions) to frontier models (multimodal behemoths capable of planning, reasoning and directing agents to act on our behalf). In the process, we have gone from chasing artificial general intelligence (AI capable of the full range of human thinking) to reaching for superintelligence (AI that exceeds human mental ability).

“When contemplating the impact of AI on humans over the next decade, it’s prudent to pinpoint which form of AI we’re talking about. LLMs? AGI? We should also be realistic about how much the general public understands AI’s current or potential capabilities. Given these limitations, it’s methodologically tricky to try gauging the public’s conscious decision-making in response to AI. An alternative (or at least complementary) approach is asking how people are likely to behave when AI presents itself in the course of their work and leisure.

The better these tools become and the greater experience we have using them, the more we take their existence for granted. Most times, we’re not making individual decisions about whether to employ them, any more than our personal decision-making is at issue when tossing dirty socks into the washing machine. The more that AI drives our digital lives, the less frequent the occasions for questioning its presence.

The Human Picture: “While the powers of AI are novel, the ways people react, along with decisions they make (when under their control), tend to be more predictable. Think about attitudes and propensities, be they individual or cultural, that might lead people to engage with change or avoid it. Some candidates to consider:

  • Curiosity
  • Fear
  • Laziness
  • Susceptibility to being influenced by the crowd
  • Ignorance (intentional or by circumstance)
  • Belief that the future will be like the past (no need to adapt)
  • Trust in experts (‘They’ will take care of it)
  • It won’t happen to me (Must I really follow that hurricane evacuation order?)
  • Response to personal or economic necessity

“Next come two sociopsychological forces that affect almost all of us.

Domestication and the Principle of Least Effort: “In the 1990s, the sociologist Roger Silverstone applied the phrase the ‘domestication of technology’ to ways in which we come to take for granted once-new household technologies like washing machines or vacuum cleaners. With the growth of digital technologies, researchers began applying the notion to how everyday users come to take for granted the functioning of new computer-based conveniences.

“We abandoned the ‘Yellow Pages’ telephone directories since we could now go online to locate phone numbers for businesses. We began to take spellcheck for granted, eschewing print dictionaries. Today, we rely on predictive texting and autocomplete to simplify composing text messages or emails. Quests for information have moved us from the use of physical brick-and-mortar libraries to using resources such as Wikipedia or Google searches that summon AI Overviews.

“The better these tools become and the greater experience we have using them, the more we take their existence for granted. Most times, we’re not making individual decisions about whether to employ them, any more than our personal decision-making is at issue when tossing dirty socks into the washing machine. The more that AI drives our digital lives, the less frequent the occasions for questioning its presence.

Within the domains in which we have the opportunity to make conscious choices, it behooves us to think carefully about how much effort we’re willing to put forth, how resolute our willpower to resist negative aspects of AI is and how strongly we value understanding the technology – and its potential consequences.

“There’s a second factor impacting mere mortals when it comes to individual agency and AI: the principle of least effort, a concept popularized by linguist George Zipf in the late 1940s. Modern variants include Daniel Kahneman’s ‘fast thinking’ or Susan Fiske and Shelley Taylor’s notion of the ‘cognitive miser.’ Underlying them all is humans’ tendency to minimize the amount of effort expended on a task.

“In the digital world, the idea encompasses how we read webpages (notoriously, we don’t read them through) and how we conduct online searches (rarely checking for veracity, commonly quitting after a few hits). This naturally extends to the ways we use AI. We tend to believe what LLMs offer up in response to our prompts. We accept (without much editing) the essays and emails that AI writes for us. We invite AI to conduct research and summarize, bypassing doing the work ourselves. As AI agents increasingly make our travel reservations, arrange our meetings or manage our finances, we will come to take these labor-saving moves for granted. Using AI agents will be less a choice than a domesticated way of conducting our lives.

The State of Human Agency in an AI-Infused World: “We all like to feel we have personal agency, including when it comes to adoption or rejection of AI technology. We might choose to hand over writing assignments to bots – or opt to undertake all the drafting ourselves (if we’re willing to put forth the effort). We might binge on endless TikTok videos driven by AI algorithms or restrict our viewing (if we have the willpower). We might depend on spellcheck or rely instead on our own abilities, even disabling the function in our software (assuming we know how).

“Effort. Willpower. Knowledge. As AI’s tentacles expand its reach, the technology becomes increasingly enmeshed in the fabric of everyday living. Opinion polls are but one tool for probing our likely responses. We must also acknowledge forces including personality traits, domestication and cognitive miserliness to understand whether users will be passive recipients or engaged actors.

“Not all choices will be up to individuals. Our boss might dictate how much we must lean on Copilot, and the future of artificial general intelligence resides – scarily – in the hands of Big Tech commercial interests. However, within the domains in which we have the opportunity to make conscious choices, it behooves us to think carefully about how much effort we’re willing to put forth, how resolute our willpower to resist negative aspects of AI is and how strongly we value understanding the technology – and its potential consequences.”


Frank Kaufmann
Compare AI’s arrival to pouring water into a vessel. It takes the shape of the vessel. Human action causes human change and … ‘the vast majority of people will unconsciously lemming along.’

Frank Kaufmann, president of the Twelve Gates Foundation, wrote, “This study asks two questions: First, if you do not think AI systems will play a much more significant role in shaping our decisions, work and daily lives in the future, please explain why. Second, if you do think it is likely that AI systems will begin to play a much more significant role in shaping our decisions, work and daily lives: How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?

“So answer Option A: AI will not play much of a role in our lives going forward.

“Or answer Option B: AI will play an ever-greater role.

“If B, then:

  • How will people embrace or resist change?
  • What internal resources must we develop to embrace and/or resist?
  • What practices can support such development?
  • What resources can support such development?

“The first part of my response is: Both A and B. How is such an answer possible? How can I say I believe AI will play a big role and say that it won’t play a big role? It is an answer based on the relationship between reality and perception.

“The reality? AI will (and already has) played a massive role in shaping our decisions, work and daily lives. Most people’s perception? The vast majority of people don’t have any idea how AI has changed their lives. They’ll only stop to think about it if someone points it out to them.

“My answer – yes, there will be a lot of change and yes, I don’t expect much change – is a version of the old ‘tree falling in the forest’ question. Does it make a noise? Here’s an example.

“Me: Hi Curt, has AI changed your life?

“Curt: Nah. But I can’t talk right now. I have to Zoom with my daughter in Texas and tell her the Uggs I ordered yesterday for her baby arrived.

“Me: Why did you get those particular boots?

“Curt: I don’t know, there was an ad for them in my email. And also the same ad in my birdseed delivery.

“Has AI played a significant role in shaping Curt’s decisions, work and daily life? Not if you ask Curt. Though it looks to me like AI is playing a huge role in Curt’s decisions, work and daily life.

“Next question to answer: How might individuals and societies embrace, resist and/or struggle with such transformative change?

“Me: Hey, Curt. Do you think AI is changing your life?

“Curt: Well, not really. But my sister’s always telling me to stop scrolling through all of today’s baseball double plays on TikTok while she’s talking to me. She’s always nagging me about looking at TikTok. But then, the other day, I was with Billy at the game. Next thing I know, Billy’s bashing me in the arm, and he yells at me, ‘Geez Curt, you’re so busy watching your damn phone that you just missed an incredible double play!’

“Curt continues: So I think Billy is right. That’s why now every day I turn off my phone for two hours.

“It seems to me that Billy’s input was more effective than his sister’s.

“Next question to answer: What practices and resources will help us resist and enable resilience?

“I hope these vignettes have not come off as silly; I offer them seriously. I study AI and its impact intensely. I anticipate that the vast majority of humans will have almost zero awareness of or interest in the progress of AI. They’ll just float along, be fascinated for a second and continue on with little or no curiosity or reflection. Each novel development will be noticed at first and will thenceforth become unnoticed habit. For example, the odd feeling of being transported in a self-driving taxi will last a ride or two and then give way to simply looking for the cheapest ride.

“AI itself cannot reshape the basic breakdown of human order in the world. There is nothing in AI as a working tool that makes it capable of reordering or shuffling human order. Human order can only be impacted by humans.

“I see this as the basic, preset demographic breakdown of humans on Earth so far:

  • “The percentage of people who cause significant change in human life: A minuscule percentage, and then only at rare times in history.
  • “The percentage of people who cause some change in human life: A small percentage, maybe 7 or 8%. These people can cause minor change, for good or for bad.
  • “The percentage of people who are basically OK, who are more or less good: A huge majority, maybe as high as 85%. Conversely, probably about 10-15% of people in the world – maybe even fewer – can be described as seriously bad.
  • “The number of people who pay any attention to their lives beyond immediate survival, epicurean or entertainment preferences, and the occasional (usually health-related) condition of a relative or friend: A tiny few.

“This stratification holds fast, whether we are hurling rocks at mastodons, or buying trips into space on Amazon Prime. The changes that will arise from AI will not disturb or alter this configuration. AI can be compared to pouring water into a vessel. It takes the shape of the vessel. The only thing that can alter human life in the world is humans.

“The vast majority of people who develop any genuine command over AI development will use it to create and expand addicted consumers. Everyone else (well, nearly everybody) will unconsciously lemming along, and at best put an occasional flag of some warred-on peoples on their profile picture.

“A small percentage (maybe around 20%) will seek to find the good AI can do for people – and we should all pray that these idealists resist the inevitable Overton shift to become rich and greedy as their goodness brings prosperity.

“Finally, three more answers to the questions: What actions must we take right now to reinforce human and systems resilience? Strengthen families.

“What new vulnerabilities might arise? No new vulnerabilities, just new versions of eternally existing vulnerabilities.

“What new coping strategies are important to teach and nurture?

  • Honor thy mother and thy father.
  • Love thy neighbor as thyself.”

Jon Lebkowsky
AI may follow the path of impact described in a sci-fi story in which explorers find a world that seems primitive, but in the end discover the tech is so deeply embedded that it is invisible.

Jon Lebkowsky, writer and co-wrangler of Plutopia News Network, previously CEO, founder and digital strategist at Polycot Associates, wrote, “When I think about the future of AI, I’m reminded of a line from Howard Rheingold: ‘What it is is up to us.’

“Although the roots of artificial intelligence stretch back decades, we are almost certainly still in the early stages of its development. The most visible forms of AI – especially generative systems – are advancing quickly. Because these systems are built on accumulated human knowledge, they inevitably inherit human strengths and weaknesses. They are powerful but fallible. They should not be treated as superior, omniscient or autonomous authorities. AI’s greatest advantage lies in its ability to process vast amounts of data quickly and extract patterns and relationships that would otherwise remain hidden. Yet interpretation remains uncertain.

“AI can suggest, illuminate and accelerate, but it cannot guarantee truth. For that reason, arriving at what is real, accurate and meaningful must remain a human-AI hybrid endeavor. AI can extend human capability, but it cannot replace human judgment. It is, at its core, an extension of us.

When I think about the future of AI, I’m reminded of a line from Howard Rheingold: ‘What it is is up to us.’ … What seems most likely is a gradual reshaping of human endeavor, a retraining of how we work, create and decide, as we adapt to both the enhancements and the limits AI introduces. The transition will be uneven. Many will resist it and for some it will be genuinely difficult. But over time, it holds the potential to improve the lives of most people.

“Much of today’s enthusiasm around AI resembles a speculative bubble, one that will almost certainly burst. That does not mean AI itself will fade. The internet followed a similar path; the bubble collapsed in 2000, yet the technology only became more pervasive and valuable in the years that followed. AI may follow the same pattern: temporary over-inflation, followed by deeper and more durable integration.

“One practical constraint on AI’s growth is its resource intensity. Without improvements in efficiency, its demands on energy and water could become significant limiting factors. I assume, however, that necessity will drive innovation and that more sustainable methods will emerge as AI continues to evolve.

“I’m reminded of a science-fiction story – Arthur C. Clarke’s ‘Encounter at Dawn’ – in which explorers land on a world that appears technologically primitive because no devices are visible. Only at the end do they discover that the civilization is extraordinarily advanced, its technology so fully embedded that it has become invisible. AI may follow a similar trajectory, not as a conspicuous tool but as an omnipresent, quietly integrated layer of daily life.

“If we aim for the best uses of AI, its development will follow human needs. But it will also be used to exploit and to control, and we will undoubtedly contend with those who pursue those ends. An AI apocalypse is imaginable, yet so far human judgment and restraint have prevented many of the worst catastrophes we have feared. I remain cautiously optimistic that we will avoid AI-driven collapse.

“What seems most likely is a gradual reshaping of human endeavor, a retraining of how we work, create and decide, as we adapt to both the enhancements and the limits AI introduces. The transition will be uneven. Many will resist it and for some it will be genuinely difficult. But over time, it holds the potential to improve the lives of most people.”


Adam Clayton Powell III
Fast-paced digital life had already dialed down most humans’ willingness to focus on getting the facts from reliable sources the right way. Unless they wise up, their AI use will magnify the damage.

Adam Clayton Powell III, executive director of the initiative on election cybersecurity at the University of Southern California, wrote, “In just a few years, we have seen consumers and professionals relying on AI in ways that people may find convenient but that are unquestionably reducing their ability to learn, to function in their professions and even to form basic relationships with other individuals.

“Before AI, people had already grown accustomed to obtaining and interacting with instant information (including news) online and to rapid social media scrolling. They are now increasingly leaning on instant AI results in harmful ways, coming to trust AI assistants to conduct their interactions with the world and being manipulated by social media chatbots. As AI assistants assume more human and human-friendly forms, this will only increase.

Professionals are discovering that younger AI-literate colleagues are relying on AI in inappropriate ways. Roland Trope, the outgoing co-chair of the American Bar Association’s AI Task Force, told me that law associates are relying on AI to write briefs that are riddled with inaccuracies and AI hallucinations. Even worse, he and others tell me that their younger colleagues have forgotten (or never even knew) how to write well. If you cannot write, you cannot think.

“Professionals are discovering that younger AI-literate colleagues are relying on AI in inappropriate ways. Roland Trope, the outgoing co-chair of the American Bar Association’s AI Task Force, told me that law associates are relying on AI to write briefs that are riddled with inaccuracies and AI hallucinations. Even worse, he and others tell me that their younger colleagues have forgotten (or never even knew) how to write well. If you cannot write, you cannot think.

“An example from another lawyer: He gave an associate an assignment to analyze a new piece of legislation and explore how it would affect a client. She returned a few minutes later with an analysis. She couldn’t even have had time to read the legislation, so my lawyer friend knew she had simply plugged the question into AI – and she had not caught obvious inaccuracies.

“More broadly, in recent months, we have seen numerous reports that pre-teens, teenagers and young men and women are saying that their relationships with AI companions are more rewarding to them than their relationships with humans. As AI advances, it will generate ever more human-friendly interfaces that people throughout the world will find difficult to resist.

“So far not discussed is the role of politics and of money. Recent studies show that 2025-era chatbots are already more effective than advertising in changing voters’ political beliefs and preferences. If not in 2026, the 2028 U.S. elections will almost certainly feature candidate-produced, AI-powered avatars to interact with voters.

“This will inevitably be embraced by advertisers and sellers across the board. The power of money is never to be underestimated. Once, in a 1960s conversation with a CBS colleague, science editor Earl Ubell, we discussed how difficult it would be to send an individually addressable video to each household, so each viewer could select what he or she wanted, on demand. ‘It seems impossible,’ Earl told me, ‘but there’s so much money to be made, someone will do it.’

“And so it is with AI: There is so much money to be made by AI manipulation of malleable humans that someone (or rather, many) will do it.”


The second section of Chapter 8 features the following essays:

Alan Inouye: Work out in the ‘cognitive gym’ by developing intellectual abilities; carve out time for creative endeavors, read widely. Overall, AI disruption will create ‘actual and perceived winners and losers.’

Glenn Ricart: ‘AIs will create highly addictive entertainment environments that will lure many into spending too many hours in them.’ Passive people will lose critical faculties. Creative thinkers will be enriched.

Kevin Taglang: We’ll be ‘living on our own, infrequently meeting face-to-face, communicating through screens. … We are likely to become more and more completely dependent on AI tools without even realizing it.’

Ken Rogerson: Complacency has set in and there is little ambition to improve the ways people can discover AI-related harms. ‘There are not enough people in the room who are asking hard questions.’

Professor of Law: Most people will not realize they are being affected by AI and will take no steps to avoid interacting with it. ‘Inertia is the most powerful force in human affairs.’

Bronwyn Williams: ‘Complacency will come at the expense of agency.’ People will ‘happily surrender.’

Larissa May: ‘Preserving the cognitive future and the richness of the human mind requires a new kind of rewiring, a deliberate cultivation of the very qualities that make us human.’


Alan Inouye
Work out in the ‘cognitive gym’ by developing intellectual abilities; carve out time for creative endeavors, read widely. Overall, AI disruption will create ‘actual and perceived winners and losers.’

Alan Inouye, principal at The Policy Connection and longtime leader at the American Library Association, wrote, “Keep (or start!) thinking! The large majority of people will become increasingly dependent on AI systems in a passive way. Much as they did when they transitioned to relying heavily on the geolocation software built into their cars and smartphones, people will just do as they are told.

“True, the information gained from AI large language models is usually accurate, and this innovation is efficient and time-saving. But the use of AIs eliminates the ‘cognitive work’ humans once did – for example, planning a route by studying a paper map or full-screen image, examining alternate routes, thinking through obstacles like potential congestion and randomly discovering facts about the local geography and possible unexpected opportunities. Efficiency improves with AI, but learning declines and, in some instances, the experience deteriorates.

The social costs of new technologies need to be considered as well as the benefits. … This advance is a disruption in society with actual and perceived winners and losers. A current concern is that the U.S. is already under considerable strain. Further stress introduced by AI systems could trigger social upheaval.

“The substitution of technology for manual or personal labor is a historical phenomenon, of course. But the great reduction of manual labor in the workforce and the home since the industrial revolution has generated new problems such as obesity due to inadequate physical activity and poor nutrition. The social costs of new technologies need to be considered as well as the benefits.

“What to do? While people derive benefits from AI systems, they can make a conscious effort to maintain and develop their intellectual abilities, creating their own regimen in a ‘cognitive gym’ of sorts. They can engage in formal learning activities and carve out explicit time for creativity and exploration, such as reading or scanning all the way through a newspaper, magazine or book in its entirety – not just reading the AI-selected articles that appear automatically in their morning feeds.

“All of us will come to use AI systems to bolster our resilience in some respects. However, in doing so, some of our cognitive abilities will atrophy or perhaps never develop in the first place, and many of us will become more vulnerable and less resilient when those systems are inaccessible.

“A small minority of us will leverage the systems to bolster our resilience while also maintaining our foundational cognitive capabilities. This will require conscious effort and discipline, but some will do so or will be encouraged and motivated in this direction – for example, perhaps those who receive educations from elite universities.

“There will be haves and have-nots; the segmentation in the population will evolve from prior generations of technological advance. Professional information workers in jobs that have limited need for human interaction and judgment are likely to be endangered by the advance of AI. The same goes for many of the entry-level jobs for lawyers in large law firms.

“By contrast, professions with integral hands-on work coupled with a body of experiential knowledge will rise in relative compensation and prestige and those in trade occupations – electricians, plumbers and so on – will continue to have solid employment prospects.

“Professions in which the human touch is essential will endure and, in many instances, grow. AI-driven trends including a likely increase in human life spans and in heightened levels of loneliness will create new job opportunities in a variety of professions, from therapists and personal counselors to community service workers.

“The advance of technology lifts all boats. Some boats are lifted much higher and others not as much. This advance is a disruption in society with actual and perceived winners and losers. A current concern is that the U.S. is already under considerable strain. Further stress introduced by AI systems could trigger social upheaval.

“While disruptions caused by technological advances are a periodic feature of modern society, the present AI-centered revolution may come at a particularly bad time for the nation.”


Glenn Ricart
‘AIs will create highly addictive entertainment environments that will lure many into spending too many hours in them.’ Passive people will lose critical faculties. Creative thinkers will be enriched.

Glenn Ricart, founder and CTO of U.S. Ignite, driving the smart communities movement, wrote, “Resilience requires disciplined attention to how our time is spent. In the future, people will divide their time, moving back and forth along a spectrum that ranges from ‘I enjoy being informed’ to ‘I enjoy being entertained.’ The question is: What fraction of your time do you want to spend where?

“People today generally choose to be entertained by watching TV, playing video games or scrolling through social media platforms like TikTok. Then there’s the choice of pursuing useful information online and spending non-digital time reading books that will challenge us, or attending functions where we discuss big ideas, or gathering face-to-face with others in a classroom or social group to share knowledge.

We will find AI continuing to shift our focus from using our minds for critical reasoning to entertaining those minds. AI will continue to engage thinkers, and – in doing so – can enrich the lives of students, academics, creatives, business leaders, everyone. However, we can expect that AIs will create highly addictive entertainment environments that will lure many into spending too many hours in them.

“Historically, we see a long-term trend toward spending more and more time being passively entertained at the ‘entertainment’ end of the spectrum rather than in the zone of ‘being informed.’ Radio started us down this path, then came television, and now those in digitally advanced cultures spend a great deal of (if not most of) their time glued to the screens of digital devices – choosing from an endless array of streaming entertainment options.

“AI will prove to be the most powerful educator and entertainer humanity has ever known. While digital life has exploded the amount of information available, its enormous arsenal of entertainment is also having some significant potentially negative impact. We will find AI continuing to shift our focus from using our minds for critical reasoning to entertaining those minds. AI will continue to engage thinkers, and – in doing so – can enrich the lives of students, academics, creatives, business leaders, everyone.

“However, we can expect that AIs will create highly addictive entertainment environments that will lure many into spending too many hours in them.”


Kevin Taglang
We’ll be ‘living on our own, infrequently meeting face-to-face, communicating through screens. … We are likely to become more and more completely dependent on AI tools without even realizing it.’

Kevin Taglang, executive editor at the Benton Foundation, wrote, “AI will increasingly be embedded into all of our digital tools. In the same way that people have not been especially aware of how computers have increasingly been embedded in nearly all aspects of society over the past 20-30 years, we are likely to become more and more completely dependent on AI tools without even realizing it.

“How do you boil a frog? Let AI loose in common digital tools so that users are hardly aware of how AI impacts online searches, social media content, recommendations, health apps and so much more. There’s not likely to be much resistance because users are likely to be blissfully unaware of the invasion. Ironically, futurists have warned us of where we’re headed since at least E.M. Forster’s 1909 short story, ‘The Machine Stops.’ Living on our own, infrequently meeting face-to-face, communicating through screens – does any of this sound familiar?”


Ken Rogerson
Complacency has set in and there is little ambition to improve the ways people can discover AI-related harms. ‘There are not enough people in the room who are asking hard questions.’

Ken Rogerson, a professor of public policy at Duke University specializing in public interest technology, wrote, “AI modeling has been around for a while. It has helped society manage large datasets and learn about (and sometimes predict) trends and patterns. This has led to greater efficiency in some areas. However, with that efficiency has come complacency about improving methods for ascertaining AI-related harm.

“Some of the worst examples come to light, forcing private-sector AI platforms and providers to address them. Others only cause a little harm and can be ignored or swept under the carpet. I personally remain concerned that there are not enough people ‘in the room’ who are asking hard questions: questions that may not have answers.

“I also believe that – as with all innovation – some risk is acceptable. But when that risk turns into individual harm, how can we stop powerful technology companies and encourage a government that currently is not really listening to respond to individual citizen or small community needs? This is an ongoing process. It should not be an all-or-nothing solution. Incrementalism works. But people have to listen and act. I am discouraged by the lack of this right now.

“Digital literacy would help, but some structural inequalities hinder those types of activities for those who might benefit the most from them. I would like to see digital literacy programs in public education from kindergarten on (not just high school/college, as is principally the case now).

“I am not sure there are new vulnerabilities, but different vulnerabilities will be targeted at different times. There should always be someone pointing out these vulnerabilities and people listening to them.

“I am pessimistic about all of this, but I will continue to work locally and dream.”


Professor of Law
Most people will not realize they are being affected by AI and will take no steps to avoid interacting with it. ‘Inertia is the most powerful force in human affairs.’

A professor of law who works in the San Francisco Bay area wrote, “AI will have some significant effects soon, within 10 years. But I do not think that those effects are likely to be very important to the vast majority of those who use (or are used by) AI in that time. Most people won’t even realize that they are being affected by AI much of the time.

“As to people’s interest in getting ready for AI, adapting to it, being resilient in the face of it, I think a large majority of people, even if they know that AI is interacting with them to some extent, will do none of this. Inertia is the most powerful force in human affairs. People won’t engage with something unless or until they really need to, and even then they often won’t.

“For example, we just bought a new toaster. Did I read the instructions? Of course not. Am I confident I know where I put them (including if I put them in the trash)? No. If something goes wrong with the toaster or I can’t figure out one of its functions, a function that I really want to use, then I’ll engage, probably by googling for an answer (and then not noticing whether I’m clicking on a link to something written by a human or just reading Google’s AI ‘answer’).

“I am an academic who lives in Silicon Valley, and many of my colleagues and neighbors would be much more excited and engaged, but they are a very non-random sample of the population. I think about the two large family reunions I attend each year, populated by groups that are somewhat, but not greatly, above the U.S. medians in socio-economic status and education. My guess is that in a decade, maybe up to half of those who are now between 12 and 25 will be engaged with AI issues, less than a quarter of the middle-aged and less than 10% of the largest demographic group, the elderly (which includes me).

“Now, these predictions depend on AI being as small, or at least as unnoticeable, a part of our lives as I expect it to be. If it does become a pervasive presence, my estimates are almost certainly low – but not that low. Don’t underestimate the ability of people to ignore things.”


Bronwyn Williams
‘Complacency will come at the expense of agency.’ People will ‘happily surrender.’

Bronwyn Ruth Williams, partner and director of foresight at Flux Trends, a strategic consultancy located in Johannesburg, South Africa, said, “AI will play a significant role in shaping decisions. People will mostly accept this (eventually) and even happily surrender, but complacency will come at the expense of agency, morality and humanity. Most humans choose ease over effort. Given a choice they choose no choice. AI satisfies this baseline but it also entrenches ‘baseline’ living.”


Larissa May
‘Preserving the cognitive future and the richness of the human mind requires a new kind of rewiring, a deliberate cultivation of the very qualities that make us human.’

Larissa May, founder of Half the Story (a digital wellness non-profit) and CEO of Ginko, a tool to help families navigate the complexities of the digital world, wrote, “Technology and humanity are converging. In a world shaped by artificial intelligence, we must fight to preserve infinite awareness, strengthening the innate human capacities of imagination, play, creativity, resilience and problem-solving. Every fiber of our being will be tempted to slip into autopilot, even emotionally. AI systems that optimize for human flow and efficiency will be rewarded. That is why preserving the cognitive future and the richness of the human mind requires us to consciously commit to a new kind of rewiring, a deliberate cultivation of the very qualities that make us human. As jobs that rely primarily on IQ are replaced by technology, we will be challenged to realize human potential in a different way, from the inside out, nurturing the capacities that are uniquely human and cannot be automated.”


> Go to Chapter 9 – Epistemic Vigilance: Discerning Truth, Illusion and Misinformation
