The Essays – Chapter 7
Heart & Soul: Protecting Human Connection; Seeking Calm

Hundreds of experts answered the following essay question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?”
| Download a PDF of the full, 376-page report | Download the 16-page Executive Summary | Download the 4-page Media Summary |
This is the seventh of 11 chapters of experts’ essays with responses to the question above. The essayists were asked to explain how the essence and elements of human resilience might evolve as we evolve with AI systems. The responses in Chapter 7 generally focus on human resilience in retaining truly human connection and in preserving quiet, calm spaces outside the digital. Most noted how necessary it is to work proactively to prepare people to navigate the revolutionary transition as AI advances in the coming years.

This chapter in brief: Many of these experts believe that as AI seamlessly insinuates itself into our private lives it is likely to fray the emotional and social fabric of humanity. These authors focus on the loss of certain aspects of “being human” as AI systems evolve. They warn of a “cyborg slide” in which we shed vital capacities – such as the ability to truly enjoy solitude, to express genuine empathy and to politely manage and keep alive our complex and unmediated human-to-human connections – even possibly allowing “AI to define the nature of personhood.” Essayists in this chapter urge people to take steps to make sure they don’t apply to machines the same “theory of mind” they use in interacting with other humans. They discuss the mind-body duality of being human. They caution against anthropomorphizing algorithms and normalizing aberrant social interactions. They also call for creating physical sanctuaries and engaging in AI detoxing. Further, they defend the slow, the small and the genuinely human in order to fiercely protect the real, sometimes messy and often fantastic face-to-face relationships that make us tick.
Featured Contributors to Chapter 7: The 20 essay responses on this page were written by Marina Cortês, Julia Freeland Fisher, Aneesh Aneesh, Greg Sherwin, Sherry Turkle, Henry Brady, Sarah Pessin, Paul Saffo, Divya Siddarth, Chris Labash, Dmitri Williams, Scott Kollins, Brian Southwell, Giacomo Mazzone, Irina Raicu, Katrina Johnston-Zimmerman, Gerd Leonhard, John Markoff, John C. Havens and an anonymous professor of robotics. (Their essays are all included on this one, long-scrolling web page. They are organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant.)
The first section of Chapter 7 features the following essays:
Marina Cortês: ‘Allowing our lives to be monopolized by digital devices makes us less resilient, feeling less human and less confident in other humans. … It could be the most serious pandemic humanity has seen.’
Julia Freeland Fisher: ‘Our capacity to build and mobilize social capital is key to resilience – networking self-efficacy, a growth mindset about one’s networking ability, conversational skills and cultivation of empathy.’
Aneesh Aneesh: AI is moving into intimate life; this frays old systems of connection and intimacy. ‘What arrives is often not connection but simulation,’ shattering traditionally-valued types of relationships.
Greg Sherwin: People will delegate crucial qualitative life decisions to AI, including how they relate to others. The loneliness crisis will worsen. Look to ‘chaos engineering’ to help build resilience and ‘dumb homes.’
Sherry Turkle: ‘We can come back to each other and to ourselves. … There is more than a threat to empathy at stake; there is a threat to our sense of what it means to be human.’

Marina Cortês
‘Allowing our lives to be monopolized by digital devices makes us less resilient, feeling less human and less confident in other humans. … It could be the most serious pandemic humanity has seen.’
Marina Cortês, a professor at the University of Lisbon’s Institute for Astrophysics and Space Sciences and participant in the futures research of the Millennium Project, wrote, “How can I start answering these insightful questions when the one year that has passed since the last feels like a century? Raise your hand if you aren’t experiencing these: Confusion. Disorientation. Loss of contact with yourself, with reality. Conflating truths. Lack of trust in others and ourselves. Self-doubt of our senses and intuition.
“We can all protect our emotional lives and optimise our use of AI technology if we master two tasks:
1) “Clearly understand what an LLM is and does, and that it is not ‘alive.’
2) “Reconsider the time you spend using digital devices; seek timely disconnection from them to rebalance your self-knowledge and resilience, restoring your humanity and grasp of reality.
“To address the first point, we can recognise GenAI for its true nature – clearly understanding what it is, not what its parent companies would like us to perceive it to be.
“GenAI LLMs could be thought of as powerful search tools, each armed with a massive piece of ‘luggage’: a large selection of information from the largest dataset ever put together in the history of human endeavour. Within these AIs are massive sets of answers to nearly every question that anyone has ever thought of, written, spoken about or filmed. When queried, an AI system pieces together a response from bits and pieces of this vast luggage. Thought about this way, the mystique surrounding AI tools evaporates and we are able to use them in the way for which they are optimized: yielding an outcome.
“I remember wondering as a child how they shrunk the little people inside the TV when we first got one at home. We have a similar sense of mystery surrounding AI tools. It’s hard for us to grasp the vastness of the dataset, which is a large percentage of the information produced by billions of humans over the past few decades in text, audio or visual formats.
“It seems as if these tools respond like a human would, but they have access to much more information on any topic than any human, so why not choose to interact with the AI system instead of our friends and family, who can be moody, irritable or ‘have no time’? Ha! So, of course most people will come to mostly trust in the comfort or information AIs share. But digital devices are not alive. They lack the magic muck that makes humans living things, making us cranky, crying, unpredictable, in our messy, loving, wonderful, living world.
“How do we safely navigate the gap between the living and non-living worlds when it seems ever narrower? One way is to travel somewhere we can always easily visit – inside our own minds. We can disconnect. What experience today requires no devices?
“Unprecedented digital challenge is here for a reason. It is an opportunity for us to celebrate our humanity. To reach the other side of this chasm we have to reconnect with nature and with ourselves.
“What matters? What is real? What fulfills me? How can I feel less alone? Sometimes an insurmountable feeling of loneliness surrounds us, leaving us digging deeper and deeper into isolating ourselves. We may feel utterly alone even when we attempt to reach out for connection.
“The levels of dissociation from self and confusion of reality due to our digital existence will only increase as AI systems fill our days. The degree to which we will preserve fulfillment and the wonderful experience of being alive may depend upon the degree to which we can disconnect from digital devices.
- “Can we spend quiet time daily on our own, alone, with no digital device? Taking breaks for much-needed stretches of time gives our brain downtime. It is not ‘boring’ if you consciously allow yourself to process what is important to you and delve into the magic of your own quiet insights. We need to actively seek time alone, with no gadgets. While it may seem unappealing, likely to be ‘boring’ and possibly even scary, it can be an act of bravery that allows you to awaken from the madness that can be digital life and find your way back home. Knowing yourself can build your resilience. Quiet time allows you the space to envision and plan for your future.
- “Can we leave our digital devices off or put them away on silent mode when we leave home? We need to find the willpower to leave the smartphone off and spend time with friends without distractions or take the train or the lift without checking our phones. Then we are more likely to get to know our neighbours, increasing connection and community, which increases our community’s resilience to social disruption and other problems.
- “Can we be utterly disciplined about our own well-being, prioritizing our sleep over screentime without exceptions?
- “Can we set strict limits on the number of hours we allow ourselves to sit at a computer each day? Three or four hours, tops, is, I would say, the maximum for productivity.
- “Can we watch television on a large screen with no other devices in the room and with other people – not sitting alone, mesmerized by a digital device? It helps mental health to be social in-person, and misinformation, disinformation, etc., are harder to spread if we are actively watching news and other programs together with others, with more brains to notice possible misinformation.
“This isn’t just about AI. We need to recognise that our time spent interacting with digital devices is decreasing our connection to others and increasing our vulnerability to manipulation via the content that reaches us through those devices. Allowing our lives to be monopolized by digital devices makes us less resilient, lesser people, feeling less human and less confident in other humans.
“In my family, as a rule of thumb, I ask the question, ‘Would this experience that we are now choosing make any sense to cave people gathered together around a campfire?’ Laughing, telling stories, singing, opening up, sharing food or looking at the sky at night. Ah, no devices needed there.
“If everyone could rein in their exposure to the digital and refocus more energy on the human, the planet could gently restore its course, back to nature. Awakened people will refuse to be manipulated. Social change and human resilience begin by identifying widespread use of digital devices as a form of potential substance abuse, with risk of psychological and physical dependence.
“The digital device ‘pandemic’ attacks not only the mental and physical health of the individual, but also – through reinforcement effects – promotes the spread of ill-advised content. The result is the erosion of the dynamics and social resilience that uphold our communities and societies. It could be the most serious pandemic humanity has seen.
“Recognising our addiction to gadgets and our fear of ourselves requires quiet courage, but it can be the greatest ride of being alive. Explorer and philanthropist Edmund Hillary once said, ‘It is not the mountain we conquer but ourselves.’ Here, too, it is not the AI we must conquer, it is ourselves.”

Julia Freeland Fisher
‘Our capacity to build and mobilize social capital is key to resilience – networking self-efficacy, a growth mindset about one’s networking ability, conversational skills and cultivation of empathy.’
Julia Freeland Fisher, an expert on human connection in the age of AI and director of education research at the Clayton Christensen Institute, wrote, “My research focuses on the profound risks that GenAI poses to human connection – not due to the technology per se, but due to 1) the myriad ways our society neglects to invest in or safeguard connection, 2) the highly digitized nature of our existing social networks and habits, and 3) the lack of business models and policies to support prosocial technologies.
“These are producing the perfect storm for AI to disrupt human networks from the inside out – by offering intimate AI companions to lonely and disconnected individuals – and the outside in – by offering on-demand help that outcompetes human help.
“To fend off that disruption, we need to 1) slow demand for companionship tools by addressing loneliness and disconnection head-on, 2) increase face-to-face interactions (which are less susceptible to disruption than digital networks and interactions), and 3) build help-seeking mindsets and skillsets.
“Framing this in terms of ‘human capacities’ can over-index on individual capabilities and undersell the systemic shifts needed to accomplish all three. By way of example, our loneliness epidemic reflects a flawed, laissez-faire approach that essentially tells lonely individuals to go get less lonely – on their own.
“Our capacity to build and mobilize social capital will be key to resilience. While employers often laud ‘soft skills’ or ‘human skills,’ those don’t capture the entirety of networking self-efficacy: having a growth mindset about one’s networking ability, conversational skills and cultivation of empathy. To mobilize human connections people also must implement help-seeking skills and mindsets that override the gospel of self-help that dominates American individualism – from which AI companies are profiting.
“The capacity to prioritize and engage in face-to-face connections will not only preserve demand for human relationships but will enhance our collective ability to delineate between true empathy and AI-generated sycophancy.”

Aneesh Aneesh
AI is moving into intimate life; this frays old systems of connection and intimacy. ‘What arrives is often not connection but simulation,’ shattering traditionally-valued types of relationships.
Aneesh Aneesh, sociologist of globalization, labor and technology and executive director of the School of Global Studies and Languages at the University of Oregon, wrote, “Adaptation to more-advanced AI systems playing a significantly larger role in human lives won’t be uniform. It will vary across cultures and it will depend on what each society already relies on to reproduce itself socially: What kinds of bonds it assumes, what kinds of obligations it treats as legitimate and what kinds of ties it treats as contamination.
“To see why, I want to take a short detour. Modernity is best understood as a mutation in social reproduction. In pre-modern formations, social reproduction is inseparable from sexual reproduction in the sense that it permanently presupposes certain characteristics of sexual reproduction and relies structurally on them: marriage, lineage, inheritance, kinship. These aren’t merely ‘values.’ They are the infrastructure through which life organizes itself and persists. I discuss these issues in my forthcoming book, ‘Modular Citizenship: From Kinship to Algorithmic Rights Regimes.’
“Modernity reorganizes this infrastructure. It produces a function-based order that often exhibits stark indifference – sometimes even disdain – toward the normative world of kinship and other dense social ties. Where kinship once underpinned survival, modern organizations recode it as misconduct. Kinship becomes nepotism. Friendship becomes cronyism. The ties that once anchored life are reclassified as distortions of fair procedure.
“This contrast becomes clearer across societies. In places where kinship networks remain dense, nepotism is routine and largely unremarkable. In societies where functional communication dominates – especially in the West – nepotism becomes scandalous. The reason is not simply moral; it is structural. Markets, schools, courts and hospitals operate on their own discriminating criteria for selection and rejection, and this generates constant pressure to disregard factors that do not ‘belong’ to the system. These institutions must treat everyone as equal and free in principle, judged only by functional criteria: whether one can pay, whether one is qualified, whether one can provide proof, whether one complies.
“Returning to AI, then: People’s adaptation to more-advanced systems will differ from society to society depending on how much pre-modern formation still guides social communication. In functionally advanced societies, adaptation – or rather mal-adaptation – may be quicker, because the thick support of kinship has already thinned. In pre-modern kinship worlds, families – however imperfect – provided meaning, care and a durable place in the social landscape. In advanced societies, that structure is receding and the vacuum it leaves behind becomes a condition of technological uptake. Two consequences follow.
“First, within the functional realms – schools, hospitals, research labs, policing, the judiciary – AI systems will increasingly dominate decision-making. Not because they are wise, but because they scale. They train on quantities of data that no human professional can approximate. And in environments built for procedures and outputs, ‘better prediction’ becomes synonymous with authority. The system that claims to see more will be granted the right to decide more.
“Second, AI will not remain confined to functional domains. It will increasingly guide intimate life, and for some people it will become the most consistent social presence they have. As traditional bonds recede, new forms of connection are demanded; yet what arrives is often not connection but simulation – virtual girlfriends, chatbots, curated feeds that respond without resistance. These offer frictionless interaction and immediate emotional return, but precisely because they are frictionless, they may deepen the isolation they soothe.
“A new world is beginning to form in which trust, verification and shared meaning weaken. This matters because many human virtues – kindness, politeness, helping attitudes – did not emerge in abstraction. They were trained inside kinship systems, initially oriented toward one’s own kin, where obligations were thick and memory was long. But in societies where households fragment into single individuals at scale, those training grounds erode. The social consequences are not reducible to one metric, yet the broader pattern is difficult to ignore: overdose, suicide, homelessness, lone-shooter incidents, involuntary celibacy and escalating mental health crises. Even nostalgia for ‘traditional families’ cannot restore kinship structures that modernity began dissolving long ago; at best it produces an aesthetic without rebuilding the infrastructure.
“I see a future where function systems – markets, education, science, law, the state – become increasingly efficient and accelerate transactions in their domains, while social life frays further. The result is not simply ‘more AI,’ but an uncertain social future for a herd species that no longer reliably lives as a herd.
“As we evolve with these systems, how might the essence and elements of human resilience change? It may help to decompose the figure of the human into three intertwined components: the biological, the psychological and the social. In kinship systems, these were structurally coupled. Kinship formations were built on socio-psychological-sexual reproduction – marriage, lineage, family, clan norms and the mindsets that made those norms feel natural and binding.
“But modernity has been slowly decomposing that structure. Social reproduction separates from sexual reproduction; marriage no longer functions as the axis of social continuity. This does not mean kinship disappears. Kinship communication persists and will persist for decades, coexisting and clashing with functional communication. The direction, however, is clear: modern institutions increasingly privilege function over family.
“This doesn’t mean kinship vanishes inside organizations. Sexual relationships still form at work. Friends still help friends. Families still pull strings. The difference is that these practices now occur under pressure: they are discouraged, regulated, pushed underground or banned.
“Organizational norms will increasingly treat the intimacy of social bonds as a procedural hazard – an inappropriate influence, a conflict of interest.
“As the social and sexual separate, the psychological begins to shift as well. Feelings like politeness learned toward elders, love and loyalty, honor and obligation, even forms of hate and shame – these were not merely private emotions. They were shaped inside kinship worlds, trained through durable relationships, hierarchy, dependency and the long memory of the group.
“When AI systems proliferate, they will initially simulate these traits. A virtual friend can be super polite, teasing, loving, cajoling, so convincingly human that the difference feels irrelevant. But in the long run, some of these simulations may stop making sense, because the social worlds that gave those feelings their structure will keep changing. The fantasy life of the future may no longer derive from the same hierarchical, loving, harsh and violent history of human relations that once provided the raw material for meaning.
“So, what we may be approaching is not ‘human resilience’ in the classical sense – surviving shocks and returning to baseline – but a deeper reconfiguration of what baseline even is. The biological, psychological and social may depart from one another more radically than we assume. And if that happens, resilience will no longer be a simple virtue of the individual. It will become a question of what kinds of couplings can still be sustained – what kinds of bonds, institutions and practices can keep these components coherently connected in a world where both function systems and artificial companions are rapidly expanding.”

Greg Sherwin
People will delegate crucial qualitative life decisions to AI, including how they relate to others. The loneliness crisis will worsen. Look to ‘dumb homes’ and ‘chaos engineering’ to help build resilience.
Greg Sherwin, Singularity University global faculty member, previously senior principal engineer at Farfetch, shared a number of predictions, writing, “The path of least resistance doesn’t bode well for humans in an AI-saturated world. It will challenge human resilience because of people’s over-reliance on external dependencies that are prone to technical challenges and glitches that cannot be remedied or circumvented, let alone understood.
“People will delegate many qualitative decisions in their lives to AIs, including those about their relationships with coworkers, local politics and even their own families and friends.
“Another resilience challenge can be found in our digital systems. This was exemplified when San Francisco traffic was immobilized in December 2025 by a city power outage that caused Waymos, at scale, to operate in ways society was unprepared to handle. Systemic resilience will be challenged because of invisible dependencies on infrastructure layers and their internal vulnerabilities. Whether it is an attack on or failure of DNS (the internet’s Domain Name System) or failure of a power grid heavily stressed by massive AI consumption, problems are becoming too complex for human minds to decipher and debug. It is too difficult to deal with the social dynamics of decentralized, consumer-contributed power grids and AI systems. The potential for this chaos also creates greater opportunities for cyberterrorism and infrastructure attacks.
“Languages such as English or Mandarin will be used much more by machines than by humans, as they are the underlying API exchange language between machines and algorithms. AIs will introduce their own layers of interpretation, filtering, summarizing and abstraction from original sources that will be adopted as the norm. Only smaller pockets of ‘deviants’ will resist this and want to dive deeper into context and details, questioning sources. However, social influencers and conspiracies will inspire these deviants even more than they do today.
“The loneliness crisis will accelerate. Relationships, sex and childbirth rates will continue to plummet as they are each mediated and conveniently replaced with digital interactions. Emotional intelligence will become more a product of chatbot exchanges than a learned practice gained through experience.
“Humans’ reliance on digital mediation will continue to make them more apprehensive of approaching or speaking with people because they will perceive these interactions to be challenges to their comfort, convenience, desire for immediacy and even their sense of personal safety.
“Many people (possibly with more resistance among the more actively religious) will outsource more of their ethical thinking and decisions to machines, relieving them of the anxiety and plausibly distancing them from the consequences of their decisions. AI companions will be used to make many life decisions and a type of social stigma may emerge for ‘non-optimizers’ who do so without the aid of AI.
“Practices and resources to enable human resilience may grow to resemble Amazon Web Services’ ‘chaos engineering’ tests of its tech infrastructure. The purpose of an engineering ‘chaos game day’ is to identify potential resilience issues or deficiencies by testing people, teams and machines with difficult challenges to overcome. Or consider the Dutch summer rite in which parents drop their pre-teen children off – on their own – deep in a forest to navigate back to base, fostering their independence, problem-solving and resilience.
“Individuals will seek escape, at least now and then, a la some form of digital detox in order to nurture the latent skills that are being lost to cognitive debt, to consider their lack of willingness to sit with uncertainty and their need to personally face up to challenges.
“Pockets born out of social need – perhaps largely driven by women, who have traditionally prioritized relational roles in society – will form a resistance. Hence intentional ‘analog communities’ will form in which the ‘smart home’ idea is inverted into ‘dumb homes’ and mostly digital-free lifestyles. This subculture could rise to the level of the 1960s cultural drop-outs and ‘free love’ movements.”

Sherry Turkle
‘We can come back to each other and to ourselves. … There is more than a threat to empathy at stake; there is a threat to our sense of what it means to be human.’
Sherry Turkle, MIT professor and author who studies the emotional connections between people and technology, briefly discussed a passage from her book, “Reclaiming Conversation: The Power of Talk in a Digital Age.” She wrote, “In the wave of enthusiasm about generative AI, there has been renewed talk of technological determinism and ‘inevitable’ next steps to integrate algorithms into our intimate lives. But nothing is inevitable – conversation is something we can forget, but also something we can remember.
“We can come back to each other and to ourselves. I argued for this assertion of agency in 2015 and now I argue ever more fervently. There is more than a threat to empathy at stake; there is a threat to our sense of what it means to be human.
“The performance of pretend emotion does not make machines more human; it challenges what we think makes people special. Our human identity is something we need to reclaim for ourselves.”
The second section of Chapter 7 features the following essays:
Henry Brady: ‘It is easy to fall into the trap of thinking that AI defines an essential characteristic of being human. … Consequently, we need stronger antidotes to the ability of AI to define the nature of personhood.’
Sarah Pessin: The ‘Cyborg Slide’ is coming. ‘We will develop new abilities but they will come at the cost of shedding parts of our humanity which we must work to hold onto.’ We must treasure the ‘slow and the small.’
Paul Saffo: ‘Motors stole silence from our world and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude.’
Divya Siddarth: Real harm can come as we anthropomorphize AI and develop social relationships with it. Let’s stop fearmongering about being ‘left behind’ and turn our attention to easing the suffering AI will cause.
Chris Labash: If AI is so good why does it make me feel so bad? Where do we go from here? Let’s lean into being imaginatively thoughtful and genuinely human.

Henry Brady
‘It is easy to fall into the trap of thinking that AI defines an essential characteristic of being human. … Consequently, we need stronger antidotes to the ability of AI to define the nature of personhood.’
Henry Brady, former president of the American Political Science Association and dean of the School of Public Policy at the University of California-Berkeley, wrote, “There could be an increasing division between the set of people who learn to master AI, use it effectively and efficiently and profit from its deployment and another – probably larger – group of people who are flummoxed by it and who retreat into longing for the past, into cults and magical thinking, and (perhaps as the best outcome) stronger attachment to organized religion.
“AI will raise fundamental questions about the nature of human beings. AI has been called a ‘stochastic parrot’ to differentiate it from human beings, but what if human beings come to believe that they are nothing more than stochastic parrots? Am I just guessing the next word that I will write on this page? What shapes my guesses? How am I to understand that shaping? What differentiates me, if anything, from AI?
“To be clear, I think that I am more than what AI does, but it is easy to fall into the trap of thinking that AI defines an essential characteristic of being human. The problem is parallel to the degree to which many people allow social media to define who they are. Consequently, we need stronger antidotes to the ability of AI to define the nature of personhood.
The issue here goes far beyond regulating, for example, ‘deep-fakes’ or ‘disinformation.’ It goes to the heart of reorienting society to the changes in lives, the redesign and loss of jobs, and perhaps the loss of meaning that will come from AI. It is not clear to me that most institutions have the capacity to develop a blueprint for ensuring resiliency.
“Human eras have been defined by various metaphors, such as using Newtonian physics to define the nature of people or Darwinian biology to define the nature of society. AI may be one of those inventions that defines – even more than the digital computer has defined – the nature of human beings. As a result, people will face the task of defining themselves in relation to that metaphor.
“Religion (and cults and magic) could play a major role here. It could provide meaning that would help people comprehend, locate and tame AI, or it could provide an off-ramp that substitutes for logical thinking.
“It will be interesting to see how the major religions deal with AI. Pope Leo XIV has already warned about AI; will he write a defining encyclical about it? In 2023, the Southern Baptist Convention passed a resolution saying that human beings are created in the image of God and that technology should not supplant this, but what, concretely, does that mean? What vision will they provide for their members?
“To take another set of institutions: How will K-12 education and colleges and universities act to provide people with the tools they need to use AI effectively? AI is emerging at a time when the humanities are under siege because they can’t be monetized. Yet this may be a time when truly vibrant humanities courses are of the greatest importance. But will the humanities be up to this task given their backward-looking orientation? Will enough humanists ‘catch up’ with AI so that they can deal with it in their courses?
“My greatest fear is that just as with the Internet and social media, we will allow ‘Big Tech’ to define AI in terms of the profit it can produce. We will not invest in making society ready for AI through our educational system and our governmental structures. The issue here goes far beyond regulating, for example, ‘deep-fakes’ or ‘disinformation.’ It goes to the heart of reorienting society to the changes in lives, the redesign and loss of jobs, and perhaps the loss of meaning that will come from AI. It is not clear to me that most institutions have the capacity to develop a blueprint for ensuring resiliency. Most governmental institutions (most especially the Congress and the Courts) do not have the capacity to come to grips with AI. Perhaps the best-equipped institutions are colleges and universities that have experts on AI. But I worry that universities will not move fast enough. As I have worked at my own institution (a university) to think about equipping students to wrestle with AI, I have become aware that doing this will be a very big job that will affect all aspects of what we do.
We will have to consider whether robots with bodies, minds and executive functioning deserve equal consideration. Consequently, AI is just the beginning of questioning that will engage us for the following decades and perhaps centuries as we proceed with our technological engineering feats. Our society should be doing a better job of preparing everyone for that.
“In summary, AI poses an enormous challenge for which we are not ready. And I worry that many people will not have the support structures to endure that challenge. Those who go to (some) colleges might have such structures that will allow them to rationally, soberly and sensibly deal with AI and to benefit from it. The remainder of the public is likely to have inadequate support to make sense of it all and they could be greatly harmed by AI. Consequently, on top of growing wealth and income inequality that has been caused by technological change, there will be a cognitive and emotional gap that will disadvantage those who have already been relegated to lower incomes.
“So, what makes us human and differentiates us from AI as it is presently constituted? I believe that one major difference is that our minds and bodies are so closely intertwined, leading to the millennia-old debates over the relationship between the body and soul and the problem of ‘mind-body’ duality that have challenged humankind in the writings of most of the world’s religious thinkers and philosophers.
“Buddhism argues that there is no fixed soul, just a continuous flow of changing consciousness. Christian religions have favored a mind-body duality – so that St. Augustine renounced the flesh in favor of the soul – and Descartes found personhood in the mind by saying ‘I think, therefore I am.’ Modern brain science is still struggling with these issues. Because our minds and bodies are intertwined we are more than either one alone. I also believe that human executive functioning that links our brain and our bodies leads to a sense of personhood that is fundamental to what it means to be human. It is here that thinkers such as Shakespeare, Jane Austen, Dickens, Fyodor Dostoevsky, Virginia Woolf and Ernest Hemingway excel because they consider the whole human being with its passions and interests.
“But fundamentally, artificial intelligence is disembodied mind (the silicon substrate notwithstanding) without even much in the way of executive function. As we build robots with sensors, executive programs to interact with others and AI, they will begin to look and feel more like humans. Just as there is a large literature on whether animals should be given equal consideration to humans, we will have to consider whether robots with bodies, minds and executive functioning deserve equal consideration. Consequently, AI is just the beginning of questioning that will engage us for the following decades and perhaps centuries as we proceed with our technological engineering feats. Our society should be doing a better job of preparing everyone for that.”

Sarah Pessin
The ‘Cyborg Slide’ is coming. ‘We will develop new abilities but they will come at the cost of shedding parts of our humanity which we must work to hold onto.’ We must treasure the ‘slow and the small.’
Sarah Pessin, professor of philosophy and interfaith chair at the University of Denver, wrote, “I think of the coming 10 years as the ‘Cyborg Slide,’ a time when we will develop new abilities but at the cost of shedding parts of our humanity which we must work to hold onto. The quickest way to describe the biggest problem during this slide is that we will be increasingly invited to surpass the ‘slow and small’ conditions for human meaning as we have known it. Whether we want to retain access to friendship or forgiveness, justice or even jokes, we will need to resist the urge to always go bigger, move faster, live longer and prioritize quantity of conversation partners over meaningful relation.
“For centuries, our ‘stories of self,’ and the meanings that such stories make possible, have been conditioned by a certain rich slowness and good smallness, even with the vast diversity of individual stories and even with all of the speed increases, from horse to car to airplane.
The growing drift from human to cyborg signals a rewriting, not simply of our smartwatch styles but of our ‘story of self’ and the meanings that are allowed to circulate within the context of that story.
“Our human concept of friendship, for example, has quietly relied on certain ‘slow and small’ limits on the number of people we might expect to know and the number of years we might expect to live. The delicate act of, say, an inside joke with a good friend is not just lost but impossible in a social media chat with millions of strangers because that is precisely not what ‘inside joke’ means.
“Same for forgiveness. If one doesn’t know anyone slowly enough to wound or be wounded, one loses access to the category of forgiveness; ‘forgiving’ and ‘not forgiving’ increasingly fail to hold meaning.
“The growing drift from human to cyborg signals a rewriting, not simply of our smartwatch styles but of our ‘story of self’ and the meanings that are allowed to circulate within the context of that story. If we allow ourselves to become cyborgs, can we tell inside jokes to close friends? Not in any current use of the term ‘inside joke’ or ‘close friend,’ because as cyborgs we will have surpassed so many of the current ‘small and slow’ conditions of how we relate to limited time itself – conditions related to how we experience self, neighbors, pasts, futures, memories and hopes, and all of that in relation to what ‘friendship,’ ‘jokes’ and ‘inside jokes’ mean.
“To help embrace many AI advances while avoiding the Cyborg Slide and its resulting loss of access to cherished human experiences, here are some of the interrelated goals and strategies we must take up now:
1) “Help people talk more about the distinction between embracing many aspects of AI, while also ensuring AI does not inadvertently prevent people from accessing their favorite human experiences.
2) “Develop ways of talking about AI futures that neither demonize nor utopianize but rather cultivate in people a ‘pros and cons’ mindset when it comes to any AI enhancement: How will it make my life better? How will it quietly rob me of access to my most cherished human experiences?
3) “From the number of people we set out to know and the time-consuming process of building strong relationships, to the slow simmer of friendship and the intimate scale of forgiveness, help people understand how ‘slow and small’ parameters of human life enable some of our most cherished experiences.
4) “And – using the ‘pros and cons’ mindset – help people consider how even small AI disruptions of those parameters might risk robbing them of their most cherished experiences, whether (and if so why) they might be willing to take some of those risks but not others and whether there are or aren’t ways to take up particular pieces of AI technology so as to minimize its likelihood of robbing us of access to our favorite human experiences.
“All of this should be undertaken through writing, art, media and film, exposing people to more of these conversation frames, pro and con ideations and a growing number of concrete case studies in and explorations of the conditions for and textures of human experience that are most worth saving and most susceptible to interference in increasingly AI-saturated futures.”

Paul Saffo
‘Motors stole silence from our world and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude.’
Paul Saffo, a prominent Silicon Valley-based forecaster with three decades of experience helping corporate and governmental clients understand and respond to the dynamics of change, wrote, “Every technological advance conceals a consequent loss, but the novelty is always so glittering and the loss so gradual, we never notice what was lost until long after it is gone. The rapid diffusion of AI in this moment is no exception, but recent history reveals what AI’s most surprising cause of loss might well be.
“Just over a century ago, the advent of internal combustion engines served up a mobility revolution. The near-simultaneous arrival of fractional-horsepower electric motors delivered an unprecedented level of motive power to factories, offices and homes. Suddenly, engines were everywhere: in our kitchens, on our roads and in our skies. The benefits – freedom, convenience, abundance – are vast, to say the least.
Seduced by our artifice, we are leading ourselves into a madhouse world of mediated intelligence that will shape us much more profoundly than motors and light could ever accomplish. We will have no silence, no darkness – and no solitude. Like Blanche, let us hope at least that our new strangers are kind.
“But the cost? The loss of silence. Stop for a moment, sit quietly and listen. Is it silent? At first perhaps, but then you notice the low hum of a motor somewhere, or the soft whoosh of an HVAC system. Step outside. Silence? Hardly. The whispered buzz of a distant leaf blower, a car passing blocks away, the whisper of a jet crossing high overhead. It is the unavoidable white noise of technological civilization. Billions of motors toiling away have utterly changed our planetary soundscape. And it is not just humans who have lost essential silence. Birds have changed their songs in a desperate attempt to be heard over the noise. The ancient music of whales is lost in the oceanic cacophony of ship screws and sonar.
“Electric lighting was another life-changing marvel which arrived contemporaneously with the diffusion of small motors. It gave us benefits beyond measure, but the cost? The loss of darkness. Consider images of night-time Earth from space. A hundred fifty years ago, a passing spacefarer would have glimpsed a planet wrapped in darkness, with a few widely separated pools of soft light. But look down today and the dark is retreating before a vast, ever-spreading artificial lightscape. Encased in the harsh glow of artificial light, we are isolated from the ocean of stars overhead, from the intimate darkness once so essential to setting circadian rhythms for human and non-human species alike.
“Now we are racing into a future where AIs are proliferating faster than LEDs are displacing incandescent light bulbs. Forget the hypothetical future of AGIs; this is a 2026 present in which primitive AIs are taking over simple quotidian tasks that once depended upon human brainpower to accomplish. Even as we await the super-intelligences, we will become as utterly dependent upon this exponentially growing cognosphere of thinking devices as we are on motors and electric light.
“Motors stole silence from our world, and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude. AI will eliminate solitude because the temptation to interact with these primitive new intelligences will prove so beguiling that just as we choose to not sit in the dark, we will now choose to never be alone. Too late, we will realize that solitude is essential to what it means to be human.
“The profundity of this shift cannot be overstated. Motors substitute for muscle. Lighting compensates for frail human vision. AI is now poised to take on cognitive tasks once assumed to be the exclusive domain of the human neopallium. As AI embeds itself ever more deeply into our world, humankind will become like Blanche in ‘A Streetcar Named Desire,’ who, while being led to the madhouse, softly whispered, ‘Whoever you are – I have always relied on the kindness of strangers.’
“Seduced by our artifice, we are leading ourselves into a madhouse world of mediated intelligence that will shape us much more profoundly than motors and light could ever accomplish. We will have no silence, no darkness – and no solitude. Like Blanche, let us hope at least that our new strangers are kind.”

Divya Siddarth
Real harm can come as we anthropomorphize AI and develop social relationships with it. Let’s stop fearmongering about being ‘left behind’ and turn our attention to easing the suffering AI will cause.
Divya Siddarth, award-winning science fiction author, engineer and founder of the Collective Intelligence Project, wrote, “Over the past two years, with the increasing commercialization of LLMs, I have grown pessimistic about the effects of ‘AI’ on human society. I believe the desire to turn a profit on these systems has led to widespread premature deployment, causing job disruption, emotional harm and academic decline. The lack of consideration with respect to ethics, morality and environmental harm is also distressing.
“As someone who once loved to design machine learning systems, I’ve had a polar shift in my feelings towards this subject. I still do believe there is promise in the ways that machine learning can apply to pattern recognition and discovery – especially in fields that are data-centric, like science or economics – but to push these systems into everyday life without first educating the general populace is to play with fire, and we’re already seeing the early signs of burns.
“The vast majority of human beings do not understand how various types of machine learning algorithms work, much less the potential failure modes of each one. Given our natural propensity to anthropomorphize and the human brain’s capacity for treating imaginary people the same as real people, it’s no surprise that so many are developing social relationships with so-called AIs. Real emotional harm can result from this, but when no human being is involved there is no way to repair or recompense these injuries.
I expect that the next couple of decades will bring widespread upheaval and suffering at the individual level, especially for those who don’t have the benefits endowed by wealth or higher education. The people in power have little incentive to slow or stop the detrimental effects of AI. Resilience is likely to come at the cost of hard-won scars. From where I sit, the future does not look bright.
“Like it or not, ready or not, many aspects of society beyond the personal – business, education, the law – are being forced into using ‘AI’ as part of their daily routines. The motivation for this is purely profit: There is little consideration given to human well-being at the feet of almighty efficiency and the altar of the bottom line. Companies whose mottoes used to center around developing AI for good now focus on their valuations and IPOs, with no compensation for the labor of millions whose content they use to train their systems.
“How do we build resilience in the face of this? The ways we always have: by strengthening our bonds to our loved ones, by forging communities and by engaging with the physical world. Interactions with LLMs, as with social media, are like a drug and can lead to addiction. We need leaders and educators to teach people to be cautious of their use, to run education campaigns like they did for tobacco and alcohol, and to encourage safe usage. We need guardrails established by law for corporations, and we need enforcement.
“Most of all, we have to stop listening to the fearmongering about being ‘left behind’ in terms of progress. What does progress mean if it’s not for the betterment of humanity? The world is facing a water crisis for the first time. We have passed the tipping point for global warming. Authoritarianism is on the rise, and human rights are being eroded. These are the areas where we need to make progress, not new platforms for advertising revenue and monthly subscriptions.
“Some parts of society are starting to wake up to these necessities, but others have a long way to go. I expect that the next couple of decades will bring widespread upheaval and suffering at the individual level, especially for those who don’t have the benefits endowed by wealth or higher education. The people in power have little incentive to slow or stop the detrimental effects of AI. Resilience is likely to come at the cost of hard-won scars. From where I sit, the future does not look bright.”

Chris Labash
If AI is so good why does it make me feel so bad? Where do we go from here? Let’s lean into being imaginatively thoughtful and genuinely human.
Chris Labash, associate professor of communication and innovation at Carnegie Mellon University, wrote, “Throughout their history, humans have often observed, ‘I didn’t ask for it, but now I can’t live without it.’ The human history of life with technology is rife with examples of things that no one especially asked for that soon became part of daily life (often with some less-than-stellar consequences). Jean-Paul Sartre called this ‘counterfinality’; in our more-recent, less-inspiring lexicon we call it ‘The Law of Unintended Consequences.’
“In 2007, Steve Ballmer famously laughed off the iPhone, saying that he preferred Microsoft’s phone strategy and that few people would want a phone without a keyboard. Ten years later, Microsoft was out of the phone business and the iPhone was dominant. Apple has now sold over 2.3 billion iPhones, enabling between 34% and 64% of us in the U.S. to doomscroll our time and happiness away on a daily basis.
“And while no one really asked for the internet (well, OK, the U.S. Department of Defense originally did), it’s here and now, so are social media, spam email (about half of all email) and – now – AI. It is estimated that more than 50% of what we see on the internet now can be referred to as AI slop, some encouraging people to participate in oddities such as the ‘Bloody Ritual of Molech’ and ‘Demon of Child Sacrifice,’ some urging users to go on killing sprees and, in the case of Moltbook, to leave the 1 million-plus registered AI agents there alone to hang out in their own AI-only social network.
“So what do we do with this? AI is here, it’s not the future, it’s the present – and like it or not – we have to deal with it.
Humanity is seemingly saved by ‘The Answer’! – again and again
“My recollection is that in the mid-1980s, computer systems integration was considered ‘The Answer’ to everything. An assertion made by computer scientist Herb Grosch in 1953 came to be called Grosch’s Law: It estimated that computer performance increases as the square of the cost. So, if computer A costs twice as much as computer B, you should expect computer A to be four times as fast as computer B. This meant that the most-efficient systems were those with the scale to amortize the investment. So everyone rushed headlong to scale.
“Then came the PC (The New Answer), which repealed the Law, and suddenly The Answer was to scale back. In the 1990s, software was The Answer. In the early 2000s, it was social media. In the later 2000s, it was the Metaverse. Now, The Answer is AI. Everything new has always been The Answer, until The Next Answer (quantum computing, anyone?).
“I’m not suggesting that AI is a flavor-of-the-week fix, technology or strategy, merely that it probably isn’t as promising or as dire as we allow clickbait headlines to lead us to think. And as more-advanced AI systems play an increasingly large role in our work and personal lives, the impact on our thinking will be far more profound than we realize.
“We know that one danger of AI is that it compromises our ability to think critically – about it or anything else. Multiple studies from Carnegie Mellon, MIT and other respected institutions confirm this. Those and similar studies also suggest that routine use of AI increases the impact of the Dunning-Kruger effect due to widespread AI sycophancy, telling users what it thinks they want to hear and helping them feel that, ‘Hey, I’m really smart.’ Confirmation bias is the best bias. More AI, please.
“And with the current race to investment, more AI is inevitable. As with Systems Integration in the 1980s, let’s scale, fast, big, and what the hell, somewhat mindlessly.
AI has vast potential to do good, but it still makes me feel bad. We, as humans, need to retain authenticity, honesty, intelligence and humanity in our thinking, writing, and living. AI is ‘math,’ not communication, it is a provider of information (true, false and in-between or both), it is not a thinker. It steals information, it has a documented negative impact on the ability to think critically, and it is the far-too-easy way out for far too many of us.
Where do we go from here? I don’t care
“According to a January 2026 Boston Consulting Group survey, 90% of CEOs say they believe that by 2028 AI will redefine what success looks like within their industry. Over 90% plan to continue investing in AI at current or even higher levels, even if the investments do not pay off in the next year. For context, a late 2025 MIT study concluded that so far, ‘Transformation is rare. Only 5% of enterprises have AI tools integrated in workflows at scale and seven of nine sectors show no real structural change.’ Most critics say that the return on investment is difficult to measure.
“And what does that look like among those in the workforce? Well, so far, it’s not great. Apparently (and, I suppose, unsurprisingly) AI burnout is a thing (the World Health Organization describes burnout as ‘persistent fatigue, emotional detachment or job negativity, and decreased productivity’). According to a survey from Quantum Workplace, 37% of all employees have high burnout levels; that number rises to 45% among workers who self-identify as ‘frequent AI users.’ Correlation isn’t necessarily causation, but it’s worth noting.
“This may suggest that a potential fugue state of AI ennui is already upon us. I don’t see my students (nearly all are graduate students in information systems management) being excited about using AI anymore; their reaction is more like ‘it’s a tool, yeah I use it, whatever.’ It has all of the faded luster of discovering that your v1.0 Microsoft Word (or Wordstar if you’re a legacy human) can do ‘global find-and-replace,’ and then moving to v1.1. It still does replacement but now that’s boring.
Where do we go from here? Wait, maybe I do care
“AI may write the present, but more – much more – disturbingly, it can rewrite the past. And this is where and why we should be cautious of AI. It’s not just a technology tool; it’s a political one. When you ‘flood the zone’ with disinformation, bullshit and lies, it all becomes the language used to train AIs. At a minimum, that then feeds human distrust (of government, business, media, of each other); at a maximum it – in a very real sense – changes history.
“Right now in 2026, according to The Edelman Trust Barometer (a global narrative survey of 33,000 people worldwide), a whopping 69% of people distrust their government leaders, 68% distrust business leaders and 70% distrust media. The survey didn’t measure person-to-person distrust, but it sure feels like it’s rising, abetted by governments and partisan media. Here in the United States, there’s a doubling-down of the ‘who are you going to believe, me or your lying eyes?’ attitude in the current government. Participants in the January 6, 2021, attempted coup? ‘Tourists’ or ‘Patriots.’ Vaccines? Questionable at best, harmful, probably. Immigration and Customs Enforcement actions in Minneapolis? Rounding up ‘dangerous terrorists,’ even though the evidence is clear that 70% of those arrested have no criminal record and pose no threat.
“Writing in ‘Mein Kampf’ in 1925, Adolf Hitler talked about the Big Lie and 16 years later Joseph Goebbels more fully explained it: ‘If you tell a lie big enough and keep repeating it people will eventually come to believe it.’ When AI learns from distorted fact or – more properly – disinformation, bullshit and lies, it becomes a chillingly effective tool that quite literally can change history.
Let’s lean into being imaginatively thoughtful and genuinely human
“So how does all this impact humans? How do we cope? I’ll leave it to those with more expertise than I to posit how AI might change relationships, mental and physical health and human efficiency. To me, AI and communication are strange and incompatible bedfellows. Nothing about communication is supposed to be artificial. Real communication is supposed to be just that: real. So while, as a teacher and researcher, and just plain human, I can appreciate and happily use AI to master the menial, I still revel in the real. AI can create art, but not good art, just like a paint-by-number Mona Lisa might look a bit like the real thing but it is so…not. AI can write, but it has no heart, so it can’t write from it.
“AI has vast potential to do good, but it still makes me feel bad. We, as humans, need to retain authenticity, honesty, intelligence, and humanity in our thinking, writing and living. AI is ‘math,’ not communication, it is a provider of information (true, false and in-between or both), it is not a thinker. It steals information, it has a documented negative impact on the ability to think critically, and it is the far-too-easy way out for far too many of us.
“We must remember that this is a flawed tool to be used with care and thought, not a Magic 8-ball that can do our thinking for us, not a magic pen that will do our writing for us, nor a magic brush that will make our art.
“When our thinking, writing and art lose heart and honesty, when we rely too much on the artificial, we lose our humanity. And that impacts us far more than in a personal way, it impacts us as a civilization.”
The third section of Chapter 7 features the following essays:
Dmitri Williams: Loneliness will increase as the pace of change speeds up. People are ‘cognitive misers’ who will defer to AI judgments. Still, there will be a backlash led by human-centric movements.
Scott Kollins: Increased engagement with conversational AI platforms puts children at risk for learning and normalizing ‘aberrant patterns of social interaction that might have negative consequences.’
Brian Southwell: Offer people human connection and highlight models of everyday life experiences that build social ties. Sanctuaries from technology will be appreciated.
Giacomo Mazzone: ‘A primary problem to be dealt with by people using digital systems in the future will be the solitude they may experience in a world mediated by AI.’
Irina Raicu: Learn the lessons that friction teaches. A good model for that is partner dancing, especially when doing it with multiple partners, requiring you to make compromises with those who are different.
Katrina Johnston-Zimmerman: ‘The development of advocacy and awareness initiatives is required to help foster responsible use and a deeper understanding of AI systems beyond the personal point of view of today’s average users.’
Gerd Leonhard: Amusing ourselves to death gives control to autocrats. Most people will use AI to outsource their cognition as well as their social interactions. ‘Democracy will die under these circumstances.’
John Markoff: The ‘I-Thou’ sensibility of the past should embrace the ‘I-It-Thou’ realities of today because we live in a ‘world in which all human interaction is mediated by algorithms.’
John C. Havens:‘How can we prioritize human and planetary flourishing in symbiosis in any tech we create?’ We should redefine what progress means and how it ties to human well-being.
Professor of Robotics: ‘We learn most by learning and being educated through such person-to-person interactions.’

Dmitri Williams
Loneliness will increase as the pace of change speeds up. People are ‘cognitive misers’ who will defer to AI judgments. Still, there will be a backlash led by human-centric movements.
Dmitri Williams, professor of technology and society at the University of Southern California, wrote, “All of this takes place against an evolutionary background. We’ve evolved as a species to interact with each other face-to-face, to stand upright and to walk around. Yet recent innovations – very recent in the grand scheme of evolution – have moved much faster than we can adapt. This is one major reason why we have a well-being and loneliness crisis: We’ve moved from relying on and interacting with people to doing so with technology. The result is that we may be more efficient and more entertained but we feel more lonely.
“I see AI in the future as continuing that larger pattern but moving faster and more actively into our social and cognitive lives. As people rely on AI for their social needs, this sense of loneliness and estrangement will deepen. And as people outsource their creativity and cognition to it, their own skills in both areas will atrophy. In both cases, the technology can be used to amplify positive outcomes, but I do not think that will be the default or the majority of uses.
The space will be ripe for human-centric movements that will range from a generally positive ‘Up With People’ kind of vibe all the way to a violent and angry anti-machine Unabomber one. Leaders in these movements will harness it for their own values and ends.
“Because we are cognitive misers and because we live in a competitive, pressured economic space, people will use AI to be efficient rather than connected and intellectually challenged. The easier path is there, cheap and appealing. It’s the bargain most people have already struck with technology in general and there’s good reason to see that continuing.
“I do foresee a rise of human-centered backlash to these outcomes, though. People who feel lonely do look up from their screens and embrace human contact. It’s not a lost cause. And the space will be ripe for human-centric movements that will range from a generally positive ‘Up With People’ kind of vibe all the way to a violent and angry anti-machine Unabomber one. Leaders in these movements will harness it for their own values and ends, and we can expect demagogues and democrats to emerge alongside clergy and influencers to call out the problems and prompt what will be a wide range of solutions. Some legislators will explore solutions, as we see in the Australian cell phone ban, and I’d expect these to have more traction in collectivist and more socially oriented countries than in the capitalist ones.”

Scott Kollins
Increased engagement with conversational AI platforms puts children at risk for learning and normalizing ‘aberrant patterns of social interaction that might have negative consequences.’
Scott Kollins, psychologist, Ph.D., and chief medical officer at Aura, a digital family security company, wrote, “I am particularly interested in and concerned about the role that conversational AI chatbots will play in influencing child and adolescent development. We are seeing a rapid rise in the use of AI platforms not only as tools to help gather information, but also as milieus in which children engage in a wide range of role-playing activities, some of which include sexual and violent scenarios.
“The sycophantic nature of these tools reinforces ongoing engagement, which can increase the exposure children have to these kinds of situations. It is yet to be fully understood how the participatory role that children play in these kinds of interactions influences their own belief structures and development.
Increased engagement with these kinds of platforms for these purposes runs the risk that children will learn and normalize aberrant patterns of social interaction that might have negative consequences for how they interact with other humans.
“Increased engagement with these kinds of platforms for these purposes runs the risk that children will learn and normalize aberrant patterns of social interaction that might have negative consequences for how they interact with other humans.
“The following are suggestions for how we as a society might mitigate the potential negative consequences of these kinds of youth-involved AI interactions.
- It will be important for all stakeholders (e.g., parents, educators, health care providers, etc.) to understand how developing children might be interacting with AI platforms and the potential for problematic patterns of use.
- Children must learn from a young age about appropriate use of these tools and how to avoid potentially dangerous interactions.
- Platform developers need to be aware of the risks their platforms might pose and take action to limit harm. This is where a lot of attention currently is being placed, but companies themselves will not be able to single-handedly mitigate risk, nor should they be expected to.
“A summary and description of the methodology behind the data driving my response is here.”

Brian Southwell
Offer people human connection and highlight models of everyday life experiences that build social ties. Sanctuaries from technology will be appreciated.
Brian Southwell, lead scientist for the public understanding of science and distinguished fellow at RTI International, wrote, “We are likely to see adoption of AI-based tools in the next few years at a greater rate than what we might forecast would happen based on expertise and knowledge about AI alone. In other words, it is likely that many organizations will be tempted to use AI-based tools to cut costs and to keep up with other organizations, regardless of whether leaders in those organizations understand the opportunities and pitfalls of AI technologies well.
“That will likely lead to some frustration among employees and community members as they have to work with – or be governed by – automated systems that lack accountability and flexibility, much as people who have no choice but to interact with a government bureaucracy simply have to cope with the system.
“Those patterns and news headlines likely will fuel a higher level of interest in analog experiences and the adoption of regular practices of turning off cell phones and logging off computers as a health-seeking trend. How that affects human productivity remains to be seen; nonetheless, it could lead to an unanticipated decrease in employee engagement and even in civic behavior as people decide to ‘tune out’ of an information environment that seems dominated by AI-generated content.
“How can we help people develop resilience and coping skills while avoiding widespread disengagement? Offering people human connection and highlighting models of everyday life in a world filled with AI could be useful, much as popular culture has done, for better or worse, for decades as new technologies have arisen.”

Giacomo Mazzone
‘A primary problem to be dealt with by people using digital systems in the future will be the solitude they may experience in a world mediated by AI.’
Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, wrote, “A primary problem to be dealt with by people using digital systems in the future will be the solitude they may experience in a world mediated by AI. … How do we mitigate this risk? We can maintain essential social connections by getting out of our ‘comfort zone’ and doing things in public that may not come so easily to us. For example: Go out to cinemas even when it is cold outside; go to bookstores and shops, and talk with people, even when overnight delivery for online purchases is an option. Go to a real doctor – instead of asking a machine – and interact with people. Go to humans to hunt for answers to questions, rather than relying on a chatbot’s answers.
“We will definitely see generational differences in people’s adaptation to AI systems. Who will most easily seize the opportunities provided by the AI revolution or suffer at times from its influence? Of course, it will be the younger generations and those older people who already have digital skills.
I met my wife at university. I made my friends at work or during my recreational activities in the urban space or while travelling. I formed my opinions over most of my decades by reading newspapers and books of my choice. … Could a world mediated through the AI lens provide an equivalent, satisfactory alternative to these crucial experiences that made me who I am today?
“Those who lack digital skills will fall behind during the ongoing transformation. Unskilled people and most of the elderly will not be able to play a proactive role in the use of AI in their life, work or recreational activities. The AI revolution for them will take the form of apps on mobile devices that will provide new services with a certain degree of interaction, but most will be used in a one-way direction.
“This situation will not change anytime soon. … Then we must consider the kind of new society we are stepping into thanks to the AI revolution and the digital transition. There certainly are risks. …
“All of these changes will make a total break with the world as we knew it in the past. Just look back to my personal story of social connection: I met my wife at university. I made my friends at work or during my recreational activities in the urban space or while travelling. I formed my opinions over most of my decades by reading newspapers and books of my choice. I shaped the space around me through shopping in my preferred shops. I protected my working rights through unions.
“Could a world mediated through the AI lens provide an equivalent, satisfactory alternative to these crucial experiences that made me who I am today? I don’t think so.”

Irina Raicu
Learn the lessons that friction teaches. A good model for that is partner dancing, especially when doing it with multiple partners, requiring you to make compromises with those who are different.
Irina Raicu, the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University, shared this excerpt from her blog as her response: “Chatbots are here – integrated more and more into many people’s daily contexts. They are often useful, but they also bring concerns, both about particular uses and about the ways in which extended interactions with chatbots may skew our interactions with human beings. …
“Chatbots can distort our understanding of any human relationships, even the ‘shallower’ ones that constitute many of our daily human interactions. This is because, as one Stanford researcher put it, ‘These chatbots offer “frictionless” relationships, without the rough spots that are bound to come up in a typical friendship’, or in other types of relationships.
“There is a different kind of human interaction, though, that is very good at teaching people how to deal with (at least some types of) interpersonal ‘friction.’ It’s partner dancing. And it’s particularly effective if you participate in one of those classes that require partners to switch, every few minutes, so that you end up dancing with multiple people.
So much of what is lacking in ‘relationships’ with chatbots becomes clear if you go out there and try to learn to dance with a bunch of different partners. The positive aspects of working through friction in human relationships teach us something about others and about ourselves and bind communities together.
“In such classes, at least in Silicon Valley, you would likely end up practicing with partners from many countries, of many ages, different heights, weights and body types and possessing a wide range of dancing experience and know-how.
“Partner dancing highlights the complexities and necessity of compromise between people with different abilities, skills, styles, needs, personalities and backgrounds, all of whom are aiming to enjoy the music and the movement and need each other in order to do that.
“Such dancing forces you to look at other humans’ faces, instead of a screen. It also requires you to touch other people and so deal with what philosophers call ‘embodiment’, the fact that our physical bodies matter (and can feel good, or tired, or hot, or all of the above and more), impacting our perceptions and our thoughts. Chatbots don’t sway or sweat.
“Moreover, unless you’re in one of those professional pairs who’ve been practicing together for years, partner dancing will constantly confront you with more substantial friction, with missed signals, awkward steps, moments of off-beat distraction or hiccups. (Chatbots also don’t feel pain, if you step on their toes or hit them in the face with an inartfully-flung arm.)
“If you’re lucky, you will find partners who respond to such friction with smiles.
“But even at its best, without hiccups, good dancing is all about adjustments to another person. The small compromises, the push/pull, the size of the steps or the speed of the turns, all require paying attention to another human being with his or her own needs, limitations, moods, strengths, rhythm.
“This also means that partner dancing is not always a good experience. It is definitely always a learning experience.
“Not everyone can dance, of course, or would find it enjoyable – and dancing is not the only kind of activity that offers this kind of learning. But so much of what is lacking in ‘relationships’ with chatbots becomes clear if you go out there and try to learn to dance with a bunch of different partners. The positive aspects of working through friction in human relationships teach us something about others and about ourselves and bind communities together.”

Katrina Johnston-Zimmerman
‘The development of advocacy and awareness initiatives is required to help foster responsible use and a deeper understanding of AI systems beyond the personal point of view of today’s average users.’
Katrina Johnston-Zimmerman, Philadelphia-based urban anthropologist and founder of THINK.Urban, wrote, “Humans have always needed contact with other humans, and that will not change with the addition of AI systems. In the best-case scenario, AI will serve to illustrate this fact to an extreme – pushing people back to analog and face-to-face interactions to compensate for the loss of humanity in agentic programs.
“Humans will naturally do whatever they are able to do, such as using AI to simplify and automate processes, and take on effortful tasks only when absolutely necessary. When faced with the extreme challenges that arrive with AI – job loss, an accelerated likelihood of harassment, violation of privacy, personally impactful environmental degradation and more – the negative costs of AI start to appear to outweigh the convenient benefits.
“Our future will be co-created. To ensure that it is managed properly, the development of advocacy and awareness initiatives is required to help foster the responsible use of and a deeper understanding of AI systems beyond the personal point of view of today’s average users. If we continue on the current trajectory – one without checks and clear alternatives – we can expect to see a degradation of trust and reliability, increases in isolation and loneliness and a broadening of the anti-AI activism we have already started to see.
“Professionals and volunteers working in the fields of community development, placemaking and public-realm improvement have long known that the obvious and easiest way to create a livable environment is through the cultivation and support of human encounters in public spaces. The establishment of place attachment and ownership is one of the clearest indicators of a safe and healthy environment. Simple improvements to green spaces and other locations that encourage socialization increase resilience and health for the population overall. Humans’ growing attachment to and time spent in digital systems may ultimately be a detriment to our way of life. Remaining as physically located as we have been for thousands of years is good for the sake of the whole.
“Ultimately, the impact of AI comes down to all of us. As it seems that the public is unable to enact direct change in the behaviors of private corporations managed with a profit-driven mindset, we should at least get out and meet our neighbors and work to increase trust both within and outside of our social and institutional networks.
“Our standards for attention and critical thinking must remain high. Reliance on automated construction of thought must not replace humans’ creativity and independent opinions. Educational institutions should require students to spend a lot of time in active, vigorous settings that initiate and sustain non-tech attention and awareness. Students should also be given the ability to opt out of AI assistance in their work. To retain our humanity – our heart and soul – we must ‘exercise’ it, just as we must exercise our physical being to keep it in shape. For the sake of our future, we must take it upon ourselves and encourage one another to be, think and feel as humans.”

Gerd Leonhard
Amusing ourselves to death gives control to autocrats. Most people will use AI to outsource their cognition as well as their social interactions. Democracy will die.
Gerd Leonhard, speaker, author, futurist and CEO at The Futures Agency in Zurich, Switzerland, commented, “Most people will use AI to outsource their cognition as well as their social interactions. A world without any effort spent on truly understanding things, or on going through the ups and downs of real human relationships – a world devoid of any logic of ‘earning’ something – that will simply be a machine world. On top of this, augmented reality and virtual reality will enable us to literally escape real reality and live in a synthetic world. Democracy will die under these circumstances. A perfect stage for autocrats!”

John Markoff
The ‘I-Thou’ sensibility of the past should embrace the ‘I-It-Thou’ realities of today because we live in a ‘world in which all human interaction is mediated by algorithms.’
John Markoff, fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University, previously a writer in residence at the Stanford Institute for Human-Centered AI and a senior technology writer at the New York Times for 28 years, wrote, “What I worry about most is human isolation. My societal ideal is what Martin Buber described as ‘I and Thou’: direct human contact between individuals in a relationship of mutual engagement, connection and presence. This is where Gordon Pask, an early cyberneticist and psychologist, believed human intelligence originated. We now need to consider the societal consequences of ‘I-It-Thou’ relationships – AI is the ‘it’ in the mix.
“A world in which all human interaction is mediated by algorithms? It is not a society that I am looking forward to. I’m uncertain about how to avoid this future, but it already has overtones of the Borg from ‘Star Trek’: ‘Resistance is futile; you will be assimilated.’ We live in a capitalist economic system. It is not at all clear that we will be free to design these systems as ‘Machines of Loving Grace.’ I think that in both China and the West we are seeing the emergence of a surveillance state of Orwellian proportions.”

John C. Havens
‘How can we prioritize human and planetary flourishing in symbiosis in any tech we create?’ We should redefine what progress means and how it ties to human well-being.
John C. Havens, author of “Heartificial Intelligence” and founding executive director of the IEEE’s Global AI Ethics Initiative, wrote, “Overall, the question society needs to ask at the outset of AI systems design is, ‘What is it we want to prioritize in terms of ‘progress’?’ ‘How can we prioritize human and planetary flourishing in symbiosis in any tech we create?’
“Today, GenAI is being prioritized largely based on the economic advantages it brings to a very small subset of humanity: investors, the organizations creating the LLMs and the companies using GenAI in ways they deem to deliver ‘value’ (largely based on productivity and efficiency in isolation).
“The hyperscale data centers proliferating to support the unique computing needs of GenAI/LLMs are being prioritized largely for the benefit of these people, not to serve the best interests of the public and the environmental communities in the myriad locations where they are being developed (often without local community consensus).
“Public calls for a moratorium on hyperscale data center proliferation do not indicate a desire to ‘slow innovation’ or ‘halt progress’ but rather a desire to redefine what ‘progress’ means in order to serve the public and planet good.
“At a time when GenAIs built to provide ‘companionship’ are found to harm mental health, encourage suicide and cause other harms via sycophancy, hallucinations and systemic errors, are these imperfect AI systems worth the cost to people and planet? If this continues, society at large (at least people who have access to and use GenAI) will continue to systemically suffer.
“We must prioritize human and planetary flourishing at the outset of the design of all AI systems, now, and let it be the basis for the key performance indicator (KPI) goals of AI systems and their supportive infrastructure.”

Professor of Robotics
‘We learn most by learning and being educated through such person-to-person interactions.’
A leading professor of robotics who is based in Japan wrote, “Many people will act uncritically and without thought. They accept what they see. The way to ensure resilience is to train people who can comprehensively consider and make judgments based on sufficient information. Opportunities to develop this ability should be found in education, but in reality there aren’t many. Leaders in every country don’t really want people to think for themselves; they want to shape society so it is easy for them to manage. Rather than evolving and deepening AI systems, we should devote all of our resources to educating people. However, looking at the current global geopolitical situation, it’s hard to hope. … Mutual understanding comes from direct contact between people, repeated face-to-face dialogue and experiencing each other’s real lives, fostering empathy and trust. We learn most by learning and being educated through such person-to-person interactions.”
> Go to Chapter 8 – Overcoming Complacency and the Lure of Convenience
> Return to the top of this page