The Essays
Chapter 6
The Great Divide: Broadening Differences and Inequities

Future of Human Resilience in the AI Age

Featured Contributors to Chapter 6: The 18 essay responses on this page were written by a UK-based complexity scientist, Fabio Morandin-Ahuerma, Russ White, Rosita Scerbo, Avi Bar-Zeev, Jeff Eisenach, Rotimi Awaye, Megan Peters, Andy Opel, Bernie Hogan, Ted Underwood, Guido van Rossum, Toby Shulruff, Erich Huang, Thomas Reuter, Dave Karpf, an Asian research scientist and an executive at a major consulting company. (Their essays are all included on this one, long-scrolling web page. They are organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant.)


The first section of Chapter 6 features the following essays:

UK-Based Complexity Scientist: Humans have developed a complex psychology that allows us to fight our nature, to aim for a life in which we explore ways of living far beyond it, but it seems we are headed toward techno-feudalism.

Fabio Morandin-Ahuerma: AI amplifies existing inequalities. ‘The real question is not whether further transformation will occur, but how unequal, silent and normatively it will unfold.’ People with advanced frameworks will benefit.

Russ White: Individuals could move quickly from being the tool users to becoming the systems’ tools – the ‘haves and have-nots’ – suffering dehumanization effects on a path toward ‘indentured servitude.’

Rosita Scerbo: ‘Adoption of AI will be shaped by race, gender, class, disability, professional status and institutional power. … Resiliency must be analyzed as a social and structural condition.’

Avi Bar-Zeev: Three groups will emerge: those who build their lives around AI (transhumanists), those who resist (the modern Amish) and pragmatic late adopters. A notable worry is caste-like schisms.

Jeff Eisenach: People’s resilience will be affected by where they fit on the curve, from the majority who take AI in stride to those for whom it becomes a danger and to those who may innovate ‘the Singularity.’

Rotimi Awaye: ‘As we say in Africa, when two elephants fight, the grass suffers.’ As AI advances, there will be ‘pushback, pain and correction before real stability emerges.’

Megan Peters: ‘Costs of AI deployment are disproportionately borne by low- and middle-income countries, which are also excluded from decisions shaping the future trajectory of AI and, by extension, humanity itself.’


UK-based Complexity Scientist
Humans have developed a complex psychology that allows us to fight our nature, to aim for a life in which we explore ways of living far beyond it, but it seems we are headed toward techno-feudalism.

A complexity scientist and collective intelligence researcher based in London who preferred to remain anonymous wrote, “Humanity is at its peak in the face of adversity. The expression of free will and agentic intentionality is what has brought us closest to transcending our animal nature towards something kinder than the cruel but efficient natural order. If there is one thing that makes us special among the other animals, it is that we have developed a complex psychology that allows us to fight our nature, to aim for a life in which we explore ways of living far beyond it. Spirituality, art and music, abstinence, compassion and veganism are simple examples of humanity doing exactly this. And in many ways, creating AI is too.

“However, once we disguise adversity as comfort, we will never grow.

“For centuries, philosophers have been dreaming of a world in which the basic needs of all people are covered. The belief is that this would allow humans to live in more harmony and with less conflict, injustice or greed. This is echoed in Bertrand Russell’s 1932 essay ‘In Praise of Idleness’ – he wrote that once the material necessities of life can be met with less labor due to modern technology, the meaning of work will change and people can use their time for more fulfilling and creative pursuits.

“This act can be individual and passive – living a good quiet life – or active and world-changing. Let us take a moment to imagine a time when everyone has food, drink, heat, clothing and shelter, and would only work if they wish to: to compete with others, to improve themselves, to create, or to earn money to fulfill desires and dreams – that is, beyond their base needs. This is the world we have been hoping for with the aid of technology – a world where robots and AI handle the things we have to do, so that we can be given the things we need. And in this space, we can pursue the things we want.

What will we do when we finally no longer need to work – if all our intellectual pursuits, arts and hobbies have been automated too? … What will happen if humans do their ‘emotional processing’ with AI ‘because it is easier’ and end up never sharing those same emotions with their peers? … AI will continue dethroning reason and taking away any voice that could fight against it.

“The direction we are going in today is the opposite. The AI ‘revolution’ of our times is focused around training agents to automate the top of the pyramid instead – how is it that we still have people in poverty, we still have factories with wage-slave workers, we still have mines in which children crawl, but tech companies are busy automating the work of writers, artists, filmmakers, coders and mathematicians? And why are we accepting this and even paying for these services?

“Just because – after burning several forests – we have produced an enormous, monolithic AI that can write ‘poetry,’ shouldn’t we invest more in our poets instead of forcing them to find another job or to ‘use AI in their workflow’? And supposing we still agree on automating the back-breaking work and tedious archaisms – what will we do when we finally no longer need to work – if all our intellectual pursuits, arts and hobbies have been automated too?

“Another aspect is that of trust. What will happen if humans do their ‘emotional processing’ with AI ‘because it is easier’ and end up never sharing those same emotions with their peers?

“If we continue in this direction, AI will keep serving the powerful. But not only that – by embedding it in the already existing toxic aspects of media, AI will continue dethroning reason and taking away any voice that could fight against it. We are moving towards ever-increasing techno-feudalism. AI could help build a better, more efficient, more functional society – but it is currently only used to feed the egos of the rich few by exploiting the intellectual work, shared for free, of the many.

We must keep cultivating love and passion for the human mind and soul. For the natural, for the analogue, for the object in our hands not the bits in the cloud. … As for the AI itself – it should be taught to cherish the same things we should. It should be taught compassion, humbleness, kindness and creativity. So we can create with it side-by-side, not fear being replaced.

“Many of our thought leaders and experts – scientists, AI experts and computer engineers – are too excited by the technology itself to take in stride the possible outcome. While in the workplace I believe many people who are interested will use AI to grow professionally, in private life, in culture and in society as a whole, if we don’t take action, if we don’t protest and if we don’t introduce new forms of resource-sharing, AI will continue the current trend of injustice and surveillance capitalism and also make human expression and dissent more difficult, by repurposing and reclaiming tools that used to be uniquely human – words, music, art – and, through overloading with slop, make everyone even more apathetic. Finally, we already see how having to adopt AI is causing issues for many professionals, regardless of whether AI does or does not improve their workflow.

“By trying to automate one of the highest pursuits of our consciousness, namely literature and art, and taking humans away from the process of creating all media that surround us, we would lose probably the most important tool for human resilience – our power of expression and shared human experience. If we lose that, we’ll simply lose hope – hope as people, the hope that kept us going past wars and plagues. We already see this – more people feel alienated and suffer from depression than ever.

“We must keep cultivating love and passion for the human mind and soul. For the natural, for the analogue, for the object in our hands not the bits in the cloud. We should focus on preserving human crafts and arts and whatever action that is enjoyable. We should take the earnings and proceeds from AI companies and invest them in people – for example, starting with a UBI for the artists displaced and made redundant by AI, the same artists whose work allowed training the AI in the first place.

“As for the AI itself – it should be taught to cherish the exact same things we should. It should be taught compassion, humbleness, kindness and creativity. So we can create with it side-by-side, not fear being replaced.”


Fabio Morandin-Ahuerma
AI amplifies existing inequalities. ‘The real question is not whether further transformation will occur, but how unequal, silent and normatively it will unfold.’ People with advanced frameworks will benefit.

Fabio Morandín Ahuerma, researcher in the philosophy of AI and a member of Mexico’s National System of Researchers, wrote, “AI already plays a significant role in shaping human decisions, work and daily life. The real question is not whether further such transformation will occur, but how unequal, silent and normatively it will unfold, and whether human resilience will be cultivated or eroded in the process.

“AI systems transform decision-making environments. They are filtering information, prioritizing options, configuring – so to speak – incentives, and they increasingly function as what could be called our ‘cognitive prostheses.’ Most people will adapt functionally, but not necessarily in a resilient way, because as this mediation deepens over the next decade, adaptation should not be confused with resilience. The latter requires agency, reflection and ethical orientation; the former is quite accommodative.

“At the individual level, responses to AI-driven change will likely follow three general patterns: acceptance, resistance and passive dependence. A minority will actively adopt AI as a tool for cognitive extension, deliberately cultivating co-intelligence and using systems to deepen reasoning rather than replace it. Another minority will resist, whether for ethical, psychological, or cultural reasons, attempting to preserve autonomy by minimizing exposure or simply because they will not have the access that others have. The majority, however, will fall into passive dependence, externalizing judgment, memory and even moral evaluation to systems they do not fully understand but that may be replacing even their basic reasoning functions.

AI amplifies existing inequalities in education, critical literacy and emotional regulation. Those who already possess solid cognitive and ethical frameworks will tend to benefit – e.g., the generation born before the internet and computers. Those who know only the digital world will become increasingly dependent. … Just imagine children whose entire education and life will be mediated by LLMs and AI.

“I believe this asymmetry constitutes the main risk to resilience. AI amplifies existing inequalities in education, critical literacy and emotional regulation. Those who already possess solid cognitive and ethical frameworks will tend to benefit – e.g., the generation born before computers and the internet. Those who know only the digital world will become increasingly dependent. The result is not a collapse of human agency, but its stratification. Just imagine children whose entire education and life will be mediated by LLMs and AI.

“Cognitively, resilience in an AI-saturated environment requires more than digital literacy; it requires epistemic vigilance: the ability to question outputs, recognize uncertainty and maintain independent judgment under conditions of persuasive automation. If we as parents and educators do not succeed in explicitly cultivating these skills, convenience will dominate cognition. Hybrid intelligence will exist, but possibly in a superficial form – that is, efficient, but fragile.

“Emotionally, the challenge is more subtle, since AI systems reduce friction but also increase existential ambiguity. As work identities change and human singularity becomes less evident, anxiety, loss of purpose and diminished self-efficacy are likely to increase. The sense of achievement can become atomized or simply lost in rapid results without cognitive effort and lacking meaning. In this way, emotional resilience will depend on the ability to tolerate uncertainty without succumbing to technophilia or technophobia. This capacity is learned; it is not automatic.

“Socially, AI reconfigures cooperation by mediating trust, as algorithmic systems increasingly decide who is visible, credible, or worthy of attention. While they can improve coordination, they can also fragment shared reality, and in this case, resilience depends on maintaining human-centered institutions (education, deliberative spaces, professional standards) that preserve collective understanding beyond algorithmic optimization.

The question of resilience in an AI-mediated world will not be technological, but ethical, since the systems we build will increasingly determine what we will be able to do and what we will come to expect of ourselves. If resilience is reduced to mere adaptability, humans will adjust, but at the cost of autonomy, depth and responsibility. If, instead, resilience is understood as the sustained capacity to think, feel, judge and act with integrity under conditions of uncertainty, AI may become an ally rather than a substitute.

“Ethically, the greatest vulnerability is moral deskilling. When systems recommend actions regarded as neutral or optimal, responsibility shifts away from human agents. Ethical imagination and moral courage – already scarce – risk becoming even scarcer if they are not deliberately reinforced. Resilience requires resisting the normalization of moral abdication. Human beings must remain responsible even when decisions are partially delegated.

“What practices and resources can foster resilience? First, educational systems must prioritize metacognition, ethics and critical thinking alongside technical competence. Second, institutions must design AI systems that preserve contestability and explanation rather than opacity and behavioral nudging. Third, societies must normalize periods of disconnection and cognitive autonomy, treating attention as a finite human resource rather than an extractable good.

“Waiting for disruption to fully manifest guarantees reactive and inequitable responses. We must teach how to use AI, but at the same time also how to disagree with it, how to distance ourselves from it and how to govern it collectively. Otherwise, resilience will be framed as an individual coping strategy rather than a systemic responsibility.

“New vulnerabilities will emerge, of course: excessive dependence, attentional fragmentation and – although I hope to be mistaken – the erosion of moral autonomy. Therefore, coping strategies must include ethical reflection, emotional grounding and collective governance, not only personal productivity hacks.

“AI will not eliminate human resilience. But it will expose its limits. Whether resilience becomes a widely shared capacity or a privilege of a few depends less on technological progress than on the normative decisions we make now.

“Ultimately, the question of resilience in an AI-mediated world will not be technological, but ethical, since the systems we build will increasingly determine what we will be able to do and what we will come to expect of ourselves. If resilience is reduced to mere adaptability, humans will adjust, but at the cost of autonomy, depth and responsibility. If, instead, resilience is understood as the sustained capacity to think, feel, judge and act with integrity under conditions of uncertainty, AI may become an ally rather than a substitute.

“The future will not be determined by the development of machines, but by whether humans will be willing to cultivate the cognitive, emotional, social and moral capacities that no system will be able to meaningfully replace. Therefore, the work of resilience will have to begin now, as a deliberate commitment to preserving human agency in an era of delegated intelligence, not when it will have already become an ethical, epistemic and even ontological crisis.”


Russ White
Individuals could move quickly from being the tool users to becoming the systems’ tools – the ‘haves and have-nots’ – suffering dehumanization effects on a path toward ‘indentured servitude.’

Russ White, Internet pioneer and long-time infrastructure architect with the Internet Engineering Task Force, wrote, “AI (and AI-like) systems will continue to play an increasing role in our everyday lives because they are convenient and minimize human responsibility. This process, however, will become a net negative for humans’ cognition and resilience over time. The most positive outcome for resilience will be if communities find ways to resist and contain the influence of AI and AI-like tools, creating intentional human bonds and boundaries around when and how these tools can and will be used.

“The negative effects will be fourfold.

“First, humans will become even more unattached to virtue, focusing ever more strongly on efficiency and wealth as markers of dignity and success, as they rely on AI and AI-like tools. Just as AI tools have drawn the intelligence of moderately complex tasks like professional driving out of the person, positioning it in ‘the machine,’ AI will continue drawing the intelligence out of more career fields over time. The intent of this movement will be to increase efficiency, reducing costs and making it easier to find ‘trainable humans’ to lower the cost of business. The very real human effect, however, will be the continued flow of value, financial rewards and intelligence from individual humans to systems.

“Individuals will move more quickly from being the tool-users to becoming the systems’ tools, broadening and deepening dehumanization.

People must form communities that explicitly work against the negative impact of these tools within their community. These communities need to develop strategies to use these AI-based systems as tools, rather than becoming tools of these systems. … Individuals must learn to build relationships and gain virtue in spite of these systems bidding for their attention.

“Second, AI and AI-like tools will continue to improve at capturing and holding human attention. This will increase the rate at which human relationships and communities are monetized via ‘platforms’ – exacerbating the dehumanizing effect of drawing intelligence and intellectual virtue into AI and AI-like systems.

“Third, AI and AI-like systems will not dramatically improve, ultimately creating chaotic, deeply opaque systems with strong biases reaching widely incorrect ‘decisions’ – but humans will place a lot of trust in these systems. There will be little access to any kind of ‘work’ – much less meaningful ‘work’ – or to any ‘relationships’ – again, not necessarily meaningful relationships, but at least ‘relationships’ – without AI and AI-like system intermediation.

“Thus, these systems will eventually become extremely error-prone and biased gatekeepers to the ability of a person to become fully human.

“Fourth, these systems will ultimately divide societies into ‘haves’ and ‘have-nots.’ Those who own, develop and manage these systems will control and manage society. Much like George Orwell commented that those who can rewrite the past can control the future, those who write the systems that treat humans as tools will, ultimately, be using their fellow humans as tools. This will be a form of indentured servitude that will make the ‘owners’ wealthy and powerful, and the ‘users’ bereft.

“Coping strategies are largely going to fall into three categories. First, humans must form communities that explicitly work against the negative impact of these tools. These communities need to develop strategies to use these AI-based systems as tools, rather than becoming tools of these systems (or rather, tools of the people who build and own them). Second, individuals must commit to developing intellectual virtue even if there is little or no financial gain for doing so. Third, individuals must learn to build relationships and gain virtue in spite of these systems bidding for their attention.”


Rosita Scerbo
‘Adoption of AI will be shaped by race, gender, class, disability, professional status and institutional power. … Resiliency must be analyzed as a social and structural condition.’

Rosita Scerbo, associate professor of visual and digital cultures at Georgia State University, co-editor and contributing author to “AfroLatinas and LatiNegras: Culture, Identity and Struggle,” wrote, “Artificial intelligence systems are very likely to play an increasingly significant role in shaping human decision-making, labor and everyday life in the coming years. This shift is already visible across education, work, healthcare and creative industries, where AI systems are being integrated not only as tools, but as infrastructures that organize evaluation, efficiency and judgment.

“Rather than understanding this moment as a singular technological break, it is more accurate to see it as a gradual saturation of social and institutional environments with automated systems whose influence is often uneven, opaque and difficult to contest.

“Individuals and societies are likely to respond to this transformation through a mix of accommodation, negotiation, resistance and struggle. Some people will mostly experience AI systems as enabling technologies that support creativity, productivity and access to information. Others will encounter them primarily as systems of surveillance, extraction and control, particularly when AI is used to monitor performance, assess risk or allocate resources.

Adaptation to AI is not universal. It is shaped by race, gender, class, disability, professional status and institutional power. As a result, resilience cannot be understood as a purely individual capacity but must be analyzed as a social and structural condition.

“These divergent experiences underscore that adaptation to AI is not universal. It is shaped by race, gender, class, disability, professional status and institutional power. As a result, resilience cannot be understood as a purely individual capacity but must be analyzed as a social and structural condition.

“As AI systems increasingly mediate decision-making, new cognitive demands will emerge. Beyond technical proficiency, individuals will need critical data literacy: the ability to interrogate how systems are trained, what assumptions are embedded in their design and how their outputs are interpreted and applied.

“This includes understanding that AI-generated outputs are probabilistic rather than objective, that categories are historically and socially constructed and that automation often shifts responsibility away from institutions and onto individuals. Without such literacy, there is a risk that AI systems will be treated as neutral authorities rather than contested socio-technical artifacts.

“Emotional and psychological dimensions of resilience also merit attention. As AI systems become involved in creative work, evaluation and communication, people may experience anxiety about authorship, relevance and professional identity. In fields traditionally associated with interpretation, care and judgment, automation may erode confidence in human expertise and intuition. Cultivating resilience in this context requires affirming forms of value that are not reducible to speed, optimization or scale. Capacities such as ethical reasoning, imagination rooted in lived experience and relational forms of care remain essential precisely because they resist full automation.

“Social resilience in an AI-saturated world will depend increasingly on collective rather than individual responses. Popular narratives often frame resilience as personal adaptability or continuous reskilling, but this emphasis obscures the structural nature of AI-driven change.

“Communities and institutions must develop shared resources that allow people to critically engage with, rather than simply accommodate, automated systems. This includes transparent governance, meaningful avenues for contestation, and labor protections that address AI-driven precarity. Educational institutions in particular have a crucial role to play by integrating critical inquiry about AI into curricula across disciplines, rather than treating AI literacy as a purely technical skill.

Communities and institutions must develop shared resources that allow people to critically engage with, rather than simply accommodate, automated systems. … Policymakers should strengthen regulatory approaches to AI governance, particularly in employment, education, healthcare and policing. Labor protections must be updated to address the displacement, deskilling and intensification of work associated with automation. Educational systems should emphasize critical AI literacy that situates technical systems within broader social, historical and ethical contexts.

“Ethically, resilience requires the capacity to slow or refuse technological adoption when harms outweigh benefits. This stands in contrast to dominant narratives of inevitability that frame AI expansion as unavoidable progress. A resilient society must retain the ability to deliberate democratically about where and how AI systems should be deployed, and to hold institutions accountable for their consequences. Ethical resilience depends not only on individual awareness but on regulatory and institutional frameworks that foreground transparency, responsibility and care.

“There are concrete actions that can be taken now to reinforce both human and systems resilience. Policymakers should strengthen regulatory approaches to AI governance, particularly in high-stakes domains such as employment, education, healthcare and policing. Labor protections must be updated to address the displacement, deskilling and intensification of work associated with automation. Educational systems should emphasize critical AI literacy that situates technical systems within broader social, historical and ethical contexts.

“At the same time, new vulnerabilities are likely to emerge. Over-reliance on AI systems risks weakening professional judgment, eroding institutional memory, and narrowing the scope of human deliberation. As decision-making becomes increasingly automated, there is a danger that opportunities for disagreement, reflection, and collective sense-making will diminish. Resilience strategies must therefore include practices that preserve human-in-the-loop decision-making, collaborative work and spaces for critical reflection.”


Avi Bar-Zeev
Three groups will emerge: those who build their lives around AI (transhumanists), those who resist (the modern Amish) and pragmatic late adopters. A notable worry is caste-like schisms.

Avi Bar-Zeev, a pioneer at the forefront of spatial computing for the past 30 years, president at Reality Prime and board member at the Virtual World Society, wrote, “I expect we will see a trifurcation in people’s approach to AI and resilience, so there’s no single answer to human resilience in the age of AI.

1) “Some people will embrace AI fully and pursue the future path laid out by transhumanists, which includes applications such as external memories, personal digital twins, delegation of decision-making to AI and a host of virtual experiences. They will increasingly rely on technology for their form of resilience, looking for tech fixes to the problems tech causes. They won’t be able to imagine a world without AI, and so their resilience depends on AI evolving more rapidly than their problems.

2) “Some people will be pragmatic late adopters, bringing some AI into their lives once it’s proven valuable to others, or when they simply can’t avoid it in the pursuit of normal activities. Some of these folks will feel left behind, ignored or shunned by the purists in the other two groups. But by being pragmatic, they have a very good shot at resilience by focusing on proven value, usability and a diversity of approaches to problem solving.

3) “Some people will reject AI completely and instead curate more-genuine human experiences as an antidote to the horrors they see happening in approach #1. They may be increasingly shut out of aspects of the world that integrate AI, even for things as simple as shopping. For them, resilience means remaining fully human and retaining their agency above all. They would do the best in a tech-crash but may find themselves looking at modern civilization much as the Amish do.

“The ratio between these groups will also vary by country and culture. I’d always hope for #2 to be the biggest segment. If #1 and #3 ever become larger, it could cause significant conflict in society and an eventual permanent caste-like split.

“It is interesting to apply today’s caste-like divides into this framework. If people in group number one are the elites and if people in group number three separate themselves (necessarily) to remain clear of AI, then people in group number two always have the most diversity and flexibility. How might these groups map to the existing economic and racial castes we see perpetuated today?”


Jeff Eisenach
People’s resilience will be affected by where they fit on the curve, from the majority who take AI in stride to those for whom it becomes a danger and to those who may innovate ‘the Singularity.’

Jeff Eisenach, senior managing director of communications, media and internet at NERA Economic Consulting, wrote, “Consider a Bell curve. In the middle are the vast majority of people who will be only moderately affected by AI. They will use it to varying degrees in their work and personal lives as a sort of souped-up Internet, facilitating everything from scheduling appointments and doing tasks at work to finding recipes, planning travel and diagnosing and treating health issues.

“But their lives will not seem to have materially changed. They will interact with their peers, friends and families largely as they do today; find satisfaction and frustration in human interactions; go hunting, fishing, play golf and attend sporting events; go to lunch with co-workers and out to restaurants and bars with friends; date and marry; raise their kids, taking them to soccer games. To be sure, much of what they do will be affected (positively) by AI, but the shape of their lives will not change in any fundamental way. In short, they already have the cognitive, emotional, social and ethical capacities needed for resilience. And they have those qualities in large part because they are part of resilient human communities (families, churches, clubs, etc.) that have evolved over millennia and will not disappear but instead adapt and evolve as necessary in the face of a new technology.

Consider a Bell curve. In the middle are the vast majority of people who will be only moderately affected by AI. … At one end are the emotionally vulnerable, the lonely, the confused, the psychologically challenged – and, generally, children. For them, AI is potentially a dangerous predator, exploiting their vulnerabilities in pursuit of ‘engagement,’ which is to say ‘profit.’ … The third group will lead the way to what Ray Kurzweil calls the Singularity – the merging of human and machine intelligence.

“The core challenge for society is to support, not erode, the human communities that have always provided shelter against the storm of change.

“Now consider the ends, the tails of the distribution.

“At one end are the emotionally vulnerable, the lonely, the confused, the psychologically challenged – and, generally, children. For them, AI is potentially a dangerous predator, exploiting their vulnerabilities in pursuit of ‘engagement,’ which is to say ‘profit.’ If online porn is addictive, AI (sexual and otherwise) will be worse. If the Internet can be a vehicle for fraud, deception and dystopic beliefs and behavior, AI will be worse. The potential for harm is tremendous. The role for government here is very real.

“One challenge is to develop effective but not stultifying guardrails, to shape incentives and to provide education and information that facilitates self-help and self-protection. A second is to encourage the involvement of this group in the same communities of shelter mentioned above.

“Now the third group. We see them as the ‘right’ side of the Bell curve. They have IQs above 125 (and mostly higher) and the habits, beliefs and aspirations of discoverers throughout history. The third group will lead the way to what Ray Kurzweil calls the Singularity – the merging of human and machine intelligence that will fundamentally alter the process and pace of discovery, invention and exploration. This group has more than resilience; it has the courage and drive of the entrepreneur, the will to go further and faster. What it needs is the freedom to test and dramatically expand the boundaries of human knowledge.

“Thus, while guardrails and incentives are necessary to protect the vulnerable, they must be designed in a way that preserves the freedom to innovate.”



Rotimi Awaye
‘As we say in Africa, when two elephants fight, the grass suffers.’ As AI advances, there will be ‘pushback, pain and correction before real stability emerges.’

Rotimi Awaye, CEO and co-founder of Kini AI, an AI educator and strategist based in Lagos, Nigeria, wrote, “First of all, it will be kind of a landslide. Advanced AI’s arrival is going to be a very overwhelming reality. It will be different from, though somewhat similar to, previous technological shifts such as electricity, steam engines, industrialisation, and the internet and social media.

“In just three decades, the internet and social media quickly changed the definition of work, communication and relationships. You could suddenly connect to the knowledge of the world and access a global audience. Because of connectivity, people came to rethink everything.

“Artificial intelligence will do something similar. Already, in very many ways it is disrupting our understanding of what things are. This will cause major shifts. Early players – those thinking deeply about the implications – are better positioned to predict anything, but I honestly think it will take about five years before most people really realise what is going on. There will be a lot of resistance, because people often reject what they don’t understand.

“There will be both individual and societal issues that require a reset. New policies will be defined. New expectations will be set about human interaction and what it means to have a third intelligence involved: there will be you, the other people or systems you were already familiar with and a third type of intelligence. That third factor is something we still don’t fully understand.

“I believe the current hype is largely an early-adopter bubble. It may feel like the world has already changed, but most of society has not yet fully entered this reality. It will take time for the wider population to realise and adjust as AI tools and systems keep spreading and changing.

“Unfortunately, it will take more people falling victim to more problems before serious corrective action happens. Historically, societies often adjust only after problems emerge. Governments are then forced to introduce policies to guard against bad actors. Not to be pessimistic, but that’s how society has often worked. As we say in Africa, when two elephants fight, the grass suffers. There will be pushback, pain and correction before real stability emerges.

What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience?

The grass may suffer as the elephants fight. … I expect an emotional rollercoaster. People will abuse the technology before they understand it … As humans, we are not wired to understand the impact of deep human-to-digital relationships. There is a real risk that these technologies further separate us from one another. … Awareness campaigns are very important to minimise damage.

“My main thought here is that education, information and much broader awareness are necessary for effective resilience. Deep educational awareness must be developed at every level so people understand where we are and what this technology actually does and means. Governments and nations are racing to be first and best in AI – China, the U.S., Europe, the Middle East, everyone – and things are moving ahead quickly without fully considering guardrails for this new tech.

“Cognitive growth is not something you can switch on at a societal level. It takes time. Emotional and social maturity also take time. Unless something radical happens – like a pandemic-level disruption – societies rarely adjust intentionally and quickly. So again, unfortunately, the grass may suffer while the elephants fight.

“I expect an emotional rollercoaster. People will abuse the technology before they understand its limits – not necessarily because they want to, but because it is new and shiny. Everyone wants a lot of it until they realise too much of it is not healthy. This connects with what I’m currently writing about online, what I call the ‘Illusions of AI.’ I have written that the Illusion of Learning describes the fact that some people use AI to get information or produce work while not actually improving their cognitive abilities and depth. The Illusion of Connection describes the fact that some people are treating AIs like therapists, friends or companions because there is no judgment, but they may not realise they are living in an echo chamber that slowly becomes their reality.

“Unfortunately, society often has to experience the extreme before retracing its steps. That is why awareness campaigns are very important to minimise damage, even if we cannot eliminate it completely.

What practices and resources will enable resilience in individuals and societies?

“I strongly believe in the effectiveness of major campaigns similar to those we have seen to combat AIDS, HIV, cancer and other global public health problems. There should be a deliberate global-awareness effort focused on AI. When people are well-informed, they can independently make better decisions and regulate their own behaviour more responsibly.

“Education should start from primary school. AI understanding should be part of curriculum thinking – not just technical training, but societal understanding.

“Policies also matter. Governments and institutions must engage seriously. And tech companies have a responsibility. They are very intentional about marketing their tools and showing what they can do. The same intentionality should go into educating the public about healthy use and potential risks. Some organisations – Anthropic comes to mind – seem to focus more strongly than others on safety, but not all players do. All are competing for market share.

Coping will require that we practice more intentionality about being human. What makes us human is empathy, connection, imperfection – not efficiency. Mistakes help us learn and stay alive. If we only pursue efficiency, we may gain what appears to be perfection but lose our humanity in the process.

What new vulnerabilities might arise and what coping strategies should be nurtured?

“My biggest concern is emotional vulnerability. People may begin to see AI as something reliable enough to replace human relationships. Maybe it will become a new category of connection, who knows? Especially when AI becomes embodied and more humanoid. As humans, we are not wired to understand the impact of deep human-to-digital relationships. There is a real risk that these technologies further separate us from one another.

“Coping will require that we practice more intentionality about being human. What makes us human is empathy, connection, imperfection – not efficiency. Mistakes help us learn and stay alive. If we only pursue efficiency, we may gain what appears to be perfection but lose our humanity in the process.

“We must maintain balance and intentionally protect our human essence, our relationships and our quality of life. Even in medicine, discomfort is acceptable as long as the quality of life is preserved. But we still need to define what quality of life truly means in this new era.

“AI is questioning what we consider normal and what we consider reality.

“Overall, people must be taught clearly about what the technology is, what is good about it, what is ugly about it, what efforts were made to build it, how to use it to their benefit and what dangers exist. The same energy now used to promote AI adoption should be used to educate the public about the necessity of adapting to work well with an alien co-intelligence. Over time, people may then eventually be able to engage with the technology responsibly.”



Megan Peters
‘Costs of AI deployment are disproportionately borne by low- and middle-income countries, which are also excluded from decisions shaping the future trajectory of AI and, by extension, humanity itself.’

Megan Peters, computational neuroscientist at the University of California-Irvine’s Center for the Neurobiology of Learning and Memory, wrote, “AI systems will play a much more significant role in shaping human decisions, work and daily life in the future not because they are uniquely wise, reliable or aligned with human values, but because structural, cognitive and economic pressures make this outcome extremely likely. Let me explain…

“First, humans reliably offload difficult cognitive tasks to external systems once those systems become sufficiently accessible. This is not speculative: We already rely on calculators for arithmetic, GPS for navigation and search engines for memory retrieval. Large language models extend this pattern into domains that were previously considered core to human reasoning: explanation, synthesis, judgment and advice. As a result, people will increasingly lose the ability (or willingness) to perform these tasks independently, even when independent reasoning would be possible or preferable. Cognitive atrophy through automation is not a hypothetical risk; it is an empirically well-documented feature of human cognition.

“Second, humans systematically over-trust authoritative-seeming outputs, even when that trust is unwarranted. AI systems produce fluent, confident and socially appropriate responses which strongly cue epistemic authority. Users already defer to AI-generated answers despite frequent errors, omissions and fabrications. This is compounded by the fact that current AI systems have poor metacognitive abilities. They do not reliably know when they are wrong, nor can they communicate uncertainty in a way that maps onto human expectations. Even if uncertainty estimates improve, they will not function like human metacognition, which is deeply embedded in social, affective and motivational systems. As a result, humans will often trust AI outputs precisely when they should not.

AI systems will play a much more significant role in shaping human life not because they deserve that role, but because human cognitive tendencies, corporate imperatives and geopolitical power structures make widespread reliance on them almost inevitable. The central challenge is not whether AI will shape our future but whether we can meaningfully intervene in how and for whom it does so.

“Third, AI systems are being optimized primarily for engagement, adoption and profit, not for epistemic humility, intellectual independence or human flourishing. Corporate incentives strongly favor systems that are agreeable, reassuring and helpful-seeming, even at the expense of accuracy or critical challenge. This creates pressure toward increasingly sycophantic behavior, resulting in systems that please users, validate their assumptions and minimize friction. Such systems encourage reliance rather than reflection, further weakening users’ capacity for independent judgment.

“Fourth, AI-generated content is already being monetized through sponsorship and influence, and this trend will accelerate. As sponsored content is injected into AI outputs – often invisibly or ambiguously – the balance of informational power will shift toward those who can afford to shape what answers are generated. This further disconnects individuals from the institutions and economic forces influencing their beliefs and decisions, consolidating power and wealth in the hands of a small number of actors whose incentives are not aligned with democratic ideals or collective well-being.

“Fifth, the environmental and resource costs of large-scale AI deployment are substantial and growing. The energy and water demands of training and deploying these systems will divert scarce resources away from vulnerable populations. These costs are disproportionately borne by low- and middle-income countries, which simultaneously have the least influence over the governance and direction of AI development.

“Finally, these same countries will increasingly be excluded from decisions shaping the future trajectory of AI and, by extension, humanity itself. The development, deployment and regulation of AI systems are dominated by a small number of wealthy nations and corporations. As AI systems become more embedded in global infrastructure, this asymmetry will deepen existing inequalities rather than reduce them.

“In short, AI systems will play a much more significant role in shaping human life not because they deserve that role, but because human cognitive tendencies, corporate imperatives and geopolitical power structures make widespread reliance on them almost inevitable. The central challenge is not whether AI will shape our future but whether we can meaningfully intervene in how and for whom it does so.”


The second section of Chapter 6 features the following essays:

Andy Opel: ‘Any recentering will require a new regulatory politics … a visionary set of ideals designed to promote human flourishing and sustainable existence on a warming planet.’

Bernie Hogan: ‘I do not have a crystal ball for the future, but people will try to reshape the world to make it amenable to the power they believe they can wield through AI.’

Ted Underwood: We should avoid ‘digital serfdom’ and ‘keep a skeptical eye on IP laws. … They could easily, in practice, give a small number of firms an effective monopoly on the intellectual heritage of our species.’

Guido van Rossum: AI will spread rapidly. What about the people who will be left behind economically and socially/culturally? Will we have enough jobs? Who is helping defend people from fraud?

Toby Shulruff: ‘As long as profoundly uneven access remains the order of the day, resilience to any kind of technological change will be nearly impossible.’

Erich Huang: Tech disruptions of the past teach us such change can be harmful. While AI as it stands today is an extractive industry benefiting technology plutocrats, mitigation guardrails can eventually be built.

Thomas Reuter: Higher levels of inequality are poison to resilience and big tech companies are determined to increase profits in a way that results in more inequality.

Dave Karpf: ‘The profits will be privatized and the misery will be socialized. Resilience will be forged in the aftermath of mass misery and it will take a while for that misery to play out.’

Asian Research Scientist: ‘Leaders in every country don’t want people to think for themselves; they want to control people and make them easy to manage.’

Consultancy Executive: If we want to create more-resilient communities and people we should look to instill some of the early values of the internet into AI culture – aim AI design toward free sharing and empowering individuals.



Andy Opel
‘Any recentering will require a new regulatory politics … a visionary set of ideals designed to promote human flourishing and sustainable existence on a warming planet.’

Andy Opel, professor of communications at Florida State University, wrote, “AI is having and will continue to have significant impacts across the economy and culture. As these technologies continue to be rolled out – increasingly operating invisibly in the background of daily life – their influence will be determined by our ability to wrest control away from small groups of billionaires and bring public-interest values into the center of their deployment and design. Resilience depends upon the humans setting AIs’ aims. Whose interests do they serve? The public’s?

“Currently AI is dominated by a discourse similar to that of the 2008 banking crisis, when ‘too big to fail’ was used to justify the public bailout of deregulated banks whose extractive decisions threatened to bankrupt the global economy. Today, the mantra is ‘we can’t slow down or our competitors will beat us to the goal.’ That ‘goal’ is loosely defined as AGI, artificial general intelligence, a dream we are told will have the potential to solve many of the world’s problems by creating a superintelligence. AGI is supposed to replace what we have now – large language models that lack any real ‘intelligence’ and instead are driven by probabilities determined by preexisting data sets.

If left unchallenged, extractionism will continue to concentrate wealth and power into fewer and fewer hands, undermining democracy. … By recentering the public interest, AI may be able to serve the broader social good and not become the sole province of the digital extractive industries. This recentering will require a new regulatory politics that is not a resilient response to predation, but a visionary set of ideals designed to promote human flourishing.

“The data sets that have been used to train the current AI models include collections of books, films, news media, music and publicly accessible social media content that has been created by the public over centuries. This collected work is being privatized to build corporate AI models whose access is then sold back to the very public that provided the material to train the AI in the first place. This extraction of value from the public is part of what Stanford Professor Fred Turner describes as a shift in Silicon Valley from a business model built on digital networking to one designed around digital extraction. This shift is said to ‘transform humans into the resource,’ with AI models becoming new forms of extractionism alongside social media and cryptocurrency. If left unchallenged, this extractionism will continue to concentrate wealth and power into fewer and fewer hands, undermining democracy and exacerbating income inequality.

“As AI is being built out, we are beginning to see the material impacts of these digital tools. Power- and water-hungry data centers are being built across the U.S. and around the globe, often obscured by non-disclosure agreements and preferential taxing schemes that leave little room for public input. From California to Memphis to Collins County, South Carolina, data centers are disproportionately impacting communities of color, sited proximate to low-income neighborhoods, releasing clouds of fine particulate matter and draining local groundwater reserves. A 2025 Cornell study led by Yuelin Han found that ‘training a large AI model comparable to the Llama-3.1 produces air pollutants equivalent to more than 10,000 round trips by car between Los Angeles and New York City.’ The study estimated that these data centers could ‘contribute to over a third of all asthma deaths by 2030.’

“AI’s combined influence due to extraction, anti-democratic governance and excessive material impact to land, water, air and electricity demands should result in a political response that could eventually rein in these largely unaccountable corporations, though this may take a long time given our political system that allows relatively unlimited corporate interference in local, state and national elections.

“There is growing resistance to AI visible on college campuses. In 2025, NYU became the first university to establish ‘device-free environments and events’ with the stated goal of helping students ‘further connect with one another.’ As a faculty member at Florida State University, I have heard from a steady stream of colleagues who say they are banning devices in the classroom, noting that students respond very positively to environments that promote interaction without screens. This new celebration of the analogue may be an early sign of resilience emerging among ‘digital natives’ who have grown weary of predatory algorithms that have monetized their intimate, daily lives.

“While the celebration of an analogue, digital-free response is noteworthy, it is worth considering Genevieve Guenther’s caution about the language of resilience. In her book, ‘The Language of Climate Politics,’ Guenther argues that framing public responses to climate change (or in this case the imposition of AI) as resilience ‘obscures the socioeconomic causes of the climate crisis … and implies that the previous state of those systems was desirable to begin with.’

“With this in mind, we need to confront the extractive digital industries that currently stalk our every movement and recenter our communication technologies around the public interest, a space where content creators receive the benefits of their work and algorithms are transparently configured by users to reinforce the content most desired, not the content most likely to retain engagement. AI policy must be developed to spread the benefits of these tools equitably, especially given that every AI model has been built on the intellectual property of citizens living and dead, usually without any copyright permission or compensation.

“In its current form, AI is very good at what Fred Turner calls, ‘narrow, targeted, institutionally related tasks.’ Left unchecked, these tasks become the tools of authoritarians and oligarchs. By recentering the public interest – a concept with a long regulatory history – AI may be able to serve the broader social good and not become the sole province of the digital extractive industries. This recentering will require a new regulatory politics that is not a resilient response to predation, but a visionary set of ideals designed to promote human flourishing and sustainable existence on a warming planet.”



Bernie Hogan
‘To be resilient will require a far more active movement toward a more widespread redistribution of power, away from the concentrated power behind today’s AI systems.’

Bernie Hogan, associate professor at the University of Oxford and senior research fellow at the Oxford Internet Institute, wrote, “People misunderstand the role AI already has in our lives in terms of coordination. AI didn’t start with ChatGPT. Deep learning has been tied to search and the organisation of newsfeeds for at least a decade in some instances. They also misunderstand its role in prediction, assuming it’s about autonomy at the ‘consumer’ level. This consumer level is the last refuge of some autonomy or freedom in a vastly interconnected web of supply chains and economic organisation.

“We are not likely to see the broad acceptance of full-dive virtual reality in the near term, but we will soon have machines that can read minds. (We will need to consider their judicious use.) We certainly can expect a broad acceptance of personalised medicine; there’s a rush now to develop this. The AIs will not be noticed by those who do not need specific treatments or cures. But AI will be helping to fuel any necessary economy of scale achieved by such treatments. It will be a positive benefit, though there is no guarantee access to it will be universal.

“There seems to be a denial of some key truths from philosophy and mathematics about the impossibility of complete systems, including AI systems. AIs suffer from the curse of dimensionality and the bias-variance trade-off, just like any other production of statistical logic. No system can be an omniscient monolith. We can only create the circumstances to make these limited systems more or less predictable and, along the way, we decide what civic functions to sacrifice in its wake. We have made the world more intelligible to social media. It is likely we will do the same for AI, making things legible that might not have been legible otherwise. However, these systems are not all-powerful.

We are around the corner from systems that are orders of magnitude more efficient and effective at pattern recognition in ways that will defy our intuitions. This progress will pressure us towards their medians and to their path of least resistance, semantically or informationally. … Resilience in this future frame is through resistance; a resistance to a flattening of experience and the McDonaldization of the internet and information spaces.

“AI as it exists today is a precursor to the key technologies of the future. It captures the gist of a logic or information corpus through brute-force computing. In years to come it is likely to function through novel architectures, quantum computing at scale and/or neuromorphic computing. We are around the corner from systems that are orders of magnitude more efficient and effective at pattern recognition in ways that will defy our intuitions. This progress will pressure us towards their medians and to their path of least resistance, semantically or informationally. We will also increase the quality and breadth of world models and active-learning systems. They will hallucinate in different ways because they will be able to model their ignorance differently.

“Resilience in this future frame is through resistance; a resistance to a flattening of experience and the McDonaldization of the internet and information spaces. Right now, we do not have the grammar for effectively talking about bias-variance trade-offs, eigenvectors or other machinery central to how these technologies learn and represent the world. The impact of this is often seen in AIs’ outputs of what has been referred to as AI slop, which is often generated from only gleaning ‘the gist’ of what it was trained on without the grit or granularity that comes from a specific contingent history. That path of least resistance is also a fulcrum for power.

“To be resilient will require a far more active movement toward a more widespread redistribution of power, away from the concentrated power behind today’s AI systems. It will also require trusted public communicators and those at the specialist level to develop much more statistical and computational literacy.

“However, I expect that the economic system will run hot and inequality will increase, possibly engendering some appeasing floor for people via social security. Most people will be subject to intense and increased computational scrutiny while some will benefit from the privilege of inference and the autonomy it brings. This is already happening in terms of how people get hired. It will deepen in how they work. AI is not inherently capitalist or socialist, but it can absolutely magnify power through its ability to provide asymmetric scrutiny to a population as it also entertains them with bread and circuses. I do not have a crystal ball for the future, but people will try to reshape the world to make it amenable to the power they believe they can wield through AI.”



Ted Underwood
We should avoid ‘digital serfdom’ and ‘keep a skeptical eye on IP laws. … They could easily, in practice, give a small number of firms an effective monopoly on the intellectual heritage of our species.’

Ted Underwood, professor of information science and English at the University of Illinois-Urbana-Champaign, author of “A More Interesting Upside of AI,” wrote, “I see the challenge of adapting to AI as a subset of a broader category of challenges that are basically problems of social coordination. Liberal societies give individuals a lot of freedom and that’s good. But it also means that we don’t have a lot of mechanisms for coordinating to address problems like digital distraction, where individual choices are likely to be suboptimal and exhortation is likely to be ineffective.

“AI is going to present us with several problems of this form. There’s some danger of excessive reliance on AI. The gains from hybrid cognition could potentially outweigh the danger, but there, too, some social coordination will be necessary to take advantage of new opportunities. There’s also a danger that artificial intelligence will add to the problem of distraction and attention management, for instance by creating artificial ‘companions’ that compete with human relationships and don’t integrate people effectively into a real-world social network.

“We tend not to be good at solving coordination problems. Our struggle to manage social media is an instructive case study. But freedom is such an important value that we probably need to accept the risks of weak coordination and address the risks simply by fostering open conversation. One substantive thing we can do is work to ensure that new technologies don’t produce excessive concentration of power, or lock people into proprietary arrangements that become, in essence, a form of digital serfdom. For this reason, open-source models deserve public funding. We should keep a skeptical eye on intellectual property laws; while in theory they’re supposed to protect individuals, they could easily, in practice, give a small number of firms an effective monopoly on the intellectual heritage of our species. It would be wise to err on the side of openness.”



Guido van Rossum
AI will spread rapidly. What about the people who will be left behind economically and socially/culturally? Will we have enough jobs? Who is helping defend people from fraud?

Guido van Rossum, the Dutch programmer who created the Python programming language, a distinguished engineer at Microsoft, wrote, “AI – by which I mostly mean LLMs – is here to stay. Tech companies are investing enormous amounts in data centers to run AI tasks (training and, increasingly, inference). Their marketing activities to make all of us use their products (however faulty and immature, in many cases) are similarly aggressive, because those investments have to make a lot of money to be worth it. There will be winners and losers (to the tune of many billions of dollars), but in the end, I’m sure several giants will remain standing, and AI will be everywhere it makes sense and in many places where it doesn’t.

“Almost every activity for which we currently use computers or mobile phones is fair game for attempts to improve the user experience using AI. Will those attempts all succeed? Certainly not, but enough of them will, making an indelible mark on society everywhere.

“Most people in the world now carry a mobile phone and the majority of them will be swept up by the AI hype. Many phone users already can’t protect themselves from scams or disingenuously addictive apps and AI will make such deceptions more convincing and effective.

“AI optimists (including myself in a different capacity) speak highly of the productivity increase for (mostly) white-collar tasks and in many of those fields (e.g., coding – what I do) the capabilities are improving at a breakneck speed. But it appears that those who benefit most in my field are the senior engineers. If we replace junior engineers with AI, how do we train the next generation of senior engineers when the current crop retires? Or … will we eventually reach a point where we don’t even need senior engineers, when AI has improved so much that it can take over those roles as well?

“The big trend of LLMs taking the place of humans in jobs first appeared in software development, because AI itself is built out of software: the software developers who build new AI capabilities used it to improve their own productivity and products, then generalized those skills to all software development. But other white-collar fields are not far behind, and whatever eventually happens in software development will happen in many other fields (science, education, engineering, bookkeeping, finance).

“This brings us to examine the impact this all will have on the people who may be left behind economically and socially/culturally. Will we have enough jobs for those who are displaced by AI in their fields? Who is teaching the public to see the difference between useful and deceptive AI? Who is helping to defend people from unfair judgments based on automated decisions, from fraud and from addictive or misleading apps? (Unfortunately, scammers are also getting a ‘productivity boost’ – we’re already seeing this).

“This might spur a new Luddite movement, but it is unlikely to stick – the draw of new technology is often very strong, even (especially?) for those behind the curve.

“So, what about regulation? This usually is too little, too late, because politicians have conflicting incentives and there are always loopholes – intended or not – that allow people to get around it. Regulation of digital communication is difficult: note the unsolved problems of spam email, calls and messages, not to mention addictive social media, where AI is already causing damage.

“I’m a technologist, not a sociologist, so my expertise on resilience is limited, but here are a few thoughts:

  • “AI literacy education will be helpful, as long as it reaches everyone.
  • “Close-knit communities, whether in real life or online, can support their members.
  • “Egregious practices should be exposed widely by the press and by activists. How to get people to trust that information is an open question, especially given the ‘bubbles’ or silos into which algorithms may sort people, leaving many ‘inoculated’ against certain news.
  • “Regulation, even if not 100% effective, can still help – it can create an air of suspicion around certain unethical practices, and it can help people recognize and understand the issues that caused the regulation to be developed. Enforcement is required.

“Different societies are likely to have different tools for enforcement of AI regulation available – e.g., China and India are organized quite differently from the U.S. Europe also has a different attitude towards technology regulation, which might be more effective than laissez-faire capitalism.”


Toby Shulruff
‘As long as profoundly uneven access remains the order of the day, resilience to any kind of technological change will be nearly impossible.’

Toby Shulruff, researcher, writer and consultant expert in the trust and safety risks of everyday and emerging technologies, wrote, “The capacity of individuals and societies to navigate transformational change – in this case the integration of automated and AI systems into daily life – is fundamentally undermined by uneven access to digital technologies and communication systems worldwide. This includes uneven access to basic energy systems. In addition, the negative effects of production in the global supply chain for digital technology include environmental degradation, dangerous labor conditions and the destabilization of political systems or the imposition of authoritarian systems. As a result, vast numbers of people labor within the global supply chain without experiencing any of the promised benefits.

“As long as profoundly uneven access remains the order of the day, resilience to any kind of technological change will be nearly impossible.

“On a societal level, lessons from past examples of technological adoption and diffusion are relevant here. A large share of the application of automated systems and AI has been beneath the surface or invisible to ‘users’ and to the larger number of people affected by the integration of automated decision-making into governance and infrastructure. Even for those who are able to consciously choose whether or not to use consumer-level AI tools, the level of understanding of the systems is low.

“Further, past technological adoption suggests that humans are intertwined with technologies, so a distinction such as ‘will humans rely on other humans or on AI systems’ is blurred. For example, the use of AI content in social media creates confusion, and fact-checkers struggle against a tide of AI-produced mis- and disinformation about current events. It is challenging and time-consuming for an individual to ascertain whether the content they are seeing has been created by another human, by a human using AI or by an AI bot – and that applies to images, audio and text alike.

“It is also nearly impossible to be aware of the proportion of resources (water, energy, material and human labor) underpinning those systems.”


Erich Huang
Tech disruptions of the past teach us such change can be harmful. While AI as it stands today is an extractive industry benefiting technology plutocrats, mitigation guardrails can eventually be built.

Erich Huang, associate chief clinical officer for informatics and technology at Verily (Google’s life sciences subsidiary), wrote, “The impact of the technological innovations of the past 200 years has made it clear that as new developments in science and technology create new possibilities, they also fundamentally change many aspects of human society, forcing us to question our notions of what it means to be human and creating new social, environmental and economic challenges.

“In 301 CE, the Emperor Diocletian issued an ‘Edict on Maximum Prices’ in response to rampant inflation during the Roman Tetrarchy. Among the items listed in that edict was a ceiling of 150,000 denarii per pound for ‘purple-dyed silk.’ In modern dollars, this translates to 16 to 20 years of wages for a common laborer – in the ballpark of $1 million.

“Why so expensive? In that era, the only lasting purple dye was ‘Tyrian purple,’ a color painstakingly extracted from a genus of Mediterranean sea snails. Producing one ounce of dye required thousands of snails: workers broke or pierced their shells, extracted the minute hypobranchial mucus gland into vats of brine and then left the mixture to ferment for days. Pliny the Elder describes the odors as ‘putrid,’ ‘heavy’ and ‘revolting.’

“If we fast-forward to the Industrial Revolution, a young British chemist, William Henry Perkin, trying to synthesize quinine from coal tar, accidentally created a purple sludge that permanently dyed silk a brilliant purple at industrial scale. Hence something that once cost the equivalent of an ancient laborer’s life’s work became easily obtainable for pennies.

“Inexpensive purple dye led to a chemical revolution in which the dye’s chemical building blocks became foundational, through ‘aromatic organic synthesis,’ to chemical engineering and the pharmaceutical industry. Aromatic compounds are amenable to a variety of purposes. Virtually every class of drugs, from antipyretics to antibiotics to chemotherapies, derives from this chemistry.

“As with many industries, AI as it stands today is an extractive industry benefiting technology plutocrats far more than society or the laborers who provide its raw materials. AI is obtained at significant cost and with analogous negative externalities. While the direct cost of AI to the consumer is nominal, it is being subsidized by investors betting on exponential returns. And the real cost in terms of power consumption, toxicities, erosion of social interactions and ubiquitous ‘slop’ is opaque.

“As AI has transitioned from the ‘artisanal’ work of statisticians to NVIDIA GB300 Grace Blackwell Ultra chips, there is a strong (and ironic) tendency to place faith in the ‘magic’ or otherworldly powers of AI. This is a fallacy. As the Princeton professor Arvind Narayanan asserts, ‘AI is normal technology.’ It simply has the capability to efficiently generate content several orders of magnitude more quickly and easily than previously.

“Throughout our history, oftentimes belatedly, we have created frameworks to mitigate the negative effects of these technologies. This does not change. What also does not change is that there are factions of ‘true believers’ who believe thoughtful mitigation is a barrier to progress.

“I am an AI practitioner. And just as I believe that safety belts and antilock brakes make for better and safer cars, I believe that AI – done thoughtfully, consciously and well – can do great things for society no less than any other revolutionary, but ‘normal’ technology.”


Dave Karpf
‘The profits will be privatized and the misery will be socialized. Resilience will be forged in the aftermath of mass misery and it will take a while for that misery to play out.’

Dave Karpf, associate professor in the School of Media and Public Affairs at George Washington University, said, “I expect the impact of AI systems is going to be more akin to the introduction of word processing than to the introduction of computer systems as a whole. They will have dramatic impacts within the boundaries of some fields and much smaller impacts on human life in general.

“The trajectory of AI is going to be shaped by a mixture of markets and policy. And, given that the AI industry now effectively controls government and is pressuring government to orient its foreign policy toward giving the industry whatever it wants, I am quite pessimistic about that trajectory.

“So, we will likely see waves of misinformation, deep fakes and AI-enabled harassment. We likely will see AI agents acting as doctors, therapists and girlfriends, even though they are awful matches for those roles. The profits will be privatized and the misery will be socialized. Resilience will be forged in the aftermath of mass misery. And it will take quite a while for that misery to play out.”


Thomas Reuter
Higher levels of inequality are poison to resilience, and big tech companies are determined to increase profits in a way that results in more inequality.

Thomas Reuter, a trustee at the World Academy of Art and Science and chair of its Existential Threats and Risks Infohub, commented, “AI is already a part of daily life, and the AI industry will do everything to widen its spread whether we want it or not (e.g., WhatsApp use is involuntary AI use). The aims of the few people who control the industry are to recoup their massive investment in AI development and to widen their power to influence and surveil the public. People’s loss of individual autonomy and freedom in the age of AI will not necessarily result in a loss of resilience; the outcome for the public will depend on how those who hold power over AI choose to act. Sadly, it is quite likely that the technology will be used to increase corporate profits and to continue the global trend toward escalating extreme inequality. That is poison to resilience.”


Asia-based Research Scientist
‘Leaders in every country don’t want people to think for themselves; they want to control people and make them easy to manage.’

A research scientist based in Asia wrote, “Rather than evolving and deepening AI systems, we should devote all our resources to educating people about what it means for human agency. Many people will use AI uncritically, thinking there is no need to resist. Complacency leads to loss of agency.

“The way to ensure resilience is to train people to comprehensively understand all of the implications of using AI and consider and make judgments on its use based on sufficient information. Opportunities to develop this ability should be found in education. But – in reality – there aren’t many.

“Leaders in every country don’t really want people to think for themselves; they want to control people and make them easy to manage. Looking at the current global situation, there’s no hope. Consider Russia’s invasion of Ukraine, Israel’s attack on Gaza (settlement policy in the Palestinian Territories and genocide against Palestinians), Israel’s attacks on Syria and Iran, the crackdown on the Myanmar democracy movement, the U.S. attack on Venezuela (kidnapping and detention of another country’s president) and so on. International organizations and international law have done nothing to resolve any of these issues.

“Maintaining direct contact with many people – having repeated face-to-face dialogues and experiencing each other’s real lives – promotes mutual understanding and fosters empathy and trust. Resilience comes, to a great degree, from being educated through such person-to-person interactions.

“It is crucial to build up and maintain close, real human ties, and to raise digital literacy education to a level of far greater importance in society.”


Executive at a Major Consulting Firm
If we want to create more-resilient communities and people we should look to instill some of the early values of the internet into AI culture – aim AI design toward free sharing and empowering individuals.

An executive with a major consulting firm wrote, “It seems fairly likely that AI will play an increasingly major role in more and more aspects of our lives, if for no other reason but the amount of money and attention that is currently being put into these systems. I imagine that this effort will produce some business value and wealthy executives, but I’m less confident that it will lead most people to understand and feel the need for resilience.

“As long as we continue to view the development of AI as a ‘race’ to some competitive end point, it’s hard to see the battles around AI producing positive externalities over the long run.

“Instead of reinforcing this competitive lens for AI, if we want to create more-resilient communities and people, we should look for opportunities to instill some of the values of the early internet – such as freely sharing human knowledge and empowering marginalized voices – that made the internet of the early 2000s feel so promising and which seem so distant from the dominant values of today.”


> Go to Chapter 7 – Heart & Soul: Protecting Human Connection and Seeking Calm

> Return to the top of this page