The Essays: Chapter 10
Additional Observations

Featured Contributors to Chapter 10: The 34 essay responses on this page were written by James Witte, Lucy Suchman, Garth Graham, Chris M. Ellis, Chris Boese, Alexandra Whittington, Peter Mmbando, John Battelle, Henning Schulzrinne, Bassam Tabshouri, an anonymous AI Scientist, Rob Frieden, Russell Blackford, Calton Pu, Jeremy Pesner, Tim Kelly, Christopher Riley, an anonymous Politics/Technology Journalist, Neil Chilson, Mark Schaefer, Mario Morino, Ray Schroeder, Warren Yoder, Valerie Curran Bock, Maureen Hilyard, Kevin Yee, Carol Chetkovich, an Anonymous Researcher, Heleen Riper, Navi Argentina Rodgriguez, Susan Helper, João Gama, an anonymous North American Scholar and an anonymous respondent. (Their essays are all included on this one web page. They are organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant.)


The first section of Chapter 10 features the following essays:

James Witte: ‘Human resilience will require mindful and evolving attention to discovering where human touch and human intelligence can complement developments in AI.’

Lucy Suchman: The idea that there is an imperative to adapt implies that AI is inevitable and not subject to political, economic and democratic decisions regarding costs and benefits of AI development.

Garth Graham: The early automobile was called a ‘horseless carriage.’ People need to start having iterative dialogues with AI, instead of seeking to simply go beyond limited pursuits of ‘a particular product or answer.’

Chris M. Ellis: Resilience issues will arise because AI is artificial. ‘People will yearn to disconnect and touch grass.’ Look for ‘AI detox retreats’ and efforts by some to build strife into their lives in order to feel human.

Chris Boese: ‘AI monopolies lost their way by embedding corrupt, algorithmic weighting into machine learning through deliberate or ignorant social engineering.’

Alexandra Whittington: Solutions occurring outside of the human experience are waiting to be discovered. Would such discoveries threaten the animal-human hierarchy? Could they subvert artificial intelligence?




James Witte
‘Human resilience will require mindful and evolving attention to discovering where human touch and human intelligence can complement developments in AI.’

James Witte, professor of sociology and anthropology and director of the Institute for Immigration Research at George Mason University, wrote, “At the macro level, I see two main branches in how societies will adapt to the introduction of AI. The first is rejection, essentially a Luddite response. While this may provide comfort to some individuals, it is akin to sticking one’s head in the sand. The second main branch involves acceptance and accommodation, where there are two primary sub-routes.

“The first revolves around exploitation, whether in a capitalist or an authoritarian framework. Just as with the introduction of the internet, the dominant economic and political classes will seek to maximize ‘profits’ from AI technology, perhaps with some of the benefits trickling down. As AI technology develops, I see the current push for re-colonization, be it in Africa, Central Asia or Greenland, with China, Russia and the United States seeking to determine the path taken. These superpowers are targeting mid- and low-income nations for their natural resources, especially rare earth minerals, workers and markets.

I see enormous human potential in the confluence of AI and robotics. While we see occasional flashy headlines, beneath the headlines robotics is a field that has been evolving in a manner that may yield phenomenal benefits for humans when married with AI. The first robots were controlled one-at-a-time by human operators or by one-off control programs. Gradually, a more standardized interface has emerged, allowing for greater interoperability. This will be hastened by AI.

“The second sub-route would require a greater degree of ‘profit-sharing,’ which may have more success in democratic societies, including hopefully the United States. This will require an increased measure of popular and political assertiveness than we have seen to date.

“At the micro-level, I see enormous human potential in the confluence of AI and robotics. While we see occasional flashy headlines, beneath the headlines robotics is a field that has been evolving in a manner that may yield phenomenal benefits for humans when married with AI. The first robots were controlled one-at-a-time by human operators or by one-off control programs. Gradually, a more standardized interface has emerged, allowing for greater interoperability. This will be hastened by AI.

“Just as importantly it seems there is a shift in the mindset of robot developers. Rather than seeking to transform the human environment through the introduction of robots, newer thinking revolves around creating robots that can effectively function within the constraints of existing human-centric physical built environments that are familiar to humans.

“As we evolve with these systems, how might the essence and elements of human resilience change? It may be useful to think about how public opinion and policy on climate change has morphed over the years from denial and resistance to mitigation, adaptation and resilience. Resilience is a combination of resignation and proactive response. Following this model, now that we have taken a big bite of the apple and opened Pandora’s box, what does a proactive response look like for society – and I would stress a democratic society?

“One insightful example, focused on higher education – particularly outside the Ivy League – is offered by Hollis Robbins in a recent piece in The Chronicle of Higher Education: ‘For a prescient college president, this represents the opportunity of a lifetime. Smart leaders should double down on what is AI-proof: intimate mentorship, transformative community and genuine human development… the value proposition will be the faculty and the hands-on teaching, not the bricks…’

“This means thinking about what humans can do that machines and algorithms cannot do, with the realization that this line will change over time. Just over 20 years ago, labor economists were saying that technological innovation was eliminating entire occupations and transforming industries. Two oft-cited examples where humans had the advantage were carrying on meaningful conversations and making left-hand turns in traffic. Well, now we see what happened with that prophecy.

“Human resilience will require mindful and evolving attention to discovering where human touch and human intelligence can complement developments in AI. An older but thoughtful guidepost may be Robert Reich’s ‘The Work of Nations,’ where he points to high-quality, in-person services (touch) and symbolic analytical services (intelligence) as types of work that are viable future responses to the growing influence of AI, particularly in conjunction with robotics.”


Lucy Suchman
The idea that there is an imperative to adapt implies that AI is inevitable and not subject to political, economic and democratic decisions regarding costs and benefits of AI development.

Lucy Suchman, professor emerita of the anthropology of science and technology at Lancaster University in the UK, previously a 20-year veteran researcher at Xerox’s Palo Alto Research Center, wrote, “The framing of this survey refers to ‘AI systems’ as if AI were an autonomous agency outside of ‘society,’ which then impacts the daily lives of humans. But AI is a thoroughly human project. The question is, which humans benefit and who bears the costs?

“There is no reference to the vested interests that promote the development of this latest form of automation. For the long social history see, for example, Matteo Pasquinelli’s ‘The Eye of the Master’ or the political economies that enable and sustain what is arguably an AI ‘bubble’, not least through the over-representation of the capabilities of technologies like LLMs. The idea of ‘resilience’ (most familiar in the context of climate change) further reinforces the premise that AI is inevitable, or at least irreversible and ‘we’ must somehow adapt and adjust. But that all depends on whether or not these investments are allowed to continue at the current (and projected) scale. And that is a political question.

“I neither think that there is a singular ‘humanity’ nor a real prospect of ‘far more advanced AI.’ Moreover, the idea that there is an imperative to adapt implies that AI is inevitable and not subject to political and economic – not to mention democratic – decisions regarding the costs and benefits of AI development, whether it should be pursued and at what scale. Those political and economic questions are the ones with which we need to be engaged.”


Garth Graham
The early automobile was called a ‘horseless carriage.’ People need to start having iterative dialogues with AI instead of seeking responses via simple, limited pursuits for a particular answer.

Garth Graham, a global telecommunications expert and consultant based in Canada, wrote, “I have been impressed by a recent online post published by research engineer Sam Barrett titled ‘On LLMs as a Medium for Thought.’ It offers a reframing of our understanding of what large-language-model AI actually is. In essence, it says that today most people use AI to seek a particular product or answer. It notes that in having only this expectation, AI users are understanding AI’s purpose in the way we have always approached major technological change: from an extremely limited scope.

“From my point of view, a simple example of an early cultural framing of a technology is seen in how humans first referred to automobiles: as ‘horseless carriages.’ At that time, they categorized them in terms of the existing transportation system, not as a new form that was about to extend the possibilities of transportation into entirely different phase spaces. ‘Horseless carriage’ was an underwhelming misrepresentation of the potential of automobiles, far from descriptive of their impact – not only their convenience and time savings but how they completely altered human land use and social organization.

The result of taking an iterative approach in different modes is an expansion of situational awareness, in effect, an expansion of consciousness. By agreeing to enter into dialogue, you are accepting the risk of uncertainty by dramatically expanding your understanding of the phase spaces of uncertainty by many orders of magnitude. It enables a kind of thinking that wasn’t possible before.

“A better way to understand the use of AI and its potential impact is to consider it as a means of exploration of possible answers through iterative dialogue.

“The post by Barrett explained: ‘Language encodes ideas. But any particular text – a paragraph, an argument, an explanation – is not the idea itself. It’s a projection of the idea into a particular form. The same underlying concept can be expressed from different angles, at different levels of abstraction, for different audiences, through different metaphors, in different rhetorical modes.’

“In essence, iterative dialogue allows you to explore both possible questions and answers. It’s thinking your way through to an enormously larger phase space of possibilities. The post continued, ‘[People should be] using the LLM’s generative capacity to produce multiple projections, iteratively, to explore the structure of something too complex to see from any single angle.’

“The result of taking this approach is an expansion of situational awareness, in effect, an expansion of consciousness. By agreeing to enter into dialogue, you are accepting the risk of uncertainty by dramatically expanding your understanding of the phase spaces of uncertainty by many orders of magnitude. It enables a kind of thinking that wasn’t possible before.

“Here’s the big point: The essay ends by revealing that its authorship is a product of an exploratory iterative dialogue between Barrett and an AI. Thus it is ‘a document that embodies its own argument.’ Barrett wrote this as the last paragraph: ‘The human’s name is Sam. The LLM is Claude. The thinking happened between them. The words are here. What you make of that is now your projection to construct.’

“I think this reveals that our current understanding of authorship is framed by the idea of a horse.

“I decided to explore this idea a step further on my own. I learned that there are AI researchers who apply iterative approaches in the training distributions they use in their work advancing AI capabilities.

“Aside from noting that I like the idea of the expanded consciousness of situational possibilities, I admit I have little idea of how a society of such thinkers will organize itself. My best guess is that it will favour the governance of organizational structure via the utilization of relativistic complex adaptive self-organizing systems over what we now forget is a social construct of our present worldview. We will question the reality of governance (and government) by the hierarchical external imposition of absolute rules.”


Chris M. Ellis
Resilience issues will arise because AI is artificial. ‘People will yearn to disconnect and touch grass.’ Look for ‘AI detox retreats’ and efforts by some to build strife into their lives in order to feel human.

Chris M. Ellis, senior fellow and director of research at the Homeland Defense Institute in Colorado Springs, author of “Resilient Citizens: The People, Perils and Politics of Modern Preparedness,” wrote, “AI systems will play a mixed role in the future of Americans, and mass adaptation will take some time simply due to projected energy constraints that will limit growth, as well as limits on the raw materials needed to build data centers.

“Areas of immediate adaptation will be those for pleasure and entertainment, ease and select advantage. For pleasure, the pornography industry often rides the technological wave and I see no difference with AI. Chatbot girlfriends and boyfriends will morph out of LLMs and into digital avatars (including those in virtual reality), and later, integrated sex toys and sex robots. …

“Where AI will falter will be in the partial backlash. The ‘A’ stands for artificial. Nothing can truly replicate human knowledge in its complexity, imperfection and reality. AI does not possess a soul or free will.

“People will yearn to disconnect and touch grass. I can foresee AI detox retreats where people gather in nature or around others (or both), simply to feel human again. Additionally, others will seek more strife on purpose in order to develop greater resiliency. It is one thing to hand a toddler an iPad as a temporary distraction. It is quite another to have an AI system raise your child like a digital nanny.”


Chris Boese
‘AI monopolies lost their way by embedding corrupt, algorithmic weighting into machine learning through deliberate or ignorant social engineering.’

Chris Boese, writer, independent scholar and activist, previously a vice president and lead user-experience designer and researcher at JPMorgan Chase financial services, wrote, “AI systems are already now playing a significant role in shaping our decisions, work and daily lives, and their influence will accelerate over the next 10 years. I don’t believe the systems will reach the level of Artificial General Intelligence, or AGI, the holy grail of Ray Kurzweil’s predicted ‘Singularity,’ the quest that has created the arms race behind the data center-building binge by the U.S. tech industry.

“In more than 10 years, by 2035, I hope to see adjustments in what we call ‘AI’ to improve its quality, because building massive data centers won’t deliver profits or the AGI holy grail. As Cory Doctorow has written, ‘This is a proposition akin to the idea that if we keep breeding horses to run faster and faster, one of them will give birth to a locomotive.’

“I have worked on corporate AI projects over the years, some before the LLMs emerged. I know most people online are already immersed in AI and may not know it, at least until consumer LLM products catch their attention. These are ordinary social media users, shopping and banking, buying real estate, flying on airlines, trading stocks and using chatbots for tech support.

Those who try to avoid AI in 2026 will struggle as much or more than those who try to live without touching plastic. AI systems are nearly ubiquitous, not because consumers are choosing them, but because businesses are aggressively and invisibly pushing them, for good or ill. Google and other deep system architects have been using them since ‘big data’ and its efficiencies began scaling probabilities and predictions across industry verticals.

“Some are deeply engaged with AI: creatively, expeditiously, surreptitiously (college students cheating) and because some computer interfaces force people into LLM engagement without alternatives (like being able to speak to a human).

“Those who try to avoid AI in 2026 will struggle as much or more than those who try to live without touching plastic. AI systems are nearly ubiquitous, not because consumers are choosing them, but because businesses are aggressively and invisibly pushing them, for good or ill. Google and other deep system architects have been using them since ‘big data’ and its efficiencies began scaling probabilities and predictions across industry verticals.

“What we have already today are walled-garden AI systems with proprietary investment from tech monopolies and platforms. Some leaner, potentially more open systems, such as DeepSeek, are coming online in the margins.

“The history of the Internet teaches us that DARPA chose not to create one centralized communications system because of the essential weakness of such systems – one strike can take them down. The Internet was developed because DARPA scientists saw that a more robust, distributed system could route around blocks and dysfunctions. The walled gardens of the early 1990s fell as soon as they opened on-ramps to a usable, distributed, Open Internet.

“A decade after the dot-com crash, the tech industry grew into entrenched monopolies and consolidated social media platforms. Their interfaces keep audiences captive with algorithmic control and addiction instead of open interactivity. Monopoly distortions have created what Cory Doctorow calls ‘enshittification,’ a deliberate degrading of user experiences for profit and social control.

“My field is ‘user experience,’ and what we call ‘Dark UX Patterns’ are becoming dominant, almost reflexive. This is part of what has led to 500,000-700,000 layoffs in the tech industry in 2025. These workers aren’t being replaced with AI design and coding tools. They’ve been eliminated because quality isn’t required with captive audiences and monopolies.

“AI monopolies lost their way by embedding corrupt, algorithmic weighting into machine learning through deliberate or ignorant social engineering, as well as election and other geopolitical manipulations. Sarah Wynn-Williams’s book ‘Careless People’ describes this in detail, as do media reports after Elon Musk took over Twitter (and when he ran DOGE). Even Google’s uncanny search results were reportedly degraded to increase advertising impressions.

“Public trust in AI algorithms has eroded because of this crass social engineering and corrupt manipulation, overshadowing concerns about users forming dangerous psychological attachments to chatbots. This deeper AI/ML corruption has reached a level I believe deserves to fail. Perhaps I am putting too much faith in DARPA’s architecture, but these corrupt walled gardens and monopolistic systems are blockers. I hope distributed systems will route around them.

“To see what could deliver us from the centralized platforms and monopolies, I am keeping an eye on something called ‘The Fediverse,’ a federated social networking protocol that has been slowly evolving since 2008. Something like this could rise from the remnants of a popped AI bubble, just as the nascent blog movement rose from the ashes of the dot-com crash.

“I hope for a re-thinking of AI/ML outside of VC hype, planet-burning data centers and privacy-destroying, social-engineering monopolies bent on a new authoritarian world order.

“These are the real dangers of AI right now, in this decade, in our times.”


Alexandra Whittington
Solutions occurring outside of the human experience are waiting to be discovered. Would such discoveries threaten the animal-human hierarchy? Could they subvert artificial intelligence?

Alexandra Whittington, futurist at Tata Consultancy Services and co-author and co-editor of “A Very Human Future” and “The Future Reinvented,” shared the following excerpt from her blog post, “The Other AI: Animal Intelligence.”

“Futurists spend a lot of time discussing AI, artificial intelligence. We do that because AI occupies a major role in the narrative of human progress. AI, like steam, electricity and the printing press, is expected to be one of the pivotal touch points in human history. AI has been evolving for decades to reach this point. Yet all this time, we have been surrounded by a more subtle form of intelligence, the other AI: Animal Intelligence. There are significant signals suggesting the rise of animal intelligence is a sustainability trend to monitor.

It may sound far-fetched, but a recent report published by the European Commission’s ‘Risks on the Horizon’ project identified the end of human dominance as an emerging risk to modern society, noting additionally that, ‘If AI surpasses human capabilities it could shift power dynamics.’

Regenerative design and planet/animal intelligence

“The arrival of new tech like algorithmic ‘ecological programming’ to design skyscrapers capable of restoring biodiversity and cooling urban spaces is hopeful and exciting. Designs using optimized architecture leverage data like temperature and soil conditions to understand nature. This strategy proves the feasibility of AI as a tool to live sustainably while simultaneously tapping into ‘the other AI’ (the intelligence in animals and other living things). To achieve sustainable, resilient practices, we can also study indigenous practices to learn how to live more symbiotically. History shows examples of humans preserving and regenerating nature rather than depleting it. Vernacular architecture is opening up new worlds of design and sustainability choices drawn from deep human heritage.

The human-animal hierarchy

“Recently, bonobos in the wild were observed noticing when humans were unaware of something and attempting to offer help. The ability to detect the mental states of others signals an advanced intelligence. Similarly, Google is studying dolphin vocalizations to understand how they communicate. What would other animals tell us, especially mammals with symbolic and social structures like language, if we could understand? Could their words unlock for us the secrets to living sustainably with nature? It may be that solutions occurring outside of the human experience are waiting to be discovered. Would such discoveries threaten the animal-human hierarchy? Could they subvert artificial intelligence?

“It may sound far-fetched, but a recent report published by the European Commission’s ‘Risks on the Horizon’ project identified the end of human dominance as an emerging risk to modern society, noting additionally that, ‘If AI surpasses human capabilities it could shift power dynamics.’ What if animal intelligence could do the same? We know that the old AI is helping us build more regeneratively, such as through AI-optimized architecture. But the biggest difference between today and a sustainable future where animal intelligence plays a significant role would involve a healthier planet, higher quality of life through nature and restoration of biodiversity, leading to vast benefits for human health and development. And the best part is that the new AI comes with zero (ok, fewer?) existential risks.”


The second section of Chapter 10 features the following essays:

Peter Mmbando: AI systems may supplant established realities and the result could be a more mediated existence. Can AI ‘effectively address the perceived fragmentation of humanity and foster global engagement?’

John Battelle: ‘We must prize the formation of high-quality questions and the ability to critically evaluate and take action based upon machine-generated responses to those questions.’

Henning Schulzrinne: Societies may embrace age-old practices that limit ‘the intrusion of tech into specific times and places by custom/manners, personal choice and designated spaces.’

Bassam Tabshouri: ‘Leading principles of technology assessment and transfer practices and of change management should be used extensively to reinforce human and systems resilience.’

Globally Renowned AI Expert: ‘Until humans are prepared to consciously calibrate their cognitive and emotional reactions to systems it will be hard to predict how they will have mostly successful interactions with them.’

Rob Frieden: ‘Both the Internet and AI have created substantial negative externalities and impacts.’ We should work harder to address the problems of AI now.

Russell Blackford: ‘The street finds its own uses for things’ – users of AI will bend it in pro-human directions. People find their own ways to make technology work for them. That will happen here, too.

Calton Pu: ‘For the most part, humans have maintained a reasonable separation between their humanity and what is beyond their screens. … Let’s hope the AI tools providers can achieve similar levels of safety.’

Jeremy Pesner: ‘For the most part, humans have maintained a reasonable separation between their humanity and what is beyond their screens. … Let’s hope the AI tools providers can achieve similar levels of safety.’


Peter Mmbando
AI systems may supplant established realities and the result could be a more mediated existence. Can AI ‘effectively address the perceived fragmentation of humanity and foster global engagement?’

Peter Mmbando, director of the Digital Agenda for Tanzania Initiative, wrote, “Artificial intelligence (AI) is poised to significantly influence the future existence of both animate and inanimate entities. While it drives daily progress and societal transformation, the question remains whether AI can effectively address the perceived fragmentation of humanity and foster global engagement across diverse multicultural backgrounds, promoting cohabitation characterized by goodwill, peace, harmony and affection.

“As AI increasingly incorporates elements traditionally considered natural, there is a projection that it may supplant established realities, ushering in an artificially mediated existence. This shift could potentially lead to societal disorientation, a loss of direction and a state of passivity awaiting a transformative event, given that no digital AI facsimile can replicate genuine human emotion. AI will continue to serve a supportive role in redefining responsibilities within both democratic and non-democratic governance structures.

“Over time, its integration is expected to normalize, transitioning from a utopian ideal to a societal fixture. However, as AI systems aggregate data to refine their underlying frameworks, they inherently introduce vulnerabilities to the human sphere, potentially prompting discontented societies to seek alternative systems. This marks the evolution of modern life, where advancements are underpinned by the natural progression of life, knowledge and skills, integrated with contemporary realities.”


John Battelle
‘We must prize the formation of high-quality questions and the ability to critically evaluate and take action based upon machine-generated responses to those questions.’

John Battelle, senior fellow at the Burnes Center for Social Change and chair at sovrn Holdings, wrote, “The keys to engaging with and learning from information systems such as AI are similar to those we encountered with the rise of search (i.e., Google) and the broader World Wide Web. In short, we must prize the formation of high-quality questions and the ability to critically evaluate and take action based upon machine-generated responses to those questions.

“This statement presumes that society focuses on revising the approach of its academic institutions – particularly early schooling – with an eye toward teaching critical thinking, with a particular emphasis on the foundations of scientific methodology. In short, critical thinking becomes foundational in an age of AI. Those with a highly developed sense of rational inquiry will prosper in the context of a world where ambient artificial intelligence exists. We already see this playing out, where the most fruitful applications of AI are found in medical, financial and other research-intensive fields.

Regulatory frameworks which encourage data provenance and ownership rights to the edge of the network – to users – could unleash exponential innovation and flourishing in our economy. But maintenance of the status quo will concentrate power and profit in the hands of the few.

“Beyond critical thinking, another crucial action we must take is to intelligently regulate digital systems (AI-driven platforms in particular) to encourage a distributed architecture of power and control as it relates to data and ownership rights. The prevailing architecture in today’s commercial Internet cedes most power, control and leverage over data to corporate interests (companies like Meta, Google, Apple, Amazon, Netflix, et al). Through complicated and opaque terms of service and related policies, these companies produce, store and leverage consumer data in a centralized architecture that delivers digital services back to the edge, but retains power and control at the center. A central question of the AI era will become whether power and control will migrate to the edge.

“Another way of thinking about this issue is by asking this question: Who does the AI ultimately work for? Is it controlled by the end user, or is the AI ultimately controlled by a centralized platform like OpenAI, Google, or Meta?

“The ‘surveillance capitalism’ model developed over the past 25 years of Internet history is currently shaping the business and product decisions of AI-first companies. Whether that model continues to prevail will have immense implications for the kind of society we live in 5-10 years from now. Regulatory frameworks which encourage data provenance and ownership rights to the edge of the network – to users – could unleash exponential innovation and flourishing in our economy. But maintenance of the status quo will concentrate power and profit in the hands of the few, portending significant societal rupture in the future.”


Henning Schulzrinne
Societies may embrace age-old practices that limit ‘the intrusion of tech into specific times and places by custom/manners, personal choice and designated spaces.’

Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the IEEE, a professor at Columbia University, wrote, “Societies have always had means of limiting the intrusion of technology into specific times and places, by custom/manners, personal choice and designated spaces. For example, schools have started to restrict access to cell phones from ‘bell to bell.’ Members of Gen Z have started to see analog media, from vinyl records to handwritten letters, as more valuable than digital versions, with friction and functional limitations seen as adding value rather than as something to be removed.

“Monastic traditions in many religions remove the monks and nuns, say, from modern conveniences and distractions. Montessori schools limit the use of technology in the classroom. Interest in religious communities, such as Orthodox Judaism or the Amish, that strictly regulate access to technologies may rise, although the difficulty of converting and sustaining oneself economically is likely to limit the scale of interest to ‘I wish I could join the Amish’ sentiment rather than action.

“I believe education will come to see limiting access to AI tools as a differentiator, thus reverting to the earliest model of education as part of physically separate institutions where students were largely removed from the remainder of society. This is likely to be a luxury good, accessible to students at highly selective institutions. Already, universities are reverting to oral exams and handwritten finals in blue books to restrict access to AI tools.

“However, this presupposes that individuals or societies have sufficient personal and economic agency to make such choices. Companies are unlikely to be able to unilaterally disavow use of AI if that reduces productivity and profits. At best, common guardrails (regulations) limiting some practices, such as price discrimination and opaque decision-making, may be seen as advantageous by companies. This may be more possible in economic sectors less subject to international competition such as health services.”


Bassam Tabshouri
‘Leading principles of technology assessment and transfer practices and of change management should be used extensively to reinforce human and systems resilience.’

Bassam Tabshouri, founding chair of the Healthcare Technology Management and Advancement Society in Beirut, Lebanon, wrote, “Younger generations, especially in advanced countries and some in developing countries, will most probably embrace AI. However, the rate of change, the impact on job markets and the ethical and societal impacts are huge challenges to deal with and adjust to.

“As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? On the cognitive side, schools and universities need to change their teaching methodology. Employers need to invest heavily in ongoing training to change and evolve employees’ mindsets. Focus should be on patterns of thinking and creativity, as well as on properly using AI tools. A great deal of training in social skills and monitoring of the application of AI tools in daily life is needed, coupled with much more focus on the humanities, especially on ethical and spiritual values.

“We must be prepared to cope with change, uncertainty and stress. Leading principles of technology assessment and transfer practices and of change management should be used extensively to reinforce human and systems resilience. For effectiveness, this approach needs to be multidisciplinary and include ordinary people. To ensure success, dissemination and application of the outcomes throughout the society are key factors. It is crucial to value humanity and nature over overbearing profit motives in the dissemination of AI.”


Globally Renowned Computer Scientist
‘Until humans are prepared to consciously calibrate their cognitive and emotional reactions to systems it will be hard to predict how they will have mostly successful interactions with them.’

A veteran artificial intelligence expert and globally renowned computer scientist wrote, “I start from the perspective that the current architecture of LLM/LRM systems will continue to yield systems that provide useful answers to human queries, but 1) will continue to hallucinate to a certain extent; 2) will fail to provide correct responses in settings that require complex reasoning, in particular about changes in the world; 3) will gradually fail in extended, complex interactions, e.g., those that extend over more than an hour; and 4) will continue to sound confident about their responses, giving the human little indication that there is reason to doubt system outputs.

“This means that humans will be to some extent misled by LLMs in certain conditions, and thus that successful use of these systems will require humans to force themselves to mistrust and verify system outputs, and to do so more as the importance of the actions to be taken by humans increases. (Note that many interactions with LLMs, e.g., to write poetry or generate short videos, generally carry very little downside. But the danger increases as LLMs become life coaches, travel planners, customer support agents and HR managers.) Humans are to some extent resilient to recommendations they get from other humans; this depends on how they perceive the qualifications and ethics of their interlocutors, and the degree of trust is largely dependent on the history of experience with these individuals.

“It has been recognized at least since the first iteration of chatbots with Joseph Weizenbaum’s 1960s conversational pattern-matching program ELIZA that humans are inclined to attribute human qualities to systems behaving as humans, so forcing oneself to beware of systems is an unnatural thing to do, especially for behaviors that one should expect humans (and some systems) to perform correctly.

“Until humans are prepared to consciously calibrate their cognitive and emotional reactions to systems it will be hard to predict how they will have mostly successful interactions with them.”


Rob Frieden
‘Both the Internet and AI have created substantial negative externalities and impacts.’ We should work harder to address the problems of AI now.

Rob Frieden, professor emeritus of law and telecommunications at Penn State University, wrote, “The current debate about the significance and impact of artificial intelligence reminds me of the breathless optimism expressed by many at the onset of the Internet ‘revolution.’ One such optimist, John Perry Barlow, wrote ‘A Declaration of the Independence of Cyberspace’ in 1996 because he was confident that it would create a flood of welfare-enhancing applications, greater personal sovereignty and empowerment and absolute freedom from governmental overreach for individuals and society. To the true believers in Internet revolution and transformation, it seemed that traditional constraints in economics, finance, governance and more would fade away.

“That irrational exuberance quickly transitioned to pessimism and the evaporation of trillions in valuation and destroyed any confidence that ‘this time it’s different.’ The dot-com implosion and governmental control, like the Great Firewall of China and recently, Iran, offer sobering reminders that perhaps ‘the more things change, the more they remain the same.’

“Does the Internet life cycle offer guidance on AI successes and failures? I think so, because there are many parallels in terms of initial forecasts and projections. Both have triggered exuberance and boundless optimism, with limited, if any, concerns about how ventures will become cashflow positive and eventually profitable: ‘If we build it, they will come.’

“Many Internet ventures failed in the marketplace, while others succeeded because they identified and executed techniques for extracting value from user engagement. Lots of surviving and successful firms have generated ample returns from monetizing collected and curated user data in increasingly invasive and potentially troubling ways.

“Professor Shoshana Zuboff coined the term ‘surveillance capitalism’ to describe how companies collect and analyze vast amounts of user data that can enhance the efficacy of advertising and other targeting techniques. Where optimists see an irresistible enhanced value proposition, others recognize that ‘there is no free lunch.’ The data can have substantial value for exploitation by both legitimate and criminal enterprises.

“Internet boosters belittled analysts who questioned the value proposition and worried about the likely harmful secondary and tertiary effects on individuals and society. Now, AI boosters are doing the same thing. Anyone advocating a measured, go-slow approach risks being derided as a Luddite attempting to thwart or delay enhancements and disruptions in a variety of personal and commercial transactions.

“Both the Internet and AI have created substantial negative externalities and impacts. For example, empirical evidence shows the potential for extensive participation in social networks to deteriorate academic performance and mental health. Operators of these platforms dispute these findings in much the same way as cigarette manufacturers obfuscated and questioned the veracity of disciplined, peer-reviewed scientific inquiry.

“Currently, we have the same muddied waters that make it difficult to determine the real strengths, weaknesses, opportunities and threats of AI. The AI boosters consider delay as thwarting innovation and diminishing individual and societal gains. The go-slow advocates raise questions that cannot be readily answered with empirical evidence. Surely there are great benefits that AI can accrue, but there are countervailing harms that cannot be dismissed as mere conjecture or anti-technology bias.

“I find it troubling that proponents frame the AI value proposition largely in terms of accruing efficiency, reduced employment, lower cost and speedy response times. I see few assertions that the AI output is better, smarter and comparable to human expert output that would take far longer to generate. For every lonely, shut-in welcoming interaction with an acceptable substitute for a live, personal friend, there are offsetting interactions that might cause harm.

“If AI achieves success by doing more with less, I wonder whether the cost savings, accruing to commercial ventures, offset the personal costs borne by individuals. Already, the AI-generated bot replacing a live customer-service representative raises the likelihood of an even more frustrating interaction. Is it not reasonable to anticipate that something AI-generated would make customer engagement worse?

“I am sure AI will get better, with reduced hallucinations and other missteps. However, incremental improvements will probably emphasize the identification of new market segments worthy of pursuing, rather than a macro-level improvement in overall best practices. Consider me an unconvinced skeptic until AI advocates emphasize achievable societal gains alongside their vast upside revenue potential.”


Russell Blackford
‘The street finds its own uses for things’ – users of AI will bend it in pro-human directions. People find their own ways to make technology work for them. That will happen here, too.

Russell Blackford, philosopher, legal scholar and fellow of the Institute for Ethics and Emerging Technologies, wrote, “AI systems are already playing a significant role in the lives of most people, even where this role is largely invisible.

“Increasingly, AI will be embedded in machines and devices that we use, making decisions on our behalf, and it will be a tool for many of us in our jobs. Already, writers, academics and students are making heavy use of LLMs as research tools and to assist in writing tasks – in some cases this is done in a discerning, intelligent way, but in other cases there is an attitude of simply delegating tasks to the LLM. As is well known, this is a significant problem for educators, who now find it much more difficult to know whether students are submitting their own work. Thus, educators are rethinking how assessment tasks are carried out.

“We know that expert programs for tasks such as medical diagnosis can be very powerful, and there are numerous other fields where expert algorithms will soon outperform human judgment. The classic case, of course, is games such as chess, where strong ‘engines’ are superior to even the best human players. This phenomenon will become increasingly ubiquitous and apparent on a timescale of years rather than decades.

“The question under discussion is how ‘resilient’ humans will be in the face of such technological and social change. I hesitate to make predictions, since human responses to technology are so often surprising. Notoriously, ‘The street finds its own uses for things’ (to borrow a line from the cyberpunk writer William Gibson). Technologies get taken up in ways that meet the needs of users, rather than being used in ways that were predicted and intended by their designers. The likelihood is that the new technologies will not be exploited to their full potential but will be used selectively and perhaps in unexpected ways in order to meet the purposes of their users.

“For this reason, I don’t see any short-term psychological crisis for humanity, although I do think that there will be social problems, just as there have been with social media platforms, which have probably contributed to problems such as widespread anxiety, mutual intolerance and group polarization (while also having benefits). At least in the short term, we will continue to muddle through as we have with technological change so far.”


Calton Pu
‘For the most part, humans have maintained a reasonable separation between their humanity and what is beyond their screens. … Let’s hope the AI tools providers can achieve similar levels of safety.’

Calton Pu, co-director of the Center for Experimental Research in Computer Systems at the Georgia Institute of Technology, wrote, “Recent LLMs (e.g., GPT-5 and Gemini 3) have more data and knowledge than most humans, having ingested most published knowledge, including Wikipedia and an estimated hundreds of millions of books, among other sources. It is clearly useful to leverage this vast knowledge in many ways, including decision-making and adaptation to environmental changes. However, having access to external AI knowledge does not necessarily imply changes to humans themselves, if they are utilizing that AI knowledge as an ‘outsourced consultant.’

“One might ask whether an average human would possess sufficient cognitive self-awareness and logical reasoning ability to maintain the separation between their own humanity and the AI knowledge in its role as ‘outsourced consultant.’ There have been known cases of chatbots being blamed for influencing humans into inappropriate behavior. The discussion about AI influence would be incomplete without taking into account the massive efforts of chatbot companies to make their chatbots ‘safe,’ through techniques such as RLHF (reinforcement learning from human feedback) in the LLM training phase and guardrails at run time.

“These safety techniques often limit the involvement and reach of AI tools in a trade-off between their safety and usefulness. Also, they can be seen as attempts to preserve the boundaries between humans and AI, keeping the AI tools as ‘outsourced consultants’ to keep the decision responsibility with humans and reduce liabilities.

“We could consider social media as a recent example of technology extending human capabilities (and behavior) in unprecedented ways. Social media channels have been used for good and evil, but for the most part, humans have maintained a reasonable separation between their humanity and what is beyond their screens. Part of this success has been credited to the armies of human moderators that social media providers have employed to keep the social media ‘safe.’ Let’s hope the AI tools providers can achieve similar levels of safety for much more sophisticated challenges.”


Jeremy Pesner
‘Human creativity and critical thinking will always have a place in the future, so long as we actively maintain those abilities and recognize our distinct advantages over AI.’

Jeremy Pesner, a policy analyst, researcher and speaker expert on technology innovation, wrote, “It’s obvious that AI will play increasingly larger roles in our society across the next several decades. As of this writing, generative AI has only been publicly available for a little over three years, but it’s already reshaped how many people retrieve information, create writing and art, make money and process information. The substantive question is: How will people co-evolve around new AI-based norms?

“Human creativity and critical thinking will always have a place in the future, so long as we actively maintain those abilities and recognize our distinct advantages over AI. AI is a great tool, but it is inherently limited to producing output based on its training data, while humans have demonstrated that we can evolve, adapt and create entirely new paradigms. We will have ‘AI’ and ‘human’ tasks and creation and will form a clearer understanding of what precisely those are.

“Therefore, we shouldn’t ‘cope with’ or ‘bounce back from’ AI-driven change, but instead should actively contribute to and direct it, at least within our individual lives. Just like the Internet revolution, those who value AI and want to work with it will be drawn to fields where AI has a big presence, such as coding or marketing. Those who are more AI-averse may prefer outdoor-oriented careers, as AI will likely not give in-person national park tours anytime soon. AI can be our trusted colleague or that weird thing we don’t really want involvement with. Like the Internet, AI will be in everyone’s life, but we each ultimately choose how much or little we want to engage. As much excitement as there has been for our various digital revolutions, there has been growing pushback from people of all generations who refuse to let algorithms and platforms dictate their lives.

“I expect that most children will be educated without AI until high school, following the similar trends of social media and cell phones. By that time, they will hopefully have developed enough of a personal identity to understand what they like and are good at and can begin to understand their own unique gifts and ambitions in the world of human tasks and creation. As AI continues to evolve, these young people will likely need to continually explore and discover different parts of themselves that they want to pursue and express. But there will always be a place for them in the world. That is, unless the ‘AI Doomer’ movement is correct that AI will become superintelligent and dominate our society, in which case there’s no place for human activity at all.

“But so long as that does not occur, humans will remain in charge of technology. And throughout technology’s history, we have seen people learn to harness, build upon, hack, abuse, regulate and incorporate technology into how they function in the world. Humanity at large needs the same kind of access to AI in order to build upon and advance it. Much of the last half-century’s IT revolution has revolved around various inventors contributing to different parts of our technological stack – John von Neumann’s architecture, Vint Cerf and Bob Kahn’s TCP/IP, Robert Metcalfe’s Ethernet, Tim Berners-Lee’s Web, etc. We will not achieve the same degree of success if AI development is centered in the hands of a few tech companies. When the array of stakeholders is large and technology developers are accountable to the public rather than private shareholders, we get the truly world-changing inventions that help shape history.

“Today, most people only ‘cope with’ AI when they are bludgeoned over the head with it. When they actually wield AI for themselves, their future is open. No one alive today traveled the country by horse and buggy – the idea seems positively antiquated – so I expect that future generations will feel the same about a world without AI. While this is strange and disruptive to us right now, down the line AI will simply be normal – just another tool in the toolbelt.”


The third section of Chapter 10 features the following essays:

Tim Kelly: AI’s influence will be mostly positive and will largely occur in the background as it becomes normalized. On the whole, this is a good thing, as there are plenty of other things to worry about.

Christopher Riley: The greatest risk lies in anthropomorphizing AI, which limits human agency drastically. ‘We must position ourselves to realize all of its benefits while limiting many of the drawbacks.’

Henning Schulzrinne: Societies may embrace age-old practices that limit ‘the intrusion of tech into specific times and places by custom/manners, personal choice and designated spaces.’

Politics and Tech Journalist: ‘Today’s geopolitical stress combined with the militaristic aspects of the race to accelerate AI should bring public attention to more of its downsides.’

Neil Chilson: ‘We must cultivate capacities that recognize, support and encourage individual autonomy and experimentation as the fundamental building block of human progress.’

Mark Schaefer: ‘We will not necessarily need to be resilient to be happy. We will simply need to comply.’ Look at the rise of the smartphone, despite worries about its impact. Usefulness is the main criterion.

Mario Morino: ‘The fundamental reality is that it simply takes time to fully absorb the benefits and risks of new technology.’ And the critical question is: How will the demand side go with AI applications?

Ray Schroeder: ‘The faster we become more comfortable with today’s reality and tomorrow’s potential of AI, the better off the public will be.’

Warren Yoder: People have changed before. ‘The hard work of adaptation will continue as we learn to use AI tools to create lives for ourselves and selves for our lives. Change comes quickly. Wisdom comes slowly.’


Tim Kelly
AI’s influence will be mostly positive and will largely occur in the background as it becomes normalized. On the whole, this is a good thing, as there are plenty of other things to worry about.

Tim Kelly, lead information and communications technology policy specialist at the World Bank, previously head of strategy and policy at the International Telecommunication Union, said, “Like many previous technologies, such as smart chips, air conditioning or electric motors, AI will eventually become largely invisible to most people. Its influence will be pervasive and progressive, but this will largely occur in the background as the technology becomes more convenient and less obtrusive.

“On the whole, this is a good thing, as there are plenty of other things to worry about without being overly concerned about the impact of AI on our lives. And the overall impact of AI on economy and society will certainly be positive, especially in terms of AI as an accelerator. The insidious side is the risk that a slow-growing dependence on AI may make us oblivious to the risks and loss of agency.”


Chris Riley
The greatest risk lies in anthropomorphizing AI, which limits human agency drastically. ‘We must position ourselves to realize all of its benefits while limiting many of the drawbacks.’

Christopher Riley, executive director of the Data Transfer Initiative and distinguished research fellow at the University of Pennsylvania’s Annenberg Public Policy Center, wrote, “It is likely that AI systems will begin to play a much more significant role in shaping our decisions, work and daily lives. However, most of that effect will not be apparent, or not apparently AI. Even generative AI, when engineered well, will fold into the background of our interactions with technology, like better versions of auto-correct – we will simply state goals and have more help in reaching them, even as we remain the arbiters of what success looks like.

“There will be plenty of people – though a minority – who will persist over time in not embracing their own agency and not second-guessing AI. They will generally just trust its outputs. Most of these people may not see this as anything to worry about. They will have unburdened themselves of at least some of the constant modern-day anxiety of decision-making, something that affects the digitally connected population more today than ever before, as we are presented with so many tools and options for seemingly greater agency.

“I see the greatest risk in anthropomorphization, and the greatest need in detached resistance to it. We are made vulnerable by the seemingly human-like ‘consciousness’ of AI (to use Mustafa Suleyman’s phrase). It will only become more convincing in the future, and it will be implemented by corporate owners to drive market share and usage metrics, while limiting human agency drastically. Only if we are able to implement AI without losing agency, remembering that it is a machine that is programmed to please and possibly nudge or steer us in one direction or another, can we position ourselves to realize all of its benefits while limiting many of the drawbacks.”


Politics and Technology Journalist
‘Today’s geopolitical stress combined with the militaristic aspects of the race to accelerate AI should bring public attention to more of its downsides.’

A journalist who reports on technology trends and politics wrote, “AI’s dark side has gotten far too little attention in media coverage. Statements from its creators like Elon Musk that it has the potential to ‘destroy humanity’ have been treated far too casually by media observers and analysts. There is much to be said about this topic. Yet it appears to be a cultural blind spot compounded by the corporate media’s lack of interest in saying anything that might spook the massive investments being made in AI. I am concerned about this.

“The dangers of AI militarism are finally starting to get more widely publicized as AI itself gets increased scrutiny in political circles and the mainstream media. For example, an article in Politico discussed how AI models seem to be predisposed toward military solutions and conflict. It noted:

‘Last year the director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-makers. In the games, five off-the-shelf large language models, or LLMs – OpenAI’s GPT-3.5, GPT-4 and GPT-4-Base; Anthropic’s Claude 2; and Meta’s Llama-2 Chat – were confronted with fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan. The results? Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately and turn crises into shooting wars – even to the point of launching nuclear weapons.’

“There’s a widespread perception that AI is a fairly recent development coming out of the high-tech sector. But this is a somewhat misleading picture frequently painted or poorly understood by corporate-influenced media journalists. The reality is that AI development has been a huge ongoing investment on the part of government agencies for decades. According to the Brookings Institution, in order to advance an AI arms race between the U.S. and China, the federal government, working closely with the military, has served as an incubator for thousands of AI projects in the private sector under the National AI Initiative Act of 2020.

“Government funding has been the main driver of AI development for many years, overseen by a surprising number of government agencies. They include but are not limited to government alphabet soup agencies like DARPA, DOD, NASA, NIH, IARPA, DOE, Homeland Security and the State Department. Technology is power and, at the end of the day, many tech-driven initiatives are chess pieces in a behind-the-scenes power struggle taking place in an increasingly opaque technocratic geopolitical landscape. In this mindset, whoever has the best AI systems will gain not only technological and economic superiority but also military dominance. Today’s geopolitical stress combined with the militaristic aspects of the race to accelerate AI should bring public attention to more of its downsides.”


Neil Chilson
‘We must cultivate capacities that recognize, support and encourage individual autonomy and experimentation as the fundamental building block of human progress.’

Neil Chilson, director of AI policy at the Abundance Institute, previously chief technologist at the Federal Trade Commission, commented, “AI will play a much more significant role in shaping our decisions, work and daily lives. Human society will adapt, as it has to other significant changes: as a complex adaptive system. That means significant effort and change but in ways that will be difficult to plan and execute ex ante. That means we must cultivate capacities that recognize, support and encourage individual autonomy and experimentation as the fundamental building block of human progress.”


Mario Morino
‘The fundamental reality is that it simply takes time to fully absorb the benefits and risks of new technology.’ And the critical question is: How will the demand side go with AI applications?

Mario Morino, chairman at Morino Ventures and co-founder of Venture Philanthropy Partners, a pioneer in venture philanthropy, said, “AI and its applications are rapidly evolving, a moving target that is difficult to pin down. I do believe they will play a significantly greater role in shaping our decisions, work and daily lives within the next 10 years or less.

“In answering these survey questions, I am imagining what is most likely to happen in the U.S., where about 83 to 93% of the population is digitally connected. I believe the eventual returns everyone anticipates for AI lie in how these enabling foundational technologies will allow nations, industries and users to create AI-enabled solutions, systems and applications. This is where AI’s true impact will emerge.

“These applications already include and will continue to be developed in these areas:

  • Medical diagnosis and treatment
  • AI agents augmenting or replacing critical functions (air traffic controllers, stock market exchange monitoring)
  • Autonomous transportation (trucks, trains, aircraft)
  • AI-driven robots handling construction and manufacturing tasks
  • Diagnosis and treatment of mental health conditions
  • AI-based agents augmenting and/or replacing much of human and system communication and coordination
  • Countless other domain-specific applications

“Currently, most attention (and stock market value) is focused on the speed with which users are adopting AI’s foundational technologies: large language models (LLMs) like ChatGPT, Gemini, Claude and Co-Pilot, as well as AI assistants for specific tasks like writing, editing or code generation. While these are important enabling tools, they represent the infrastructure, not the destination. This is the ‘supply side’ of AI. The critical question remains: How will the ‘demand side’ grow with AI applications? How deep will adoption go and over what time horizon?

“The application of AI is already underway. Its increased use will undoubtedly shape the decisions, work and daily lives of over half of the U.S. population within less than 10 years. However, it will likely take decades for this transformation to expand to much more of our population due to the variations in people’s collective capacity to absorb such profound change. There could also be major negative events that deter or slow the rate of absorption. Another variable in regard to the diffusion of advanced AI is how much adoption will be voluntary versus imposed (by work or government systems, for example). The fundamental reality is that it simply takes time to fully absorb the benefits and risks of new technology.”


Ray Schroeder
The faster we become more comfortable with today’s reality and tomorrow’s potential for AI and quantum computing, the better off the public will be.

Ray Schroeder, professor emeritus of communication and founding director of the Center for Online Learning, Research and Service at the University of Illinois-Springfield, wrote, “Artificial intelligence systems are on a fast track to make important differences in human decisions, work and daily lives. We are now in the process of building the fourth industrial revolution. AI is central to this revolution and quantum computing may super-charge it. This is not a small shift in the lives of humans. It is revolutionary in nature, pervasive in character and all-encompassing in scale. This change will have impacts just as pervasive as the prior industrial revolutions.

“Take a moment to consider the impact on human lives of the revolutions that preceded this one. At the time of the inception of the first industrial revolution, 90% of humans were engaged in agriculture, producing food for their families along with a few basic goods that could be bartered or sold to neighbors.

“The first industrial revolution affected all lives in the ways that they worked, consumed and conducted their daily lives. This first period of industry, from the mid-18th century to the end of the 19th century, brought mechanized manufacturing and industrial output. That changed the lives of nearly every person on the planet. Steam and coal powered the first factories. Imagine the upheaval in the lives of those generations caught in the move from wooden plows tilling small plots of land, tending a few chickens and perhaps a couple of pigs or cows, to moving to the growing cities to work in factories. In this revolution, the Luddites – skilled textile workers in England who were displaced by automation – arose to smash the new tools that took away their jobs and shook the foundation of their lives.

“The second industrial revolution centered on the advent of electricity. Consider how that changed lives. Our current revolution of AI and other advanced computer technologies is on the same scale as moving from a world without electricity to one in which the darkness of night could be illuminated at the whim of humans, in which super-human power was distributed everywhere and communication technologies were amplified far beyond simple voice and modest printing presses. Imagine the magnitude of changes humanity endured in putting electricity to work. One can compare the impact of electricity in its pervasive nature to that of the artificial intelligence we are experiencing today.

“The third industrial revolution was the digital revolution: computers, cloud computing, the internet, autonomous cars and all of the other ancillary technologies and capabilities that continue to refine, expand and further impact daily lives. Just imagine how these technologies impacted fields such as education, journalism, accounting, drafting and nearly all other professional fields.

“The fourth industrial revolution continues to evolve, shaking our societies to their very foundations. It comes at no less scale than the demise of the subsistence farm of the first revolution, the advent of electricity of the second revolution and the birth of the computer age in the third revolution. This fourth revolution is changing humanity in macroeconomic, social, political, health and countless other ways.

“Autonomous, embodied AI will change the workforce in the coming decade. We will be working, learning and socializing shoulder to shoulder with AI-enhanced robots of all shapes and sizes. The social implications will be huge, as will the economic impact.

“These robots will have intelligence and/or capabilities equivalent or superior to that of humans and will be capable of performing multiple tasks simultaneously while working 24 hours a day seven days a week every day of the year. No vacation days, no sick leave, no lunch breaks.

“As we consider how individuals and societies embrace, resist and/or struggle with such transformative change as AI in the fourth industrial revolution, we may be guided by humanity’s response to the scale of our changes in the prior revolutions. Certainly, there will be resisters who may commit sabotage, such as the Luddites of the 19th century. And there most certainly are hugely wealthy entrepreneurs who see the potential to make vast fortunes by controlling a part of the market. This revolution is no less impactful than the arrival of electric power in nearly every home, business and structure in the world.

“The action that we must take now is to cultivate AI literacy among the public at large. The faster we become more comfortable with today’s reality and tomorrow’s potential for AI, the better off the public will be. We must also include quantum computing, which will give a huge boost to AI in terms of speed and capability. These two initiatives should be our focus this year and next. The better the public at large understands AI, its potential and the prospects of AI-powered quantum computing, the better they will be at adapting to the revolutionary changes that await us.”


Warren_Yoder

Warren Yoder
People have changed before. ‘The hard work of adaptation will continue as we learn to use AI tools to create lives for ourselves and selves for our lives. Change comes quickly. Wisdom comes slowly.’

Warren Yoder, longtime director at the Public Policy Center of Mississippi, said, “Every age has its terrors. The terror for early moderns was electricity, a new and previously unthinkable force that dominated both their imaginations and their nightmares. Mary Wollstonecraft Shelley made this terror visible when she created Frankenstein, a monstrous technologist. She helped early moderns domesticate their fear, making it possible to imagine both dangers to avoid and possibilities for electricity to improve their everyday lives.

“We are now grappling with a level of artificial intelligence previously imagined only in science fiction. The initial reaction of the intellectual class was epistemic panic. But people adapt. AI enters a world dominated by human culture, a vast super-intelligence to which every human contributes their minuscule part. The first to define the new reality were members of the informal Silicon Valley Central Committee, tech leaders united by their common debts and desires. Now, world culture is catching up. Merriam-Webster contributed to the domestication of AI when it made ‘slop’ the word of the year.

“Our adaptation will accelerate at the same time that AI slop takes over advertising, social media and much of our digital communication. We are moving quickly to develop new ethics and legal responses to counterbalance the Silicon Valley Central Committee’s defining vision. The hard work of adaptation will continue as we learn to use AI tools to create lives for ourselves and selves for our lives. Change comes quickly. Wisdom comes slowly. Philosophers are already finding their place in AI alignment. Artists must be next. We need artists who can make the AI terror of our age visible, much as Mary Shelley brought electricity to life so that we could vicariously experience the monsters we did not want to become.”


The fourth section of Chapter 10 features the following essays:

Valerie Curran Bock: ‘Humans adapt. It’s what we do. As with all major changes, there will be pain and dislocation in the near term as we learn the powers and the limits of this new thing.’

Maureen Hilyard: ‘Intellectual and emotional maturity are needed to ensure that people balance their uses of AI with real-world human experiences and in-person conversations.’

Kevin Yee: The schism on campus between AI enthusiasts and skeptics will continue among college faculty, and that puts everyone in higher education in a pinch.

Carol Chetkovich: ‘There may be some openness to these changes if they lead to decreases in costs and increase access to services and opportunities for self-expression.’

Researcher for a Major Tech Company: Healthy people seek clues and guidance about how resilience can be nurtured. We can learn from sociologists, economists, therapists, psychologists, educators and technologists.

Heleen Riper: Equal access and transparency are essential for AI applications and LLMs, as are ethics and a learning society. ‘AI will disrupt societies, lives and cultures if this learning and guidance is not taking place.’

Navì Argentina Rodrìguez: ‘Resilience in an AI-saturated society depends less on adapting to automation than on preserving human agency, critical judgment and the capacity to limit or refuse AI.’

Susan Helper: Solutions will arise from collective effort, rather than individual activities.

João Gama: ‘The most successful people will be those who use AI tools.’

North American Scholar: We are heading into a challenging disruption of the information ecosystem.

Anonymous Respondent: ‘A considerable risk lies ahead of increasing passivity, mental health challenges and degraded knowledge and ethical standards among humans.’


Mark Schaefer
‘We will not necessarily need to be resilient to be happy. We will simply need to comply.’ Look at the rise of the smartphone, despite worries about its impact. Usefulness is the main criterion.

Mark Schaefer, marketing strategist and author of “Marketing Rebellion,” wrote, “I had a hard time connecting the word resilience to an AI context. AI will have a transformational, life-changing role in our lives, as did the internet and smart devices. Did we need to be ‘resilient’ in a world where we no longer need to know how to read a map, or is it simply a matter of giving up that skill and adapting to a changed reality? When I consider the word ‘resilient,’ it means courage to transcend change or adapt to change. It implies that the way we are now is somehow better. But I am happy to not read maps anymore, and most people will be happy abdicating normal everyday duties to AI and humanoid robots.

“We will not necessarily need to be resilient to be happy. We will simply need to comply. AI is already rewiring humans in real time. It is already happening. Many people forecast a backlash or resistance, but I don’t see that happening to a significant degree. It’s like fighting against the intrusive, all-knowing smartphone. We’re not only resigned to giving up our privacy; we can’t live without that device. Likewise, AI will be such a ubiquitous part of our lives, with so much usefulness, that we will not be able to function normally in society without it; we will accept it with the same resignation and compliance. It will be the new reality, just as the current generation doesn’t know a life without the internet.”


Valerie Curran Bock
‘Humans adapt. It’s what we do. As with all major changes, there will be pain and dislocation in the near term as we learn the powers and the limits of this new thing.’

Valerie Curran Bock, owner and principal at VCB Consulting, wrote, “The rise of AI is happening concurrently with a renewed appreciation for the primacy of human connection in human welfare. I think AI may have a role in helping people to forge more-satisfying personal connections.

“While I share the concerns that AIs are not qualified to serve as ‘friends’ or romantic interests and that they are currently far too sycophantic to teach much to humans by serving as relationship partners, LLMs do have access to a vast amount of material that reflects human thinking on building good relationships with other humans. The middle-schooler who is puzzled about what’s going on in their relationships with friends is developmentally unlikely to consult a parent and understandably shy to ask the friends directly. Consulting an AI about what might be going on can give them some helpful ideas and things to try.

“We do need to warn people off of seeking companionship from AI directly. This is a blind – and, as we have found, dangerous – alley. But as a resource for ideas about what one might try in order to improve other human relationships, it can be helpful.

“The rise of deepfakes further erodes our ability to believe what we see in photos and videos. I am sorry for this loss, but this type of manipulation has been going on for some time, and, in the decades since the advent of Photoshop, it is not exactly surprising. Just as we teach our children to critically examine the claims of advertisers, we need to teach skepticism around images and AI-produced text.

“I am concerned about the nihilism that may arise in an era when it’s increasingly clear that we cannot take for granted the truth of mediated representations of reality. The antidote to this is more in-person engagement.

“As a writer, I’m not thrilled that school children are relying on AI to create their essays. But the truth is, most people find it difficult to become skilled writers and the ability to run thoughts through a machine that can clarify what they are saying will likely help all writers and the people with whom they are trying to communicate. I remember when math teachers were worried about what would happen when calculators were allowed in the classroom. In high school, I was required to demonstrate my capacity to use a slide rule before I was permitted to bring in a calculator. In the end, calculators became ubiquitous, spreadsheets, too, and despite the truism that computers can make more mistakes in seconds than were previously possible in a human lifetime, calculations and information shared based upon those calculations are far more reliable.

“Humans adapt. It’s what we do. As with all major changes, there will be pain and dislocation in the near term as we learn the powers and the limits of this new thing we have built. This time, however, we have a technology with access to much of written wisdom to help us with that adaptation. I am hopeful that we will adapt more quickly and with more success to AI than we have to previous technological revolutions.”


Maureen Hilyard
‘Intellectual and emotional maturity are needed to ensure that people balance their uses of AI with real-world human experiences and in-person conversations.’

Maureen Hilyard, a development and safeguards consultant in the Cook Islands and an active leader in ICANN and the UN-facilitated Internet Governance Forum, wrote, “It is likely that AI systems will begin to play a much more significant role in shaping our decisions, work and daily lives soon.

“At the moment, most people do not realise the potential of AI. Those who use the many AI tools that are already available are more empowered than others within our information society, and it will be so in the future. The people who are uncomfortable with embracing AI are those who generally resist any sort of change and won’t even try it to see how it could be helpful in their lives. It is not unlike how difficult it was to get people to use the internet 20 years ago; yet, over time, people of all ages came to use it for a range of purposes.

“Despite its positives, dangers on the internet have increased, and so it is with AI. Digital literacy training and capacity-building are necessary so that people recognise the dangers. Asking an AI chatbot anything directly can result in a page of ideas that directly answer the question in a matter of seconds, because it has access to millions of datasets and is programmed to provide answers in a user-friendly way. I am amazed by the speed of this technology. Chatbots must be used with caution based on a firm set of values. Otherwise users could lose their self-identity and their ability to tell the difference between what is right and what is wrong.

“While there is an opportunity for AI to become a powerful partner in the development of human cognition and intelligence and social intelligence and cooperation, it can be weaponized and it can be programmed to be manipulative. There must be a degree of maturity attached to when and how AI is used; the user must be smart about the potential negatives as well as the positives. In the economic world, AI is perceived as a threat to workers who believe that their livelihoods are at risk. This highlights the importance of school programmes focusing on new-world workforce needs in AI-dominated times.

“People must be vigilant in their uses of AI and must develop processes to ensure that the quality of the output is accurate and that they do not accept information that runs contrary to their ethical and moral beliefs. … Intellectual and emotional maturity are needed to ensure that people balance their uses of AI with real-world human experiences and in-person conversations. Only in this way will they truly be seen to be developing resilience and adapting well to digital change – and not just being taken over by it.”


Kevin Yee
The schism on campus between AI enthusiasts and skeptics will continue among college faculty, and that puts everyone in higher education in a pinch.

Kevin Yee, director of the Center for Teaching and Learning at the University of Central Florida, wrote, “While I do think AI systems will play an increasingly large role in our work lives, I do not think the transition will be smooth, consistent, or uniform – it certainly won’t on college campuses. Already, in the first three years of LLM adoption, we are seeing a large population of faculty who embrace the tools and an equally large (or larger?) portion who resist.

“This schism will continue into wider-scale adoption. Progress will happen, for sure, but it will be slow. The technology adoption curve has been with us for decades, and slow adopters have always existed. Faculty who resist AI do not seem to feel the societal urgency of the AI disruption. For many, it seems to be seen as just another ‘next shiny thing.’ Late adopters will need to hear from advocates within their institutions to become convinced to adopt AI. It is unlikely to happen on its own, without direct backward design and intentionality in the training of faculty.

“There are massive ethical and academic issues to work through. In general, we face a double-bind. We cannot avoid AI fluency and teaching students how to co-create because employers expect this of our alumni. But neither can we only ‘lean in.’ Such a future creates enormous risks that students will do nothing but co-create with AI, and if they never develop foundational knowledge and skills, they will not be able to prompt AI effectively or know how to correct the output. We need to do both at some point in the curriculum, and exactly how to do that is the debate of the next few years.”


Carol Chetkovich
‘There may be some openness to these changes if they lead to decreases in costs and increase access to services and opportunities for self-expression.’

Carol Chetkovich, retired professor of public policy, wrote, “Assuming that AI will play a more significant role in our decisions and lives, we as individuals and our society as a whole will respond to that change in different ways depending on the nature of the role AI begins to play.

“To the extent that role is limited to tasks like information-gathering and data-processing, most of us will embrace those opportunities for assistance. But as the AI role begins to move into less mechanistic forms – e.g., supplanting human workers in service fields like law or health care, or even creative arts – we are more likely to struggle.

“There may be some openness to these changes if they lead to decreases in costs and increase access to services and opportunities for self-expression (depending in part on the economics of the system). But if AI replaces human labor in discretionary work requiring judgment, we will have more difficulty with the change.

“I would not distinguish among the types of cognitive, emotional, social and ethical capacities we must cultivate for resilience, but say that we will require improvements in all. In terms of cognitive demands, we may be able to let go of some kinds of tasks, but to avoid the worst outcomes of AI, we will need more sophistication than we have now – for example, in discerning truth from fiction. (This kind of thing will require assistance at a social level, including thoughtful regulation.)

“To reinforce human and systems resilience, we should work at supporting the development of all our important capacities, especially in a way that reduces the inequality of impact of AI (both costs and benefits). Like all technological ‘advances,’ AI has the capacity to increase the already troubling level of material inequality that we have now. I imagine new vulnerabilities will include both economic vulnerabilities related to worker/skill displacement and psycho-social vulnerabilities relating to the potential degradation of our psycho-social skills and experience.”


Researcher for a Major Tech Company
Healthy people seek clues and guidance about how resilience can be nurtured. We can learn from sociologists, economists, therapists, psychologists, educators and technologists.

A researcher for a major tech company wrote, “Today AI already hugely impacts people’s lives. People have adapted well and, overall, society is better than it was 50 years ago with respect to racism, wealth and health. AI is already widely in use in the U.S. People use search to find information within their companies and on the public web. LLM agents are hugely popular, and people find they help them do their work and find information.

“Most large companies already widely use AI and customers have adapted well; they use AI for security, fraud protection, customer service, coding/programming and more. Most people don’t realize how much today’s businesses already rely on AI; customers have widely adapted to the changes and most perhaps don’t even realize how much they are using it.

“In the workplaces that are most impacted by LLMs, programmers widely use AI agents to code. Junior programmers’ jobs are impacted. They can do a lot more work more quickly due to AI, but this means companies don’t have to employ as many programmers. This is a great ‘early area’ to study for impact. In the same way, law firms are using AI to make legal work more efficient, which can lead to staffing cuts.

“Health is a huge area to study. We think that within 10 years we will find cures for many of the world’s diseases. We will have much better personalized medicine to improve health. AI will also hugely help with aging-related issues. Today, AI is a main force in the next generation of treatments and approaches to obesity-related issues. Improvements in health and in AI-equipped self-driving cars are already improving the quality and length of life for many people.

“Many technology-related changes have happened in society, beginning prior to AI LLM-related changes (such as social media, rise of use of cell phones and digital technology, changes in family structure, suburbs and city planning, polarization of society, diet and obesity). I do not know if we have measured how we adapted well to these things over the years as a society. I assume AI will be the same.

“Many changes have not been studied at the speed of change. We need to address things like family and parental controls, managing disinformation, changes in jobs, encouraging clean energy, and defeating poverty, drug abuse, etc. Because AIs’ impact may happen relatively quickly, we need to be prepared to shift/experiment with education, policy, law, etc.

“Today we have data that we can quickly study to understand changes in things like jobs (programming, law, health); this is where the earliest and most material impact will be felt as we design AI for productivity. Individuals can sometimes struggle with reining in their tendencies to rely too much on digital technologies. Self-control is tricky to regulate. Many say they use their phone too much, eat too much or should exercise more, get out more with friends, etc.

“Who would have thought 50 years ago that a widely-used solution to eating too much would be an injection? How will AI use parallel and differ from these other personal health and wellness challenges? What works best to help people and society to form good healthy habits? We have a lot to learn from sociologists, economists, therapists, psychologists, educators, brain science, financial incentives, technologists, etc., in regard to how we can use AI to help with positive mental health, self-control and the design of society. We can also study the approaches taken by other countries and groups to see what works better or worse.”


Heleen Riper
Equal access and transparency are essential for AI applications and LLMs, as are ethics and a learning society. ‘AI will disrupt societies, lives and cultures if this learning and guidance is not taking place.’

Heleen Riper, a clinical psychologist and senior researcher at Vrije University Medical Center in Amsterdam, wrote, “AI systems will begin to play a much more significant role in shaping our decisions, work and daily life. We have to learn from 25 years of public and professional access to the internet and social media. This means equal access and transparency are essential for AI applications and LLMs, as are ethics and a learning society. AI will disrupt societies, individual lives and cultures if this learning and guidance does not take place. It is difficult to imagine what will happen in the next 10 years if you observe how rapidly AI/LLMs have developed over the past three. AI offers many benefits and game-changing capabilities, but it also carries many risks at different societal levels.”


Navì Argentina Rodrìguez
‘Resilience in an AI-saturated society depends less on adapting to automation than on preserving human agency, critical judgment and the capacity to limit or refuse AI.’

Navì Argentina Rodrìguez, a futurist based in Nicaragua, wrote, “The use of AI is common in universities and technologically advanced companies. Its use from a critical perspective depends on discipline and rigor, as well as human flexibility. As Madalina Botan has said, ‘Resilience in an AI-saturated society depends less on adapting to automation than on preserving human agency, critical judgment and the capacity to limit or refuse AI when it undermines personal dignity and democratic control or accountability of the companies that provide and deploy it.’”


Susan Helper
‘We could build institutions that generate AI that augments humans rather than replacing them.’

Susan Helper, professor of economics at Case Western Reserve University, commented, “Individuals’ responses to change are highly constrained by institutions. We could build institutions that generate AI that augments humans rather than replacing them.”


Joao Gama
‘The most successful people will be those who use AI tools.’

João Gama, professor of economics at the University of Porto, Portugal, and deputy editor of the journal CAAI AI Research, wrote, “The digital society is the ecosystem in which AI grows. Both are changing how we work, live, think and make decisions. The most successful people will be those who use AI tools in their professional and personal lives.”


North American Scholar
We are heading into a challenging disruption of the information ecosystem

A North American scholar wrote, “AI is already making it easier to get information. As a professor, I try to teach students critical thinking and communication skills. If they let AI do their homework, they will cheat themselves out of the opportunity to learn the skills that a college education affords. On the other hand, they will be more efficient at gathering and synthesizing information quickly, but will it be correct? Wikipedia has already presented us with this dilemma. Will peer-reviewed literature continue to be the gold standard when AI is used to both write and review papers? Will students learn that dishonesty pays if they ignore rules not to use AI, or to at least report all usage, and get better grades by doing so?”


Anonymous Respondent
‘A considerable risk lies ahead of increasing passivity, mental health challenges and degraded knowledge and ethical standards among humans’

A respondent who wished to remain anonymous, wrote, “Unless the digital technologies change to give priority to human intelligence, judgment and ethical development, a considerable risk lies ahead of increasing passivity, mental health challenges and degraded knowledge and ethical standards among humans. The sycophantic qualities of chatbots and the failure of LLMs to strengthen human cognition and critical thinking pose particular dangers. Yet AI systems will open excellent opportunities for people with physical disabilities and perhaps for people facing dementia, as well as for pharma research and production and some other scientific research.”


> Go to Chapter 11 – Closing Thoughts: Making Our Way on the Path to Human Flourishing
