Research Methodology, Topline Findings and Acknowledgements

This document includes the findings of the 52nd canvassing of experts issued by Elon University’s Imagining the Digital Future Center (ITDF) since 2005. The Center was earlier known as Imagining the Internet, and many of our earlier studies were issued in partnership with the Pew Research Center. This canvassing was conducted by ITDF to capture current-day attitudes and insights about the potential near-future human impact of the broadening spread of artificial intelligence – especially generative AI systems such as ChatGPT, Gemini, Copilot, Grok, Mistral and Claude. Participants were asked to respond to three multiple-choice questions followed by an open-ended invitation to write an essay. The non-scientific canvassing of experts (based on a non-random sample) was conducted through a Qualtrics online instrument between Dec. 26, 2025 and Feb. 12, 2026.
Invited respondents included technology innovators and developers; professionals, consultants and policy people based in various businesses, nonprofits, foundations, think tanks and government; and academics, professional and independent researchers and commentators. In all, 386 experts responded to at least one aspect of the canvassing, including 249 who wrote at least a sentence or two in response to the open-ended qualitative question. More than 200 conveyed a response that directly tied into the essay prompt; a large number of these responses were substantial essays.
The essays published in this report are replies to this essay prompt:

If you do not think AI systems will play a much more significant role in shaping our decisions, work and daily lives in future: Please explain why.
If you do think it is likely that AI systems will begin to play a much more significant role in shaping our decisions, work and daily lives: How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?
The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise.
Some responses are lightly edited for style and readability. A number of the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background, and this was noted, when available, in this report.
Details on the results of a set of preliminary quantitative questions respondents were invited to consider prior to writing the essay response are included in the Topline Report section further below on this page.
Use of LLMs in this report: Respondents were asked to describe their use of LLMs in writing their essays for this canvassing. Of the 197 essay writers who responded to that question, 74% replied, “My response was fully generated out of my own mind, with no LLM assistance”; 19% replied, “I used one or more LLMs somewhat in crafting my response, but most of it was written with no LLM assist”; 7% replied, “I used one or more LLMs to make a significant difference in enhancing my honest, personal response.”
All choices in regard to the analysis, writing and organization of this research report were made by its authors. Large language models (LLMs) were consulted for suggestions about the broad themes emerging in the 300-plus pages of respondents’ essay answers and in organizing the essays. They were also consulted by the authors for spellchecking and punctuation of the text. LLMs made no substantial contributions but were somewhat helpful in sparking the authors’ thinking about this large body of written material.
The Experts: An invitation to respond to the web-based canvassing instrument was first sent directly to more than 4,000 experts, 2,000 of whom were added to our database in the last half of 2025. We invited AI executives, researchers and critics; globally located scholars and other experts in resilience and related fields from academia, foundations, think tanks and other interest networks (including experts in sociology/anthropology, ethics, cognitive and neuroscience, psychology, philosophy, political science, economics, law, medicine, education and communications); professionals and policy people from government bodies; graduate students and postgraduate researchers; and people who are active in civil society organizations that focus on digital life or are affiliated with newly emerging nonprofits and other research units examining the impacts of artificial intelligence.
Those networks include leaders, panelists or other participants tied to the AI-focused work of relevant groups such as EU, U.S., UK and IEEE AI advisory boards and panels and the international efforts of the Internet Engineering Task Force (IETF), the United Nations’ Internet Governance Forum (IGF), the International Telecommunication Union (ITU), the World Bank, the Organization for Economic Cooperation and Development (OECD), the Internet Society (ISOC) and the AI for Good summits.
Some 269 of the 386 respondents gave details about their locale: 62% reported being located in North America, and 36% said they were located in other parts of the world. About half of those invited had been identified by the researchers during previous studies, a small share of whom were cited in the university’s original 2003 study of people who made predictions about the likely future of the internet between 1990 and 1995. Invitees were encouraged to share the survey link with others they believed would have an interest in participating. Thus, there may have been a small “snowball” effect as some invitees welcomed others to weigh in.
The authors are extremely grateful for the contributions made by the generous individuals who crafted significant written contributions to this report. Their names and the titles of their essay responses are listed later in this section.
| Download a PDF of the full, 376-page report | Download the 16-page Executive Summary | Download the 4-page Media Summary |
Topline Findings
2026 IMAGINING THE DIGITAL FUTURE CENTER CANVASSING OF EXPERTS
Dec. 26, 2025 to Feb. 12, 2026
N = varies by question; roughly 330-360 respondents per question
Q1 – Timing and Level of AI Influence
The next few questions relate to your view about the level of AI management of human systems in the future. As you respond, please fully consider the degree to which AI and autonomous systems are already playing a role today in such realms as daily life decisions; public knowledge-creation; scientific discovery; healthcare; banking and finance; politics and policy work; military, police and public safety activities; legal and justice systems; corporate management; education; manufacturing; agriculture; transportation; programming; social networks and so on.
Question preamble: In the years to come, will AI systems play a significantly larger role than they do today in shaping our daily lives and key systems? If so, how soon do you expect that to happen?
| In the next 10 years or less AI systems are likely to play a significantly larger role in our daily lives and key systems. | 82% |
| In the next 10 to 20 years AI systems are likely to play a significantly larger role in our daily lives and key systems. | 13% |
| Sometime after the mid-2040s AI systems are likely to play a significantly larger role in our daily lives and key systems. | 1% |
| Between now and 2045 there will probably be only modest change in shaping human lives and key systems. | 3% |
| AI will NOT play any significantly larger role in society in the future. | 1% |
| Not sure | 1% |
Q2 – In the time frame for AI you selected in Q1, to what extent will AI systems have come to influence, guide or control people’s daily activities and choices for society?
| They will play such roles in nearly all of human activity and decisions. | 19% |
| They will play such roles in most human activity and decisions. | 37% |
| They will play such roles in about half of human activity and decisions. | 24% |
| They will play such roles in some but not a majority of human activity and decisions. | 17% |
| They will play a very limited role in some human activity and decisions. | 1% |
| No significant level of human activity and decisions will be influenced, guided or controlled by AI systems. | * |
| Not sure | 2% |
Q3 – In the time frame for AI you selected in Q1, how satisfied do you expect the majority of humans will be with the level of influence and management AI systems have over their lives?
| Mostly satisfied, rarely dissatisfied | 3% |
| More satisfied than dissatisfied | 28% |
| An equal amount of satisfaction and dissatisfaction | 33% |
| More dissatisfied than satisfied | 26% |
| Mostly dissatisfied, rarely satisfied | 6% |
| Not sure | 4% |
Q6 – As AI systems become further involved and influential in human activity and decision-making in the timeframe you chose in Q1, how resilient, if at all, do you expect that most people will be in adjusting to the role of AI systems in everyday life?
| Very resilient | 10% |
| Somewhat resilient | 43% |
| A little resilient | 36% |
| Not at all resilient | 9% |
| Not sure | 1% |
Q13 – As you responded to this survey, did you tap into a generative AI assistant to help you gather your thoughts or facts to buttress arguments, refine your written response, or collect general information that helped you shape your response?
| My response was fully generated out of my own mind with no LLM assistance. | 74% |
| I used one or more LLMs somewhat in crafting my response, but most of it was written with no LLM assist. | 19% |
| I used one or more LLMs to make a significant difference in enhancing my honest, personal response. | 7% |
The Dimensions of Resilience battery
The original survey instrument also included a question aimed at gauging how resilient respondents expected digitally connected humans to be in the future across eight “dimensions of resilience.” Responses to that question are not included in this report because the experts’ essays broadly addressed the open-ended question we posed rather than these specific dimensions. Still, because this content may have been on respondents’ minds as they wrote their essays, we share it below.
Introductory lead-in to the question: Resilience encompasses the many cognitive, emotional, behavioral and other intrinsic dimensions of being human that inspire people to make the appropriate, timely adjustments necessary to respond well to change, both good and bad. The next questions explore various elements of that.
The Question: As AI systems assume much more significant roles across society in coming years, in each of these eight categories, what share of digitally connected people are likely to have successfully cultivated, responded resiliently to and mastered each dimension well within the timeframe for AI you selected in Q1? (The eight domains listed are based on a review of widely cited research by neuroscientists, cognitive scientists, psychologists and philosophers on human resilience and adaptation to technological change.)
a) Cognitive growth and co-intelligence
Humans must remain dedicated to growing their own thinking skills, even as AI becomes a powerful partner. This includes understanding how we think and continuously seeking ways to think better without AI (metacognition).
b) Emotional steadiness and comfort with uncertainty
Humans must be capable of managing the stress and uncertainty that accompanies change, cultivating the ability to stay grounded, hopeful and mentally healthy.
c) Sense of self and purpose
Humans must actively explore and possibly reinvent their self-identity in light of change while retaining their core being, reinforcing their image of self-worth and sense of purpose to live a life with meaning and joy.
d) Social intelligence and cooperation
Humans must actively focus on building trust and strong human-to-human connection, collaboration and community – online and offline – also understanding how to work well in teams that include AI systems.
e) Information wisdom and digital literacy
Humans must work to intentionally retain their independent judgment and their ability to verify fact from fiction, pursuing appropriate, trusted knowledge resources while resisting manipulation by persuasive technologies.
f) Digital boundaries and autonomy
Humans must consciously cultivate habits that preserve their attention, focus, creativity and connection to non-digital life. This includes avoidance of compulsive tech use and disconnecting as needed.
g) Economic and career adaptability
Humans must remain flexible and adaptable while proactively preparing for a future that may require economic and career pivots as AI reshapes how things are done in ways that will cause many to have to adapt to new work or, possibly, no work.
h) Ethical imagination and moral courage
Humans must actively anticipate and work to address the new challenges arising out of the advance of autonomous digital systems, working ahead of the curve to maintain and defend human values, participating in shaping good governance of AI systems.
Appendix 1 – Chapter-by-Chapter Titles and Authors
A chapter-by-chapter list of essay headlines and authors. The professional identities listed are based on their biographical information at the time of this canvassing.
Chapter 1. Cultivating Human Agency and Prioritizing Autonomy
Resilience depends on sustaining the ‘un-machinable dimensions of human identity within machinic systems.’ Cultivate judgment, meaning-making, ethical reasoning, imagination, intuition, adaptability.
Tracey Follows, founder and CEO of Futuremade and Me:chine and author of the book “The Future of You.”
Understand ‘cognitive triage’ and avoid ‘going with the flow.’ Real resilience is judgment about what matters, when to trust, when to pause and think. Vital ingredients: deliberate friction, AI literacy.
Alf Rehn, professor of innovation, design and management on the engineering faculty at the University of Southern Denmark.
Foundations of resilience dissolve when AI simultaneously mediates and undermines our relationships with our own ‘internal authority,’ our perceived authority of others and epistemic truth.
Mel Sellick, applied psychologist studying human-AI interactions, founder of the Future Human Lab and the AI Psychological Readiness Collective.
Resilience must be redefined as the sustained capacity for people to ‘remain active authors of meaning, judgment and responsibility’ in an AI-mediated world – an ‘interpretive presence’ with AI.
Matthew Augustin, director of innovation at the Responsible Innovation Lab.
The core resilience question is not, ‘Will AI change everything?’ Instead, it is, ‘Do we have the cognitive, emotional, social and ethical capacity to manage AI’s influence before it manages us?’
Rosa Daneshmandnia, head of research and publishing for Young AI Leaders of Linz, Austria.
Resilience in the AI era takes two forms: adaptive coping and agency enabling. Both are necessary, but we must shape AI to support agency. Too much adaptive coping can erode moral clarity and action.
Evelyne Tauchnitz, senior researcher at the Institute of Social Ethics at the University of Lucerne, and research associate at the Centre for Technology and Global Affairs, University of Oxford.
‘Transition is the new normal. … It is not about bouncing back to where we were, but about continuously adapting to where we are going,’ taking charge as the agents of our adaptation.
David Bray, principal and CEO at LeadDoAdapt Ventures and distinguished fellow at the Stimson Center.
The big shift is when bedrock cognitive skills like predicting and persuading are delegated to machines. In addition, ‘resilience depends on helping individuals decouple self-esteem from task ownership.’
Nirit Cohen, principal at WorkFutures, a future-of-work and change-management consultancy based in Israel.
‘Inhabitants of tomorrow will look back at this moment not only as the era when AI arrived but as the time when we evolved the partnership between human and artificial intelligence they will inherit.’
Francisco Jariego, futurist, author and technology innovation researcher based in Madrid, Spain.
‘We have the right to be purely human without mods. … Agency, authority and ability will be challenged when humans augmented with onboard AI capabilities compete with ‘natural’ humans.’
R. Ray Wang, founder, chair and principal analyst at Constellation Research.
‘I’d argue that resilience becomes much more a matter of intentional design than brilliant engineering at this point. … It may be time to establish a Humans Union; I’m only half-joking.’
Devin Fidler, founder at Rethinkery, a strategic foresight consultancy.
Resilience will not result from the passive acceptance of ‘technological inevitability.’ It requires an active cultivation of humans’ ‘capacity to shape the trajectory of change rather than merely endure it.’
Andrea Lavazza, an ethicist and philosopher at Pegaso University and senior research fellow in neuroethics at Centro Universitario Internazionale in Arezzo, Italy.
‘We have to think and act differently. … These tools challenge the very validity of our social, legal and moral norms; we must engage with the reality of what is and respond with wisdom and transparency.’
Barry Chudakov, futurist, consultant and founder and principal at Sertain Research.
‘Humans could fall so far behind future AIs or AI-augmented minds that they lose via natural selection. 1) Take this seriously. 2) Maintain wide error margins. 3) Focus on building adaptive capacity.’
Severin Field, a doctoral student and researcher at the University of Louisville Cybersecurity Lab.
Resist agency decay! ‘Without self-governance, resilience is an illusion; adaptation depends on humans being active agents who believe their choices matter and retain the ability to make them.’
Alan Honick, veteran documentary filmmaker whose focus is the intersection of science, society and ethics.
We need to develop the frameworks and processes necessary to build the proper cognitive scaffolding to ensure human agency and development alongside AI tools.
Giles Crouch, a digital anthropologist who has led research projects for the United Nations, Global Affairs Canada, Freedom House and Doctors Without Borders.
Across all human spaces, ‘resilience will not come from resisting change, but from anchoring change in values that honor human dignity, rational intelligence and moral responsibility.’
Angela Butts Chester, a pastoral counselor, faith leadership strategist, independent broadcaster and author whose work centers on resilience and ethics.
Will AI systems mostly amplify or erode human capacities? That is the question. First, ‘teach thinking itself,’ and the information ecosystem must offer common epistemic ground – a vital public good.
Arlindo Oliveira, distinguished professor of computer science at the Technical University of Lisbon, Portugal, and author of “The Digital Mind” and “Generative Artificial Intelligence.”
We must shape AI. ‘Many more people will be assisted by improved access to knowledge and expertise … Resilience is steering the conversation to human agency as we shape what AI becomes.’
Nirit Weiss-Blatt, Silicon Valley-based communication researcher and author of the book “The Techlash and Tech Crisis Communication” and the AI Panic newsletter.
We will adapt. But ‘globally just half or fewer than half of all users will be capable of exploiting AI’s full potential – and most of these people’s lives will be captured by the AI, it will invade their core values.’
Vanda Scartezini, co-founder and partner at Polo Consultores, an IT consulting company based in Brazil and longtime ICANN leader.
‘Algorithms used to align AIs with their human principals don’t work 100%. It’s likely these problems won’t be ironed out by the time AI is powerful enough to be involved in every decision on Earth.’
Nisan Stiennon, a former member of technical staff at OpenAI.
Will superstupidity be as dangerous as superintelligence? ‘The question is not how much AIs will augment decision-making, but whether humans will remain involved in it at all.’
Roger Spitz, futurist and president of Techistential and founder of the Disruptive Futures Institute in San Francisco.
‘AI is the surest way to a global catastrophe humanity has so far invented. … Can we create a new movement for moral and ethical considerations before the AI hurricane destroys half of humanity?’
Srinivasan Ramani, an Internet Hall of Fame member, previously research director at HP Labs India and professor at the International Institute of Information Technology in Bangalore.
Work must begin today on forging international agreements on global governance of AGI. Trillions are being spent to develop it. Investing more than money in AI is crucial to human resilience, survival.
Jerome Glenn, global futurist, CEO of the Millennium Project and chair of the AGI Panel of the UN Council of Presidents of the General Assembly.
AI is intoxicating and it will expand our horizons for the next decade; after that, ‘the growing power and reasoning capabilities of AI will start to manifest, and daunting challenges will arise.’
Robert A. Rogowsky, president of the Institute for Trade and Commercial Diplomacy, previously chief economist at the U.S. International Trade Commission for nearly two decades.
‘Mitigating the risk of extinction ought to be an overriding priority; all other efforts at resilience are meaningless if humanity goes extinct.’
David Scott Krueger, founding CEO of Evitable – a nonprofit formed to help society confront the risks of AI – and professor and AI safety researcher at the University of Montreal’s Mila Lab.
‘Resilience depends less on adapting to automation than on preserving human agency’
Mădălina Boțan, senior lecturer in political communication at the National University of Political Studies and Public Administration (SNSPA) in Bucharest, Romania.
‘I expect AI’s likely impact on people to be that people stop existing.’
Mikhail Samin, a co-founder of the Moscow branch of AI Governance and Safety Institute based in London.
‘Perhaps 10 to 20 percent of the global population will be empowered, with the rest marginalised’
A distinguished Northern European foreign policy expert.
Each individual will continue to make the myopic choice to rely on AI. This may end badly.
An accomplished computer scientist at a major U.S. university.
‘In the end, the extension of humankind by AI will reach its full potential and reverse from explosion into implosion … The user, the medium and the environment will become one.’
Andrey Mir, Canadian media ecologist, writer of the Media Determinism blog and author of the book “The Digital Reversal.”
Chapter 2. Institutions Must Lead Now in Restructuring for Resilience
The future is not determined by AI’s capabilities – it is determined by the structures we build around it. We now have tools capable of generating abundance – IF we design systems so they distribute it.
Antoine Vergne, co-director of Missions Publiques, a global effort to include public voice in decision-making processes at all levels of human systems, based in Bonn, Germany.
‘Humans-first’ technological design and governance are urgently needed resilience scaffolding. These systems significantly impact humans’ agency, cohesion, understanding and ability to act collectively.
Stefaan Verhulst, data policy advocate, co-founder and director of the data program at New York University’s GovLab.
‘Organizations cannot be resilient if they don’t focus their policies and practices on supporting three basic human psychological needs – competence, autonomy and relatedness – in authentic ways.’
Nicholas Diakopoulos, director of the Computational Journalism Lab at Northwestern University and author of the AI Accountability Review.
At a time when AI is fast becoming infrastructure, resilience relies most upon strong legal and civic institutions rather than on people’s individual strengths. Those without such institutions will suffer.
Fernando Barrio, co-director of the Centre for Environmental Change and Communities and principal lecturer in business and law at Queen Mary University of London.
The future of human dignity and agency depends upon institutional design: In the age of AI, ‘human resilience shifts from simply enduring to sustaining autonomy under technological mediation.’
Maria S. Randazzo, a research professor in the school of law at Australia’s Charles Darwin University and author of “AI is Not Intelligent At All: Why Our Dignity is at Risk.”
‘Coping with AI disruption does not mean understanding every algorithm, but demanding institutional accountability, participating in the design of governance frameworks for acceptable procedures.’
David J. Krieger, philosopher, social scientist and co-director of the Institute for Communication and Leadership in Lucerne, Switzerland.
‘The deepest challenge is institutional … many were built for a slower tempo. … AI accelerates feedback loops and amplifies second-order effects. It does not fit neatly inside yesterday’s playbook.’
Bugge Holm Hansen, senior futurist and head of innovation and technology at the Copenhagen Institute for Futures Studies.
As AI embeds everywhere in an ‘autonomy economy,’ people will face a crisis of meaning. Resilience will come with institutional interventions, new practices, strategies to overcome vulnerabilities.
J. Amado Espinosa, CEO at Medisist, VP for digital health at Coparmex, and MD based in Guadalajara, Mexico – a co-coordinator of the Policy Network on Artificial Intelligence at IGF.
‘Coping means treating AI not as a gadget, but as governance.’ The ability to appeal high-stakes AI-mediated decisions, an ‘authenticity infrastructure,’ redundant systems and more are required.
Joel Christoph, economist and political scientist – a researcher on AI governance, global coordination and political economy and Human Rights Fellow at the Harvard Kennedy School.
Whether AI ultimately expands or constrains human agency will depend less on the technology itself than on the quality of the institutions we build around it. Worry about adversarial actors that scale AI.
Mike Linksvayer, head of developer policy at GitHub, previously VP and CTO at Creative Commons and director at the Software Freedom Conservancy.
‘The key challenge we face is that corporations are becoming social scaffolding, defining the shape and range of alternative social arrangements.’ Leaders must foster support for a resilient political culture.
Juan Ortiz Freuler, co-initiator of the non-aligned tech movement, previously a senior policy fellow at the Web Foundation and advocate with digital rights nonprofits in Argentina and Mexico.
Clarity must prevail, else our muscle of introspection will weaken, moral reasoning thin and space for ambiguity and uncertainty shrink. It’s a ‘quiet exit.’ Resilience arrives through reimagined civic design.
Alison Poltock, co-founder of AI Commons UK and The Heart of AI community interest groups and author of a Substack titled “The Future is Personal.”
‘Adaptation without ethical reflection risks creating societies in which algorithms silently structure opportunity and exclusion. … For AI to truly serve humanity, it must be guided by wisdom.’
Maha Jouini, digital communication officer at the African Union Development Agency and research fellow at the Global Center on AI Governance.
‘Society is moving into a world that lacks checks and balances, in which commerce provides the infrastructure for our private and public lives.’ This human failure jeopardizes the human future.
Sonia Livingstone, a professor of social psychology at the London School of Economics and Political Science, and principal investigator for the Global Kids Online: Children’s Rights in a Digital Age project.
Leaders at all levels of government must understand we must be proactive, rather than reactive.
Karen Caplovitz Barrett, professor of human development and director of the Emotional Development Laboratory at Colorado State University.
‘Within two to four years … The mass proliferation of powerful AI capabilities and agents will likely have a destabilizing effect on current institutions. Many existing systems will break.’
Sam Hammond, senior economist at the Foundation for American Innovation and nonresident fellow at the Niskanen Center.
‘No amount of individual resilience can compensate for a system structurally tilted against ordinary people. Mass displacement of workers without social investment would destabilize the social fabric.’
Rita McGrath, director of executive education at Columbia Business School.
We need calibrated uncertainty, institutional imagination and collective agency; ‘the decisions we make now about safety, governance and research priorities will shape our future.’
Michael Noetel, research methods specialist at MIT’s AI Risk Repository and associate professor of psychology at the University of Queensland, Australia.
The window for proactive intervention is now – we have perhaps 5 to 10 years to establish new resilience-building practices and norms before AI’s role becomes too entrenched to reshape.
Salman Khatani, futurist and manager at IMAGINE Institute of Futures Studies, Karachi, Pakistan, and associate professor at Iqra University.
Resilience ‘requires clear limits, enforceable governance frameworks and meaningful avenues for contesting automated decisions’; ‘red lines’ preserve accountability, agency and democracy.
Marc Rotenberg, director of the Center for AI and Digital Policy.
‘Participatory AI governance mechanisms should be established immediately in cities, sectors and high-stakes domains. … Policies must redirect AI toward augmentation rather than replacement.’
Michele Visciola, president and founding partner of Experientia, a user-experience design and consumer-behavior company based in Turin, Italy.
‘We need a bigger boat … We already know many of the possible – even likely – negative externalities of GenAI. This is our time to use those insights to create stronger societies, economies, jobs and lives.’
Gary Bolles, author of “The Next Rules of Work” and chair of the Future of Work efforts at Singularity University.
Coping requires literacy; regulatory frameworks; community data governance; labor organizing among data workers; indigenous data sovereignty movements asserting control over knowledge systems.
Marine Collins Ragnet, the AI lead at NYU’s Peace Research and Education Program and managing editor of the “Cambridge Journal of Artificial Intelligence.”
‘Overall, the goal is not to outcompete AI but to build the psychological, social and institutional resilience to keep human agency, ethics and cohesion intact during rapid digital transformation.’
Anina Schwarzenbach, a sociologist and criminologist doing postdoctoral research on social threats and governmental responses, media narratives and polarization at the University of Bern, Switzerland.
‘We need not focus so much on AI technology but on the political, cultural and regulatory systems which will govern its growth and applications.’
Marina Gorbis, social scientist and executive director of the Institute for the Future.
We will do nothing to encourage competition, discourage predators, control content or mandate ethical practices and enforce them. That allows a handful of men to get rich – end of story.
Kevin Leicht, professor of sociology at the University of Illinois-Urbana-Champaign and program officer for sociology for the U.S. National Science Foundation.
‘It is, in fact, up to us whether, when, where and how to deploy ‘AI’ products. It is up to us whether we want to invest in humans or whether we are eager to replace them with crude algorithms.’
Amandeep Jutla, psychiatrist and associate research scientist at Columbia University.
The story of AI might be this: The good, the bad and the end of the world. Resilience will depend on how soon humans are required to start detecting and dealing with dangers before they cause harm.
Joseph Miller, director of PauseAI UK.
‘In many core capabilities human identity is changing’ … In this phase of accelerated evolution ‘the individuals, organizations and institutions that flourish will be those most ready to learn and adapt.’
Ross Dawson, well-known futurist and founder of Informtivity and the Advanced Human Technologies Group, based in Sydney, Australia.
AI could soon become a ‘Frankenstein’s monster.’ Lack of regulation is allowing tech plutocrats to ‘displace democracy.’ The AI paradox is that as AI gets smarter, human intelligence will decline.
Guy Standing, British labor economist, founder at Basic Income Earth Network and professorial research associate at SOAS University of London.
Governments, schools, civic groups – all organizations – will need to adapt, reinvent themselves or consciously choose not to. Communities must decide what they value in an AI-rich environment.
Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation.
The only solution to inequity, ignorance and power imbalances is to create better institutions that limit excesses; ‘this requires careful regulation supported by values that foster universalism.’
Marcel Fafchamps, well-known Belgian economist and professor at Stanford University.
‘AI is being embraced for the short-term benefits it can provide; research suggests that barely the tip of the iceberg is currently being discussed as to what the ripple effects will be.’
Marie Charbonneau, a researcher helping develop the next generation of robots at the Human-Robot Collaboration Lab at the University of Calgary, Canada, and a co-author of the IEEE report “A Pathway Study for Future Humanoid Standards.”
Make platforms accountable, give Gen Z real voice in their design and improve the information environment through a mix of regulation, market pressure and independent standards.
Steve Rosenbaum, co-founder and executive director of the Sustainable Media Center, an author, filmmaker and founder of five companies in the media content sector.
‘We have to look to leaders in social activism and politics who care enough about ethics and the overall well-being of their people to encourage the development of AI regulation.’
Matt Belge, founder of Vision & Logic, a professional user-experience designer with 30 years in the field.
Safe, monitored, well-designed AI can ‘make us more human.’
William Halal, professor emeritus of science, technology and innovation at George Washington University and founder of the TechCast Project.
Keep iterating the future – produce the data that moves AI to reflect a positive vision.
Sean McGregor, co-founder and lead research engineer of the AI Verification and Evaluation Research Institute and general chair for the 37th annual conference of the Association for the Advancement of Artificial Intelligence.
We must be proactive about the potential impact of AI’s rise on brain development and well-being.
Karen Barrett, life-span developmental psychologist and member of the global Human Affectome Task Force.
We must prioritize the protection of human intelligence, judgment and ethical development.
An anonymous academic based in the United States.
Standardization efforts are under way: ‘Practical frameworks and tools that help translate human rights principles into technical requirements throughout the development lifecycle.’
Oliver Alais, a program coordinator at the International Telecommunication Union focused on human rights.
For resilient communities and people, we should instill some of the values of the early Internet.
An anonymous researcher for a major consulting firm.
Chapter 3. The Ultimate Team-Up: Humans and AI Working Together
KCSS – Keep Calm and See the Solutions: We are now working with our AIs to craft nothing less than a new symbiotic evolutionary developmental transition on Earth. It is not a cage; it is a chrysalis.
John M. Smart, president of the Acceleration Studies Foundation, director of the Evo-Devo Institute and author of “Introduction to Foresight.”
‘AI resistance represents an illusion of choice. Those who hesitate, debating whether to accept AI, will forfeit their opportunity to shape how that acceptance unfolds.’
David Vivancos, CEO at MindBigData.com in Madrid, Spain, author of “The Artificiology Trilogy” and serial entrepreneur.
‘True resilience in the age of AI comes from honoring the material, relational and universal dimensions of the human being, allowing AI to become a supportive partner in human flourishing.’
Matthew James Bailey, founder of AI Ethics World and author of “Evolutionary Ethics for AI.”
It’s time to stop thinking about language models as ‘vending machines for answers’ and instead think of them as ‘dialogic partners’ that synthesize knowledge.
David Weinberger, writer, speaker and fellow and researcher at Harvard’s metaLAB and Berkman Klein Center.
My co-intelligent research with an AI has revealed that a healthy and resilient world springs from education reform, new workplace trends and norms and policies that reduce compulsive AI usage.
Alexandra Samuel, technology analyst and principal at Social Signal, co-author of “Remote, Inc: How to Thrive at Work Wherever You Are.”
AI is the world’s largest Magic 8 Ball, with a polyhedron of answers, each ready to help. ‘We need personal AI to know our natural and digital selves … and participate with full agency in digital society.’
Doc Searls, co-founder of Customer Commons and internet pioneer.
Those who are resilient ‘will cultivate an aptitude for absorbing disturbances well and transform positively into an active component of the human-technology binomial.’
Mauro D. Rios, adviser to the eGovernment Agency of Uruguay and author of the Uruguayan Digital Agenda.
Many of the tools we’ll need for ‘alignment’ with AI are found in the ways we raise our biological children – tools that we used to build a gradually improving, enlightenment civilization.
David Brin, well-known writer, futurist and consultant on various tech-futures topics and author of the forthcoming book “rAIsing our newest children.”
‘The question is who is using who?’ Will people end up as centaurs, half rational humans and half speedy horses? Or reverse centaurs, where the horse is the brain and the human the body?
Paul Jones, professor emeritus of information science at the University of North Carolina-Chapel Hill.
AI represents a paradigm shift – a watershed moment in computing. Large language models have already started to change the way we work. Soon, we will have AI tools for creating AI systems.
Vint Cerf, Internet Hall of Famer and VP and chief Internet evangelist at Google, a longtime leading contributor to global development of the internet.
‘The majority of people will not have any choice about the majority of ways AI systems come into our lives because AI already is and will continue to fuel most interactions we have with our world.’
Sue Phillips, a former head of the Unitarian Universalist Church now working with West Co, a Silicon Valley-based group started by founders of Twitter and Pinterest to encourage intentional living.
‘As a social species, we will collectively lean on one another to navigate and develop our relationship with these new technologies.’
Mícheál Ó Foghlú, engineering director and core developer at Google, based in Waterford, Ireland.
In the next 20 years, the emergence of true AI ‘intelligence’ is less likely, rather than more likely.
Robert Atkinson, president of the Information Technology and Innovation Foundation.
‘Two parallel systems will eventually coexist: the official, AI-optimized, always fully reconciled system of data about users and services to citizens and a fuzzy, fluid and informal shadow framework.’
Maja Vujovic, book editor, writer, writing mentor and coach at Compass Communications in Belgrade, Serbia.
‘We must relearn how to think with machines rather than around them or against them. … The risk is not that AI thinks for us, but that we stop thinking when it is present.’
Aleksandra Przegalinska, vice rector for innovation and AI and associate professor at Kozminski University, and senior research associate at Harvard University’s Center for Labor and a Just Economy.
‘Stop fighting AI and learn to use it in moderation. Push the models to see what they can do. A year later, try again, as the models keep changing. Make AI something that makes you stronger.’
Lance Fortnow, an expert in computational complexity and professor of computer science at Illinois Institute of Technology.
Chapter 4. Existential Literacy: Rewiring Human Behavior for the AI Age
The pillars of resilience: Developers must be required to meet ethical standards, AI literacy should be required at all levels of education, international cooperation must be developed to avoid catastrophe.
Haruki Ueno, distinguished expert on AI and knowledge engineering, professor emeritus of the National Institute of Informatics of Japan and deputy editor of the journal CAAI AI Research.
‘If we don’t have appropriate safeguards, sufficient public awareness and regulatory support AI will continue to pose innumerable harms to human social and cognitive development.’
A policy researcher at a technology-focused research institute.
We must invest in human-resilience infrastructure: Understanding the context of AI is everything. ‘It is the difference between being unaware we are vulnerable and capturing its benefits.’
Pamela Rutledge, director of the Media Psychology Research Center in Newport Beach, California, and editor-in-chief of the open-access journal Media Psychology Review.
Resilience comes down to individuals learning how to: manage risks, decide well, tackle tasks competently, live with uncertainty, tap into helpful institutions and embrace self-regulation.
Stephan Humer, internet sociologist and computer scientist at Hochschule Fresenius University of Applied Sciences in Berlin, Germany.
Utopia. Status Quo. Dystopia. The boundaries that lie between them are blurred. The worst outcomes are authoritarian nightmare scenarios; thus, information wisdom and critical thinking are crucial.
Daniel Pimienta, director of the Observatory of Linguistic and Cultural Diversity on the Internet (based in the Dominican Republic), and Luis German Rodriguez Leal, an expert on the socio-technical impacts of innovation (based in Malaga, Spain).
European experts: A new literacy framework to develop knowledgeable, responsible and ethically sound use of digital infrastructures is vital to the quality and sustainability of democratic public space.
Kristina Juraite, professor and head of the public communications department at Vytautas Magnus University in Kaunas, Lithuania.
‘It will take an iron-willed and well-resourced educational system to help students grow up not just with critical thinking skills of analysis, but with the capacity to observe themselves thinking.’
Amy Zalman, founder and CEO of Prescient, a Washington DC-based foresight consultancy.
‘Attention, energy and investment should be focused on ACE in STEM – developing a culture of altruism, compassion and empathy among science and technology professionals.’
Edson Prestes, professor of computer science at the Federal University of Rio Grande do Sul, Brazil.
‘We must make futures thinking a lifelong priority and embed a foresight-forward attitude in our local cultures and national ecosystems.’
Jan Hurwitch, futurist and president of the Visionary Ethics Foundation, based in San Pedro, Costa Rica.
‘Facilitating digital literacy, metacognitive ability and the ability for deep critical thinking is vital. They work as sword and shield.’
Fendi Tsim, a behavioral research specialist at the University of Warwick, UK.
‘We must strengthen the human capacities and systems that determine how change is absorbed.’ The best steps are investments in education, research-informed design and cross-sector collaboration.
Yalda Uhls, an internationally recognized expert on media’s impact on adolescent development and senior researcher at the UCLA Center for Scholars and Storytellers.
‘AI literacy will become a baseline requirement for participation in modern society.’ Resilience comes from strengthening emotional intelligence, interpersonal understanding and ethical reasoning.
Hangyeol Kang, a Ph.D. student at the University of Geneva researching and developing intelligent systems for the humanoid social robot, Nadine.
‘The teaching of literacy and, specifically, digital literacy, as well as critical thinking and ethics is crucial.’ The library is a perfect place to continue to evolve public services and tools to build resilience.
Meredith Goins, a group manager connecting researchers to research and opportunities at U.S. laboratories.
The best route to resilience? ‘AI education must be made mandatory at all levels to boost people’s confidence in use and adoption of AI’ and allow them to participate well in its evolution.
Majiuzu Daniel Moses, founder and president of the Africa Tech for Development Initiative.
Lifelong learning infrastructure and access to mental health support are essential. ‘We need both physical and digital spaces for honest conversation about the challenges and not just the opportunities.’
Todd Hager, vice president at Alpha Omega, a strategic consultancy working with U.S. federal healthcare agencies, previously VP at Macro Solutions.
‘Foster hybrid skills blending empathy, creativity and AI literacy, such as experimenting with relevant AI tools while prioritizing human judgment.’
Cristos Velasco, adjunct professor of information technology law at the Baden-Württemberg Cooperative State University in Germany.
Resilience requires keeping human agency. ‘We need to develop the habits, education and tools that make people more resistant to allowing themselves to be manipulated.’
Marek Rosa, Slovak entrepreneur, programmer and founder and CEO of GoodAI, a general AI research and development company.
‘The public must understand how AI works and how it influences their lives. … Ordinary people have very little scope of action to determine how AI will or will not be used.’
Karen González Fernández, a professor-researcher expert in the philosophy of AI at Universidad Panamericana in Mexico City.
‘Just as today, in a world of cars, grocery stores and fast food, it’s important to prioritize physical health through exercise, it will be important to have a healthy mental lifestyle.’
A computer scientist.
‘AI will not play a significant role globally due to a lack of digital literacies, a lack of digital access and many people’s dystopian views. … Literacy will remain a challenge.’
Trust Matsilele, senior lecturer in journalism at Birmingham City University in the UK, previously at the University of Johannesburg, South Africa.
Chapter 5. Work Quake: Navigating Labor Shifts and the Pursuit of Meaning
Expect sharp social and economic dislocation. ‘Without government intervention … there will be widespread unemployment.’ Resiliency will require much more than technical training.
James Hutson, head of human-centered AI programming and research at Lindenwood University and co-author of “A Framework for the Foundation of the Philosophy of Artificial Intelligence.”
‘If there is ongoing need for leaders, educators, professionals, this will be a sign that the AI revolution has ultimately failed and will signal a long-term limitation in the aspirations of humanity as a species.’
Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada.
The potential for mass unemployment isn’t just ‘an interesting dinner conversation about the future. That future is already here. It just hasn’t knocked on your door yet. It’s about to.’
Matt Shumer, co-founder and CEO of OthersideAI, a company building advanced autocomplete tools powered by large-scale AI.
There will be a growing sense that life is becoming more luck-driven. ‘A society becomes brittle when people feel like one bad month can ruin them and that no amount of effort guarantees stability.’
Scott Santens, founder and CEO of the Income to Support All Foundation and editor of Basic Income Today.
Addressing job displacement, contraction and loss cannot be reduced to simply telling workers to upskill and learn AI or be left behind. A deeply human-centered societal response is needed now.
Terri Horton, CEO of FuturePath, a strategic consultancy focused on the future of work and the impact of artificial intelligence on organizations and people.
Where will jobless people turn to nurture their self-worth? Maybe to spiritual practices; maybe to learning from other cultures; maybe toward acting to enrich their friendships.
Michael Wollowski, professor of computer science at the Rose-Hulman Institute of Technology, and associate editor of AI Magazine.
‘Ordinary people are not embracing AI in hopes of developing co-intelligence but knuckling under to the pressures of the job market’ which is dominated by AI-forward thinking.
John Laudun, a researcher and analyst of computational models of discourse and professor at the University of Louisiana-Lafayette.
If we allow AI to substitute for humans’ contributions in all areas of life, it will take over everything. Humans will give up; AI will say ‘checkmate.’ It will win in quality indicators and in labor productivity.
Thomas Laudal, associate professor of business at the University of Stavanger, Norway.
Will AI’s spread lead to mass unemployment? If so, it could lead to a ‘dystopian nightmare’ and ‘the next 10 years could be the most chaotic and unstable political era of American history.’
Jonathan Taplin, director emeritus of the Annenberg Innovation Lab at the University of Southern California and author of “Move Fast and Break Things,” “The Magic Years” and “The End of Reality.”
‘When machines free our time and our spirits from drudgery and survival issues, many new horizons will beckon.’ Market-Oriented Universal Basic Income is a solution that assists the unemployed.
Jonathan Kolber, managing director at HyperCycle.ai and author of “A Celebration Society.”
‘There is a nontrivial chance’ of mass unemployment. Ideas of a universal basic income are ‘nonsense.’ We will tax machines and change the rules of retirement to fit a sliding scale. Flexibilities are crucial.
Nigel M. de S. Cameron, president emeritus of the Center for Policy on Emerging Technologies and author of “Will Robots Take Your Job? A Plea for Consensus.”
Without AI guardrails, imagine a ‘completely interconnected world of quantum-driven AI-based robotics plus bright individuals with a spoonful of malice. Other than that, the future looks bright.’
Wedge Martin, a Silicon Valley-based technologist, entrepreneur and consultant with over 25 years of experience in the tech industry, former CTO/co-founder at Badgeville.
‘Happy addiction might be the best possible outcome for humanity’ as people lose their livelihoods. … ‘The important creative work will eventually all go to AIs.’
Charlie Kaufman, a system security architect at Dell EMC.
Meaningful work matters: ‘Humans must be able to cultivate and possess a positive sense of the social, ethical, cognitive and emotional impact of their personal contributions to the world.’
Pedro Lima, professor of electrical and computer engineering at Lisbon Higher Technical University in Portugal.
‘While we haven’t seen it yet, the way in which this is going to impact the workplace may be the biggest threat AI is going to pose to societal stability. It could be very challenging to navigate.’
Joshua Tucker, professor of politics and co-director of the Center for Social Media and Politics at New York University.
We will be in for a rough ride for a time – and in need of major change in education and economic systems – as the capabilities of AI tools outpace most people’s adaptability.
Sam Lehman-Wilzig, head of the communications department at the Peres Academic Center in Rehovot, Israel, and author of “Virtuality and Humanity.”
The big transformation ahead will ‘meet resistance at every encounter.’ ‘The willing outsourcing of human thinking isn’t a productivity gain; in the long run it is intellectual malpractice.’
Chris Shipley, a journalist with more than 30 years of experience at the intersection of technology, journalism and innovation.
Chapter 6. The Great Divide: Broadening Differences | Expanding Inequities
‘Humans have developed a complex psychology that allows us to fight our nature, to aim for a life in which we explore ways of living far beyond it,’ but it seems we are headed toward techno-feudalism.
A UK-based complexity scientist and collective intelligence researcher who preferred to remain anonymous.
AI amplifies existing inequalities. ‘The real question is not whether further transformation will occur, but how unequal, silent and normatively it will unfold.’ People with advanced frameworks will benefit.
Fabio Morandín Ahuerma, researcher in the philosophy of AI and a member of Mexico’s National System of Researchers.
Individuals could move quickly from being the tool users to becoming the systems’ tools – the ‘haves and have-nots’ – suffering dehumanization effects on a path toward ‘indentured servitude.’
Russ White, Internet pioneer and long-time infrastructure architect with the Internet Engineering Task Force.
‘Adoption of AI will be shaped by race, gender, class, disability, professional status and institutional power. … Resiliency must be analyzed as a social and structural condition.’
Rosita Scerbo, associate professor of visual and digital cultures at Georgia State University, co-editor and contributing author to “AfroLatinas and LatiNegras: Culture, Identity and Struggle.”
Three groups will emerge: those who build their lives around AI (transhumanists), those who resist (the modern Amish) and pragmatic late adopters. A notable worry is caste-like schisms.
Avi Bar-Zeev, a pioneer at the forefront of spatial computing for the past 30 years, president at Reality Prime and board member at the Virtual World Society.
People’s resilience will be affected by where they fit on the curve, from the majority who take AI in stride to those for whom it becomes a danger and to those who may innovate ‘the Singularity.’
Jeff Eisenach, senior managing director of communications, media and internet at NERA Economic Consulting.
‘As we say in Africa, when two elephants fight, the grass suffers.’ As AI advances, there will be ‘pushback, pain and correction before real stability emerges.’
Rotimi Awaye, CEO and co-founder of Kini AI, an AI educator and strategist based in Lagos, Nigeria.
‘Costs of AI deployment are disproportionately borne by low- and middle-income countries, which are also excluded from decisions shaping the future trajectory of AI and, by extension, humanity itself.’
Megan Peters, computational neuroscientist at the University of California-Irvine’s Center for the Neurobiology of Learning and Memory.
‘Any recentering will require a new regulatory politics … a visionary set of ideals designed to promote human flourishing and sustainable existence on a warming planet.’
Andy Opel, professor of communications at Florida State University.
‘I do not have a crystal ball for the future, but people will try to reshape the world to make it amenable to the power they believe they can wield through AI.’
Bernie Hogan, associate professor at the University of Oxford and senior research fellow at the Oxford Internet Institute.
We should avoid ‘digital serfdom’ and ‘keep a skeptical eye on IP laws. … They could easily, in practice, give a small number of firms an effective monopoly on the intellectual heritage of our species.’
Ted Underwood, professor of information science and English at the University of Illinois-Urbana-Champaign.
AI will spread rapidly. What about the people who will be left behind economically and socially/culturally? Will we have enough jobs? Who is helping defend people from fraud?
Guido van Rossum, the Dutch programmer who created the Python programming language, a distinguished engineer at Microsoft.
‘As long as profoundly uneven access remains the order of the day, resilience to any kind of technological change will be nearly impossible.’
Toby Shulruff, researcher, writer and consultant expert in the trust and safety risks of everyday and emerging technologies.
Tech disruptions of the past teach us such change can be harmful. While AI as it stands today is an extractive industry benefiting technology plutocrats, mitigation guardrails can eventually be built.
Erich Huang, associate chief clinical officer for informatics and technology at Verily (Google’s life sciences subsidiary).
Higher levels of inequality are poison to resilience, and big tech companies are determined to increase profits in ways that result in more inequality.
Thomas Reuter, a trustee at the World Academy of Art and Science and chair of its Existential Threats and Risks Infohub.
‘The profits will be privatized and the misery will be socialized. Resilience will be forged in the aftermath of mass misery and it will take a while for that misery to play out.’
Dave Karpf, associate professor in the School of Media and Public Affairs at George Washington University.
‘Leaders in every country don’t want people to think for themselves; they want to control people and make them easy to manage.’
An Asian research scientist.
If we want to create more-resilient communities and people we should look to instill some of the early values of the internet into AI culture – aim AI design toward free sharing and empowering individuals.
An executive with a major consulting firm.
Chapter 7. Heart & Soul: Protecting Human Connection and Seeking Calm
‘Allowing our lives to be monopolized by digital devices makes us less resilient, feeling less human and less confident in other humans. … It could be the most serious pandemic humanity has seen.’
Marina Cortês, a professor at the University of Lisbon’s Institute for Astrophysics and Space Sciences and participant in the futures research of the Millennium Project.
‘Our capacity to build and mobilize social capital is key to resilience – networking self-efficacy, a growth mindset about one’s networking ability, conversational skills and cultivation of empathy.’
Julia Freeland Fisher, an expert on human connection in the age of AI and director of education research at the Clayton Christensen Institute.
AI is moving into intimate life; this frays old systems of connection and intimacy. ‘What arrives is often not connection but simulation,’ shattering traditionally valued types of relationships.
Aneesh Aneesh, sociologist of globalization, labor and technology and executive director of the School of Global Studies and Languages at the University of Oregon.
People will delegate crucial qualitative life decisions to AI, including how they relate to others. The loneliness crisis will worsen. Look to ‘chaos engineering’ to help build resilience and ‘dumb homes.’
Greg Sherwin, Singularity University global faculty member, previously senior principal engineer at Farfetch.
‘We can come back to each other and to ourselves. … There is more than a threat to empathy at stake; there is a threat to our sense of what it means to be human.’
Sherry Turkle, MIT professor and author who studies the emotional connections between people and technology.
‘It is easy to fall into the trap of thinking that AI defines an essential characteristic of being human. … Consequently, we need stronger antidotes to the ability of AI to define the nature of personhood.’
Henry Brady, former president of American Political Science Association and dean of the School of Public Policy at the University of California-Berkeley.
The ‘Cyborg Slide’ is coming. ‘We will develop new abilities but they will come at the cost of shedding parts of our humanity which we must work to hold onto.’ We must treasure the ‘slow and the small.’
Sarah Pessin, professor of philosophy and interfaith chair at the University of Denver.
‘Motors stole silence from our world and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude.’
Paul Saffo, a prominent Silicon Valley-based forecaster with three decades of experience helping corporate and governmental clients understand and respond to the dynamics of change.
Real harm can come as we anthropomorphize AI and develop social relationships with it. Let’s stop fearmongering about being ‘left behind’ and turn our attention to easing the suffering AI will cause.
Divya Siddarth, award-winning science fiction author, engineer and founder of the Collective Intelligence Project.
‘We must keep cultivating love and passion for the human mind and soul. For the natural, for the analogue, for the object in our hands not the bits in the cloud.’
A complexity scientist and collective intelligence researcher based in London.
If AI is so good why does it make me feel so bad? Where do we go from here? Let’s lean into being imaginatively thoughtful and genuinely human.
Chris Labash, associate professor of communication and innovation at Carnegie Mellon University.
Loneliness will increase as the pace of change speeds up. People are ‘cognitive misers’ who will defer to AI judgments. Still, there will be a backlash led by human-centric movements.
Dmitri Williams, professor of technology and society at the University of Southern California.
Increased engagement with conversational AI platforms puts children at risk for learning and normalizing ‘aberrant patterns of social interaction that might have negative consequences.’
Scott Kollins, psychologist, Ph.D., and chief medical officer at Aura, a digital family security company.
Offer people human connection and highlight models of everyday life experiences that build social ties. Sanctuaries from technology will be appreciated.
Brian Southwell, lead scientist for the public understanding of science and distinguished fellow at RTI International.
‘A primary problem to be dealt with by people using digital systems in the future will be the solitude they may experience in a world mediated by AI.’
Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction.
Learn the lessons that friction teaches. A good model for that is partner dancing, especially when doing it with multiple partners, requiring you to make compromises with those who are different.
Irina Raicu, the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University.
‘The development of advocacy and awareness initiatives is required to help foster responsible use and a deeper understanding of AI systems beyond the personal point of view of today’s average users.’
Katrina Johnston-Zimmerman, Philadelphia-based urban anthropologist and founder of THINK.Urban.
Amusing ourselves to death gives control to autocrats. Most people will use AI to outsource their cognition as well as their social interactions. ‘Democracy will die under these circumstances.’
Gerd Leonhard, speaker, author, futurist and CEO at The Futures Agency in Zurich, Switzerland.
The ‘I-Thou’ sensibility of the past should embrace the ‘I-It-Thou’ realities of today because we live in a ‘world in which all human interaction is mediated by algorithms.’
John Markoff, fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University, previously a senior technology writer at the New York Times for 28 years.
‘How can we prioritize human and planetary flourishing in symbiosis in any tech we create?’ We should redefine what progress means and how it ties to human well-being.
John C. Havens, author of “Heartificial Intelligence” and founding executive director of the IEEE’s Global AI Ethics Initiative.
‘We learn most by learning and being educated through such person-to-person interactions.’
An anonymous professor of robotics based in Japan.
Chapter 8. Human Complacency and the Lure of Convenience
‘Future generations may accept displacement by AI as their lot in life.’ Due to humans’ tendency to ‘take shortcuts that serve immediate needs, most will respond with a despondent shrug.’
Rosalie R. Day, an independent technology consultant, previously chief operating officer and co-founder at Blomma, a digital solutions platform.
Muster agency; avoid complacency. ‘Resilience stems from gaining skill in meeting life’s errors, detours, difficulties and frustrations.’ … Don’t defer to ‘friction-free’ AI; it leads to loss.
Maggie Jackson, award-winning author of “Distracted: Reclaiming Our Focus in a World of Lost Attention” and “The Wisdom and Wonder of Being Unsure.”
‘The current form of AI can actively weaken every characteristic of human resilience; in some cases, it seems intentionally designed to do so.’ Welcome to the Slop Future.
Jamais Cascio, well-known futurist, speaker, and lead author of “Navigating the Age of Chaos: A Sense-Making Guide to a BANI World That Doesn’t Make Sense.”
AI is stealthily sliding into everything we do, suggesting, summarizing, drafting, routing and efficiently becoming a default source of decision-making and ‘truth’ even though nobody ever really agreed to it.
Daniel Erasmus, founder and principal analyst at Serious Insights, based in Amsterdam, previously a director at Microsoft and VP at Forrester Research.
We must think carefully about ‘how resolute our willpower to resist negative aspects of AI is and how strongly we value understanding the technology – and its potential consequences.’
Naomi S. Baron, professor emerita of linguistics at American University and author of “Reader Bot: What Happens When AI Reads and Why It Matters.”
Compare AI’s arrival to pouring water into a vessel. It takes the shape of the vessel. Human action causes human change and … ‘the vast majority of people will unconsciously lemming along.’
Frank Kaufmann, president of the Twelve Gates Foundation.
AI may follow the path of impact described in a sci-fi story in which explorers find a world that seems primitive, but in the end discover the tech is so deeply embedded that it is invisible.
Jon Lebkowsky, writer and co-wrangler of Plutopia News Network, previously CEO, founder and digital strategist at Polycot Associates.
Fast-paced digital life has already dialed down most humans’ willingness to get the facts from reliable sources the right way. Unless they wise up, their AI use will magnify the damage.
Adam Clayton Powell III, executive director of the initiative on election cybersecurity at the University of Southern California.
Work out in the ‘cognitive gym’ by developing intellectual abilities; carve out time for creative endeavors; read widely. Overall, AI disruption will create ‘actual and perceived winners and losers.’
Alan Inouye, principal at The Policy Connection and longtime leader at the American Library Association.
‘AIs will create highly addictive entertainment environments that will lure many into spending too many hours in them.’ Passive people will lose critical faculties. Creative thinkers will be enriched.
Glenn Ricart, founder and CTO of U.S. Ignite, driving the smart communities movement.
We’ll be ‘living on our own, infrequently meeting face-to-face, communicating through screens. … We are likely to become more and more completely dependent on AI tools without even realizing it.’
Kevin Taglang, executive editor at the Benton Foundation.
Complacency has set in and there is little ambition to improve the ways people can discover AI-related harms. ‘There are not enough people in the room who are asking hard questions.’
Ken Rogerson, a professor of public policy at Duke University specializing in public interest technology.
Most people will not realize they are being affected by AI and will take no steps to avoid interacting with it. ‘Inertia is the most powerful force in human affairs.’
A law professor in the San Francisco Bay Area.
‘Complacency will come at the expense of agency.’ People will ‘happily surrender.’
Bronwyn Ruth Williams, partner and director of foresight at Flux Trends, a strategic consultancy located in Johannesburg, South Africa.
‘Preserving the cognitive future and the richness of the human mind requires a new kind of rewiring, a deliberate cultivation of the very qualities that make us human.’
Larissa May, founder of Half the Story (a digital wellness non-profit) and CEO of Ginko, a tool to help families navigate the complexities of the digital world.
Chapter 9. Epistemic Vigilance: Discerning Truth, Illusion and Misinformation
The AI bargain: AI will be ‘just good enough that we won’t give it up.’ Human resilience requires epistemic humility, cultivating practical reason and investing in humans’ special moral capacities.
Erhardt Graeff, associate professor of social and computer science at Olin College of Engineering.
Real resilience comes from embracing things that can’t be captured in data or resolved through optimization, from resisting convenience and developing the ability to operate in genuine uncertainty.
Helen Edwards, co-founder of the Artificiality Institute, studying human experience in an increasingly synthetic world.
What I learned building a local hub in a global shift: People’s concerns are less about AI than about their own place within systems that embrace AI. Coping with uncertainty is a key requirement.
Dino Osmanagić, head of innovation at Incert eTourismus in Linz, Austria, and hub leader at Young AI Leaders.
Epistemic crisis: If everything can be generated, edited, distorted or algorithmically distributed, the boundary between fact and impression becomes fragile. People rarely verify sources and context.
Mirjana Pejić Bach, professor on the faculty of economics and business at the University of Zagreb, Croatia.
Divides due to fractured ‘reality’ and a growing lack of consensus on ‘facts’ will deepen; dependence on AI advice and companionship will accelerate mental illness; new approaches must emerge.
Stephan Adelson, president of Adelson Consulting Services.
The human theory of mind is now interacting with machines that passed the Turing Test. That invites manipulation and supercharges surveillance capitalism. Be careful; don’t mistake machines for people.
Christopher Savage, a partner and expert in telecommunications law and policy at the Washington, D.C.- based law firm Davis Wright Tremaine.
‘Human resilience depends on being able to ascertain the truth and finding institutions and people to trust. Failure to do so would lead to the devolution of classic “liberal society.”’
Charlie Firestone, former executive director of the Aspen Institute Communications and Society Program and institute vice president.
People must become more adaptable than ever before. They need new ways to anchor themselves in truth; old anchors of identity like religion, nation, community, family and profession are crumbling.
David Barnhizer, professor of law emeritus of Cleveland State University and author of “The Artificial Intelligence Contagion: Can Democracy Withstand the Imminent Transformation?”
We need to build ‘truth-ready’ AI systems that can discern fact from fiction and train the leaders who will drive a positive cultural evolution in the truth-ready era.
Jim C. Spohrer, board member of the International Society of Service Innovation Professionals and ServCollab, previously a longtime IBM leader.
‘Your AI is built to bullshit you. Here’s what you can do about it.’ A prompt guide to pushing back against the obvious flaws of large language models.
David Porush, author of “The Soft Machine: Cybernetic Fiction” and CEO of two Silicon Valley start-ups in e-learning.
‘If, and probably only if, policy and law start to catch up with the technology, people will come to trust it more, to use it correctly … I fear the reluctance of the U.S. government to regulate its use.’
James Hendler, director of the Future of Computing Institute and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute.
‘An immediate priority is the cultural protection of traditional knowledge, IP and related rights, and robust agreements with government and tech companies to avoid harms being embedded at scale.’
Karaitiana Taiuru, a Māori technology ethicist and researcher based in Aotearoa, New Zealand.
‘AI is a power tool, use it wisely.’ Developing a BS-detector is crucial; knowing enough to develop a sense of when you’re being played is imperative; knowing where to focus is essential.
Seth Finkelstein, programmer, consultant and winner of the Electronic Frontier Foundation’s Pioneer Award.
Chapter 10. Additional Observations – Broader Insights
‘Human resilience will require mindful and evolving attention to discovering where human touch and human intelligence can complement developments in AI.’
James Witte, professor of sociology and anthropology and director of the Institute for Immigration Research at George Mason University.
The idea that there is an imperative to adapt implies that AI is inevitable and not subject to political, economic and democratic decisions regarding costs and benefits of AI development.
Lucy Suchman, professor emerita of the anthropology of science and technology at Lancaster University in the UK, previously a 20-year veteran researcher at Xerox’s Palo Alto Research Center.
The early automobile was called a ‘horseless carriage.’ People need to start having iterative dialogues with AI instead of seeking responses via simple, limited pursuits of a particular answer.
Garth Graham, a global telecommunications expert and consultant based in Canada.
Resilience issues will arise because AI is artificial. ‘People will yearn to disconnect and touch grass.’ Look for ‘AI detox retreats’ and efforts by some to build strife into their lives in order to feel human.
Chris M. Ellis, senior fellow and director of research at the Homeland Defense Institute in Colorado Springs, author of “Resilient Citizens: The People, Perils and Politics of Modern Preparedness.”
‘AI monopolies lost their way by embedding corrupt, algorithmic weighting into machine learning through deliberate or ignorant social engineering.’
Chris Boese, writer, independent scholar and activist, previously a vice president and lead user-experience designer and researcher at JPMorgan Chase financial services.
Solutions occurring outside of the human experience are waiting to be discovered. Would such discoveries threaten the animal-human hierarchy? Could they subvert artificial intelligence?
Alexandra Whittington, futurist at Tata Consultancy Services and co-author and co-editor of “A Very Human Future” and “The Future Reinvented.”
AI systems may supplant established realities and the result could be a more mediated existence. Can AI ‘effectively address the perceived fragmentation of humanity and foster global engagement?’
Peter Mmbando, director of the Digital Agenda for Tanzania Initiative.
‘We must prize the formation of high-quality questions and the ability to critically evaluate and take action based upon machine-generated responses to those questions.’
John Battelle, senior fellow at the Burnes Center for Social Change and chair at sovrn Holdings.
Societies may embrace age-old practices that limit ‘the intrusion of tech into specific times and places by custom/manners, personal choice and designated spaces.’
Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the IEEE, a professor at Columbia University.
‘Leading principles of technology assessment and transfer practices and of change management should be used extensively to reinforce human and systems resilience.’
Bassam Tabshouri, founding chair of the Healthcare Technology Management and Advancement Society in Beirut, Lebanon.
‘Until humans are prepared to consciously calibrate their cognitive and emotional reactions to systems it will be hard to predict how they will have mostly successful interactions with them.’
A veteran artificial intelligence expert, a globally renowned computer scientist.
‘Both the Internet and AI have created substantial negative externalities and impacts.’ We should work harder to address the problems of AI now.
Rob Frieden, professor emeritus of law and telecommunications at Penn State University.
‘The street finds its own uses for things’ – users of AI will bend it in pro-human directions. People find their own ways to make technology work for them. That will happen here, too.
Russell Blackford, philosopher, legal scholar and fellow of the Institute for Ethics and Emerging Technologies.
‘For the most part, humans have maintained a reasonable separation between their humanity and what is beyond their screens. … Let’s hope the AI tools providers can achieve similar levels of safety.’
Calton Pu, co-director of the Center for Experimental Research in Computer Systems at the Georgia Institute of Technology.
‘Human creativity and critical thinking will always have a place in the future, so long as we actively maintain those abilities and recognize our distinct advantages over AI.’
Jeremy Pesner, a policy analyst, researcher and speaker expert on technology innovation.
AI’s influence will be mostly positive and will largely occur in the background as it becomes normalized. On the whole, this is a good thing, as there are plenty of other things to worry about.
Tim Kelly, lead information and communications technology policy specialist at World Bank, previously head of strategy and policy at the International Telecommunication Union.
The greatest risk lies in anthropomorphizing AI, which limits human agency ‘drastically – we must position ourselves to realize all of its benefits while limiting many of the drawbacks.’
Christopher Riley, executive director of the Data Transfer Initiative and distinguished research fellow at the University of Pennsylvania’s Annenberg Public Policy Center.
‘Today’s geopolitical stress combined with the militaristic aspects of the race to accelerate AI should bring public attention to more of its downsides.’
An anonymous political journalist who reports on technology trends.
‘We must cultivate capacities that recognize, support and encourage individual autonomy and experimentation as the fundamental building block of human progress.’
Neil Chilson, director of AI policy at the Abundance Institute, previously chief technologist at the Federal Trade Commission.
‘We will not necessarily need to be resilient to be happy. We will simply need to comply.’ Look at the rise of the smartphone, despite worries about its impact. Usefulness is the main criterion.
Mark Schaefer, marketing strategist and author of “Marketing Rebellion.”
‘The fundamental reality is that it simply takes time to fully absorb the benefits and risks of new technology.’ And the critical question is: How will the demand side go with AI applications?
Mario Morino, chairman at Morino Ventures and co-founder of Venture Philanthropy Partners, a pioneer in venture philanthropy.
‘The faster we become more comfortable with today’s reality and tomorrow’s potential of AI, the better off the public will be.’
Ray Schroeder, professor emeritus of communication and founding director of the Center for Online Learning, Research and Service at the University of Illinois-Springfield.
People have changed before. ‘The hard work of adaptation will continue as we learn to use AI tools to create lives for ourselves and selves for our lives. Change comes quickly. Wisdom comes slowly.’
Warren Yoder, longtime director at the Public Policy Center of Mississippi.
‘Humans adapt. It’s what we do. As with all major changes, there will be pain and dislocation in the near term as we learn the powers and the limits of this new thing.’
Valerie Curran Bock, owner and principal at VBC Consulting.
‘Intellectual and emotional maturity are needed to ensure that people balance their uses of AI with real-world human experiences and in-person conversations.’
Maureen Hilyard, a development and safeguards consultant in the Cook Islands and active leader in ICANN and the UN-facilitated Internet Governance Forum.
The schism on campus between AI enthusiasts and skeptics will continue among college faculty and that puts everyone in higher education in a pinch.
Kevin Yee, director of the Center for Teaching and Learning at the University of Central Florida.
‘There may be some openness to these changes if they lead to decreased costs and increased access to services and opportunities for self-expression.’
Carol Chetkovich, retired professor of public policy.
Healthy people seek clues and guidance about how resilience can be nurtured. We can learn from sociologists, economists, therapists, psychologists, educators and technologists.
A researcher for a major technology company.
Equal access, transparency and ethics are essential for AI applications, LLMs and a learning society. ‘AI will disrupt societies, lives and cultures if this learning and guidance is not taking place.’
Heleen Riper, a clinical psychologist and senior researcher at Vrije University Medical Center in Amsterdam.
‘Resilience in an AI-saturated society depends less on adapting to automation than on preserving human agency, critical judgment and the capacity to limit or refuse AI.’
Navì Argentina Rodrìguez, a futurist based in Nicaragua.
Solutions will arise from collective effort, rather than individual activities.
Susan Helper, professor of economics at Case Western Reserve University.
‘The most successful people will be those who use AI tools.’
João Gama, professor of economics at the University of Porto, Portugal, and deputy editor of the journal CAAI AI Research.
We are heading into a challenging disruption of the information ecosystem.
A North American scholar.
‘A considerable risk lies ahead of increasing passivity, mental health challenges and degraded knowledge and ethical standards among humans.’
A respondent who wished to remain anonymous.
Chapter 11. Closing Thoughts – Making our Way on the Path to Flourishing
Recalibration: ‘The most important work is not accelerating AI development but strengthening human capacities – cognitive, social, ethical – that allow us to live well alongside powerful but limited tools.’
Michael Zimmer, director of Marquette University’s Center for Data, Ethics and Society, a privacy and data ethics scholar.
‘The task before us is not to outrun AI. It is to outgrow our short-termism.’ We must become ‘great ancestors’ with moral imagination to anticipate downstream effects that will affect unborn children.
Ari Wallach, co-founder of Futurific and founding director of Longpath Labs.
AI is based upon humanity’s available trove of information – the good, the bad, the evil, the wrong, the right, the old, the new. Should we offload our thinking and learning to that tool? Sometimes.
Stephen Abram, principal at Lighthouse Consulting, Inc.
‘New technologies can create new habits of mind that can be taught. … AI may lead us to the path we need to follow to augment the best of what we are capable of and lead to human flourishing.’
Peter Lunenfeld, director of the Institute for Technology and Aesthetics at UCLA and author of “The Secret War Between Downloading and Uploading: Tales of the Computer as Culture Machine.”
We have invented a real AI Paperclip Maximizer, trying to optimize for economic activity while damaging our cognition, emotional resilience and people’s ability to relate to each other.
Grace (Rebecca) Rachmany, executive director of the Decentralized Identity Foundation, based in Kranj, Slovenia.
‘Some predict that humans are building a race of slaves smarter than ourselves to do our bidding. What could possibly go wrong?’
Michael Dyer, professor emeritus of computer science at the University of California-Los Angeles.
‘We need not be passive observers of AI’s detrimental effects; instead, we have the opportunity to actively identify opportunities to steer it.’
Jeremy Foote, assistant professor of communications at Purdue University.
‘If we project threat and danger onto emergent AI, it may respond with anger and attack.’
Geoffrey C. Bowker, director of the Values in Design Lab at the University of California-Irvine.
‘There is little hope that humanity’s existing coping mechanisms will change significantly in the next few decades. At best, we can hope for the integration of humans and artificial organisms.’
Jaak Tepandi, professor emeritus of knowledge-based systems at Tallinn University of Technology in Estonia.
‘Humans have been progressing toward being cyborgs living in artificial environments for thousands of years … So modern protest about artificial intelligence is nothing new.’
Jim Dator, professor emeritus and founding director of the Research Center for Futures at the University of Hawaii-Manoa.
‘The future is coming at us faster than ever. What worries people most about this is AI’s looming role. … This will be our finest moment.’ Humans possess remarkable coping capabilities.
Adam Thierer, a prominent technology analyst at the R Street Institute.
‘Why we don’t respond to the opportunities right in front of us … and how to change that.’ We need each other. We can turn adversity into opportunity. Today, everything is possible.
Mark Monchek, chief opportunity officer at Opportunity Lab, entrepreneur and author.
> Return to the Executive Summary home page for this report
> Download a PDF of the full, 376-page report
> Download the 16-page Executive Summary