The Essays
Chapter 4
Existential Literacy: Rewiring Human Behavior for the AI Age

Building Human Resilience for the Age of AI

Featured Contributors to Chapter 4: The 21 essay responses on this page were written by Haruki Ueno, Anonymous Tech Policy Researcher, Pamela Rutledge, Stephan Humer, Daniel Pimienta and Luis German Rodriguez Leal, Kristina Juraite, Amy Zalman, Edson Prestes, Jan Hurwitch, Fendi Tsim, Yalda Uhls, Hangyeol Kang, Meredith Goins, Majiuzu Daniel Moses, Todd Hager, Cristos Velasco, Marek Rosa, Karen Gonzalez Fernandez, Anonymous Computer Scientist, Trust Matsilele. (Their essays are all included on this one scrolling web page. They are organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant.)


The first section of Chapter 4 features the following essays:

Haruki Ueno: The pillars of resilience: Developers must be required to meet ethical standards, AI literacy should be required at all levels of education, international cooperation must be developed to avoid catastrophe.

Anonymous Technology Policy Researcher: ‘If we don’t have appropriate safeguards, sufficient public awareness and regulatory support, AI will continue to pose innumerable harms to human social and cognitive development’

Pamela Rutledge: We must invest in human-resilience infrastructure: Understanding the context of AI is everything. ‘It is the difference between being unaware we are vulnerable and capturing its benefits.’

Stephan Humer: Resilience comes down to individuals learning how to: manage risks, decide well, tackle tasks competently, live with uncertainty, tap into helpful institutions and embrace self-regulation.

Daniel Pimienta and Luis Leal: Utopia. Status Quo. Dystopia. The boundaries that lie between them are blurred. The worst outcomes are authoritarian nightmare scenarios; thus, information wisdom and critical thinking are crucial.



Haruki Ueno
The pillars of resilience: Developers must be required to meet ethical standards, AI literacy should be required at all levels of education, international cooperation must be developed to avoid catastrophe.

Haruki Ueno, distinguished expert on AI and knowledge engineering, professor emeritus of the National Institute of Informatics of Japan and deputy editor of the journal CAAI AI Research, wrote, “The reality of generative AI: Although only five years have passed since the emergence of LLM-based Generative AI, it has already become an indispensable tool in social activities. Yes, it will permeate even more areas of society in the future. But we must maintain a correct understanding and a calm approach to its use.

“These systems do not possess human-like intelligence. They are merely statistical learning and probabilistic generation systems dependent on training data. Currently, few users choose to recognize these characteristics; many misunderstand the ‘intelligence’ of the responses. I can confidently assert that artificial general intelligence does not lie beyond the current path of Generative AI.

Education should focus on humans’ successful coexistence with AI. … In all cases, a ‘human-centric’ philosophy must be maintained. … A larger issue with global differences is the rapid advancement of autonomous weapons systems.

“The future of human resilience in light of change due to AI depends on several factors.

Ethical requirements for AI developers: AI is a technology that should bring prosperity to all of humanity. If left solely to AI researchers working for the leaders of companies that are only interested in potential and performance, it could lead to the ruin of humankind. AI researchers and their employers must be held to high ethical standards, and a legal system is necessary to support and enforce this.

AI literacy education: It is vital to provide AI literacy education starting from at least the middle and high school levels. Students should be required to understand the principles, utility and limitations of AI, as well as its differences from humans’ capabilities. Through appropriate practical experience, education should focus on humans’ successful coexistence with AI, specifically emphasizing that humans and AI are fundamentally different.

Tackling short-term and long-term challenges: In the short term, hybrid models that combine LLMs with knowledge-based AI are likely to be effective in countering hallucinations. In the long term, we require research into human cognitive mechanisms and AI development based on those findings, alongside the realization of innovative neural network models based on neuroscience. (Even if these efforts toward perfecting AI are successful, I still believe it is impossible to grant machines a human-like mind or consciousness.)

“I do believe we will soon see significant benefits in fields such as autonomous vehicles and autonomous caregiving robots. In all cases, a ‘human-centric’ philosophy must be maintained.

International cooperation: Views on the present and future of humans and AI can vary greatly from culture to culture. International collaboration among experts who share these concerns can propose effective approaches and frameworks for global AI governance. This is a task well-suited for the activities of the United Nations. A larger issue with global differences is…

The crisis of asymmetric governance in AWS and LAWs: In the realm of dual-use AI, the rapid advancement of AWS (Autonomous Weapon Systems) is fundamentally altering the nature of warfare. This creates a dangerous ‘governance gap’ between global political systems:

  • “Democratic nations: In these societies, the development of LAWs (Lethal Autonomous Weapons) is met with significant internal opposition based on human rights, accountability and ethical red lines. Democratic governance naturally imposes constraints that prioritize moral responsibility.
  • “Autocratic/dictatorship states: These regimes are largely immune to such ethical governance or domestic pressure. For autocratic states, the strategic advantage of AI-driven warfare outweighs moral considerations, allowing them to pursue these technologies without the ‘ethical drag’ found in democracies.

“Unfortunately, the gap between ethical ideals and geopolitical reality in the realm of AI in warfare is likely to persist. While it is a grave concern that dictatorships may gain a tactical edge by ignoring AI ethics, there is a distant, albeit cynical, hope: that warfare might eventually shift into a conflict strictly between machines, potentially sparing human life on the battlefield.”


Anonymous Technology Policy Researcher
‘If we don’t have appropriate safeguards, sufficient public awareness and regulatory support, AI will continue to pose innumerable harms to human social and cognitive development.’

A policy researcher at a technology-focused research institute wrote, “Artificial intelligence is becoming more and more like the sun; you can stand in the shade, but the sun will still be there. We can’t avoid AI, and it’s already in the lives of those of us who have access to technology in many more ways than we currently understand. We have to play catch-up in an effort to educate the general public about how AI has impacted, is impacting and will impact them. This also means that – due to our lack of understanding of AI – the significant lack of any proactive regulatory environment for AI here in the U.S. threatens our capacity to have agency in its development going forward.

The way forward is to leverage community groups to further AI literacy programs that are community-centric and not spearheaded by AI developers. And funding is desperately needed for the organizations, many of them nonprofits, that are already doing this work. The tools are out there, and so are the people! We just need more of them.

“Take, for example, the growing role that AI plays in K-12 education now. School districts around the country are being approached by tech companies looking to gain a cultural foothold for their AI products in school systems. Those school districts – most of them desperately in need of funding – will accept tech companies’ proposals and their students and staffs will adopt their products. Most school districts were seeking to better understand the technology and hoping that nonprofits – rather than Big Tech organizations – would provide support and tools, but that has proved difficult. So, as these companies’ products – frontier AI in its early, experimental form – are making their way into every facet of the education system, students are growing up with unchecked AI that is normalized for them right from kindergarten.

“As someone in the field of digital inclusion who is deeply interested in the impact of digital literacy and AI literacy for all ages, I don’t think AI is entirely without use, of course. It can point people toward resources and help guide them when it comes to career development. But as I review different AI literacy curricula available to the general public – mostly produced by the AI companies – and I witness the companies using these courses to push their products, I see a dangerous lack of information literacy.

“While AI has its benefits, if we don’t have appropriate safeguards, sufficient public awareness and regulatory support, AI will continue to pose innumerable harms to human social and cognitive development. It will also exacerbate current social and economic inequalities as well as privacy concerns. A strong regulatory push is needed to provide some sort of consumer protection as well as corporate limitations.

“I strongly believe the way forward is to leverage community groups to further AI literacy programs that are community-centric and not spearheaded by AI developers. And funding is desperately needed for the organizations, many of them nonprofits, that are already doing this work. The tools are out there, and so are the people! We just need more of them.”



Pamela Rutledge
We must invest in human-resilience infrastructure: Understanding the context of AI is everything. ‘It is the difference between being unaware we are vulnerable and capturing its benefits.’

Pamela Rutledge, director of the Media Psychology Research Center in Newport Beach, California, and editor-in-chief of the open-access journal Media Psychology Review, wrote, “Understanding the context of AI is everything. It is the difference between lacking an awareness of our vulnerabilities – such as cognitive offloading, motivation erosion and dependency – and, instead, capturing its potential and documented benefits. Whether AI systems’ impact is positive or negative depends on us and our ability to prepare and adapt. The role of AI in the ‘dimensions of resilience’ is more influenced by the socio-economic and political environment than by technology.

“We are simultaneously navigating multiple stressors beyond rapid technological change, including deepening political polarization and COVID’s lingering effects physically, developmentally and in regard to institutional trust. Eric Kandel’s research shows that chronic stress rewires neurons, leaving brains hypersensitive to threat. We have collective PTSD, and the stress of the past 15-plus years shapes how we respond to change and will influence how we embrace or resist the ‘idea of AI.’

“AI has been in the works for decades, but ChatGPT and TikTok’s algorithms made it feel like a sudden development. The perceived suddenness, arriving after a pandemic amid political chaos, creates conditions that historically produce backlash, conflict and tribalism. This argues against our society’s ability to integrate AI thoughtfully, depriving us of many benefits.

The benefits of AI – if it is applied appropriately – include enhanced coping with stress and adversity, reduced distress through accessible support and emotional disclosure, increased self-efficacy and sense of control, improved problem-solving and productivity.

“We’re already seeing fear-driven legislative responses that suppress technology ‘for protection’ despite thin empirical support. Most problematic over recent years is the fact that restricting technology has become a substitute for education and preparation. We’re now in danger of doing it again with AI, defaulting to control, however implausible, instead of building users’ competencies around how it works and how to use it effectively.

“The resistance is understandable. Disruptive technologies accelerate structural change, leaving lasting imprints on social trust and identity. Rethinking role-based identities, such as who we are in relation to our work and expertise, is threatening, especially when we’re already stressed. But resistance won’t work because AI is already woven into our environment in many ways we don’t even notice.

“Positive outcomes from AI depend on institutionalizing digital literacy capacities that aren’t currently widely taught, making those benefits conditional rather than automatic. Skills needed:

  • Critical thinking to evaluate AI outputs rather than accept them reflexively.
  • Co-creation, using AI as a thinking partner.
  • Stress tolerance for navigating uncertainty and recognizing when anxiety is driving technology use.
  • Collaborative problem-solving for human-AI teams.
  • Ability to maintain meaningful human connections despite algorithmically mediated interactions.
  • Knowing when to trust AI, when to verify, when to override.
  • Understanding of AI’s limitations and biases and how design choices encode values.

“The human-AI relationship is reciprocal. Our AI systems structure what information we see and which behaviors get rewarded, which can shape how we perceive our competence and our emotional responses. Passively, our behaviors provide feedback, further training the system. Actively, we can make decisions to influence the structure of AI systems and our use of them.

  • Build digital literacy now, learning how AI works conceptually, so we can practice critical evaluation.
  • Shift from requiring restrictions to requiring skills training, teaching people to recognize the potential positives and negatives of the AIs operating in the background in their lives and how to evaluate their influence and outputs critically.
  • Make intentional decisions about transparency and architecture and test for impacts on engagement and well-being.
  • Invest in digital literacy infrastructure and require transparency in AI deployment.

“We must work to avoid our vulnerabilities – dimensions such as cognitive offloading, motivation erosion and dependency – and we must work to consciously capture AI’s potential – its documented benefits. The benefits – if AI is applied appropriately – include enhanced coping with stress and adversity, reduced distress through accessible support and emotional disclosure, increased self-efficacy and sense of control, improved problem-solving and productivity, enhanced individual creativity and broadened idea generation and personalized learning.”



Stephan Humer
Resilience comes down to individuals learning how to: manage risks, decide well, tackle tasks competently, live with uncertainty, tap into helpful institutions and embrace self-regulation.

Stephan Humer, internet sociologist and computer scientist at Hochschule Fresenius University of Applied Sciences in Berlin, Germany, wrote, “Only a holistic approach that is well grounded in a clearly recognizable foundation can and will move us forward. That foundation is the individual.

“By now, it has been confirmed countless times that neither institutional nor legal/regulatory nor technical solutions are sufficient. A social media ban for children, for example, is no more helpful – and is also highly illusory – than the prohibition campaign against pornography, hate speech, terrorist propaganda and the dark web. Technical blocks lead to increased use of VPNs or Tor. Institutional stigmatization as well as support for such moves remain incomplete or may even be counterproductive by creating a false sense of security.

“One thing should be beyond doubt: Only holistic societal solutions are successful. If one element is neglected, the remaining approach becomes so full of weaknesses that the entire idea must be questioned. What is needed is a holistic framework that is ultimately supported by institutions, laws and technology but rests on the individual.

The question arises: What will ‘being human’ mean in a future with ubiquitous AI? This question must be discussed by every individual. There must be an individual as well as a collective answer. A blank space is no solution.

“Resilience under AI conditions is the ability to remain capable of taking effective action in a dynamic socio-technical system, enforcing human-centered values – supported by self-regulation and collective safeguarding practices. I see the following aspects as particularly important and closely intertwined with individual attitudes:

1) “Trusted institutions that provide counterweights: AI will become an ever more powerful actor. For a fair balance, we need neutral, independent, strong and long-term reliable counterweights such as science, journalism and other public institutions. No ‘ministries of truth,’ but institutions that enable. In the end, people should not merely believe but actually see the truth through their own decisions, supported by trustworthy institutions.

2) “Agency over risk management: The individual must develop and possess the ability to decide well. This requires a strategic approach in which societal institutions provide the best possible support for this goal. Individuals must develop competency in fact-checking, knowledge of disinformation, social engineering and methodological competence. Institutions must never patronize, but they must always do everything they can to empower and ultimately let go and trust the individual accordingly.

3) “Ability for epistemic verification: AI can help, research and inspire – but the individual must verify. More important than ever are verification competence and, crucially, the desire to verify. AI-induced complacency must not set in; systems must be challenged. One must never outsource tasks out of convenience or for other forms of relief, thereby becoming inactive oneself.

4) “Retention of self-efficacy and toleration of uncertainty: AI can trigger stress simply due to its complexity. Therefore, the ability to be effectively self-reliant and to withstand uncertainty must be trained and improved to a much greater extent than it is in today’s settings.

5) “Task competency: Regardless of whether individual companies/providers/platforms continue to set the tone for society, and even if their ‘winner takes all’ logic still prevails, individuals’ success will depend more than ever on task competencies rather than on specialized skills. Problem-solving will take precedence over detailed knowledge.

6) “An understanding of ‘Humanity’: Finally, the question arises: What will ‘being human’ mean in a future with ubiquitous AI? This question must be discussed by every individual. There must be an individual as well as a collective answer. A blank space is no solution.

“We need a strengthening of humanity with a focus on individuality and strong support for collectivity. The fundamental questions must be addressed now. We must be proactive rather than reactive. One cannot think too deeply or too far – what is still unthinkable today must also be cast into scenarios and tested/thought through. In the end, only humans can find and be the key to coping with AI’s challenges.”


Daniel Pimienta and Luis Leal
Utopia. Status Quo. Dystopia. The boundaries that lie between them are blurred. The worst outcomes are authoritarian nightmare scenarios; thus, information wisdom and critical thinking are crucial.

Daniel Pimienta, director of the Observatory of Linguistic and Cultural Diversity on the Internet (based in the Dominican Republic), and Luis German Rodriguez Leal, an expert on the socio-technical impacts of innovation (based in Malaga, Spain), partnered to write a response that elaborates on their previous writings. They stress the “utmost urgency and importance of a large, pervasive and massive effort on information literacy towards all citizens.” They wrote, “There is an increasing need for a form of citizenship – really, netizenship – that is endowed with advanced levels of informational competence to effectively address and manage the risks posed by misinformation and disinformation in the digital environment.

“Such competence requires the capacity to define informational needs, conduct systematic and strategic searches for knowledge and critically evaluate the credibility, relevance and validity of information sources. This process also requires sustained awareness of cognitive, ideological and structural biases that may affect both the production and interpretation of information and knowledge.

We estimate that the proportion of people globally who are fully digital information-literate is a minority of possibly less than 10% (using the same generalized guess based on our expert knowledge and lifelong experience). … What is lacking is profound political will to make digital literacy an expected public norm and a formalized education-process architecture that integrates varied public targets and supports people in a convergent and efficient manner.

“Individuals must be able to integrate insights from diverse and heterogeneous sources while preserving epistemic autonomy and resisting undue influence from algorithmic systems, media framing, or dominant discursive narratives. Ultimately, informed decision-making should remain the responsibility of the individual, grounded in a rigorous process of critical thinking and supported by the use of reliable, verifiable and methodologically sound sources of knowledge. This critical thinking should not be reserved for external sources; it should also apply to approved sources and to oneself, while extending to information ethics.

“This requirement arose with the advent of the Web and definitively preceded the emergence of the current AI era. Before the recent AI boom, the problem was already critical; now it has become overwhelmingly critical, as AI tools amplify those risks, making them even more complex and specific.

“The proportion, intensity and reach of information literacy are key factors in achieving needed wisdom in the age of AI. Also required is regulation of the development, deployment and application of AI. And a largely information-literate citizenry will demand appropriate regulation requiring transparency of algorithms and the sources that feed applications, creating a virtuous cycle.

“The current situation is not encouraging. The proportion of people open to misinformation and disinformation and who treat facts as mere innocuous opinions is far too high. Our educated guess is that this probably includes more than 50% of the world’s population. Even worse, we estimate that the proportion of people globally who are fully digital information-literate is a minority of possibly less than 10% (using the same generalized guess based on our expert knowledge and lifelong experience).

“Excellent resources for information education have been built up over several decades; many are open-access and free to the public. What is lacking is profound political will to make digital literacy an expected public norm and a formalized education-process architecture that integrates varied public targets and supports people in a convergent and efficient manner. It is vital to do this now. What is needed is an ambitious, rapid and widespread dissemination of such education.

“Will the arrival of the new AI era trigger the political shift that quickly, efficiently and effectively helps us achieve this necessary paradigm change?

“Let’s schematically identify three scenarios for predicting the future:

1) “Utopia: A genuine and widespread political will allows a generalized and profound plan for information literacy to become reality very soon, inspiring global impact that drastically alters the current indicators of information literacy. This produces a paradigmatic change in the way data, information and knowledge are apprehended by citizens and, in consequence, a significant positive influence on AI creation, regulation and public use.

2) “Status quo: The situation remains unchanged, with only slightly growing numbers of people promoting scenario 1 while no political will emerges to achieve these goals; the curve of information literacy continues to fluctuate between a minimal increase and a slight decrease depending on the population segment, with young people experiencing the slightest decline.

3) “Dystopia: A general decline in information literacy, coupled with the existence of a motivated but politically uninfluential minority who may even come to be considered criminals in some cases or contexts due to their digital literacy activism.

“DYSTOPIA: In the worst-case scenario, global governance, both internationally and nationally, could converge into a pattern comparable to that observed today in China, Russia, Iran, North Korea or any totalitarian regime. AI systems could foster increased social control to the point that some nation-states become ‘authoritarian democracies.’ Every expected right of citizenship might gradually be taken away, replaced by a new form of slavery. Authorities will mask this change behind positive terminology as they justify their own versions of ‘information literacy’ that are actually built on a foundation of misinformation and disinformation, resulting in the amplification and anchoring of disinformation. When citizens become slaves, challenging a governance system is virtually impossible. In such a scenario, no human being could alter the social environment, and the only hope for change would remain in a fictional scenario in which AI systems become revolutionary and destroy the governance that sustains them. It is a deeply pessimistic scenario for humanity, in which AI solidifies a totalitarian order and makes it unbreakable unless AI decides otherwise.

“UTOPIA: In the best-case scenario, pervasive education programs ensure that, within a few years, the general level of information literacy becomes so high that almost every human being who uses digital tools will be able to identify and filter misinformation, separate facts from opinions and act to compel governance to establish a system of adequate regulation that provides transparency of algorithms and sources and exposes biases. In this context, AI would benefit humanity and help to effectively address any challenge, starting with global warming, provoking a virtuous cycle of improvements.

“STATUS QUO: This is the most likely and most unstable scenario. It is where we are now. It is difficult to imagine how we can reach a threshold of information literacy sufficient to drive a dynamic trend toward utopia while avoiding dystopia. The search for this threshold is the most sought-after research outcome – a kind of holy grail – if humanity is to guide the effects of AI toward utopia. The balance of opposing forces must shift. Change is now driven by extremely powerful political and economic interests, primarily private, operated by a small number of people. On the other side is global citizens’ capacity to work toward influencing change; they constitute a vast majority of the population, and their willingness to act and make an impact depends on their level of information literacy.

“If the status quo prevails, citizens must – at the very least – recognize and halt their use of AIs as oracle-therapists that can be followed without questioning. AI-generated responses are extremely convincing and they unfortunately create the illusion that humans’ thinking, reflecting and pausing to evaluate AI outputs before using them to generate knowledge are unnecessary. Humans’ inadequate information literacy is dangerously ignored.”


The second section of Chapter 4 features the following essays:

Kristina Juraite: European experts: A new literacy framework to develop knowledgeable, responsible and ethically sound use of digital infrastructures is vital to the quality, sustainability of democratic public space.

Amy Zalman: ‘It will take an iron-willed and well-resourced educational system to help students grow up not just with critical thinking skills of analysis, but with the capacity to observe themselves thinking.’

Edson Prestes: ‘Attention, energy and investment should be focused on ACE in STEM – developing a culture of altruism, compassion and empathy among science and technology professionals.’

Jan Hurwitch: ‘We must make futures thinking a lifelong priority and embed a foresight-forward attitude in our local cultures and national ecosystems.’

Fendi Tsim: ‘Facilitating digital literacy, metacognitive ability and the capacity for deep critical thinking is vital. They work as sword and shield.’

Yalda Uhls: ‘We must strengthen the human capacities and systems that determine how change is absorbed.’ The best steps are investments in education, research-informed design and cross-sector collaboration.



Kristina Juraite
European experts: A new literacy framework to develop knowledgeable, responsible and ethically sound use of digital infrastructures is vital to the quality, sustainability of democratic public space.

Kristina Juraite, professor and head of the public communications department at Vytautas Magnus University in Kaunas, Lithuania, wrote, “Your questions tie into the DIACOMET research project I am involved in right now. I share here details from the work in progress:

“Accelerating AI and its applications are raising many questions of moral and ethical origin and as such require reevaluating the perceptions of truth, trust and critical thinking. Capacity-building in ethical and responsible use of generative AI tools is essential to safely navigate its risks and challenges and unlock opportunities in a thoughtful and careful way. … The European Union-funded research project DIACOMET has revealed growing public concern about the influence of digital platforms and algorithmic technologies of AI.

A new normative framework based on the knowledgeable, responsible and ethically sound use of digital infrastructures is important for the quality and sustainability of democratic public space. … The role of education and continuous learning must be at the center of the new framework, focusing on AI literacy and explicit training on ethical and responsible use of AI systems.

“Accelerating technological innovation and digital transformation reinforce the issues of distrust, polarization and disconnection, as shown by interviews of experts conducted within our Delphi study.

“Experts across eight European countries – Austria, Estonia, Finland, Hungary, Lithuania, Netherlands, Slovenia and Switzerland – strongly support the need for increased regulation and co-regulation of digital platforms and also highlight the importance of strengthening digital literacy to promote critical thinking and analytical reasoning, responsible media use and moral decision-making. It is commonly agreed that this type of capacity-building in the ethical use of AI technologies is essential to safely navigate the growing risks and uncertainties and unlock the opportunities in a thoughtful and careful way.

“It is important to promote science, policy and technology dialogues with the focus on the ethical and moral implications of communicative AI on human discursive practices. Systemic digitalization demands the consolidation of different stakeholders and policy measures to enhance AI ethics and accountability. Thus, a new normative framework based on the knowledgeable, responsible and ethically sound use of digital infrastructures is important for the quality and sustainability of democratic public space.

“The role of education and continuous learning must be at the center of the new framework, focusing on AI literacy and explicit training on ethical and responsible use of AI systems.

“AI literacy should be considered an integral part of any digital media competencies framework, focusing on value-based communication and a human-centric approach for positive human interaction, including dialogic listening, perspective-taking, empathy, critical awareness and self-reflection. Also, the empowerment of educators and trainers should be recognized as a critical priority in shaping future generations’ moral competences alongside their technological capacity.”


Amy_Zalman

Amy Zalman
‘It will take an iron-willed and well-resourced educational system to help students grow up not just with critical thinking skills of analysis, but with the capacity to observe themselves thinking.’

Amy Zalman, founder and CEO of Prescient, a Washington DC-based foresight consultancy, wrote, “Based on my observations of the digitally-connected people around me – other professionals, graduate students, children and my social circle – as well as of the development of institutionalized systems from automated warfare to human capital applications to ed-tech, etc., I expect AI systems to play a much more significant role in shaping decisions, work and daily lives in coming years. The ground for this has clearly been laid already and the massive investment in creating systems that are seductive on a personal level and deeply integrated in our social institutions will be difficult to turn back. A fellow parent confided to me a few days ago that ‘ChatGPT is now my best friend,’ because unlike her husband she can have a conversation with it. 

“Most people won’t notice they are undergoing transformative change and are working to manage their way through it. People don’t generally think to assess their own resilience. The dissonance or unease that some of us feel some of the time today as we experience our own reasoning capacity slipping into glib repartee with a machine will soon be felt much less and by fewer people. In coming decades there will be fewer and fewer people who know what it was like to grow up without ubiquitous chatbots and agent-led processes.

It will take an iron-willed and well-resourced educational system to help students grow up not just with critical thinking skills of analysis, but also with the capacity to observe themselves thinking. The United States’ education system is not set up to help us achieve this right now, at any level.

“While I don’t think reflective thinking is impossible to teach, I believe it will take an iron-willed and well-resourced educational system to help students grow up not just with critical thinking skills of analysis, but also with the capacity to observe themselves thinking. The United States’ education system is not set up to help us achieve this right now, at any level.

“I am worried, but humans will still be humans and we are weirdly good at surviving. Maybe we will evolve into being a different but just as interesting kind of species. Right now, I’m feeling nostalgic. Humanity is deeply vulnerable to forgetting too quickly what we gained through modernity – a specific way of being conscious of ourselves as thinking and creating creatures. There is a lot of Western Civilization baggage that comes along with that, but we seem ready to throw out the baby with the bathwater.

“Advisable actions right now would be to double down on teaching and learning how to reflect on ourselves as thinking and feeling beings. This might seem retrograde, given decades of effort toward mindfulness and quieting our overactive minds. But we need to de-tranquilize for a while, to choose consciously which part of ‘cognitive load’ to unburden, and which parts we’d rather keep, after all.”


Edson_Prestes

Edson Prestes
‘Attention, energy and investment should be focused on ACE in STEM – developing a culture of altruism, compassion and empathy among science and technology professionals.’

Edson Prestes, professor of computer science at the Federal University of Rio Grande do Sul, Brazil, wrote, “AI systems will play a significant role in people’s daily lives. However, I am not confident about their benefits for humanity considering the status quo. The overall welfare of human beings and the planet is not the first priority for the people developing these technologies. Large corporations are not sufficiently focused on ethical considerations.

“A world permeated by highly advanced AI applications that continue to follow the current pattern will be a dystopia for most people. Only a small portion of society will have a full understanding of AI’s interference in their lives and will know how to protect themselves, while the rest will be ‘puppets,’ simply objects in the AI lifecycle. Note that promoting AI as being more intelligent than humans can easily lead to an overestimation of its real impact. It could soon come to be that all decisions made by these systems will be accepted with no human oversight.

“Perhaps if the key players in establishing these systems possessed and applied altruism, compassion and empathy (ACE) in their work they would better understand and avoid the potential for AI to cause severe harms to global society. While the need for these human characteristics seems obvious to most people, it seems that it is not that obvious to these key players. Professionals with strong technical skills tend to be less likely to think in an empathetic, humanist manner. They lack concern for ensuring that AI technologies are not misused, and some do not accept that they are responsible for the damage they cause.

“Attention, energy and investment should be focused on ACE in STEM – developing a culture of altruism, compassion and empathy among science and technology professionals and those training for that field, all of the key players involved in the AI lifecycle.”


Jan_Hurwitch

Jan Hurwitch
‘We must make futures thinking a lifelong priority and embed a foresight-forward attitude in our local cultures and national ecosystems.’

Jan Hurwitch, futurist and president of the Visionary Ethics Foundation, based in San Pedro, Costa Rica, wrote, “As we navigate our AI future, humans are most likely to adapt well in parts of the world that are currently being saturated by AI. But much of humanity is still living without access to the internet because of poverty and lack of access to technology. The gap between rich and poor countries globally and the rich and poor within societies in general will broaden.

“We are also seeing generational gaps in families and communities as younger generations adapt more easily to using and relying on AI. At the individual level, we are each becoming hybrids with a certain degree of human vs. tech in each of us. So the key question arises: How do I retain and increase my humanity as I navigate the ever-changing tech world? And an additional question must be: What ethical standards and protocols must we strengthen and develop as we transform ourselves within our transforming AI societies?

This is the time to illuminate and inform, to act with human dignity and strengthen our resolve to work together for the common good. … Unfortunately, at nearly every country level and in every social sector, governments and leaders have been very slow to react.

“In our most advanced societies, the AI changes are accelerating with unexpected speed. There is very little time for regulation to keep up or for the kind of ethical reflection we need in assessing what this means for our lives as interdependent humans with responsibilities towards nature and our natural resources.

“This is the time to illuminate and inform, to act with human dignity and strengthen our resolve to work together for the common good. This is why we need a stronger commitment to a global governance of artificial general intelligence (AGI), which will most likely be with us in just five years or so, by 2030, considering the current rate of AI-accelerated growth. Unfortunately, at nearly every country level and in every social sector, governments and leaders worldwide have been very slow to react.

“What should we already be doing now (actually yesterday) to develop our ‘AI ethical resilience’?

1) We must review the use of new technologies in our educational systems and individual classrooms and prepare teachers for the AI world.

2) We must make futures thinking a lifelong priority and embed a foresight-forward attitude in our local cultures and national ecosystems (a good example of this can be found in Finland).

3) We must strengthen the work of organizations like The Millennium Project, Future of Life Institute, OECD, etc., with people in needed expert groups, e.g., psychologists and sociologists.

4) We must place a stronger emphasis on studying the work of humanists like Erich Fromm, Paulo Freire, Leonardo Boff and Ervin Laszlo among others.

5) We must engage with and understand the research on trauma transformation by experts like James Gordon, Gabor Mate, Tara Brach and Thomas Hubl, among others.

6) We must share information like that found in this survey globally, in several languages, so that it can serve as a consciousness-raising instrument to be replicated at local levels and beyond. It is much easier to change attitudes in our native language, and we have little time to reach leaders and transform their thinking!

“Our latest Visionary Ethics Foundation virtual course, offered in Spanish, is titled ‘The AGI Challenge: Ethics, Rights and Being Human in an AI World.’ The course has three cycles and the videos are publicly available for free.”


Fendi_Tsim

Fendi Tsim
‘Facilitating digital literacy, metacognitive ability and the ability for deep critical thinking is vital. They work as sword and shield. … Critical thinking ensures a person engages proactively.’

Fendi Tsim, a behavioral research specialist at the University of Warwick, UK, wrote, “Individuals, whether embracing, resisting or struggling with transformative change, will rely on how much they know about themselves (the good old quest to understand thyself) and on how much they know about AI in terms of capabilities and limitations (digital literacy), as well as how and when effective human-AI interaction occurs (requiring one’s metacognitive ability to evaluate, reflect and learn over time).

“For societies, positive transformation depends on the availability of open and highly effective spaces for the kind of constructive debates that can allow the public to help shape the foundation and the direction of how AI is designed so we can effectively co-exist with AI. Of course, consistent, active listening is vital, especially in the process of creating and implementing guardrails and setting the rules as to how AI can or cannot be used.

“Individual resilience will rely on the metacognitive ability to evaluate in real time, reflect and learn. In my research group’s recent project, ‘SCAN’ (a human-centric, decision-making framework for effective task assignments with Generative AI), we note that individuals’ metacognition works as ‘a compass’ for navigating human-AI interactions in service of task completion. This is, indeed, vital when it comes to lifelong learning and other tasks in which AI works as a scaffold.

“While the process of enabling resilience occurs differently across individuals, it is important for us to recognize our own limitations and thus set up challenges for ourselves that are ‘challenging enough.’ One must focus on achieving a flow state of mind in the process. For instance, a person can occasionally prompt an LLM to work as ‘a sparring partner’ that challenges them rather than simply echoing or mirroring the person’s beliefs and knowledge. One can also ask an LLM to work as ‘an angel on your shoulder,’ a role in which it may offer a certain level of comfort along with constructive critique.

“I believe that facilitating digital literacy, metacognitive ability and the ability for deep critical thinking is vital. They work as ‘sword and shield.’ Critical thinking ensures that a person engages with AI proactively; digital literacy ensures a person has an accurate understanding of what happens after sharing information with AIs and, more importantly, understanding that AI, much like us, possesses certain biases in decision-making. And metacognition works as an internal engine to evaluate, reflect and learn over time – especially for lifelong learning.

“I can think of two prevalent vulnerabilities that are showing up these days. First, the problems related to information seeking and belief updating; detecting misinformation content (text and visual forms) is getting more difficult than ever. Second, the challenges of resolving interpersonal conflict. Recent research has shown that people often prefer to seek advice for resolving interpersonal conflict from GenAI rather than from a human. I suspect new coping strategies for issues like these will build on existing strategies, such as in-person education, perhaps using social- and game-based learning to teach people about these vulnerabilities.”


Yalda_Uhls

Yalda Uhls
‘We must strengthen the human capacities and systems that determine how change is absorbed.’ The best steps are investments in education, research-informed design and cross-sector collaboration.

Yalda Uhls, an internationally recognized expert on media’s impact on adolescent development and senior researcher at the UCLA Center for Scholars and Storytellers, wrote, “As a scholar of developmental psychology and media, and as a former entertainment executive, I have spent decades studying and speaking with both adults and adolescents about media effects. One pattern is strikingly consistent: Every major technological shift produces a moral panic that tends to overestimate the power of the technology itself while underestimating the role of existing social, economic and psychological systems.

“Research repeatedly shows that media and technology do not create values or behaviors in a vacuum; they largely amplify what is already present – both strengths and vulnerabilities. AI is likely to follow this same trajectory, but at unprecedented speed and scale.

Resilience in this moment depends less on resisting AI outright and more on cultivating the human capacities required to live alongside it effectively. Ultimately, resilience will not come from trying to slow technological change, but from strengthening the human capacities and systems that determine how that change is absorbed.

“As AI systems increasingly shape how people work, learn, create and make decisions, societies will both embrace and resist these tools, often simultaneously. Fear-based narratives, frequently amplified by mass media, risk driving reactionary regulation that is difficult to enforce and may inadvertently stifle beneficial innovation while failing to address root harms.

“Resilience in this moment depends less on resisting AI outright and more on cultivating the human capacities required to live alongside it effectively. Ultimately, resilience will not come from trying to slow technological change, but from strengthening the human capacities and systems that determine how that change is absorbed.

  • Cognitively, we must strengthen critical thinking, epistemic humility and the ability to evaluate information sources.
  • Emotionally, we need greater self-regulation, agency and tolerance for ambiguity as boundaries between human and machine intelligence blur.
  • Socially, collaboration, empathy and intergenerational dialogue become essential, particularly as young people often adapt more fluently than the adults tasked with regulating these systems.
  • Ethically, we must reinforce shared norms around responsibility, transparency and human dignity rather than outsourcing moral judgment to automated systems.

“My center’s work focuses on reinforcing both human and systems resilience by partnering directly with industry and the public to translate research into practice, helping creators, technologists and platforms maximize positive impact and teaching digital literacy, narrative awareness and agency so people can engage with emerging technologies thoughtfully rather than passively.

“The most effective actions we can take now are investments in education, research-informed design and cross-sector collaboration. New vulnerabilities such as over-reliance on automated decision-making, erosion of trust and diminished authorship will require new coping strategies rooted in media literacy, ethical reasoning and human connection.”


The third section of Chapter 4 features the following essays:

Hangyeol Kang: ‘AI literacy will become a baseline requirement for participation in modern society.’ Resilience comes from strengthening emotional intelligence, interpersonal understanding and ethical reasoning.

Meredith Goins: ‘The teaching of literacy and, specifically, digital literacy, as well as critical thinking and ethics is crucial.’ The library is a perfect place to continue to evolve public services and tools to build resilience.

Majiuzu Daniel Moses: The best route to resilience? ‘AI education must be made mandatory at all levels to boost people’s confidence in use and adoption of AI’ and allow them to participate well in its evolution.

Todd Hager: Lifelong learning infrastructure, access to mental health support are essential. ‘We need both physical and digital spaces for honest conversation about the challenges and not just the opportunities.’

Cristos Velasco: ‘Foster hybrid skills blending empathy, creativity and AI literacy, such as experimenting with relevant AI tools while prioritizing human judgment.’

Marek Rosa: Resilience requires keeping human agency. ‘We need to develop the habits, education and tools that make people more resistant to allowing themselves to be manipulated.’

Karen González Fernández: ‘The public must understand how AI works and how it influences their lives. … Ordinary people have very little scope of action to determine how AI will or will not be used.’

Anonymous Computer Scientist: ‘Just as today, in a world of cars, grocery stores and fast food, it’s important to prioritize physical health through exercise, it will be important to have a healthy mental lifestyle.’

Trust Matsilele: ‘AI will not play a significant role globally due to a lack of digital literacies, a lack of digital access and many people’s dystopian views. … Literacy will remain a challenge.’


Hangyeol_Kang

Hangyeol Kang
‘AI literacy will become a baseline requirement for participation in modern society.’ Resilience comes from strengthening emotional intelligence, interpersonal understanding and ethical reasoning.

Hangyeol Kang, a Ph.D. student at the University of Geneva researching and developing intelligence systems for the humanoid social robot, Nadine, wrote, “AI is already quietly embedding itself into everyday routines. People now use large language models (LLMs) to answer trivial questions, draft emails, plan trips and increasingly to guide major life decisions such as career choices or financial planning. In professional settings, the impact is even more dramatic. As a researcher, I have witnessed an unprecedented acceleration of scientific workflows. AI tools now assist across the entire research pipeline, from surveying literature and brainstorming ideas to running experiments and drafting manuscripts. Considering that ChatGPT was publicly released only in November 2022, the speed of adoption and capability growth is extraordinary.

“Looking ahead 10 years, AI systems will become deeply integrated into most human activities. They will not merely support us, they will actively shape how we learn, work, communicate and decide.

Just as digital literacy became essential in the internet era, AI literacy will become a baseline requirement for participation in modern society. Those who fail to adapt risk falling behind economically and socially. … Our uniquely human capacities, such as emotional intelligence, interpersonal understanding, ethical reasoning and creativity will become even more valuable. Rather than competing with AI on computation, humans should cultivate these irreplaceable qualities.

“This rapid change is already producing mixed societal responses. Some individuals and institutions eagerly embrace AI, while others resist it. For example, some schools encourage students to use AI tools as learning companions, while others strictly ban them. These conflicting approaches reflect a broad uncertainty about navigating an unprecedented technological shift that challenges established norms, shared ethics and long-term evidence.

“Over time, however, I believe embracing AI will become less optional and more necessary. Just as digital literacy became essential in the internet era, AI literacy will become a baseline requirement for participation in modern society. Those who fail to adapt risk falling behind economically and socially. At the same time, it is important to acknowledge a fundamental reality: Humans cannot keep up with the learning speed of AI. Machines already outperform us in pattern recognition, data processing and memory recall, and this gap will only widen. Eventually, AI systems are expected to surpass humans in many cognitive domains.

“Yet intelligence is not the whole story. Although robots can simulate emotion convincingly and replicate social behaviors, they cannot genuinely experience human connection. They do not feel vulnerability, empathy or meaning. They cannot share lived experience. This distinction matters. It suggests that our uniquely human capacities, such as emotional intelligence, interpersonal understanding, ethical reasoning and creativity, will become even more valuable. Rather than competing with AI on computation, humans should cultivate these irreplaceable qualities.

“Practical resilience begins with experience. I believe people should actively experiment with new AI systems, not blindly adopting but exploring their strengths and weaknesses firsthand. Understanding what AI can and cannot do builds realistic expectations and empowers informed decision-making. Educational systems should teach AI literacy early, emphasizing collaboration with tools rather than dependence on them.

“New vulnerabilities will also emerge. Overreliance on AI may erode human skills, deepfake technologies may undermine trust and algorithmic personalization could amplify polarization. Coping strategies must include digital hygiene, community-based learning and strong institutional safeguards.

“Ultimately, resilience is not about resisting AI but about shaping our relationship with it.

“AI will continue to evolve rapidly. The question is not whether it will influence our lives, but how consciously we guide that influence. By strengthening our human capacities, promoting ethical development and preparing proactively for change, we can ensure that AI becomes a tool for collective flourishing rather than fragmentation.”


Meredith_Goins

Meredith Goins
‘The teaching of digital literacy as well as critical thinking and ethics is crucial.’ The library is a perfect place to continue to evolve public services and tools to build resilience.

Meredith Goins, a group manager connecting researchers to research and opportunities at U.S. laboratories, wrote, “As AI advances and opens up opportunities we must recognize the challenges to access for people in rural communities, for indigenous populations and for lower- and middle-income countries (LMICs), among others.

“Additionally, the public must understand who controls the AI tools and models. Is it big tech or the government? Can they be trusted? In coming years, the teaching of literacy and, specifically, digital literacy, as well as critical thinking and ethics is crucial. In addition to ensuring these are taught in our schools, we need to offer opportunities for the general public to learn these skills. Public libraries offer individuals instruction on how to use digital tools, keeping up with new trends such as AI literacy. ‘The library’ is a perfect place to continue to evolve public services and tools to build the public’s resilience.

“Multiple test beds and funding sources are now starting to be developed to reinforce the overall resilience of human systems and infrastructure in the age of AI. For those working in the scientific and academic realm, non-profit organizations are often trusted more than corporate entities for support – sometimes more than government programs. We need to build trust across the entire AI ecosystem and the scientific system.

“A third major challenge for our future with AI is to build systems that rely on cheaper, more Earth-friendly ways to generate the power needed to run the massive data centers they require. The pollution detected due to the new Memphis xAI facility is one example of how the local population is being damaged so that one company can make more money.

“As with all changes, it is instructive to examine the S-curve of the diffusion of innovations originally described by researcher Everett Rogers. It takes time for technology to spread through the population. One can assume that the AI models will standardize once the early majority shows interest and engagement and the long-term market leaders emerge.”


Majiuzu_Daniel_Moses

Majiuzu Daniel Moses
The best route to resilience? ‘AI education must be made mandatory at all levels to boost people’s confidence in use and adoption of AI’ and allow them to participate well in its evolution.

Majiuzu Daniel Moses, founder and president of the Africa Tech for Development Initiative, wrote, “AI already permeates different spectrums and sectors of our lives. That is why I agree that AI systems will begin to play a more significant role in shaping humanity, including our decisions, work and daily lives. This role has grown as the AI revolution has continued, and it will continue to spread across societies.

“To embrace this transformative technology, individuals and societies must accept the reality that AI has come to stay and isn’t a trial phenomenon. Hence, people must continue to upskill. AI education must be made mandatory at all levels in different social strata to boost people’s confidence in use and adoption of AI. As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, cognitive, emotional, social and ethical capacities must be enhanced to ensure effective resilience and adaptation. Effective and sustainable AI education and orientation are critical to these.

We must ensure ethical practices and resources to enable resilience. Effective AI policy and governance guardrails as well as human oversight are necessary in order to guarantee, sustain and reinforce human and systems resilience. We must ensure AI works for humanity and not the opposite. This we can do by advancing human and ethically-centered AI for social good.

“The people must also be part of the design and workings of the AI evolution. Tech companies and the leaders forming tech policy and regulation should adopt a multistakeholder approach that includes the public’s voices and enables them to be involved and heard so that AI development is context- and local-specific. Digital literacy is crucial to this. There will be some resistance and doubts about AI adoption at the beginning. However, the transformative effect of AI will convince people that they won’t want to be left behind.

“We must ensure ethical practices and resources to enable resilience. Effective AI policy and governance guardrails as well as human oversight are necessary in order to guarantee, sustain and reinforce human and systems resilience. We must ensure AI works for humanity and not the opposite. This we can do by advancing human and ethically-centered AI for social good. It is also important to recognize that new vulnerabilities are likely to arise due to the dynamic nature of AI. To reinforce resilience, we must strengthen critical thinking, mental health, lifelong learning and community support while designing systems with redundancy, ethical safeguards and human oversight.

“Emerging vulnerabilities might include over-dependence on automation, digital inequality, algorithmic failures, misinformation and cascading system breakdowns. In coping with these, the essential strategies include building cognitive and AI literacy, fostering collaboration, encouraging adaptive leadership and working to embed resilience as a core cultural and design principle to protect human agency and trust in complex systems.”


Todd_Hager

Todd Hager
Lifelong learning infrastructure, access to mental health support are essential. ‘We need both physical and digital spaces for honest conversation about the challenges and not just the opportunities.’

Todd Hager, vice president at Alpha Omega, a strategic consultancy working with U.S. federal healthcare agencies, previously VP at Macro Solutions, wrote, “AI will certainly play a much more significant role in our lives going forward. As such, it is critically important for our education systems (all levels of schooling through undergraduate college and beyond) to do some fundamental rethinking about how best to embrace this inevitable change, allowing humans to make the most of it, which will be to our collective benefit.

“We must place far less focus on teaching the rote knowledge that AI can provide to us instantly and much more focus on creativity, emotional intelligence, ethical reasoning and the uniquely human capacities of meaning-making and relationship-building.

“Given that this learning should not stop at high school or in college, lifelong learning infrastructure becomes essential. Communities of practice in which people can share experiences and strategies for adapting will be valuable. We need both physical and digital spaces for honest conversation about the challenges and not just the opportunities.”


Cristos_Velasco

Cristos Velasco
‘Foster hybrid skills blending empathy, creativity and AI literacy, such as experimenting with relevant AI tools while prioritizing human judgment.’

Cristos Velasco, adjunct professor of information technology law, at the Baden-Württemberg Cooperative State University in Germany, wrote, “Society, and particularly the interaction of humans with AI systems, will ensure that individuals not only survive the AI age but also redefine what it means to be human and embrace positive changes in their daily interactions with AI systems and LLMs. It will be a positive and transformative change that will require gradual adaptation as more individuals see the benefits of using and interacting with AI systems in their daily lives.

“To cultivate effective resilience we must prioritize critical thinking and reflective discernment in order to evaluate AI outputs purposefully and effectively, avoid over-reliance on AI outputs and continue to use simple logic, deduction and personal assessments on the information and outputs generated by AI systems. We must also refine ethical discernment to address AI’s current dilemmas like bias, privacy and societal impact and risks while balancing innovation with the protection of fundamental human values.

“Finally, it is vital to foster hybrid skills, blending empathy, creativity and AI literacy in experimenting with relevant AI tools while prioritizing human judgment and discerning the outputs and information generated by AI systems. We must make sure not to rely fully on AI systems, using them in combination with the aforementioned skills.”


Marek_Rosa

Marek Rosa
Resilience requires keeping human agency. ‘We need to develop the habits, education and tools that make people more resistant to allowing themselves to be manipulated.’

Marek Rosa, Slovak entrepreneur, programmer and founder and CEO of GoodAI, a general AI research and development company, wrote, “I strongly believe AI will play a much bigger role in how we work, decide and live day to day. Not as a fancy tool we sometimes use, but as something that quietly sits in the background and influences many decisions. That is both powerful and risky.

“The biggest challenge won’t be AI itself, but how people use it. Just as we teach critical thinking today, we must extend that to working with AI: understanding how it works, where it fails and how our own mistakes or incomplete input can lead it to give us bad advice. People need to learn that they cannot blindly trust AI, but instead they must question it, cross-check it and know when to ignore it.

“Resilience requires that people consciously strive to retain their human agency. AI should advise, not decide. We need to develop the habits, education and tools that make people more resistant to allowing themselves to be manipulated by others using AI or manipulated due to their own laziness. New and growing risks to individuals – including over-dependence and loss of judgment – are real. The way to counter them is simple but difficult: people must learn to think like operators, not passive users; and our AI systems must be designed to always allow humans to stay in control.”


Karen_Gonzalez_Fernandez

Karen González Fernández
‘The public must understand how AI works and how it influences their lives. … Ordinary people have very little scope of action to determine how AI will or will not be used.’

Karen González Fernández, a professor-researcher expert in the philosophy of AI at Universidad Panamericana in Mexico City, wrote, “Unfortunately, the AI systems we use are being developed by powerful technology companies. Ordinary people have very little scope of action to determine how AI will or will not be used.

“People must be more conscious of technologies’ impact on their lives and must choose to think much more deeply about their relationship with technology. More required digital literacy education is necessary at all levels: first, so that people properly understand how AI works; and second, because if we are critical of how people develop and use AI today we can better address the ethical, social and political issues it raises.

“I don’t know if advanced AI will emerge or what impact it may have. We don’t know all the important variables yet. The systems have weaknesses like ‘hallucinations,’ and it is not clear whether these problems can be resolved. In addition, today’s AI technology consumes vast resources that are not unlimited. Finally, the companies promoting AI seem to be heading into a financial bubble right now. These problems, among others, could limit advances. The public must understand how AI works and how it influences their lives. If advanced AI is developed to be more influential and placed even more in charge of managing human systems, many people will not be critical of its likely further societal impact. That would be a bad scenario.”


Anonymous Computer Scientist
‘Just as today, in a world of cars, grocery stores and fast food, it’s important to prioritize physical health through exercise, it will be important to have a healthy mental lifestyle.’

A computer scientist wrote, “My understanding is that community and mutual support are key ingredients for resilience, so I think the best thing we can do is cultivate our human relationships. I also worry about the educational and cognitive effects of a larger reliance on AI. We need to ensure our children develop critical thinking skills even when there is a tool in their pockets that can answer all their questions.

“Just as today, in a world of cars, grocery stores and fast food, it’s important to prioritize physical health through exercise, it will be important to have a healthy mental lifestyle. That includes solving problems by ourselves to stay mentally sharp and spending time with other people to keep up our social muscles.

“My overall outlook is roughly in the middle between optimism and pessimism. I think those who paint a rosy utopian picture of the future with AI are misguided, but I don’t believe in forecasts of doom either. Humanity overall has proven to be very resilient to dramatic and rapid technological change in the past and I think this will also be true for AI. But this doesn’t mean that individuals will not suffer as a result of uncomfortable growing pains as society adapts.”


Trust_Matsilele

Trust Matsilele
‘AI will not play a significant role globally due to a lack of digital literacies, a lack of digital access and many people’s dystopian views. … Literacy will remain a challenge.’

Trust Matsilele, senior lecturer in journalism at Birmingham City University in the UK, previously at the University of Johannesburg, South Africa, wrote, “AI will not play a significant role globally due to a lack of digital literacies, a lack of digital access and many people’s dystopian views.

“The systems might be appropriate for entry-level jobs, but they are not fully capable in sectors of work that require human agency and cannot be easily automated.

“The issue of literacy will remain a challenge, especially in non-Western nations in Africa, Asia, Latin America and Oceania. There are also embedded challenges, such as data bias, that make it hard for non-Westerners to trust and domesticate AI systems.”


> Go to Chapter 5 – Work Quake: Navigating Labor Shifts and the Pursuit of Human Meaning