The contributors featured in this section shared a range of concerns. Among those discussed here are the high carbon (and thus environmental) costs of advanced AI systems, along with the human labor required to mine the materials that go into these systems and, eventually, to dispose of them. A sampling of other concerns raised in this chapter: “AI gives the state increased power both to influence behaviour and to shape collective understanding of what acceptable behaviour involves.” | “Expect the much broader spread of deepfakes, disinformation and post-truth content, to the extent that masses of electronic documents will be modified in hindsight to fit special interests’ points of view … societies could easily lose all reference points to the truths they now have.” | “The current movement toward condensing power in fewer and fewer systems, governments and individuals has to be redirected.” | “AI competes with deep immersion by offering impersonal summaries of human beings’ aggregate thought.”


Amy Sample Ward
Technology is not neutral; unless we build it with inclusive intention we cannot change its course

Amy Sample Ward, CEO of the Nonprofit Technology Enterprise Network (NTEN), said, “A better world is possible by 2040 than the one we have today. But will we actually live in that better world in 15 years?

“The current movement toward condensing power in fewer and fewer systems, governments and individuals has to be redirected if we want to assure that the impacts of AI technologies can actually be a net positive for individuals and for society. This requires a reversal of the current momentum of AI development, from who develops it and how to who funds it and how. There also must be much more attention paid to AI’s future role in democratic engagement, content development and copyright, artistic and cultural creation and ownership, and so much more.

“Without mechanisms of accountability that enable individuals and communities – especially those already systemically marginalized and harmed by biases in and access to technology – to manage their consent, receive restitution for harm and adopt the technologies in ways that best meet their individual needs, we cannot anticipate AI having positive impacts for most individuals and communities.

“Without access, participation, leadership and ownership in technology evolution, individuals and communities will continue to be systemically excluded, maintaining and furthering the oppressive divides we are experiencing today. Technology is not neutral, and unless we build it with inclusive intention we cannot change its course.”

Garth Graham
If you can’t tell a person from a machine, how can open systems of governance be achieved?

Garth Graham, long-time leader of Telecommunities Canada’s advocacy for community-based networks, said, “The idea of a model is inherent in an AI. That implies a set of assumptions that structures a narrative or, in essence, a story. But a life is a complex adaptive system where what happens next is not predictable and is not a story. I think this means that an AI that structures a narrative about me will always miss the point.

“In small communities, the most effective vehicle for social control is gossip. But gossip, as it structures a local collective opinion, is always a distortion of an individual’s reality. That is to say, in tightly controlled social networks privacy is and always has been an illusion. And when AI intensifies the capacity for social control and does so on the basis of a model of me that always misses the point, my being in social relationships is at risk of massive unintended consequences.

“To the degree that AI models my consumer behaviour, why should I care? I am already living in a world where that happens. But to the degree that AI models my social behaviour, I do care where the locus of defining acceptable social behaviour resides. A model of my social behaviour is an extension of myself. In a society characterized as open, I have a greater capacity to own the telling of my own story.

“Because of the capacity to model behaviour, AI gives the state increased power both to influence behaviour and to shape collective understanding of what acceptable behaviour involves. The powerful will not be able to resist using that increased power.

“The quality of human rights and social control practices in a society will depend on how individuals understand those practices and have a capacity to participate in their ever-shifting definition. The openness of the systems that shape collective opinion about acceptable behaviour is the key to engendering trust in the institutions of governance.

“In societies where machines have autonomous agency and you can’t tell a person from a machine, I don’t think we have any idea of how open systems of governance can be achieved.”

Charalambos Tsekeris
Don’t underestimate the dangers of unintended consequences embraced out of ignorance

Charalambos Tsekeris, senior research fellow in digital sociology at Greece’s National Centre for Social Research, commented, “In the next 15 or so years, AI (not AGI) will arguably complement humans by improving the productivity of workers of every kind and by creating new, augmented tasks and capabilities with the powerful help of machine learning. It will also provide better and more usable information for human decision-making and long-term planning.

“By 2040, new digital platforms will give people with different skills or needs the opportunity to become connected. Nation-states will seriously confront the most severe AI-related cyber-risks – e.g., data leaks, cyberattacks and automated wars – and bio-risks such as engineered pandemics. Sounds good, but along with all of this arrives a panoply of problems.

“In a messy world of global permacrisis, some countries will react by using AI-charged authoritarianism to avoid or slow down the emergence and cascade of such risks. This could lead to even higher levels of surveillance, a complete loss of privacy and new threats to the rule of law and fundamental rights.

“In parallel with this, we can expect the much broader spread of deepfakes, disinformation and post-truth content, to the extent that masses of electronic documents will be modified in hindsight to fit special interests’ points of view, including scientific articles and books. As a result, the future AI societies could easily lose all reference points to the truths they now have.

“The inconceivable dissemination of AI-generated bots and fake news in polarized political discourse will gradually be linked to alternative understandings of truth and honesty, as well as to the further disintegration of liberal democracy, public trust and civic mindedness. Therefore, what is most likely to be lost is democratic citizenship and genuine faith in liberal values, as well as the Aristotelean middle ground in democratic politics, which already appears to be shrinking.

“In the same context, AI will be a serious threat to quality journalism and the autonomy of traditional media. At the level of individuals’ daily lives, most people will be glued to their social media and caught up in their algorithmically constructed, private virtual worlds, perhaps living in an online goblin mode. This will disconnect them from real experience and empathic face-to-face (or human-to-human) communication, as well as from their community and democratic discourse, because in this newly segregated reality extremist and toxic voices are the loudest and much more attractive.

“Within these abundant social networking environments, manipulative, unethical, abusive and addictive behaviors will tend to be the norm, despite the unprecedented number of education opportunities and cultural resources available to the public. Like-minded, atomized individuals will have the perceived chance to create numerous life purposes within their boredom-free artificial echo chambers while nonetheless experiencing very little exposure to real human friendship or companionship.”

Toby Shulruff
The voices of the voiceless will continue to be underrepresented in AI systems

Toby Shulruff, owner and principal of a futures consultancy based in Beaverton, Oregon, predicted, “The changes in daily life due to AI will likely be both profound and largely invisible by 2040. Profound, because the use of complex algorithms driven by massive computing power processing vast quantities of data will increasingly be woven through the fabric of daily life in moderately wealthy communities, applied to hiring and employment, personal finance systems, shopping, environmental controls in buildings and infrastructure, navigating the internet, communication systems, transportation systems, the criminal justice system and health systems.

“They will also be profound because the costs and impacts of these systems in the form of human labor, material extraction and refining, manufacturing, shipping and, later, disposal will continue to be disproportionately borne by poorer communities globally. Vast quantities of energy are needed to drive these systems, which, for the time being, come with an unacceptably high carbon cost. Processes of extraction, manufacture and disposal already wreak ecological havoc. Human labor is needed to mine the materials, including rare earth minerals, that form the tangible stuff of AI, as well as to assemble it into the necessary equipment, and ultimately to dispose of it.

“Human labor is also needed to maintain and grow the informational component of computing systems, from guiding algorithms and correcting errors, to ‘feeding’ the AI by labeling content and data.

“Much of this change will be invisible, as so much of what AI does happens beneath the surface of daily life – in the cloud, within the systems that control infrastructure – and also because the material, environmental and human costs of the technology happen outside of moderately wealthy communities.

“If the public does not become aware of or understand the role that this technology plays in daily life and what it truly costs to maintain, and does not find some way to effect positive change in regard to its looming challenges, there will be few obstacles to the continued adoption of AI. The calculations and decisions of AI will cause people to have opportunities or to be barred from them in ways that are obscure, hidden and difficult to correct. The voices of the voiceless will continue to be underrepresented in AI systems, just as has been the case in past industrial and computing ‘revolutions.’”

Juan Ortiz Freuler
Predictive systems reduce the notion of the individual to a collection of characteristics

Juan Ortiz Freuler, an Argentinian and fellow at Harvard’s Berkman Klein Center for Internet and Society, previously senior policy fellow at the Web Foundation, wrote, “The mass adoption of predictive systems and their introduction into everyday activities will require that humans adapt their worldview. It intensifies a probabilistic turn, shifting focus from the past to the future, from individual to group behavior and from certainty to mere plausibility.

“Traditional categories, including the concept of the individual, are coming under pressure. These technologies are designed for segmentation and grouping, emphasizing insights obtained through a perspective of the group at the expense of individuality. The notion of the individual becomes a collection of diverse characteristics, sometimes too broad and at other times too narrow to be relevant in the systems driving our key economic, social and political processes.

“This shift embraces uncertainty through probabilistic thinking and elevates statistics and complex modeling as approaches to knowledge. ChatGPT, for example, embodies this shift by framing language as a system of probabilities, mixing truth with plausible fictions. This transformation, ongoing for decades, is less visible but more pervasive than technology-centric news cycles suggest. It builds on the quantitative shift taking place since the 1970s and extends it further into various aspects of daily life.”
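
Ortiz Freuler’s point that ChatGPT frames language as a system of probabilities can be made concrete with a toy model. The sketch below is only an illustration, with a tiny invented vocabulary and made-up probabilities, nothing like a production system; but it shows the core move: each next word is sampled from a probability distribution, so the output is plausible rather than necessarily true.

```python
import random

# Toy next-word model: each word maps to a probability distribution over
# possible continuations. Production systems such as ChatGPT do this with
# neural networks over huge vocabularies, but the principle is the same:
# language is treated as a system of probabilities.
bigram_probs = {
    "the":  {"cat": 0.5, "moon": 0.3, "economy": 0.2},
    "cat":  {"sat": 0.6, "slept": 0.4},
    "moon": {"landing": 0.7, "rose": 0.3},
}

def next_word(word: str) -> str:
    """Sample a continuation according to the model's probabilities."""
    dist = bigram_probs[word]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# Each run may produce a different sentence. The model ranks
# continuations by plausibility, not by truth.
sentence = ["the"]
while sentence[-1] in bigram_probs:
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```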

Wei Wang
Expect a dip in humans’ capabilities for rational deliberation and critical analysis

Wei Wang, a fellow at Fundação Getulio Vargas and PhD candidate in law and technology at the University of Hong Kong, observed, “One of the most salient and auspicious contributions of artificial intelligence resides in its capacity to alleviate repetitive labor in day-to-day occupational tasks, thereby affording humans increased temporal resources for emotional and intellectual enrichment.

“Nevertheless, it is essential to remain cognizant of the risks associated with excessive reliance on AI in routine work. Such overdependency could potentially attenuate human capabilities for rational deliberation and critical analysis, especially when AI serves as an auxiliary cognitive tool and users have insufficient AI literacy, such as limited knowledge of prompt engineering.

“This predicament is intricately linked to the current technological architecture of AI, which functions through physical hardware – for instance, computing infrastructure – at least so far. Consequently, in particular cases, a loss of access to this medium could result in users reverting to their original, unassisted state, unless they have already internalized the information AI produces. This may thus redefine the agenda for the learning processes and outcomes of our education.”

Jon Stine
Beware! An avalanche of high-engagement disinformation lies ahead

Jon Stine, director of the Open Voice Network, which is focused on conversational AI, commented, “I fear an accelerating gap between those who have the interest and ability to evaluate information sources (and who largely depend upon established, time-honored sources) and those who have neither the interest nor the ability. Generative AI promises remarkable efficiencies for the former group; it promises an avalanche of disinformation for the latter. Our digital and cultural divide will widen into a chasm as large institutions (business and political) find reward in feeding or distributing high-engagement disinformation.”

Peter Levine
As AI competes with deep immersion, people will lead more-impoverished lives

Peter Levine, associate dean of academic affairs and professor of citizenship and public affairs at Tufts University, observed, “An essential aspect of any good life is deep immersion in other individuals’ thoughts. This has both spiritual and civic advantages, enriching our private lives and our communities. AI competes with deep immersion by offering impersonal summaries of human beings’ aggregate thought. Deep immersion is hard, but without that struggle we will lead impoverished lives. AI will remove some of the immediate, practical payoffs of deep immersion. For example, it will become ever easier not to read a book if AI can summarize it. It is going to be challenging to preserve the liberal arts, especially the humanities, in the face of this technology.”

Karl M. van Meter
Advances in AI will not modify the structure of today’s societies, nor will they reduce inequities

Karl M. van Meter, director of the International Association of Sociological Methodology, based in Paris, commented, “The use of AI in communications and politics, and particularly on social networks, will cause more trouble of the type that the EU is already trying to deal with, and it will probably be more problematic in the U.S. Its use in education will probably increase but not fundamentally change how we learn. There will be new uses of AI in leisure and cultural activities, and certain adjustments will be necessary but not fundamental, as with all new technologies. In short, the wider use of AI is not likely to modify the structure of modern societies, nor will it reduce the inequalities it may well accentuate.

“Artificial intelligence (AI) systems have been in use in research and education since at least the 1970s and have made significant progress since then, greatly benefiting from the massive increase in computer capacities. But the basic model of massive stored data coupled with analysis by classification methods, regression methods and factorial methods hasn’t changed that much. That type of AI has produced ‘insights’ and has found and developed little-known information, but it has largely not ‘discovered’ or ‘created’ significant new knowledge; that remains the domain of ‘evolutionary algorithms,’ which are much more difficult to develop. However, the tremendous economic strength and advantage of AI-assisted multi-objective optimization methods and applications will continue to be the driving force behind the current development of AI, which is very much à la mode, a situation that will stabilize well before 2040.”
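
Van Meter’s “basic model” of massive stored data analyzed with classification, regression and factorial methods corresponds to tooling that long predates the current AI wave. The following is a minimal sketch, using scikit-learn and purely synthetic data (every feature and label below is invented for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))      # "massive stored data," in miniature
y_class = X[:, 0] + X[:, 1] > 0      # an invented binary label
y_value = X @ rng.normal(size=10)    # an invented continuous outcome

# Classification: assign observations to categories.
clf = LogisticRegression().fit(X, y_class)

# Regression: estimate a numeric outcome from the same features.
reg = LinearRegression().fit(X, y_value)

# Factorial methods: compress many variables into a few underlying factors.
factors = PCA(n_components=3).fit_transform(X)

print(clf.score(X, y_class), reg.score(X, y_value), factors.shape)
```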

Carol Chetkovich
We need to figure out how to democratize the use of AI and overcome inequality

Carol Chetkovich, professor emerita of public policy at Mills College, predicted, “I expect development of AI will be like other technological changes, but on steroids. It has the capacity to significantly increase human productivity and to enhance the availability of important knowledge, but like other technological advances it will create winners and losers. Unless we do a better job as a society in taking care of the ‘losers’ than we have in the past, inequality will increase significantly. And then there’s the existential problem: At what point might humans become obsolete?

“Those with AI-relevant knowledge and skills may acquire concerning levels of influence. I worry particularly about the use of AI in political activity. The increased ability to create and distribute disinformation is very troubling. I don’t hear enough public conversation about how this can be controlled or countered.

“I also think that the advantage of those with relevant technical knowledge will grow, and I don’t see that much thought is being given to universalizing knowledge/skills relating to AI development, use and control. We need to figure out how to democratize the use of AI.

“Perhaps AI will provide an answer to the question: How can we ensure that everyone has the level of understanding needed to live with AI? When I think about our challenges, I can see AI being very useful in some problems with potential technical ‘solutions’ (e.g., treating disease, countering climate change) but more threatening in problem areas involving human emotion (e.g., resolving violent conflict and power struggles).”

Evan Selinger
Advanced AI will enhance and automate surveillance to new heights of invasiveness

Evan Selinger, professor of philosophy at Rochester Institute of Technology and author of “Re-engineering Humanity,” observed, “A helpful way to think about AI, in the present and future alike, is to consider its relation to power. Viewed through this lens, surveillance is one of the most significant issues. AI enhances surveillance due to its efficiency and speed:

  • Automating facial recognition and facial analysis: Identifying anonymous people and inferring emotion and intent, measuring concentration, etc.
  • Automating object detection: Any object, including weapons.
  • Automating behavioral analysis: Seeking patterns and identifying undesirable ones, including unusual gatherings of people or aggressive movements.
  • Predicting future behavior: Analyzing surveillance data, including inferring future crime hotspots.

“Each of these technological advancements raises potent privacy and civil liberties issues. Collectively, they suggest we’ve entered an age in which the balance between security and personal privacy is being redefined, with AI-driven surveillance extending the reach of observation, classification and sorting to unprecedented levels. This new era necessitates a robust dialogue on ethics and the law to prevent abuse and ensure that the use of such technology aligns with democratic values and the protection of individual rights. If we don’t get governance right, 2040 could be a giant step closer to dystopia. AI-driven surveillance will erode obscurity in public, making it nearly impossible to enter public spaces without being identified, scanned and assessed. Among other harms, this could have massive chilling effects.”
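
To make the “automating behavioral analysis” item in Selinger’s list concrete: one common approach is to train an anomaly detector on “normal” patterns and flag whatever deviates from them. The sketch below is hypothetical, with an invented feature set and invented numbers; real deployments are far more elaborate, which only sharpens the governance concern he raises.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-person movement features:
# [walking speed (m/s), seconds lingering, size of surrounding group]
normal_traffic = rng.normal(loc=[1.4, 5.0, 2.0], scale=0.3, size=(500, 3))
unusual = np.array([[0.1, 60.0, 12.0]])  # slow, lingering, large gathering

# Train on "normal" behavior only; anything unlike it gets flagged.
detector = IsolationForest(random_state=0).fit(normal_traffic)
print(detector.predict(unusual))  # [-1] means "flagged as anomalous"
```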

Francisco Jariego
Our most pressing challenge is the need to effectively apply humans’ collective intelligence

Francisco J. Jariego, futurist, author and professor at the National Distance Education University of Spain, observed, “AI is a natural evolutionary path of information technologies. In the most probable scenario, development will continue apace without dramatic disruptions (e.g., the emergence of useful artificial general intelligence and similar innovations).

“AI is a ‘general technology’ with potential applications, opportunities and impacts on practically every area of activity, every economic sector and society at large. It will surely find interesting uses in science and academic research (e.g., managing information overload), research and development (optimization, design), industry (production and the supply chain), education, personal assistants and medical applications (drug design, diagnosis, attention and care). In a more speculative space, AI will surely help and interact with the emerging field of synthetic biology.

“This technology, like all technologies, introduces plenty of risks, and we face a huge challenge in making sure we understand them in order to create the conditions (fundamentally the incentives and controls) to keep technology on the ‘right’ path. Society’s past 25 years of experience with the Internet, the Web, search engines and personal devices clearly shows that we have not reached the full potential of these technologies. We are fighting numerous threats, and there is plenty of room for improvement, specifically in new forms of organization, social participation, decision-making, etc.

“The main concerns for individuals who use these tools are security, privacy and overcoming cultural prejudices and biases. Even if progress is limited, we will continue to move forward, adopting and adapting to new applications through deeper integration by means of ever-more-personal devices (watches, headsets, lenses, etc.) and, eventually, neuro-integration. This will stimulate even deeper debates and developments around personal identity, copyright and memory beyond life.

“Our current technology development pace will be fundamentally modulated by generational replacement; therefore, 15 years is a short-term horizon for big societal changes. Artificial general intelligence (AGI) is not yet clearly defined. If it evolves into an AI with general capabilities equivalent or superior to a human’s, it will very likely take more than 15 years to develop, and at the very least it will demand full integration of equivalent sensory inputs. However, ‘narrow’ artificial intelligence will continue to exceed human capacities in many different areas, as it has for years. Research and development in AI will help us to better understand the concepts of ‘intelligence’ and ‘consciousness.’

“There are two fundamental challenges slowing progress toward the successful development of effective AI governance. The first is that nearly all power is centered in the tech monopolies; the second is the public’s general lack of understanding of what the digital future might bring and how they can make a difference.

“We all know that the outsized power of Big Tech and its purely profit-based motives are a danger to our future, but we don’t know how to stop it, or we don’t want to do it in the face of present-day geostrategic tensions and geopolitical confrontation. And leaders in government and other public-serving spaces often lack an understanding of the technologies and fear creating barriers to innovation or being overprotective.

“Futuristic visions in today’s popular literature, cinema, video games, etc., are overwhelmingly dystopian, and to a large extent they feed us a steady diet of polarized confrontation in narratives and images. Some fiction amounts to naive utopian marketing of techno-optimism, while most is quite dystopian. But the impact of digital technology is not black and white and is unlikely to be all good or all bad.

“We are facing an informational and educational challenge. We must improve social awareness and work to facilitate further social progress. Disciplined fiction that reflects this could help us understand the challenges and the opportunities that lie before us. Although current myths may remain, we should work to help people see new images showing that the future of technology (in particular AI) is much more specialized.

“Over the next 15 years we must rethink our approaches for this emerging age and create new models and institutions that are capable of facilitating broad debate and meaningful agreements. Collective intelligence is our most pressing challenge. The potential benefits and threats could depend a lot more on humanity’s social aptitude and the legal environment (in particular, restrictions on individual liberties) than on the technological innovations themselves.”
