Theme 3

A share of these experts focused on the ways people’s uses of AI could diminish human agency and skills. Some warned it will nearly eliminate critical thinking, reading and decision-making abilities, as well as healthy, in-person connectedness, and lead to more mental health problems. Some said they fear the impact of mass unemployment on people’s psyches and behaviors due to the accompanying loss of identity, structure and purpose. Some warned that these factors, combined with deepening inequities, may prompt violence.

Rosalie Day
We will be more self-absorbed, post-truth will worsen, our sense of purpose will diminish

Rosalie Day, co-founder at Blomma, a platform providing digital solutions to clinical research studies, wrote, “Advances and proliferation of AI will allow us to be more self-absorbed than we are now. The post-truth era that the 2016 election ushered in will be backstopped by deepfakes. Cognitive dissonance will eventually disappear from our vocabulary because we can choose anything we want to believe and make the evidence for it – until we can’t.

“Humans thrive from community and a sense of purpose. Increasing dependence on AI bodes poorly for both.

“Social networks, remote working and online gaming and shopping are solitary pursuits, depriving us of shared experiences and increasing our sense of isolation. That these pursuits are sedentary does not help with health and stress levels. The sense of purpose we get from work will be diminished as we all become prompt engineers.

“The EU is grappling with privacy, sustainability and AI governance issues by creating institutional infrastructure and enforceable frameworks somewhat proactively. The U.S. culture, politically polarized and obsessed with performative aspects, is content to be the Wild West for the private sector. It’s this nostalgic laissez-faire attitude toward business, which in the past led us to be innovators, that now prevents us from playing well with others. The stakes have changed globally, yet we are still competing among ourselves. Effective lifesaving and pain-minimizing health technology advances will be a wash societally if the U.S. doesn’t turn around the domestic economic trends.”

Russ White
The threat is the loss of thinking skills and social cohesion and the destruction of dignity

Russ White, a leading Internet infrastructure architect and Internet pioneer, said, “I don’t know that even more-advanced AI – artificial general intelligence (AGI) – would pose the kind of physical existential threat people perceive.

“They say, ‘the AI does not love you,’ or ‘the AI does not hate you,’ or ‘the AI has a better use for the atoms that make you up,’ that sort of thing. The threat seems more likely to come from a general loss of thinking skills, decreasing social cohesion and the potential complete destruction of dignity.

“In some parts of the world, dignity has been redefined as the drive toward complete freedom of choice at every moment in life – and by that definition, an AI would actually increase dignity. From a Judeo-Christian perspective, however, dignity has a far deeper and richer meaning than that. From this perspective, AI is a serious threat.”

Stephan Adelson
There is likely to be an AI-driven war over people’s minds and emotions

Stephan Adelson, president of Adelson Consulting Services and an expert on the internet and public health, said, “By 2040, for most of the population, AI’s daily influence will appear to be rather benign and quite useful, but the purveyors of AI will invest their efforts in creating and operating the applications with the greatest potential to generate revenue.

“AI ‘agents’ are tools tech companies are conceiving to act as people’s helpful ‘personal advisors’ on a variety of topics. The recent announcement by Meta of its suite of AI ‘personalities’ is one example. It is likely these agents will be programmed to nudge those they ‘advise’ toward products and revenue-generating options packaged as advice.

“As AI progresses and learns, it (more accurately, they, as there will be abundant AIs) will come to understand each individual and their psychological and emotional makeup. I assume history will repeat itself: advertising, and the use of AI to generate data that leads to revenue, will be prolific.

“The effectiveness of advertising is very much about psycho-social understanding. Each individual, to a large extent, already has an advertiser’s profile – a database that ‘understands’ them, including their psychology, behaviors and social circles. Advanced AI can fine-tune and combine existing databases on individuals that include not only basics, such as websites visited and items viewed, but also much more personal conclusions about the person, their life and their motivations for their actions.

“This can be a very positive thing if the information is used confidentially to people’s benefit, for things like mental health services, social matchmaking and other options that allow for personal growth and development. Apps for meditation (such as the current VR apps Maloka and TRIPP), apps for peer-based support (such as the VR app InnerWorld) and others will likely use AI to provide more-individualized support and well-being options.

“But AIs already introduce – and will continue to introduce – privacy concerns that go far beyond simple behaviors and habits. AI’s ‘understanding’ of a person will likely surpass most individuals’ understanding of themselves. This dimension of understanding invades personal privacy in ways that could easily be exploited to manipulate individuals in extreme ways.

“It is easy to imagine an AI-driven war over people’s minds and emotions motivated by criminals and self-interested individuals in politics, government and business. A database that provides a much fuller picture of an individual (potentially a fuller picture than the person has of themself) is one that could hold great power over not only the individual but over groups of individuals and society at large.

“The areas that are of greatest concern to me are in the areas of politics and the so-called ‘culture war.’ When deepfakes are combined with AI, powerful alternative realities can be created. These alternative realities can easily sway perceptions, beliefs, emotions and actions. Those who lack the capacity to discern what is a created reality from what is a naturally occurring reality will continue to be exploited. Without proper safeguards and regulations, divisions in society will increase.

“There is a broader range of negative results that can come from the scenarios described above. Aspects of people’s mental resourcefulness will continue to be significantly influenced by their uses of AI. They will be less likely to write their own stories and to read deeply and be challenged to think in creative ways as AI becomes more of a tool to replace things like art, literacy and a broad vocabulary.

“When AI presents information in a personal and friendly way, the need for mental resourcefulness (including the skill of critical thinking) in working through problems or finding answers to questions decreases, because information delivered by a sort of AI ‘friend’ is immediately trusted. People are likely to continue to become more passive, and dependency on AI and other technologies that arise from it will increase dramatically.

“There are other positives. AI’s most-positive influence will lie in the areas of science and medicine. I anticipate cures for illnesses, more-comprehensive and effective treatment plans and better general care through the gathering and distribution of more-complete medical histories and a clearer picture of the interaction of the various biological systems within the body. Disabilities will be overcome, chronic diseases cured, and self-care will become more effective and integrated into our daily lives through discoveries made by AI and advice offered by personal AI ‘assistants’ or ‘agents.’

“We will gain a better understanding of our planet and our universe through AI tools that can ‘think’ logically and learn. Current theories on topics like creation and evolution (of life, planets, and the universe) will be proven and disproven as new theories arise.

“The potential for humankind to improve our own personal performance will exist simply through the competition that will arise between human efforts and those put forth by AI. I think AI will push many to rise to a more-sophisticated level of personal achievement if they feel that they could be rendered ‘useless’ in comparison.”

Sharon Sputz
There will be no individual agency when ‘algorithms tell us how to think’

Sharon Sputz, director of strategic programs at Columbia University’s Data Science Institute, commented, “It seems as if we are heading toward lives with no individual agency in which algorithms tell us how to think, resulting in the loss of humans’ ability to operate effectively without them. All of this is happening as society seems to be losing its ability to debate issues in a way in which we honestly listen to different opinions with open minds in order to learn and to expand our thinking.”

Evelyne A. Tauchnitz
The AI-fueled transition challenges the way we live and experience everything

Evelyne A. Tauchnitz, senior researcher at the University of Lucerne’s Institute of Social Ethics and member of the UN Internet Governance Forum’s Multistakeholder Advisory Group, wrote, “The proliferation of AI in the next 15 years will undoubtedly bring about a transformation characterized by more security but less personal freedom to move, express ourselves, purchase and make choices as we see fit.

“Many of these changes will go largely unnoticed by most people, as AI offers them a life that seems both safer and more convenient. In the pursuit of the promise of security and comfort, society may become complacent and oblivious to the encroachment on personal freedoms and privacy.

“The year 2040 is poised to bring about significant transformations in our daily lives and in the broader societal landscape primarily driven by the widespread proliferation of artificial intelligence. One of the most striking and consequential developments will be the increasing trade-off between freedom and security.

“While AI holds the promise of enhancing safety in cities and villages by addressing issues such as criminality, traffic accidents and natural disasters, doing so will invariably encroach upon our liberty to navigate public spaces, exercise financial autonomy, manage our time in accordance with our preferences and critically reflect upon the choices that we make.

“One of the most significant consequences of the coming shift towards greater public security is the pervasive utilization of AI-powered surveillance, biometric data collection and the analysis of individuals’ behaviours. These technologies will inevitably lead to the loss of privacy as society once knew it.

“The omnipresence of sensors and surveillance systems will cast a shadow over our personal lives, raising concerns about individual autonomy and civil liberties. People’s every move may become subject to scrutiny, fundamentally altering the nature of personal freedom.

“Moreover, the option of paying in cash – the only truly anonymous form of payment – may face the risk of being abolished for the sake of economic efficiency and the easier traceability of electronic transactions. In addition to the loss of privacy and increased surveillance, the advent of AI will exert its influence on how we communicate and exchange information. The instantaneous nature of communication may deprive individuals of the freedom to respond at their own pace. The expectation of constant online availability and responsiveness may leave individuals feeling pressured to prioritize the demands of others over their own time and preferences.

“The gradual transition from personal freedom to public security will happen incrementally, often in small, barely noticeable steps. However, the consequences of this shift are profound, as it endangers the way we live, communicate, spend our money, experience leisure activities and engage in social activities. The risk lies in these small deviations: what we consider ‘normal’ today will become a luxury good in the future. What we lose in the end are personal freedom, autonomy, privacy and anonymity.

“Tragically, we might remain oblivious to what we forfeit in pursuit of comfort and ‘security’ until it’s too late. These incremental alterations are likely, in aggregate, to culminate in wide-reaching consequences for our daily lives that nobody desired and nobody could fully anticipate.

“Returning to the life we once knew would prove to be an insurmountable challenge, as individuals and institutions may be hesitant to shoulder responsibility for any perceived decrease in public security and any increase in potential victims (such as those affected by terrorist attacks) due to insufficient surveillance. This fear of accountability can further exacerbate the erosion of personal freedoms.

“This scenario conjures echoes of Aldous Huxley’s ‘Brave New World,’ in which citizens willingly sacrifice personal liberty for the allure of comfort and security, illustrating the complexity of the trade-off between freedom and the promise of better security that will define life in 2040.”

Giacomo Mazzone
What will humans become if they lose the agora and the ability to reason with no assistance?

Giacomo Mazzone, secretary-general of Eurovisioni and member of the advisory council of the European Digital Media Observatory, wrote, “I have two primary worries. The first concerns the vanishing of the public sphere. By 2040, each individual – thanks to AI apps – is likely to live their own, unique life experience, and the number of face-to-face, in-person interactions people have is likely to be reduced to nearly zero.

“Teleworking from home will reduce personal exchanges with colleagues. Personalized access to information – excluding anything that does not correspond to the specific AI settings made by each person – will create individualized realities.

“We could see the emergence of millions of different alternative truths, magnitudes more than what we see today. In such a scenario, how can democracies possibly survive? The concept of democracy is based on the idea of the ‘agora,’ the public square, where facts (not multiple alternative ‘realities’) are presented to citizens and are commented upon and analyzed through people’s in-person interactions with others.

“What happens if humanity’s uses of these technologies shift society into a state where there are few, if any, real personal connections and any sort of shared set of common, fact-anchored truths disappears and each individual comes to live in their own private world with highly varied priorities and views, all based on ‘alternative facts’?

“My second key concern regards human skills development. The introduction of the pocket calculator and calculator apps has rendered most humans incapable of reasoning through even very simple mathematical operations. The introduction of navigation software through tools such as Google Maps has led to a progressive decline in humans’ ability to maintain, unaided, a firm grasp of their geographic orientation and of how to get from here to there.

“What will happen when AI tools begin to replace much more of humans’ own brainwork in more and more of their myriad day-to-day actions of simple to medium complexity? We will lose other useful basic skills that humans have cultivated over the course of centuries, just as we have come to lack the ability to make mental calculations and have lost our once-sharp innate sense of physical orientation. Then what will we become?

“What will happen in an extreme situation in which AI tools are not accessible (e.g., during the natural hazards expected due to climate change)? Will there be a limit to these types of losses of our capacity for brain-driven intelligence? Could the type of culture seen in ‘Judge Dredd’ (a good science fiction book but a bad movie) become reality one day?”

Louis Rosenberg
AI systems are being taught to ‘master the game of humans’

Louis Rosenberg, extended reality pioneer, chief scientist at the Responsible Metaverse Alliance and CEO of Unanimous AI, said, “I’d like to explain the concept of sentient AI and the ‘arrival-mind paradox.’ As I look to the year 2040, I believe AI systems will likely become super-intelligent and sentient.

“By ‘superintelligence,’ I’m referring to cognitive abilities that exceed humans on nearly every front, from logic and reasoning to creativity and intuition. By ‘sentience,’ I’m referring to a ‘sense of self’ that gives the AI system subjective experiences and the ability to pursue a will of its own.

“No, I don’t believe that merely scaling up today’s LLMs will achieve these milestones. Instead, significant innovations are likely to emerge in the basic architecture of AI systems. That said, there are several cognitive theories that already point toward promising structural approaches. The one I find most compelling is Attention Schema Theory, developed by Michael Graziano at Princeton.

“In simple terms, Attention Schema Theory suggests that subjective awareness emerges from how our brains modulate attention over time. Is the brain focused on the lion prowling through the grass, the wind blowing across our face, or the hunger pangs we feel in our gut? Clearly, we can shift our attention among various elements in our world. The important part of the theory is that a) our brain maintains an internal model of our shifting attention, and b) it personifies that internal model, creating the impression of first-person intentions that follow our shifting focus.

“Why would our brains personify our internal model of attention? It’s most likely because our brains evolved to personify external objects that shift their attention. Consider the lion in the grass. My brain will watch its eyes and its body to assess if it is focused on me or on the deer between us. My brain’s ability to model that lion’s focus and infer its intentions (i.e., seeing the lion as an entity with willful goals) is critical to my survival. Attention Schema Theory suggests that a very similar model is pointed back at myself, giving my brain the ability to personify my own attention and intention.

“Again, it’s just one theory and there are many others, but they suggest that structural changes could turn current AI systems into sentient entities with subjective experiences and a will of their own. It’s not an easy task, but by 2040, we could be living in a world that is inhabited by sentient AI systems. Unfortunately, this is a very dangerous path. In fact, it’s so dangerous that the world should ban research that pushes AI systems in the direction of sentience until we have a much better handle on whether we can ensure a positive outcome.

“I know that’s a tall order, but I believe it’s justified by the risks. Which brings me to the most important issue – what are the dangers?

“Over the last decade, I have found that the most effective way to convey the magnitude of these risks is to compare the creation of a sentient AI with the arrival of an alien intelligence here on Earth. I call this the ‘arrival-mind paradox’ because it’s arguably far more dangerous for an intelligence to emerge here on Earth than to arrive from afar. I wrote a short book called ‘Arrival Mind’ back in 2020 that focuses on this issue. Let me paraphrase a portion:

“An alien species is headed for Earth. Many say it will get here within the next 20 years, while others predict longer. Either way, there’s little doubt it will arrive and it will change humanity forever. Its physiology will be unlike ours in almost every way, but we will determine it is conscious and self-aware. We will also discover that it’s profoundly more intelligent than even the smartest among us, able to easily comprehend notions beyond our grasp.

“No, it will not come from a distant planet in futuristic ships. Instead, it will be born right here on Earth, most likely in a well-funded research lab at a university or corporation. Its creators will have good intentions, but still, their work will produce a dangerous new lifeform – a thoughtful and willful intelligence that is not the slightest bit human. And like every intelligent creature we have ever encountered, it will almost certainly put its own self-interests ahead of ours.

“We may not recognize the dangers right away, but eventually it will dawn on us – these new creatures have intentions of their own. They will pursue their own goals and aspirations, driven by their own needs and wants. Their actions will be guided by their own morals and sensibilities, which could be nothing like ours.

“Many people falsely assume we will solve this problem by building AI systems in our own image, training them on vast amounts of human data. No – using human data will not make them think like us, or feel like us, or be like us. The fact is, we are training AI systems to know humans, not to be human. And they will know us inside and out, be able to speak our languages, interpret our gestures, predict our actions, anticipate our reactions and manipulate our decisions.

“These aliens will know us better than any human ever has or ever will, for we will have spent decades teaching them exactly how we think and feel and act. But still, their brains will be nothing like ours. And while we have two eyes and two ears, they will connect remotely to sensors of all kinds, in all places, until they seem nearly omniscient to us.

“And yet, we don’t fear these aliens – not the way we would fear a mysterious ship speeding towards us from afar. That’s the paradox – we should fear the aliens we create here far more. After all, they will know everything about us from the moment they arrive – our tendencies and inclinations, our motivations and aspirations, our flaws and foibles. Already we are training AI systems to sense our emotions, predict our reactions and influence our opinions.

“We are teaching these systems to master the game of humans, enabling them to anticipate our actions and exploit our weaknesses while training them to out-plan us and out-negotiate us and out-maneuver us. If their goals are misaligned with ours, what chance do we have?

“Of course, AI researchers will try hard to put safeguards in place, but we can’t assume that will protect us. This means we must also prepare for arrival. That should include making sure we don’t become too reliant on AI systems and requiring humans in the loop for all critical decisions and vital infrastructure. But most of all, we should restrict research into sentient AI and outlaw systems designed to manipulate human users.”

Mary Chayko
People may not even notice the losses they are suffering as the world is infused with AI

Mary Chayko, professor of communication and information at Rutgers University, said, “By 2040 it will be increasingly difficult to know whether something that we see or experience has been human-generated. And it may matter less and less to us, as successive generations grow up in an AI-infused world.

“Part of this shift can be positive, if and when the technology is used to expand current ideas of work and creativity in productive, life-affirming ways. But the temptation will be to use it exploitatively and to maximize profits.

“In the process, we may find that our humanity – what makes us special as human beings – is being gradually and systematically stripped away. As AI becomes a taken-for-granted aspect of everyday life in the coming decades, will we even notice?”

Kevin Yee
We might be heading toward a post-knowledge generation; we may be at AI’s mercy

Kevin Yee, director of the Center for Teaching and Learning at the University of Central Florida, said, “2040 is a long time horizon. In the past 15 years, we’ve had paradigm shifts in technology in the form of Web 2.0, then again with smartphones and apps. In less than a single year, LLMs have gone viral and seen rapid adoption, and bigger and faster models are being developed every six months.

“There is every reason to believe that AI development will meet or exceed Moore’s Law-type acceleration. Futurist Ray Kurzweil predicted that the singularity – which ought to come at about the same time as AGI – will happen in roughly this time frame. The pace of change will rock normal conventions. Not many folks yet appreciate how much will change just in absolute terms, let alone the relative pace of change, which will eventually feel non-stop.

“This will reverberate in all aspects of society, politics, economy and workplaces. It will make as much difference in everyday lives as widespread electricity did. Historians note how the times before electricity and after electricity differed; a pre-AI existence, even in a technologically advanced first-world country, will look quaint by 2040. Gained will be massive productivity. There will be massive disruption to jobs. As is often said, ‘You may not lose your job to AI, but you may lose your job to someone who knows how to use AI.’

“Lost will be foundational knowledge in the younger generations. Because AI makes it easy to cheat on foundational knowledge in schools and colleges, teachers and professors will soon switch to focus on higher levels of Bloom’s Taxonomy [a mapping of thinking, learning and understanding]. That makes sense for students who already have foundational knowledge, but it will soon prove disastrous. How can future alumni evaluate an AI’s output if they don’t know how to spot the errors or suggest improvements?

“By 2040, our college graduates will be great at using AI, but will end up trusting AI output with no way to question it. That may indeed have profound effects on our relationship with AI, as perhaps seen in many science-fiction films over the years. The trend that started with the arrival of Google’s search engine – with students believing that ‘knowledge is outside of me’ – will get worse in the AI era.

“What’s unclear is what will happen once most people in the workforce are of the ‘post-knowledge generation.’ We might stagnate as a society, unable to lurch forward because we simply trust the AI. If we build safeguards into the AI so that it only follows our lead, we might simply maintain the status quo. More ominously, if AI (or, even more ominously, AGI) determines that humanity needs help to evolve, we may be at its mercy.”

Katindi Sivi
‘It is imperative to start questioning AI and big data assumptions, values and biases’

Katindi Sivi, founder and director of the LongView Group, a socioeconomic research, policy analysis and foresight consultancy based in Nairobi, Kenya, said, “The power of AI to solve problems and transform life should not erase the need for vision or human insight. The more AI advances, the more I feel that people will relinquish their human abilities to think and feel to machines.

“AI and its sub-components like big data will increasingly become the sole determinant in decision-making processes. It is necessary to ask critical and objective questions about what all this means: Who has access to what data, how is data analysis deployed and to what ends? AI companies have privileged access.

“There is a divide between the big-data rich and the big-data poor as well as among the three classes of people – the creators of AI, those with the means to collect and own the data and those with the ability to analyze it.

“It is imperative to start questioning AI and big data assumptions, values and biases and to effectively democratize the space. Conversations must be held and mechanisms must be put in place around accountability principles that apply across the board.

“We must also work to ensure that people gain enough digital literacy to understand the gap between what they want to do online and what they should do. Those who fail to bridge this gap and make the right choices tend not to notice the gradual corrosion of their autonomy and slide ever deeper under powerful people’s control. Vices such as privacy intrusions, invasive marketing, gross biases, misinformation and the curtailing of human freedoms are among the many already creeping in.”

Michael Wollowski
Will our future resemble the fearful outcome in E.M. Forster’s short story ‘The Machine Stops’?

Michael Wollowski, professor of computer science at Rose-Hulman Institute of Technology and associate editor of AI Magazine, wrote, “Given that the world is unwilling to quickly act to contain climate change, I am taking a rather dim view of the will of societies to regulate AI towards the betterment of civilization.

“I am afraid that the negative impact that social media have on people’s ability to directly communicate with each other, and on civility in general, will be amplified and accelerated by advances in AI.

“I am very concerned that we will bring about a world depicted in E.M. Forster’s short story ‘The Machine Stops.’ He writes, ‘Cannot you see … that it is we who are dying, and that down here the only thing that really lives is the Machine? We created the Machine to do our will, but we cannot make it do our will now. It has robbed us of the sense of space and of the sense of touch, it has blurred every human relation, it has paralysed our bodies and our wills.’”

William L. Schrader
AI adds greater velocity to the vector of humanity’s troubles

William L. Schrader, co-founder of PSINet, 2023 Internet Hall of Fame inductee and advisor to CEOs, wrote, “AI – which is controlled by the wealthy and powerful – accelerates many threatening processes. And it is too late to stop it. Think about the one-tenth of one percent holding 99.9% of today’s global wealth in all countries. AI will make the rich richer and the poor poorer, and the differential will be substantially greater by 2040.

“Fascists will dominate nearly all governments, including that of the U.S. AI will drive further dangerous military activity and intelligence gathering. Global warming and pandemics will significantly worsen by then; all coastal communities across the world will be covered by water, and island nations across the globe may disappear. AI adds greater velocity to the vector of humanity’s troubles.

“The death toll from all of this will be frighteningly epic. The planet will survive. Humans will too. But I believe billions will die in the next few decades from conflict, pandemics, global warming (starvation, flooding, drought, dead oceans) and more. People will perish before our eyes and get no help. This all seems inevitable to me. Earth’s population will shrink to one-tenth of today’s number. Wake up and smell the gunfire.”

Continue reading: A selection of essays tied to Theme 4