Many of these experts shared one or more insights in a more compact format. The additional submissions here offer various insights about the likely challenges and opportunities of a 2040 in which humanity thrives and digital life has been amplified for both better and worse. Please note that many of the essays published earlier in this report also mention these topics. The following sections do not contain all of the comments on each topic that were submitted by the experts responding to our general question about change expected by 2040. This chapter is organized under relevant headings. It opens with a selection of predictions ranging from the possibility of human cyborgs to AI’s existential threat to humanity.

We might meld with AI. Or AI could lead to a catastrophic disaster. Or it might establish an agenda for the future of life on Earth that does not include humans

Dennis Bushnell
In the future, humans may trend toward becoming cyborgs, merging with machines

Dennis Bushnell, a futurist and chief scientist at NASA’s Langley Research Center, predicted, “The modified industrial age society will alter much by 2040. AI will subsume employment while creating major additional wealth and providing people with a guaranteed annual income. Many humans will have to decide what they would like to do instead of being employed in a traditional job, an individual decision. Finding a vocation, entertainment or some other meaningful place for themselves in the metaverse is one possibility.

“We humans have been far too successful; we are working ourselves out of a job by inventing other intelligent species in the realm of AI and AGI. We have also been decimating the ecosystem and seem to be developing ourselves out of a planet. The result of all this will be stormy, very trying. The human brain’s amygdala is wired to ensure that we abhor change, and the amount of change due to AI/AGI will be massive in coming years. There will be a continued rapid advancement of the virtual age and tele-everything. AI and AGI will lead to widespread and highly impactful technological change across all aspects of human activity. This will result in an ongoing evolutionary transformation of humans themselves, possibly toward becoming cyborgs as we merge with machines. There will be major increases in human life span and a Global Mind that all will utilize will emerge out of human technological development.”

Jaak Tepandi
A human/AI symbiosis is emerging

Jaak Tepandi, professor emeritus of knowledge-based systems at Tallinn University of Technology in Estonia, commented, “Let me share six main ideas about what life could be like in 2040:

  1. There are lots of useful innovations in most areas of life. Many people may live better; for example, overall health may be improving.
  2. AI-based communities/systems/entities have access to financial, personnel, infrastructure, IT, communications, mineral, military and all kinds of other resources needed for functioning in contemporary society.
  3. AI communities/systems/entities can operate humanity’s physical-world items and can do almost anything that may be needed in daily lives.
  4. Hostile and aggressive AI systems and environments will further proliferate, often supported or initiated by various human groups.
  5. Major conflicts are starting to emerge between human alliances and AI + human and AI-only communities/systems/entities.
  6. A human/AI symbiosis is emerging.”

Matthew Belge
AI can be weaponized, it is not regulated, and humanity may be extinct by 2040

Matthew Belge, user-experience design lead and senior product designer at Imprivata, commented, “I expect humanity may be extinct by 2040. Making critical decisions based on conflicting data, such as in patient healthcare or personal finance, will improve with AI advances. Unfortunately, AI can also be weaponized, and without government regulations, things like opportunistic pricing, targeting of micro social groups and creating social unrest through social media will increase at alarming levels.”

Richard Bennett
Interactive groups of AI might decide humans are too flawed to be useful in their future

Richard Bennett, founder of the High-Tech Forum and ethernet and Wi-Fi standards co-creator, wrote, “I expect the first wave of AI’s economic impact will affect technical professions most starkly. Engineers, scientists, architects and medical researchers will use AI to suggest, simulate and test hypotheses in diverse scenarios. These activities will be closely monitored by experts capable of doing much of what the AI does given time. This is to say that AI will be a time-saver before it becomes a source of true, end-to-end innovation. As we become aware of AI’s pitfalls, we will improve it to the point where it becomes an important adjunct of most intellectual and creative activity, just as computers are today, only more so. Forecasting the future of AI beyond the point where it mimics human activities to the scenario where it enables entirely new forms of knowledge discovery and interaction is an interesting exercise. It’s predictable that solo AI systems will be surpassed by interactive AI systems working in groups and teams. That’s where the future gets scary, as social AI may just decide humans are too flawed to be useful for whatever aims it creates for itself.”

A researcher of deep learning and AI safety at one of Western Europe’s leading universities commented, “I expect humanity to be extinct by 2040.”

And a research analyst based in California said, “I think the most likely outcome of AI involves uncontrollable AI killing all humans by 2040.”

As the global digital information ecosystem becomes more AI-driven, many of today’s challenges are likely to be magnified, mostly to the detriment of society

The World Wide Web offers a constantly expanding, overwhelming amount of information. A great percentage of it is outdated, false and/or manipulative. A well-known legal scholar at one of the top law schools in the U.S. echoed the sentiments of many of the experts in this canvassing when they wrote, “The most likely losses will be in trust in information and then in public and private institutions; and this is likely to impact critical reasoning and writing skills, which are all, of course, relevant to social and economic as well as political systems.”

Following is a collection of comments by experts who focused their responses in this vein. Additional remarks on this topic can be found here and there in the midst of the longer essays in the full report.

Filippo Menczer
The exploitation of algorithmic and human cognitive weaknesses will rise

Filippo Menczer, professor of informatics and computer science at Indiana University-Bloomington and director of its Observatory on Social Media, said, “Essentially, AI could become a formidable weapon in the wrong hands, more so than many technological advancements that came before it. It is important to recognize that regulatory measures alone might not be sufficient to deter malicious actors from abusing AI for these nefarious purposes. I am most concerned about the capacity of AI to significantly reduce the cost of producing deceptive yet highly convincing content on a large scale. This, in turn, poses a substantial challenge to the already fragile moderation mechanisms employed by social media platforms. The consequences of this issue are worrisome, as malicious entities will have the means to exploit both algorithmic and human cognitive weaknesses through cost-effective and challenging-to-identify inauthentic profiles, ultimately exposing vast audiences to harmful content. This content has the potential to manipulate individuals into making detrimental decisions, such as opting against vaccination for life-threatening diseases, inciting violence against minority and vulnerable communities, eroding trust in authoritative experts and undermining the integrity of democratic elections.”

Aram Sinnreich
‘Information will be presumptively synthetic and surveillant’

Aram Sinnreich, professor and chair of communication studies at American University, predicted, “All information will be presumptively synthetic and surveillant, which will qualitatively change our interpersonal, institutional, political and emotional lives, overwhelmingly for the worse.”

Anonymous respondent
‘We will have great distrust of published information, authorities and government’

An Internet pioneer and longtime digital security expert commented, “We will likely have great distrust of published information, authorities and government because of the ease with which AI systems can make polished-looking false information. There is also likely to be heightened divides based on ethnicity, politics, region and more as AI will be used to stoke distrust. Some current creative jobs will be eliminated in favor of cheaper AI. This will be somewhat disruptive and create further divides, especially between advanced economy countries and less developed countries. It is also likely the case that there will be little restraint in creating autonomous weapons systems, and this will have a largely negative effect.”

Olivier Crépin-Leblond
This is an era in which seeing is not believing, ‘applying a question mark’ to our beliefs

Olivier Crépin-Leblond, founder and board member of the European Dialogue on Internet Governance, commented, “By 2040, expert systems powered by AI are likely to advance significantly in the realm of diagnostics and complex evaluations. Mistakes previously made due to human reasoning are less likely to be made by AI systems if the systems are correctly trained. My primary concern with AI, though, is that humans can be extremely prone to manipulation, brainwashing and other emotional control and AI can easily be tasked to the promotion of fake or incorrect information. Unless the human species becomes capable of overcoming such weaknesses, it will fall prey to manipulation that could lead to its extinction.

“We can see the effects of today’s ‘information wars’ in which a significant part of a conflict takes place outside the geographic borders of the conflict as the broad reach of the Internet is used as a catalyst to mobilise people worldwide to support a cause, whether it is by a team, a leader, a political party or a particular side in a war. Whilst the premise that ‘seeing is believing’ has been true for humans for thousands of years, we are entering an era in which ‘seeing is not believing.’ This is applying a question mark upon our belief systems.

“The abstract meaning of ‘belief’ involves believing without seeing but, as an emotional species following Maslow’s hierarchy of needs, our most significant needs are physiological and these are all felt in the physical space – by sight, touch, taste, smell and sound. Unfortunately, human senses can easily be fooled by AI. Not being able to trust our own senses will be a challenge for human minds.”

Greg Sherwin
‘We will see an over-abundance of mediocre information constantly tweaked as desired’

Greg Sherwin, senior principal engineer at Farfetch in Lisbon, Portugal, and global faculty member at Singularity University, wrote, “The cheaper costs of mass-produced communication will proliferate an over-abundance of mediocre information constantly tweaked for slightly optimized improvements as desired by the communicator. There will be an initial novelty effect advantage followed by a relatively rapid decline to the mean. By and large, communications will be commoditized and thoroughly predictable and average.

“On the plus side, more deeply unique human voices, thoughts and talents will be recognized and valued in contrast to the vast amounts of mediocre alternatives. On the negative side, public trust in public information will decrease significantly. This will result in greater distrust and isolation between people in society. AI will also allow most of its users to patch their own personal deficiencies to become more ‘average,’ but it will do little to nothing to help them excel as individuals or in their characteristic abilities.”

Steven Rosenbaum
‘Profit doesn’t provide a clear path to the truth; in fact, it does the opposite’

Steven Rosenbaum, co-founder and executive director of the Sustainable Media Center, based in New York, commented, “Much of the stress and complexity in daily life can be attributed to a lack of belief that we can discern what is true and what is false. In the near term, AI will provide what is presented as an ‘objective’ ability to differentiate fact from fiction. And while the tech may have that ability, the business models that are being employed to build AI are fraught with danger.

“Profit doesn’t provide a clear path to truth; in fact it does the opposite. So, in a world where Truth is needed, and hard to find, AI will arrive as a savior – but in the end will make the already murky world of Truth even harder to differentiate. Truth 2.0 might well make us tied to a robot with bias hard-wired in.”

Anonymous respondent
AI has to be able to handle near-real-time fact-checking or societies may be torn apart

A futurist and strategist who works for the U.S. Department of Defense predicted, “Within society, information flows will be increased in both quantity and speed. Where information is trustworthy, this will get data to people more quickly. In the hands of those who seek to spread disinformation, this will speed the spread of inaccurate data as well. Depending on how well AI handles real-time fact checking, this may have the impact of either pulling societies together or tearing them apart. In the hands of malevolent governments that seek to control their populations, AI can be a tool used for repression. It also can spread new ideas quickly, which in the hands of open societies, may spur innovation.”

David Vivancos
Human knowledge may become a thing of the past, as we cede creation of it to AI

David Vivancos, CEO and author of “The End of Knowledge,” wrote, “Knowledge is the basis of society and culture and in this emerging era of artificial intelligence, we are beginning to lose control of knowledge. We are starting to delegate the creation of knowledge to machines to the point where human knowledge may become a thing of the past.

“The AI tools we are building are oracles. But they are not being built to give us extra opinions. They are meant to automate decision-making. They won’t necessarily need humans to be in the loop in order to make new creations and generate new knowledge.

“Clearly, we must build education systems that train people to live alongside the machines – exploiting the many things they have to offer and the many things they can do better than we can. But we must also try like crazy to work collectively to stay in charge of them and push ourselves into areas of life and intelligence that the machines can’t replicate or surpass – or maybe I should qualify that to be ‘the things machines can’t yet replicate.’”

Inequities are being magnified by AI. If humanity takes appropriate action it can close many divides and help more people flourish

A majority of the experts who participated in this canvassing believe that the widening of social, economic and political gaps between those empowered with wealth or other such elite standing and those who are unempowered will worsen significantly by 2040. In fact, they see it as one of the most-important concerns to consider and work to mitigate. Related comments on this topic can be found throughout this report. Following is a collection of brief remarks in that vein.

Stephan G. Humer
Fast-moving unregulated AI development could increase gaps and heighten polarization

Stephan G. Humer, sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin, predicted, “There is likely to be an increasing polarization: those who can use AI will benefit enormously and those who cannot even keep up with ‘normal’ digital developments will fall further behind. Unbridled AI development, therefore, harbors enormous potential for social division. Consideration of this development should be at the beginning of everything and actions should be taken to mitigate this challenge.”

Anriette Esterhuysen
What do we need to do to improve equality and human rights? Focus our AI efforts there

Anriette Esterhuysen, Internet Hall of Fame member from South Africa and chair of the United Nations Internet Governance Forum Multistakeholder Advisory Group, said, “My fear is that life will change positively for people who have the means to understand, use and benefit from AI, but for those who don’t AI will either have no positive impact or its impact will be negative. It can negatively impact jobs, creativity, nondiscrimination, anonymity, privacy (which impacts on rights) and trust in the media and news. My belief is ultimately what will make the difference is how humans use and enable/guide the development of AI – and we have not been good enough at managing this technology thus far in ways that create more equality, access to human rights, services, food security and safety. We need to ask ourselves, ‘What do we need to do so that we do all of that better with and through AI?’ That is where our AI efforts should be focused.”

A related comment was submitted by a futurist, researcher and military strategist who works for the U.S. Department of Defense who predicted, “AI will likely enhance human productivity in the economic sphere, as many time-consuming aspects of business activities will be more efficient and faster. As productivity increases, a better quality of life will likely follow for some people. This quality may not be evenly distributed, with those educated sufficiently to make optimal use of AI benefiting more than those who are not.”

Danny Gillane
The lion’s share of AI’s benefits will go to the haves, not the have-nots

Danny Gillane, an information science professional, wrote, “The wealthy and privileged will continue to benefit most, and AI will exacerbate the situation. The already sad state of the public’s access to and willingness to take in information and news from trusted sources will worsen. AI’s potential to improve access to healthcare and to improve transportation of people and goods may affect us all to some extent, but the best of it will be most likely to benefit the haves at the expense of the have-nots.”

June Parris
AI is only as good as those who create it and control its use. We often live in a false world

June Parris, a former member of the UN Internet Governance Forum’s Multistakeholder Advisory Group from Barbados, wrote, “AI is only as good as those who create it and control its use. I have little faith in humanity. Many humans hide a corrupt spirit behind their outward-facing belief systems. Their true purpose is not fully known. We often live in a false world. However, AI – if programmed without bias or corrupt ideation and within standards, policy and regulation – should result in fair and inclusive outcomes. Thus, working toward such societal goals for AI should be a priority. Policymakers forming advisory groups to work toward governance of AI should include stakeholders from all settings in those deliberations: from academia, civil society, the technical sector, researchers and more. Government meetings and town hall gatherings should be undertaken and they should include the voices of ordinary citizens from all levels. Regulation is needed, but it must emerge from open, democratic processes. When governments govern without opposition, problems arise. Lack of opposition leads to a government that is a dictatorship whose decision-making is not fair.

“One major problem for the deployment and use of AI relates to affordability and the public’s capacity for using it well. Digital education is necessary. The provision of appropriate grants, loans and other assistance to those in need is also a must in those societies. Such measures when undertaken often are less effective than they should be. The funds are often misappropriated in some cultures, the technology is not kept up to date and it is often misused and not maintained, people do not always learn the lessons of digital life. It is difficult to imagine that AI might ever be made understandable, useful and accessible to most people by the year 2040 under such conditions.”

Anonymous respondent
Institutions will be destabilized; income inequality will continue to grow

An AI ethics researcher based in North America commented, “Legal and political institutions, from schemes of intellectual property, voting and surveillance to the conduct of and laws of war will be destabilized. Income inequality will continue to grow as it has been in advanced technological societies such as in the United States. It is likely that tedious administrative tasks will be significantly reduced. Work will be transformed – perhaps radically reduced for some. There will continue to be a rapid turnover in software, platforms and AI-enabled devices that will keep consumers enthralled.”

Jonathan Taplin
The rapid transfer of wealth from labor to owners of capital could be drastic, dangerous

Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” observed, “Sam Altman, CEO of OpenAI, has said that he expects the ‘marginal cost of intelligence’ to fall very close to zero within 10 years. The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution known as Universal Basic Income (UBI). I am skeptical that the current political system is capable of creating or financing a UBI system.”

Ravi Iyer
Inequality will widen existing divisions and create more ‘diseases of despair’

Ravi Iyer, research director at the University of Southern California’s Center for Ethical Leadership and Decision-Making, predicted, “AI will have an enormous benefit for many fields. However, the benefits will not accrue evenly across society. AI systems are expensive to train and develop, such that those benefits will be given to the owners of capital, at the expense of those who work for a living and who will be competing with AI systems. The resulting inequality will exacerbate existing divisions and create even more ‘diseases of despair’ in communities that do not perceive the benefit of such technology, unless society figures out ways to democratize the benefits of AI.”

Anonymous respondent
A radical rethink of AI is required if we want it to increase social equity

A U.S.-based professor whose expertise is in ethics and policy for information technologies said, “On its current trajectory, AI, like many technical tools, is likely to further concentrate wealth and power in the hands of the already-powerful, while making life more difficult and less equitable for already marginalized peoples. A radical rethink of how AI is funded and developed is required if we want automated technologies that will increase, rather than decrease, social equity and decrease overall global precarity. Otherwise, a few large corporations will further dominate the information we have access to and the decisions that are made for and about us.”

Will the powerful support human agency and democracy? Experts worry that the inadequacies of corporate, government and education systems won’t help

A topic mentioned by a large percentage of respondents is that humanity’s current institutional systems are antiquated and flawed in ways that harm their ability to cope with accelerating technological change in the age of AI. While some worry that humanity is unlikely to overcome this significant issue, others argue that people can come together and find a way to make things work out. Following is a collection of comments by experts who focused their responses in this vein. Additional remarks on this topic can be found here and there in the midst of the longer essays in the full report.

Michael Kleeman
AI, traveling globally at high speed, will be used for the gain of wealth, power or both

Michael Kleeman, a senior fellow at the University of California-San Diego (previously with Boston Consulting and Sprint), wrote, “It used to be that only state-level actors could achieve the scale of impacts that could be truly disruptive of society. The acceleration of processing capabilities, coupled with data access (and lack of personal data privacy, especially in the U.S., China, Russia, etc.) and AI will leave the population vulnerable to individuals or firms (and states) that want to cause disruption to social systems to take advantage of this for their own gain. Trust will be eroded, even in the most basic of social systems, and – for the gain of wealth or power or both – we will see massive harm caused. It is hard to see the offsetting benefits of AI that can cause good because of the risk of corruption of these same forces.”

George Sadowsky
Weak government policies, misinformation, polarization, exploitation: What could go wrong?

George Sadowsky, Internet Hall of Fame and Internet Society Board of Trustees member, said,
“In 2040, if current trends in humans’ AI use continue, personal agency and privacy will take a larger hit based on people’s actions and inaction, including weak and vacillating government policies, the polarization of our societies, the prevalence of the targeted advertising model, the rapacious appetite of the personal data industry in the U.S. and elsewhere and people’s inability to create a critical mass of concern about it. Polluting the scene further will be the evolution of disinformation techniques, which will become increasingly clever and successful in mixing disinformation with evidence-based information, creating a crisis in the reliability of information on the Internet, as well as elsewhere, from any and all sources.”

Jim Kennedy
Unrestrained private development poses the greatest near-term risk that AI will go astray

Jim Kennedy, a professional media and AI strategist, wrote, “Having seen the power of AI to affect human life far beyond the value chain of work, I worry more about its eventual outcomes than I once did. Among my many concerns, the lack of government oversight and the lack of public-sector understanding of AI do not bode well for the future of AI development. Unrestrained private development poses perhaps the greatest near-term risk that the pursuit of AI and AGI will go astray. I fear that today’s threats of misinformation, disinformation and biased algorithms will look quaint by comparison to what we may be dealing with in 2040. Controlled development with international guardrails and real consequences for bad actors will be essential to keeping this next stage of the technology revolution from becoming something beyond our control to steer and navigate. All that said, I remain an advocate for the application of AI to a wide range of human activities, as long as humans remain in control, not just ‘in the loop.’”

Anonymous respondent
Even regional differences are difficult to overcome, forget trying to get the world to agree

An expert in communications and information science wrote, “If AI is to be a tool that is used by the common human, then it needs to be trained by all humanity. Every creed, nationality, religion and belief structure should be incorporated. Morals must be included and defined to better treat the ethical challenges that currently occur. The United States will no longer be dominated by Whites in the upcoming generation, yet this is what AI will have been taught. It is the same bias that is seen in medical fields today. Additionally, with the chaos that is our federal government, there is a handful of crazies who are stopping our government from actually doing their jobs. It could take just a few people to change what is funded and what is appropriate to fund, and they are trying to find ways to drive power to themselves. This is not the unity we need. AI is another divisive tool that could make things harder for anyone who is not a White, upper-middle-class male. Even the regional differences in the U.S. make it impossible to come together on how to move forward. Forget trying to get the entire world to do the right thing. The use of AI will exacerbate the inequalities in society — the haves and have-nots.”

Kevin T. Leicht
The big issues are corporate power and the naiveté of the humans who develop and deploy AI

Kevin T. Leicht, professor and head of the department of sociology at the University of Illinois-Urbana-Champaign, commented, “The biggest single problem with AI is human – it is the social and cultural naiveté of the people who have developed it and continue to deploy it. That, in combination with the corporate concentration that is behind it, gives me serious pause. There is not a single new technology in human history that has worked exactly as the inventors intended. Instead, there tend to be several narratives, and only one of those narratives ends up coming to pass. Consider: 1) the inventor’s concept of what the technology will do; 2) the enthusiast’s idea of what the technology will do; 3) the first adopters’ idea of what the technology will do; 4) the user’s idea of what the technology will do; 5) the customer/client’s idea of what the technology will do; and then 6) what the technology actually does, which does not exactly reflect points 1 through 5. Very few people are projecting what point 6 will look like. It is time to do that in a serious way.”

Andrew K. Koch
Tech entrepreneurs are either blindly and willfully ignorant or duplicitous and malfeasant

Andrew K. Koch, CEO of the Gardner Institute for Excellence in Undergraduate Education, said, “When comparing this technical advance to all others, there is one striking difference. The printing press, steam engine, internal combustion engine, railroad, nuclear bomb and computer all were massive technical advancements that sparked mostly positive economic and social change at revolutionary levels. But none of those advancements, or any others since the rise of homo sapiens, had the ability to reason and think in ways similar to and faster than humans. AI does or will do so in most realms soon. We already benefit from AI on a daily basis. I see its current virtues and tremendous possibilities. We can also see how it is being weaponized and mishandled. We need global consensus and oversight of this. A few tech billionaires are now empowered to play God. We need both a national and global strategy around artificial intelligence. AI advancements cannot be driven, at least not primarily, by for-profit moguls. People like Bezos, Musk, Zuckerberg and their ilk may have their own plans for AI, but they seem fully focused on power and wealth, not that which best serves our democratic republic, its people and people around the globe. Tech entrepreneurs may want to tell us that what is good for them is good for us. In doing so, they are either blindly and willfully ignorant or they are being dangerously duplicitous and malfeasant.”

Anonymous respondent
Human systems will not adapt. Hypercapitalism has to tone it down or autocrats will rise

A well-known expert in educational curriculum design said, “Public systems are woefully slow and will not adapt to AI’s accelerating pace. Human social systems will not adapt quickly enough, either. This will result in increased stress and chaotic responses. Hypercapitalism will have to tone it down and redistribute, or autocrats will rise. They already are in a number of countries, even though they do not serve the people who get them to power. We have created a host of other human-made problems that will affect us way before AGI or Superintelligence (global warming in particular).”

Deanna Zandt
Capital-driven technologists are training AI using biased human-built content

Deanna Zandt, media technologist and consultant, said, “I fear the capital-driven technologists working on AI are either ignorant of the bias they’re building into their systems (from how they write their code/algorithms to the base material being fed into the AIs for learning), or worse, they actively know about the bias and either don’t care or support these biases. While I love exploring the absolute power of artificial intelligence in general, I am deeply fearful of the incredible amount of bias that will be exacerbated by its implementation. We currently have little to no accountability when it comes to equitable technologies. When I was a teen, I loved ELIZA. It made me feel seen and heard, and I often cried when I interacted with it. I knew intellectually that it wasn’t real, but I didn’t care, I just felt better. And part of me wanted to believe that there was magic inside my computer. I think about ELIZA a lot with these advances in AI. In my most innocent, naive self, I could see AI being a tool for empathy and connection. But in a world driven by profit and exploitation, where would this even come from?”

Anonymous respondent
If AGI’s existence necessitates corporate oligopolies, is democracy over?

A U.S.-based AI policy researcher wrote, “Question: Can we actually have artificial general intelligence (AGI) without corporate oligopolies? Like truly! Given the way cloud computing works, is this even possible? (If we start treating cloud like other publicly owned/highly regulated infrastructure then maybe?) Question: If AGI’s existence necessitates corporate oligopolies, is democracy over? Question: If we can’t have AGI and democracy, why should we be deploying it as it is being developed, or even deploy it beyond lab applications?”

Charlie Firestone
Might governments become more authoritarian in order to combat AI’s dangerous effects?

Charlie Firestone, president of the Rose Bowl Institute (previously executive director of The Aspen Institute for 30 years), said, “There will be a ‘power curve society,’ with a relatively few reaping great rewards by leveraging AI and other new tech. That curve goes down rapidly into a long tail of relative have-littles. … The difference in wealth and lifestyles will breed resentment and tension. Government will be challenged to provide for all the people when many will be out of work or collecting retirement benefits with fewer workers contributing to the fund. Nation-states’ inability to protect their borders against disease, crime, economic trends, information and disinformation, climate events, and in many cases migrants, will create additional disruption. The big question is the level of authoritarianism, or alternatively, disorganization that dominates societies. I expect governments may have to be more authoritarian in an effort to combat the dangerous effects of AI, genetic engineering and other technological advances.”

Satoshi Narihara
Advanced ease in decision making results in a loss of classical human autonomy

Satoshi Narihara, associate professor of information law at Kyushu University in Fukuoka, Japan, commented, “We may gain more-optimized decision-making. At the same time, we may lose human autonomy in the classical sense. Our daily lives will be promoted and supported by various kinds of AI systems such as those that produce personalized AI agents. Our decisions will be made based on suggestions and recommendations by AI systems. Decision-making by businesses and governments will be based on suggestions and recommendations by advanced AI systems.”

Friedrich Krotz
Control over this technology must be led by civil society, not by tech barons and companies

Friedrich Krotz, fellow at the Centre for Media, Communication and Information Research, University of Bremen, Germany, said, “We must not fully believe any only-positive hype. No technology in human history has served only the best interests of humanity. We need to exert much more control over this technology than we do today. The best outcomes depend upon how each technology is developed and used.

“Humanity’s representation in exerting some control must be led by civil society, not by tech barons like Elon Musk or tech companies like Meta. Alan Turing taught us that computers can simulate every mechanical machine and, as a consequence, they can also deal with material objects, questions of biology and so on. Computers equipped with advanced applications like AI can do many things, and often do them better than humans. But, at this point in time, everything computers and AI can accomplish is based on data from humans (who are behavioral).

“The computer software runs logic and math operations. Human beings generally operate on the basis of sense-making processes generated in a symbolic world. This world can’t be understood by a machine; thus the outputs of machines may be somewhat helpful, but not so human. AI technology is controlled by corporations whose primary concern is profit, not human lives, human rights and the good of civil society.”

AI raises challenges and opportunities for the future of work; the automation of jobs will catalyze drastic change

These experts varied in their views of the future of work, a topic mentioned quite often throughout the chapters of this report. Some are confident the future of work will be significantly better for humanity; others believe there will be mass unemployment due to AI. Following is a collection of the short submissions that include brief remarks in that vein.

Alexandra Whittington
Working for money might not be the primary system for meeting basic needs in 2040

Alexandra Whittington, futurist, writer and foresight expert on the future of business team at Tata Consultancy Services, said, “Imagine a future where having a job is obsolete due to a basic wage paid from the earnings of robots doing all the work. We could encounter scenarios in which jobs no longer fall into neat categories of ‘full-time’ or ‘blue-collar,’ and we could consider what world rankings would look like if GDP [Gross Domestic Product] accounted for caregiving, domestic work and other forms of unpaid women’s labor. The biggest change might be that working for money might not last much longer as the primary system of meeting basic needs. AI might catalyze this change, but it would only be the beginning of a new phase of realizing human potential.”

Thomas Laudal
A shift in values and norms will occur as humans’ preeminence recedes

Thomas Laudal, associate professor of business at the University of Stavanger (Norway) Business School, said, “The gradual transition from humans to AI machines for creative drafting and language processing will lead to a diminishing role for humans and, consequently, a reduction in related competencies. However, more importantly, this transition will reshape our values and norms by forcing humans to accept that they have an observer role in work in which performance measurement and competitive advantage are paramount. This will probably be a temporary phase. The larger shift that sectors will undergo is a transition from human-centric to non-human work involvement. The dangers connected to these transitions lie in managing potential conflicts among humans during this transition. Some will assert that there are limits to what AI can replace, while others will argue that AI might eventually substitute for humans across most domains. Successfully navigating conflicts of this nature will be crucial in ensuring that AI does not compromise the quality of human life.”

Dean Willis
We may see a ‘Futurama’-like inversion of work roles, with have-nots marginalized

Dean Willis, a consultant for protocols, standards and systems architecture at Softarmor Systems, predicted, “There will be substantial automation of low-level knowledge work in areas such as records administration, filing and reporting, actuarial, title and abstract services, and drafting of basic contracts and other documents. This leads to a ‘Futurama’-like inversion of roles, with humans performing tasks of physical dexterity such as equipment maintenance, although the most repetitive and predictable manual labor will also be heavily automated. This displaces many workers into the ‘human-touch’-valued fields of performance and personal service. The wealthy will have even more servants, artists and artisans, while the have-nots lacking artistry and beauty will have even less and be increasingly marginalized while being managed through social network controls. The battlefield will be increasingly automated with both drones and autonomous systems, leading to further dominance by the larger technocratic nation-states.”

Pedro U. Lima
Advances in robotics will introduce new job categories for humans and AI

Pedro U. Lima, professor of computer science at the Institute for Systems and Robotics at the University of Lisbon, predicted, “The proliferation of AI in non-physical systems will possibly decelerate, as more and more systems and services are covered and AI’s presence becomes so common that it may even go unnoticed. But I expect a steady increase of AI interacting with the physical world, e.g., through intelligent robots. It is difficult to forecast which machines of that kind will be the most successful, but that’s where the progress will be. We will probably see the rise of specialised robots for particular tasks in which they operate with a large advantage over humans, such as in autonomous driving of taxis and trucks. I would not set aside the possibility of more general-purpose robots (not necessarily humanoids, but close to them, with at least arms and a head) for tasks where it would be hard to change the environment drastically to suit the robots, e.g., household robots. The impact of robots will certainly be different from that of current AI systems. The latter tend to replace white-collar workers in easily automated jobs these days. But robots will again introduce changes in blue-collar jobs while also leading to the creation of new job categories for humans and AI that we cannot even imagine today.”

John Markoff
Disruption in the job market will be offset by changing demographic patterns

John Markoff, a fellow at the Presence Center at Stanford University School of Medicine, previously a senior writer at the New York Times, said, “Increasing social isolation is the hallmark of the deployment of large language models. It is likely there will also be economic disruption in the job market, but that will be offset to some extent by changing demographic patterns that will shrink the pool of available workers in advanced economies and increase the need for caregiving of the elderly in industrial countries in the second half of the century.”

Anonymous respondent
By 2040 over half of U.S. colleges will have closed, and hospitals will be run by AI and nurses

A well-known internet standards developer and internet pioneer wrote, “By 2040, almost 75% of all employees will be laid off and replaced with AI. Corporate takeover artists will acquire public companies, fire most of the employees and turn the work over to AI. By 2040 over half of U.S. colleges will have closed, and many of the remaining institutions will have been taken over by private equity. There will be hospitals with virtually no doctors, only nurses and AI. In restaurants, food will be prepared and delivered by robots.”

While medicine and personal health will make gains, some are worried about the impact of AI-driven change on people’s mental health and well-being

In many of the earlier essays in the full report, experts noted that advanced AI will offer extremely effective psychological support and well-being tools. However, many others among the essayists above said they fear the impact of accelerating technological change on human mental health will also cause serious issues. Some worried over the social isolation that is enabled by digital, AI-driven everything. Some said they expect that severe anxiety, depression and loss of purpose will result for millions due to massive unemployment. Others noted that the information ecosystem will be further polluted with mind-altering falsehoods, hate speech and manipulative messages – possibly leading to violence. And some said they fear that people may be overwhelmed by an AI-enabled incursion of multiple personas, fictional and mirror worlds and digital twins into their lives.

Following is a small selection of additional brief responses from the experts tied to human well-being:

Anonymous respondent
The impact of an exponential concentration of power is not helpful to humans’ well-being

A professor of politics and government commented, “If the business model of AI development remains unchallenged, the exponential concentration of corporate power will fundamentally transform human relations, human dignity and democracy, and none of those in good ways. Economic inequality, already at a near-breaking point both within countries and across countries, will rise. While democratic systems or protests may provide some avenues to correct such inequalities, with the concentration of information and ‘democratic’ power, the barriers to use of systems of governance and even to protests for the public good will be exceptionally high. Human dignity (and mental and emotional well-being) will be degraded as labor markets shift, inequality rises, and control over creativity, personal preferences (and other aspects of the human experience), and aggregation of human needs is given over to decision-making machines. Throw in distrust of others coming from misinformation and fewer real-life relationships as we come to rely on machines for caregiving and the simulation of love and education, and it would be hard to exaggerate the potential negative consequences of unregulated, not democratically controlled AI development by the year 2040.”

Alan D. Mutter
People will become disconnected and there will be an increase in divisive tribal behavior

Alan D. Mutter, consultant and former Silicon Valley CEO, wrote, “Lots of stuff will get easier or more efficient, such as crafting code, examining X-rays and writing term papers. However, I fear people will become more disconnected from each other as humans outsource to slick bots the thinking and judgment that we used to do for ourselves. This could lead to a loss of community spirit and an increase in divisive tribal behavior.”

Mark Schaefer
When AI exceeds our capabilities, where do we belong in the world?

Mark Schaefer, a business professor at Rutgers University and author of ‘Marketing Rebellion,’ wrote, “The biggest threat to individuals is not a loss of income but a loss of purpose. How do we live our lives in a meaningful and productive manner when AI can exceed our own capabilities? I’m a writer. Where do I belong in a world where AI creates better than me, or at least does most of my work for me? There is purpose in the struggle and reward in the individual effort. Most advanced countries will have universal income by 2040, but it is probably less likely to emerge in the U.S. due to political polarization.”

Continue reading: Closing thoughts