Theme 2

Many experts responding to this canvassing urged that societies fundamentally change long-established institutions and systems – political, economic, social, digital, and physical. They said there should be major moves toward a more equitable distribution of wealth and power. They also argued that the spread of AI requires new multistakeholder governance from diverse sectors of society.

Lene Rachel Andersen
We are not creating the institutions that could protect us against our own invention

Lene Rachel Andersen, economist, author, futurist and philosopher at Nordic Bildung, a Copenhagen-based think tank, predicted, “If things don’t change, by 2040 capitalism will have crashed, societal institutions will have been undermined, civilization will collapse and humans will have two options: live in chaos ruled by violent gangs or live under total surveillance in AI-controlled pockets. It could be otherwise, but there do not seem to be any political institutions that understand the scope of what we are facing, and we are not creating the next generation of institutions and legislation that could protect us against our own invention.”

Marina Gorbis
We can create a great future, but it will require new infrastructure, policies and norms

Marina Gorbis, executive director of the Institute for the Future, urged, “How our individual lives and society will change with the diffusion of AI depends less on technological innovations and more on the policies and institutional arrangements within which they develop. AI can immiserate large numbers of people – eliminating jobs or unleashing a wave of poorly paid ones and increasing levels of mistrust and disinformation – or it could allow us to reduce work hours without reducing pay, improve health access and outcomes, improve the workings of our physical infrastructure and much more. This gives me hope.

“AI tools and platforms … will require us to imagine and build new social infrastructure, institutional arrangements, policies and norms. This is what we eventually did in the transition from agricultural to industrial societies, after going through much pain and misery. We should accomplish this transition faster, forgoing the pain and misery. The time to start imagining and prototyping such approaches is now!

“How AI evolves by 2040 depends on many factors. The history of technological change teaches us that although new technologies come with certain affordances, their impact is shaped by multiple factors – social and cultural norms, regulatory environments, tax structures, existing business forms, etc. The impacts would be very different if AI tools and platforms were seen as a part of the public infrastructure rather than as a private asset, or if we created policies and institutional arrangements that enabled productivity gains from AI to be distributed more equitably rather than flowing mainly to investors and capital holders.

“For example, large language models (LLMs) use vast amounts of data and information (in written, visual and audio formats). They not only raise legitimate concerns about privacy, data bias and the quality of the software itself; they also raise ethical issues of permission and economic issues regarding how we acknowledge and compensate for all the collective knowledge and data that feed these programs.

“In many ways, LLMs make us confront the reality that all new discoveries, creations and innovations are based on previous discoveries, creative outputs and innovations. There would be no Mozart, Chopin or Debussy without Bach; no Gutenberg press without the winemaking presses of Southern Germany; no social media platforms without public investment and the collaboration of many researchers to create some of the foundational Internet technologies. This is why throughout history we have seen similar discoveries appear almost simultaneously in multiple places. All knowledge and discoveries are results of collective processes.

“Neither our copyright system nor our compensation structures recognize this adequately. In fact, such recognition goes against the prevailing ideology of Silicon Valley, where many AI innovations originate. If we recognize that LLMs use existing knowledge and data as raw materials, should we tax LLM-based tools and platforms and establish sovereign public funds to distribute some of the productivity gains and subsequent profits that they bring? After all, this is what places like Dubai and Canada have done with their oil revenues – establishing sovereign wealth funds that pay dividends to their citizens. There is a much-overused analogy of data as the new oil. If it is, shouldn’t we follow the path of oil-rich countries and treat the data fueling LLMs as a public resource that delivers dividends to all?”

Sam Lehman-Wilzig
Economic, employment and education systems must be massively restructured

Sam Lehman-Wilzig is program head of the communications department at Peres Academic Center in Rehovot, Israel, and author of “Virtuality and Humanity.”

He wrote, “AI will render almost all aspects of personal life much easier and smoother. However, advanced AI constitutes a direct threat to employment. True, past eras of technological advancement have not caused mass unemployment, but AI is different because it competes with (and perhaps outcompetes) the highest form of human capability: critical and creative thought.

“Social tensions might well increase despite and because of AI’s capabilities. There is a need for a massive restructuring of the 21st-century economic system, i.e., taxation moving from the individual to the corporation (e.g., taxing AI and robots), with far greater government subsidization of individuals (e.g., Universal Basic Income) becoming standard. Such a complete transition will not happen by 2040, but we will be on the way there.

“Another important restructuring will have to occur in education at all levels, aimed no longer almost exclusively at preparing people for professional work but rather mostly for a life of non-work or leisure. We must learn how to lead satisfying and productive lives without financial remuneration and the other benefits of work. AI can be a huge aid in the reinvention of humans’ self-identities, but only if people understand the best ways to exploit it.”

Aviv Ovadya
Reinventing democracies’ infrastructure can cut the likelihood of dystopia from over 95% to 10%

Aviv Ovadya, a founder of the AI & Democracy Foundation based in San Francisco, said, “Our future depends upon what we choose to do and invest in. It is as if all of society is in vehicles navigating treacherous mountain passes with engines that are rapidly increasing in speed and power. If the drivers do not commensurately improve their ability to safely stay on the road through better, faster decision-making and with more-effective control systems in the cars, this will lead to catastrophe.

“The impact of AI depends on whether we invest in that decision-making and safety infrastructure. If we continue on our current course, advances in AI may take us down one of two possible paths toward a dystopian future:

  1. “The path of autocratic centralization, in which powerful corporations and authoritarian countries unilaterally control extraordinarily powerful AI systems.
  2. “The path of ungovernable decentralization, where everyone has unrestricted access to those incredibly powerful systems and, because there are no guardrails, their uses can – intentionally and/or unintentionally – come to cause massive, irreversible harm.

“Without extensive, concerted effort far, far beyond what we have seen to this point, the likelihood of us ending up in one of these dystopian futures is extremely high – beyond 95%. That said, there is an alternative, a third path: combined democratic centralization and democratic decentralization. An immediate acceleration of investment in the democratic infrastructure needed to make such a path viable is our best bet.

“I believe that if we are able to bring to bear even one-tenth of the level of resources being invested in AI advances toward reinventing our democratic systems – along with improving the safety of AI systems and developing the necessary international agreements and regulations – we can bring that likelihood of a truly dystopian 2040 down to as low as 10%.

“I share many more details about what this might look like in my recent paper in the Journal of Democracy. A brief summary: ‘Reinventing our democratic infrastructure is critically necessary and also possible. Four interconnected and accelerating democratic paradigm shifts illustrate the potential: representative deliberations, AI augmentation, democracy-as-a-service and platform democracy. Such innovations provide a viable path toward not just reimagining traditional democracies but enabling the transnational and even global democratic processes critical for addressing the broader challenges posed by destabilizing AI advances – including those related to AI alignment and global agreements. We can and must rapidly invest in such democratic innovation if we are to ensure our democratic capacity increases with our power.’”

Lorrayne Porciuncula
Agile governance must meet the dynamic challenges of future complex adaptive systems

Lorrayne Porciuncula, founder and executive director of the Datasphere Initiative, wrote, “Managing and understanding the risks and nonlinearities of future advances will be a critical challenge now and in the decades ahead. Sophisticated models and agile governance mechanisms will be required to responsibly unlock the value of data and AI for all. Governance considerations will play a critical role in shaping the impact of AI on complex adaptive systems. By 2040, establishing frameworks for responsible AI use, transparency and accountability will be paramount. This includes addressing both the governance of AI and AI for governance. It means iterating solutions to biases in AI systems, ensuring privacy and developing mechanisms to intervene in the case of unintended consequences.

“When considering the impact of AI on our interconnected world by 2040, it is essential to frame the discussion around the transformative potential and challenges that AI poses to existing complex adaptive systems such as social, economic, political, ecological and technological systems (including the emergence of novel behaviors in the datasphere). We can expect that by 2040 AI will further increase the interconnectedness and interdependence of components within complex adaptive systems. In global economic systems, AI-driven supply chain management and market prediction tools will become highly interlinked. While this has the potential to drive personalization and on-demand services and to optimize economic outcomes and stability, it also increases the system’s vulnerability to cascading failures or unforeseen emergent behaviors. Managing these risks will require advanced monitoring and mitigation strategies.

“By 2040, we can also expect that AI will significantly enhance the self-organizing capabilities of complex adaptive systems. In smart cities, for instance, AI-driven systems could autonomously manage traffic, energy distribution and waste management, leading to more-efficient and sustainable allocation of urban resources. The emergence of new patterns of behavior and efficiency will likely be a hallmark of AI’s impact, potentially leading to innovative solutions for long-standing challenges. However, AI use will likely drive up energy demand, particularly as energy-intensive data centers proliferate, which could offset the sustainability gains from more-efficient resource allocation.

“Moreover, as AI systems excel in their ability to learn from data and adapt their behavior over time, we can predict that by 2040, complex adaptive systems in domains such as healthcare will leverage AI for continuous learning and adaptation, leading to more personalized and effective treatments. The systems’ capacity to learn and evolve will be crucial in addressing the dynamic challenges of the future.

“AI will also introduce new evolutionary pressures to existing systems, driving innovation and efficiency. In sectors such as manufacturing, AI-driven automation and optimization could lead to significant advancements in productivity and product quality. However, this also has the potential to disrupt labor markets and existing industry structures, as well as to widen economic and digital divides between regions and countries.

“In general, the nonlinear nature of complex adaptive systems, paired with the scale and depth of the transformations brought by AI, will likely result in unpredictable and emergent behaviors. In political systems, for example, the use of AI in information dissemination and campaign strategies could lead to unforeseen shifts in public opinion and political dynamics, increasing political polarization and accelerating the decline in institutional trust driven by mis- and disinformation.

“Looking ahead to 2040, the impact of AI on complex adaptive systems is poised to be profound, driving innovation, efficiency and adaptability across various domains. However, the integration of AI also introduces challenges related to unpredictability and the need for agile governance. Navigating this future will require a nuanced understanding of both AI and complex adaptive systems, as well as proactive strategies to harness the benefits of AI while mitigating potential risks. Ultimately, the goal is to create resilient, adaptable systems that leverage AI to address the complex challenges of the future, fostering sustainable and equitable outcomes across society.”

Sean McGregor
AI requires new technology, social institutions and social conventions

Sean McGregor, founding director of the Digital Safety Research Institute at UL Research Institutes, developer of safe digital systems, predicted, “We will not have all the answers to safely managing AI by 2040, but we will have a profession with millions of people dedicated to advancing the cause. It is likely to be the last new profession. Unsafe industrial activity turned Oklahoma into a dust bowl and lit the Cuyahoga River on fire at least a dozen times. Like the agricultural and industrial revolutions of yesteryear, AI requires new technology, social institutions and social conventions to avoid the worst outcomes. However, ‘AI safety’ is a far more difficult proposition than environmental sustainability.”

Peter Lunenfeld
Communal, civic and even constitutional guidelines should be exercised over this tech

Peter Lunenfeld, professor of design and media arts at the University of California-Los Angeles, commented, “By 2040, AI will extend into virtually every digitally enabled technology we interact with; it will be woven into the very infrastructure that surrounds and supports us in the 21st century.

“The effects will be a mix of the astonishing, the appalling and the invisible. If we leave all aspects of the AIs’ deployment, control, displacement and profit-production to their inventors and exploiters – as we have done up to now – and do not exercise communal, civic and even constitutional guidelines and controls over these technologies, we will be at even greater risk of oligarchic control.

“The danger of a very few humans controlling AI is much greater than the science-fictional nightmare of AI controlling vast numbers of humans. In 2040, just as today, how humans relate to other humans and how they regulate the distribution of power and powerful tools – like AI – will be the primary determinant of how AI impacts its human hosts.”

Greg Adamson
It is unlikely that human institutions are ready or willing to properly adapt to this change

Greg Adamson, an Australian currently serving as a vice president of the IEEE Society on Social Implications of Technology and chair of its Dignity, Inclusion, Identity, Trust and Agency group, is not optimistic that humanity will meet the challenges ahead. He wrote, “I see no evidence that human institutions anywhere in the world are ready for the change that lies before us, nor do they show any significant capacity to address existential threats – as can be clearly seen in their response to climate change.

“A telling quote: ‘The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.’ Norbert Wiener, one of the most influential scientists of the 20th century, said this in 1964.”

Sonia Livingstone
To flourish, humans must have agency and efficacy; our children may never forgive us

Sonia Livingstone, professor of social psychology and former chair of the Truth, Trust and Technology Commission at the London School of Economics, urged, “Let us focus on one point: At heart, for human beings to flourish, they must have the opportunity to exercise their agency and efficacy in a world that they can, broadly, understand and that is directly responsive to their needs, interests and concerns.

“In all the talk of what AI can do, this basic recognition of the nature of humanity seems drowned out. Perhaps we could start over and develop a truly human-centered vision of AI and its potential. But instead, the political interests of states in unholy tandem with the economic interests of companies seem to drive the agenda, to our lasting detriment.

“As for our children – one-third of the population today, 100% of the population tomorrow – they will not know a world without, or before, AI. We are treating them as the canaries in the coal mine. They may never forgive us.”
