“There are two critical uncertainties as we imagine 2040 scenarios:
- Do citizens have the ability to see the role AI plays in their day-to-day lives and, ideally, to make choices about its use?
- Does the AI have the capacity to recognize how its actions could lead to violations of law and human rights and refuse to carry out those actions, even if given a direct instruction?
“In other words, can humans say ‘no’ to AI, and can AI say ‘no’ to humans? Note that the existence of AIs that say ‘no’ does not depend upon the presence of AGI; a non-sapient autonomous system that can extrapolate likely outcomes from current instructions and current context could well identify results that would be illegal (or even unethical).
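To make that point concrete, here is a minimal, purely illustrative sketch of such a consequence-checking refusal loop. Every name in it (Outcome, predict_outcomes, PROHIBITED_CHECKS, execute_or_refuse) is hypothetical, and the genuinely hard part, a world model that can extrapolate likely outcomes from an instruction and its context, is stubbed out with a toy keyword check.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Outcome:
    """One predicted consequence of carrying out an instruction."""
    description: str
    harms_humans: bool   # would the outcome injure people?
    breaks_law: bool     # would the outcome violate applicable law?

def predict_outcomes(instruction: str, context: dict) -> list[Outcome]:
    # Toy stand-in for the hard part: a world model that extrapolates
    # likely outcomes from the current instruction and current context.
    if "disable safety interlock" in instruction:
        return [Outcome("machinery runs without guards", True, True)]
    return [Outcome("routine operation", False, False)]

# Each check flags one class of unacceptable predicted consequence.
PROHIBITED_CHECKS: list[Callable[[Outcome], bool]] = [
    lambda o: o.harms_humans,
    lambda o: o.breaks_law,
]

def execute_or_refuse(instruction: str, context: dict) -> str:
    """Refuse any instruction, even a direct one, whose predicted
    outcomes trip a prohibited-consequence check."""
    for outcome in predict_outcomes(instruction, context):
        if any(check(outcome) for check in PROHIBITED_CHECKS):
            return f"REFUSED: predicted outcome '{outcome.description}'"
    return "EXECUTED: " + instruction

if __name__ == "__main__":
    print(execute_or_refuse("disable safety interlock on line 3", {}))
    print(execute_or_refuse("resume normal production", {}))
```

Nothing in this sketch requires sapience or AGI: only outcome prediction plus explicit prohibited-consequence checks.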
“It’s uncertain, however, whether people would intentionally program AIs to refuse instructions without regulatory or legal pressure; the likely catalyst would be some awful event that could have been avoided had AIs been able to refuse illegal orders.
“Considering all of the above, here are four quick AI-enabled humanity scenarios for 2040:
- “Careful Choices: A world where humans can make choices about their interactions with AIs and AIs can identify and refuse illegal or unethical directives is, in my view, the healthiest outcome, as this future probably has the greatest level of institutional transparency and recognition of the value of human agency and rights. AGI is not necessary for this scenario. If it does exist here, this world is likely on a pathway to human-AGI partnership.
- “AI as Infrastructure: A world in which humans have the information and agency necessary to make reasonable choices about the ways in which AIs affect their lives but AIs have no ability to refuse directives is one where the role of AI will be largely utilitarian, with AIs existing in society in ways that parallel corporations: important and influential, but largely subject to human choices (including human biases and foibles). AGI is unlikely in this scenario.
- “Angel on the King’s Shoulder: This is the opposite world, one in which the role of AIs in human lives is largely invisible or outside of day-to-day choice but AIs can choose to accept or reject human instructions. It is a ‘benevolent dictatorship’ where the people in charge use the AIs as ethical guides or monitors. This scenario is probably the best fit for a global climate-triage future, one in which it would be easy for desperate leaders to make decisions with bad longer-term consequences without oversight. AGI in this scenario would be on a path to a machines-as-caretakers future.
- “And Then It Got Worse: A fourth scenario is one in which people don’t have much day-to-day awareness of how AIs affect their lives and the AIs do what they are instructed to do without objection. This is depressingly close to real-world conditions of the present, the 2020s. AGI in this scenario would probably start to get pretty resentful.
“The notion that the future harm and benefit from AI derives (at least in part) from the degree to which the general public has some awareness, understanding and choice about the role AI plays in their lives is not novel, but it is important. We currently seem to be on a path that’s accelerating the presence of AI in our institutional lives (i.e., business, social interactions, governance) without giving individuals much in the way of information or agency about it.
“On top of that, current AI visibly replicates the biases of its source data, and heavy-handed efforts to remove these biases via code attack the symptoms, not the disease. A direct extrapolation of this path further entrenches a world where citizens have less and less control over their lives and less and less trust that outcomes are honest and fair. AIs, being in some senses alien, would likely become the target of human hostility, even though the actual sources of the problem would be the institutional and leadership choices about how AI is used.
“The underlying concern is that a future that maximizes the role of AI in economic and business decision-making – that is, a future in which profit is the top priority for AI services – is very likely to produce this kind of world.
“The idea that future harm and benefit from AI might come from whether or not the AI can say ‘no’ to illegal or unethical directives derives from American military training, where service members are taught to recognize and refuse illegal orders. While this training (and its results) has not been perfect, it represents an important ideal. It also raises a question regarding military AI: how do you train an autonomous military system to recognize and refuse illegal orders? This, then, can be expanded to ask whether and how we can train all autonomous AI systems to recognize and refuse illegal or unethical instructions.
“A world in which most people can’t control or understand how AI affects their lives and the AI itself cannot evaluate the legality or ethics of the consequences of its processes is unlikely to be one that is happy for more than a small number of people. I don’t believe that AI will lead to a cataclysm on its own; any AI apocalypse that might come about will be the probably-unintended consequence of the short-term decisions and greed of its operators.”
This essay was written in November 2023 in reply to the question: “Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040?” This and more than 150 additional responses are included in the report “The Impact of Artificial Intelligence by 2040.”