“The existential battle over the next 15 years will not be humans versus AI (as Hollywood and misinforming billionaire oligarchs portray it, the better to keep us entertained and unconcerned about their historic hoarding of wealth). Rather, it will be between good, bad and evil AI.

“By 2040 the general public and political leaders will know better than to expect that Large Lying Machines (‘LLMs’) are designed to serve the public good.

“Before 2040, disasters of social harm caused by bad AI design will finally spur action; regulation of AI will focus on real harms painfully learned from bad experiences. A new certification process or processes, whether organized at the professional or national level, or both, will ensure that at least some of those to whom much computational power is entrusted recognize that they have professional responsibilities extending beyond their paycheck and employer, and, ideally, a legal obligation to do societal good.

“Just as an incompetent but licensed civil engineer can be prosecuted for violating professional commitments (for example, by building bridges that easily collapse), so too must there be consequences when artificially not-so-intelligent systems, designed intentionally or through professional negligence to discriminate unlawfully, are deployed.

“Thus, by 2040 a new priesthood or profession of certified ethical AI developers will emerge, willing to sign their names to a pledge that they have done their best, as they were trained to do, to ensure that XYZ AI system is designed to minimize bias and social harm, and to self-monitor and self-report anomalous behaviors. (They will no longer ignore the known consequences of models designed to maintain sticky ‘engagement’ or rushed into public use before they are refined, as is common with the incredibly error-prone Large Language Models being released today.)

“One positive 2040 scenario due to regulation: Industry will be incented to improve its practices and produce not just good but better AI. Local city and state procurement decisions will come to be shaped by verifiable proof that bias, discrimination and fraud are neither a feature nor a bug of AI-powered city services. Humanitarian or public-interest AI bots will power a growing array of robots working for good in people’s homes, communities, hospitals and schools. Good AI will be fantastic and do amazing things for us, improving the quality of life substantially over the coming 15 years. On average.

“But, yeah, then we have to speak of the consequences of the next decade-plus of bad-by-design AI. These systems are built and released quickly for competitive reasons, following the ‘break society fast’ amoral model espoused by Silicon Valley-ish wannabe philosopher-king bros pretending to be dystopic visionaries. These systems may have high error rates and poor security and privacy controls, and they may be designed to package and sell user information, whether true or ‘hallucinated.’

“Artificially intelligent disinformation as a service will be rivaled only by the also fast-growing market for AI-powered misinformation as a service. Social harm by AI design is not a thing 15 years in the future; it is a business model today.

“My personal AI bot of 2023 tries to bully its way into my online meetings today, pushing around professionals who wonder if I will be upset that my AI bot was refused entrance to a Zoom (answer: no, of course not). AI deep fakes and bully bots and scam-artist Large Lying Models will insist they be let into our Zooms, rooms and lives to vacuum up our data and steal our money with far greater ease and convenience than do the spam emails of today.

“The business disruption, data and intellectual property theft, and fraud committed by ‘personal’ AI bots actually serving another master or enlisted in a bot army will inspire a new category of case law. (My personal AI assistant/bot has never signed an NDA. So, am I liable for its collection and sharing of others’ proprietary information? Courts will decide in the next 15 years.) Logically, we must recognize that AI models and systems will quickly learn that crime by design has no meaningful consequences – for the AIs at least.

“Finally, there has been some discussion about the eventual possibility of truly evil AI. We’re hearing a lot of noise about AGI lately, as it is seen by some engineers as the ghost in their machines, today’s large language models. Those engineers are the ones hallucinating, or at least suffering from Freudian transference.

“Artificial general intelligence will be no more real in 2040 than it was when MIT Professor Joseph Weizenbaum created ELIZA [a conversational natural language processing program that emulates a Rogerian psychotherapist] in the mid-1960s. The willingness to presume there is actual intelligence in AI, rather than a scripted, or rather modeled, process designed to trick you into thinking you’re talking to someone who’s not actually there, will be an ever-growing problem through to 2040.

“AGI will not be real, nor will it be a problem, in 2040; rather, people’s attribution of humanoid characteristics to machines will lead to new addictions by design and social alienation by design, and it will be a favored tool of a growing host of information warfare-enlisted AI bots.

“Detecting what’s real and what and who is an artificially intelligent scam artist will be the huge social problem of the day, since artificially intelligent machines and models trained to be and do evil can do so without ever suffering from a guilty conscience or – unless the law catches up – any legal consequence for their makers.”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”