“AI is poised to be the most persuasive technology ever invented, which also makes it the most dangerous in greedy human hands. By 2040, we may decide to let AI influence or decide legal cases. We may continue to see ad-tech with personal data run amok. We may even find that AI makes for a better people-manager than people do, replacing the top of companies with automation sooner than robots replace the low-level workers we originally expected to be displaced. Robots are expensive. Software is cheap.

“AI has the power to help humans collaborate. While generative AI indeed robs creators of their credit and income, it is also the most powerful tool for human-to-human collaboration we’ve yet invented. It can let people combine their ideas and expressions in ways we never could before. That power remains largely untapped.

“AI has the power to help people heal from emotional trauma, but we may also use it as a substitute for people when what we need most is real human love and compassion. Will the people most in need turn to proven therapies or use the crutch of AI girlfriends to ease their loneliness? Probably the latter.

“The most important question about AI is how much control of our lives we grant it. We may trust AI more than individual human bias. But we should know that AI carries all of the same learned biases and, so far, none of the compassion to counteract them.

“All in all, this is one thing I know to be true of AI today as well as what is likely in 2040: The best and worst uses of AI are largely a function of the choices we humans make. If we build tools designed to help people, we can do good and still make mistakes. But if we choose to exploit people for our own gain, we will certainly do harm, while any good is incidental.

“We should be regulating the uses and intentions more than the technologies themselves. And we must be educating everyone in how to make ethical choices for the best outcomes. The risk of AI extinction is roughly equal to the risk of nanotechnology turning the world to grey goo or some stock-trading algorithm tanking the market. But humans failing to build safe systems can injure people.”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”