“Of course, many people are thinking about the issues around AIs – especially the major industry players – but I’m not confident that values beyond efficiency, novelty and profit will ultimately prevail in this arena by 2040.

“I question whether the claims now being made for AI will ultimately pan out as glowingly as promised. AI research began as a quest for ‘general intelligence’ in machines, and – despite repeated failed attempts to build such machines over the decades – such human-like capacities still seem some way off.

“The difference today, of course, is the sheer brute-force approach being applied to machine ‘learning’ using imponderably large datasets – despite questionable practices around the sources, cultural/social significance or meaning, and ownership and use of that data. It also rests on the assumption that massive computing power will simply keep expanding on some kind of unstoppable exponential curve – despite the environmental risks this entails and the opportunities foregone by investing in computing infrastructure rather than something else.

“My impression is that the current batch of AIs (plural because, so far, each really only does certain types of things well) has been rushed to market with little non-technical oversight, so that proponents can gain first-mover and network-effects advantages (and property rights).

“Under these conditions, who will eventually get to decide what general machine intelligence is, how it should be deployed and under what circumstances and to what ends?”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”