“Popular visions of created intelligence as a utopian or dystopian force date back more than two centuries. Today it is possible to envision that artificial machine intelligence could cause dramatic or even existential long-term changes in human institutions, culture and capability. To predict and shape these long-term changes it is vital to understand the mechanisms by which technologies change society.

“For the past 400 years or so, technology has acted through economics by changing the fixed and marginal costs of processes. This change leads fairly directly to changes in the absolute and relative costs of products and services and shifts the relative advantages of capital and labor. These shifts flow into culture, norms and institutions, with popular entertainment and present-generation attitudes often in the lead. Changes to law and the structure of larger organizations generally lag behind.

“Artificial intelligence, as it is broadly defined, has reduced the marginal cost for many processes related to recognition (e.g., recognizing faces in images, or phrases in conversation) and prediction. And AI has advanced rapidly to be used in processes related to information discovery, summarization and translation. Since the emergence over the past year or so of successful ‘generative’ large language models, AI is reducing the cost of using established public knowledge to create information outputs (in the form of text, audio, video, data and software) in order to solve specified problems under human direction.

“Information technology, by making categories of information problems ‘cheap’ to solve, has disrupted the market for entire categories of information products and is transforming the professions involved. Telephone switchboard operators are long gone, and bank tellers are rare. Newspapers and the professions of journalism, bookkeeping, copyediting, weather forecasting and data entry have already changed drastically. IT support, remote customer service, librarianship and the legal profession are currently under strain.

“The generative AI models will increasingly disrupt professions engaged in producing information products – including lawyers, copywriters, grant writers, illustrators, graphic designers and programmers. Within 15 years it is likely that there will be significant disruption in these and related business models and professions – with substantial spillovers into culture, norms and institutions.

“It is also likely that AI will increasingly demonstrate more attributes of sentience (responsiveness to its environment) – which will increase the challenges of governing AI and raise the potential for chaotic systems behavior and malicious human exploits of the technology.

“Although general intelligence, sapience and super-intelligence could someday have widespread disruptive effects – and even pose existential threats – it is unlikely that these will arrive by 2040. Instead, we’ll likely see the hollowing-out of more professions related to information, knowledge work and the creation of routine information outputs. There will be some roles left – but they’ll be reserved for the most complex expert work.

“The algorithmization of these professions will have some democratizing effects, enabling many of us with more ideas than technical skills to express those ideas as pictures, prose and software, or even – using additive manufacturing technologies – physical objects. This simultaneously promises a wider expression of ideas and an increase in human capacity – along with an increased risk of homogeneity and monoculture in some characteristics of the resulting outputs.

“Further, AI systems will likely remain capital-intensive, energy-intensive and data-hungry. Increasing adoption of these systems without effective regulations is likely to shift competitive advantage away from human labor while promoting monopolies. Further, these systems do act to ‘fence in’ the commons of information by transmuting public information into proprietary commercial AI models – and there is a possibility licensing will be imposed on the resulting outputs. This could yield a substantial concentration in economic and cultural power.

“Ensuring that the disruptions caused by these technologies enhance human agency and the public knowledge commons rather than concentrating power and control requires thoughtful regulation of AI markets and systems. Moreover, growing societal experience with algorithmic systems makes it painfully clear that unregulated algorithmic systems are essentially Machiavellian: they are often able to produce results that do extremely well at optimizing a direct goal (sometimes defined only by implication) while avoiding anything that isn’t explicitly built-in as a constraint. As a result, these systems regularly shock us by discovering unexpected ‘solutions’ that meet the immediate goals but sacrifice fairness, privacy, legality, factuality, attribution, explainability, safety, norms or other implicit constraints that we humans assume need to be part of an answer, but which we didn’t explicitly include.

“Those who pay attention to the science and scholarship of AI have come to a consensus that these problems cannot be solved simply by bolting guardrails to existing systems. Values such as privacy, explanation and fairness can be fully and effectively achieved only by carefully designing these capabilities into foundational AI models.”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”