“I start with two assumptions. The first is that we won’t see much progress on general-purpose AI in the next 15 years or so. If this is wrong, all bets are off, and one of the biggest challenges is going to be sorting out all kinds of human-species-oriented bias; these intelligences won’t be human and won’t act like humans.

“My second assumption is that we won’t see radical advances in human-computer interfaces (direct brain/neural connections), or if so only among small elite groups in the sciences, the arts, finance, medical care or the military (to name a few possibilities where the advantages may be so compelling that we’ll see adoption of those technologies).

“Given those assumptions, one of the most compelling conclusions for me is that by 2040 most people won’t spend much time thinking about ‘AI’ per se. AI technologies (machine learning, natural language processing, robotics, some generative technologies, etc.) will be embedded in and connected to everything, and most people will use them within the context of other tasks and systems, not as ends in themselves.

“For example, in scientific research, engineering and drug discovery we’ll see automated labs or collections of instruments that can perform guided scientific discovery and optimization of materials or processes under high-level human guidance. We are already seeing early examples of this, and over the next 15 years these will steadily grow in capacity and levels of autonomy. But they will remain limited in their ability to formulate new hypotheses and design ways to explore them, or to deal with really unexpected or novel situations.

“We’ll see a lot of AI technologies packaged as consultants, advisors or assistants to human ‘experts’ in various sectors. Obvious examples include health care, financial advising, perhaps sales and some forms of teaching. There are likely to be many more. Progress in these areas will be gradual. I don’t expect severe and sudden disruptions in general, though there is certainly the possibility of dangerous, suddenly disruptive uses of these technologies.

“I can imagine some significant crises arising in the financial markets if risk isn’t recognized and managed appropriately, but this doesn’t feel fundamentally new; rather, it’s just an additional set of tools to allow humans to do stupid things. I’m more concerned with warfare and warfare-adjacent applications of AI (e.g., terrorism, asymmetric warfare), which may be characterized by high levels of desperation and the need to match or one-up opponents in what are perceived as existentially threatening scenarios. These situations could produce horrible outcomes.

“We are at the beginnings of a major reconsideration of our conceptualization of the role of creators and how we recognize and delineate their rights over their creations. We are gaining the ability to easily and convincingly re-animate performers (e.g., deceased film stars, sports heroes), to author new works ‘in the style of’ previous authors and to involve various kinds of computational and AI technologies intimately in new creative work.

“Legal controversies are already arising over the use of copyrighted or otherwise protected materials as ‘training data’ for AI-based systems. These developments, which are being accelerated by AI-related technologies, do not fit well within our existing cultural or legal frameworks and our understanding of creative works and creators. Resolving this is going to be a slow – and definitely disruptive – process. It may have some very unexpected and important second-order effects, for example in the ways that we relate to our cultural history and centuries of creative works that form part of this history, or even in the way we relate to our individual or family histories (computational re-animations of our ancestors).

“Sources, provenance, and chains of custody have become critical, along with issues of corroboration and consistency. I am very skeptical that we will be able to restrict or control (e.g., through watermarking requirements) the technologies that can generate utterly convincing sounds and images of events that never took place. Rather, as a society we are going to have to learn to understand and deal with the results of these technologies.

“The effects of these social changes will ripple through areas as diverse as the legal system, politics and news reporting, as well as in entertainment and the arts and sciences, and will perhaps cause profound changes in the conduct of day-to-day interpersonal relations. Sorting through this is going to be very difficult and disruptive but seems unavoidable.

“We need a complete social recalibration of how we think about evidence and truth. Generative AI technologies and applications such as deepfakes have brought us to the point where we can no longer believe our eyes and ears in any straightforward way.

“Closely related here are developments in computationally based ‘friends’ or ‘companions,’ which will make heavy use of AI technologies. These also raise issues about intellectual property, and indeed about the extent to which we regard them strictly as property; perhaps the ways we think about pets today will become a relevant point of departure.

“Overall, I am optimistic. On balance, these technologies will leave us in a better place as individuals and as a society, though there are going to be many surprises along the way.”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”