“By 2040, the implementation of AI tools (along with related innovations and likely policy changes/self-regulatory efforts) will change life in material ways, sometimes for good and sometimes not. And, as has been discussed extensively in technology policy, we face the Collingridge dilemma: we won’t know which problems are salient, or how to deal with them, until it may be too late.

“What stands out as most significant is my belief that we will be able to moderate the harmful impacts of AI on the creative industries. Some of the terms of the recent Writers Guild negotiations are illustrative. We may avoid many of the likely harmful impacts of AI on the creative sector when an industry code (and possibly law) specifies that AI-generated material can’t be used to undermine or split a writer’s credit or to adapt literary material, that the use of AI tools cannot be required of writers, and that companies have to disclose their uses of AI.

“That comes, of course, at the expense of some of the innovations AI could produce. But just as quaint small towns are willing to forgo certain innovations, such as big-box retailers or eight-lane highways, where there is political leverage and a delicate character to a specialized product, creative industry leaders may (wisely) find that the quality of business for all is higher without certain uses of AI.

“Not all sectors, of course, are susceptible to that leverage. For businesses whose product is more standardized (everything from food and beverage to phone service to clothing retail), AI will be deployed in every business process that stands to be improved by its predictive power. This can lead to lower prices where products are produced more efficiently. It could also lead to new profit margins where AI innovations are unique or more appealing and not easily reproduced by competitors. Models that draw on large amounts of customer demand data should, in theory, yield goods and services customers prefer.

“Moreover, because AI models are largely derived from publicly available data (meaning others can use the same data to build similar AI tools), monopoly control of such innovations is likely to be short-lived, absent protections leveraged to stifle competition (patents, mergers, partnerships).

“We will gain in some cases and lose in others, though ‘lose’ here is only from a price standpoint; innovations may yield net benefits for consumers. In either case, AI will transform business operations thoroughly and dramatically, with effects comparable to the introduction of typewriters and adding machines, or of personal computers.

“In the socio-political space, what stands out to me is the potential for AI to, in Steve Bannon’s famous phrase, ‘flood the zone with shit.’ First, generative AI tools can generate enormous amounts of content (text, images, charts, etc.) with very little effort. Second, generative AI tools are indifferent to the truth-value of what they create. AI tools do not care whether an image is realistic, whether an asserted fact is true, whether a hypothesis has evidence to support it or whether an opinion is plausible, at least not unless/until humans care.

“While many generative AI tools are likely to be used smartly in most cases, including by industry, NGOs [non-governmental organizations], political campaigns and others with louder voices in the socio-political space, rogue actors not constrained by boards of directors, voters or other checks and balances have few incentives to show such restraint. Most users will be inclined to ‘push the edge,’ using AI’s power to create and amplify misinformation exactly as far as it advantages them without creating undue risk of backlash. And our politics increasingly reward theatrics.

“All of this assumes we will be able to sort out important debates about permissions. I am less worried about permission to innovate; the U.S. is unlikely to adopt an extreme precautionary approach, in part because the EU is likely to land on an only modestly precautionary one. Permission to use the data on which models are trained (personal data and copyrighted material), however, will be trickier to manage and scale. Currently, rights to restrict the use of personal and/or copyrighted material are poorly enforced. That won’t last.

“AI will, well before 2040, have a ‘Napster’-like moment, when business models that assume unlimited, free access to the data that powers AI tools are no longer sustainable. Effective AI tools will need ways to secure appropriate permissions, methods that also scale well. My prediction is there will be some commercial opportunity here: private and/or public-private institutions will be created (or should be) to allow developers to obtain permissions more efficiently from the massive sets of data subjects and rights holders whose material trains foundation models.
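
To make that prediction concrete, here is a purely hypothetical sketch of how a training pipeline might consult such a clearinghouse. The `PermissionRecord` shape, the license statuses and the budget logic are all invented for illustration; no standard API of this kind exists today.

```typescript
// Hypothetical sketch only: a clearinghouse-style permission check for
// training data. The types, statuses and fee logic are invented.
type LicenseStatus = "granted" | "denied" | "needs-payment";

interface PermissionRecord {
  workId: string;        // identifier for a copyrighted work or personal-data record
  status: LicenseStatus; // the rights holder's answer for training use
  feeUsd?: number;       // optional per-use fee set by the rights holder
}

// Keep only items whose rights holders have granted (or sold) permission,
// spending at most budgetUsd on paid licenses; everything else is excluded.
function licensedSubset(records: PermissionRecord[], budgetUsd: number): PermissionRecord[] {
  let spent = 0;
  return records.filter((r) => {
    if (r.status === "granted") return true;
    if (r.status === "needs-payment" && r.feeUsd !== undefined && spent + r.feeUsd <= budgetUsd) {
      spent += r.feeUsd;
      return true;
    }
    return false; // denied, unpriced or unaffordable items never enter training
  });
}

// Example: three works and a $1 licensing budget.
const sample: PermissionRecord[] = [
  { workId: "w1", status: "granted" },
  { workId: "w2", status: "needs-payment", feeUsd: 0.5 },
  { workId: "w3", status: "denied" },
];
console.log(licensedSubset(sample, 1).map((r) => r.workId)); // ["w1", "w2"]
```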

“Such a permissions infrastructure may or may not be assisted by regulation, depending on the jurisdiction.

“Countries with highly functioning democracies (or those that operate by executive fiat) may be able to pass regulations, but industry-initiated solutions will arise regardless of whether governments act, just as organizations such as BMI and ASCAP facilitated copyright permissions in the music industry, as ‘global privacy control’ browser tools now communicate privacy preferences, and as clearinghouse businesses (and, later, auctions) sorted out the market for radio spectrum licenses.
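
One of those mechanisms is already concrete: the ‘global privacy control’ tools mentioned above implement the Global Privacy Control (GPC) proposal, which surfaces a user’s opt-out preference to page scripts as `navigator.globalPrivacyControl` and to servers as a `Sec-GPC: 1` request header. A minimal sketch of a site honoring that signal (browser support still varies) might look like this:

```typescript
// Minimal sketch of honoring the Global Privacy Control signal in the browser.
// The GPC proposal exposes the preference as navigator.globalPrivacyControl;
// TypeScript's DOM typings may not include it yet, hence the cast below.
const nav = navigator as Navigator & { globalPrivacyControl?: boolean };

if (nav.globalPrivacyControl === true) {
  // The user has signaled an opt-out of data sale/sharing; skip those paths.
  console.log("GPC detected: disabling data sale and sharing.");
} else {
  // No preference expressed; fall back to the site's default consent flow.
  console.log("No GPC signal: applying the default consent flow.");
}
// On the server, the same preference arrives as a "Sec-GPC: 1" request header.
```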

“Thus by 2040 the impact on humans is likely to be mixed. Economic opportunities are likely to increase, along with improved customer support, product selection and e-commerce ease of use. Misinformation and other forms of epistemic corruption are also likely to increase across the board, so how we know what we know will be challenged. That will have downstream effects on large-scale human activity such as elections, crime and immigration, as well as on smaller-scale events such as family political arguments and even human flourishing.

“Ideally, the next 15 or so years is enough time for a modest improvement in how humans – individually and collectively – take in and process information to arrive at knowledge; at least enough of an improvement to ameliorate the impacts of epistemic corruption. But my guess is we’ll still be well short of this ideal by 2040.”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”