“Predicting the future is a tricky business under the best of circumstances and the world in 2023 is pretty far from the best of circumstances. At its core, AI is just one more tool in humanity’s toolbox. Our task – as we jump into using AI with a mixture of rapture and horror – will be to treat it with the respect that we have for things like nitroglycerin. Used the right way, it can be hugely positive. Used the wrong way, it can blow up in our face.

“When I started thinking about how to respond to this, my obvious first thought was, ‘I wonder what AI would say?’ so I asked ChatGPT to ‘Write a 1,200-word essay on the future of artificial intelligence’ and it did, returning a nicely headlined ‘The Future of Artificial Intelligence: A Glimpse into Tomorrow’s World.’ And while I did get 1,200 words, I also got an essay of hard-to-argue-with generalities that sounded like the work of an eighth-grader who compiled everything from the first page of a Google search. Admittedly, I could have prompt-engineered this better and refined it more, but I thought my time would be better spent actually thinking about this myself. The biggest issue from my perspective, both as an academic and as a communications professional who teaches about the veracity of and confidence in information, is the ‘95% true’ problem.

“In my classes now, my graduate students do final presentations of evidence surrounding issues that relate to the UN Sustainable Development Goals as two-part presentations: one generated by AI, and one using their own resources and critical thinking. They then compare the two and share the suspected reasons why AI got it wrong, along with best practices for using generative AI in the future. One thing that we find consistently is that AI is often ‘close enough’ to be mistaken for accurate information. While this is a learning experience for graduate students (and for me), in the real world such output can be accepted as fact and thrown into the zeitgeist, influencing future searches and conversations. As these 95%-true answers become part of the corpus of knowledge, the next answer may be 95% accurate relative to material that is itself only 95% accurate. You see the potential problem.
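
To make the compounding concrete, here is a minimal sketch (an editorial illustration, not part of the original essay) of how accuracy erodes when each generation of answers inherits only about 95% of the truth from the one before; the flat 0.95 retention rate per generation is an assumption for illustration:

```python
# Minimal sketch: how "95%-true" answers compound when each new AI
# answer builds on a corpus that is itself only ~95% accurate.
# The 0.95 retention rate per generation is an illustrative assumption.
accuracy = 1.0
for generation in range(1, 6):
    accuracy *= 0.95  # each pass preserves ~95% of the remaining truth
    print(f"Generation {generation}: ~{accuracy:.1%} accurate")

# Output:
# Generation 1: ~95.0% accurate
# Generation 2: ~90.2% accurate
# Generation 3: ~85.7% accurate
# Generation 4: ~81.5% accurate
# Generation 5: ~77.4% accurate
```

Under that assumption, five generations of recycling drop the corpus below 80% accuracy, which is the potential problem the essay points to.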

“That’s my biggest worry, but there are plenty of others: There will be a feeling of ‘let AI do it.’ AI’s ubiquity will tempt us to give up ownership, control and responsibility for many of the things that we ask it to do (or don’t ask it to do and it just does). Principal among these may be the ability (or, perhaps, the loss of the ability) to think critically.

“Nicholas Carr considered this point in his 2008 Atlantic article, ‘Is Google Making Us Stupid?’

“Information ownership will become even murkier. As all of our thoughts, writings, musings and creative artifacts become part of the LLM, we are, in essence, putting everything into the public domain. Everything (including what I’m writing here) is now ‘owned’ by everyone. Or more properly, perhaps, by OpenAI. ‘Hey, it’s not me, it’s the AI.’

“I don’t have room to get into ethical AI or the gender, racial, or cultural biases, or to talk about what OpenAI founder and CEO Sam Altman has warned: that it is not completely outside the realm of possibility that advanced AI could overpower humanity in the future. But a poisonous potential result of offloading responsibility for information ownership to AI is that we as a global culture lessen ourselves in regard to civility, dignity and humanity even more than we have so far.

“There are many positives, of course. AI will help us be more productive at basic tasks. It can provide potentially more-accurate data and information in certain areas. It can help unlock more possibilities for more people in more areas.”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”