The following respondents to this canvassing of experts wrote contributions that consider a wide range of issues tied to the future of humans as artificial intelligence begins to emerge more fully across broad swaths of society.

Warren Yoder
The path to 2040 will be a jumble of unanticipated developments in tech, culture and policy

Warren Yoder, longtime director at the Public Policy Center of Mississippi, now an executive coach, predicted, “The next 15 years will be a time of confusion, partly because of the initial misdirection and partly because the results of generative machine learning expose how little we know about ourselves.

“The path to 2040 will be a disordered jumble, full of unanticipated developments in technology, culture and public policy.

“Will machine learning make human life better or worse? Yes. Both. And many other things besides. Machine learning is capital- and expertise-intensive. Those who develop and finance machine learning have demonstrated over and over again that they have remarkably limited understanding of the complexity of both human individuals and society. This is most obvious in the names chosen for the new field. The basic technology was described as neural networks, even though neurons are far, far more complex.

“The field was called artificial intelligence, even though intelligence is a poor representation of humanity’s culture-based capabilities. No one objected when these names were mere marketing puffery. Now that machine learning has developed modest capabilities, these misleading definitions are a serious misdirect.”

Esther Dyson
Focus on the long-term welfare of people and society: Ask not what AI can do but what we can ask it to do

Esther Dyson, Internet pioneer, journalist and founder of Wellville, wrote, “The question of the future of humans and AI seems impossible to answer because of unexplainable humans, not because of unexplainable AI. So much depends on our use and control of AI. And that depends on who ‘our/we’ is. There are a number of issues here.

“Machines gave us huge gains in our ability to produce and eventually move things, including food. That in turn gave us too many choices, which often overwhelms us (see Barry Schwartz’s brilliant book ‘The Paradox of Choice’).

“While poor people often lack the money/security to make good choices, rich people lack the time to enjoy/make use of all their options (as described in Eldar Shafir and Sendhil Mullainathan’s equally brilliant book ‘Scarcity’).

“We have gotten used to accelerated but overfilled time. Then and now, you could lose your life in a few seconds, but in the past there were very few instant solutions for any problem.

“We now live in a world of pills and instant shopping and even instant companions – found on dating apps (some real, some duplicitous) and also on mental-health support apps. We expect immediate relief of our craving. But instead, our cravings never go away; rather, they turn into addictions. Indeed, what makes us most human may be how we perceive our own time and that of others.

“That was the fundamental gulf between the protagonist of the movie ‘Her’ (played by actor Joaquin Phoenix) and his AI ‘lover’ Samantha (Scarlett Johansson); she had more than a thousand lovers and time to pay attention to each of them. But in the end, what we’re seeking is share of mind from other humans, not fungible minutes of attention. 

“Instead of regulating AI, we need to regulate its impact, and AI can actually be very helpful at that – both at predicting outcomes and at assessing counterfactuals – whether in healthcare, advertising or political campaigns. It can also automate huge amounts of physical labor and routine decision-making or repetitive work. However, it’s up to humans to figure out what the goals of those AI tools and algorithms should be: How much to maximize sales versus reduce working hours? How much to maximize profits for the next year versus for the current CEO’s tenure versus on behalf of the investors who trade on the basis of a quarter’s earnings? Things were very different when entrepreneurs built businesses for their grandchildren to inherit.

“Or is ‘we’ actually really people like Vladimir Putin and Donald Trump and Elon Musk – caught up in their own visions of a grandiose future (whether based on an imperial past or a future interstellar civilization)? They measure success differently, and they try to spread that vision whatever way they can. Mostly, they first seduce people with visions of power and money – and then make them complicit through the compromises they have made to realize those visions. Some go knowingly, but most are swept along, unexplainable even to themselves. 

“AI will inevitably do a lot of useful things. I’d rather have an AI than a hungry, grumpy judge sit on my case in court. And, as a nondriver with no illusions about how safely I (and presumably most sensible people like me) drive, I’d rather sit in a car driven by a predictable AI that does not chat with the passengers, try to drink coffee, look at TikTok during stoplights or speed through yellow lights. Those points make sense and are only slightly controversial.

“To take a less abstract look, let’s use healthcare as an illuminating example. We can take healthcare as a model for pretty much everything, but with extremes. It’s a business, even though for some people – especially at the beginning of their careers – it’s also a calling. Indeed, it’s a very messy, complicated business. Its people – leaders and workers and customers – are overwhelmed with paperwork, with details, with conflicting regulations and requirements and stiff record-keeping protocols. And, of course, they must deal with privacy requirements that complicate the record-keeping and also serve to maintain silos for the incumbents. AI can help handle much of that. AI will take care of the paperwork, and it can make a lot of good, routine decisions – clearly and cleanly and with explanations. It’s very good at routine operations and at making decisions on the basis of statistics and evidence – as long as it’s prompted with the right goals and using the right data.

“Getting the right goals and using the right data are, of course, the big challenges. Is society really ready to consider the future consequences of its actions, not just a year from now, and not just a century from now, but in the foreseeable future? Think of the people today whose predictable diabetes we do not prevent this year and next; those people will eventually require expensive treatment and find their lives disrupted well before 2040. (See the recent frightening stats on diabetic amputations.)

“What about the kids who now spend their days in some sort of child storage because parents can’t afford or find childcare? They are likely to drop out of school, get into drugs and lose their way, and scramble as adults to make money however they can in 2040 and beyond. Then there are the mothers today who get inadequate pre- and post-natal care and counseling. They may suffer a miscarriage or fail to provide a nurturing childhood, with all the inevitable consequences by 2040.

“We need AI to predict the positive counterfactuals of changing our approach to fostering and investing in health in advance, versus spending too late on remedial care. If we use the right data and make the right decisions, for each patient specifically, AI will allow us to do one broad, important thing right: It will reduce busywork and free those who joined healthcare as a calling to be better humans – paying human attention to each of the individuals they serve.
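Dyson’s argument here is essentially about counterfactual prediction. The following is a purely illustrative sketch – synthetic data and hypothetical feature names, not any real clinical model – of how a simple model can compare a predicted outcome with and without an early intervention:

```python
# Illustrative only: synthetic data, hypothetical features. Sketches the
# counterfactual comparison Dyson describes: same patient, one changed input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
age = rng.random(n)                       # normalized age
bmi = rng.random(n)                       # normalized BMI
care = rng.integers(0, 2, n)              # 1 = received preventive care
# Toy ground truth: high BMI without preventive care -> costly treatment later
needs_treatment = ((bmi > 0.6) & (care == 0)).astype(int)

X = np.column_stack([age, bmi, care])
model = LogisticRegression(max_iter=1000).fit(X, needs_treatment)

factual = np.array([[0.5, 0.7, 0]])         # this patient, no preventive care
counterfactual = np.array([[0.5, 0.7, 1]])  # same patient, with care

print("predicted risk without intervention:", model.predict_proba(factual)[0, 1])
print("predicted risk with intervention:   ", model.predict_proba(counterfactual)[0, 1])
```

The gap between the two predicted risks is the kind of “positive counterfactual” Dyson says should inform spending decisions; whether it is trustworthy depends entirely on the goals and data behind the model, which is her point.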

“Our challenge – in healthcare as elsewhere – is to train humans to be human. Training AIs is scalable: Train one and you can replicate it easily. But humans must be trained one by one. Yes, they learn well in groups, but only if they are recognized as individuals by other individuals.

“There are mostly positive and mostly negative scenarios for the near future. Both will happen across different societies and, of course, they will interact and intersect. There will be stark differences across countries and across boundaries of class and culture within countries. I doubt that one side or the other will win out entirely, but we can collaborate to help spread the good scenarios as widely as possible. We’ll still be asking the same question in 2040: ‘How will it turn out?’ It won’t be over.

“As a society, we need to use the time we spend on rote decision-making and rule-following – which AIs can do well – to free ourselves and train ourselves to be better humans. We need to ask questions and understand the answers. We need to be aware of others’ motivations – especially those of the AI-powered, business-model-driven businesses (and their employees) that we interact with every day.

“In the positive parts of the planet, AI – in its ethical form – will win out and we’ll start focusing not so much on what AI can do, but on what we ask it to do. Do predatory business models reign supreme, or do we focus more on the long-term welfare of our people and our society? In short, we need explainability of the goals and the outcomes more than we need an understanding of the technological underpinnings.

“And we need to understand our own motivations and vulnerabilities. We need to understand the long-term consequences of everyone’s behavior. We need the sense of agency and security that you get not from doing everything right, but from learning by making, acknowledging and fixing mistakes. We need to undergo stress and get stronger through recovery. What makes us special in some ways is our imperfections: the mistakes we make, the things we strive for and the things we learn.”

Jan Schaffer
Humans will not understand the consequences of advances in AI

Jan Schaffer, entrepreneur in residence in the school of communication at American University, said, “Advanced AI will change lives in a lot of ways by 2040, but humans will not fully understand the consequences. I worry about future economic prospects for worker-bee employees who comprise the middle- and lower-middle classes.

“I worry about the future of higher education. We may already have more colleges and universities than demand can support. As some of them close, small towns will feel the impact.

“AI will definitely help with better medical diagnoses and will give women and minorities better listening posts than many now have in the health sector. Robotic surgery seems promising, as does more-advanced robotic manufacturing. I’m not yet convinced AI can improve journalism or crime-solving – or even dating matchmaking. 🙂

“In truth, I’m kinda glad I’m not going to be around when the full impact will be seen.”

Chris Labash
‘AI’s ubiquity will tempt us to give up ownership, control and responsibility’

Chris Labash, associate professor of communication and innovation at Carnegie Mellon University, wrote, “Predicting the future is a tricky business under the best of circumstances and the world in 2023 is pretty far from the best of circumstances. At its core, AI is just one more tool in humanity’s toolbox. Our task – as we jump into using AI with a mixture of rapture and horror – will be to treat it with the respect that we have for things like nitroglycerin. Used the right way, it can be hugely positive. Used the wrong way, it can blow up in our face.

“When I started thinking about how to respond to this, my obvious first thought was, ‘I wonder what AI would say?’ so I asked ChatGPT to ‘Write a 1,200-word essay on the future of artificial intelligence’ and it did, returning a nicely-headlined, ‘The Future of Artificial Intelligence: A Glimpse into Tomorrow’s World.’ And while I did get 1,200 words, I also got an essay of hard-to-argue-with generalities that sounded like the work of an eighth-grader who compiled everything from the first page of a Google search. Admittedly, I could have prompt-engineered this better and refined it more, but I thought my time would be better spent actually thinking about this myself. The biggest issue from my perspective, both as an academic and as a communications professional who teaches about the veracity of and confidence in information, is the ‘95% true’ problem.

“In my classes now, my graduate students do final presentations of evidence surrounding issues that relate to the UN Sustainable Development Goals as two-part presentations: one generated by AI, and one using their own resources and critical thinking. They then compare the two and share the suspected reasons why AI got it wrong and best practices for using generative AI in the future. One thing that we find consistently is that AI is often ‘close enough’ to be mistaken for accurate information. While this is a learning experience for graduate students (and for me), in the real world such output can be accepted as fact and thrown into the zeitgeist, influencing future searches and conversations. As these types of 95%-true answers become part of the corpus of knowledge, the next answer may be only 95% accurate relative to material that is already just 95% accurate. You see the potential problem.
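The compounding Labash describes is easy to put numbers on. A toy calculation (illustrative figures only, not a claim about any real model’s accuracy) shows how quickly “95% true” erodes once each generation of content is derived from the last:

```python
# Toy model of the '95% true' problem: if each generation of content is only
# 95% faithful to the generation it was derived from, fidelity compounds down.
accuracy_per_generation = 0.95
for generation in range(1, 6):
    fidelity = accuracy_per_generation ** generation
    print(f"generation {generation}: ~{fidelity:.1%} faithful to the original")
# Prints ~95.0%, ~90.2%, ~85.7%, ~81.5%, ~77.4%
```

After five rounds of derivation, barely three-quarters of the original signal survives – the “potential problem” Labash asks the reader to see.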

“That’s my biggest worry, but there are plenty of others: There will be a feeling of ‘let AI do it.’ AI’s ubiquity will tempt us to give up ownership, control and responsibility for many of the things that we ask it to do (or don’t ask it to do and it just does). Principal among these may be the ability (or perhaps, lack of ability) for critical thinking.

“Nicholas Carr considered this point in his 2008 Atlantic article, ‘Is Google Making Us Stupid?’

“Information ownership will become even murkier. As all of our thoughts, writings, musings and creative artifacts become part of the LLM, we are, in essence, putting everything into the public domain. Everything (including what I’m writing here) is now ‘owned’ by everyone. Or more properly, perhaps, by OpenAI. ‘Hey, it’s not me, it’s the AI.’

“I don’t have room to get into ethical AI or gender, racial or cultural biases, or to talk about the potential – as OpenAI founder and CEO Sam Altman has warned, it is not completely outside the realm of possibility – that advanced AI could overpower humanity in the future. But a poisonous potential result of offloading responsibility for information ownership to AI is that we as a global culture lessen ourselves in regard to civility, dignity and humanity even more than we have so far.

“There are many positives, of course. AI will help us be more productive at basic tasks. It can provide potentially more-accurate data and information in certain areas. It can help unlock more possibilities for more people in more areas.”

Frank Kaufmann
‘Imagining that AI can replace human contributions to outcomes arises not from wrong views about technology, but from wrong views about being human’

Frank Kaufmann, president of the U.S.-based Twelve Gates Foundation, commented, “I believe AI has just as much potential to be massively harmful as it has to be massively helpful. It is being rushed to market today in a way not unlike the rush to global mass medical experiments taking place during this same era. The AI rush has been condemned by entrepreneur Elon Musk, who joined thousands of others in calling for a moratorium on the race for AI supremacy, warning of the great danger of the drive toward what is known as ‘the singularity.’

“I do think fear of the singularity is legitimate absent positions drawing from classical religious faith. Not fearing it without standing in a counterpoint grounded in some form of classical religious belief is, in my view, a form of naivete or Pollyanna-ism (i.e., being optimistic as a simple act of will without providing sufficient bases in reason to support one’s affirmation).

“The religious faith notion for rejecting the possibility of AI (machines) wiping out humans builds on the affirmation that humans are created by something beneficent and all powerful for a purpose, and in the end it is not possible to develop something with sufficient power to annul that.

“I have trained AI bots while employed by a for-profit firm, and I use AI for my scholarship in areas of social science. It can be helpful only when the user has the foundation to be in an ‘assessing dialogue’ with what the AI produces in response to one’s prompts and requests. This necessity for the existence of an ‘assessing subject’ relating to AI-produced outcomes is one of the realities that makes me less anxious about the prospect of AI possibly ‘taking over’ in the future.

“Here’s an example prompt for an AI: ‘Explain in academic style the economic impact of the Gutenberg Press.’ If the person writing that prompt and then perhaps submitting or trying to publish the AI outcome has never previously produced academic writing, or has never produced content related to the economic impact of technological developments, how is this person to have any idea that she or he hasn’t just received a stream of utter garbage?

“Or how about using the prompt, ‘Name four Stuxnet derivatives capable of nullifying current Iranian progress in isotopic enrichment?’ Or: ‘Write an email to my boss to tell her that I am unavailable tonight, shaped in a way that shows my interest in her invitation.’ If an employee is too lazy to write thankfully and apologetically to her boss, can AI really solve that?
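For readers who have only seen such prompts typed into a chat window, here is a minimal sketch of sending Kaufmann’s first example to a model programmatically. It assumes the OpenAI Python SDK (v1+) with an `OPENAI_API_KEY` set in the environment; the model name is illustrative:

```python
# Minimal sketch: sending the example prompt to a chat model.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Explain in academic style the economic impact of the Gutenberg Press.",
    }],
)
print(response.choices[0].message.content)
```

Nothing in this call validates the answer; the output still requires exactly the ‘assessing subject’ Kaufmann describes.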

“Vanderbilt University DEI officials used an AI chatbot to publish a consoling public statement in response to a mass shooting at Michigan State University and later had to apologize for it. Imagining that AI’s capacity (even that of generative, or even cognitive AI) for breadth, depth, speed, range and efficiency could substitute for human investment in outcomes is a form of techno-materialism or techno-humanism. Imagining that AI can supplant or replace human contributions to outcomes arises not from wrong views about technology, but rather from wrong views about being human.

“Is anything gained by anyone anywhere by having AI write a ‘sincere’ apology to their boss? The invitation to have a machine do so is perverse. Where might a person have learned the enriching beauty of apologies and supportive expressions of interest in things important to people in our lives? Probably these capacities and these sensitivities are developed while growing up in a family (or perhaps from a coach, a caring teacher or a surrogate).

“Can AI have the experience of having a family? Can AI have a son it cares for? Can it have a parent for whom it is grateful? Can the unique, incomparable strengths that come from care for one’s child be transferable to AI? If not, then we can begin to see where AI can help and where it cannot.

“Even if we move beyond generative AI to ‘cognitive’ AI, still AI poses no threat to our authority in the realm of ‘intelligence’ and genuine progress toward evermore elegant manifestations of culture and community. Imagining a true threat to the usefulness of humans becomes possible only if we mistakenly imagine cognition to be the preeminent capacity of humans. The diminishment of being human to ‘utility’ is a darkness gurgling in the bowels of technocrats.

“In summation, it is my view that AI is merely the latest new technology, following the path of the wheel, the printing press and the combustion engine. It is broader, deeper, stronger and faster than humans. When it is asked to do what humans can do, it cannot and will not accomplish it as human beings uniquely do, and there is nothing as lovely, desirable or magical as what humans can uniquely do.

“If AI is utilized to advance and improve the realms of love, care and scientific and artistic creativity our world can become endlessly more fine. If it is used to serve our darkness, greed, cruelty and capacity for violence, it will hurl us into a new Dark Age, and from there sons and daughters of some mother will start again with the invention of the wheel.”

Seth Finkelstein
AI seems poised to add to the pressures of wealth inequality and associated social tensions

Seth Finkelstein, a professional programmer, wrote, “Let me start by deriding the AI apocalypse fearmongers. I wonder if some of the promotion of these ideas comes from venture capitalists and the like to serve as an effective way of diverting attention from discussions of AI social issues (racism, sexism, etc.) and AI economic issues (looming worries over the future of human jobs). It’s sort of a local version of the overall political alliance between plutocrats and evangelicals, in which worrying over ‘the afterlife’ can distract from the misery of current life.

“Many low-level (though still professional) white-collar jobs are going to disappear. Not all and not at the highest levels, but there will be a major shift due to what will be automated. Think of how there are still jobs for musicians but recorded music has replaced a whole set of positions. As a professional programmer, I see this process underway very directly. Some mostly-rote tasks which used to be intern or entry-level assignments can be done at least as a first draft by AI. Programmers will not all be replaced, but there’s going to be a general leveling-up of what’s required in a paid human job in that sector. On the other side, ‘AI programmer’ is going to be a new job itself.

“There will be a massive explosion of new auditory and visual art by 2040 – basically computer-generated imagery (CGI) taken to the next level. CGI art has gotten so much better so fast that it has confused people’s sense of limitations. Deepfakes can be jarring; they exceed our current cultural knowledge. That’s a sign of a real advance. It means what could previously be done with animated characters is now possible with ‘live action.’ That of course brings in all sorts of social and legal issues.

“One thing I’m very skeptical about is the stock prediction of the AI girlfriend/boyfriend. We frequently see it in cliché sci-fi shows, and pundits are writing scare stories about it, yet I never see anyone actually using its current primitive implementations. Well, it’s a big world, but we aren’t hearing that a lot of people are running it as part of their lives.

“Then again, anthropologically, there’s a whole set of practices which are basically ‘listen to people go on about their daily problems and make soothing noises in reply.’ On the other hand, AI-based customer support is going to be a big business. Those workers now are essentially forced into being robots who operate from scripts anyway. The prospect of being able to avoid fighting through annoying telephone-tree options is all the sales pitch any consumer will ever need to use it.

“Economically, the advances in AI are going to add to the pressures of wealth inequality and associated social tensions. I suspect this is partially what’s driving some of the popularity of AI doomerism punditry. Of course tech ghost stories are an ever-present genre. Still, I think there’s a detectable thread in the discourse where fearing the death of humanity is an acceptable allegory for fearing the death of one’s job.”

Zizi Papacharissi
AI is not ‘intelligence,’ it performs as we define it

Zizi Papacharissi, professor of communication and political science at the University of Illinois-Chicago, observed, “AI is not new, not artificial and not intelligence. It typically recycles old ways of doing things. There is nothing artificial about the way it reproduces human habits, but there is something manufactured about it that humans are not sure how to process yet. Finally, it is a genre of, an approach to, or a way of performing intelligence rather than serving up intelligence. This we must understand: We have designed technologies that perform what we have defined as intelligence – this is a thing very different from organic intelligence.”

Richard Barke
The forward momentum of AI is probably far too powerful to restrain or direct

Richard Barke, professor of public policy at Georgia Tech, commented, “The past few years have seen a distinct decline in the trust that citizens have in their institutions – political, business, educational, etc. Fake news and skepticism about science, expertise and higher education already have eroded the confidence that many have in government, universities and the private sector. Even without advances in AI, that trend is very threatening.

“According to a 2023 Gallup survey, only small business and the military rate more than 50% confidence. Fewer than 20% of Americans have confidence in newspapers, the criminal justice system, television news, big business or Congress. All of these are easy targets for AI-related cynicism. The potential for AI to greatly accelerate the decline in trust is already obvious. Markets, schools, and civic culture all depend on trust, and once it is perceived to be gone it is extremely difficult to recover.

“The advances that AI will enable will be viewed through a filter of suspicion and fear, encouraged by news and entertainment media, and reassurances about the risks of AI will be viewed with skepticism, especially after the occurrence of several dramatic scandals or unfortunate incidents involving typical citizens.

“Efforts to corral the development and applications of the technology through self-regulation by the IT sector or by government regulation are laudable, but it is unlikely that the pace of oversight can keep up with technological advances.

“Transparency is essential but it will always be imperfect; the expected benefits to those who are fastest, regardless of their impact on society, are too great. Within two months of its launch, ChatGPT was estimated to already have more than 100 million active users. Google Bard was forecast to surpass 1 billion users by the end of 2023. The forward momentum probably is far too powerful to restrain or direct.”

Calton Pu
It is difficult to define artificial general intelligence due to changing variables

Calton Pu, director of the Center for Experimental Research in Computer Systems at Georgia Tech, observed, “The definition of AGI suffers from a fundamentally flawed assumption: that all of humanity behaves in a consistent manner constrained by some unseen, unwritten, unspecified yet inescapable limitations of the entirety of humankind. It is clear that the current state-of-the-art AI tools already surpass many human beings in their performance in the particular specialty that an AI tool was trained for. This should not be a surprise since many mechanical robots have surpassed human performance in their (robot) specialty. For AGI to surpass all of humanity requires that all humans stop evolving and learning.

“If we consider AGI as a competition between AI (in whatever form) and humanity (individually and as organized societies) as they are co-evolving, it is clear that they will help each other evolve, since the smartest humans are going to learn from and continue to utilize (even the smartest) AI, just as humans (and their tools) became stronger with robots. We can’t talk about ‘people’ as a monolithic block. AI will not impact all of humanity in the same way, and we can’t consider a meaningful ‘average’ over the entire block of humanity. … AI technology will evolve continuously in the near future, so even the college graduates of today may become quickly out-of-date in a few years if they stop learning.

“As we have learned from history, technology in general has been used for good and evil by people with varying intentions, goals and means. AI will not be an exception, and evolving AI tools will be used by many people and institutions (both technology-savvy and technology-ignorant) for many purposes, some good and some evil. … If AI tools are used for good, then their impact will be positive. Conversely, if AI tools are used for evil purposes, then their impact will be negative. The question of human and social impact is not really about the evolution of technological tools, but how they are used.”

David R. Barnhizer
AI surveillance and social threat systems are likely to repress freedom and damage democracy

David R. Barnhizer, professor of law emeritus and co-author of “The Artificial Intelligence Contagion: Can Democracy Withstand the Imminent Transformation of Work, Wealth and the Social Order?” wrote, “Where will we be in 2040 if the government and corporate control over information and personal data we have already been seeing is exacerbated by emerging AI tools? According to a 2017 report by Freedom House, the governments of at least 30 nation-states were using the Internet and AI capabilities to shape and control their citizens.

“These nation-states, including China, Russia, Iran, Egypt, Pakistan, North Korea, Thailand and Turkey, have been monitoring and restricting Internet communications and access while using armies of opinion shapers to spread propaganda to their populaces. Critics of the existing political structure in China, Thailand, Egypt, Saudi Arabia and Turkey are jailed and worse. Freedom House reported in 2023 that of the 70 countries it studied, conditions for human rights online had deteriorated in 29 and only 20 registered gains.

“The European Union has developed wide-ranging criminal laws aimed at hate speech. Criminal charges can be brought against people deemed to have offended a minority or historically disfavored identity group by their statements, whether publicly or on the Internet. This grant of power to offended groups and individuals has chilled some legitimate free speech. Universities, supposed bastions of free speech and inquiry, also place limits on what may be said.

“Virtually all speech may offend someone, given each individual’s subjective perception and the ability to use claimed sensitivity for political purposes. The grant of the power of ‘subjective sensitivity’ to limit, ban or sanction others’ speech in a period of the rapid growth of identity politics is a destructive choice for the preservation of the kinds of challenging and conflicting discourse required for healthy democratic societies. Such a grant of subjective sensitivity is socially destructive when backed by formal laws or one-sided institutional tolerance of vicious attacks on anyone who does not conform to your views and agendas. It also forces people to express themselves anonymously. And online anonymity levels its own threats.

“The preservation of online anonymity plus mob psychology are core causes for the malicious venom we see posted online in spaces that should serve the public with intelligent exchange and discussion. Peter Drucker described what is happening in our society as the ‘new pluralism,’ explaining, ‘the new pluralism … is a pluralism of single-cause, single-interest groups. Each of them tries to obtain through power what it could not obtain through numbers or through persuasion. Each is exclusively political.’

“The language used by each collective movement (and counter-movement) is language of attack, protest and opposition. It is language used as a weapon to gain or defend power. To achieve political ends, they engage in rampant hypocrisy and manipulate through the use of ideals and lies.

“World Wide Web originator Tim Berners-Lee has said one side effect of the massive and coordinated collection of data is the endangerment of the integrity of democratic societies. He warns that governments are ‘increasingly watching our every move online’ and passing laws such as the UK’s Investigatory Powers Act, which legalises a range of snooping and hacking tools used by security services that, he said, ‘trample our right to privacy.’ He said such surveillance creates a ‘chilling effect on free speech,’ even in countries that don’t have repressive regimes.

“Berners-Lee also said, ‘It is too easy for misinformation to spread on the web, particularly as there has been a huge consolidation in the way people find news and information online through gatekeepers like Facebook and Google, who select content to show us based on algorithms that learn from the harvesting of personal data. … This allows people with bad intentions and armies of bots to game the system to spread misinformation for financial or political gain.’

“The militarization of AI and robotics systems by the U.S., UK, Russia and China is a dangerous development with even some of the top U.S. military leaders warning about the dangers of autonomous weapons systems. But today’s more widespread, freely available and extremely effective weapons are not just bullets and explosives. 

“From the standpoint of politics and society, the most fearful new autonomous weapons systems work by intimidating, isolating and controlling people through a kind of psychological warfare. By 2040, that warfare could be supercharged by the changes in society that will take place in the next decade-plus as artificial intelligence tools grow more powerful and are weaponized for ill purposes.”

Vanda Scartezini
The future of AI is a continuous work in progress

Vanda Scartezini, a co-founder and partner at Polo Consulting who has served in many global and Brazilian IT leadership roles over the past four decades, commented, “My hope is that AI will be given the chance to develop positively and serve humankind. This will only happen if it is developed without too many government-imposed restrictions.

“The technology, itself, is neutral; it is not inherently good or bad. Humankind’s uses of it – as with any powerful tools – must receive the credit and the blame. Atomic technology can be used to safely generate energy with positive impact for millions of communities, and it can also be carried in bombs that wreak great destruction and loss of human lives.

“Ethical and safe use of AI has become a major emphasis in the development of AI today. It is true that – as with any digital technology – the bad guys will have the same opportunities as the good guys, and they will use it to the detriment of society. However, advanced AI is also being developed to identify and try to track and halt destructive behavior, possibly even before it happens.

“I believe AI will mostly be applied to positive uses for the benefit of humanity. The future of AI will depend upon the amount of accurate data collected and applied to improving its performance. As such, it will be a continuous work in progress, but it will only advance if legislation does not cut its wings. It will inspire great progress in areas such as education, personal and business communication, precise medical diagnoses and health evaluations, improved research in agribusiness and many other aspects of people’s lives.”
