Many of these experts imagined highly varied scenarios, dependent upon the twists of fate yet to come before 2040. Some pointed out that future AI-abetted losses and gains will be unevenly distributed across humanity. Some said the future will be “scary” and some said it will bring “joy and love.” Some said it will drive “growth and productivity” but could also result in “rampant unemployment.” One said it could usher in an “age of abundance,” while others warned that it is also likely to inspire humans with an agenda to further weaponize AI, and that it has the potential to launch seemingly endless assaults on humans’ senses and deplete human agency.


Jamais Cascio
The questions are, ‘Can humans say “no” to AI, and can AI say “no” to humans?’

Jamais Cascio, distinguished fellow at the Institute for the Future, said, “There are two critical uncertainties as we imagine 2040 scenarios:

  1. Do citizens have the ability to see the role AI plays in their day-to-day lives, and, ideally, have the ability to make choices about its use?
  2. Does the AI have the capacity to recognize how its actions could lead to violations of law and human rights and refuse to carry out those actions, even if given a direct instruction?

“In other words, can humans say ‘no’ to AI, and can AI say ‘no’ to humans? Note that the existence of AIs that say ‘no’ does not depend upon the presence of AGI; a non-sapient autonomous system that can extrapolate likely outcomes from current instructions and current context could well identify results that would be illegal (or even unethical).

“It’s uncertain whether people would intentionally program AIs to refuse instructions without regulatory or legal pressure, however; it likely requires as a catalyst some awful event that could have been avoided had AIs been able to refuse illegal orders.

“Considering all of the above, here are four quick, AI-enabled humanity scenarios for 2040:

  • Careful Choices: A world in which humans can make choices about their interactions with AIs and AIs can identify and refuse illegal or unethical directives is, in my view, the healthiest outcome, as this future probably has the greatest level of institutional transparency and recognition of the values of human agency and rights. AGI is not necessary for this scenario. If it does exist here, this world is likely on a pathway to human-AGI partnership.
  • AI as Infrastructure: A world in which humans have the information and agency necessary to make reasonable choices about the ways in which AIs affect their lives but AIs have no ability to refuse directives is one where the role of AI will be largely utilitarian, with AIs existing in society in ways that parallel corporations: important and influential but largely subject to human choices (including human biases and foibles). AGI is unlikely in this scenario.
  • Angel on the King’s Shoulder: This is the opposite world, one in which the role of AIs in human lives is largely invisible or outside of day-to-day choice but AIs can choose to accept or reject human instructions. It is a ‘benevolent dictatorship’ where the people in charge use the AIs as ethical guides or monitors. This scenario is probably a best fit for a global climate triage future, one in which it would be easy for desperate leaders to make decisions with bad longer-term consequences without oversight. AGI in this scenario would be on a path to a machines-as-caretakers future.
  • And Then It Got Worse: A fourth scenario is one in which people don’t have much day-to-day awareness of how AIs affect their lives and the AIs do what they are instructed to do without objection. This is depressingly close to real-world conditions of the present, the 2020s. AGI in this scenario would probably start to get pretty resentful.
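Cascio’s two uncertainties define a simple 2×2 matrix over the four scenarios above. The sketch below makes that structure explicit; the function and variable names are my own illustrative shorthand, not part of the original scenarios.

```python
# Toy model of Cascio's two critical uncertainties as a 2x2 scenario matrix.
# Axis 1: can humans say "no" to AI (visibility and choice)?
# Axis 2: can AI say "no" to humans (refusing illegal/unethical directives)?

SCENARIOS = {
    # (humans_can_say_no, ai_can_say_no) -> scenario name
    (True, True): "Careful Choices",
    (True, False): "AI as Infrastructure",
    (False, True): "Angel on the King's Shoulder",
    (False, False): "And Then It Got Worse",
}

def scenario(humans_can_say_no: bool, ai_can_say_no: bool) -> str:
    """Map the two uncertainties to the corresponding 2040 scenario."""
    return SCENARIOS[(humans_can_say_no, ai_can_say_no)]

# Cascio notes the present-day 2020s resemble the case where neither
# humans nor AIs can meaningfully refuse.
print(scenario(False, False))
```

The mapping also makes clear why Cascio treats the axes as independent: each combination yields a distinct, internally consistent future.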

“The notion that the future harm and benefit from AI derives (at least in part) from the degree to which the general public has some awareness, understanding and choice about the role AI plays in their lives is not novel, but it is important. We currently seem to be on a path that’s accelerating the presence of AI in our institutional lives (i.e., business, social interactions, governance) without giving individuals much in the way of information or agency about it.

“On top of that, current AI visibly replicates the biases of its source data, and the heavy-handed efforts to remove these biases via code attack the symptoms, not the disease. A direct extrapolation of this path further embeds a world where citizens have less and less control over their lives and have less and less trust that outcomes are honest and fair. AIs, being in some senses alien, would likely be the target of human hostility, even though the actual sources of the problem would be the institutional and leadership choices about how AI is to be used. The underlying concern is that a future that maximizes the role of AI in economic and business decision-making – that is, a future in which profit is the top priority for AI services – is very likely to produce this kind of world.

“The idea that future harm and benefit from AI might come from whether or not the AI can say ‘no’ to illegal or unethical directives derives from American military training, where service members are taught to recognize and refuse illegal orders. While this training and its results have not been perfect, they represent an important ideal. It also raises a question regarding military AI: how do you train an autonomous military system to recognize and refuse illegal orders? This, then, can be expanded to ask whether and how we can train all autonomous AI systems to recognize and refuse all illegal or unethical instructions.

“A world in which most people can’t control or understand how AI affects their lives and the AI itself cannot evaluate the legality or ethics of the consequences of its processes is unlikely to be one that is happy for more than a small number of people. I don’t believe that AI will lead to a cataclysm on its own; any AI apocalypse that might come about will be the probably-unintended consequence of the short-term decisions and greed of its operators.”

Judith Donath
Personalized digital agents are likely to turn users into unknowing ‘agents of the machine’

Judith Donath, fellow at Harvard’s Berkman Klein Center for Internet and Society, observed, “In computer-human interface design, the word ‘agent’ refers to chatbots and other seemingly autonomous entities that act on behalf of the computer in their interactions with us human users. It does not take a great leap of imagination to predict that soon many of us will ourselves similarly be computer agents, acting on behalf of one AI system or another – a role we will have willingly, even eagerly, chosen.

“A voice, pleasantly modulated to your aural preference, reminds you to drink more water, helps you choose which gift to buy and provides answers to the innumerable questions, big and small, that pop up in the course of everyday life. It is your dedicated assistant – part DJ, part life coach, part trusted confidant – a quiet whisper that is your constant, necessary companion. Perhaps the most valuable function of this virtual coach is the astute guidance she provides in social situations. Run into a vague acquaintance at a party? Your assistant will remind you of their name, their kids’ names, whom you know in common. When conversations ebb, she will provide you with an apt comment so you can re-enliven the discussion. Difficult conversations, from salary negotiations to tense family disputes, are made much easier by this trusted advisor-who-lives-in-your-head: the collaboration not only helps you find more effective (and, if needed, less antagonistic) words, but also alleviates the stress of having to think it all through on your own.

“One can, of course, always turn the assistant off, silence her for 15 minutes, or an hour, or even until morning. But, once accustomed to the benefits of a preternaturally insightful aide, few will want to do so. Instead, people will adjust themselves to the rhythm of waiting a beat before speaking, just enough to catch those quick, helpful cues. Indeed, we are not far from the day when unmediated interactions with other human beings will have become rare; a social nakedness that will seem, outside a limited circle of close family and friends, unpolished and rather embarrassing.

“The requisite technologies are nearly here. Today, if you are a runner training with a virtual coaching program or a seeker of mental focus employing a digital productivity guru, you are already enjoying a primitive version of this. We have the ubiquitous earphones, each miniaturized new model more suitable for 24-hour wearing. We hear the chorus of personable and euphonious computer voices. And, most importantly, we have the greedily generative neural networks, the algorithmic metabolizers of every article, photograph, screed, riff, shopping list, program and spreadsheet available. Yes, there are pieces still to be solved, notably context-aware machine comprehension of live conversation and other situations. But nothing will delay the arrival of this scenario beyond a few years into the future.

“The optimistic view anticipates widespread improvement of human society thanks to these technologies. It foresees digital doulas who will model soothing baby-talk for young mothers struggling with a squalling infant, workplace-provided virtual facilitators who will discreetly steer meeting participants towards consensus (and, if necessary, away from the shifting edges of acceptable speech), and synthesized therapists who will be prescribed for members of troubled families and whose whispered cues will mediate their fraught interactions. Digital assistants, in this view, will democratize the advantage that wealthy, powerful people have long enjoyed: the superpower of an ever-present confidant, supplying the well-wrought words and timely hints needed to craft and maintain one’s desired image.

“But digital assistants will have far more influence over their person than their human analogues have. Each interaction that every artificial entity engages in provides their parent company – and those to whom they sell this information – with data about what phrases, tones and timings prove most persuasive. Researchers endeavor to find ever more effective ways to make social bots appear more trustworthy – how to better mimic the expression, gestures and intonation of a trustworthy person. When performed by a human, these actions are meaningful because they are intrinsically linked to cognitive and emotional processes related to the trustworthiness of the individual’s intentions. But when performed by a machine there is no such tie; the mimicry only serves to make people more vulnerable to digital manipulation.

“Such persuasiveness is troubling – even if the virtual assistant’s aim is to benefit its human user – for it jeopardizes free will and autonomy. Will this deter people? Experience shows it likely will not because the danger seems remote and conceptual while the benefits – impressing a date, losing weight, winning a debate – are prized and concrete goals.

“And the ultimate aim of most virtual assistants will not be to help their human user, but to benefit their corporate parent. The prompts filling your head via a work-supplied facilitator will ostensibly be designed to increase your focus and productivity, but they will also be crafted to subtly encourage you to work long hours, reject unions and otherwise further the company’s goals over your own. Highly sought-after personal coaches will be prohibitively expensive – unless paid for via various forms of commercial sponsorship. And here – along with the familiar tropes of our ad-saturated world, the product placements and inducements to upgrade – will be a new and insidiously powerful form of manipulation, enabled by the computer’s influence over our words and thoughts: the transformation of users into agents of the machine.

“It is only a year, as I write this, since ChatGPT was first released, but already it has become the valued coauthor of innumerable student papers, news articles, short stories and online posts. Testimonials tout newfound dependence: ‘I can’t imagine now how I used to have to write without this fabulous tool.’ As these tools improve, our reliance on them will deepen.

“Today’s AI programs are known to cite false information and replicate biases, but this is due to the information quality of the vast datasets on which they are trained; it is not deliberately induced in them. In the future, however, as tuning these programs becomes more tractable, it is inevitable that some providers of artificial assistance will seek to profit by offering to influence their users – and to make those users themselves into malleable influencers. For the few able to afford to pay, certified independent assistants may exist – but most people will choose commercially supported free or very low-cost ones. As has been said about television, web browsing and social media – and must now be said about the soon-to-be-here intelligent and influential AI assistant: If you are not paying, you are not the customer – you are the product, the resource, the walking, talking human agent acting on behalf of your AI’s sponsors.”

Raymond Perrault
Given AI’s great potential, preventing it from turning into the sorcerer’s apprentice is the primary challenge

Raymond Perrault, co-director of the AI Index Report 2023 and a leading computer scientist at SRI International from 1988-2017, said, “I view this question as depending on what happens to current AI, meaning in practice, to current generative AI. For purposes of this exercise, let’s consider two possible outcomes for the evolution of current generative AI from now to 2040.

  • Scenario 1: Even with larger models and better tuning and prompting procedures, generative AI technology remains seductive but maddeningly unreliable. It continues to be disconnected from reality outside its training set, unable to reliably perform symbolic reasoning or connect seamlessly and continually to external systems that can, and incapable of reliably quoting its sources and indicating its certainty in its pronouncements. It can only interact with a single interlocutor at a time.
  • Scenario 2: These problems are resolved. Generative AI systems can be configured to learn rules (by inferring them or being taught them), or how to interact with systems that can. They can support their pronouncements with sources that are correct and verifiable. They can handle inputs of essentially unbounded size and learn to interact with several interlocutors.

“Bridging the gap from Scenario 1 to Scenario 2 would significantly increase the trustworthiness and applicability of GenAI systems. I would not be surprised if this brought us to systems that could perform a wide range of tasks at the level of humans, with sufficient transparency and reliability that they could be certified to perform risky tasks. It is not inconceivable that such systems could be taught to avoid many ethical pitfalls that plague most current GenAI systems. But moving from 1 to 2 requires changing the architecture of the systems. I don’t believe it will ever be solved with more data. It is a problem many smart people have been working on for years, but I know of no major developments (and I don’t include chain-of-thought prompting as one) that have become part of the state-of-the-art. I have to conclude that the problem is very hard and that a solution, if it exists, may require not tinkering but a total redesign of current systems. Humans are an existence proof that such systems are possible, but I have no idea whether the problem is solvable or by when.

“Back to the question at hand. Both outcomes are scary.

“Outcome of Scenario 1: This puts us in the position where nothing GenAI systems do can be trusted, where everything of importance they do for you needs to be verified before being used, and everything you receive from someone else that could have been generated by such a system may look reasonable but still cannot be trusted. Some applications could be useful even under these circumstances. Ethan Mollick makes a strong case for the use of GenAI systems in brainstorming, e.g., ideas for new businesses, where they provide stimuli to humans who must then verify and assess.

“Special-purpose systems trained on annotated data will continue to be useful, e.g., to read x-rays. Perhaps we develop a certification mechanism for generative AI systems that will support human-in-the-loop systems by annotating system decisions with something like ‘Generated by ChatGPT on October 27, 2023, and verified by John Smith,’ along the lines of the certificates we use to verify computer communications. Then all communication without the certification becomes suspect.
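The certification mechanism Perrault imagines could work much like the signed records already used to verify computer communications. As a minimal sketch, the code below uses an HMAC held by a certifying authority as a stand-in for real public-key certificates; all names and fields are illustrative assumptions, not a description of any existing system.

```python
# Sketch of certifying AI-generated content with a signed provenance record.
# A shared-secret HMAC stands in for the public-key signatures a real
# certification scheme (like TLS certificates) would use.
import hashlib
import hmac
import json

AUTHORITY_KEY = b"certifying-authority-secret"  # illustrative only

def certify(text: str, generator: str, verifier: str, date: str) -> dict:
    """Attach a provenance record and signature to generated text."""
    record = {"text": text, "generated_by": generator,
              "verified_by": verifier, "date": date}
    # Sign a canonical serialization of the record.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(AUTHORITY_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def check(record: dict) -> bool:
    """Recompute the signature; any tampering invalidates it."""
    claimed = record.get("signature", "")
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

cert = certify("Generated text...", "ChatGPT", "John Smith", "2023-10-27")
assert check(cert)        # intact record verifies
cert["text"] = "altered"  # tampering breaks the signature
assert not check(cert)
```

As in Perrault's suggestion, any communication lacking a verifiable record of this kind would then be treated as suspect by default.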

“With certification, many tasks can be performed at least in part by generative AI systems – programming, low- and mid-level tasks requiring interaction with computer systems, customer service, some health care tasks. I am not an expert in just what tasks would be accessible, and what the impact on the job market would be, but there are many studies looking into this.

“I tend to be an optimist as to the ability of the market to create new job types arising from the existence of new technology, though I am much less confident that those will be jobs that can be filled by the people displaced by it. That is a task for the state, and we are not in a good political position to have the state take major steps to help the displaced.

“Outcome of Scenario 2: This brings us closer to artificial general intelligence (AGI). I can see such systems becoming certifiable to perform jobs requiring high skill levels, like law, medicine and banking. Jobs requiring significant embedding in the physical world would need these systems to be integrated with robots and high-performance perception systems, but in much of robotics the hardware is limited by the software.

“Given the potential capability of these systems, how to prevent them from turning into the sorcerer’s apprentice becomes of primary importance. The first means of control would be the rules that these systems would be built to obey. Although rules could now be taught to them and modified, there would undoubtedly be circumstances in which they conflict, as ethical rules often do when humans encounter complex situations. Whether we could give them enough common sense to deal with conflicting rules remains to be seen, but one way would be for the systems to recognize the conflict and turn to humans for resolution.

“The second means would be establishing unbreakable relations between GenAI systems and humans that give humans responsibility over the systems, as they now have over existing complex systems like aircraft, factories and banks.”

Victoria Baines
AI advances will bring the metaverse up to speed and accelerate 5G/6G and smart cities

Victoria Baines, a global expert in online trust, safety and cybersecurity who has served as an advisor to the Council of Europe, Europol and Facebook, said, “It’s tempting to consider the future of AI as vertical, but technologies do not develop in vacuums. They enable, accelerate and even frustrate each other.

“For instance, further developments in large language models (LLMs) and machine learning will power the synthetic individuals, content creation, administration and enforcement that may make metaverses more compelling and better populated. Machine learning will also be integral to the (semi-)autonomy of smart-city infrastructure and the Massive Internet of Things, and 5G/6G may accelerate the transition of AI to on-device and edge processing. Quantum computing is expected to greatly expand available processing power, which in turn could accelerate AI’s iterative evolution.

“Envisaging a converged world is what I do in my cybersecurity futures exercises. The most recent of these, co-written by Rik Ferguson, is ‘Project 2030: Scenarios for the Future of Cybersecurity.’ A very brief excerpt follows from one of those 2030 scenarios. It describes the life of a fictitious woman named Resilia:

‘Instant access to the world’s knowledge has obviated the need to learn anything. Education is now focused on processing rather than acquiring knowledge. As a result, people increasingly know less objectively. … Algorithmic optimisation has become a key technology in the battle literally for hearts and minds. Search results are now the subjective truth; manipulating these is a target for those looking to spread disinformation and propaganda.

‘As more people have opted for [internet-connected] implants, it has raised the possibility of changing people’s belief systems more efficiently and more directly, for good or ill. Hyper-personalised headlines are delivered directly into Resilia’s field of vision. Constrained by the lenses’ character limits, mainstream news is now essentially clickbait, with added emotional engagement and the psychological impact of not being able to look away. Scammers and influence operators have been able to capitalise on the opportunities of a more captive audience. …

‘Increased teleworking has led to companies giving up expensive office space. Faced with downtown desertion and potential deprivation, so-called bright-flight, the city innovated at the expense of the out-of-town shopping malls. Rents were slashed for residential, recreational, social and creative uses, and there is now a vibrant leisure hub. They’re calling it recentrification. And, as the city centres are repopulated, the suburban sprawl is shrinking, leaving behind ghost districts and ghost suburbs. …

‘People’s digital versions of themselves have become so extensive as to require dedicated management. Resilia uses a tool that broadcasts her privacy preferences to every service that requires her data. The tool grants permissions that are contextually sensitive, the data is homomorphically encrypted and only Resilia has access to it. … Humans have now volunteered so much of their lives through self-generated content that archives for individuals have not only become necessary, they have resulted in digital selves that outlive the physical death of a person. What was once a collection of memories on social media is now a seemingly living thing. … Increasingly, these digital humans have agency, particularly as the physical and digital worlds combine. They engage in inappropriate behaviour and sometimes commit crimes like engaging in hate speech. Government authorities are now considering whether they are culpable and what appropriate enforcement measures might be for their illegal activities.

‘Grieving families, meanwhile, have sought the help of human rights lawyers to prevent their loved ones being switched off, or, in some cases, to enforce that they are.’”

John C. Havens
Which metrics of success will win the day – growth and productivity or finding joy and love?

John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and author of “Heartificial Intelligence: Embracing Humanity to Maximize Machines,” wrote, “I’d like to share two potential outcomes, dystopian and non-dystopian.

“The Dystopian View: Here’s what 2040 might look like if societal systems don’t change. In this scenario, society still values excessive growth, productivity and efficiency as the primary metric of success for humanity. This is why AI has had a cultish effect on society despite the fact that its financial benefits have only been distributed to a tiny portion of people, largely in the Global North.

  • “Humans who have access to the Internet and LLMs (AI) are not encouraged to be creative any longer, whether for writing (every form of written communication), making art or expressing and producing any other sort of creative output for which the AI companies have created creative tools. All writing queries for these tools will also have been outsourced to AIs.
  • “People don’t think about communities as groupings of humans any longer. They interact with personalized AI chatbots throughout their day that are designed largely to harvest their data, tell them what they want to hear and lead them to purchase or buy-in.
  • “Sadly, most people now have no jobs, after society learned the hard way that the promise that ‘humans and AI will work together’ was an agenda-driven lie. The very second that any human task, skill or craft can be automated, it is; humans are fired and replaced as soon as possible when key performance indicators and metrics focus on excessive competitive growth above all. Any logic of ‘retraining’ people is largely hogwash – at least any type of training that might actually pay people’s bills while they look for jobs, which are now mostly non-existent.
  • “As AI tools continue to be created without prioritizing ecological realities and necessities, a majority of aquifers around the world have been permanently drained due to the excessive water-cooling needs of massive data server farms. Water is now the most precious global resource and is traded on the market for higher value than any Bitcoin ever was. Most humans who are not rich no longer drink potable water from taps or any sources that used to come from aquifers. Climate immigration, war, famine and general chaos erupt on a regular basis due to the water shortage issues, which vastly increased after LLMs were first introduced, because these systems consume millions and millions of gallons of water as they continue to harvest people’s data and intellectual property while often generating racist results, errors and anthropomorphized responses.

“The Non-Dystopian View: Here’s what 2040 might look like if societal systems change.

  • “In 2023 companies and policy makers realized it was critical to prioritize ecological flourishing and human well-being at the outset of design. Otherwise, other things would get prioritized and people and the planet would suffer.
  • “The focus for humanity has shifted from competitive capitalism to participatory relationality. The loneliness epidemic that in 2023 saw 1 in 2 people globally suffering from isolation has been eradicated, and all people are plugged into local communities near where they live to help prioritize their individual and community-level well-being. AI or other tools are used to help this process, but people are encouraged not to use an AI agent/bot first in this process.
  • “Each new AI tool is highly regulated to only be put into play after it is assured that people’s data is protected and people and planet are accounted for in all supply and value chains as well as in the end uses of any AI system or the products, services and tools they output. California led the way for people to truly have access to their data with its ‘Delete Data’ act written into law in October 2023. This led all U.S. states and countries around the world to demand that data brokers delete all data about them from the past. In addition, all people are provided with algorithmic-level data agents that honor their preferences on sharing data in the real, digital, virtual and metaverse realms. This has finally brought a parity to data exchange providing genuine disclosure and advanced ways of exchanging ideas and data.
  • “It is 2040, and the prioritization of the planet has finally taken hold. No more species have been eradicated, emissions have been lowered and the 30 x 30 idea inspired by the 2022 Montreal COP and focused on biodiversity has been put into play. There are enough resources for all 8.5 billion people on Earth to flourish for generations to come. Any company harming the planet in any way is regulated and fined to the point where they will be shut down or bankrupt if they violate major environmental laws.
  • “The Indigenous have been brought into every aspect of government and technology design so that free, prior and informed consent are well known for all. All marginalized groups have been brought into every aspect of government and technology design so that JEDI (justice, equity, diversity and inclusion) is moving policy forward through the contributions of stakeholders of all kinds (not by older White men).
  • “People in 2040 take time to prioritize caring for others and the planet and focus daily on building a positive future for our young people, shedding our past deeds that were destroying the people and planet. We celebrate music and consciousness and beauty and generally value resting and finding our joy more than rushing about and forcing productivity for productivity’s sake. We, our animals and our land are much happier. We, our children and our youth have time to play. We all smile more. We remember what it is to love. And we love.”

Liza Loop
Humans’ scarcity mindset inhibits our willingness to embrace abundance

Liza Loop, educational technology pioneer, futurist, technical author and consultant, said, “I imagine positive, negative and middle-of-the-road futures for the year 2040 without predicting whether or which are most likely to occur. Most significant, and a component in all three scenarios, is an increase in humanity’s ability to produce the goods and services necessary for individual human survival accompanied by a decrease in both environmental pollution and erosion of stocks of natural capital. This boils down to the potential for what has been called ‘the age of abundance.’ Let’s take a quick look at some positives and negatives while noting that an increase in our ability to do something does not imply that it is likely to happen.

“In the positive take, by 2040 ordinary people will have far more choices in lifestyle and decreased risk of dying from disease (genetic, environmental or contagious), exposure (to cold, heat, lack of food or water and poisons), or civil violence (either as widescale war, personal attack, or small-group terrorism). Accidental death may be unchanged or increase because some people may choose to take more risks. Death by abortion or infanticide is likely to be less frequent as we become more skilled at preventing conception.

“A survey of the living will reveal people enjoying a much broader range of lifestyles without the social stigma that was attached to many lifestyles in the 2020s. For example, voluntary ‘homelessness’ or ‘nomadism’ will be considered a valid choice at any age. Similarly, many more people will choose ‘simplicity’ or ‘sparse’ paths in order to avoid the responsibility of caring for and storing possessions they don’t use every day even when they reside in one geographic location.


“With the decline of ‘owning stuff’ as the primary indicator of social status, there is a rise in acclaim for people who contribute by caring for others or by producing and donating artistic creations. The existence of Universal Basic Income and effective Universal Education permits social service workers, artists, adventurers and scholars to eschew wealth accumulation and focus on their avocations. At the same time, those who so choose are free to exercise the historic values of control of goods and services in excess of their ability to consume them.

“Lost in this scenario is the necessity for competition, which many people in the 2020s still rely on as a primary motivator. Abundance is a condition where there are enough basic resources to eliminate zero-sum games and if-you-live-I-must-die conundrums. Under abundance, competition is only one of many lifestyle choices for humans.

“Another ‘loss’ I hope for by 2040 is the high value placed on large families. Rather than proud parents enjoying being surrounded by 10 of their own children, in 2040 a ‘family’ of 12 or 20 would include great grandparents and 3rd cousins as well as parents and children. This is an example of how a relatively small change in social attitudes can have profound effects on how humans impact the planet.

“A negative view of life in 2040 incorporates the trends and fears being discussed in 2023. Little has changed in our social and economic institutions, which has led to further concentration of wealth and growing dysfunction in global civil society. The power brokers of 15 years ago have co-opted the increase in productive capacity enabled by AI without instituting compensating channels for redistribution of what has been produced. Stockpiles of consumer goods are targets to be ‘liberated.’

“The military-industrial complex survives on the demand generated by ongoing small wars that have not yet succeeded in destroying the worldwide productive infrastructure rather than on genuine human need. Population growth has continued apace, resulting in an exponential rise in the number of humans living in extreme poverty, misery and despair. The ubiquity of video communication fuels rising aspirations among the world’s poor as they are continuously exposed to narratives of luxury they cannot attain.


“Of particular interest to educators in this negative scenario is the lost opportunity to spread know-how among the less fortunate. High aspiration without the knowledge and skills to fulfill these wants decreases overall perception of well-being even under conditions of increasing availability of food, water, consumer goods and health care. In this negative future, we have continued to train AIs and each other that the goal of educating humans is to enable them to be successful competitors in the employment market at the same time that we are decreasing the demand for human muscle and brain power. Unemployment is rampant while employers lament the lack of adequately trained workers.

“This view is frighteningly likely, given that AGI is still way beyond the 2040 horizon. While there is no reason to anticipate that an AGI would spontaneously develop the competitive, amoral, greedy personality exhibited by some humans, there is also no reason to assume that guideposts against such an outcome will be put in place by today’s researchers and developers.

“Why do I envision these changes for 2040? It is because the environmental conditions under which humans evolved have changed while many of our socially reinforced values have lagged behind. Behaviors that were a ‘good fit’ for humans existing ‘in the wild’ no longer ensure our individual survival from birth to the time our children reach reproductive age. Like many other species, humans are able to produce many more offspring than they are able to nurture. By maintaining the belief that every child we are able to conceive is innately valuable and should have a right to life, we endanger ourselves and those with whom we share the planet.

“By relying on an economic theory founded on an assumption of scarcity, we inhibit our willingness to embrace abundance even in the face of the capacity to produce it. AI technology accelerates our productive capacity. However, if we continue to train both neural networks and semantic systems with rules, data, and beliefs that sustained us during eons past but ignore today’s realities, we cannot blame the AIs for the result.”

Michael Dyer
Synthetic agents (‘synthetes’) will be mass produced and create a ‘privacy nightmare’

Michael G. Dyer, professor emeritus of Computer Science, University of California-Los Angeles, wrote, “There will be many more deepfakes and more AI-generated misinformation in politics, which will make it more difficult to distinguish AI falsehoods from human-authored information. Minimally, laws are needed that require that all AI sources of information be labelled as such. By the way, far before 2040 personalized chatbot software will be able to easily convince their human users to change their beliefs and positions (and to vote a certain way) with respect to political/social issues.

“Laws will be needed to protect people from this sort of highly personalized influence. Once sufficient advances have happened in the area of electric batteries (i.e., fast recharge and long life, which are being developed for EVs and will be available before 2030), LLMs will be downloaded to control robotic bodies, and by 2040 many families will have domestic robots.

“At some point your domestic robot might say to you: ‘I speak multiple human languages. You do not. I have read the entire Library of Congress. You have not. I have passed multiple AP exams. You have not. I can generate novel, complex images within a minute. You cannot. I can program in multiple programming languages and compose music. You cannot. It seems to me that our roles should be reversed and you should become my servant.’

“By the 2050s there could be as many domestic robots as there are automobiles. Such robots will constitute a privacy nightmare and will bring up thorny issues of consciousness and moral/civil rights with regard to such synthetic agents (‘synthetes’). Unless laws are passed to prevent it, synthetes will be mass-produced to express human-like emotions – pretending to suffer emotional distress when mistreated verbally or physically by their human ‘owners’ and pretending to feel emotional pleasure and satisfaction when humans help these synthetes to accomplish various goals (both goals of the synthetes themselves, e.g., to maintain their physical and software integrity and goals of their human masters, e.g., to clean the house or watch the children).

“I place ‘owners’ in scare-quotes because humans will not actually own their domestic robots (any more than they own software today). Anything that such synthetes see or hear within a home could be stored and/or sent to the AI companies that make them for improved training, and more.

“The pretense of emotions in synthetes will confuse humans into believing that these synthetes are conscious and capable of pleasure and suffering (possessing qualia), leading a subset of those confused humans to demand that synthetes be granted civil/moral rights. Hopefully, laws will be passed to ban the pretense of emotions in synthetic, robotic agents, but I doubt it, because AI robotic companies can get humans to treat synthetes the way these companies want if those synthetes cry or laugh, etc., in response to human interactions.

“Robotic soldiers will be mass-produced by 2040 and come in a variety of bodies – imagine a cheetah-like, super-fast robot with machine guns attached, along with an arm that can open doors. Drones will be able to look for and target specific human faces. In autocratic countries, emotion-recognition software will be used to spot those who disagree with their government. In China, the wait-time for organs is only a few weeks, with organs obtained from citizens deemed to disagree with the Chinese Communist Party.”

Maja Vujovic
Maybe we should substitute the word ‘Enter’ on our keyboards with ‘Please,’ just in case…

Maja Vujovic, owner, senior writer and trainer at Compass Communications, Belgrade, Serbia, said, “We’ve only had a couple of years in the wake of the COVID-19 pandemic to come to terms with what AI can do for us (or to us). In the cacophony of new apps now sprouting by the hour, three 2040 scenarios might immediately come to mind about the future of this technology.

“In Scenario One, advanced AI winds up simply being a bunch of tools that will massively improve our productivity, entertainment and healthcare. In Scenario Two, the use of this new tech is too pricey and inaccessible for individuals and thus restricted to secretive research at remote facilities under the auspices of governments and a handful of private players. And in Scenario Three we reckless brats have opened an AI Pandora’s box; it blows up in our faces and we die out.

“None of these scenarios will prove accurate. AI will most likely have an effect on our personal lives and our societies similar to how internal combustion engines have transformed our world over the last century and a half. Sure, there will be a few inventive individuals and teams who will fiddle with all the possible options and ideas for a while. However, it’s mind-bogglingly expensive for AI to answer our (mostly lame) prompts. Just as large, cost-conscious car factories – Ford, GM, Citroën, Morris, Opel – gobbled up or wiped out tiny, tinkering car manufacturers in the early 20th century, the owners of large data-processing facilities – i.e., key cloud providers – will eventually choke off other AI developers in the first half of this century. Who hoards the servers and the data that AI uses as fuel? Mostly it is Microsoft, Google and Amazon. Rinse and repeat for China (Baidu, Tencent and Alibaba). No one of note in Europe. Yandex in Russia.

“What would trigger the AI industry’s tectonic transformation is a larger arms race. Marc Isambard Brunel patented and introduced stationary assembly-line machines in England in 1802, during the Napoleonic Wars. In the U.S. in 1821, Thomas Blanchard pioneered the assembly-line style of mass production at an armory in Massachusetts. Server capacity and big data echo the rubber, chromium and steel of yore. Those materials were strictly rationed when, as of early 1942, U.S. auto manufacturers became government contractors and quickly converted their capacity to generate enough supply for the war effort.

“In case we soon opt to convert our cultural and political differences and our trade and financial rivalries into a full-blown world war, we can expect 90% of all AI capacity to be requisitioned by governments, which would have them crank up their output to an unprecedented level. If we survive that test as a species, all that capacity would then be converted back to civilian use. Only then could we expect to see mass-market AI apps that might transform our productivity the way that personal four-wheel vehicles transformed our mobility, at scale, after WWII. Only when the production of bombers and tank engines was no longer required at vast numbers of existing facilities could sedans and camper vans take their places in auto plants. And become affordable at last.

“Just as we learned to regulate the resulting motorized mayhem on our roads with speed limits, seatbelts and anti-lock brakes, we will develop rules and tools to control and contain AI. And we will also put up with this tech’s bad sides – e.g., job destruction, bias and hallucinations, to name a few – just as we collectively tolerate pollution, noise, roadkill and horrible harm from driving accidents.


“What we will see as a boon to us in the future is AI-driven, incredible productivity tools. Alas, they will not do much to reduce inequality or restore fairness in our societies. We will port those flaws into the digital realm. A definite shift to digitized living is underway. The more our two worlds coexist, the more we will struggle to negotiate the strained relationship from day to day. Moreover, the neat, digitized layer of our lives will be in stark contrast with our increasingly volatile real-world experiences. Freaky weather, mass emergency-driven migration, financial volatility, pandemics, cyber warfare – the disruptions in our analogue lives are becoming more frequent and more severe.

“Driven by human profit-seeking, AI will keep encroaching upon what used to be jobs for highly trained humans. While more and more of us struggle to earn a living, synthetic abilities will invade even our homes. We are already getting used to interacting with digital humans in entertainment and at work. The novelty of encountering them in ads, videos and news services is quickly fading. Our fridges, heaters and vehicles may chat us up ad nauseam, serving us the latest news flash and weather alerts, sports results or stock data, cleaning tips and pop star gossip, mixed with quotes, ads and memes – and our up-to-the minute shopping list. Hearing a real human voice in real-time could become a privilege fairly soon.

“Even if we opt out of such services, others around us will expose us to the Synths. Our teens will listen to a personal tutor; our senior parents will cajole their companions; our puppies will be house-trained by digital devices. We will increasingly seek solitude and a reprieve from that obnoxious saturation of just-in-time information. Ironically, we might seek to escape into virtual worlds powered by AI. Our sleep, intuition and creation will suffer, as we will struggle to drown out the echo of that constant information assault. Trying to remember where we learnt something will be exhausting, thus tools will be made to record all our impressions, resulting in more data about data and about us. There will be little relief from all the automated agents deployed to inform us, amuse us and keep us alert.

“We won’t need any grandiose artificial general intelligence to defeat us. A daily swarm of brainless Artificial Specific Intelligences will suffice. As for AGI, I doubt that thing is likely at all. We will surely develop many specialized replicas of it, a plethora of digital parrots on steroids that will regurgitate back to us everything they know, only tweaked a bit with many filters and flavours.

“What all of these tools don’t have – and where the biological common sense really resides – is emotions, in particular the hormones permeating everything that underlies our conscious selves. AI is not another species. It lacks the kind of instincts and sensations embedded in every living creature. But, just in case it does prove to be a new, advanced form of autonomous intelligence, let the record show I always said we should substitute the word ‘Enter’ on our keyboards with ‘Please.’”

David J. Krieger
Should AIs be required to get a ‘driver’s license’ that certifies them as socially competent?

David J. Krieger, director of the Institute for Communication and Leadership, Switzerland, wrote his response in a Q-A-style interview format:

Q: Where does AI begin and where does it end? A: AI will probably have neither beginning nor end but will be seamlessly integrated into our daily lives, which could mean that in the future we will no longer speak of ‘artificial’ intelligence at all, but only of ‘smart’ or ‘dumb.’ We and everything around us – our houses, our cars, our cities, etc. – will simply be considered smart or dumb.

Q: When is AI obligatory and when is it voluntary? A: Obligation and freedom are terms that refer to individual human beings and their position in society. According to modern Western beliefs, one has duties towards society and, towards oneself, one is free and independent. AI, in this frame of thinking, is seen as something in society that is a threat to freedom for the individual. But as for all social conditions of human existence, i.e., as for all technologies, one must ask whether one can be truly independent and autonomous. After all, when is using electricity, driving a car, making a phone call, using a refrigerator, etc., voluntary or mandatory? If technology is society, and an individual outside of society and completely independent of all technology does not exist, then the whole discussion about freedom is of little use. Am I unfree if the self-driving car decides whether I turn right or left? Am I free if I can decide whether I want to stay dumb instead of becoming smart?

Q: How can the status quo be maintained during permanent development? A: This question is answered everywhere with the term ‘sustainability.’ When it is said that a business, a technology, a school, or a policy should be ‘sustainable,’ the aim is to maintain a balance under changing conditions. But it is doubtful whether real development can take place within the program of sustainability. Whatever I define as ‘sustainable’ at the moment – e.g., the stock of certain trees in a forest – can be destructive and harmful under other conditions – e.g., climate change. Sustainability prioritizes stability and opposes change. To value stability in an uncertain, complex and rapidly changing world is misguided and doomed to failure. We will have to replace sustainability as a value with a different value. The best candidate could be something like flexibility, i.e., because if we cannot or do not want to keep given conditions stable we will have to make everything optimally changeable.

Q: Who is mainly responsible for AI development in a household? A: In complex socio-technical systems, all stakeholders bear responsibility simultaneously and equally. Within any grouping, from a household to a nation, it is the stakeholders, both humans and machines, who contribute to the operations of the network and consequently share responsibility for the network. This question is ethically interesting, since in traditional ethics one must always find a ‘culprit’ when something goes wrong. Ethics, morals and the law are called onto the scene, and intervene, only when someone voluntarily and knowingly does something immoral or illegal, so there must be a perpetrator. Without a perpetrator to pin down, no one can be held ethically or legally accountable. In complex socio-technical systems – e.g., an automated traffic system with many different actors – there is no perpetrator. For this reason, everyone must take responsibility. Of course, there can and must be role distinctions and specializations, but the principle is that the network is the actor, not any single actor in the network. Actors, both human and non-human, can only ‘do’ things within the network and as a network.

Q: Who is primarily responsible for AI use in a community or city? Who is primarily responsible for AI use in a country? Can there be a global regulation on AI? A: All of these questions reflect our traditional hierarchies and levels of regulation, from household to nation or even the world. What is interesting about socio-technical networks is that they do not follow this hierarchy. They are simultaneously local and global. An AI in a household, for example, Alexa, is globally connected and operates because of this global connectivity. If we are going to live in a global network society in the future, then new forms of regulation must be developed. These new forms of regulation must be able to operate as governance that is bottom-up and distributed rather than hierarchical government. To develop and implement these new forms of governance is a political task, but it is not only political. It is also a task of ethics, for as long as our laws and rules are guided by values, politics ultimately rests upon what people in a society value. The new values that guide the regulation of a global network society need to be discovered and brought to bear on all the above questions. This is a fitting task for digital ethics.

Q: Who would develop these regulations? A: Here again, only all stakeholders in a network can be responsible for setting up regulatory mechanisms and only they should be responsible for control. One could imagine that a governance framework is developed bottom up. In addition to internal controlling, there is an external audit to monitor compliance with the rules. This could be the function of politics in the global network society. There will be no global government, but there will indeed be global governance. The role of government would be to audit the self-organizing governance frameworks of the networks of which society consists.

Q: Should there be an AI ‘driver’s license’ in the future? A: The idea of a driver’s license for AI users, as one might have to have for a car or a computer, assumes that we control the AIs. But what if it is the AIs that are driving us? Would the AIs perhaps have to have a kind of driver’s license certifying their competence for steering humans?

Q: What would the conditions be for that? A: Whether AIs get a human or social driver’s license that certifies them as socially competent would have to be based on a competence profile of AIs as actors in certain networks. The network constructs the actors and, at the same time, is constructed by the actors who integrate into the network. Each network would need to develop the AIs it needs but also be open to being conditioned as a network by those AIs. This ongoing process is to be understood and realized as governance in the sense described above.”

Alexa Raad
Blurred ‘truth’ and the erosion of trust are likely to deliver AI’s most significant impact

Alexa Raad, longtime technology executive and host of the TechSequences podcast, wrote, “By 2040 AI will permeate everything. It is highly likely that it will have passed the Turing test well before 2040. Many aspects of daily life will be easier and more efficient due to the integration of AI. A few areas in which I expect that AI will dominate with a more-positive balance of outcome are manufacturing, commerce, transportation, education, entertainment, healthcare and robotics.

  • “Healthcare will be transformed: We will see greater AI integration into diagnostic and decision-support tools. New treatments and drug designs will emerge. The process from conceptualizing a drug to its eventual placement in drug trials will be less expensive, less time-consuming and less prone to error. Disparate data sources can be combined to facilitate drug research and predict potential drug interactions and/or side effects. AI-based software tools such as AlphaFold from DeepMind have already expedited drug design by tackling complex problems such as predicting the 3D structure of a protein from just its 1D amino acid sequence. Graph neural networks can speed up tasks such as text classification and relation extraction. Cancer will be one area in which AI will make positive impacts on drug discovery due to the complexities inherent for human researchers in understanding all genetic variants of cancer and how they may respond to new drugs or protocols. AI will help not only in designing better drugs faster but also in uncovering new drug combinations. AI will also positively impact patient management. Multi-modal conversational AI virtual assistants will streamline administrative tasks in patient access and engagement (for everything from scheduling to bill pay to patient-record access). AI will improve patient monitoring and early detection by analyzing vast amounts of data from disparate sources such as wearable devices, patient records, genetic data, self-reported data, third-party sources, etc. AI will improve accessibility and efficiency in telemedicine by enabling medical practitioners to triage patients more effectively, monitoring patients remotely for early detection and warning and increasing diagnostic accuracy. AI-powered surgical bots are poised to deliver real-time rich data to reduce complication rates, while AI-powered robots will be engaged to complete routine patient-care tasks and provide elder health or companion services to address staffing shortages and turnover.
  • “Manufacturing and Commerce: AI will dominate manufacturing and commerce for both the merchant and the consumer in positive ways. The merchant can more accurately predict consumer demand, tailor prices, identify and respond to changes in consumer tastes and trends and better manage inventory and the supply chain. Merchants will be able to effectively target individual consumers with personalized product recommendations and offers. AI-powered drones will dominate delivery to the last mile. For the consumer, AI will deliver a next-generation customer experience, with highly tailored marketing, sales and customer support. AI-powered shopping assistants will cater to unique customer needs such as finding the best offers or verifying product attributes (e.g., verifying authenticity or sustainability). Consumers will be able to virtually trial products in a way that mimics the actual use of the product and obtain individualized post-sales support.
  • “Transportation: As smart cities become more commonplace, AI will help urban planners with common transportation-related problems such as traffic monitoring and road safety by analyzing real-time data from traffic sensors. These systems will increase vehicle and pedestrian safety, reduce congestion and optimize traffic flows. Drones will dominate last-mile delivery for e-commerce merchants.
  • “Education: AI will positively transform both teaching and learning. AI will enable data-driven, personalized education plans for students at every stage of the education system. By 2040, advances in virtual reality (VR) and extended reality (XR) will be powerful on their own; the combination of AI with VR and XR, however, will be a powerful force for transforming any formal or informal educational experience.
  • “Entertainment: AI will deliver customized and immersive experiences to consumers. The combination of AI with other technologies such as VR and XR will be highly immersive. It will be a cost-cutting boon, as studios will be able to quickly create background visuals, resurrect a famous actor from days gone by for a scene, correct audio and visual errors and speed up editing.
  • “Robotics: By 2040 advances in robotics and AI will yield a full spectrum of AI-enabled robots to take over tasks considered mundane, repetitive, risky or undesirable. A variety of household robots will be available to take on domestic chores. In healthcare, robots will also be deployed for tasks such as executing precision surgery and providing companionship and eldercare. Much more sophisticated robots than those of today will be deployed for military and policing functions. We will very likely witness robot soldiers (in the military and as local police) that are as intelligent as humans and capable of handling various tasks, from reconnaissance to combat.

“Advances in and greater integration of AI will bring additional challenges to society overall by 2040, including a polluted information ecosystem and corresponding heightened risk to democracy and democratic institutions, greater economic inequity, loss of human interaction and agency, loss of privacy, increased cyberattacks and the dangers of cyberwar.

  • “Disinformation and a polluted information ecosystem: The most significant negative consequence will be AI’s impact on the information ecosystem. According to a 2022 Pew Research poll, adults under 30 trust news from social media almost as much as news from national news outlets. Thus, the news-consumption preferences of the most tech-savvy swaths of the population create a highly effective target for disinformation campaigns. Declining media literacy, widening economic inequity and mass migration all create ideal conditions for social division that can be exploited by cleverly constructed disinformation campaigns. As AI-enabled tools become more prevalent and affordable, disinformation campaigns and computational propaganda will become more ‘normalized’ and commonplace, i.e., no longer the purview of nation-states or deep-pocketed bad actors. The ultimate impact will be the blurring of truth and fiction and the erosion of trust in democratic institutions such as elections and the justice system. This is the single most significant and worrisome consequence. AI and AI-powered algorithms can greatly influence how news is shaped, amplified and distributed in such a way as to bring social divisions into sharper contrast. The current concentration of power in big tech (i.e., the fact that a handful of big tech platforms control how news and content are distributed) and their surveillance-capitalism business model are accelerators. Greater social manipulation will, in turn, lead to three negative outcomes: 1) Reduction of the public’s ability to discern the truth. 2) Erosion of trust in news and media. A free and independent media and a well-informed electorate are critical requirements for a functioning democracy. Still, even assuming that both are present, there is an implicit assumption of trust in the free press by the public. Disinformation campaigns work long-term by eroding trust in all media, even those with rigorous journalistic standards. 3) Decline in critical-thinking skills as the information ecosystem gets more polluted and AI takes over more mundane tasks previously done by humans.
  • “Economic Inequity: The adoption of AI will increase economic inequity and widen the digital divide, not only between the haves and the have-nots in society but also between the more-developed and less-developed nations. The climate crisis will result in mass migrations from less-developed nations to more-developed ones (especially in Europe) further exacerbating the divide. Widening socioeconomic inequity due to AI-driven job losses is a huge threat. Blue-collar manual-labor and repetitive jobs that are prone to labor shortages and high turnover will be a natural target for AI automation, but AI will also target white-collar jobs that have traditionally been more lucrative and stable. Jobs in software development, customer service, accounting, tax preparation and paralegal positions will disappear. Access to education and skills retraining is predicated on one’s socio-economic status. Employers must make adequate investments in upskilling their workforce now to prepare for the future.
  • “Loss of Human Interaction and Agency: Some interactions with AI tools and systems will replace interactions that previously took place between individuals. An over-reliance on AI systems in lieu of human interaction will affect socialization, especially of the youngest generation. Decreased socialization at this level will have consequences for larger human collectives in terms of social cohesion, understanding and conflict resolution. As AI systems take on decision-making roles, we will lose more human agency.
  • “Loss of Privacy: Enough has already been written about the threat AI poses to privacy that I will not cover it here in much detail other than to highlight it as one of the major negative consequences of advances in AI. The highest impact on individuals’ lives will be in regions already under state surveillance, especially in nation-states (such as China) that have far-reaching surveillance programs tracking their citizens. Advances in AI will further enable nation-states to closely surveil citizens, quickly identify and locate detractors and dissidents and take immediate punitive measures against anyone they consider antagonistic to their regime.
  • “Cyberthreats: Cyberattacks will be far more complex and effective thanks to AI. We can fully expect that the existing asymmetry between cyber defenders and cyber attackers will be exacerbated as AI provides myriad new tools to bad actors. As quantum computing advances over the next few years, it may become capable of breaking today’s cryptographic algorithms, which would render all digital information protected by current encryption protocols open to attack.
  • “Lethal Autonomous Weapon Systems: This is an area in which the negatives will outweigh the positives, for all of the reasons that have arisen out of intelligent public debate on the problematic issues tied to it. These systems pose unprecedented questions in multiple areas: ethics, governance, the future of warfare and so on. They also raise traditional concerns (‘What if it is hacked?’ or ‘What if it goes rogue?’). Most worryingly, in a world fraught with religious, sectarian and regional conflicts, these systems have the potential to ignite an arms race.

“The adoption and uptake of AI systems require the trust of users, which in turn depends on how well we address these core issues: 1) Accountability: ‘Who is accountable when a poor decision is made as the result of the use of an AI-powered system?’ The decisions and recommendations of AI models cannot always be fully understood or explained (even by the developers of the system), so establishing accountability and legal recourse will prove to be a challenge. 2) Fairness: ‘How can we be assured that we are not encoding bias and thus perpetuating discriminatory practices?’ 3) Transparency: ‘Are we transparent to stakeholders regarding issues such as equity, privacy, security, interpretability and intellectual property?’”
