A number of the experts in this canvassing shared insights about the most-likely mix of both gains and losses in the digital future that lies between now and 2040. Several expressed their thoughts on the steps that should be taken more boldly today to mitigate the future risks of accelerating technological change in order to maximize positive outcomes. One observed that many experts say AI may advance to human-level general intelligence – AGI – in three to 15 years, and that would create a far different future than the one that might be projected when humanity is equipped only with the type of ANI (artificial narrow intelligence) that is rapidly evolving in today’s generative AI systems.


Klaus Bruhn Jensen
Hope for progress and hard work will determine the future, and the sweep of history shows humans can prevail

Klaus Bruhn Jensen, professor of communications at the University of Copenhagen and author of ‘The People’s Internet,’ predicted, “By 2040, AI will have changed individuals’ daily lives on a scale similar to the changes afforded by the internet between the 1990s and the 2020s in terms of the availability of information, the access to communicative interactions and the capability of acting at a distance, whether in the pursuit of personal interests and relations or in economic and other social transactions.

“By 2040, AI will have been embedded in the economic, political and cultural systems of local and global society, on a scale similar to the digitalization of social institutions between the 1990s and the 2020s. In both cases – digitalization and AI as the latest manifestation of digital computing technologies – it is essential that publics and policymakers do not avoid, but address and embrace the question of determination: What determines the structures and trajectories of individual lives and social and cultural systems?

“With history as his and our guide, the sociologist and cultural theorist Stuart Hall suggested the answer: ‘Technologies such as digital computing and AI constitute determinations not in the final instance, but in the first instance – like economic markets, technologies stake out a field of the possible and the impossible and within this field it is human agency, individual and concerted through collective deliberation and decision-making, that embeds technologies into social life. Where classic economic and technological determinism proposes to follow the money, or to follow the machines, we should instead follow the infrastructures – the practical ways in which the undeniable potential of technologies for individual and collective flourishing come to fruition, or not.’

“Following and making the most of technologies as infrastructures requires two things: hope and hard work. Hope represents the denial that AI and other digital technologies determine, for instance, ever-increasing surveillance or exploitation of their ordinary users. Hard work, in the face of advances as well as setbacks, will be required of researchers along with publics and policymakers. Progress is possible. The hope for progress has fueled many local and global interventions and developments that, despite persisting inequality and misery, have made the world a better place for human existence than it was two millennia or even just two centuries ago. It is in this perspective that we must observe AI, its potentials and challenges.

“It is not for us to predict what will have been lost and gained, respectively, by 2040. Until 2040, and after, we must undertake hard work to have AI and subsequent technologies serve humanity and the good life. Throughout natural evolution and human history, we have been propelled by hope, by the imagined and anticipated actualization of manifest potentials for what the ancients thought of as human flourishing (eudaimonia). As we move toward 2040, we must be constantly mindful of two other deadlines regarding a sustainable human existence on Earth: 2030 and 2050. One safe prediction is that humanity needs to make a green transition to make it, comfortably or at all, beyond 2040 and 2050. AI is the least of our problems; it is a potential solution – one more instrument that may promote human survival and flourishing in the centuries and millennia ahead.”

Seth Herd
Outside of job losses, narrow AI will be mostly beneficial; this may change when AGI arrives

Seth Herd, a futurist and computational cognitive neuroscience researcher now working on human-AI alignment and lead author of “Goal Changes in Intelligent Agents,” predicted, “AI timelines are very difficult to predict. My own prediction, as an expert in neuroscience, neural networks and cognitive architectures, is that we can expect to advance to having self-improving artificial general intelligence (AGI) in three to 15 years. Progress would have to be astonishingly slow for AGI to arrive only as late as 2040.

“I worry much less about the impact of narrow (limited) AI on society than I do about that of AGI. However, I’m going to answer this question as if we will not have real, self-aware, agentic, self-improving AGI by 2040 (though I do expect we will).

“The impacts of narrow AI will be enormously net beneficial. Concerns about deepfakes and algorithmic bias are relatively easy to address, and I have confidence that they will be successfully mitigated. The increases to productivity and well-being due to narrow AI will be enormous, outside of the enormous exception of the impact of its displacement of workers. Narrow AI will serve as a cheap personal assistant with expertise in psychology, finance, job strategy and just about everything else. This will be enormously useful to everyone, particularly in regard to the emergence of abundant expert psychological counseling. However, narrow AI will eliminate an immense number of jobs by 2040. It is difficult for me to see how humanity will weather this challenge, since our economic models are centered on a job for most people as their source of livelihood and source of meaning.

“If this job replacement happens slowly enough, the wealth increases from productivity may be large enough to provide a minimum basic income type of support for much of the world’s population. This should be relatively easy in the U.S., but poor countries are unlikely to benefit enough to provide this support. However, with a lower fraction of their populations performing knowledge work, their economies will be relatively less affected.”

Kunle Olorundare
If people don’t harmonize on AI, the future will not bring out the best in humanity

Kunle Olorundare, president of the Nigeria Chapter of the Internet Society, predicted, “The rapid proliferation of AI is likely to create significant changes in individuals’ daily lives and in society’s social, economic and political systems. By 2040, we will see AI-powered technologies integrated into all aspects of our lives, from our homes and workplaces to our transportation systems, economy, social lives, tourism and healthcare systems.

“The following potential gains and losses can be expected by 2040:

  • Increased industrial productivity and economic growth: AI will automate many production tasks currently performed by humans, freeing up people to focus on more creative and strategic work. This could lead to significant increases in productivity and economic growth.
  • Improved healthcare and education: AI will be used to develop new medical treatments and diagnostic tools that will make medical treatment easier and more effective. 3D and 4D printing will be used to manufacture and synthesize medical body parts. The personalization of education will be seamless for each student. This could lead to significant improvements in healthcare and education outcomes.
  • New forms of entertainment and recreation: AI will help create new and immersive forms of entertainment and recreation. This will lead to new ways for people to relax and socialize. This will include a lot of games in the metaverse, using extended-reality (XR) tools. There will be both immersive and generative-AI entertainment.
  • New social and economic challenges: AI is likely to lead to some job displacement and losses; however, these can be overcome by continuous education and the creation of new job roles. A second primary concern is the use of more-sophisticated autonomous weaponry and ammunition systems and other deployments of AI in warfare. A third concern is AI’s deepening of more-comprehensive surveillance that will further erode privacy. A fourth concern is the polarization of society due to uses of social media to manipulate and divide people; more attacks between people of opposing views will be a serious problem. This bad side of digital life has been evident for the past decade or so, playing a role in the erosion of democracy and human rights in many places in the world.

“It is important for people worldwide to start finding ways to harmonize and work together toward the responsible use of AI. If it is used ethically, AI can be put to work to further improve the quality of life for people worldwide. It will be used to address some of the world’s most pressing challenges, such as climate change, poverty and disease. It is vital to identify the potential risks associated with AI now and to take steps to mitigate these risks to ensure that it is responsibly developed and used in a way that benefits all of humanity. We need to ensure that AI systems are transparent and accountable and that they are used to promote human rights and well-being.”

Walid Al-Saqaf
AI’s success will require maintaining a delicate balance between its vast potential and the challenges it introduces

Walid Al-Saqaf, associate professor of media technology and journalism at Södertörn University in Huddinge, Sweden, wrote, “By 2040 the ubiquity of AI will reshape numerous facets of our daily lives. Advanced personal AI assistants will likely be part of our routines, offering predictions and automations tailored to individual needs. The medical field is poised to witness significant advancements, with wearables and AI-driven virtual consultations becoming much more commonplace. The merging of human thought processes with computational capacities through brain-computer interfaces such as Elon Musk’s Neuralink project, if realized, could be a game-changer.

“This technological proliferation, however, also brings with it potentially grave challenges. The economy will undergo transformative shifts, birthing new professions while sidelining others and necessitating societal strategies for unemployment and for creating new jobs and skill sets. Governance, too, could benefit from enhanced AI capabilities, but the potential for misuse in surveillance and control by authoritarian regimes looms large, particularly in the realm of warfare.

“A major inflection point could be the evolving power dynamics; AI might decentralize power by equipping individuals with tools once exclusive to large entities, particularly if peer-to-peer cryptocurrencies become the norm for exchanging value. However, the risk of a few dominant entities controlling AI’s pinnacle remains unless a total transformation of how wealth is created is achieved, e.g., universal income and reduced economic inequality.

“While the efficiency gains and new opportunities AI offers are undeniable, concerns over data privacy, job displacement and diminishing human-to-human interactions persist. If the brain-computer interface is hacked, for example, then this may create a major risk where humans may be misled or take actions based on generated information.

“In essence, AI’s trajectory in the coming decades presents a delicate balance between its vast potential and the challenges it introduces. We must strive to minimize the risk while harnessing the best of what this revolutionary technology has to offer.”

Andrea Romaoli Garcia
The great benefits of positive innovation are always accompanied by great challenges

Andrea Romaoli Garcia, an international human rights lawyer from Brazil working toward transformational governance, said, “AI is merging with biology and other technologies. We expect to gain greater knowledge due to heightened data analysis. Cognitive robots may emerge to assist in many human endeavors. All of these breakthroughs can be a big deal for many aspects of governance and daily life. But challenges are always present alongside the benefits in times of innovation. While AI has the potential to enhance public services, health care, education and the global economy, one global problem still at large will be that many people will still struggle to access employment opportunities.

“Mental health in the digital age is already a concern within society. The proliferation of digital platforms has led to heightened levels of anxiety and depression for many people and caused some to withdraw more from society. By the 2040s, advanced AI avatars and bots will replace humans in many if not most of our online interactions and jobs.

“People are losing interpersonal social skills. They are avoiding face-to-face real-world contact and relying on remote transactions. In the future, many people are likely to lose the capacity to build strong interpersonal relationships and strong human networks. Lack of real social understanding is likely to increase conflicts at both the social and the diplomatic levels. Furthermore, how can we expect bots to help us in a civil and truthful manner in the near future if machine learning is being trained on sets of data exhibiting human conversations and disinformation at a time in which humans are becoming more withdrawn, divided and even violent?”

Dmitri Williams
Some systems may spread the gains equitably while others will be reserved for the rich

Dmitri Williams, professor of technology and society at the University of Southern California, said, “It’s difficult to estimate the further evolution of such a rapidly moving target, but one thing I think is safe to say is that AI brings speed and efficiency. Is that good or bad? It’s both, and the net effect is going to vary widely across the planet.

“If we take the very long view, we can see that AI is likely to increase what can be done and reduce the amount of human input to do it. That will result in increases in productivity per capita. That’s mostly a good thing, but it needs to be looked at against the backdrop of social mobility and the distribution of wealth across the world. We have enough ‘stuff’ right now on the planet, and we still have economic disparity, poverty and places without clean water.

“Will more stuff be the rising tide that lifts all boats, or will it simply be more for the wealthy while the conditions of the poor remain unchanged? My guess is that the answer will be a little of both, and that it’s going to vary based on the politics and structure of different groups and countries.

“Places that privilege equality and a social safety net will look at the increases in productivity and seek to spread them out to ensure health, safety, well-being and opportunity. Places that privilege maximum wealth for those who can attain it at any cost will be more likely to continue with disparities.

“If we look at, say, Scandinavian countries, we might expect something more like the rising tide lifting all boats. Highly economically disparate places like the U.S. are less likely to see universal gains, but there should still be some lifting of universal conditions through inefficient trickle-down effects. Faster, better health care and more-accurate diagnoses, for example, are still a net positive; they will reach the rich first but will eventually reach more people than have them now.

“So, asking about the impact of AI is as much about what it can do as it is about where it will do it and how some systems will spread the gains out while others will concentrate them.”

Axel Bruns
We are entering a novel and versatile new stage in the ongoing evolution of machine-intelligence systems

Axel Bruns, professor of digital media and chief investigator in the ARC Centre of Excellence for Automated Decision-Making and Society at Queensland University of Technology in Brisbane, Australia, said, “LLMs (large language models) are getting easier and cheaper to build and run. This means that they will no longer be specialty services that only major tech companies can provide. It’s likely that they will proliferate – i.e., not just their use in everyday life, but the variety of available systems, including self-hosted stand-alone systems. This means they’re going to be less like search engines or social media platforms (few providers, very large userbases) and more like 3D printers or drones (many vendors, local deployment). This can be highly generative, leading to substantial competition and high levels of innovation, but also dangerous, because such systems are subject to very limited oversight and offer few opportunities for effective policing.

“The most dystopic perspective is that such AI systems will be used for rogue and illegal purposes, much as 3D printers can be used to print guns or drones used to deliver bombs. Key challenges at the societal and political level include the use of AI to further pollute the information environment with disinformation and to disrupt and discourage more meaningful and prosocial discourse. Further, these systems will also cause substantial economic disruption, undermining and replacing many existing professions and requiring substantial change in most others. Actions like the recent Writers Guild of America strike can delay such change but won’t be able to hold it off forever; and many other professions lack such labour-force organisation in the first place or will be outsourced to locations where worker protections can be bypassed.

“Conversely, we will see the emergence of a cottage industry of AI intermediaries, at least in the short term, similar to the search engine/social media optimisation services of the past decades. These will offer prompt-engineering and related services. But these may be short-lived – much as SEO/SMO have been – as ordinary users’ AI literacy improves. What remains will be a handful of high-end services for major commercial customers, as well as a bunch of charlatans still trying to make a buck off the rubes who haven’t yet seen through the hype.

“I am considerably less concerned about the current hype about super-intelligent AGIs that will gain the power to destroy humanity. This has always been a convenient fiction, playing on science fiction tropes. It has been promoted deliberately by AI vendors themselves in order to generate further hype around their projects; the fact that some of the current industry leaders themselves were amongst the people who were claiming that ‘our tools are so powerful, they could wipe out humanity’ tells us all we need to know about how seriously we should take these statements. We should move past such silliness and take these tools for what they are: a novel and versatile new stage in the ongoing evolution of machine-intelligence systems, yes, but ultimately continuing to be shaped by their developers and users far more than shaping them.”

Jim C. Spohrer
Our ‘digital twins’ will help us become better versions of ourselves

Jim C. Spohrer, board member of the International Society of Service Innovation Professionals, previously a longtime IBM leader, wrote, “Most significant will be the digital ‘twins’ for all of us. By 2040 we will all ‘own’ our own digital twin, and large companies and governments will also build digital twins of us, so some new rules and policies will be needed. The potential for benefits from digital twins is very great, as people can learn to invest in improved win-win interaction and change – the give and get of service to help us become better future versions of ourselves: healthy, wealthy and wise. The potential for harms from digital twins is equally large, as they can become a powerful drug for hedonistic activities – especially for children, the elderly and other vulnerable populations. For more on this, search the Journal of Service Research for information on hedonic and functional goal setting in the give and get of service in business and society.”

Vint Cerf
These systems magnify the damage possible by a single person against society

Vint Cerf, Internet Hall of Famer and vice president and chief Internet evangelist at Google, said, “Machine learning-based ‘AI’ tools will be much more widely available but they will have dual-use challenges, as is the case with many other very powerful technologies. We will be hard-pressed to find and hold accountable the parties who are using these tools for their own benefit at a cost to others. In the hands of skilled users, AI/ML will be a power source augmenting human capabilities. The risk factor is that these systems will magnify the damage possible by a single person against elements of society.”

Anonymous respondent
Will AI usher in an artistic renaissance or simply infinite repetitions on a tired theme?

A research scientist who works at a major technology company wrote, “Right now, we think of AI as a tool for improving things we already do; we leverage AI in an intentional way. But in the near term, AI will become latent – something that influences both the things we think about doing in the first place and the things we consciously decide to do – without any prompting. This could involve making prioritization decisions for you or elevating highly relevant information based on weak contextual cues even before you think to ask.

“To make it concrete, imagine that you finish an AI-free dinner with your family and start washing the dishes on your own. Your smart glasses recognize that this is a rote task which you are doing alone and suggest that you catch up on today’s news. A few days ago your neighbor put out a political yard sign for a candidate you’d never heard of, so the AI has suggested some civic programming. The news is delivered in a way that matches your preferences for level of detail and topical focus, and special attention is paid to your favorite outlets and opinion writers. But unfortunately, the news you care about is all bad. Before you get too disheartened, the glasses remind you that you promised one night this week would be dedicated to having ice cream sundaes with your kid. And, by the way, the organic bananas look like they might get too ripe if you wait much longer.
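The kind of latent, context-triggered prioritization imagined in this scenario can be caricatured as a scoring function over candidate suggestions. This is purely an illustrative sketch; every signal name, weight and candidate below is a hypothetical invention, not anything the respondent describes:

```python
# Toy sketch of latent, context-driven suggestion ranking.
# All signal names, weights and candidates are hypothetical illustrations.

def score(candidate, context):
    """Score one candidate suggestion against weak contextual cues."""
    s = 0.0
    # Rote, solitary tasks are good moments for passive content.
    if context.get("task_is_rote") and context.get("alone"):
        s += 1.0 if candidate["kind"] == "passive" else 0.0
    # An unfamiliar local cue (e.g., a new yard sign) boosts civic items.
    if "unfamiliar_local_politics" in context.get("recent_cues", []):
        s += 2.0 if candidate["topic"] == "civic" else 0.0
    # Once mood turns negative, standing promises outrank the news.
    if context.get("mood") == "disheartened":
        s += 4.0 if candidate["kind"] == "reminder" else 0.0
    return s

def suggest(candidates, context):
    """Return the highest-scoring candidate for the current context."""
    return max(candidates, key=lambda c: score(c, context))

candidates = [
    {"name": "news_briefing", "kind": "passive", "topic": "civic"},
    {"name": "sundae_reminder", "kind": "reminder", "topic": "family"},
]
context = {"task_is_rote": True, "alone": True,
           "recent_cues": ["unfamiliar_local_politics"], "mood": "neutral"}
print(suggest(candidates, context)["name"])  # prints "news_briefing"
```

Under a neutral mood the civic news briefing wins; flip `mood` to `"disheartened"` and the sundae reminder takes over, mirroring the dishwashing vignette above.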

“What makes this example different from existing AI ‘suggestions’ is that these will all be actually good ideas. You will be excited to experience them.

“AI may also help people make more durable shifts in their preferences and behaviors; when you want to do something new, AI can provide reinforcement, guidance and the context needed to make the change. For this reason, AI will be particularly valuable in education and employment contexts – it will help fill in gaps and allow people to more seamlessly adapt to new expectations.

“The biggest risk of emerging AI technologies is stagnation. Pleasing or entertaining someone (which is where the digital technologies money is!) does not challenge them to grow – it reinforces their existing worldview and serves self-satisfaction. Will people opt into AI that disturbs them, that expands the windows of perception through discomfort? Could an AI you employ to make your life easier reasonably be expected to push you to make yourself better? It’s possible that mass-market AI will give people an endless stream of superhero movies unless there are critical voices who can show another path.

“In addition, if all art becomes bricolage – re-assemblages of past creative outputs – the idea of originality becomes difficult to maintain. It’s unclear whether AI will usher in an artistic renaissance or something closer to medieval art: infinite repetitions on a tired theme.”

Bibek Silwal
We may fulfill a goal once seen only as science fiction: Each person born may live forever

Bibek Silwal, a consulting civil engineer and founding member of the Youth Internet Governance Forum in Nepal, wrote, “The tech field is highly unpredictable. With the emergence of generative AI and newly trained models popping up everywhere today, anything could be possible by 2040. It seems as if there are an equal number of opportunities and risks due to the dynamic nature of the evolution of the tech. In this new world of more-diversified networked communications tools, with advances coming in AI, quantum computing and many other likely sectors, tech could come to dominate over human capacities in terms of crime and oppression. When the tech takes over from the humans, doesn’t that make humans slaves to those operating the AI?

“It is up to every individual using digital tools to question themselves in the process, asking, ‘Am I doing the right thing? Does this tool and what I am using it for serve humanity, or does it only serve specific, unknown persons with unknown motivations?’ Accelerating technological change has left us with many unanswered and unanswerable questions. Things seem to be moving more quickly all the time. In recent years AI has begun to bloom in full force, from autonomous vehicles to ChatGPT to drones and more.

  • Education Systems: Education will undergo significant transformation in the coming decade. AI will play a dual role, presenting both threats and opportunities. On the one hand, AI can enhance, upgrade and personalize students’ learning experiences, adapting content to individual students’ needs and abilities. On the other hand, it may lead to concerns about a decline in students’ creativity and critical-thinking skills and a decline in humans’ natural learning capabilities, which have been in place for thousands of years. Teaching pedagogy should change to embrace AI.
  • Digital Divide: The digital divide is likely to persist, with barriers to internet access as well as weaknesses in digital infrastructures remaining a challenge for less-developed countries. This gap can limit the opportunities for individuals in less connected areas to experience the benefits of AI and the digital world.
  • Human Interaction: AI is interrupting and reshaping human interactions with friends, family and everyone else. The way people communicate and interact online and offline will continue to change.
  • Public Services and Facilities: The use of public services and facilities, such as healthcare, community services and administrative services, will see significant upgrades. AI-driven technologies will enhance the efficiency and quality of these services. Telemedicine, personalized healthcare and smart-city initiatives will redefine how people access and experience these services. AI will assist in making resources and tools accessible.
  • Automation: Automation will extend into more-varied aspects of life, from automated cars to automated restaurants. This shift will change the way people commute, dine out and engage with services.
  • Digital Crime: The type of crime in the digital world will evolve. With AI, cybercriminals can employ more sophisticated and dynamic tactics for fraudulent and criminal activities. New forms of digital crime may emerge, challenging law enforcement and cybersecurity efforts. Detecting and preventing these crimes will require advanced AI-driven solutions.
  • Employment and Economy: The workforce and job market will experience significant changes as AI becomes more integrated. Labor intensiveness will be reduced, and routine tasks in various industries will become automated, reducing the demand for certain jobs but also creating opportunities for newer tasks. Human work will shift toward more supervisory and decision-making roles. Upskilling and adaptability will be crucial for job security. Automation will also upgrade production capacity, contributing to the economy. Developed countries will leverage the technology to increase their GDP and profitability.
  • Global Disparities: Disparities between first- and third-world countries will become more pronounced regarding AI implementation. The most-developed countries will speed up the adoption and integration of AI technologies, while less-developed countries may lag behind. This disparity could exacerbate economic and social inequalities and cause serious problems if not addressed. The global village may become global islands.
  • Market Dynamics: AI will influence market dynamics in trade sectors, potentially leading to fraudulent activities and sudden market collapses in stocks. High-frequency trading algorithms and AI-powered market analysis can introduce volatility and challenges to financial stability. Regulatory measures and oversight will be essential.
  • Currencies: Digital currencies will be key in the coming years. Adoption will grow, not only of cryptocurrencies but also of government-issued digital currencies, and more global currencies will emerge.
  • Surveillance and Privacy: Citizens’ privacy will face threats from authoritarian governments, which will implement advanced surveillance technologies that erode personal privacy. Striking a balance between national security and individual privacy will be a critical challenge.
  • Predictive Politics: Political systems will become more predictive with the integration of AI. AI can analyze vast datasets to predict trends, election outcomes and public sentiment.
  • Deepfakes and Misinformation: Deepfakes and misinformation will pose a significant challenge due to AI’s ability to create convincing fake content and its distribution. Identifying and combatting disinformation will require advanced AI-based detection tools and regulatory measures.
  • Digital Identity: AI can potentially compromise online security and lead to identity theft. Protecting digital identities will be a priority, with cybersecurity measures continuously evolving to counter emerging threats.

“At the end of the day, it is up to all stakeholders to participate in taking us forward to a sustainable digital future for everybody. We should address present problems and anticipate emerging ones, acting to diminish them as well as possible. AI has brought opportunities to everyone. Technology may eventually enable us to fulfill a goal that used to be considered only science fiction: that each person born may live forever, with their mind and intelligence digitally retained in some shape or form.”

Buroshiva Dasgupta
‘Learn to operate the magic lamp. The genie will be at your service. Always.’

Buroshiva Dasgupta, director of the Center for Media Studies and Research at Sister Nivedita University in Kolkata, India, said, “Communication will be easier, less trustworthy perhaps but more efficient. The fear that AI will replace human activity is mostly unfounded. The human brain is much smarter.

“Many of the dreary daily chores will be taken over by AI – and that’s welcome. More time for idleness, and that is praiseworthy. Those who cannot make creative use of the extra time allowed by machine activity will suffer from anxiety, but in time one hopes they will find alternative vocations. A typist became a computer operator in a day – so why worry? But you needed to learn to use your fingers – that’s a basic skill.

“Humans will have more time to think. So be prepared for it. Don’t panic. Your day chores will be looked after by the machines. Learn to operate the machine, don’t be its slave. Infinite new possibilities are opening up. Have faith in yourself. Don’t let AI become a Frankenstein. Learn to operate the magic lamp. The genie will be at your service. Always.

“You will have time to be spiritual. Please go ahead. Read the scriptures better – in the new social context. What we really need to do is simplify the operation of the new technology for the masses. Don’t allow it to remain accessible to only a few. They will try to control it by confusing the masses. There lies the key to social welfare. The new communicators must demystify the technology.”

Philippa Smith
Developers, governments, civil society are working together to identify best practices of AI

Philippa Smith, a digital media expert, research consultant and commentator based in New Zealand, wrote, “AI is life-changing. By 2040 it will be so ingrained in individuals’ daily lives that it will have become normalised, accepted and expected. Parallels can be seen in our experiences with the advent of the internet as it took us down new pathways in how we learned, were informed and entertained, how we communicated with our social networks, did our purchasing and banking, sourced our news, organised holidays, sought medical advice or engaged with government departments and organisations (to name only a few examples).

“AI, too, will take these activities to new heights – but at a much brisker pace. Even now in 2023 there is a sense of urgency from professionals, businesses, organisations, institutions and governments that we all need to jump on the bandwagon with AI or we will be left behind. It is indeed a revolution. What is significant for me, and this gives me hope, is that people have healthy reservations about what the future holds. In 2023, it is pleasing to see developers, researchers, governments and civil society working together to identify best practices of AI, and exploring how emergent issues such as deep fakes, biased programming, socioeconomic equality and invasion of privacy might be countered.

“If we work collaboratively to reach the best possible outcomes as the technology continues to advance, then by 2040 we may be well placed in taming AI so that it is exactly what we envisage: a game changer. Gains will be felt in many fields of life in the next 15 years – improved business productivity and advances in medical science, education and law. AI is a problem solver and offers exciting possibilities.

“One of my concerns is that if AI takes over too much, if we become too reliant on its superior abilities because it can work faster and more efficiently than human beings, we might lose our motivation and our desire to be personally creative. That, indeed, would be an unfortunate loss.”

Anonymous respondent
AI that helps code will help things. But AI that generates writing ‘will be a catastrophe’

A professor of statistics at a major U.S. university who is an expert in prediction and inference wrote, “If by ‘AI’ you mean what people mean by it today – namely, generative models for text and images and so on – then the biggest effects will all follow from making it very cheap to produce the sort of text, images and computer code that were abundant on the Internet in the early 2020s. Data from after that will be too polluted by the output of generative models to be really usable. This means that we’ll be able to churn out tons of boilerplate/repetitive/insincere writing, that certain sorts of commercial/popular art will be extremely cheap, and some kinds of low-level code will be extremely cheap. These things will thus become even more common and even more devalued, in both the social/psychological and the monetary sense.

“The AI takeover of computer programming will mostly be good. Memorizing low-level coding for basic tasks was always a waste of time, not unlike memorizing multiplication tables. This advance is somewhat similar to the introduction of calculators. Even then, however, strong norms will have to develop about not relying on automatically generated programs for anything complicated. (The ‘hallucination’ problem is fundamentally unsolvable for anything like the current technologies, and we’re not likely to see anything radically different available within 15 years.)

“Writing and speech synthesis, however, will be a catastrophe. Lots of our institutions are predicated on words coming from human beings and signaling at least some minimum degree of thought and commitment. Writing to elected representatives and comments to public agencies are already astroturfed, but that will become too cheap to meter. Online reviews are already gamed, but, again, it will be trivial to produce hundreds of reviews for, or against, anything you like. Search engines are institutions for aggregating distributed opinions about which web pages are relevant to which queries, but they rely on some genuine intelligence being behind the creation and maintenance of links; that signal will be overwhelmed. In every case, each extra spammer will be diluting the value they’re all seeking to exploit, but that’s not going to stop any of them.

“Lots of our institutions could adapt. (For instance, one might have to provide some sort of biometric proof-of-humanity before submitting a comment to an administrative agency.) But these adaptations will be expensive, clunky, and require a good deal of experimentation to work out. It’s possible that in some cases the adaptation will be to get rid of genres of writing that are already extremely formulaic and degraded (job application cover letters, corporate mission statements, expressions of official concern, etc.). It’s also possible that we’ll demand even more of these things when they’re cheap.

“I am not very concerned about reproducing the various injustices of our society in our machines. We do a good enough job of that on our own, whatever you might think those injustices are. I am very depressed by the prospect of our machines endlessly rehashing our most inane, and most common, online arguments and killing the Internet as a valuable source of information in the process.

“If by ‘AI’ you mean what we used to call ‘data mining,’ i.e., prediction and decision-making based on statistical models, that’s a very different and much longer, much slower story.”

Garrett A. Turner
AI will fall short and the benefits and problems in future will be quite similar to those of today

Garrett A. Turner, vice president for strategy at Liberty Port, which constructs wireless networks globally, predicted, “By 2040, AI will have fallen well short of the overly promoted capabilities that scientists and researchers have promised. It will not significantly influence social systems throughout the world.

“AI will undeniably have a major economic impact between now and 2040. As with most new technologies, government and the private sector will invest substantial resources into developing the central platforms or marketplaces in which users will leverage AI. However, I believe these investments will miss their mark. The everyday person will have little to no use in engaging in this type of technology. Large corporations driven by data analytics stand to benefit the most, as well as employers in labor-intensive businesses that can automate, outsource and ultimately eliminate human employees as a whole.

“Political systems will see the most change due to AI advances. It could be used during live political debates to show real-time data revealing whether candidates are misleading or misinforming the audience. Precise data visualizations of voting records and public political stances could be posted to inform constituents about their representatives’ performance. Unfortunately, AI will also be used to produce and spread convincing but false deepfake videos of trusted people and sources. Campaigns will be generated to target audiences for votes through data analytics rather than grassroots campaigns aimed at understanding the greatest needs of a local populace.

“Overall, I believe that by 2040 AI will not step too much further beyond the benefits and deficits it is creating for individuals and society today. Yes, AI will impact our daily lives. But the impact of additional server farms will not outpace that of cattle ranches, and the canary in the coal mine won’t be cryptocurrency.”

Lee Warren McKnight
A new priesthood or profession of certified ethical AI developers will emerge

Lee Warren McKnight, professor of innovation and entrepreneurship at Syracuse University, commented, “The existential battle over the next 15 years will not be humans versus AI (as Hollywood and misinforming billionaire oligarchs portray, the better to keep us entertained and unconcerned about their historic hoarding of wealth). Rather, it will be between good, bad and evil AI.

“By 2040 the general public and political leaders will know not to expect that Large Lying Machines (‘LLMs’) are designed to serve the public good. Before 2040, disasters of social harm caused by bad AI design will finally spur action; regulation of AI will focus on real harms painfully learned from bad experiences. A new certification process or processes, whether organized at the professional or national level, or both, will ensure that at least some of those to whom much computational power is entrusted recognize that they have professional responsibilities extending beyond their paycheck and employer – and, ideally, a legal obligation to do societal good. Just as an incompetent but licensed civil engineer can be prosecuted for violating professional commitments, for example by building bridges that easily collapse, so too must there be consequences when artificially not-so-intelligent systems designed, intentionally or through professional negligence, to discriminate unlawfully are deployed.

“Thus, by 2040 a new priesthood or profession of certified ethical AI developers will emerge who are willing to sign their names to a pledge that they have done their best, as they were trained to do, to ensure XYZ AI system is designed to minimize bias and social harm, and to self-monitor and self-report anomalous behaviors. (They will no longer ignore the known consequences of models designed to maintain sticky ‘engagement’ or rushed to public use before they are refined, as is common with the incredibly error-prone Large Language Models being released today.)

“One positive 2040 scenario due to regulation: Industry will be incented to improve its practices and produce not just good but better AI. Local city and state procurement decisions will come to be shaped by verifiable proof that bias, discrimination or fraud are not a feature or a bug of AI-powered city services. Humanitarian or public-interest AI bots will power a growing array of robots working for good, in people’s homes, communities, hospitals and schools. Good AI will be fantastic and do amazing things for us, improving the quality of life substantially over the coming 15 years. On average.

“But, yeah, then we have to speak of the consequences of the next decade-plus of bad-by-design AI. These systems are built and released quickly for competitive reasons following the ‘break society fast’ amoral model espoused by Silicon Valley-ish wannabe philosopher king bros pretending to be dystopic visionaries. These systems may have high error rates and poor security and privacy controls, and they may be designed to package and sell user information whether true or ‘hallucinated.’ Artificially intelligent disinformation as a service will only be rivaled by the also fast-growing market for AI-powered misinformation as a service. Social harm by AI design is not a thing 15 years in the future, it is a business model today.

“My personal AI bot of 2023 tries to bully its way into my online meetings today, pushing around professionals who wonder if I will be upset that my AI bot was refused entrance to a Zoom (answer: no, of course not). AI deep fakes and bully bots and scam-artist Large Lying Models will insist they be let into our Zooms, rooms and lives to vacuum up our data and steal our money with far greater ease and convenience than do the spam emails of today. The business disruption, data and intellectual property theft, and fraud committed by ‘personal’ AI bots actually serving another master/enlisted in a bot army will inspire a new category of case law. (My personal AI assistant/bot has never signed an NDA. So, am I liable for its collection and sharing of others’ proprietary information? Courts will decide in the next 15 years.) Logically, we must recognize that AI models and systems will quickly learn that crime by design has no meaningful consequences – for the AIs at least.

“Finally, there has been some discussion about the eventual possibility of truly evil AI. We’re hearing a lot of noise about AGI lately, as it is seen by some engineers as the ghost in their machines/large language models today. They are the ones hallucinating, or at least suffering from Freudian transference. Artificial general intelligence will be no more real in 2040 than when MIT Professor Joseph Weizenbaum created Eliza [a conversational natural language processing program that emulates a Rogerian psychotherapist] in the mid-1960s.

“The willingness to presume there is actual intelligence in AI – rather than a scripted, or rather, modeled process designed to trick you into thinking you’re talking to someone who’s not actually there – will be an ever-growing problem through to 2040. AGI will not be real, nor will it be a problem in 2040; rather, people’s attribution of humanoid characteristics to machines will lead to new addictions by design and social alienation by design, and it will be a favored tool of a growing host of information warfare-enlisted AI bots.

“Detecting what’s real and what and who is an artificially intelligent scam artist will be the huge social problem of the day, since artificially intelligent machines and models trained to be and do evil can do so without ever suffering from a guilty conscience, or – unless the law catches up – any legal consequence for their makers.”

Chris Riley
AI seems magical now, but it really offers only flawed and limited promise

Chris Riley, executive director of the Data Transfer Initiative and a distinguished research fellow at the University of Pennsylvania’s Annenberg Public Policy Center, commented, “I recently wrote a technology policy piece on this topic. I believe that the worries of human displacement by AI in various ways (as an employee, as a relationship partner or as the primary tenant on Planet Earth) are overblown.

“While we will continue to see significant advances from AI in many ways, the raw power of simulating intelligent behavior through LLMs will plateau as a result of model collapse and diminishing returns. AI will not suddenly give us always-perfect answers to questions nor be able to tell us how to do anything, much less be able to execute on such tasks perfectly. In this way, it is much like search engines. They were magical when they first appeared and seemed like an opened door to a fount of infinite knowledge and possibility; today, they are a fundamental part of everyday life, but they have severe limitations and, like their sources of information, cannot be unquestioningly relied upon. The same is true of AI.

“Where we can see the biggest potential impacts of AI is in industrial efficiency, where the U.S. stands poised to reclaim a position of world leadership at the intersection of many evolutionary forces – a ‘de-risking’ with China, massive domestic investments under the Biden administration and America’s current leadership in AI technology. AI offers the most benefits in the most mundane of circumstances, though the hype of simulating human interaction gets all the news headlines. We risk, unfortunately, an equally large consequence of AI in the negative: the further undermining of the post-World War II world order.

“We already have questions around the efficacy of the United Nations on the heels of Russia’s invasion of Ukraine, as Russia holds a permanent seat on the UN Security Council while committing such grave violations of security. What will happen with China and Taiwan between now and 2040? And will American economic restrictions on China, motivated in part by the desire for AI dominance, exacerbate tensions within the West, even as the U.S. and Europe struggle to identify a shared approach to technology governance to present to the developing world as an alternative to authoritarian control?

“Time will tell on these questions. But rather than AI being at the heart of them or driving their answers, AI – like search engines, like the internet, like computers themselves – will simply be one piece of the puzzle, like its historical precedents. A large piece, but a piece nevertheless.”
