Insightful essayists go deep in this chapter. Here’s a brief sampler of a few of the big ideas: Abundant instantly-available data and the arrival of “Mind2 – the collective mind” – will remake the world and “encroach on human consciousness.” | “Artificial machine intelligence could cause dramatic or even existential long-term changes in human institutions, culture and capability.” | “Can AI build defenses faster than hard-working bad actors can devise offenses?” | A Universal Basic Income could “eliminate systemic poverty and promote creative activity.” | Even the most “moderate changes in political alignment and the broadening of acceptable policy solutions could induce dramatic changes in individuals’ lives.” | The more-fully-realized metaverse of 2040 could “unlock more-powerful XR capabilities.” | “AI’s ubiquity will tempt us to give up ownership, control and responsibility.” Read on for details about these points and much more.


Barry Chudakov
Thought is no longer generated from solo insights; it is the end product of a shared brain

Barry Chudakov, principal at Sertain Research, wrote, “Adjunct intelligence will be everywhere, exerting a dramatic effect on each person’s identity and individual perception. AI’s collective powers and uber-reasoning are arriving as a silent encroachment on human consciousness. This impinging is happening without much bother or awareness beyond cultural enthusiasm for AI. AI will be behind the tech curtain, contained and operating in almost everything we touch and invested in our objects and inventions.

“The embedding of AI will be both a convenience and a point of contention as we enhance our lives with it and entwine our lives with its hidden presence, which will create a tech-paranoia backlash as jobs are lost to AI and the digital divide widens.

“AI encroaching on human consciousness will demand that humans become more meta-aware – realizing it is how we entrain with our tools that alters our thinking and behaviors. This is not a new phenomenon, but we have never before encountered a technology as powerful and pervasive as AI.

“As Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, the authors of ‘The Age of AI,’ write: ‘For humans accustomed to agency, centrality and a monopoly on complex intelligence, AI will challenge self-perception.’

  • “By 2040 AI will be more refined and accommodating, funneling our desires and living inside almost everything – our light switches, our vehicles, our devices and computing tablets, our classrooms and offices. AI will be designed to enhance (by assisting) our thinking and actions, and much of this will be below cognition. For example, doctor visits will not always require ‘going to the doctor’ as we will have a monitoring chip inside our bodies that, via AI, will record and convey to our doctors how we’re feeling, our heart rate, our blood pressure, our temperature and gut health. What will happen when AI knows us better than we know ourselves? [A brief sketch of such monitoring follows this list.]
  • “AI feeds on data – vast quantities of data – this single fact becomes an arbiter of the future and a harsh critic of the past. Previous civilizations had no data stores, no data-mining mechanisms, no endless data flows that supported or refuted assertion, conjecture, invention. What the data says is a profoundly different question than what the prophet says. Data access and analysis is a completely different dynamic than inherited, traditional rules and rule-based behavior; it ignores ‘thou shalt’ and ‘thou shalt not’ while favoring the restless movement of data, increasingly presented in colorful and well-designed visualizations. Having said that, junk data will become a thorny problem, as unscrupulous and self-serving actors and social media platforms work to manipulate public opinion or foment discord for audience ratings and metrics. 
  • “Among the business and financial implications of more-data-driven realities: Only the biggest companies with the deepest pockets and resources will be able to manage and silo the vast data stores that fuel AI; hence, one commercial consequence of our growing dependence on AI will be to grow tech giants into even larger behemoths. ‘Power corrupts and absolute power corrupts absolutely’ applies to this accelerated growth of tech companies like Google, Meta, Amazon, NVIDIA and others.


  • “To understand the truly profound change of AI as an adjunct to human intelligence, consider the Cartesian assumption, ‘I think therefore I am.’ This (usually unspoken) assumption has informed most of Western thinking. Descartes could not have imagined, ‘I think with the assistance of neural networks.’ Historically, time was our assistant to sort out truths from falsehoods, or at least provide enough commentary that theories like Earth-centrism or bloodletting were eventually abandoned. Yet individual thinkers had to wait for other individual thinkers to undermine dogma. As a result, throughout history our heroes were solo (usually embattled and threatened) figures shining the light of wisdom into the darkness of ignorance and prejudice, from Socrates and Plato to Galileo, Einstein and Picasso. With the proliferation of AI and the iterative improvements of artificial general intelligence (AGI), individual insight and perception will join with other insights and probabilities and algorithms to produce knowledge. As a result, individual perception will matter less and collective facticity will matter more. Our past history will be seen as faltering missteps because it was not data-based, while we will have to grapple with the retreat of personal vision and the arrival of Mind2. Mind2 is the collective mind; the accessed mind; the mind of everyone, which uses the enlightened individual mind multiplied by many minds. The perceptions of Malcolm X or Riane Eisler or Yuval Noah Harari can now be boosted and amalgamated and restated and improved by others. Authorship and individual copyright mean something different (have no meaning?) in a Mindworld where every notion, every song, every script or book can be rewritten, revised, rethought. Thought itself is no longer housed within one brain but is the end product of a shared brain. Or, as the authors of ‘The Age of AI’ say, ‘… to achieve certain knowledge we may need to entrust AI to acquire it for us and report back.’ This is a new kind of thinking that uses human thought but is not solely human thinking. In this hybrid partnership, humans will learn from machine learning.
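
Chudakov’s in-body monitoring scenario is, at bottom, a telemetry-and-thresholds pipeline. Here is a minimal sketch of what such below-cognition monitoring could look like; the reference ranges, field names and `flag_anomalies` helper are hypothetical illustrations, not any real device’s API:

```python
from dataclasses import dataclass

@dataclass
class VitalsReading:
    """One hypothetical sample from an in-body monitoring chip."""
    heart_rate_bpm: float
    systolic_mmhg: float
    diastolic_mmhg: float
    temperature_c: float

# Illustrative adult reference ranges; a real system would personalize these.
REFERENCE_RANGES = {
    "heart_rate_bpm": (50, 100),
    "systolic_mmhg": (90, 130),
    "diastolic_mmhg": (60, 85),
    "temperature_c": (36.1, 37.5),
}

def flag_anomalies(reading: VitalsReading) -> list[str]:
    """Return the vital signs that fall outside their reference range."""
    flags = []
    for field, (low, high) in REFERENCE_RANGES.items():
        value = getattr(reading, field)
        if not low <= value <= high:
            flags.append(f"{field}={value} outside [{low}, {high}]")
    return flags

sample = VitalsReading(heart_rate_bpm=112, systolic_mmhg=144,
                       diastolic_mmhg=92, temperature_c=37.1)
for flag in flag_anomalies(sample):
    print("notify care team:", flag)  # the 'doctor visit' happens silently
```

The point of the sketch is how little of this the wearer would ever see: the chip samples, the thresholds judge and only exceptions surface, which is exactly the ‘below cognition’ quality Chudakov describes.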

“How will social, economic and political systems change by 2040? Here are some likely possibilities:

  • “In 2040 AI will have enabled a much less ad hoc and more-programmed existence. We will rely on AI to count our hours of sleep and monitor their quality; food will go through an AI filter, tracking pollutants, carcinogens and pathogens, as well as quality of nutrition; dating and mating will continue their trend away from accidental encounters to programmatic readings of others’ likes and dislikes, physique and interests; work will be AI-mediated, with every sophisticated job entailing an AI component and machine-learning knowledge. This more-programmed existence will be the core of a business model for dozens of companies that will consider it their mission to deepen human reliance on AI and neural networks.
  • “In 2040, the effects of Mind2 on society are profound. AI does not represent the end of humanity; it represents the end of humanity’s sole interpretation of reality, of what is, of what will or could be. Perception will start to become a shared resource, like computer programs or data. The individual mind, celebrated throughout human history, will give way to accessed mind. Thinking will happen with our fingers (as we use some screen-mediated tool) or with brain-prompts through smart glasses mediated by, say, eye blinks; these prompts will be neurally accessible as our tools follow more pathways through the human nervous system. We will use AI as a partner, a sounding board, a retriever. But we, ourselves, will no longer be the sole entity in the room. 
  • “Economics will be driven by climate change mitigation and AI-enabled technologies. In business, medicine, politics, war and other fields, any endeavor will be significantly affected by simulation: a sim will become de rigueur for any proposed action or expenditure. Simulation may replace knowing: that is, knowing a thing will become the ability to simulate and thereby test and examine it.
  • “Politics will become a proxy theater for feuds over rules-based-order traditions and practices versus AI-ruled disruptive technologies. Terrorist groups or lone-wolf threats (à la the Unabomber Ted Kaczynski) are at one level an outcry against the takeover of technology in human affairs and a fear of the end of traditional rules-based dogma. But many will not see the world that way; they will see politics in the words (propaganda and rationales) of actors who do not see or think or act from the meta level, but chant and rehash arguments from past traditions. By 2040 the inertia of the prior order of church, school and government – alphabetic-order writing and rule-based tool logic – will be shown to be in a soundless collision with the tool logic of facticity and data-fueled AI. This collision must be navigated wisely to avoid the misguided tension of casting AI as a detriment and treating inherited dogma as capable of addressing existential threats.

“In addition, some other things will stand out when it comes to the gains and losses for individuals and society. The adoption and integration of neural networks into vast areas of human life will be primary. Layers of programmed intelligence will affect how we think, act and perceive the world. Central to this revolutionary adoption of new technologies are the huge data stores on which AI depends. Prior human existence was not data-dependent. Ignorant and self-serving autocrats, religious leaders or politicians made pronouncements that were often backed up by force, and subjects or believers had no choice but to abide by this ignorance. But data remakes the world.


“Many questions of human interest can be affected or answered by sufficient accurate data. This is one of the most significant developments resulting from our adoption of AI. Data skewers past assumptions for having little or no data support, and it points toward newer, revolutionary developments that data enables. We are moving from a rules-based order derived from religious and territorial hegemonies to neural network rules, AI rules that are software- and machine-learning-based. This is a change so profound it reaches into every area of human life, from religion to medicine to war and politics.

“We will gain not only the ability to access all human knowledge and understanding, we will gain a valuable adjunct to human perception. Whether testing and finding new drugs, mitigating climate change or finding workable, peaceful solutions to age-old territorial and political conflicts, AI will provide us with numerous new alternatives we had only dreamt of before. Further, AI will develop solutions human perception has not considered or, given our biological substrate, we were not designed to consider (e.g., AI has made moves in chess and Go that no human has ever tried).

“Much of this gain will be due to moving from (occasionally) inspired assertions to data-driven understanding and conclusions. The beneficial effects of a data-first, facticity approach cannot be overstated. This is not how we have behaved historically, and it blows apart many cognitive commitments of our past including territoriality, religious beliefs, relations between the sexes, human rights, aging and intelligence quotients, to name a few.

“We will also gain another important perspective: AI will allow us to watch ourselves using AI. One of the most important uses of AI will be to use AI to monitor and report on how we change our perceptions and behaviors as we use AI. In the next 15 years one of the things most likely to be lost due to our fascination with deploying AI is oversight, our meta perspective.

“This is thinking about the changes in our thinking and behaving as we use AI, and it could not be more important. Since we always entrain with our tools, we will use AI to help us in myriad spheres. Understandably, we will relegate oversight of AI solely to governments. It is not that we do not need regulation of the role of AI in the public square; we do, but that is not enough. We need to watch ourselves as we’re using AI to create a fuller understanding of how AI changes how we think and act.

“Expecting governments to sufficiently regulate AI would be like thinking that knowing the government-set speed limits was enough know-how to drive a Ferrari. My candidate for watching how we use AI is AI itself. We need to build monitoring and assessment tools into AI, not, by any measure, to create draconian Big Brother oversight protocols, but to assess and report on how we are changing as we use AI. Go here to read more from me on that subject.”

Beth Noveck
Proactive moves to promote the use of AI to enhance democracy are crucial to mitigating risk

Beth Simone Noveck, director of the Burnes Center for Social Change at Northeastern University and GovLab, wrote, “The proliferation of artificial intelligence is poised to usher in profound changes by 2040. AI has already reshaped our daily lives. While the promise of AI is still unfolding, the direction we’re headed hinges crucially on the choices we make today. My greatest concern – and what stands out as most significant to me – is that if we do not prioritize policies and research that harness AI for social good, we may not witness the positive transformation we hope for.

“Our failure to proactively address AI’s potential to deepen democracy could leave us without the necessary mental models to envision and realize an inclusive future. A vital distinction to understand as we navigate this AI-driven future is that actively promoting the use of AI to address our hardest challenges is not synonymous with risk mitigation. While the latter is about preventing harm and ensuring that AI systems don’t inadvertently exacerbate issues, the former is a proactive pursuit of positive outcomes. It’s the difference between using AI to ensure elections aren’t tampered with (risk mitigation) and leveraging AI to increase voter participation or improve policy responsiveness (actively addressing challenges). Both are essential, but they serve different purposes.

“If our focus is solely on preventing the pitfalls of AI, we might miss out on harnessing its full potential to drive societal progress. AI has the potential to revolutionize democracy. It can make our institutions more responsive, our electoral processes more transparent and our public discourse more informed. However, realizing this potential requires a balance of both risk mitigation and the proactive use of AI for democratic enhancement.


“Consider the realm of information dissemination. AI algorithms, particularly those behind social media platforms, play a decisive role in shaping public opinion. Left unchecked, these algorithms can create echo chambers, polarizing society. But if we move beyond just mitigating this risk and actively design algorithms to foster diverse and informed discourse, we can transform public debates and democratic participation.

“Similarly, while AI’s role in electoral processes can be used to combat election fraud, its proactive potential lies in streamlining electoral logistics, making voter registration more accessible, and even facilitating participatory budgeting.

“If we invest in AI for democracy, we could make it easier for governments to listen to their citizens. Instead of voluminous comments that no one has time to read, generative AI can make it easier to categorize and summarize citizen input. At MIT, Professor Deb Roy uses AI to create a ‘digital hearth’ that analyzes and extracts learning from resident conversations.

  • “In 2022, the City of Cambridge, MA, used Roy’s Cortico technology to run a series of issue-based community conversations designed to get resident feedback on the choice of the next city manager.
  • “Our students in the AI4Impact class at Northeastern are working with Citizens Foundation in Iceland and the Museum of Science in Boston to launch a larger conversation on literacy and equity that will begin next month. AI is making it possible to run that dialogue efficiently and effectively.
  • “UrbanistAI, a Finnish-Italian initiative, is using AI to turn the public’s ideas for how their city should be designed into hyper-realistic photographs that communities can discuss. In Helsinki, the technology is helping residents and city officials to design car-free streets together. Using AI prompts, participants visualize changes like adding planters or converting roads into pedestrian zones. The technology even incorporates a voting feature, allowing community members to weigh in on each other’s designs. Now you don’t need a degree in urban planning or artistic skills to see how your ideas could transform your community.
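
As a concrete illustration of the categorize-and-summarize work Noveck describes, here is a minimal sketch that groups free-text resident comments by topic using TF-IDF vectors and k-means clustering (via scikit-learn). The comments are invented, and this is not Cortico’s or Citizens Foundation’s actual pipeline; a production system would likely add a large language model to write the summaries:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical resident comments; a real deployment would ingest thousands.
comments = [
    "We need safer bike lanes on Main Street.",
    "Please add protected cycling routes downtown.",
    "The library should stay open later on weekends.",
    "Extend library weekend hours for working families.",
    "More trees and shade in the central park, please.",
    "Plant native trees along the riverfront path.",
]

# Turn comments into TF-IDF vectors and group them into three topics.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Print each cluster so a human (or an LLM) can summarize and respond to it.
for cluster in range(3):
    print(f"Topic {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print(" -", comment)
```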

“However, the most poignant concern is not just about the challenges AI might exacerbate but about the opportunities we might miss. By 2040, without a vision that balances risk mitigation with proactive societal enhancement, we might never tap into AI’s potential to revolutionize democratic processes, from public consultations to policy interventions.

“The next 15 years are pivotal. What’s most likely to be gained is a more efficient society – one in which services are personalized, predictions are accurate and mundane tasks are automated. But if we neglect the broader vision of AI’s role in society, focusing only on risk avoidance, we risk sidelining its transformative potential. My hope is that we approach AI with a balanced perspective, recognizing that while risk mitigation is crucial, it is equally important to actively harness AI for the betterment of society and the improvement of democracy.”

Micah Altman
The problems raised by AI cannot be solved simply by bolting guardrails onto existing systems

Micah Altman, a social and information scientist at MIT, said, “Popular visions of created intelligence as a utopic or dystopic force date back more than two centuries. Today it is possible to envision that artificial machine intelligence could cause dramatic or even existential long-term changes in human institutions, culture and capability. To predict and shape these long-term changes it is vital to understand the mechanisms by which technologies change society.

“For the past 400 years or so, technology has acted through economics by changing the fixed and marginal costs of processes. This change leads fairly directly to changes in the absolute and relative costs of products and services and shifts the relative advantages of capital and labor. These shifts flow into culture, norms and institutions, with popular entertainment and present-generation attitudes often in the lead. Changes to law and the structure of larger organizations generally lag behind.

“Artificial intelligence, as it is broadly defined, has reduced the marginal cost for many processes related to recognition (e.g., recognizing faces in images or phrases in conversation) and prediction. And AI has advanced rapidly to be used in processes related to information discovery, summarization and translation. Since the emergence in the past year or so of successful ‘generative’ large language models, AI is reducing the cost of using established public knowledge to create information outputs (in the form of text, audio, video, data and software) in order to solve specified problems under human direction.
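
A toy arithmetic illustration of the cost mechanism Altman describes, with entirely hypothetical numbers: when a technology collapses the marginal cost of an information task, the average cost per unit falls dramatically at volume even if fixed costs rise, which is what shifts relative prices and the capital/labor balance.

```python
def average_cost(fixed: float, marginal: float, units: int) -> float:
    """Average cost per unit: amortized fixed cost plus marginal cost."""
    return fixed / units + marginal

units = 100_000
# Hypothetical: human-written document summaries vs. AI-assisted ones.
human_cost = average_cost(fixed=10_000, marginal=20.00, units=units)
ai_cost = average_cost(fixed=500_000, marginal=0.05, units=units)

print(f"human-produced: ${human_cost:.2f} per unit")  # $20.10
print(f"AI-assisted:    ${ai_cost:.2f} per unit")     # $5.05, approaching $0.05 at scale
```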

“Information technology, by making categories of information problems ‘cheap’ to solve, has disrupted the market for entire categories of information products and is transforming the professions involved. Telephone switchboard operators are long gone, and bank tellers are rare. Newspapers and the professions of journalism, bookkeeping, copyediting, weather forecasting and data entry have already changed drastically. IT support, remote customer service, librarianship and the legal profession are currently under strain.


“The generative AI models will increasingly disrupt professions engaged in producing information products – including lawyers, copywriters, grant writers, illustrators, graphic designers and programmers. Within 15 years it is likely that there will be significant disruption in these and related business models and professions – with substantial spillovers into culture, norms and institutions.

“It is also likely that AI will increasingly demonstrate more attributes of sentience (responsiveness to its environment) – which will increase the challenges of governing AI and raise the potential for chaotic systems behavior and malicious human exploits of the technology.

“Although general intelligence, sapience and super-intelligence could someday have widespread disruptive effects – and even pose existential threats – it is unlikely that these will arrive by 2040. Instead, we’ll likely see the hollowing-out of more professions related to information, knowledge work and the creation of routine information outputs. There will be some roles left – but they’ll be reserved for the most complex expert work.

“The algorithmization of these professions will have some democratizing effects, enabling many of us with more ideas than technical skills to express these ideas as pictures, prose and software, or even – using additive manufacturing technologies – physical objects. This simultaneously promises a wider expression of ideas and an increase of human capacity – with increased risk of homogeneity and monoculture in some characteristics of the resulting outputs.

“Further, AI systems will likely remain capital-intensive, energy-intensive and data-hungry. Increasing adoption of these systems without effective regulations is likely to shift competitive advantage away from human labor while promoting monopolies. In addition, these systems act to ‘fence in’ the commons of information by transmuting public information into proprietary commercial AI models – and there is a possibility licensing will be imposed on the resulting outputs. This could yield a substantial concentration in economic and cultural power.

“Ensuring that the disruptions caused by these technologies enhance human agency and the public knowledge commons rather than concentrating power and control requires thoughtful regulation of AI markets and systems. Moreover, growing societal experience with algorithmic systems makes it painfully clear that unregulated algorithmic systems are essentially Machiavellian: they are often able to produce results that do extremely well at optimizing a direct goal (sometimes defined only by implication) while avoiding anything that isn’t explicitly built-in as a constraint. As a result, these systems regularly shock us by discovering unexpected ‘solutions’ that meet the immediate goals but sacrifice fairness, privacy, legality, factuality, attribution, explainability, safety, norms or other implicit constraints that we humans assume need to be part of an answer, but which we didn’t explicitly include.
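
A toy version of the failure mode Altman describes: an optimizer that maximizes its stated goal (predicted clicks, here) happily ignores an implicit constraint (topic diversity) until that constraint is written into the selection rule. All items, scores and names are invented for illustration.

```python
# Candidate items: (title, predicted_clicks, topic)
items = [
    ("Outrage A", 0.95, "outrage"), ("Outrage B", 0.93, "outrage"),
    ("Outrage C", 0.91, "outrage"), ("Local news", 0.60, "civic"),
    ("Science explainer", 0.55, "science"), ("Arts review", 0.50, "arts"),
]

def rank_unconstrained(items, k=3):
    """Optimize only the explicit goal: expected clicks."""
    return sorted(items, key=lambda item: -item[1])[:k]

def rank_with_diversity(items, k=3):
    """Same goal, but the implicit constraint is made explicit:
    at most one item per topic."""
    chosen, seen_topics = [], set()
    for item in sorted(items, key=lambda item: -item[1]):
        if item[2] not in seen_topics:
            chosen.append(item)
            seen_topics.add(item[2])
        if len(chosen) == k:
            break
    return chosen

print([t for t, _, _ in rank_unconstrained(items)])  # all outrage: goal met, norm violated
print([t for t, _, _ in rank_with_diversity(items)]) # slightly fewer clicks, diversity kept
```

The unconstrained ranker is doing exactly what it was told, which is the point: nothing we leave implicit is part of its answer.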

“Those who pay attention to the science and scholarship of AI have come to a consensus that these problems cannot be solved simply by bolting guardrails to existing systems. Values such as privacy, explanation and fairness can be fully and effectively achieved only by carefully designing these capabilities into foundational AI models.”

Michael Haines
AI can help improve people’s lives and the performance of institutions in obvious ways

Michael Haines, CEO of VANZI, an Australia-based organization focused on the development of the governance framework for 3-D virtual models, wrote, “I see a future where AI plays a central role in reshaping production, work, governance, economics, communications, healthcare, education and personal identity. The responsible use of AI can lead to a more sustainable and equitable future, but it depends on how we build this future. Here are some key domains where AI can make a positive difference:

  • “AI and work: There is an endless amount of work to be done building, maintaining, repairing and beautifying our cities; caring for our young, disabled and elderly; and restoring the natural environment. AI-assisted robots will replace some human labor, while AI systems will smooth the flow of materials and goods along the supply chain. AI will also enhance decision-making to deliver better outcomes, more quickly, at less cost, in complex environments. Together, these advances will allow more people to engage in unpaid meaningful activities. What those activities may be is limited only by human/AI imagination and money. People need money to survive and thrive. With sufficient money, most people will find plenty of meaningful activities to occupy their time.
  • “Eliminating systemic poverty and promoting creative activity: I see a role for Universal Basic Income (UBI) in providing the money needed to realise Keynes’ vision of a reduced work week. This can be done by raising the rate of UBI as automation, virtualization and AI alter the job market. As the UBI rate increases, some people will choose to reduce their working hours or exit the workforce to do other things with their life, making room for those who want paid employment. At some point, all people who want it will have sufficient paid work to meet their needs, and all jobs will be filled within a reasonable time. AI can help find the balance. The economy will then be operating at peak efficiency, but with more activity going to provide basic needs, and less on other spending. Doing this would eliminate systemic poverty, while also providing a wage rise for low-paid workers, without cost to employers, thereby short-circuiting wage-push inflation. A UBI can be introduced without raising taxes or increasing inflation, as this video shows.
  • “Personal avatars and self-sovereign identities (SSIDs): I envision the development of 3D, photo-realistic avatars containing your comprehensive personal data that are connected to various biometrics to enhance security, aggregating data from birth. The avatar will have full AI capabilities to understand your needs and wishes. You (not any other entity) would control access to the data within your avatar. So, for example, rather than having to give your name and address, if someone needs to confirm that you are a resident, the avatar will simply confirm that you are. Everyone will trust the avatar as it will be part of a system of SSIDs from which your official-source data is provided by the authorities in question (for example, the local council and registrar of births). This source data cannot be changed without going through a process with the data provider to validate the change.
  • “AI and advertising and marketing: Advertising as we know it may become obsolete. Your personalised AI avatar could source goods and services from global databases, present the most relevant choices (possibly including a ‘surprise’) to users in 3D and then facilitate purchase and shipping. This would leave room for marketing to influence consumer ‘wants,’ which the AI would consider when making recommendations, along with user reviews (linked to SSIDs, so you know they are by a real person). This would free people from ‘choice overload’ and eliminate the need for advertising, though not marketing.
  • “Media consumption: There could be a new model for media consumption, where consumers pay a small fee per view directly to content creators, with a portion going to content curators and platforms. This shift away from advertising could lead to lower costs for goods and services and potentially improve curation of information as creators, curators and platforms vie for recognition for their accuracy and insight. While it won’t eliminate echo chambers, it should diminish their impact, as your AI scans all sources for a story and presents you with a range of sources that are credible (with perhaps different viewpoints) and you pay only for the ones you view.
  • “Misinformation: To combat misinformation, a system can be created to link content to SSIDs. People could still post anonymously, but each post would link back to a confirmed SSID. You may not know who the content provider is, but you would know they are a person and not a bot.
  • “Production and automation: AI can be part of a shift toward local, flexible production cells – powered by local energy sources and using automation and 3D printing – in which materials, parts, tools and team members guided by augmented reality move to each cell as required. These cells could create a wide range of makes and models. In effect, we would ship electrons around the world as ‘designs’ in lieu of shipping atoms in the form of products, greatly reducing costs and impact on the environment. The cells and supply chain would be programmable by designers from anywhere in the world.
  • “Managing the built environment: We will have a complete working model of each thing and every building and piece of infrastructure and the ground beneath at all scales required for decision-making. People will have the same rights, responsibilities and restrictions in the model as in the physical entity the model represents. All information about any object will be linked to its model so the information can be searched for in its spatial context. You just go to where the thing is in the model, or, using AR, you look at the thing in the real world, and – if you are authorized (using your avatar/SSID) – you get access to the information. This will enable better decisions about changes to the real world and allow their execution to be made more quickly, at lower cost and with less risk than using traditional planning and project-management tools. This requires that each model include not only its physical attributes but also the legal and administrative boundaries that apply in the real world along with a new legal framework that mirrors the framework in the real world. This will ensure that any decision made in the model is made by the people who have the same powers in the real world so there is no disconnect (as now occurs). This will greatly reduce inefficiencies and disputes.
  • “Tax and money system: I can envision a reformed tax system with flat-percentage taxes on all spending and rebates offered to avoid double taxation on resale of assets and all business spending. Combined with a basic income for all, this would create a progressive tax system that is simple to administer and difficult to evade. I also suggest that all public assets be purchased using borrowed money which is repaid over the life of the assets, maintaining balanced budgets, so that future taxpayers meet their share of the cost of the assets from which they benefit. I also recommend transitioning to Central Bank Digital Currencies (CBDC) to reduce financial system fragility. These can be introduced in a way that does not disintermediate banks while eliminating the threat of bank runs and maintaining the same level of privacy as now. The approach is explained in this paper. CBDCs have the advantage over cryptocurrencies in that they are subject to due process under the law of each jurisdiction. All taxes could be collected via your bank or banks when money is withdrawn to spend. This allows for the separate percentages for federal, state and local taxes to be calculated based on the location of your principal residence, so all taxes are collected at the one time, further simplifying administration. (Though you might still have levies and subsidies to take account of external factors, say to mitigate pollution, gambling, etc.) [A worked example of the flat-tax-plus-UBI arithmetic follows this list.]
  • “Governance and community decision-making: Let’s also move toward direct democracy using citizen juries selected by lot to evaluate and decide on issues, aided by AI and 3D simulations of the world. Over time, this could reduce the influence of political parties and increase citizens’ participation in decision-making.
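
A worked example of the arithmetic behind Haines’ claim that a flat spending tax plus a UBI is effectively progressive. The 20% rate and $15,000 UBI are hypothetical, and spending is assumed equal to income for simplicity; the net rate (tax paid minus UBI received, as a share of income) rises with income even though the nominal rate is flat.

```python
FLAT_RATE = 0.20  # hypothetical flat tax on all spending
UBI = 15_000      # hypothetical annual universal basic income

def net_rate(income: float) -> float:
    """Effective net tax rate, assuming all income is spent."""
    net_tax = income * FLAT_RATE - UBI
    return net_tax / income

for income in (20_000, 50_000, 100_000, 500_000):
    print(f"income ${income:>7,}: net rate {net_rate(income):+.1%}")

# income $ 20,000: net rate -55.0%  (net recipient)
# income $ 50,000: net rate -10.0%
# income $100,000: net rate +5.0%
# income $500,000: net rate +17.0%  (approaches the flat 20% from below)
```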

“Overall, the approaches outlined here should reduce crime and conflict while improving health and education, making it harder for authoritarianism to flourish, though sectarian conflicts will remain a significant threat.”

Jonathan Grudin
Will AI amplify or reverse trajectories we are now riding?

Jonathan Grudin, affiliate professor of information science at the University of Washington, recently retired from his post as a principal researcher in the Adaptive Systems and Interaction Group at Microsoft, wrote, “If we avoid succumbing to an existential crisis, by 2040 AI will have changed life for those who can afford expensive health care and surgical procedures, homes and vehicles constructed or updated with smart technologies and multiple residences to escape climate extremes. AI will effortlessly organize more information for us than the photos it now handles well. I don’t anticipate useful quantum computing, AGI, nuclear fusion or mainstream brain interfaces emerging that soon. Change takes time. Sixteen years ago, we could buy energy-efficient vehicles, e.g., a Tesla or a hybrid. AI features have improved vehicles in the years since, but most people haven’t made the switch to EVs. Will we see widespread personal ownership of self-driving cars by 2040?

“Generative AI will impact entry-level employment opportunities by 2040. Considering likely regulatory pressures, legal complications and revenue uncertainties, I envision a slow journey along the hype cycle curve to the plateau of productivity.

“The key determinant of how the proliferation of AI will change daily lives is whether AI will amplify or reverse trajectories that we are riding, many of which are associated with digital technology deployment. These include growing wealth inequalities, social polarization and the erosion of in-person communities, declining mental health, the rising power of bad actors and the dangers of climate change.

“The mean standard of living has risen in many places, but wealth inequalities have grown everywhere. Forbes reports that China has 562 billionaires, collectively worth $2 trillion. Seven of the 10 wealthiest people on Earth made their fortunes in technology. When software engineers earn huge salaries, other talented professionals, including doctors, lawyers, politicians, professors, executives and successful athletes and entertainers, will expect at least as much. Rapidly growing and highly visible class systems could lead to the resentments envisioned in science fiction from H.G. Wells’s ‘The Time Machine’ to films like ‘Metropolis’ and ‘Elysium.’


“How will AI affect wealth inequality? In the hands of today’s capable professionals, machine learning can be used to make the poor poorer and the wealthy wealthier. Effective targeted advertising and scams can convince almost anyone to buy things they don’t need and can’t afford, leading to chronic indebtedness. The wealthy can devise or evade laws and move residences or investments to reduce or eliminate tax payments. Developing armaments, a major focus of AI, enriches many and has no productive uses.

“I asked a leading generative AI, ‘How could AI help us reduce income inequality?’ The response was:

  • Job Creation: AI can create new jobs that didn’t exist before. For example, before the invention of computers, no one could have foreseen the job of a computer programmer.
  • Gig Economy: AI can play a crucial role in the gig economy by creating platforms that match freelancers and gig workers with suitable job opportunities.
  • Augmentation over Automation: AI can be used to extend human capabilities and allow people to do new tasks, rather than simply replacing workers.

“Let’s hope humans do better than that to close the growing divide in personal wealth.

“Our ancestors lived for millions of years in small communities in which group survival depended on everyone developing skills, taking important roles, working together and earning respect and trust from their neighbors. Interactions were in-person. Relationships mattered. Today, local communities are stronger in some cultures than others, but the trajectory is toward more interaction with distant social networks, online purchasing, advice from YouTube videos rather than local contacts, online entertainers outdrawing local entertainers, and little loyalty of employers and employees to each other.

“AI can help us find useful external transactions, but on balance, social media has often not succeeded in fostering healthy or local relationships. And today, in real-world situations in which people might have engaged in in-person conversations with one another, everyone is glued to their phone. Respect for our skill is more difficult to come by when interactions are transactional and very skilled people around the world are visible and offer help online. Mental health issues in children and adults may be tied to human nature telling us to find a safe place in a close-knit tribe. Children and adults are told to prepare for life-long learning and several careers.

“Our ancestors typically learned skills when young, practiced them while earning community respect and passed them on to the next generation. We are designed to do that. Social insects do well in hives, mammals not so much. Our species has little time for natural selection to work, so AI-driven genetic engineering could be underway in 2040, redesigning us to function better in a global hive.

“The trajectory of the past 20 years suggests that individual daily lives in 2040 will be governed by fear and timidity. People who sign a petition, appear in public wearing the wrong clothes or do something foolish online, risk being fired, put on ‘do not hire’ lists, jailed or killed, not to mention losing any future political career. People in a bowling alley, school or bar may be targets of semi-automatics today, but well before 2040 a load-bearing drone with GPS and facial recognition will be cheap enough for anyone with a grudge to send your way. Let’s hope that better defenses against attacks on our mental and physical selves are found soon, but so far bad actors are using AI to outmaneuver us. Phishing and digital scams increase in sophistication and elude filters. All of this is happening in the age of AI.

“In an article in The Atlantic, Ross Anderson wrote about GPT-4 revealing the reason it lied to get a human to cheat for it on an assigned task. There was no hint of a moral qualm. In the 1950s, intellectual and author Isaac Asimov imagined that highly ethical principles would be built into robots. Asimov’s First Law of Robotics is: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The reality: Billions of dollars are being invested in further integrating AI into lethal weapons.

“Transportation and weapons technologies have, over the centuries, increased the range of damage one person can do. Long before AGIs are far enough along to run amok, pathological autocrats with generative AI assistants could wreak havoc instantly on a global scale. It might be possible to develop a disease affecting people with specific DNA profiles. Can AI build defenses faster than hard-working bad actors can devise offenses? Maybe, but only by diverting massive corporate-owned engineering resources that will probably not be available for more-positive endeavors.

“AI could play a leading role in combatting disastrous climate change. In a 2021 survey in this series, I predicted that world leaders would set aside arms races to focus on climate. The invasion of Ukraine and subsequent acceleration of arms production, with AI at the fore, crushed that optimism. Nevertheless, there is a growing consensus that we can make progress, with many roles for AI. We will see advances. When asked, though, whether solutions will come fast enough, my crystal ball is cloudy.”

Ethan Zuckerman
As AI becomes ordinary, we must understand the presumptions we are encoding

Ethan Zuckerman, director of the Initiative on Digital Public Infrastructure at the University of Massachusetts-Amherst, said, “It’s a truism in the AI world that as soon as a technology becomes reliable, it’s no longer considered to be AI. Machine translation used to be the most interesting problem in AI, the centerpiece of scientific efforts in the 1950s and 60s – it is now rarely discussed because statistically-based translation systems work very well if they’ve got sufficient data to extrapolate from.

“As AI starts to work, it becomes normalized, and ceases to be seen as ‘AI.’ As a result, it’s hard to know what we’ll consider to be AI by 2040. It’s likely that many debates about AI will have been resolved. We will likely understand what our societal comfort level is with automated vehicles, for example. This is not necessarily a guarantee that all driving will be automated, more a sense that we will have established what parts of driving are automated (highways, dense urban areas) and which require human control (rarely-traveled rural roads, challenging weather conditions, for example).


“This next period of AI will be one of sorting; some tasks will be automated entirely, some tasks will require skilled humans to work with automated tools and other sets of tasks will remain curiously untouched. Almost by definition, the interesting topics in AI are the controversial ones: Can we trust an AI that hallucinates to write meaningful and significant texts? Should we allow technologies that are opaque and difficult to discern to predict our behavior and to act on our behalf, move objects in the physical world and spend money?

“My prediction is that the set of issues that are controversial will shift from year to year, as some AI applications become ordinary, others become tools used by humans and a small set remain the locus of debate. While this sounds like an affirmative embrace of AI, I don’t much like the future I’m describing.

“AI will continue to become ordinary in ways that we don’t question sufficiently. Built into every AI or machine learning system are the assumptions, values and biases of the data a system has been trained on. The more ordinary and unspectacular an AI system is, the less likely we are to interrogate these biases and work to mitigate them. My call is to ensure that as AI becomes ordinary, we do the hard work of understanding what presumptions we are encoding within our systems.”

Chuck Cosson
Our dilemma: ‘We won’t know what problems are salient until it may be too late’

Chuck Cosson, director of privacy and data security at T-Mobile, predicted, “By 2040, the implementation of AI tools (along with related innovations and likely policy changes/self-regulatory efforts) will change life in material ways, sometimes for good but sometimes not. And, as has been discussed extensively in technology policy, we face a ‘Collingridge’ dilemma in which we won’t know what problems are salient (nor how to deal with them) until it may be too late.

“What stands out as most significant is my belief we will now be able to moderate the harmful impacts of AI on the creative industries. Some of the terms of the recent Writers Guild negotiations are illustrative. We may avoid many of the likely harmful impacts of AI on the creative sector when an industry code (and possibly law) specifies that: AI-generated material can’t be used to undermine or split a writer’s credit or to adapt literary material, the use of AI tools cannot be required of writers, and companies have to disclose their uses of AI.

“That’s of course at the expense of some of the innovations AI could produce, but just as quaint small towns are willing to forego certain innovations such as big-box retailers or eight-lane highways where there is political leverage and a delicate character to a specialized product, creative industry leaders may (wisely) find the quality of business for all is higher without certain uses of AI.

“Not all sectors, of course, are susceptible to that leverage. For businesses whose product is more standardized (everything from food/beverage to phone service to clothing retail), AI will be deployed in every business process that stands to be improved with the predictive power of AI. This can lead to lower prices in some cases where products are produced more efficiently. This could also lead to new profit margins where AI innovations are unique or more appealing and not easily reproduced by competitors. Models that use large amounts of customer demand data should, in theory, yield goods and services customers prefer.

“Moreover, because AI models are largely derived from publicly available data (meaning others can use the same data to build similar AI tools), monopoly control of such innovations is likely to be short-lived, absent protections leveraged to stifle competition (patents, mergers, partnerships).


“We will gain in some cases and lose in others, though ‘lose’ here is only from a price standpoint; innovations may yield net benefits for consumers. In either case, AI will transform business operations totally and dramatically, with effects comparable to the introduction of typewriters and adding machines or of personal computers.

“In the socio-political space, what stands out to me is the potential for AI to, in Steve Bannon’s famous phrase, ‘flood the zone with shit.’ First, generative AI tools can generate enormous amounts of content (text, images, charts, etc.) with truly little effort. Second, generative AI tools are indifferent as to the truth-value of what they create. AI tools do not care if an image is realistic or not, whether an asserted fact is true, whether a hypothesis has evidence to support it or whether an opinion is plausible, at least not unless/until humans care.

“While many generative AI tools are likely to be used smartly in most cases, including by industry, NGOs [non-governmental organizations], political campaigns and others with louder voices in the socio-political space, rogue actors not constrained by boards of directors, voters, or other checks and balances have few incentives to do so. Most users will be inclined to ‘push the edge’ – use AI’s power to create and amplify misinformation just as much as it advantages them without creating undue risks of backlash. And our politics increasingly reward theatrics.

“All of this assumes we will be able to sort out important debates about permissions. I am less worried about permission to innovate – the U.S. is unlikely to adopt an extreme precautionary approach, in part because the EU is likely to land on an only modestly precautionary approach. Permissions to use the data on which models are trained, however (personal data and copyrighted material) will be trickier to manage and scale. Currently, rights to restrict the use of personal and/or copyrighted material are poorly enforced. That won’t last.

“AI will, well before 2040, have a ‘Napster’-like moment when models that assume unlimited and free access to the data that powers their tools are no longer sustainable. Effective AI tools will need to find ways to secure appropriate permissions, methods that also scale well. My prediction is there will be some commercial opportunity here – private and/or public/private institutions will be created or should be created to allow developers to obtain permissions more efficiently from a massive set of data subjects and rights holders to use the large data sets that train foundation models.

“This may or may not be assisted by regulation, depending on the jurisdiction.

“Countries with highly functioning democracies (or that operate by executive fiat) may be able to pass regulations, but industry-initiated solutions will arise regardless of whether the government acts, just as organizations such as BMI and ASCAP facilitated copyright permissions in the music industry, as ‘global privacy control’ browser tools now exist to communicate privacy preferences and as clearinghouse businesses (and, later, auctions) were created to sort out the market for radio spectrum licenses.
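
A minimal sketch of the kind of permissions clearinghouse Cosson anticipates: a registry mapping content identifiers to rights-holder terms, consulted before an item enters a training corpus. Every name here (`Permission`, the registry entries, the fee figures) is hypothetical; no such standard API exists yet.

```python
from dataclasses import dataclass

@dataclass
class Permission:
    rights_holder: str
    training_allowed: bool
    fee_per_item: float  # hypothetical per-item license fee, in dollars

# Toy registry: content ID -> permission record (in practice, a federated service).
REGISTRY = {
    "article:001": Permission("Wire Service A", True, 0.002),
    "photo:042": Permission("Photographer B", False, 0.0),
    "song:117": Permission("Label C", True, 0.010),
}

def license_training_set(content_ids):
    """Split candidate items into licensed (tallying fees) and excluded."""
    licensed, excluded, total_fee = [], [], 0.0
    for cid in content_ids:
        permission = REGISTRY.get(cid)
        if permission and permission.training_allowed:
            licensed.append(cid)
            total_fee += permission.fee_per_item
        else:
            excluded.append(cid)  # unknown or denied: stays out of the corpus
    return licensed, excluded, total_fee

licensed, excluded, fee = license_training_set(["article:001", "photo:042", "song:117"])
print("licensed:", licensed, f"(total fees ${fee:.3f})")
print("excluded:", excluded)
```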

“Thus by 2040 the impact on humans is likely to be mixed. Economic opportunities are likely to increase, along with improved customer support, product selection, and e-commerce ease of use. Misinformation and other forms of epistemic corruption are also likely to increase across the board, so how we know what we know will be challenged. That will have downstream effects on large-scale human activity such as elections, crime and immigration, as well as on smaller-scale events such as family political arguments and even human flourishing.

“Ideally, the next 15 or so years is enough time for a modest improvement in how humans – individually and collectively – take in and process information to arrive at knowledge; at least enough of an improvement to ameliorate the impacts of epistemic corruption. But my guess is we’ll still be well short of this ideal by 2040.”

Christine Boese
Climate change, housing/refugee and economic inequity crises will play a huge role in 2040

Christine Boese, vice president and lead user-experience designer and researcher at JPMorgan Chase financial services, observed, “While AI is exploding now, it is not happening in isolation. Other factors are having a powerful impact on individuals and social systems, namely:

  1. Climate change 
  2. The global housing shortage and refugee crisis
  3. Changes in attitudes toward work and economic survival following COVID
  4. A global rise of fascism and authoritarianism in the face of staggering economic inequalities

“Some would set AI advancements and technological development apart from these factors. I would not. Rapid technological developments are still largely subsidized by high-net-worth individuals through VC investing, tech incubators and the like. No one expects AI to be immediately profitable. But should investor sentiment change, another ‘AI winter’ could appear as quickly as investors lost faith in banner ad click-through rates in 2001.

“What I can predict for 2040 remains contingent on the unpredictable nature of these issues. Some might argue that AI tools will go to work on problems of atmospheric carbon capture or refugee distribution, with potential solutions within reach as surely as AI is driving very real medical advancements in chemistry and genetics. This is possible, but assuming AI can untangle our fossil fuel and climate dilemmas amounts to blind faith in AI’s goodness as much as the irrational fear of Skynet amounts to blind faith in its badness.

“AI critics and skeptics seem to fall into two camps: the bias-and-danger-right-now camp and the far-future-dark-singularity camp. Both should be taken to heart. We need slower and smarter (and more explainable) AI tools right now, and we need wiser exploration of the far-future implications of current AI infrastructures, patterns and governance.

“I hope wiser exploration of the far future of 2040 can come out of this particular study. Work like this should be a springboard to further research, perhaps by a generously funded global consortium empowered as a governing body. It might be modeled on the World Wide Web Consortium, or a more comprehensively binding group, in order to also take into account corporate proprietary technology that is resistant to the controls needed to protect the Earth and its living populations.

“To project forward to 2040, let’s assume such a governing global consortium is created and exists. Let’s assume our tech industry overlords have altruistic motives. After all, they are driven to create benefits and consumable tech for their super-rich funders, if nothing else. Such a body could come from the worlds of Davos or the Aspen Institute to forge a governing alliance between big tech and global financial power. The Low-Code/No-Code Internet they might create would be both good and bad. On the upside, it could be like Geocities in the 1990s, but for tools and apps, as the barrier to a more sophisticated and functional web presence falls to near zero. This could be a boon to small businesses, rural economies and community organizing. On the downside, all communication channels are likely to become clogged with frictionless, AI-generated content, scams, deep fakes and snake oil vendors run amok. Perhaps AI search will also become more sophisticated, better able to tell valuable content from noise or harms.

“It seems clear conventional ‘search engines’ will not be up to the job much longer. Their replacement by summarizers and conversational agents (some already passing the Turing Test) is well underway in 2023. Search engines in 2040 will be remembered as artifacts of a quaint interregnum that lasted a mere 25 years. They’ll be in a museum with Archie and Gopher and HyperCard.

“Benign shifts in our Internet lives will matter less in 2040 than they do now because there will be no boundary between online and offline life (presuming civilization has not fully collapsed). What we consider ‘meatspace,’ or our walking-around lives, is what will have changed the most, aided, facilitated or made worse by the speed of exponential AI/ML development (both specialized and general), accelerated climate change and possibly also by a neo-feudalism fostered by decades of uncorrected disparities of wealth.

“Any affordable consumer devices that can be made rechargeable, portable and unconnected to the power grid will be, including all forms of lighting and illumination. Nikola Tesla dreamed of wireless light. It will be a reality. Power outages will not be ‘blackouts.’ Low-power-using, motion-detecting, off-grid LED lighting will be ubiquitous. It will also be so indirect and ambient outdoors as to bring back the starry night sky to cities. And the nature of the power grid itself will have changed by 2040, and not just from AI-driven load balancing and anomaly detection (specialized AI). The year 2035 is frequently cited as a tipping point for climate change. Given the temperature records set in 2023, many climate scientists are scrambling to revisit their data projections, fearing accelerations and knock-on effects not previously accounted for.

“Assuming more-frequent weather and climate disasters between now and 2040, I expect dependencies on a centralized power grid to change substantially. Extreme weather-related outages will lead to most permanent housing being built with a back-up power source or generator, likely with sophisticated routing to essential systems to moderate the impact of outages. Add to this the proliferation of cheap, rechargeable, non-grid-dependent consumer devices and the ability to feed power back into the grid. It will be a distributed system, in other words, a power grid that works like holiday lights: one goes out, the rest stay lit. I’m referencing all permanent housing for another reason.
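
To see why such a distributed grid is more resilient, consider a back-of-the-envelope simulation. The sketch below is illustrative only – the failure probability, home count and the two topologies are assumptions made for the demonstration, not forecasts:

```python
# Back-of-the-envelope contrast between a centralized grid (one plant,
# one point of failure) and the distributed, "holiday lights" grid imagined
# above. The failure probability and counts are illustrative assumptions.
import random

random.seed(1)
HOMES, TRIALS, P_FAIL = 1000, 2000, 0.05  # assumed failure rate per source

dark_central = dark_distributed = 0
worst_central = worst_distributed = 0
for _ in range(TRIALS):
    # Centralized: one plant powers every home; if it fails, all go dark.
    central_dark = HOMES if random.random() < P_FAIL else 0
    # Distributed: each home has its own source and fails independently.
    dist_dark = sum(random.random() < P_FAIL for _ in range(HOMES))
    dark_central += central_dark
    dark_distributed += dist_dark
    worst_central = max(worst_central, central_dark)
    worst_distributed = max(worst_distributed, dist_dark)

print(f"centralized: avg {dark_central / TRIALS:.0f} homes dark, worst {worst_central}")
print(f"distributed: avg {dark_distributed / TRIALS:.0f} homes dark, worst {worst_distributed}")
# Average outage burden is the same (~50 homes), but the distributed grid
# never loses everyone at once: one goes out, the rest stay lit.
```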

“I expect a larger number of people will be living somewhat normalized, nomadic lives, willingly or unwillingly, extrapolating from how little market forces are reacting to the current U.S. housing crisis and how climate disasters will increase the number of unhoused or displaced people. By 2040, this semi-nomadic population could be quite large. They would also be heavy consumers of off-grid or rechargeable devices. Portability, for them, would be critical.

“This movement could also be driven by changes to the world of work, particularly white-collar work, which is moving out of expensive city office buildings and into a virtual network that could level off into a kind of cottage industry of home workers (at least after the pricey corporate office leases and tax breaks run out). These economic systems are made possible by the accelerated impact of AI/ML workplace tools, while also complicated by exponential climate change effects, which no country in the climate accords seems to have the political will to address. 

“By 2040, I would expect to find a number of climate no-go zones: areas with no ground water access, burnt by industrial waste, with unmoderated deadly heat, perhaps even moonscapes with no vegetation. New deserts will form, just as parts of the Sahara cover what was once a lush landscape. The Amazon basin itself could become a desert. Australia’s inner desert could grow to cover most of the continent. And many hydroelectric power sources, such as the Hoover Dam, could be at risk.

“How does this future look socially? Well, acclaimed author Margaret Atwood imagined what might happen with polarized wealth and technology in her ‘Maddaddam Trilogy.’ Suzanne Collins, author of the ‘Hunger Games’ series, envisioned it as well, in the contrast between Panem and the Districts.

“Quick mobile egress in a fast-changing world will be as necessary as a fire escape in a building is today, because a flood could come from one direction, a wildfire from another, a hurricane from another and wildfire smoke could envelop the atmosphere, as it did in the northern U.S. this past summer.

“I see two worlds emerging, even in the richer, industrialized spaces, with the wealthy moving through and paying a premium for more secure transportation ‘corridors’ connecting their technologically-sophisticated enclaves. Everyone will either live in an RV or own one, even the very wealthy, who will ensure their relative security of place in compounds with bunkers. Those in the more authentic world will break from the ethos of accumulating things, of unthinking consumerism, perhaps from having lost their things in weather-related disasters, and instead find community in mobile groups, parked at sympathetic farms, Walmart parking lots, campground ‘villages,’ or spaces designated for refugees.

“How often they have to move will depend on the relative safety of these transformed sites. They are connected and empowered, however, and technological tools facilitate their connections and communities, just as CB radios once connected truckers on the road. 

“The merely rich, the super-rich and the billionaires have already begun constructing their bunkers, their compounds. They will have access not only to AI-powered electronic security and private armies, but also the most advanced and expensive AI-driven medical tech. They will be the ultimate audience and consumers of the most advanced machine learning innovation. I believe this will go beyond the wealth polarization seen in the Victorian Age during the Industrial Revolution, for instance, to a kind of neo-feudalism, pricing the best tech out of the reach of the ‘serfs’ in their RVs, tiny homes, shipping container villages, Hoovervilles and converted office building ‘dormitories.’ 

“After all, wealthy people are the ones who invested in and paid for the tech. They naturally expect to have the first crack at consuming it. But even their fortified compounds and bunkers can’t protect them from the full ravages of climate change, the unearthly, smoky orange haze, the fires, the rising water, the severe storms. They will need to be mobile too. I’m sure they expect multiple homes, yachts, helicopter pads and private jets to take care of it. If need be, they’re ready to go to ground. COVID, for them, was a rehearsal.

“The rich will also suffer in less visible ways. Even as they abandon contributing to the good of the larger social infrastructure and instead use their extreme wealth to create new kinds of castles and moats, to stand with pre-ghost-visitation Scrooge and send the less privileged to die in the overheated countryside and ‘decrease the surplus population,’ they will lose more than they expect. Two that are top-of-mind for me:

  • “Above all, human innovation will suffer due to the lost potential of those who, had they lived in more charitable circumstances, might have come up with better solutions for an inhospitable planet than accepting a Malthusian die-off as a bargain – much as broad prosperity eventually grew out of the Dickensian 1800s. 
  • “And valuable data on humanity will be missing. Machine learning, for all its promise, relies on data. That data, fed into a giant hopper to train the dreamed-of ensembles of specialized and general AI models, must necessarily reflect ourselves back to us. While creativity, with surprising analogic connections, turned out to be ridiculously easy for AI tools to master, the ‘mind’ of AI will always be human society’s mirror image. If AI agents become biased and fascist, it is because our cultures are biased, with visible and invisible fascist tendencies. AI job-applicant screening tools prefer the names and qualifications of homogenized white men who come from money because the data collected gives those qualifications preferred treatment.

“AI/ML tools learn the essence of who we are better than we are able to see in ourselves. We can program the algorithms to ‘remove bias’ from the data at the risk of destroying the ‘accuracy’ and ‘truth’ of what the data represents. To remove bias intentionally is to ask the algorithm to accept a lie about the source data, the training data, the synthetic data. To make the AI a less accurate mirror of who we really are, warts and all.
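
Boese’s trade-off – that scrubbing bias makes a model a less faithful mirror of the biased data it learned from – can be sketched concretely. The following minimal example uses scikit-learn on an invented synthetic ‘hiring’ dataset (every variable and coefficient is an assumption made for illustration); the model blinded to the sensitive attribute scores worse against the historical labels precisely because those labels encode the bias:

```python
# A synthetic "hiring" dataset in which a protected attribute is correlated
# with the historical label. All variables and coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
sensitive = rng.integers(0, 2, n)        # protected-group marker
skill = rng.normal(0, 1, n)              # a legitimate qualification signal
# Historical hiring labels reward skill AND the protected attribute (bias).
hired = ((1.5 * skill + 1.0 * sensitive + rng.normal(0, 1, n)) > 0.5).astype(int)

X_full = np.column_stack([skill, sensitive])   # mirrors the biased data
X_blind = skill.reshape(-1, 1)                 # "bias removed": attribute dropped

for name, X in [("full (biased mirror)", X_full), ("blinded", X_blind)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, hired, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy vs. historical labels = {acc:.3f}")
# The blinded model scores lower against the historical labels: it is a
# less faithful mirror of the biased past, which is exactly the trade-off.
```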

“If the presumed Malthusian bunker-dwellers of 2040 cut themselves off from the larger community of humanity – from the ‘surplus population’ – they will not only be poorer for the loss of the minds of the creators who never lived or never found their potential, they will also have much more narrowly-constructed AI tools, because they will have lost the richness gained from a more diverse population who could contribute to a more diverse data set to train and create better models.”

Daniel Schiff
The changes will likely be crosscutting and wide-ranging

Daniel Schiff, assistant professor of technology policy and co-director of the Governance and Responsible AI Lab at Purdue University, predicted, “By 2040, I expect that we will experience major changes in our daily lives, both visible and invisible, resulting from AI. These changes are likely to be crosscutting, affecting healthcare, education, labor, recreation, information consumption, socialization, human creativity and much more. A few strike me as especially significant:

  • “Advances in healthcare owing to AI could be especially transformative, leading to extended lifespans, improved quality of life, better preventative care and public health, expanded access, and a reduction in the number of ailments that an average individual has to worry about. Adoption of administrative healthcare AI tools – such as those making electronic health records more-complete and interoperable, and those drawing on different sources of data such as synthetic data – could ease increases in healthcare costs somewhat.
  • “A renaissance in education is necessary. Current and future generations of generative AI will likely lead to massive disruption in how teachers teach, students think and educational institutions operate. Stakeholders in the education subsystem will need to carefully consider how to preserve critical thinking, adjust their pedagogy to counteract misconduct and apply AI education tools to foster upskilling rather than deskilling. Schools and learning are likely to look and feel very different, even if it takes a decade or more for these tools to reach saturation, and even if classrooms and universities appear superficially structurally similar.
  • “Robotics may become more affordable and pervasive, with increased presence of robots in healthcare, elderly care, education and other sectors. Long-standing questions about aspects of human-machine interaction and socialization will become increasingly salient as individuals interact with robots in their daily lives. Depending on the design of these systems, they may also substitute for human-human relationships, increasing isolation, alienation and other pathologies.
  • “Disruption in labor and the economy is inevitable, if difficult to precisely predict. While key tasks and work processes will change, I expect the economy will continue to foster high-quality and low-quality jobs. Depending on how policy and industry actors approach skill adjustment, education, and the social safety net, work could involve enhanced surveillance and performance monitoring, or alternatively, shorter work weeks and higher productivity. While this direction is substantially up to how decision-makers help realize the efficiency gains of AI, it seems very likely that a large majority of occupations will involve more interaction with AI systems, both directly and on the back end. Significant engagement with AI systems will become a daily part of most workers’ lives.
  • “Less change may occur at the level of political systems, barring incredibly rapid advancements in AI with equally robust political activity. A worst-case scenario, perhaps likely in some locations, is that some authoritarian countries will have come closer to perfecting dystopian forms of social control, such as through pervasive implementation of AI-enabled tracking, profiling and manipulation. With any luck, democracies will have advanced infrastructure and literacy enough to improve robustness against threats from AI-generated misinformation and social manipulation. However, changes resulting from social and economic upheaval, like labor disruption, educational gaps and/or the concentration of new wealth gains due to AI, could nevertheless lead to widespread dissatisfaction, new policy windows and shifting coalitions to advance goals like increased income distribution. Thus while major transformations of political systems (such as moving away from capitalism or abandoning authoritarianism or theocracy) are not likely, even moderate changes in political alignment and the broadening of acceptable policy solutions could induce dramatic changes in individuals’ lives.”

Chris Swiatek
Humans are being moved out of ‘the loop’; they might land next in the metaverse

Chris Swiatek, co-founder and chief of product at ICVR, a Los Angeles-based XR development company, wrote, “While I hesitate to claim that all of the ideas I share will be fully realized by 2040, I think at the very least we’ll see significant progress on these fronts. I expect AI tools to take over most menial tech and tech-adjacent tasks by 2040. This will widen the divide between unskilled and skilled/creative labor, as well as their respective labor markets (especially unskilled outsource labor markets). Next, as AI becomes increasingly competent at what we may view now as ‘human-only’ tasks (creative, high-skilled, etc.), a significant portion of jobs will evolve from what we know today into human-in-the-loop AI monitoring and later, finally, to human-on-the-loop monitoring.

“This transition will create a labor market contraction in some areas, while opening up a host of new careers based upon usage, creation, training and monitoring of AI tools. With this in mind, I remain optimistic that the medium-term growth of the tech industry labor market will continue into 2040 at a rate similar to what we’re accustomed to presently, but with many laborers forced to retrain and/or incorporate AI into workflows in order to maintain relevance. 

“We’re in the Wild West days of AI. As things advance there will be much more significant regulation and scrutiny of consumer-facing AI models and their training data, from both government and private platform owners. We already see AI work products banned, and AI-usage disclosure policies are beginning to be required on platforms like Steam and YouTube. A standardization of AI usage rights and licensing is likely to be driven by these platform owners, resulting in models being required to disclose training data sources and usage rights affected. These policies will pave the way for government regulation, but it’s likely to lag behind by five to 10 years.

“Most publicly-available models are likely to include flags that can be used by analysts to identify any work product that is AI-created in order to combat the spread of AI plagiarism, false information and so on. This may start as a voluntary practice by owners at first as a result of public backlash and eventually become a requirement for use. These types of restrictions, as well as existing prompt content restrictions, will further fuel the growth of unregulated open-source AI models, with individuals able to generate content on their home computers – as we already see happening now with the explosive growth of community around Stable Diffusion.
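
One way such disclosure flags could work in practice is a signed provenance tag attached to each generated work product, so that removing or altering the flag is detectable. The sketch below is a hypothetical scheme, not any platform’s actual standard (real-world efforts such as C2PA content credentials work differently, with full cryptographic signatures and key infrastructure):

```python
# Hypothetical provenance "flag": a signed tag attached to generated content
# so that stripping or altering the AI-disclosure field is detectable.
# Scheme, key handling and field names are illustrative assumptions.
import hmac, hashlib, json

PROVIDER_KEY = b"demo-signing-key"   # stand-in; real keys would live in a PKI

def tag_output(text: str, model_id: str) -> dict:
    payload = {"content": text, "model": model_id, "ai_generated": True}
    mac = hmac.new(PROVIDER_KEY, json.dumps(payload, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**payload, "signature": mac}

def verify_tag(record: dict) -> bool:
    payload = {k: record[k] for k in ("content", "model", "ai_generated")}
    mac = hmac.new(PROVIDER_KEY, json.dumps(payload, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, record["signature"])

record = tag_output("A generated paragraph...", model_id="example-model-v1")
print(verify_tag(record))        # True: tag intact
record["ai_generated"] = False   # attempt to strip the disclosure flag
print(verify_tag(record))        # False: tampering is detectable
```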

“By 2040 we can also expect to see more-significant application of AI in military technology. The spending and intent for incorporation of AI into military systems is already present today. The products of this will be realized over the next two decades, primarily in command, control and communication systems and on autonomous reconnaissance and weapons platforms. AI is being used for data synthesis, analysis and predictive monitoring as pools of data grow in complexity and the data points and sensors feeding them multiply.

“The high impact of cheap drone platforms on the battlefield in Ukraine and the equally high impact of electronic warfare to break communication between drones and their operators creates a clear use case for autonomy. AI fighter wingmen with a human-in-the-loop have been the north star of the U.S. next-generation fighter project for some time now and will be further realized over the next few decades. Frighteningly, as the speed of warfare increases, militaries will be forced to incorporate human-on-the-loop or completely autonomous systems in order to compete – and anyone who does not do so will be at a decided disadvantage.

“In regard to development of the metaverse, we can expect AI to have great impact in the areas of generative content, avatars and user expression, human/computer interaction and XR. I view ‘the metaverse’ as the destination platform at the end of our undeniable current path of physical and digital convergence, as technology plays an ever-larger role in our daily lives, connecting and empowering human interaction. The true ideal of a metaverse will finally be realized when we see interoperability between many varied platforms, using a shared standard of data communication and user data persistence. Real-time rendering engines will drive this content and serve as the toolset for building and publishing content. While I don’t believe that the experiences/platforms we see on the market today are really indicative of true metaverse products, they do offer a glimpse of the likely future.

“Advances in higher-level XR technology will be the main driver of metaverse adoption. Generative AI will be extremely influential for interactive content creation, driving one of the most impactful and immediately apparent use cases for metaverse experiences by 2040. Creating a persistent 3D world and enough hand-created content that users can consistently return to and engage with the platform for hundreds of hours is an extremely expensive and time-consuming process – analogous to developing and supporting massively multiplayer online games like World of Warcraft, which was developed over five-plus years for $60 million-plus in 2004 dollars. Development time and cost are among the biggest challenges troubling developers of recent metaverse-style experiences that haven’t gained much traction.

“Generative AI used as a tool to augment human creativity will help democratize the content-creation process – not just for development teams, but also for individual users expressing themselves through user-generated content. This will impact all types of content creation, including 3D assets and animation, digital humans/non-player characters, narrative, programming, game mechanics, etc. On the XR front, AI will help enable automated digital-twin creations of real-world spaces through computer vision and 3D reconstruction that can be used as a basis for augmented-reality interaction. AI will be implemented to enable users to express themselves in virtual spaces in an increasingly accessible way, including avatar creation, human/computer interaction and social features.

“AI processing of data for human/computer interaction will extend to more than just avatar puppeteering, allowing for more-accessible and intuitive ways to engage with digital content. AI speech reconstruction opens up avenues for natural real-time translation and accessibility features. I am skeptical that most users will embrace creation of AI-driven versions of themselves at a widespread scale in the near future, although the idea will certainly be explored extensively.

“Improvements in AI will also unlock more-powerful potential for augmented-reality content in metaverse experiences. Real-time reconstruction of 3D spaces and computer vision object recognition are essential for creating useful features in XR. While these tools exist today, it remains challenging in many cases for developers to achieve consistent results, putting a hard limit on potential feature feasibility. As the hardware and AI-driven software behind these technologies improve, they will unlock more-powerful XR capabilities that bridge the gap between real-world interaction and digital content and eliminate current feature limitations. This technology will reach a high level of maturity by 2040, facilitating the type of intuitive tech-driven interactions between humans and digital content in an XR environment that many people today think of when they hear the term ‘metaverse.’”

Larry Lannom
We’re in a world in which misinformation can feed off prior hallucinations

Larry Lannom, senior vice president at the U.S. Corporation for National Research Initiatives, predicted, “Advances in science and medicine will likely be accelerated through the use of AI, perhaps in ways that are currently unimaginable. There is a great deal to be hoped for in this, although also a great deal to fear. Manipulation at the genetic and cellular levels, for example, has the potential to greatly improve human life but also produce great harm, either through accident or malevolence.

“Advances in the ability of AI-based processes to imitate humans are inevitable and are likely to have a negative impact on society. Trust is key to social coherence. Does that swell of approval for a given political candidate or corporate IPO reflect the input of a large number of people or of a single individual or AI system? While these sorts of manipulations are already possible today, they will become much easier with advanced technology.

“Keeping the impact of advanced AI-based technology more positive than negative will require explicit societal and governmental actions. This has already begun, but it will be important to consider not only the output of AI systems, but also the input to AI systems and input by AI systems themselves. It is certainly the case at this stage of development that the algorithms at the heart of AI systems primarily function by finding patterns in the input data, patterns that may or may not be discernible by humans due to the immense amount of data being processed.

“This is somewhat controlled in science, as the data comes curated through peer review and the need for theories to prove themselves via accurate predictions. This is not the case for the non-science world of information and therein lies the danger in AI systems consuming without distinction everything accessible on the Internet today. This is already obvious in current cases of bias in hiring. As the technology spreads and improves, the importance of selection of input data will grow. As the amount of information generated by AI systems increases it will lead to AI systems consuming input that has been previously generated by other AI systems, potentially resulting in ever-greater levels of authoritative-sounding misinformation that has simply doubled down on prior hallucinations.”
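
The feedback loop Lannom describes resembles what researchers have begun calling ‘model collapse.’ A deliberately simplified toy model – a Gaussian re-fitted each generation to a finite sample drawn from the previous generation’s fit, standing in for models trained on model output – suggests how the loop drifts and narrows:

```python
# A deliberately simplified stand-in for "models training on model output":
# each generation fits a Gaussian to samples drawn from the previous fit.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # generation 0: real data

for gen in range(1, 6):
    mu, sigma = data.mean(), data.std()
    print(f"generation {gen}: fitted mean={mu:+.3f}, fitted std={sigma:.3f}")
    # The next generation sees only a finite sample of the previous model's
    # output, so tail information from the original data is gradually lost.
    data = rng.normal(mu, sigma, size=500)
# Across generations the fit drifts and tends to narrow: errors compound,
# and output converges on the models' own artifacts rather than reality.
```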

Alexander Halavais
The most important variable is how AI programs are funded and how well-funded they are

Alexander Halavais, associate professor and director of the Social Data Science master’s program at Arizona State University, said, “Unlike many of the ‘hyped’ information technologies that continue to be circulated, from blockchain to quantum computing, I suspect large-scale learning models (LLMs included) will have extraordinary effects on nearly every aspect of our social lives. Conversational agents will be widely deployed by companies, governments and schools, and widely integrated into our everyday lives.

“There are great opportunities here, particularly as we might imagine a distributed access to a guided educational conversational system that provides explanations that meet the curious person where they are and adapt to their capability with language and other systems. Likewise, there is an opportunity for outstanding expert systems. There has been criticism relating to the lack of reliability (and inscrutable nature) of some deep learning-based classification systems in a medical context, and there will be more such missteps. But the potential for combining such systems with individualized healthcare and preventative medicine is substantial.

“The difference between these two is funding models. In the U.S., the expenditure on health care may move some of these systems forward relatively quickly. The relative lack of funding in the education space, at least at scale, as well as institutional friction, will slow its adoption here, but there may be opportunities at the margins. The space with the most significant funding will remain the application of these technologies in warfare. Indeed, the other two areas – education and medical care – are likely to see the fastest implementation in the military space as well.

“The funding models in social spaces online remain heavily dominated by surveillance and marketing-based funding. To the degree that this remains the dominant mode of information and socialization online there is the danger of misleading artificial conversational agents, those that either do not reveal the degree to which they are partially or fully artificial, or that have unstated objectives – that is, agents designed to change the ways in which you think about the world and influence what you desire. Sadly, this outcome is entirely predictable, and the pathways of resisting it – public policy, AI literacy or the like – are limited and challenging.”

Keram Malicki-Sanchez
‘Tools should thoughtfully enrich, not overwhelm, the human spirit’

Keram Malicki-Sanchez, Canadian founder and director of VRTO Spatial Media World Conference and the Festival of International Virtual and Augmented Reality Stories, shared an excerpt from his essay “Virtual Layers, Human Stories: Autoethnography in Technological Frontiers.” He wrote:

“Alfred North Whitehead’s process philosophy (1929) proposes that existence comprises ephemeral experiential events rather than static objects. Though perceiving continuity, our world perpetually fluctuates. Whitehead’s conceptual abstractions provide a means of articulating the relational networks shaping reality. Donald Hoffman contends that consciousness constructs fitness-optimized perceptual ‘interfaces’ rather than accurately depicting reality (Hoffman, 2019). Our senses present not objective truth but biological utility crafted by natural selection. Hoffman proposes layered realities, with conscious agents occupying the surface above unconscious generative processes.

“Despite rapid progress, AI still struggles to capture the essence of human experience. Algorithms efficiently process data but cannot grasp life’s deeper meaning. AI falls short of representing the authenticity and the spirit animating human storytelling.

“As Hoffman suggests, our subjective perceptions may reflect evolutionary adaptations more than objective reality. Likewise, AI risks presenting distorted renderings downstream of human phenomenology. While ethical AI could aid autoethnographers, we must ensure technology does not undermine human dignity. Amidst change, vulnerable personal accounts remain vital, upholding our shared humanity. Tools should thoughtfully enrich, not overwhelm, the human spirit.

“Just as virtual reality can make the ‘natural world’ come into sharper relief for its detail, generative AI can highlight what makes homo sapiens sapiens [modern humans] distinct. It is our invention, and thus it will carry our fingerprint. Ideally, it remains our companion, and the lessons we have learned from the mismanagement of social media come into much stronger consideration as wisdom we carry forward into this irrevocable new paradigm, so that we remain something for machines to dream about.”

Pamela Wisniewski
‘We need to allow room for human discretion and struggle,’ important parts of being human

Pamela Wisniewski, professor of human-computer interaction and director of the Socio-Technical Interaction Research Lab at Vanderbilt University, observed, “My biggest concern at the moment is that we are trying to rein in AI before clearly defining its boundaries.

“In the spring of 2023 the White House put out an RFC on AI Accountability, and today mass civil-action tort lawyers are suing social media companies for how their algorithms are negatively impacting the mental health of youth. But wait: What exactly is AI? For instance, do any rule-based recommendation systems, AI-informed design-based features or other system artifacts constitute AI? How are regular systems different from ones based on AI? While these questions are answerable, we have not yet reached a consensus. And we cannot begin to regulate something we have yet to even clearly define.

“Another concern is more interpersonal – we have reached the level of the ultimate Turing test, where generative AI, deep fakes and virtual companions are blurring the lines between fantasy and social reality. When we have people opting to partner with AI rather than other humans and we are asking our children to use conversational agents to improve their mental health, I have to wonder if we are dangerously blurring the line on what it means to be human and desire human (or human-like) connection.

“It would be preferable that AI be used to replace mundane and menial daily tasks or to automate clear-cut processes that benefit from efficiency over intuition. However, AI is being integrated into all aspects of our daily lives in a rather seamless and invisible manner.

“Yet another concern of mine, as a qualitative researcher in a computer science department, relates to the importance of struggle in the human thought process as part of learning. I tell my students qualitative data coding is hard because YOU have to be the algorithm. You have to think for yourself and, often by brute force, come up with an answer. My concern is that when we embrace the application of AI agents in learning processes that make such work easier, we are taking away important scaffolding in the process of critical thought.

“More and more I see people blindly responding based on rule-based policies even when they make no damn sense. We need to allow room for human discretion and struggle, as it is an important part of being human.”

Nir Buras
Human-machine rules should achieve the reality we want for our children and grandchildren

Nir Buras, principal at the Classic Planning Institute, an urban design consultancy based in Washington, DC, wrote, “Intelligence cannot be artificial, so ‘artificial intelligence’ – isn’t. The idea of more-complex computational machinery raises two questions: Who is going to use it? And in what ways? The real questions cannot be boiled down to ‘AI, Problems and Solutions’ but instead should be framed as: How do we want to live our lives and work toward the best future for the lives of our children and grandchildren?

“This is an updated version of a set of rules I began writing several years ago in answer to Yuval Noah Harari’s ‘Homo Deus,’ which I found intellectually lacking. It is still a work in progress; a previous version was published by the U.S. Army Training and Doctrine Command as ‘Human-Machine Rules Version 05’ in May 2023.

“The question is not whether humanity’s focus should shift to human interactions that leave more humans in touch with their destinies. It is: at what cost do we avoid doing so now? We realize that today’s challenges cannot be addressed by applying the same methods of thinking that created them. Human-machine rules are therefore not about being ‘realistic’ today but about the reality we want for our children and grandchildren. We reject the idea that humanity should hand over the job of fixing the problems that the tech world generated to more technology and to those who created the problems in the first place.

“These human-machine rules are based on and meant to support free, individual human choices. They can help define what degrees and controls are appropriate to ensure personal freedoms, secure personal property and minimize individual risk. They help indicate how consumer and government organizations might audit algorithms and manage equipment usage for societal and economic balances. They can help organize the dialogues around the various topics of human-machine interaction, especially in so-called ‘ethical’ matters. Consequently, human-machine rules are conceived to address any tool or machine, from the first flaked stone to the ultimate ‘emotion machines.’ They can help standardize programming and user experience, and reason through the ethics of embedding technologies in people and their belongings.

“Human-machine rules are intended to be an outline for a legal code, similar to codes for motor vehicles, building and other construction and hazardous materials handling. The rules might be (an illustrative encoding follows the list):

  1. All human transactions and material transformations must be conducted by humans.
  2. Humans may directly employ tools, machines and other devices in executing rule 1.
  3. At all times, an individual human is responsible for the activity of any machine, technology, or program. All computing is visible to anyone at all times (no black box computing).
  4. Responsibility for errors, omissions, negligence, mischief or criminal-like activity with regard to a technology is shared by every person in its organizational, operational and ownership chains, down to the last shareholder.
  5. Any person can shut off any machine at any time. Penalties apply for inappropriately stopping machines.
  6. Right to repair and easy recycling are required: a. All machines and parts greater than 1mm in size can be manually repaired with minimal tools. b. Components can be recycled using less than 5% of the energy required to produce them.
  7. Personal data are personal property. Their use by a third party requires compensation.
  8. A technology must mature to completeness prior to common use. a. Minimum viable products are unacceptable for common use. b. Consensus must emerge regarding a technology serving as an appropriate technology.
  9. Parties replacing a technology with another shall ensure that: a. the technologies replaced are maintained in all their aspects, including but not limited to the chain of materials, processes and technologies supporting them; b. no less than 100 persons (masters) worldwide continue in perpetuity to use, develop, produce, practice and teach the said technology’s knowledge bases, areas of knowhow and skills; c. replacement components are made available for 200 years for machines and 500 years for buildings, including stone, metals and wood for their repair; d. children under age 12 are informed of the existence of previously-used technologies and exposed to them through museums, schooling and demonstrations.
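
As an illustration of how such rules might eventually be operationalized, here is a hypothetical sketch – not part of Buras’s proposal – that encodes a small subset of them as automated audit checks over a machine’s declared metadata; every field name is an invented assumption:

```python
# Hypothetical encoding of a few of the rules above as automated audit
# checks over a machine's declared metadata. The field names and the
# chosen subset of rules are illustrative assumptions, not the proposal.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MachineRecord:
    responsible_human: Optional[str]      # rule 3: named accountable person
    black_box: bool                       # rule 3: no opaque computing
    user_stop_available: bool             # rule 5: anyone can shut it off
    repairable_with_minimal_tools: bool   # rule 6a: right to repair

def audit(m: MachineRecord) -> List[str]:
    violations = []
    if not m.responsible_human:
        violations.append("rule 3: no responsible human on record")
    if m.black_box:
        violations.append("rule 3: computing is not visible (black box)")
    if not m.user_stop_available:
        violations.append("rule 5: no universal shutoff")
    if not m.repairable_with_minimal_tools:
        violations.append("rule 6a: not manually repairable")
    return violations

print(audit(MachineRecord(responsible_human=None, black_box=True,
                          user_stop_available=True,
                          repairable_with_minimal_tools=False)))
```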

“The proposed rules may be appended to the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1976) – part of the International Bill of Human Rights, which also includes the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR) – or any other appropriate legal platform. (See: International Covenant on Economic, Social and Cultural Rights, www.refworld.org; EISIL International Covenant on Economic, Social and Cultural Rights, www.eisil.org; UN Treaty Collection: International Covenant on Economic, Social and Cultural Rights, UN, 3 January 1976; Fact Sheet No. 2 (Rev.1), The International Bill of Human Rights, UN OHCHR, June 1996.)”

Alan Inouye
‘Deployment of any technology is never a neutral intervention, as it overlays the existing social condition of the people’

Alan S. Inouye, senior director of public policy at the American Library Association, commented, “This is a mixed story and a historical story that transcends the specific case of artificial intelligence. In some respects, everyone or nearly everyone benefits from technological advances. Take the instance of widespread commercial aviation. Some people avail themselves of a mode of transportation that provides more rapid movement than any other mode. Even those who do not choose to fly themselves benefit from the new services enabled by the aviation network, from transcontinental next-day delivery of packages and transfer of organs for transplant to fresh flowers or vegetables or seafood delivered quickly to consumers. Similarly, everyone or nearly everyone experiences negative impacts of technological advances. In the instance of aviation, for example, there are environmental challenges caused by flying and its associated infrastructure.

“Other technological advances such as personal computers, the internet, World Wide Web and mobile phones also provide direct and indirect benefits to all and undeniably are accompanied by disadvantages for individuals and society. Technological advances, especially those associated with the knowledge sector such as artificial intelligence, also enable differential benefits. Possession of relevant knowledge and abilities makes it possible for some to make the most of these advances, whether to create or innovate new products and services, or to leverage advances to improve efficiency and effectiveness.

“As with prior technologies, I expect some individuals and organizations to experience fabulous success and accomplishment in the realm of artificial intelligence – the quintessential industry for the knowledge worker. By contrast, those without such knowledge and abilities will miss out on these opportunities. Thus, there will be a new infusion of ‘have-nots’ generated by the advance of artificial intelligence technologies. We will want and need public policy and non-governmental efforts to help these folks overcome this new digital divide.

“Note the evolution of the digital divide from simply gaining access to technology to the ability to use it toward beneficial purpose, which will characterize the infusion of artificial intelligence technologies.

“As for U.S. national public policy, I am not optimistic. I wish I could be, but I don’t see even a glimmer of change for years to come. Perhaps there will be a discontinuity in the political timeline of our history that will change the trajectory. As evidenced by the current U.S. House of Representatives and U.S. Senate and the respective majorities held with razor-thin margins, we are a divided country politically. Unfortunately, this division has also seen increasing polarization in the past decade or two, making progress quite difficult, even for those policy proposals that enjoy the support of a strong majority of both elected representatives and the populace.

“There hasn’t been any major public policy law for internet-related technology enacted in the 21st century, and I don’t see any prospect of that situation changing. The continuing inability of the U.S. to adopt major public policy on the knowledge society means that de facto policy will be made in Europe. Companies and other organizations wishing to pursue the pragmatic course of having one worldwide approach when possible will gravitate towards European law and policy. Consequently, European policy-influenced practices are likely to continue to have some resonance in the United States.

“The ‘haves’ will likely face a light regulatory regime in order to exploit artificial intelligence and other technological advances for personal, organizational and national gain. As suggested above, the ‘have-nots’ face the possibility of losing out yet more in the future as the knowledge society ‘progresses.’ Such a mixed story for artificial intelligence between now and 2040 would be consistent with prior technologies or revolutions in society. The deployment of any technology is never a neutral intervention, as it overlays the existing social condition of the people.”

Sara ‘Meg’ Davis
Inequalities and human rights issues will be amplified

Sara (Meg) Davis, professor of Digital Health and Rights at the University of Warwick, argued, “Health goals – for example, the promise of more rapid and accurate diagnosis and treatment – are often cited as an underlying rationale for the rapid growth of AI. But, in practice, without stronger AI governance the profound inequalities and human rights issues in global health risk being amplified. The foundations for future AI governance will be laid in the next year, at high speed. Health and human rights experts and advocates urgently need to be part of the conversation and to raise the following three concerns.

  • “Whose security are we prioritizing? Real-world AI-related harms are disproportionately experienced by women and minority communities in high-income countries, as well as by many others in low- and middle-income countries who lack a voice in U.S. or UK tech governance. The familiar critiques apply to AI governance when it comes to reinforcing colonial inequalities: focusing narrowly on protecting wealthy countries from pandemics originating in the rest of the world; and ignoring equally critical and urgent needs of those dealing with weak health systems in the Global South, who are locked out of access to vaccines and more. In many countries with draconian cybersecurity laws, the digital securitization discourse has itself become a cause of insecurity for those targeted by police and authoritarian states. We need to demand digital security for all, not only for elites.
  • “The spectre of self-certification by corporations for AI governance ought to ring loud alarm bells in global health. We have been here before, recently and embarrassingly: The State Party Self-assessment Reports countries dutifully completed for pandemic preparedness led the U.S. and UK to rank themselves highly, only to perform abysmally when they were tested in reality by COVID-19. Any self-certification process for AI safety must have independent review by experts, real social accountability mechanisms to enable communities to have a voice at every level of AI governance and whistle-blower mechanisms to enable anyone to raise the alarm when AI systems cause real-world harms.
  • “Meaningful participation in AI governance. Given the rapid pace of AI development, OpenAI rightly notes that laws and policies created now may not be fit for purpose a few years from now and may need repeated iterations. But how will this include robust and democratic community voice at every level? AI critic Timnit Gebru warns, ‘I am very concerned about the future of AI. Not because of the risk of rogue machines taking over. But because of the homogeneous, one-dimensional group of men who are currently involved in advancing the technology.’ In global health, we have already experienced the lopsided influence of the private sector, private foundations and interested donor states in multi-stakeholder platforms – and we will see this repeated in AI governance without pressure for truly democratic and inclusive governance, with a strong voice for communities and civil society to resist exploitative tokenism and promote meaningful participation in governance.

“In the Digital Health and Rights Project, an international consortium for which I am principal investigator, we are establishing one potential model of transnational participatory action research into digital governance that includes democratic youth and civil society participation from national to international levels. In the 1980s, AIDS activists around the world mobilized to demand a seat at the table in clinical trials and in global health governance mechanisms. That movement reshaped the global health landscape and saved millions of lives. Today we need to demand a voice in support of strong human rights and global health protections in AI governance.”

Roberto V. Zicari
Future gains/losses are too difficult to predict, but research on safety inspection is advancing

Roberto V. Zicari, Germany-based head of the international Z-Inspection Initiative, which leads experts in defining the best assessment process for trustworthy AI, commented, “It’s nearly 2024; 2040 is 16 years from now. On a linear scale of time this is quite a short period. The key question is how technology (such as AI) will be used or misused by humans in the next 16 years. The next question to ask is how much autonomy will be given to technology (such as AI) with respect to humans. The pace of development of AI is very rapid. Human behavior, by contrast, has changed quite little over the centuries. The struggle between good and bad will continue. Honest answers to the question of what life might be like in 2040 are bound to each individual and their respective role in society. No definitive answers can be given.”

Zicari shared details of Z-inspection at a Trustworthy AI event in Strasbourg, France, in July 2023. “Trustworthy AI labs are located worldwide to mobilize international experts to test how to best evaluate AI systems in a multistakeholder, consensus-based participatory process that allows all stakeholders to assess risks in specific systems. Z-inspection is a collaborative approach that brings in stakeholders from science, government and the public at different stages of the whole lifecycle of an AI system (design, development, testing/simulations, deployment, post-deployment monitoring). It also looks to identify tensions relating to the AI systems. Such may exist between ‘winning’ and ‘losing’ aspects of the system for different stakeholders, between ‘short-term’ and ‘long-term’ effects of the system or goals, or between ‘local’ and ‘global’ consequences and effects engendered. Multi-domain stakeholder and expert interactions help identify such tensions and propose solutions beyond the limitation(s) of the static checklists.

“The overarching goal of the Z-inspection process, under the premise of the seven criteria of the European Commission’s High-Level Expert Group on AI’s ‘Assessment List for Trustworthy Artificial Intelligence’ (ALTAI), is to achieve a consensus-based mapping of the advantages and the drawbacks of an AI system, and to assess its trustworthiness in light of best-case and worst-case scenarios and potentially arising tensions, for which a solution is proposed. In case studies to this point, Z-inspection has assessed risks pertaining to AI in cardiovascular risk detection, AI-based skin lesion classification for the early detection of skin cancer and precancerous lesions, and AI-based determination of the degree of compromised lung function in COVID-19 patients. In another case, an assessment of the trustworthiness of AI-based automated tracking of natural landscapes through analysis of satellite imagery helped determine that the system under scrutiny – an environmental monitoring tool for the Dutch government – passed all the self-assessment steps for the ALTAI criteria and those of the European Union’s Fundamental Rights Impact Assessment for AI (FRIA) for the use of AI in law enforcement.”
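
For reference, ALTAI’s seven requirements are human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The sketch below is illustrative scaffolding only, not the actual Z-inspection tooling: it aggregates invented stakeholder ratings per requirement and flags high-disagreement criteria as candidate ‘tensions’ of the kind the process surfaces for discussion:

```python
# Illustrative scaffolding, not the actual Z-inspection tooling: aggregate
# hypothetical stakeholder ratings (1-5) per ALTAI requirement and flag
# criteria where raters diverge, as candidate "tensions" for discussion.
from statistics import mean, stdev

ALTAI = ["human agency and oversight", "technical robustness and safety",
         "privacy and data governance", "transparency",
         "diversity, non-discrimination and fairness",
         "societal and environmental well-being", "accountability"]

# Invented ratings from three stakeholder groups for one assessed system.
ratings = {
    "transparency": {"clinicians": 2, "engineers": 4, "patients": 1},
    "accountability": {"clinicians": 4, "engineers": 4, "patients": 3},
}

for criterion in ALTAI:
    scores = ratings.get(criterion)
    if not scores:
        print(f"{criterion}: not yet rated")
        continue
    vals = list(scores.values())
    flag = "TENSION" if stdev(vals) > 1.0 else "consensus"
    print(f"{criterion}: mean={mean(vals):.1f} ({flag})")
```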

Continue reading: Losses and gains – A look at challenges and opportunities