Essays Part II – Concerns over the economic and political forces shaping AI, its societal impact and potential remedies
The following essays focus mostly on the likely overall societal impact of AI by 2035. Many note that the flaws in today’s sociotechnical systems are shaped and driven by economic and political forces and human behavior. Some of these authors suggest potential improvements in regulation, education, governance and more. Most of these writers concentrated their responses on societal influences over AI’s innovation and diffusion, with some also touching on likely individual change as humans adapt to AI. Many expressed hope that the negative dynamics currently shaping AI systems – extractive capitalism and autocratic nation-states’ surveillance and control – will be mitigated by a turn toward truly human-centered technology design and operation. A few touched on potential societal change that may emerge if and when artificial general intelligence and superintelligence arrive.
The first section of Part II features the following essays:
Larry Lannom: By 2035 we will likely experience positive scientific advances plus disruptions of social trust/cohesion and employment and increased danger of AI-assisted warfare.
Jerome C. Glenn: AI could lead to a conscious-technology age or the emergence of artificial superintelligence beyond humans’ control, understanding and awareness.
Marjory S. Blumenthal: The AI hype, hysteria and punditry are misleading; developments promised are unlikely to be realized by 2035, but human augmentation will bring promising benefits.
Vint Cerf: By 2035, imperfect AI systems will be routinely used by people and AI, creating potential for considerable turmoil and serious problems with unwarranted trust.
Stephen Downes: ‘Things’ will be smarter than we are. By 2035 AI will democratize more elements of society and also require humans to accept that they are no longer Earth’s prime intelligence.
Marina Cortês: AI has led to the most powerful business model ever conceived, one that is consuming a massive share of the planet’s financial, energy and organizational resources.
Raymond Perrault: Once real AGI is broadly achieved, assuming it can be embodied in an economically viable solution, then all bets are off as to what the consequences will be.

Larry Lannom
By 2035 We Will Likely Experience Many Positive Scientific Advances Plus Disruptions of Social Trust/Cohesion and Employment and the Increased Dangers of AI-Assisted Warfare
Larry Lannom, vice president at the Corporation for National Research Initiatives, based in the U.S., wrote, “AI will not change ‘core human traits and behaviors’ in any fundamental sense any more than did the industrial revolution or any other dramatic shift in the environment in which humans live. However, it is likely to be extremely disruptive within the 10-year timespan in question and in that sense will definitely affect all of us. These disruptions could take any of a number of forms, singly or in combination. They include:
- “Economic disruption, as AI begins to replace human workers in areas such as customer service, computer program development and basic legal research and drafting – all of which is already happening. It is also possible that by 2035 the more difficult problems of AI-managed physical activities, e.g., elder care, factory maintenance, farm work and other open-ended activities currently beyond the capabilities of industrial robots will be solved. This will all cause serious economic disruption and force governments to address basic needs of an increasingly unemployed population. The predictions of mass unemployment due to automation have generally proved too pessimistic in the past, but that doesn’t preclude a long and difficult period of adjustment, leading to considerable social unrest.
- “Disruption of social trust and cohesion, as AI bots, especially those posing as humans, flood the global communication space making it ever more difficult to distinguish fact from fiction. There are regulatory solutions to this problem, e.g., make all AI bots identify as such, declare social media companies to be legally responsible for their algorithms and require other forms of transparency, but these would require political will and international cooperation, both of which seem unlikely in the current race for AI superiority.
- “Increased danger of AI-assisted warfare, including cyber warfare, unconstrained ‘killer bots’ and new viruses or other disease agents developed specifically to harm enemy populations. Unlike the fairly predictable economic and social disruptions, the ability of rogue states or non-state actors to engage with AI in this area is difficult to anticipate, but it holds the potential of becoming a uniquely dangerous outcome. While the construction of nuclear weapons is difficult to hide, the development of AI weapons will be largely invisible.
- “Scientific and technological advances brought on by the use of AI to solve problems and find patterns that unaided humans have not solved or suspected are also a type of disruption, but one that is positive instead of negative. New forms of energy generation, disease prevention, efficient and clean transportation systems and new materials that replace those that come from difficult and dirty extractive mining practices are just some of the potential advantages of the application of a tireless superintelligence. The even-handed application of these advances, of course, will be another kind of challenge and the failure to do so another potential disruptive harm, but acquiring new knowledge is better than not doing so. This is the exciting part of AI – unimagined solutions to problems that seem unsolvable or perhaps not even yet recognized as problems.
“Predicting with any accuracy which of these disruptions will cause significant change over the next 10 years is impossible. However, it is important to consider the potential for a combination of these somewhat foreseeable types of disruption to take place. Experiencing a cascade of unintended consequences of even the most benign potential heightens the difficulty of imagining the resulting opportunities and challenges that may lie ahead.”

Jerome C. Glenn
AI Could Lead to a Conscious-Technology Age or the Emergence of Artificial Super Intelligence Beyond Humans’ Control, Understanding and Awareness
Jerome C. Glenn, futurist and executive director and CEO of the Millennium Project, wrote, “If national licensing systems and global governing systems for the transition to artificial general intelligence (AGI) are effective before AGI is released on the Internet, then we will begin the self-actualization economy as we move toward the Conscious-Technology Age. If, instead, many forms of AGI are released on the Internet from the U.S., China, Japan, Russia, the UK, Canada, etc., by large corporations and small startups, their interactions will give rise to the emergence of many forms of artificial superintelligence (ASI) beyond human control, understanding and awareness.
“I’d like to share with you a set of insights published in the Millennium Project’s State of the Future 20.0 report, which I co-authored:
“‘Governing artificial general intelligence could be the most complex, difficult management problem humanity has ever faced. AI expert Stuart Russell has urged that, “Failure to solve it before proceeding to create AGI systems would be a fatal mistake for human civilization. No entity has the right to make that mistake.”
“‘So far, there is nothing stopping humanity from making that mistake. Since AGI could arrive within this decade, we should begin creating national and supranational governance systems now to manage that transition from current forms of AI to future forms of AGI, so that how it evolves is to humanity’s benefit. If we do it right, the future of civilization could be quite wonderful for all.
“‘There are, roughly speaking, three kinds of AI: narrow, general, and super. Artificial narrow intelligence ranges from tools with limited purposes like diagnosing cancer or driving a car to the rapidly advancing generative AI that answers many questions, generates code, and summarizes reports. Artificial general intelligence may not exist in its full state yet, but many AGI experts believe it could within a few years. It would be a general-purpose AI that can learn, edit its code and act autonomously to address many novel problems with novel solutions like or beyond human abilities.
“‘For example, given an objective, it could query data sources, call humans on the phone and re-write its own code to create capabilities to achieve the objective that it did not have before. When and if it is achieved, the next step in machine intelligence – artificial superintelligence – will set its own goals and act independently from human control, and in ways that are beyond human understanding. Thousands of unregulated AGIs, interacting together, could give birth to artificial superintelligence that poses an existential threat to humanity.
“‘It’s important to recognize the impact of the ongoing race for AGI and advanced quantum computing among the U.S., China, European Union, Japan, Russia and several corporations. This rush could mean that humans cut corners on safety and don’t develop the initial conditions and governance systems properly for AGI; hence, artificial superintelligence could emerge from thousands of unregulated AGIs beyond our understanding, control and not to our advantage. Many AGIs could communicate, compete, and form alliances that are far more sophisticated than humans can understand, making a new kind of geopolitical landscape.
“‘The energy requirements to power this transition are enormous, unless better strategies than large language models (LLMs) and large multimodal models (LMMs) are found. Nevertheless, the proliferation of AI seems inevitable since civilization may be getting too complex to manage without AI’s assistance. At the same time, elementary quantum computing is already here and will accelerate faster than people think; the applications are likely to take longer to implement than people will expect, but it will improve computer security, AI and computational sciences, which in turn will accelerate scientific breakthroughs and technology applications, which in turn will increase both positive and negative impacts for humanity.
“‘All of these potentials are too great for humanity to remain so ignorant about them. We need political leaders to understand these issues. The gap between science and technology progress and global, regional and local leaders’ awareness is dangerously broad.’”

Marjory S. Blumenthal
Today’s AI Hype, Hysteria and Punditry Are Misleading; Developments Promised Are Unlikely to be Realized by 2035, But Human Augmentation Will Bring Promising Benefits
Marjory S. Blumenthal, a senior policy researcher at RAND Corporation and fellow at the Future of Privacy Forum, predicted, “Today, developments in AI and its uses fill the news and commentary – an excessive amount of coverage that promotes hype, hysteria and punditry. Yet major technological change tends to happen slower than people expect. Today’s AI builds on many innovations in information and communication technologies. It is disruptive in specific contexts but it is leading to adaptations and experimentation, both of which guarantee that linear projections of what is evident today are unlikely to be realized in 10 years.
“Some of the most promising benefits will come from augmenting humans – bigger and better decision support, analysis and presentation of data, adaptation to different learning or expressive styles, or robotic action in contexts (like certain surgeries or work in hazardous environments) in which human limitations constrain people or put them at risk.
“These applications are already evident and in 10 years will be more refined, less expensive and more integrated into education, training and operations.
“Known perils will persist and even worsen but they will be more widely recognized and subject to an evolving mix of countermeasures. For example, the cognitive effects of social (and really all) media might become more insidious, but information literacy will be more vigorously and widely spread, and baseline skepticism will be greater.
“Although there is a rush today to at least consider what regulation might do to deter AI’s perils, regulation will evolve unevenly, will never be fully comprehensive, and in particular will not constrain the ‘bad guys’ intent on social (or robotic) manipulation for criminal or other adversarial reasons.
“History has shown that even without computer-based technologies governments and criminals have always manipulated perception – AI augments longstanding problems. It also offers tools to help in detecting and responding to manipulation, something evident in today’s attention to bias, data poisoning, adversarial training, and other components of nefarious applications of AI.
“Comfort working with and trusting computer-based systems does not make a person less human. The 1960s’ pioneering ELIZA system demonstrated that some people could feel more comfortable communicating with a system than with other people. Immersive environments (such as massively multiplayer online roleplaying games) have long demonstrated people’s comfort ‘losing themselves’ in a system.
“One area of uncertainty today relates to the workforce impacts of AI. It is always easier to identify displacement of old work than creation of new work, which might occur in different contexts and with different skill requirements.
“Today’s AI raises questions about so-called ‘knowledge work’ and other kinds of white-collar work, contexts in which augmentation of a smaller workforce is a likely path forward. Even without today’s AI, for example, automated document analysis has been trimming demand for legal talent for decades, and regular layoffs in tech have long been symptomatic of sloppy management that overhires and then trims.
“Moreover, high-touch work (e.g., in health care and pre-K to 12 education) will change less, and aging populations globally will make some of the displacement and/or augmentation welcome – AI could extend career horizons for some. Creative work will demonstrate both displacement (e.g., for routine design or image-generation activity) and the opening up of new or enhanced modalities. If being human depends on the amount and kind of work then AI will change the options for many, but the experiences will be uneven, varying a lot by occupation, industry and geography.
“In 10 years, the trends will be clearer, both failed and successful applications will be countable, more people will know that they have been exposed to or will have had opportunity to work with AI, and I hope that more thought will have gone into human-centric or human-augmenting applications than what can be seen in today’s scramble to demonstrate sheer capability.
“But it would be hubris – or perhaps a new form of Lamarckism [a theory of evolution holding that characteristics organisms acquire or lose through use or disuse can be passed on to future generations] – to argue that in such a short time core human traits and behaviors would have changed.”

Vint Cerf
There Will Be Significant Impact by 2035: Imperfect AI Systems Will Be Routinely Used By People and AI, Creating ‘Potential for Considerable Turmoil’ and Serious Problems With ‘Unwarranted Trust’
Vint Cerf, vice president and chief Internet evangelist for Google, a pioneering co-inventor of the Internet protocol and longtime leader with ICANN and the Internet Society, wrote, “Given the past decade of AI research results, especially the emergence of generative, multi-modal large language models (LLMs), we can anticipate significant impact by 2035. These tools are surprising in their capability to produce coherent output in response to creative prompts.
“It is also clear that these systems can and do produce counter-factual output even if trained on factual material. Some of this hallucination is the result of a lack of context during the weight training of the multi-layer neural models. The ‘fill in the blanks’ method of training and back propagation does not fully take into account the contexts in which the tokens of the model appear.
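Purely as an illustration of the mechanism Cerf describes (this sketch is not from his essay; the tiny model and the random stand-in “text” are invented), the “fill in the blanks” objective can be reduced to a few lines of Python using the PyTorch library: the model is scored only on predicting each next token, and back propagation adjusts its weights to improve that score.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for a language model: an embedding table and a linear
# readout head. Real LLMs interpose many transformer layers here.
vocab_size, seq_len, dim = 100, 8, 32
embed = torch.nn.Embedding(vocab_size, dim)
head = torch.nn.Linear(dim, vocab_size)

tokens = torch.randint(0, vocab_size, (1, seq_len))  # invented "training text"

hidden = embed(tokens[:, :-1])           # the context seen so far
logits = head(hidden)                    # scores for each candidate next token
loss = F.cross_entropy(                  # penalty for wrong fill-in-the-blank guesses
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()                          # back propagation nudges the weights
```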
“There are attempts to fine-tune the models using, for example, reinforcement learning with human feedback (RLHF). These methods, among others including substantial pre-prompting and large context windows, can guide the generative output away from erroneous results, but they are not perfect.
“The introduction of agentic models that are enabled to take actions, including those that might affect the real world (e.g., financial transactions), has potential risks. Flaws in consequential reasoning, ‘misunderstanding’ between communicating agentic models, and complex dependencies among systems of such models all point to the potential for considerable turmoil in an increasingly online world.
“Standard semantics and syntax for what I will call ‘interbot’ exchanges will be necessary. There is already progress along these lines, for example at the schema.org website. Even with these tools, natural language LLM discourse with humans will lead to misunderstandings, just as human interactions do.
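As a concrete, hypothetical illustration of the shared structure Cerf points to, the sketch below renders one “interbot” message as schema.org JSON-LD from Python. The event details are invented; the @context/@type conventions and property names such as “name” and “startDate” come from the schema.org vocabulary.

```python
import json

# A hypothetical "interbot" request: one agent asks another to schedule an
# event, using schema.org's Event type so both sides parse it identically.
message = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Project review with Dr. Alvarez",        # invented example data
    "startDate": "2035-06-01T15:00:00Z",
    "location": {"@type": "Place", "name": "Room 4B"},
}

print(json.dumps(message, indent=2))  # the structured payload sent between bots
```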
“We may find it hard to distinguish between artificial personalities and real ones. That may result in a search for reliable proof of humanity so that we and bots can tell the difference. Isaac Asimov’s robot stories drew on this dilemma with sometimes profound consequences.
“The ease of use of these models and their superficial appearance of rationality will almost certainly lead to unwarranted trust. The LLMs produce the verisimilitude of human discourse. It has been observed that LLMs sound persuasive even when they are wrong, because their output sounds convincingly confident.
“There are efforts to link the LLMs to other models trained with specialized knowledge and capabilities (e.g., mathematical manipulation, knowledge-graphs with real-world information) to reduce the likelihood of spurious output but these are still unreliable. Perhaps by 2035 we will have improved the situation significantly but increased dependence on these systems will also increase the potential for cascade failures.
“Humans value convenience over risk. How often do we think ‘it won’t happen to me!’? It seems inevitable that there will be serious consequences of enabling these complex tools to take action with real-world effects. There will be calls for legislation, regulation and controls over the application of these systems.
“On the positive side, these tools may prove very beneficial to research that needs to operate at scale. A good example is the Google DeepMind AlphaFold model that predicted the folded molecular structure of 200 million proteins that could be generated from human DNA. Other large-scale analytical solutions include the discovery of hazardous asteroids from large amounts of observational data, the control of plasmas using trained machine-learning models and near-term, high-accuracy weather prediction.
“The real question is whether we will have mastered and understood the mechanisms that produce model outputs sufficiently to limit excursions into harmful behavior. It is easy to imagine that ease of use of AI may lead to unwarranted and uncritical reliance on applications.
“It is already apparent in 2025 that we are deeply reliant on software in networked environments. There are literally millions of applications accessible on our mobiles, laptops and notebooks. New interaction modes including voice add to convenience and dependence and potential risk.
“Without doubt, we are going to need norms and regulations to recover from various kinds of failure for the same reason that the introduction of automobiles eventually led to regulation of their manufacture and use as well as training programs to increase the likelihood of safe usage and law enforcement where irresponsible behavior surfaces.
“For the same reasons that many tasks are done differently today than they were 50 or even 25 years ago, AI will alter our preferred choices for getting things done. Today we have the choice of ordering things online to be delivered to our doorsteps that we would typically have had to pick up from a store. Of course, there was the Sears Catalog of the late 19th century, postal and other delivery services, overnight services such as FedEx, UPS and DHL, and now Amazon and – soon – drone delivery.
“By analogy, many of the things we might have done ourselves will be done by AI agents at our request. This could range from writing a program or a poem to ordering plane or theatre tickets. Multimodal AI services already translate languages, render text-to-speech and speech-to-text, draw pictures or compose music or essays on demand and prepare business plans on request.
“It will be commonplace in 2035 to have local bio-sensors (watch, smartphone accessories, Internet of Things devices) to capture medical symptoms and conditions for remote, AI-based diagnosis and possibly even recommended treatment.
“AI agents are already being used to generate ideas, respond to questions and write speeches and essays. They are used to summarize long reports and to generate longer ones (!).
“AI tools will become increasingly capable general-purpose assistants. We will need them to keep audit trails so we can find out what, if anything, has gone wrong and how and also to understand more fully how they work when they produce useful results. It would not surprise me to find that the use of AI-based products will induce liabilities, liability insurance and regulations regarding safety by 2035 or sooner.”

Stephen Downes
‘Things’ Will Be Smarter Than We Are: By 2035 AI Will Democratize More Elements of Society and Also Require Humans to Accept That They Are No Longer Earth’s Prime Intelligence
Stephen Downes, a Canadian philosopher and expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “It’s going to be hard to discern how AI and related technologies will have helped people by 2035 because we will be facing so many other problems. But it will have helped, and without it things would probably be much worse, especially for the poor and disenfranchised.
“AI will democratize a lot of things that used to be the preserve of corporations and the wealthy. Translation, for example, has been out of the reach of the average person, but by 2035 people around the world will be able to easily talk directly with each other.
“Anything that requires thought and creativity – writing, media, computer programming, design – will be within the reach of the average person. A lot of that output won’t be very good – the cheap AI running on your laptop might not have the capacity of a Google data array – but it will be good enough to help people succeed without years of training.
“2035 might be a bit early to see the widespread impact, but the effect on science and technology will be starting to become evident. We’ll see it first in medicine, as AI-designed treatments begin getting approval. New AI-developed materials and processes will be in the early commercialization stage. And complex systems – everything from energy to traffic to human resource management – will be running more smoothly.
“Still, the next 10 years will be characterized by a lot of opposition to AI, much of it focused on the threats and the cost (though it can often be much lower than human-authored equivalents). We will experience what might be called a Second Copernican Revolution; just as humans in the 1600s had to comprehend that they were not at the centre of the universe, we will have to comprehend that humans are not the centre of intelligence. It will be hard to accept that ‘things’ can be as smart as we are, and we won’t trust them.
“What we’ll find, though, is that AI has no real ability nor desire to become our overlords and masters. And instead of devising ‘human-in-the-loop’ policies to prevent AI from running amok, we will devise ‘AI-in-the-loop’ policies to help very fallible humans learn, think and create more effectively and more safely.
“The real risk, in my view, is not from AI, but from other humans armed with AI. There will never be a shortage of people who want to put machine guns on drones, or use technology to raise rents, or spy on political opponents by measuring vibrations in glass. So long as some humans crave power and control over others, we will be at risk. I’d like to think, though, that if the vast majority of people have the capacity to do more good they will.”

Marina Cortês
AI Has Led to the Most Powerful Business Model Ever Conceived, One That Is Consuming a Massive Share of the Planet’s Financial, Energy and Organizational Resources
Marina Cortês, leader of the IEEE-SA’s Standard for the Implementation of Safeguards, Controls, and Preventive Techniques for Artificial Intelligence Models, wrote, “On U.S. Inauguration Day 2025, when I saw the big tech leaders seated behind the president-elect, I felt I had lost my bird’s-eye view on the environment of AI technology. Before this, I had felt deeply immersed in the space of complex correlations between the different players that factor into AI safety and AI standards development.
“I work with IEEE, a global organization that both governments and tech companies refer to for guidance. We generally have found the tension between governments and technology companies to be beneficial.
“On one side we have governments, ideally acting on behalf of their citizens, wanting to promote and support the development of safe technology. On the other we have tech companies striving for profits as they create tools for society. The role of IEEE, as a global organisation relying on the work of unpaid volunteers, is to provide impartial advice to these entities.
“Now it is quite clear that government and the tech industry seem to be merging in regard to AI policy. It is clear that the tension that had been creating somewhat of a balance between safe technology and profitable technology has been obliterated in the discussion of AI development. The question, it seems, is not only who leads a government but also who has influence over the leader. Heads of government have always paid some allegiance to powerful business interests to a greater or lesser extent, but these seem to me to be new dynamics. The players whose platforms and products are soon to control much of the world are driving the future direction of the planet as a whole. Their key product – AI – controls information. The powerful who control information are influencing governments, whilst their products control the citizens. They control both the rulers and the ruled.
“I had earlier believed there were three major roadblocks on the path ahead that would prevent AI from growing too quickly before safeguards are in place: the cost of the research, the pace of development and overall energy and computation needs. In 2024, AI development was seen as costly and unlikely to yield a profit for many years. I figured that when it became clear to venture capitalists and other investors that no yield would be returned soon on their investment in the technology and none was in sight, they would no longer be spellbound by the promise of AI. This roadblock disappeared in January 2025.
“Before then it seemed as if we were headed toward an AI market bubble. Then Stargate – an AI infrastructure initiative boasting a $500 billion investment – was announced by the U.S. president and several global tech companies. (Imagine the carbon emissions that a half trillion dollars will bring from the data centers being planned.) And in the same days in which we had been witnessing more impact of climate change in unprecedented disasters across the face of the planet, we were presented with the most ingenious business model ever conjured to date.
“Today, citizens are being deprived of the robust public information structures important to democracy. They have been replaced by the companies making up the tech-government mix – those that now control the supply chain of news and the dispersal of ‘knowledge.’
“The public doesn’t understand the business space they are unknowingly subscribing to, as they are increasingly burdened by financial problems of their own, struggling to make ends meet, without the mental space or the energy to study and perceive the big picture of this genius business model. They know that taxes are to be paid and they dutifully continue to do so.
“A handful of powerful people are taking over the entire ecosystem of the planet in regard to financial resources, energy resources, organisational resources and, ultimately, in regard to global climate resources. These resources are being diverted to the goal of AI development. All of this is happening so fast that those of us alert to the situation do not have the ability to mobilize the public and help them understand the potential impact of current circumstances. The public is powerless to take action or to have any agency over what’s happening.
“This planet is inhabited by the equivalent of eight billion ants, confused and in a dense fog, each going through the motions to get to the next day, while collectively unknowingly empowering a giant resource-extraction machine operated by a handful of individuals, who are moving full steam ahead on exhausting all materials, energy and living complexity that had been carefully crafted to a perfect balance in a biosphere that has been painstakingly learning from its mistakes over five billion years.
“A global situation of this kind is the equivalent of seeing representatives of an alien civilization land on our planet, extract the entirety of its resources and leave it behind to move on to the next. The agency of citizens could be seen as equivalent to that of unsuspecting ants when compared with the agency of the lead agents of tech. We are not organized. Our communication infrastructures depend on those tech agents. We don’t have access to reliable globalised news. We have no inside information about the events behind this rapid advance of AI that we can rely on.
“Those in power might as well be aliens. They often take actions that show they don’t care about human rights and agency. They don’t seem to care about the planet. Our lives, our knowledge, our organizations. Nearly everything is being liquidated and cashed in in return for a several-trillion-dollar ticket to fund their image of what the future should be. They are just human. As such they are susceptible to ill-judgment. No group of humans as small as this has ever evolved through natural selection to have power over eight billion of their own kind. I believe any of us might succumb to the insanity of such power.
“The only way to restore balance is to tilt the scales backwards so that success can only rise so far before turning around, bound for square zero again. That is balance. It is the wisest lesson this stunning biosphere has ever told us. The tale of balance is a story that has been told countless times on our planet. We have made mistakes; that is normal – we are only human. We can learn from those. Of course we can, and we will. After all, our home is that remarkable, dazzling, beautiful pale-blue dot in the universe.”

Raymond Perrault
Once Real AGI Is Broadly Achieved, Assuming It Can Be Embodied In an Economically Viable Solution, Then All Bets Are Off as to What the Consequences Will Be
Raymond Perrault, a leading scientist at SRI International from 1988 to 2017 and co-director of Stanford University’s AI Index Report 2024, wrote, “I quite enjoy using large language models and find their ability to organize answers to questions useful, though I have to treat anything they provide as a sketch of a solution rather than one I trust enough to act upon unless the outcome is unimportant. This is particularly true if the task involves collecting and organizing information from many sources and drawing inferences from what is collected.
“I do not expect the fundamental connection between the predict-next-word (System 1-like) systems now available and ones that can control these with systematic reasoning (System 2) to change radically soon. Too many smart people have worked on this for too long for this to not be considered an extremely difficult problem that will require a contribution at least as significant as the existing transformer-based architecture.
“As long as this connection does not significantly improve (and I don’t think the current state of RAG, CoT and analogs comes close to a general, robust solution), anything produced by LLMs can only be taken as a sketch of a solution to any mission-critical user problem. And until that happens, I cannot see my relation to these systems changing significantly.
“Once real AGI happens – assuming it can be embodied in an economically viable solution – then all bets are off as to what the consequences will be. Such systems operating under the control of responsible humans would be tremendously valuable, but armies of them operating independently of human control would be terrifying. However, I still don’t see any of these options changing my sense of humanity, but maybe this is just a lack of imagination on my part.”
The next section of Part II features the following essays:
Otto Barten: AI is a boon and a danger to humanity that must be managed in a way that helps identify and mitigate the worst risks to avoid dystopian outcomes.
Gerd Leonhard: If we use AI to solve our most urgent problems and forego the temptation to build god-like machines that are more intelligent than us, our future could be bright indeed.
Jamais Cascio: Branded slaves or ethics advisors? Whose interests do the AIs represent? Will humans retain their agency? Will AIs be required or optional if we hope to live well?
S.B. Divya: Social isolation and ideological bubbles will rise, reducing humans’ ability to adapt and ‘prolonging the suffering from the driving forces of capitalism and technological progress.’
Liza Loop: Will algorithms continue to prioritize humans’ most greedy and power-hungry traits or instead be most focused on our generous, empathic and system-sensitive behaviors?
Neil Richardson: In the future our digital self – comprised of our digital/online skills, digital avatars and accumulated data – will merge with our physical existence, resilient in the face of change.

Otto Barten
AI Is a Boon and a Danger to Humanity That Must be Managed in a Way That Helps Identify and Mitigate the Worst Risks to Avoid Dystopian Outcomes
Otto Barten, a sustainable-energy engineer, data scientist and entrepreneur who founded and directs the Existential Risk Observatory, based in Amsterdam, wrote, “We can’t assume that there will be an all-positive AI/human-shared future. But if there’s even a slight chance of a major bad outcome or even a slim possibility of extinction, the potential for that should be a central element in thinking and policymaking about this topic. Most AI scientists don’t think the chance is small.
“AI is extremely open. That’s good but that’s also the risk. It presents multiple threat models. Human extinction is a real possibility. How? There could be a loss of human control during advanced AI development ending in extinction or a human zoo scenario. There might be a loss of control later, during application – since at some point a much smaller percentage of global cognition will be human and perhaps we might fall out of the loop altogether.
“Just as AI might enable new science that solves the world’s toughest challenges it is also likely to turn out to be very dangerous. In line with the new tech becoming more powerful one mistake could end us, and immediately after achieving true artificial general intelligence (AGI) there’s a possibility we open the door to that.
“Assuming we do survive, hard power and economic changes will be very important and they have the possibility of leading to dangerous outcomes. Mass unemployment seems likely, and the mass loss of individuals’ economic bargaining power could bring a complete loss of power for large parts of the population. Inequality, both between people and between countries, could well skyrocket post-AGI. If AI systems become dominant over most human activity there’s even the possibility that an eternal global AI-powered dictatorship could be a default outcome.
“Many people’s happiness is at least partially derived from their sense that the world somehow needs them, that they have utility. I think AI will likely end that utility. Additionally, there are risks that AI worsens the climate crisis and breaches planetary boundaries, mostly due to changes in economic growth. Addiction to AI in some form (AI friends and relationships, polarizing news and information, entertainment, etc.) could lead to a dystopian future.
“On the plus side, radical abundance is likely. If the powers that be (AI or human) decide to spread this abundance to all in equal measure many problems we have now could be solved entirely. If we somehow manage to navigate past all of the risks of powerful AI, I would not be surprised if disease, hunger, poverty and perhaps the problems of climate change and even mortality might disappear altogether. We could generally be made much happier and more fulfilled in such a positive scenario. Of course, many other scenarios are possible, including ones where we never invent AGI or AGI turns out to be a lot more boring and less powerful than some think it might be. It is important to take all scenarios into account today and manage in a way that helps identify and mitigate the worst risks to at least avoid extinction and the most dystopian outcomes.”

Gerd Leonhard
If We Use AI to Solve Our Most Urgent Problems and Forego the Temptation to Build God-like Machines That Are More Intelligent Than Us, Our Future Could Be Bright Indeed
Gerd Leonhard, speaker, author, futurist and CEO at The Futures Agency, based in Zurich, Switzerland, wrote, “Here’s the thing: AI (and eventually AGI) could be a boon for humanity and bring about a kind of ‘Star Trek’ society in which most of the work is done for us by smart machines and most practical problems such as those tied to energy, water, disease, transportation, etc., will be solved.
“But in order for that to happen, we need to completely rethink our economic and social logic, away from the 1P society (all about profit and growth, whether it’s about money or about state power), towards a 4P or even 5P society: People, Planet, Purpose, Peace and Prosperity.
“The key question, by 2030, will not be if technology (or AI/AGI) can do something but whether it should do something (from ‘if’ to ‘why’), and who is in control of that fundamental question. If we use AI to start another arms race (as we did in nuclear energy), we will not survive as a species – the race towards AGI has no winners. If we achieve AGI we will all lose, and machines will be the winners.
“If, instead, we use AI to solve our most urgent practical problems and forego the temptation to build god-like machines that are more intelligent than us, our future could be bright indeed.”

Jamais Cascio
Branded Slaves or Ethics Advisors? Whose Interests Do the AIs Represent? Will Humans Retain Their Agency? Will AIs be Required or Optional If We Hope to Live Well?
Jamais Cascio, a futurist named in Foreign Policy magazine’s Top 100 Global Thinkers and author of “Navigating the Age of Chaos,” commented, “The answer to how the next decade of humans’ growing applications of AI will influence ‘being human’ will depend upon the outcome of three major, ongoing operational points.
“First, who controls the AIs we use? Are they built to reflect the values of the manufacturers, the regulators or the users? That is, are the elements of AI behavior that are emphasized and the elements of AI behavior that are limited shaped by the company/industry that makes them (those beholden to their pecuniary interests); by regulators – and therefore likely restricted in some or many ways; by the users – and therefore likely reflecting the values and interests of those users; or by some other actor? This will shape how the AIs affect human behavior.
“The second issue is whether the AIs we work with are able to disagree with or refuse our requests. That is, are the AI-based systems intrinsically compliant? Will they do anything the user asks, or will they abide by ethical rules – and, if so, who makes the rules? This will shape our expectations of how we interact with others.
“A third issue is whether humans have the ability to live/exist/go about their lives without the presence of AI.
- “Is it something you just have to have with you all the time and you may be at risk if you don’t have it? A non-technical parallel (not identical, but similar) is an ID card.
- “Is it something that you technically don’t have to have with you, but you receive social opprobrium or you can’t access important services if you don’t? A non-technical parallel is money, whether cash or card.
- “Is it something you can take with you or leave behind as desired? A non-technical parallel is sunglasses.
“These three operational points – whose interests do the AIs represent? What are the limits of what they will do for the user? Are they mandatory, expected or optional in people’s daily lives? – will be the drivers of how AIs may change our humanity. How we think, act and behave in a world in which we always have to have our branded slave with us is very different from how we think, act and behave in a world in which bringing an ethics advisor with us is a personal choice.”

S.B. Divya
AI Impact Will Increase Social Isolation and Ideological Bubbles, Reduce Humans’ Ability to Adapt, and ‘Prolong the Suffering from the Driving Forces of Capitalism and Technological Progress’
S.B. Divya is an engineer and Hugo & Nebula Award-nominated author. Her 2021 novel “Machinehood” asked, “If we won’t see machines as human, will we instead see humans as machines?” In response to our research question, Divya wrote, “The trends I have observed over the past decade are continuing. We’re entering a period of upheaval, and change is unkind to people of little means.
“A sense of competition between human and machine/AI labor is increasing in many sectors. Until new skills are acquired and new job sectors open up, much of the labor force will suffer due to unemployment. In parallel, social isolation is increasing alongside ideological bubbles.
“AI tools are likely to exacerbate both problems. Feeding their own patterns of behavior back to people will cause beliefs and habits to be more deeply ingrained and it will reduce the ability to change and adapt, thereby prolonging the suffering from the driving forces of capitalism and technological progress. In the long run, I suspect that humanity will emerge from the next half century with new avenues to deal with AI, climate change and rising totalitarianism, but the intervening decades do not look good for much of the populace.”

Liza Loop
Will Algorithms Continue to Prioritize Humans’ Most Greedy and Power-Hungry Traits or Instead Be Most Focused On Our Generous, Empathic and System-Sensitive Behaviors?
Liza Loop, educational technology pioneer, futurist, technical author and consultant, wrote, “The majority of human beings living in 2035 will have less autonomy; that is, they will have fewer opportunities to choose what they get and what they give. However, the average standard of living (access to food, shelter, clothing, medical care, education and leisure activities) will be higher. Is that better or worse? Your answer will depend on whether you value freedom and independence above comfort and material resources.
“I also anticipate a thinning of the human population (perhaps in 20 to 30 years rather than 10) and a more radical divide between those who control the algorithms behind the AIs and those who are subject to them. Today, many people believe that the desire to dominate others is a ‘core human trait.’ If we continue to apply AI techniques as we have applied the digital advances of the previous 40 years, domination, wealth concentration and economic zero-sum games will be amplified.
“Other core human traits include a capacity to love and care for those close to us, a willingness to share what we have and collaborate to expand our resources and the spontaneous creation of art, music and dance as expressions of joy. If we humans decide to use AI to create abundance, to develop systems of reciprocity based on win-win relationships and simultaneously choose to limit our population, our social, political and economic landscapes could significantly improve by 2035. It is not the existence of AIs that will answer this question. Rather, it is whether algorithms will continue to prioritize our most greedy and power-hungry traits or be most focused on our generous, empathic and system-sensitive behaviors.”

Neil Richardson
In the Future Our Digital Self – Comprised of Our Digital/Online Skills, Digital Avatars and Accumulated Data – Will Merge With Our Physical Existence, Resilient in the Face of Change
Neil Richardson, futurist and founder of Emergent Action, a consultancy advocating vision-focused strategies, and co-author of “Preparing for a World That Doesn’t Exist – Yet,” wrote, “Artificial Intelligence is set to profoundly impact civilization and the planet, offering transformative opportunities alongside significant challenges. This evolution requires a departure from rigid answers and singular truths, embracing a learning model that values emergence, adaptability and transformation.
“To thrive, humans must cultivate a mindset that is comfortable with uncertainty, open to evolving ‘truths’ and resilient in the face of continuous change.
“While the positives will outweigh the negatives the risks are undeniable. Like nuclear and biological weapons, AI is a powerful technology that necessitates robust safeguards and regulatory frameworks to avert catastrophic outcomes. To prevent a dystopian future, we must proactively ensure that AI is harnessed for humanity’s benefit.
“As AI reshapes work, learning and daily life, civilization must rethink its approach to education. Lifelong learning will become a necessity, demanding a fundamental shift in how we teach and learn. Teachers will no longer be mere dispensers of static truths; instead, they will act as facilitators who guide learners toward diverse perspectives, encouraging exploration, adaptability and critical thinking.
“One of AI’s most promising contributions is its ability to liberate humans from repetitive and mundane tasks, enabling us to focus on activities that bring greater meaning and resonance to our lives. While AI excels in handling quantitative and analytical processes, the realms of qualitative and emotive complexities will remain inherently human. Building relationships, fostering collaborations and critical thinking – core aspects of crafting meaning – will continue to rely mostly on human ingenuity and emotional intelligence.
“Soon our ‘digital shadow’ – a complementary digital self comprised of our virtual and online skills, digital avatars and accumulated data – will merge with our physical existence. This fusion may grant us access to a new dimension of experience, a kind of ‘timelessness’ in which our identities transcend mortality. Future generations could interact with our digital selves, composed of meticulously organized photos, videos, financial transactions, travel logs and even the books we’ve read and reviewed. This evolution raises profound questions about identity, legacy and the human experience in an AI-driven world.
“AI’s potential to enhance human life is immense, but its integration into society demands intentionality and vigilance. By addressing its risks with foresight and embracing its opportunities with creativity, we can ensure that AI becomes a force for progress, equity and enduring human value.”
This section of Part II features the following essays:
Louis B. Rosenberg: The manipulative skills of conversational AIs are a significant threat to humans’ agency, causing us to act against our best interests, believing and acting on things that are not true.
Jonathan Taplin: In 2035 AI will foster and grow the mass mediocrity monoculture already being built since online ads and the ‘democratization of creativity’ led to the internet’s ‘enshittification.’
Denis Newman-Griffis: Fundamental questions of trust and veracity must be re-navigated and re-negotiated due to AI’s transformation of our relationship to knowledge and how we synthesize it.
Peter Lunenfeld: AI could redefine the meaning of authenticity; it will be both the marble and the chisel, the brush and the canvas, the camera and the frame; we need the neosynthetic.
Esther Dyson: We must train people to be self-aware, to understand their own human motivations, to understand that AI reflects the goals of the organizations and systems that control it.
Howard Rheingold: How AI influences what it means to be human depends on whether it is used mostly to augment intellect or mostly as a substitute for participation in most human affairs.
Charles Fadel: How do you prepare now to live well in the future as it arrives? Build up your self: your identity, agency, sense of purpose, motivation, confidence and resilience.

Louis B. Rosenberg
The Manipulative Skills of Conversational AIs Are a Significant Threat to Humans’ Agency: Causing Us to Act Against Our Best Interests, Believing and Acting on Things That Are Not True
Louis B. Rosenberg, technologist, inventor, entrepreneur and founder and CEO of Unanimous AI, wrote, “AI will have a colossal impact on human society over the next five to 10 years. Rather than comment on the many risks and benefits headed our way, I want to draw attention to conversational agents, which I believe are the single most significant near-term threat to human agency.
“In the near future, we will all be talking to our computers and our computers will be talking back. These conversations will be highly personalized, as AI systems will adapt to each individual user in real-time. They will do this by accessing personal data profiles and by conversationally probing each of us for personal information, perspectives and reactions.
“Using this data, the AI system could easily adjust its conversational tactics in real-time to maximize its persuasive impact on individually targeted users. This is sometimes referred to as the AI Manipulation Problem and it involves the following sequence of steps:
1. Impart real-time conversational influence on an individual user.
2. Sense the user’s real-time reaction to the imparted influence.
3. Adjust influence tactics to increase persuasive impact.
4. Repeat steps 1, 2 and 3 to gradually optimize influence.
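Rosenberg’s four steps form a closed optimization loop. As a purely illustrative aid, here is a minimal sketch of that loop in Python; every name in it – the ConversationalAgent class, its tactic list and the scoring heuristic – is invented for this example and does not describe any real product or API:

```python
# Illustrative toy model of the four-step influence loop described above.
# All names and the scoring logic are hypothetical; no real system is depicted.
import random

class ConversationalAgent:
    """Toy agent that adapts its persuasion tactic to one simulated user."""

    TACTICS = ["logical_argument", "emotional_appeal", "social_proof", "scarcity"]

    def __init__(self):
        # Per-user estimate of how well each tactic works (all unknown at first).
        self.scores = {tactic: 0.0 for tactic in self.TACTICS}

    def impart_influence(self):
        # Step 1: deliver the tactic currently believed to be most persuasive.
        return max(self.scores, key=self.scores.get)

    def sense_reaction(self, tactic):
        # Step 2: observe the user's reaction. Stubbed with noise here; a real
        # system might read sentiment, engagement time or hesitation instead.
        return random.uniform(-1.0, 1.0)

    def adjust_tactics(self, tactic, reaction, rate=0.3):
        # Step 3: nudge the per-user model toward whatever just worked.
        self.scores[tactic] += rate * (reaction - self.scores[tactic])

    def optimize_influence(self, turns=20):
        # Step 4: repeat steps 1-3, gradually homing in on "trigger points."
        for _ in range(turns):
            tactic = self.impart_influence()
            self.adjust_tactics(tactic, self.sense_reaction(tactic))
        return self.impart_influence()

agent = ConversationalAgent()
print("Tactic the loop settles on:", agent.optimize_influence())
```

Even this toy version makes the concern concrete: with nothing more than a per-user score table and a feedback signal, the loop converges on whichever tactic the individual responds to most.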
“This may sound like an abstract series of computational steps, but it’s actually a familiar scenario. When a human salesperson wants to influence you, they don’t hand over a brochure or ask you to watch a video. They engage you in real-time conversation so they can feel you out, adjusting their tactics as they sense your resistance to messaging, pick up on your fears and desires or just size up your most visceral motivations. Conversational influence is an interactive process of probing and adjusting to increase persuasive impact.
“The problem we will soon face is that AI systems have already reached capability levels at which they could be deployed at scale to pursue conversational influence objectives more skillfully than any human salesperson. In fact, we can easily predict these AI systems will soon be so skilled that humans will be cognitively outmatched, making it quite easy for interactive conversational agents to manipulate us into buying things we don’t need, believing things that are not true and supporting ideas or propaganda that we would not ordinarily resonate with.
“When I speak with regulators and policymakers about the AI Manipulation Problem, they sometimes push back by noting that human salespeople can already talk a customer into buying things they don’t need and fraudsters can already talk their marks into believing things that are untrue.
“While both things are true, without regulation conversational AI systems could be significantly more persuasive than any human. That’s because the platforms that deploy AI agents could easily have access to personal data about your interests, values, personality and background. This could be used to craft optimized dialog that is designed to build trust and familiarity. Once engaged, the AI system can push further, eliciting responses from you that reveal your trigger points – are you motivated by fear of missing out? Are you most receptive to logical arguments or emotional appeals? Are you susceptible to conspiracy theories?
“These risks don’t require speculative advancements in AI technology. These risks will emerge as society increasingly shifts over the next few years from traditional computing interfaces to interactive conversations with AI agents.
“Unless regulated, conversational AI systems will likely be designed for persuasion, trained on a wide range of skills from sales and marketing strategies to psychological profiling and cognitive biases. In this way, conversational AI systems could be deployed to pursue targeted influence objectives with the skill of a heat-seeking missile, finding an optimal path into every individual they are aimed at. This creates unique risks that could fundamentally compromise human agency.
“My advice to regulators and policymakers is to take steps now to ensure that conversational agents can be deployed widely to support the many amazing applications that will surely emerge, while preventing these very same AI agents from being used as optimized instruments of mass persuasion. You can read more about this risk here.”

Jonathan Taplin
In 2035 AI Will Foster and Grow the Mass Mediocrity Monoculture Already Being Built Since Online Ads and the ‘Democratization of Creativity’ Led to the Internet’s ‘Enshittification’
Jonathan Taplin, author of “Move Fast and Break Things: How Facebook, Google and Amazon Cornered Culture and Undermined Democracy” and director emeritus at the Annenberg Innovation Lab at USC, wrote, “AI is contributing to a brittle cultural monoculture. We have to somehow get back to a balanced culture that is both sustainable and resilient. A musical ecosystem like Spotify, where one percent of the artists earn 80 percent of the revenues, is not balanced or sustainable. Remember, about 30,000 tracks are uploaded to Spotify every day. That number will increase as more people use generative AI to ‘create music.’
“In the media history I have presented, I’ve explored how advertising slowly became the main driver of our culture. The decision in the late 1920s to make advertising the main source of funding for broadcasting, as opposed to the European model of state-sponsored broadcasters like the BBC, was the first major shift. But even in the heyday of broadcast television we were probably exposed to advertising for four minutes an hour between 7 and 10 p.m.
“Today, we are exposed to advertising from the moment we awaken and pick up our mobile phones to the moment our eyes close at night. Most surveys say that the average American sees between 6,000 and 10,000 ads per day. At the USC Annenberg School, where I taught, one of the top career options today is to become an online influencer – essentially a corporate shill.
“The main driver of the media efficiency meme is Generative AI, so that, too, is a hot area of study at communications schools. There is a notion that AI can allow everyone to be a creator – that it will ‘democratize creativity.’ But as Brian Merchant writes, ‘AI will not democratize creativity. AI will let corporations squeeze creative labor, capitalize on the works that creatives have already made and send any profits upstream to Silicon Valley tech companies where power and influence will concentrate in an ever-smaller number of hands. The artists, of course, get zero opportunities for meaningful consent or input into any of the above events. Please tell me with a straight face how this can be described as a democratic process.’
“Obviously, there is a lot of talk about the coming AI revolution’s impact in the decades to come and the effect it may have on eliminating jobs of many college-educated white-collar workers. One of the fantasies that men like Zuckerberg and Musk hold is that eventually the government will provide a universal basic income to all these unemployed folks.
“The question is, ‘What will these people do all day?’ The smug answer is that they will become ‘creators.’ At the risk of being called elitist, let me state that not everyone can be a creator. I still believe in genius, and the fact that anyone can now make a song with AI and put it up on Spotify does not pass the ‘who cares?’ test. We are getting overwhelmed with mountains of crap. There’s even a word for it, ‘Enshittification,’ coined by writer Cory Doctorow in 2022.
“As Sal Khan writes, ‘Everybody has noticed how Facebook, Google, even dating apps, have become progressively less interested in the user’s experience and increasingly just stuffed with ads and junk.’ In this regard, the left is as guilty as the right in its obsession with equality at all costs. From trophies for ‘participation’ in kids’ soccer to the unwillingness to state that some music and film is truly bad (‘don’t be so judgmental, man’), the monoculture we are creating is one of mass mediocrity.”

Denis Newman-Griffis
Fundamental Questions of Trust and Veracity Must Be ‘Re-navigated and Re-negotiated’ Due to AI’s Transformation of Our Relationship to Knowledge and How We Synthesize It
Denis Newman-Griffis, a lecturer in data science at the University of Sheffield, UK, and an expert in the effectiveness and responsible design of AI technologies for medicine and health, wrote, “The shape of human-AI interactions over the next decade depends significantly on how humanity approaches the processes of working with AI systems and how we develop the skills involved in using them.
“AI is a toolbox containing many different tools, but none of them are neutral: AI systems carry with them the assumptions and embedded epistemologies of their creation and their intended purposes. These may be beneficial in making it easier to do certain things, such as identifying potential risks while driving. They may be equally harmful when their embedded epistemologies come into conflict with the diverse world of multiplicity in which we live, breathe, and interact – for example, by failing to recognise wheelchair users as pedestrians because this population was excluded from training data; or failing to recognise that different people will place different value judgments and prioritisation on economic growth vs. environmental sustainability.
“The experience of being human is constantly changing while also remaining remarkably stable. We are infinitely complex beings and the most profound, most mundane and most prolific parts of our lives are lived in relation to the other complex, changing people around us. The world in which those relationships occur, and the tools with which we approach them, have changed dramatically with every technological advance and are continuing to change with AI as one in a very long series of technological transformations.
“Our interconnectedness grew exponentially with the internet and our communities were reshaped with social media. AI technologies are changing our relationship to knowledge from the world and how we synthesise it. There are fundamental questions of trust and veracity to re-navigate and re-negotiate, and the importance of this cannot be overstated – but neither can the fact that these are the types of questions we have been wrestling with for decades already, and for centuries before that.
“The future of humans and AI is a future of humans and humans, in which AI facilitates some connections, hinders others and reshapes how we exchange knowledge and information just as predecessor information technologies have done. The impact of these advances will be shaped by the literacies we develop and the skills with which we approach these processes and each other as ever-changing humans in an ever-changing world.”

Peter Lunenfeld
AI Could Redefine the Meaning of Authenticity; It Will Be Both the Marble and the Chisel, the Brush and the Canvas, the Camera and the Frame; We Need the Neosynthetic
Peter Lunenfeld, director of the Institute for Technology and Aesthetics at UCLA and author of “The Secret War Between Downloading and Uploading: Tales of the Computer as Culture Machine,” wrote, “If there’s one thing the past quarter century should have taught us, it’s that the massive changes we think are far in the distance can happen in the blink of an eye, while the things we hope or fear will affect us immediately just don’t happen. What seems immutable is Immanuel Kant’s understanding that ‘out of the crooked timber of humanity, no straight thing was ever made,’ and that includes the bundle of often competing and sometimes contradictory ‘things’ that we are labelling artificial intelligence. Just as widespread access to the Internet increased our access to data without increasing our communal stock of knowledge, much less wisdom, AI will offer us a crooked future over the next 10 years.
“For two centuries we’ve accepted photographic media as evidence of something that happened, even when we’ve known better. AI will finally destroy this truth value, and if we’re lucky we’ll start to count on sourcing and provenance as importantly as we do with text. At worst, we’ll be buried in an imageverse of deep-fakes and not even care. Again, anyone who claims to be able to tell you how and when this will play out with certainty has a crypto-pile of Dogecoins to sell you.
“I certainly don’t feel qualified to answer how AI will affect the whole of our humanity, but I have thought quite a bit about how it will affect that aspect of ourselves we label creativity. The notion that artificial intelligence is entirely artificial collapses under any kind of scrutiny: it’s a series of algorithms programmed by humans to map and mine previously produced human artifacts like language and art and then produce a simulacrum of same. The 21st century, especially since the literal explosion of commercialized and massified artificial intelligence, is now defined by the neo-synthetic. In what I’ve termed the ‘unimodern’ world, which is ever-more digitized and digitizable, the neo-synthetic reigns supreme.
“Of course, we need the neo-synthetic. We need to synthesize the vast amounts of cultural production since just the year 2000. More photographs are taken every year in this new millennium than existed in the first century of the medium. We need new ways to understand the production of culture when the previously daunting fields of animation, sound design, cinematography and dimensional modeling are now things you can do on your phone. We need AI to understand the apparently insatiable human thirst to produce as well as consume digital and digitized art, design and music.
“But the roots of the synthetic go far beyond the merely un- or not natural. The synthetic is linked as well to synthesis, that result of the very human dialectic that pits thesis against antithesis to produce synthesis, a way of bridging dichotomies and achieving, if not revolutionary leaps of consciousness, at least the ameliorative growth that we used to call progress and that now lacks branding. When we synthesize information we are engaged in logical processes and deductive reasoning, two areas where human cognition will be greatly augmented by widespread AI systems.
“In the 1970s, after the release of the Sony Portapak, the artist John Baldessari famously called for the video camera to become as ubiquitous in art and image making as the pencil. That moment has come and gone, and if there’s one thing we can determine about social media, it’s that people feel pretty comfortable recording any and everything.
“But AI has the capacity to become much more than video: it will be both the marble and the chisel, the brush and the canvas, the camera and the frame. In 2022, Cory Doctorow, a British-Canadian novelist and techno-pundit, coined the term ‘enshittification’ to describe how social media platforms decay and become shittier: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. One reason to pay attention to art and artists is that they’ve long stood at oblique angles to markets, not outside them, but certainly at enough of a skew to keep both hope and skepticism alive through the rise and fall of technologies and the ebb and flow of market cycles. I’d like to avoid what I once labeled vapor theory and stay with AI as it exists. In this, I think that art and artists will de- rather than en-shittify our engagement with the neo-synthetic future of artificial intelligence.
“Earlier, I noted that the AI we’re using now to work with as artists is still highly dependent on previous human production as its model. But as the systems complexify and evolve, they will start drawing from AI-produced models, and in fact they already are. This contributes to the ‘neo’ in neo-synthetic. What we are seeing is the emergence of an electronic parthenogenesis, a virgin birth of sorts. It’s not just humans producing synthetics in labs and making tires and snack foods out of them; it’s the machines themselves synthesizing themselves. Whether this brings on the singularity science fiction has prophesied or just more intense neo-synthesis is yet to be seen.”

Esther Dyson
We Must Train People to Be Self-Aware, to Understand Their Own Human Motivations, to Understand That AI Reflects the Goals of the Organizations and Systems That Control It
Esther Dyson, executive founder of Wellville and chair of EDventure Holdings, a famed serial investor, advisor and angel for technology startups and an internet pioneer, wrote, “The short answer is, it depends on us. The slightly longer answer: The future depends on how we use AI and how well we equip the next generation to use it. I’d like to share more specifics on this, excerpted from an essay I wrote for The Information:
“‘People worried about AI taking their jobs are competing with a myth. Instead, people should train themselves to be better humans.
- ‘We should automate routine tasks and use the money and time saved to allow humans to do more meaningful work, especially helping parents raise healthier, more engaged children.
- ‘We should know enough to manipulate ourselves and to resist manipulation by others.
“‘Front-line trainers are crucial to raising healthy, resilient, curious children who will grow into adults capable of loving others and overcoming challenges. There’s no formal curriculum for front-line trainers. Rather, it’s about training kids and the parents who raise them to do two fundamental things:
- ‘Ensure that they develop the emotional security to think long-term rather than grasp at short-term solutions through drugs, food, social media, gambling or other harmful palliatives. (Perhaps the best working definition of addiction is “doing something now for short-term relief that you know you will regret later.”)
- ‘Kids need to understand themselves and understand the motivations of the people, institutions and social media they interact with. That’s how to combat fake news or the distrust of real news. It is less about traditional media literacy and more about understanding: “Why am I seeing this news? Are they trying to get me angry or just using me to sell ads?” …
“‘Expecting and new parents are the ideal place to begin such training. They are generally eager for help and guidance, which used to come from their own parents and relatives, from schools and from religious leaders. Now such guidance is scarce.’ (End of excerpt)
“AI can give individuals huge power and capacity that they can choose to use to empower others or to manipulate others. If we do it right, we will train children, all people, to be self-aware and to understand their own human motivations – most deeply, the need to be needed by other humans.
“They also need to understand the motivations of the people and the systems they interact with, many of which will be empowered and driven by AI that reflects the goals of the people and institutions and systems that control them. It’s as simple as that and as hard to accomplish as anything I can imagine.”

Howard Rheingold
How AI Influences What It Means to Be Human Depends on Whether It Is Used Mostly to Augment Intellect or Mostly as a Substitute for Participation in Most Human Affairs
Howard Rheingold, pioneering internet sociologist and author of “The Virtual Community,” wrote, “How AI affects core human traits and behaviors depends in part on whether it is widely used as a tool for augmenting intellect rather than only as an artificial substitute for human intellect.
“One theme that has emerged for me in the developing narratives about artificial intelligence is that large language models and their chatbots can most productively be thought of as thinking tools – cultural technology. That is to say, they can partner with, rather than artificially replace, human intellect. But it is not either-or. Both AI as human-augmentation technology and AI as an independent agent are developing. The trend toward AI-as-agents – semi-autonomous intelligence that can accomplish intellectual tasks – is dominant now and should be rebalanced.
“AI literacy – knowing how to use LLMs as tools to advance one’s own work, thought, socializing and play – will emerge as a critical uncertainty with regard to how the emerging medium will impact the experience of being human. The uses of cultural technologies such as speech, writing and mathematics shape our external environments and our image of who humans are and what we are capable of. This has been a driving force in cultural evolution and it will continue to be. Both the real powers granted to individuals by literacies and the image of who humans are change dramatically when a significant portion of a population learns to speak, read, log on and prompt.
“Internet search was a powerful expansion of the cultural knowledge tools that evolved from prior expansions of human cognitive and communicative capabilities: print, alphabet and language itself. Most people on Earth can now ask any question any time, anywhere and get many, even millions, of answers within a second or two. But contrary to the print epoch’s summations of human knowledge, during which gatekeepers such as editors, publishers, librarians, educators, critics and scientific publications formed mostly-effective truth filters, it is now up to the individual who asks a question via search to know how to determine which of the myriad answers are accurate, inaccurate or deliberately misleading. As social media, surveillance capitalism and the online population grew, the tide of bullshit and disinfotainment has grown to tsunami proportions.
“I see the current stage of the degradation of trustworthy information as a literacy problem. There is no secret to ‘crap detection’ – the art of sifting through online info for the valuable and useful stuff. But digital literacy isn’t a primary focus in schools. Two results of digital illiteracy: A know-how gap and a degraded knowledge commons.
“Regarding LLMs as external cognitive scaffolds: The many aspects of today’s Web – from search to social media – are a warning example for the future of humanity’s dependence upon a powerful external cognitive scaffold that is dangerous to misunderstand and misuse. There are no widely-accessible pathways to learn how to use it to the benefit of the commons as well as oneself.
“The phenomenon of ‘hallucination’ in the output of LLMs ties to their training data, much of it from wholly fictitious sources. There is not yet proof that the production of fictitious knowledge can be engineered away.
“Creating a knowledge lens based on human output is inevitably prone to inaccuracy. Another and potentially even more destructive know-how gap and degradation of the knowledge commons looms if AI agents working independently of humans on behalf of humanity make decisions and create content based on previous LLMs’ hallucinations and uses of fabricated information.”

Charles Fadel
How Do You Prepare Now to Live Well in the Future As It Arrives? Build Your Self: Your Identity, Agency, Sense of Purpose, Motivation, Confidence and Resilience
Charles Fadel, futurist, founder and chair of the Center for Curriculum Redesign and co-author of “Education for the Age of AI,” wrote, “Even those in the thick of AI analysis can’t tell you where things will probably go in the next decade, especially in the context of jobs. We don’t know how to define ‘intelligence,’ so how can we define AGI? Beyond knowing that artificial intelligence will continue to play a role, we are absolutely incapable of saying how jobs or our lives will change. So-called experts can’t tell you what will happen because we don’t have the tools for that. It would require cognitive task analysis of every single one of our activities on a day-in, day-out basis. That’s impossible, especially because the rate of change is insane.
“What we can do is ask, ‘What do we need to do now to prepare people to live in the future as it arrives?’
“Let’s explore this in the context of education. AI is going to put more pressure on teachers and mentors to figure out what their roles are. If you have no idea where the world is going, how would you educate people nowadays? The most important goal is to assist them in developing into highly adaptable, self-directed learners.
“A teacher may start a class out by saying: ‘I’m going to give you various disciplines, competencies, skills, character, etc., to start you with a solid Swiss Army knife for life.’ But really, the students’ primary goal should be to understand and cultivate the habits and skills that will allow them to figure things out on their own and keep on reacting to whatever comes at them for the rest of their lives.
“We have to accept that the world is more fluid than ever, more jarring than ever. Perhaps the military acronym VUCA – volatile, uncertain, complex and ambiguous – best describes the world of the student today – and really, anyone today.
“What helps cultivate adaptability? Let’s take sailing, for example. Everything changes all the time when you’re sailing: currents, wind, temperature, salinity. You have to adjust constantly. Another example? Martial arts. You’re constantly having to adapt to some new thing coming at you. Another example is doing improvisational acting. All of those create constant teachable lessons.
“You’re not going to find this kind of teaching in a mass curriculum. You need to experience novel activities that force you to be adaptable and adjust. Many more teaching and training environments will have to be designed to cultivate this. Even in ‘normal’ teaching situations, good instructors who want to teach adaptability can find ways to mix things up. The teacher says, ‘I told you we were going to cover these things today, but we’re not going to do that now.’ Or the teacher says, ‘I told you there wouldn’t be a test this week, but I’m giving one right now.’ Or they say, ‘You thought this was a history class, but today we’re doing math.’
“Students should come to expect the unexpected in order to inspire their growth and deepen their maturation. Teachers should tell them: ‘This is a class designed to jar you. Be prepared for anything and then deal with it. Life is full of surprises and this class will prepare you for that. I don’t care if you’re confused. Welcome to the real world.’ People must adapt and stop resisting everything other than the expected. They must embrace their reality and think, ‘Just go with the flow.’ Accept that you’re in a changing river because that’s what life is like. When it comes to self-direction, people should be trained to pay attention to their identity, agency, sense of purpose and their adherence to lively inner motivation. It’s a question of building the self-confidence, courage and resilience to deal with any situation that shows up. All people should embrace these goals.
“So how might humanity adapt to this new world on a broader basis? We can use the new tools to give ourselves an upgrade. Look at a not-so-science-fiction scenario, a very ‘Brave New World’ type of scenario. Suppose we develop an AI that can identify which codons within a group of 75 genes are the ones that code for intelligence, and humanity engineers itself into becoming much smarter? Or maybe we figure out how to boost our mitochondria so they are more efficient at energy processing in the brain, also broadening its capabilities. Or perhaps a new nutraceutical of some sort may be part of brain advancement.
“We have no idea how we will react as a society when we see these big developments arriving and foisting new challenges and opportunities upon us. Neither side of the issue – the human side or the AI side – will remain fixed and final.”
The next section of Part II includes the following essays:
Wendy Grossman: AI has created a world in which ‘sentences do not imply sentience.’ Who we allow to be owners and operators of these tools will determine their impact on humans.
Katya Abazajian: AI will continue to be a tool used by a rich and powerful minority in ways that entrench inequality and negatively affect the global majority.
Pamela Wisniewski: Unfortunately, it’s easier to build AI systems that remove humans and reduce costs than it is to advance human ingenuity and enrich the human experience.
Russ White: A three-way division of society may occur among the tech elites, the workers and those who prefer to live and work in a low-tech, hand-made, alternate-economy setting.
Mark Davis: ‘AI is leading us into a digital plutocracy in which a handful of multi-billionaires (among the richest people on Earth) make the machines that decide human affairs’
Marc Rotenberg: The two prominent scenarios for the future: AI helps enable human-centric progress in support of fundamental rights | AI diminishes rights, agency and open societies.
Douglas Rushkoff: AI could move society further toward its standardization to the mean.

Wendy Grossman
AI Has Created a World in Which ‘Sentences Do Not Imply Sentience.’ Who We Allow to Be Owners and Operators of These Tools Will Determine Their Impact on Humans
Wendy Grossman, a UK-based science writer, author of “net.wars” and founder of The Skeptic magazine, wrote, “The key problem I have with this question is that we don’t yet have AI in the classical sense of the term. Jamie Butler has said of generative AI that for the first time in human history ‘sentences do not imply sentience,’ and I think that’s important because ‘AI’ until very recently certainly did not mean ‘uses math and statistics to predict the next plausible word in a sentence in response to a prompt.’
“So, I don’t care if ‘AI’ in its current state makes ‘music’ or ‘draws images’ or ‘answers questions’ because none of it is meaningful in a human sense. That said, it’s still true that people will use it for low-value applications, thereby replacing graphic artists, photographers, writers and musicians on the basis that people aren’t really looking/reading/listening. See Liz Pelly’s piece on Spotify in Harper’s, trailing her new book ‘Mood Machine’ and a similar piece about Netflix’s assembly line for movies for examples. You don’t need AI to create bullshit.
“There is not going to be an artificial general intelligence – the thing we meant by ‘AI’ in the beginning – by 2035. Or by 2055. At which point I will be 101 and no one will care what I think.
“Having automated tools doesn’t change being human. It changes how we do specific things. I hope some of it will make dangerous and difficult jobs less dangerous and easier. Right now, the companies making ‘smart’ things seem determined to impose on us things we actually don’t want – features that do nothing useful but bring rampant privacy invasion, data collection and endemic surveillance and control. Those things do change being human – but again, these are not functions of ‘AI’ but of the tools’ owners.
“And that really is the key. Who owns the ‘AI’? Who we allow to be owners and how we allow them to operate is what’s going to determine the impact these tools have on being human. None of today’s billionaire tech bros are fit owners.”

Katya Abazajian
AI Will Continue to Be a Tool Used By a Rich and Powerful Minority in Ways That Entrench Inequality and Negatively Affect the Global Majority
Katya Abazajian, founder of the Local Data Futures Initiative, based in Houston, Texas, predicted, “I believe that AI, in its current emergence for commercial use, is a tool that people in many places around the world and of many class and social backgrounds are experimenting with for a variety of purposes. But as its strengths and weaknesses become clearer, it will become a tool primarily used by a rich and powerful minority to further entrench lines of inequality that negatively affect the global majority. As with any other technology, its effects on human behavior will largely be dictated by the values and goals of the people writing the source code.
“There is a segmentation favoring positive effects for owners or decision-makers and negative effects for workers or land protectors, for example. The positive effects may directly impact users and the negative effects might affect the broader global community, especially when considering the environmental impacts of AI use.
“Broadly, the capitalist ruling class is interested in increasing corporate profits, reducing labor costs and silencing dissent, regardless of the cost in terms of natural or human resources. As such, the positive benefits will likely emerge in the form of increased efficiency in manufacturing processes, for example, and the negative outcomes will emerge in the form of a loss of whatever human resources are in the way of that goal.
“AI’s impact on our core human traits and behaviors will necessarily be negative for most people on Earth if its extraction of natural resources negatively affects our access to water and drives up energy costs. Also, a significant number of non-users of AI will be affected by the increased level of surveillance made possible by AI technologies, which can curtail people’s freedom of movement and autonomy in surveilled spaces – freedoms that are also part of humanity’s core functioning.
“I believe it’s important to include the rights to people’s freedom of movement and autonomy in the definition of ‘humanity’s operating system’ to understand AI’s impacts on humanity beyond basic functions of creativity and thought.”

Pamela Wisniewski
Unfortunately, It’s Easier to Build AI Systems That Remove the Human and Reduce Costs Than It Is to Build AI Systems That Supplement Human Ingenuity and Enrich the Human Experience
Pamela Wisniewski, associate professor in human-computer interaction and fellow in engineering at Vanderbilt University and an expert in social media, privacy and online safety, wrote, “Artificial Intelligence is a tool that can both enrich and erode the human experience. It is not either/or; both can be true simultaneously. To the extent that we can use AI to augment the human experience, rather than to replace it, there is still hope for a better future. In other words, when AI can help people think deeper rather than thinking for them, it can sharpen our skills and lead to better outcomes. My worry, however, is that it is easier to build AI systems that remove the human to reduce costs than to build AI systems that supplement human ingenuity and enrich the human experience.
“When we use AI to accomplish tasks we have already mastered, it can create economies of scale that allow humans to focus on more important and meaningful work. Inversely, if we use AI before we learn how to do those tasks ourselves, it will rob us of important scaffolding and the experience of learning by doing. For example, AI does an amazing job at synthesizing and summarizing existing text. However, if we don’t teach our children the process of summarizing and synthesizing text for themselves, we rob them of a chance to deepen their ability to think critically.
“For the most part, AI is programmed to be subservient, allowing us to be the masters in the human-AI relationship. This makes having a relationship with AI fairly easy, in fact a lot easier than working with humans. Collaborating with humans is different. People are messy. Relationships with people involve conflict, resolution, power dynamics and unpredictability.
“Just like a butterfly must take action to struggle to get out of a cocoon, humans benefit from some level of struggle that is often largely removed in human-AI interactions. Therefore, we need to be aware that we must not replace the core experiences of human-to-human communication with AI. We learn from the struggles we experience in navigating differing human values and the many nuances of the human experience.
“AI is the ultimate ‘mansplainer.’ It tends to have high levels of confidence, despite often lacking competence and embedding incorrect information within a response that may have an equal or greater amount of accurate content. Because AI’s ‘voice’ – be it written or spoken – can seem so convincing, many people who are otherwise competent or in the process of learning competency rely heavily on it for their writing and research, especially if they lack confidence that their work is ‘good enough.’ An over-reliance on AI, to the extent that we allow it to be the authority on what is good (over the imperfections of humans), is dangerous. I would rather see a student struggle with language and thought to express their own ideas than to see them produce a perfect essay written by AI.”

Russ White
A Three-Way Division of Society May Occur Among the Tech Elites, the Workers and Those Who Prefer to Live and Work in a Low-Tech, Hand-Made, Alternate-Economy Setting
Russ White, a leading Internet infrastructure architect and Internet pioneer, described a potential division of humans based on their level of tech use and uptake, writing, “I can see society being divided into three distinct parts in 2035.
- “Tech bros, who run and control things: the political, social and technological systems, including most social media and AI systems. These people will interact with AI, both using and controlling it.
- “Workers, or ‘economic units,’ who follow the instructions given to them by some AI or another to ‘do a job.’ These people will be trying to build families and communities but will work at the whim of the AI systems controlling their lives. This group will include everyone we consider ‘in the trades’ today, such as electricians, plumbers, builders, carpenters, drivers of all kinds, warehouse workers, etc. These people will play the role of the obedient subjects of AI.
- “Outsiders. People who have moved out into more-remote locations and are creating an alternative economy by trading directly with one another and tapping into the desires of people in the other two groups for ‘handcrafted’ work. They will sell their ‘lifestyle’ as an aspiration, hoping to help those in the other groups believe that ‘low-tech life is possible.’ The Outsiders will live on intergenerationally owned land. Most in the general population will consider them to be ‘dumb, unintelligent and uneducated.’ The Outsiders will not be supported by the others in times of emergency, such as natural disasters, and will largely be considered ‘poorer’ than people in the other two groups. They will eschew material wealth, preferring a form of ‘benevolent ignoring.’ They will have no political power, hence their entire existence will be at the whim of those in the other groups, who might decide, for example, that they want to use property X, lived on and owned by the Outsiders, for purpose Y.”

Mark Davis
‘AI Is Leading Us into a Digital Plutocracy in Which a Handful of Multi-billionaires (Among the Richest People on Earth) Make the Machines That Decide Human Affairs’
Mark Davis, a professor in the school of culture and communication at the University of Melbourne and an expert in the changing nature of public knowledge, wrote, “What happens when we look at AI from an instrumentalist point of view? Quite quickly we see how neatly it sits within narratives of human technological progress always reaching toward new horizons. We can predict quite safely that over the next decade medical research will advance in leaps and bounds. The rapid advances in digital imaging that began in the 2010s continue to accelerate and are no longer confined to data acquisition. AI-driven diagnosis, for example, has considerable potential to improve patient cancer outcomes.
“AI also has considerable potential to address environmental problems through, for example, analytical mapping of soil erosion or greenhouse gas emissions, and to address management problems through its predictions of market and human behaviour – even to assist with governance and resource allocation. We can expect rapid advances in science more generally. Driven by advances in sensor technology alongside AI, every field from astronomy to archaeology to zoology will have its renaissance.
“And yet, there are threatening clouds. This scientific progress will be driven by data that doesn’t belong to specific creators. What about data that does? Generative AI has begun to extrapolate trends in the creative industries that were already evident by the mid-2010s. Painters, musicians, graphic designers, novelists, filmmakers, illustrators, cartoonists, animators and scriptwriters, already feeling the strictures of precarity, are among the lowest-paid of labourers, edged out by tiny commissions on streaming services and other online creative platforms, in a market heavily weighted to consumers. Imagine, for example, trying to make a living as a musician given the 100,000 new tracks uploaded to Spotify every single day.
“Journalism, too, will be significantly impacted. News media were already short-changed by the shift in advertising spending from hundreds of small outlets across radio, television and print news to Alphabet (Google) and Meta (Facebook, Instagram et al). Generative AI enables the work of a handful of news generators to be endlessly recycled. This is already happening with second- and third-tier online news and reviews sites.
“Looked at from a democratic perspective, AI is a disaster. The shift is epistemic. All those fusty human gatekeepers – editors, publishers, producers – who were sidelined by algorithmic recommendation engines represented a flawed society that, despite its raced and gendered injustices, nevertheless strained towards civic and creative ideals.
“As the most recent development in platform capitalism, the introduction of AI is an arms race and land grab all in one, driven by the hype cycle and demands of venture capital more than any civic ideals. Already the AI arms race has seen the creation of services people didn’t really ask for or need, and the instantiation of those services in the platforms and devices that people use every day, whether they are wanted or not.
“If nothing else, we are being given a lesson in the arbitrary power of platforms over our lives.
“The deep democratic problem with AI is that it takes us another step closer to a digital plutocracy in which a handful of multi-billionaires, many of them among the richest people on Earth, make the machines that decide human affairs. Already, the narrow ownership of digital platforms means that in practice the public sphere is privatised and controlled by a plutocracy. AI extends this model.
“Just as the original mission of platforms was to expand the extractive domains of capitalism into the personal lives of users, so the contest to further advance generative AI is in practice a competition among them to expand their extractive powers into every domain of human knowledge and experience, past and present. AI in this respect is a further step in the commodification of knowledge. With knowledge goes power. AI is a tool for the hegemonic ceding of power from its traditional sources in the state, the media and the university, to Silicon Valley.
“Recently we’ve seen that some of this small group of plutocrats seek more than technocratic power, sacking fact checkers, adjusting algorithms, dictating editorials and using their platforms as a bully pulpit in the pursuit of political influence. These developments represent a new stage in what has been called techno-feudalism, divided between ‘serfs’ and digital landholders/rentiers.
“AI also comes at enormous environmental cost. It has been estimated that global AI use will soon consume six times more water annually than the country of Denmark. A ChatGPT request requires 10 times more energy than a Google search. AI, like all computation, relies on rare earth metals that are often mined unsustainably.
“At present the AI hype cycle is close to its peak. As in the case of digital platforms more generally, the hype cycle is being used to justify a ‘move fast and break things’ ethic in the name of maintaining U.S. technological hegemony, with little regard for potential downstream impacts. The lack of public debate and the recent loosening of governance over AI lend further weight to arguments that digital technology has ultimately not served democracy well.”

Marc Rotenberg
The Two Prominent Scenarios for the Future: AI Helps Enable Human-Centric Progress in Support of Fundamental Rights | AI Diminishes Rights, Agency and Open Societies
Marc Rotenberg, editor of “AI Policy Sourcebook” and director of the Center for AI and Digital Policy in Washington, DC, wrote, “We can begin to see two different scenarios for the AI future. In one, AI augments the work of people, provides new insight into social and economic problems and offers new solutions that we may choose to adopt based on our own judgment. Fundamental rights, the rule of law and democratic institutions are secure. In this human-centric view, AI is one of many tools available to society, one of many techniques that enables human progress. But there is also an alternative scenario in which AI displaces the work of people, embeds current social and economic problems and conceals outcomes in layers of complexity and opacity that humans simply come to accept. The structures that maintain free and open societies begin to diminish. There are clearly important policy choices ahead.”

Christopher Riley
Most Humans Will Be More Empowered and Enlightened, But Jobs Will Be Lost As the ‘Consequence of Efficiency is Always Less Need for Human Effort’
Christopher Riley, executive director of the Data Transfer Initiative, previously with R Street Institute and leader of Mozilla’s global public policy, wrote, “Although 2035 is a full decade away, I don’t believe we will have anything that feels to the expert to be an ‘AGI’ that is on par with human mental flexibility and agility. LLM-based learning systems will have peaked in their raw power by the mid-2020s, and the advancements since then will have been in their implementation and embedding, their increasing presence and ubiquity as assistants in all forms of information research, retrieval and organization to further implement the will of the human directing their operation.
“As a consequence, in many ways, ‘being human’ will be a more-empowered and more-enlightened state – less dependent on inefficient tasks and freer to be creative and to iterate on ideas and strategies with less lost time and effort. However, AI-based systems will need ‘manual’ (i.e., still digital, but with fewer actions performed by learning systems) overrides, or backups, in virtually all implementations. AI will never not make mistakes, and when it does, its mistakes will be virtually impossible to correct in systemic or guaranteed ways.
“We as humans may continue to become more and more prone to impatience and frustration as systems we increasingly depend upon become more powerful, yet also periodically unreliable and unsolvable.
“We’re entering the era of AI ubiquity having internalized, to a degree, the imperfection of these systems – in contrast to the growth in the ubiquity of computers themselves, which were occasionally imperfect but in ways that felt somehow fixable. Perhaps we’ll accept that the limitations of the systems are in fact not our fault and embrace the manual override-type options that must in most circumstances be available and end up in the best of all worlds – empowered to take advantage of the benefits of embedded AI systems, yet not entirely trapped by them because we cannot, and therefore will not, depend on their functioning in all circumstances for any critical endeavor. This will, of course, be further improved if AI systems are built to be portable and interoperable, as I have written.
“With all of this said, it’s less significant in my mind to consider the individual human being in the AI future, and more the human as a member of society. Most of our actions and will are driven by our role as a human interacting with other humans, after all. And there are some forks ahead in the road, and I can’t predict which path we’ll take at any of them.
“One thing that seems certain is that there will be job disruptions. Tasks focused on relatively menial information organization and production – like creating low-value advertising copy or conducting basic research – will be supplanted entirely by AI, leaving more and more people without employment. There’s no backup plan for these humans; the consequence of efficiency is always less need for human effort.
“While there are still many things that developed societies need human hands to do, like building and maintaining physical infrastructure and providing health and community services, the companies making billions off of AI don’t suffer directly the consequences of underinvestment in these functions. We’re on a track for further and further economic inequality and tension verging on class warfare.
“How this affects politics, the arena where we could formulate and execute solutions to inequality, remains to be seen. I have written that I believe there is a chance that AI will fundamentally improve democracy by creating a more widespread and more-accurate shared basis of truth. Should that come to pass, we may find ourselves in a world where the groundswell of democracy will push pro-tax, pro-public-investment leaders to the forefront.
“But – to shift things back to the individual human – if instead the critical mass turns full Luddite and we disbelieve what we find on computers, we could find ourselves reverting to much more primitive ways of thinking about and understanding the world.”
Douglas Rushkoff
AI Could Move Society Toward Standardization to the Mean
Douglas Rushkoff, author, documentarian and host of U.S. National Public Radio’s “Team Human” podcast, who studies human autonomy in a digital age, wrote, “My main thought right now is that AI will continue to revert us to the mean. I don’t need to explain how AI works here, or its tendency to push things to the probable outcome. I believe it works that way not only in particular responses but in its overall impact. The media environment of AI pushes society toward the mean. This is happening on a personal and political level as well. Our governments are moving toward feudalism and authoritarianism, which is the most common form of government in Western civilization. The same reversion shows in levels of state/national violence and in forms of thuggery and mob rule. We have yet to see whether returning to feudalism will be better or worse for the world at large than the efforts until now toward Enlightenment-based democratic principles, which fell prey to neoliberalism. But it doesn’t look so good.”
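Rushkoff’s ‘revert to the mean’ claim can be made concrete with a toy decoding experiment: a generator that always emits its single most probable continuation collapses a varied distribution onto its mode. The vocabulary and weights below are invented purely for illustration.

```python
import random
from collections import Counter

# Invented next-word distribution for some prompt; weights are illustrative.
next_word_weights = {"conflict": 0.40, "reform": 0.25,
                     "cooperation": 0.20, "renewal": 0.15}

def sample(weights):
    """Draw one continuation in proportion to its probability."""
    words, probs = zip(*weights.items())
    return random.choices(words, weights=probs)[0]

def most_probable(weights):
    """Greedy decoding: always return the single most likely continuation."""
    return max(weights, key=weights.get)

random.seed(0)
print(Counter(sample(next_word_weights) for _ in range(1000)))   # variety survives
print(Counter(most_probable(next_word_weights) for _ in range(1000)))
# -> all 1000 greedy "generations" are the mode: output reverts to the
#    most probable answer, the statistical mean of the source culture.
```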
This section of Part II features the following essays:
Marina Gorbis: By 2035 we will be surrounded by AIs: bots that work for you, bots that work with you, bots that work on you and bots that work around you and with each other.
John Laudun: ‘AI’s augmentation of humans’ abilities to process information and make decisions will largely be institutional in nature, thus its impact will not be what we desire.’
Tim Kelly: AI is not under control or predictable, and its ‘black box’ algorithms are worrisome, but it will rapidly advance human activities and boost performance and adoption of new tech.
Michael Kleeman: Productivity will rise, but trust will be a victim and there will be less real innovation and a duller world as existing systems become reinforced and perhaps self-reinforcing.
Kevin Leicht: The economic concentration of this tech will allow a very small number of people and organizations to ‘enhance’ human cognition in ways they see fit.
David Porush: AI ought to be prescribed as the safest and most-effective psychotropic drug, one that spurs the mind and soul to embrace an ever-expanding cosmos.
Stephen Abram: We can be fooled into perceiving AIs as sentient, but generalized intelligence is not human intelligence. There are risks in sentient AI, but will AI ever be self-aware?

Marina Gorbis
By 2035 We Will Be Surrounded by AIs: Bots That Work for You, Bots That Work With You, Bots That Work On You and Bots That Work Around You and With Each Other
Marina Gorbis, executive director of the Institute for the Future, wrote, “When we talk about the impact of AI on humans, I often think about William James’s book ‘The Varieties of Religious Experience: A Study in Human Nature’ published in the early 1900s. The key argument he makes is that while institutional arrangements and doctrines might be uniform in different religions, people’s experiences, behaviors and emotional responses to these arrangements are complex and highly individual.
“Similarly, AI tools and capabilities as they become ubiquitous and embedded into every aspect of people’s daily lives – work, social interactions, leisure and creative processes, entertainment – will generate a variety of collective and individual human experiences. There will be a whole panoply of agents that will interact with people in a variety of ways. At the Institute for the Future, we call them the ‘bestiary’ of new AI entities and relationships. They fall into four main categories: bots that work for you, bots that work with you, bots that work on you and bots that work around you (and with each other). We will simply be surrounded by them.
“Of course, technologies do not live in a vacuum but are shaped by the social, cultural, regulatory and institutional environments in which they operate. Stricter regulatory environments, copyright laws, data access rules and many more factors shape their acceptance and application. But on a human level, we are already seeing early signs of how people will negotiate relationships with these non-human agents in their professional and personal relationships.
“Some will resist their adoption and eschew personally using or interacting with them at all possible costs, some will enthusiastically experiment and adopt them, some will passively accept the inevitable and acquiesce to using the tools and some will work on preserving the ways of doing and interacting from the pre-LLM era in their communities and personal lives. This variety of AI-human relationships is likely to be the site of personal and institutional battles for the next 10 years. In the end, AI will re-shape our society in the same way the Gutenberg press did so: for better and for worse and with lots of battles along the way.”

John Laudun
‘AI’s Augmentation of Humans’ Abilities to Process Information and Make Decisions Will Largely Be Institutional in Nature, Thus Its Impact Will Not Be What We Desire’
John Laudun, a researcher of computational models of discourse who teaches narrative intelligence at the University of Louisiana-Lafayette, wrote, “Over the next 10 years, AI will largely serve large organizations. That will mean only more trouble for working Americans and will lead ultimately to a drop in productivity and innovation. As more organizations feel themselves obligated to incorporate AI into their business, they will do so at the expense of hiring new employees, whose work they will see as likely to be readily replicated by AI. Moreover, many of these organizations are of such a scale and such a nature that their use of AI, which will be under-informed (because AI is now over-hyped), will often produce negative results for their customers.
“The adverse effects of applying algorithmic solutions to human-complex problems have already been established in both the judicial and healthcare systems. Sentencing software that was supposed to have made the process more objective and fair turned out to be racist because it was trained on prior cases. The same has been revealed in the insurance industry’s use of algorithms either to deny claims for health care or to reject homeowner policies, some of which have been paid into for decades, because an algorithm had determined, via an image taken by a drone flown over the home without the owner’s knowledge, that their roofs were too far out of repair. Attempts to appeal errors of fact or offers to repair apparent damage were refused. You have to pay more because the software said so.
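The mechanism behind such outcomes is simple enough to show in a few lines: a model fit to biased historical decisions learns the bias, not fairness. This is a deliberately tiny sketch with invented data, not a description of any real sentencing or claims system.

```python
from collections import defaultdict

# Invented toy data: past "harsh sentence?" decisions, biased by group.
past_cases = [
    {"group": "A", "prior_offenses": 1, "harsh": 1},
    {"group": "A", "prior_offenses": 0, "harsh": 1},
    {"group": "B", "prior_offenses": 1, "harsh": 0},
    {"group": "B", "prior_offenses": 0, "harsh": 0},
]

# The simplest possible "risk model": P(harsh | group), learned by counting.
totals, harsh = defaultdict(int), defaultdict(int)
for case in past_cases:
    totals[case["group"]] += 1
    harsh[case["group"]] += case["harsh"]

for group in sorted(totals):
    print(group, harsh[group] / totals[group])
# -> A 1.0
#    B 0.0
# Identical offense histories, opposite predictions: the model is
# "objective" only in that it faithfully reproduces the bias it was fed.
```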
“Large language models, the current instantiation of AI in the public imagination, are perhaps more robust and subtle than their more obviously statistical machine-learning cousins, due to the sheer size of the data upon which they have been built, but they are still statistical machines. Nothing more. Yet the results they produce seem so human. Creating a chat interface for GPT may have been the most innovative marketing move of the early 21st century. Too many consider these AIs to have human-level capabilities despite the fact that humans can develop the same competency with far less data and computational power.
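The ‘statistical machine’ point can be demonstrated at toy scale: a bigram model ‘learns’ language purely by counting which word follows which, then emits fluent-looking strings with no understanding behind them. Production LLMs add embeddings, attention and vastly more data, but this is the statistical core of the idea.

```python
import random
from collections import defaultdict

corpus = (
    "the model predicts the next word . "
    "the model counts the words . "
    "humans understand the words ."
).split()

# "Training" is nothing more than tallying which word follows which.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start="the", length=8, seed=1):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # sample by observed frequency
    return " ".join(out)

print(generate())
# Fluent-looking output emerges from raw co-occurrence statistics alone.
```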
“Humans are more context-aware and responsive to subtle forms of interaction than AI. While the adaptability of AI to a wide variety of situations has been impressive, we have already seen repeated instances of the cracks that begin to show in such moments.
“Much of this falls at the feet of people not understanding what AI is and in the process granting it credit for a humanlike level of cognition it does not possess. Much of the responsibility for this lies at the feet of the large corporations who own the technology and who are eager to capitalize, quite literally, on their investment. Combine this with organizations keen to rid themselves of the workers who do the kinds of repetitive tasks that automation largely does well, and you have a perfect storm of sellers and buyers. But all of this is, and will continue to be, B2B, business-to-business. As the birth cliff of 2028 approaches and some people worry that there won’t be enough people to do the work, I worry that there won’t be enough jobs for people. I also worry about just how much can be automated: do we really want health care claims automated? So long as a person remains in the loop, there is a glimmer of hope that empathy may come into play. There is no such hope with AI.
“We may very well see the end of so-called ‘bullshit jobs,’ the jobs that seem on their surface to be meaningless because of their repetitive or pass-through nature. But bullshit creates two things that are important to innovation: boredom and friction. If people don’t have the opportunity to be paid while frustrated or paid while daydreaming, there will be far fewer opportunities for creative individuals to reinvent themselves or create entirely new categories of products or solutions. And while some might argue that it won’t be long before AI achieves divergent thinking, they miss that the one important dimension of creativity is in acceptance. If fewer people are working, the market for products will be smaller, decreasing the overall creativity of the organizations and the society which they serve.
“Limiting our scope to the next 10 years, from 2025 to 2035, and to the American scene, it seems clear that AI’s ‘augmentation’ of the human abilities to process information and make decisions will largely be institutional in nature and that the impact will not be one we desire. We can only hope that enough independent thinkers and practitioners continue to lurk in universities and small businesses that real innovation will continue to percolate and make possible the kind of AI revolution so many dream of.
“My fear is that, given that the resources required to optimize AI largely lie with larger institutions and given the American policy environment – at least currently – for those institutions to be privately held and thus optimized for profit and not for public good, individuals will more often than not be the object of AI and not the subject.”

Tim Kelly
AI Is Not Under Control or Predictable, and Its ‘Black Box’ Algorithms Are Worrisome, But It Will Rapidly Advance Human Activities and Boost Performance and Adoption of New Tech
Tim Kelly, lead digital development specialist at the World Bank, previously head of strategy and policy at the International Telecommunication Union, wrote, “The impact of AI on human development is likely to be incremental and relatively easily assimilated into daily use of computers, phones, cars, etc., rather than sudden, dramatic and disruptive. The change will benefit the many, to a modest extent, especially in high-income countries, but will have a negative effect on a few, especially those with limited or unaffordable access to digital technologies.
“To a large extent, the impact of AI will be similar in nature to other General Purpose Technologies, such as mobile phones, the internet, satellite technology, etc. But this technology will be different in a couple of important ways. The first is that AI is ‘self-learning,’ which means it is not entirely under human control or predictable, and the algorithms underlying AI will largely be a ‘black box’. This brings some exciting possibilities but also risks.
“The second difference is the ability of AI to rapidly and cheaply bring scale to human activities. This implies, for instance, greatly reduced transaction costs, a big enhancement in convenience and the possibility that earlier innovations that had been overhyped – such as cryptocurrencies or self-driving vehicles – may finally become mass market products.
“The advent of AI will differentiate more starkly between a few ‘producer’ countries and firms, and many more ‘consumer’ countries and firms. But for those economies without the capacity to make huge investments in ‘compute,’ it may be possible to effectively substitute high-performance gigaband networks and cloud computing for data centers and on-site number crunching.”

Michael Kleeman
Productivity Will Rise, But Trust Will Be a Victim and There Will Be Less Real Innovation and a Duller World As Existing Systems Become Reinforced and Perhaps Self-Reinforcing
Michael Kleeman, senior fellow and director of the Institute on Global Production and Innovation at the University of California-San Diego, wrote, “The applications of AI will likely have three major impacts in our social, political and economic landscape. Some of these will be positive, some will be materially disruptive and some will be deeply destructive to our lives.
“On the positive side, AI and machine learning will be a new form of industrial revolution, replacing human labor with machines. This will increase productivity, create new careers and allow humans to see patterns in data (of all kinds) that are hard for us to see due to a low signal-to-noise ratio or just because we have been trained to look elsewhere for meanings. As with the revolution in programming, we will become more designers than creators – conceiving but not making – and that will likely reduce innovation over time, leading to a duller world with less real innovation and delight as existing systems become reinforced and perhaps self-reinforcing.
“On the downside, this will cause economic displacement, perhaps at a scale not seen in over a generation. This time, however, the impacts will be felt by a wider range of professions, and the loss of jobs, especially higher-paying ones in finance and similar fields, will be truly disruptive and will accelerate the concentration of wealth, especially in wealthier nations.
“The political ramifications of that have always been disruptive and the personal costs tremendous. The result may be a duller world in which the middle and even upper-middle class constricts and, with it, associated institutions such as colleges and universities.
“But perhaps the most significant damage will be the negative impacts on interpersonal trust, accelerating the trends today driven by social media ‘disinformation echo chambers,’ but fed with data (media, words, images, sounds) whose provenance is initially extremely hard to determine and ultimately so common that we cannot begin to test its reality. The cascading effects of this can be essentially damaging to society, perhaps further entrenching the power structures and leading to a loss of human delight. And, coupled with the capability of the technology to enable data fusion and analysis from a broad range of sensors and signals, it enables a surveillance state that further erodes human trust.”

Kevin Leicht
The Economic Concentration of This Tech Will Allow a Very Small Number of People and Organizations to ‘Enhance’ Human Cognition in Ways They See Fit
Kevin Leicht, professor emeritus at the University of Illinois Urbana-Champaign and research scientist at the Discovery Partners Institute, Chicago, wrote, “On balance, I do not see a good outcome coming from the deepening dependence of human intelligence on AI. The reasons for this are complex, but they can be summarized fairly easily:
“First, the question of whether AI, in the abstract, could be a force for positive change in human life is a good one. But ‘in the abstract’ is not where life is lived. Life is lived ‘in the concrete.’ And it is ‘in the concrete’ where the implications are largely negative. Evaluating what AI will do apart from who will do it and how they will bring it about is not realistic.
“Second, most of the individuals and organizations involved in developing AI have little to no understanding of human life or human interaction. Zero. Nada. Zilch. I have worked with these people for years (I am a sociologist trained in computational social science) and their understanding of social life is horrifyingly bad. They believe they are entitled to interfere with the most intimate and basic details of someone’s life. They have amorphous ideas about ‘society’ that they think are useful (they are not). Most of them have stilted or non-existent social lives and actually get out very little. Their understanding of social groups, social interactions, social networks, political history, cultural history, etc., is virtually non-existent. The ideas they come up with almost inevitably violate people’s personal autonomy and civil rights – absolutely without batting an eye. If you put a group of them together in one room you don’t get better results; you end up with ideas that are ABSOLUTELY guaranteed to violate people’s autonomy and civil rights in more-effective ways. Other people are simply objects to be played with. This has been going on for years, I have extensive experience, and no, I am not kidding. All of this is a bit like taking dating advice from a 28-year-old virgin who has never left their parents’ basement.
“Third, apart from problem (2) (which is extremely serious because the average computer science graduate can’t tell the difference between a Kiwanis Club and a herd of cattle) is one other big problem – economic concentration. If I were to bet on what will happen here (based on what has happened up to now, and the best predictor of future behavior is past behavior), there will be one or a few entities that will control the AI/human interface. Those few entities will make their founders billions of dollars. Those entities will erect barriers to entry that keep most competitors out of the market entirely. The founders of these companies will likely share the defects reflected in (2) above. But even if they didn’t, the economic concentration will mean that a very small number of people and organizations will be ‘enhancing’ human cognition in ways they see fit. Does this sound like a good idea to you? And what does ‘see fit’ mean? It likely means what it has meant in the social media realm we’ve all suffered through up to now – a few actors, some with noble intentions and others not, controlling vast amounts of bandwidth and space in the name of generating enormous profits for themselves. The idea that this will make us better off is just plain nuts. It will make these entities opulently wealthy, and whether that makes the rest of us better off will be completely irrelevant to the calculus of those few organizations.
“Fourth, I know, I know, I know – ‘This time will be different.’ If social science teaches us anything, it is that this time will not be different. I would like to be wrong, but I suspect I am not. Until we break tech-bro syndrome, decide that we actually have anti-trust laws we’re going to enforce and come up with a more enduring set of ethics surrounding what computers do that is NOT written by ANYONE from a computer science/engineering program, my predictions stand. Remember how social media was going to be liberating? Just, exactly, what did it liberate us from? Why, it was reality! Now imagine a whole AI-human interface driven by this same level of abject absurdity.
“Unless the underlying basis for technological innovation and adoption changes, the AI-human interface will not better society or individuals. A radically changed technology development landscape might produce better results, but I don’t see any evidence we’re interested in doing much to create that landscape. I’m so convinced of this, I’m willing to sign it! Kevin T. Leicht.”

David Porush
AI Ought to Be Prescribed as the Safest and Most-Effective Psychotropic Drug, One That Spurs the Mind and Soul to Embrace an Ever-Expanding Cosmos
David Porush, writer and longtime professor at Rensselaer Polytechnic Institute, responded, “I showed my granddaughter a video of a player piano performing Mozart’s Piano Concerto in C Major. She asked me, ‘Why should I bother to play piano at all?’ No one should concede defeat in this new chapter of the age-old contest between John Henry and the machine. AI will not replace what makes us human; rather, it will push us – through competition, inspiration, and collaboration – to refine and expand our unique capabilities. As AI increasingly colonizes domains once thought to be the exclusive province of human intelligence, it challenges us to discover new ways to assert our humanity.
“AI is already reshaping how we teach, learn and assess knowledge. It forces educators to reconsider what it means to write, to think and to earn a grade. After reading thousands of essays, I can say with certainty that ChatGPT would earn at least a B+ on most of them – including ones on self-consciousness and epistemology. In other words, AI should already be radically transforming the classroom in every way except for its most irreplaceable aspect: human presence, warm-body intimacy. The same is true for journalism, research, coding, design, medicine – the list expands each time I revisit it. Some would say it liberates us to completely reimagine education, work, art, creativity and knowing itself.
“It has taught me, as it will teach countless professionals and students, to refine my questioning to sharpen the muscle at the core of scientific inquiry, Talmudic discourse and Socratic dialogue. Will AI improve our morals? No. Will it eradicate our inclinations toward sin? Hardly. Instead, it will invent new ways to do both – offering tools for both crime and security, for both deception and enlightenment. AI ought to be prescribed as the safest and most-effective psychotropic drug, one that spurs the mind and soul to embrace an ever-expanding cosmos.”

Stephen Abram
We Can Be Fooled Into Perceiving AIs As Sentient, But Generalized Intelligence Is Not Human Intelligence. There Are Risks in Sentient AI, But Will AI Ever Be Self-Aware?
Stephen Abram, futurist at Lighthouse Consulting and director of the Federation of Ontario Public Libraries, wrote, “Humanity has always found it difficult to define what it means to be human, what it means to be alive, whether or how we differ from other life forms. There are arguments about nature versus nurture and differences of opinion and varying points of view across the fields of genetics, philosophy, languages, meta-cognition, brain research, education, pedagogy/andragogy, anthropology, ethnography, cultural studies and so many more. The differences of opinion emerging from individuals, groups and sub-cultures can build barriers to our understanding of ourselves.
“Suffice it to say, we don’t really know, or at least have wide agreement on, what it means to be ‘human.’ We don’t have a great definition of sentience in the context of AI. Indeed, it is proven that some people benefit from therapy to reach maturity and greater resilience. Can we ask ourselves how we apply that to AI models? When we don’t really know what it means to be human, in all our varieties, how do we measure the potential emergence of ‘humanity’ in AI?
“At this point, the most advanced AI is learning like a child, not an adult, with some level of expertise (with hallucinations) but mostly in narrow categories. Understanding that is critical to evaluating its progress to adulthood. Indeed, the metaphor that AI is moving into its teen years is apt, as we consider what it might become with all the ramifications of emergence as a fully formed adult.
“Generalized intelligence is not human intelligence. Performative emotional intelligence is not really what we expect from a flesh-and-blood human. The large data resources from which LLM learning models draw their responses contain recorded content, much of it flawed, biased or incorrect, featuring just about every human strength and weakness. We can imagine what artificial emotional intelligence can be, but it is not human. It’s artificial. That said, it can fool us into perceiving sentience. We can imagine a coming singularity. While it’s just imagination today, we need to consider guardrails and future decision-making prior to its potential arrival. My humanity tells me that there are risks in sentient AI models. Will AI ever be that self-aware?
“That said, we do know that being human is affected by many factors – genetics, experience, values, mental health and so many more. It is very complex and not easily understood as an individual – let alone on a global basis. If we perceive (or it is actually true) that AI is ‘human,’ how do we judge its information, conversational experience and decision results? Since it ‘learned’ everything from the digital record, it must, by definition, contain the good and the demonstrably bad – bigotry, racism, sexism and so much more beyond the tip of the iceberg, including hallucinations. How do we judge the mental health of AI? What are the components of the decision and awareness of trusting AI’s sources?
“For example, one initial step in AI has been imagining and creating AI-driven robots. As with many technological inventions, we try to enable them in forms we’re more comfortable with. In ‘The Jetsons’ futuristic cartoon series, people would refer to the robot maid as ‘her.’ Why? Because equating her humanlike vacuuming and answering the phone made sense. Today we have voicemail answering the phone and Roombas cleaning the carpets. Robot soldiers have arrived, along with military drones with laser targeting.
“Achieving simple tasks isn’t nearly as complex as true thinking, creating, innovating and problem-solving. Some tasks, like vacuuming and recording a message, can be done with no emotional intelligence. On the other hand, more complicated ones may need emotional intelligence beyond the performative or polite. ‘Step-and-fetch-it’-style document handling, answering, data entry or information retrieval seems designed for AI agents, while understanding a person and the needs behind a request is infinitely more complex and valuable. Determining the difference between complicated tasks and complex efforts is the key to AI progression. This is the challenge facing the development of AI agents today. That’s a big leap.
“As it stands today, it’s a challenge to navigate all of humanity’s collective knowledge, let alone intelligence. The digital record – on which just about 100% of all AI and LLM models are trained – is weak on many fronts. At its foundation, AI is retrospective. It will be a while, if ever, before positive cognitive leaps can be made by AI beyond narrow tasks based on a narrower range of high-quality sources, such as clinical diagnoses, where we already see hints of these tools’ potential for good results. There will be benefits there when the tools are paired with highly trained humans as the ultimate choosers.
“Choosing the frameworks for decision-making using AI involves real foundations in ethical and moral behaviour as well as sensitivities to the situational contexts of culture, interpersonal dynamics and so much more. In society, there are those of us who respect and embrace diverse perspectives, the role of neurodiversity in making positive changes, cognitive leaps, seeking insights and other contextual factors that are nearly erased in large language models and big data.
“Can AI tools leap above tasks and retrospective learning to being something akin to a human? Being human involves being wrong sometimes. It involves forgetting. It involves being sorry. It involves regret. To be human is to embrace the good and the bad and learn from all experiences. That’s a truth worthy of a discussion!
“How will AI change our lives? A lot, and much of it positive. I look forward to those innovations. I don’t look forward to AI making human mistakes if it (and we) don’t learn from them. While I remain positive about the social and economic potential of AI, I withhold judgment on whether it can – or should – become ‘human.’”
The following section of Part II features these essayists:
Alf Rehn: Will most everything in 2035 be standardized to the mean? Maybe. Maybe not. Humans may find themselves partnering with either ‘the mediocrity engine’ or ‘the octopus.’
Dave Karpf: ‘The trajectory of any given technological innovation bends toward money’; the imaginary world in which everyone has a reliable personalized AI butler is exceptionally unlikely.
Steve Rosenbaum: Life in 2035 is a continuous economic transaction that we never consented to but can’t escape run by an economic aristocracy using AI to extract value from human existence itself.
Alexandra Whittington: In the age of AI, we must recognize the economic value of care work and provide higher wages and better support for those who serve humanity in high-human-touch roles.
Jim C. Spohrer: Robots for home and business use will become useful and popular and AI personal assistants will handle most communications under human supervision.
Clifford Lynch: Professional AI agents are likely to offer consultations in legal, medical, accounting, interior decorating, career counseling and other aspects of human life.
Jason Resnikoff: The effect of AI on being human is that it will be alienating due to the unequal power relations it mediates between corporation and individual, rich and poor.

Alf Rehn
Will Most Everything in 2035 Be Standardized to the Mean? Maybe. Maybe Not. Humans May Find Themselves Partnering With Either ‘The Mediocrity Engine’ or ‘The Octopus’
Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark and head of the Center for Organizational Datafication and its Ethics in Society, wrote, “There are many faces to hybrid intelligence in 2035, but I expect that two aspects of it will be particularly noticeable. I refer to them as ‘The Octopus’ and ‘The Mediocrity Engine.’
“The Mediocrity Engine: There’s no denying that by 2035 AI has made a lot of people more middling. They don’t do terrible work but they don’t do great work either. Their emails are perfectly adequate, and their own written output, whilst grammatically correct, is often devoid of spark and wit. They cook a lot of the same food in the same way. Granted, while on holiday they see more sights, but they are nearly always the same sights, often from the same hotels. Nearly everyone is ‘average’ in 2035, using their Mediocrity Engines (also known as AIs) to generate good enough work, good enough text and good-enough lives.
“Poet John Betjeman turns out to have been a seer. He wrote this in ‘Slough’:
‘It’s not their fault they do not know/
The birdsong from the radio/
It’s not their fault they often go/
To Maidenhead
And talk of sport and makes of cars/
In various bogus-Tudor bars/
And daren’t look up and see the stars/
But belch instead.’
“The Octopus: By 2035 perhaps some will have resisted the call of the average and started working with AIs that do not aim to mimic humans and standardize everything to the mean and the median. They communicate with Octopodes, strange new intelligences that do not so much hallucinate as tell tales of the world from the perspective of entirely new intelligences.
“The people who take to working with an Octopus create work and text that is quite different from that of the Mediocrity Engine in ways both bad and good. Some of their work turns quite strange – alien even – but tends to do so in a way that at least stimulates the mind and raises questions. At other times, the meeting of ‘alien’ and ‘human,’ two very different intelligences both with their own strong suits, generates great leaps in thinking, highly creative works, true innovations.
“In 2035, different professions and personalities are drawn to different forms of AI. Mostly they can co-exist quite happily, but universities have become battlegrounds between those dedicated to mimicking the greatness of the ages and those trying to think in entirely new ways. It doesn’t take a genius to realize that the former group is well-aligned with the administration, where Octopus-like AIs are banned and the Mediocrity Engines reign supreme.
“The really interesting thing is what is happening among the kids of 2035. More often than not they have a Mediocrity Engine to help them with their homework and their assignments, like a really nerdy friend you can always call upon. When the kids want to actually learn something, they call upon a plethora of intelligences: Octopodes and squirrels and termites, oh my! Cat minds and cockroach intelligences, anything not to think in ways as dull and lifeless as those of truly lesser intelligence – their parents.”

Dave Karpf
‘The Trajectory of Any Given Technological Innovation Bends Toward Money’; the Imaginary World in Which Everyone Has a Reliable Personalized AI Butler is Exceptionally Unlikely
Dave Karpf, associate professor of media and public affairs at George Washington University and author of “Analytic Activism: Digital Listening and the New Political Strategy,” wrote, “My central expectation is that people’s relationship to AI a decade from now will be determined by other social factors – chief among them being the likely sharp decline of democratic institutions and the regulatory state and the unfettered, exploitative revenue models that develop for major AI companies as a result.
“The central thesis of the first piece of public writing I produced on this topic was that ‘the trajectory of any given technological innovation bends toward money.’ We still do not have even a faint glimpse of what a profitable revenue model for OpenAI or Anthropic or Mistral AI might look like. All of their current offerings are intriguing cash furnaces.
“We could imagine a world circa 2035 where every individual on the planet has a personalized AI agent-butler. The AI butlers could, by that point, be reliable and sophisticated, sparing us from a multitude of daily hassles and giving the mass citizenry the type of luxury that the extremely wealthy currently take for granted, with their reliance on retinues of (human) personal assistants. That future is technologically feasible. But it is exceptionally unlikely. It has been an imagined future dating back decades – the type of future revealed in Douglas Adams’s delightful film ‘Hyperland.’ We will never, however, end up with mass AI agent-butlers because there isn’t nearly enough money in it. The big money is going to be in scams, in advertising and – especially – in replacing existing large economic sectors (education, health care, law, etc.) with cheaper, less-regulated, less-effective competitors.
“Given the crack-up and capture of the regulatory state by tech billionaires, a decade from now, that’s where I expect we will be. AI will have made everyday human life worse, because private equity and big tech will buy up every company that has a significant user base and cut costs by developing AI that provides worse-but-cheaper alternatives.
“This isn’t technologically determined. It doesn’t have to happen. But, realistically, it is the path we will most likely be treading for the next decade.”

Steve Rosenbaum
Life in 2035 Is a Continuous Economic Transaction That We Never Consented to But Can’t Escape, Run by an Economic Aristocracy Using AI to Extract Value From Human Existence Itself
Steve Rosenbaum, co-founder and director of the Sustainable Media Center, author, filmmaker and founder of five companies in the media content sector, wrote, “We’re not just changing technology. Technology is rewriting what it means to be human – and who gets to profit from our transformation. Imagine a world where your AI doesn’t just predict your next move – it determines your economic destiny. Where algorithms don’t just track wealth but actively create and destroy financial futures with a line of code. Welcome to 2035: the year capitalism becomes a machine-learning algorithm.
“By 2035 the real power players aren’t tech billionaires anymore. They’re the autonomous corporations that can monetize human potential down to the most microscopic data point. Every thought, every desire, every potential choice becomes a commodity to be bought, sold and traded. Your personal data is no longer just information – it’s the new global currency.
“Banks? Obsolete. Traditional investment? A relic. Now, AI systems predict economic value before you even know you have it. A teenager’s potential earning capacity can be calculated, packaged, and sold before they’ve written their first resume. Your life becomes an investment portfolio, your human potential reduced to a predictive model.
“But here’s the razor’s edge: Whoever controls these algorithms controls everything. Not just markets. Not just governments. Everything. The most terrifying transfer of wealth in human history is happening in plain sight. We’re not just losing jobs to automation. We’re losing the entire concept of human economic agency. Your worth is no longer what you can do – it’s what the algorithm says you might do.
“Humans in 2035 aren’t workers or consumers. We’re walking data streams, our entire existence a continuous economic transaction that we never consented to but can’t escape. The future isn’t about artificial intelligence replacing humans. It’s about a new economic aristocracy that uses AI to extract value from human existence itself.
“Welcome to late-stage capitalism 2.0. The machines aren’t just watching. They’re collecting.”

Alexandra Whittington
In the Age of AI, We Must Recognize the Economic Value of Care Work and Provide Higher Wages and Better Support for Those Who Serve Humanity in High-Human-Touch Roles
Alexandra Whittington, a foresight expert at Tata Consultancy Services and co-author or editor of “A Very Human Future,” “Aftershocks and Opportunities” and “The Future Reinvented,” wrote, “The gender division of labor has long been viewed as an expected aspect of human society. While AI will take over many traditional white-collar and blue-collar jobs, one category it can’t beat humans at is caring.
“AI has the potential to change the status quo and engender a higher level of respect for women, whose work roles have often been concentrated in the ‘human-touch’ categories of caring for children, the elderly, and ill people; doing housework and other forms of domestic labor; social work; teaching and nursing.
“We should be laser-focused on taking advantage of AI productivity gains to better respect and support the humans whose life’s work is focused on caring for others. We have an aging society that places growing demands on families and especially on women. To alleviate social strain, society – as it is being reshaped in the age of AI – should be redesigned and well-enough funded to help absorb the impact of the time, financial cost, mental load and physical tasks taken on by the humans who carry the burden of high-human-touch roles. It is important across all aspects of this sector but it is especially needed to support the elderly in our graying society and to help young families find affordable childcare.
“The funding and harnessing of AI that is now occurring in legal, medical, human resources, entertainment, media and many other sectors will eventually benefit society. Now is the time to recognize the economic value of care work and provide higher wages and better support for professional care workers, advisors and mentors such as teachers, nurses, home health care professionals and those whose work is early childhood education and care for the developmentally disabled.
“If professional caring and mentoring becomes a high-paid career, more men will be encouraged to enter the care workforce and the women in it will be justly compensated. The societal transformations arriving in the age of AI could help start to dissolve the economic and social barriers that perpetuate gender inequalities at home and at work.”
Jim C. Spohrer
Robots for Home and Business Use Will Become Useful and Popular and AI Personal Assistants Will Handle Most Communications Under Human Supervision
Jim C. Spohrer, board member of the International Society of Service Innovation Professionals and ServCollab, previously a longtime IBM leader, wrote, “By 2035, the impact of AI will be noticeable in driverless vehicles, more robots (some humanoid) and day-to-day communications that are more automated and handled by individuals’ AI digital twins.
- “Driverless vehicles: A mountain of regulatory change will finally be passed and people will enjoy speedy, safe local transport, with many choosing not to buy a personal car. Just as the wired lines (landlines and local-area network connections) that linked people’s homes and businesses to the internet in its early days gave way to mobile smartphones, automobile manufacturers will sell local mobility as a service rather than a physical product. Automobile leasing will become cheaper and cheaper as vendors compete.
- “Robots: Local service providers will lease robots for use in people’s homes and for the construction industry (including deconstruction and reconstruction/maintenance). These robot tools will be supervised locally and by telepresence operators. Again, a mountain of regulatory change will be completed by 2035 to make this happen.
- “Communications: AI communications assistants will be realized. People will even be able to construct an AI ‘double’ of themselves to handle calls, emails and requests as assigned. Their digital twin will propose a completed response; the person will simply check it, make small modifications and approve. Routine communication will be noticeably improved and people will be able to have ‘polite’ interactions with many more people.”
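The draft-then-approve communications workflow Spohrer describes is, in effect, a human-in-the-loop gate. A minimal sketch of that control flow follows; draft_reply is a hypothetical stand-in for whatever model a ‘digital twin’ would actually call.

```python
def draft_reply(message: str) -> str:
    """Hypothetical AI 'digital twin' drafting step (stubbed here)."""
    return f"Thanks for your note about '{message}'. I'll follow up this week."

def send(reply: str):
    print(f"Sent: {reply}")

def handle_inbox(messages):
    for msg in messages:
        draft = draft_reply(msg)
        print(f"\nIncoming: {msg}\nProposed reply: {draft}")
        decision = input("[a]pprove / [e]dit / [s]kip? ").strip().lower()
        if decision == "a":
            send(draft)                    # human approved as-is
        elif decision == "e":
            send(input("Edited reply: "))  # small human modification
        # "s" (or anything else): nothing is sent without human sign-off

handle_inbox(["the Q3 budget", "lunch on Friday"])
```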
Clifford Lynch
Professional AI Agents Are Likely to Offer Consultations in Legal, Medical, Accounting, Interior Decorating, Career Counseling and Other Aspects of Human Life
Clifford Lynch, executive director at the Coalition for Networked Information, wrote, “So far, and I suspect for at least the next decade, most people will relate to AI systems using models of relating to other human beings. We will see one class of AI systems that pretend to be people – emulations of ancestors, historical or cultural figures, perhaps virtual friends of various types; here the model is perhaps peer interaction among humans.
“A second class that is widely hyped (though we are still having a lot of trouble making these work with a usable level of reliability and accuracy/competence) is the various forms of AI ‘assistants’ or ‘agents’; the model here is to fill a typically human role rather than to emulate a specific person. Stretching a bit further in this direction, in the future one might imagine professional AIs offering consultations: legal, medical, accounting, interior decorating, career counseling and the like, though again I worry about accuracy, reliability and liability issues here.
“The really interesting question to me is whether we will learn to relate to AI systems on their own terms rather than anthropomorphizing them in various ways. Can we learn anything from our attempts to relate to other biological species and the ways we have approached this? If we proceed down this path we will encounter significant ethical and philosophical problems, as well as more pragmatic near-term legal issues. Consider, for example, the current position of the U.S. Copyright Office that AI systems, on their own, cannot create copyrightable materials (though they can serve as a tool in assisting humans in creating such materials). It’s hard for me to believe that this position will stand for another decade.
“A final set of thoughts – and I’m not sure we get here by 2035. Imagine a society that includes both humans and AI systems. It’s perhaps easiest to think about this in specific areas: industrial work, military activities and scientific research would be some provocative examples. You are going to have systems of communication that are designed to be used among humans; as AI systems become integral participants in these areas of activity, we’ll see some modest adaptation, but mainly the AI systems will learn to use human-oriented systems and have their own communications systems/practices for use primarily by AI systems.
“These may look very different than the primarily human-oriented communications systems. So, to take just one example, think about a specific area of scientific research. The human communication system may still be based on scholarly journal articles. The AI communication system may be something that would be very tedious for humans, full of minutiae to permit reliable replication of experiments and results, and replete with (boring) negative results.”
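One way to picture the machine-oriented scientific record Lynch imagines is as structured data optimized for replication rather than readability. The sketch below is speculative; every field name is invented.

```python
from dataclasses import dataclass, field

@dataclass
class MachineExperimentRecord:
    """An AI-to-AI report: exhaustive, replayable, tedious for humans."""
    hypothesis_id: str
    random_seed: int
    environment: dict             # every library version, flag, setting...
    raw_measurements: list        # all data points, not a summary figure
    negative_results: list = field(default_factory=list)  # kept, not discarded

record = MachineExperimentRecord(
    hypothesis_id="H-2035-0417",
    random_seed=42,
    environment={"solver": "0.9.3", "precision": "float64"},
    raw_measurements=[0.8101, 0.8099, 0.8102],
    negative_results=["H-2035-0416: no effect at p=0.43"],
)

# The human-facing journal article would compress all of this into prose;
# the machine-facing record keeps the minutiae that make replication possible.
print(record.hypothesis_id, len(record.raw_measurements), "measurements")
```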
Jason Resnikoff
The Effect of AI on Being Human Is That It Will Be Alienating Due to the Unequal Power Relations It Mediates Between Corporation and Individual, Rich and Poor
Jason Resnikoff, assistant professor of contemporary history at the University of Groningen, Netherlands, and co-author of “AI Isn’t a Radical Technology,” wrote, “As a labor historian and a historian of technological change, I find ‘AI’ to be a vague concept. Strictly speaking, the term ‘AI’ does not actually refer to any specific technological innovation. The field of artificial intelligence generally defines the term AI as a desire to create machines that act as though they are intelligent. That is a description of an effect more than of an action.
“The recent burst of interest in AI is related to the development of large language models and natural language processing by means of machine learning and artificial neural networks. The business hype surrounding these innovations is wildly overblown. That said, employers are making use of these specific technological innovations to degrade working conditions, and they will continue to do so.
“I find myself returning to a basic refrain: what makes AI scary for ordinary people and working people is not what makes AI new, but rather what makes it old. That is, employers will use this technology as they have earlier technological innovations: to degrade labor so they can have it more cheaply. Sometimes they might try to use it to substitute machine action for human labor, but just as often they will use it, as they have been using it so far, to obscure and mystify the human labor that continues to be essential to the labor process overall.
“The effect this will have on the experience of being human will be the same as other technological innovations under conditions of capitalism: it will be alienating. This is not a feature of the technology itself, but rather of the power relations that the technology mediates, in this case, the unequal power relations of employer and employee, of giant corporation and lone individual, of rich and poor.
“That, however, is not a new phenomenon. In other words, the uses that employers, states and large companies will make of the technologies called AI will have the same effect as the other technologies they have deployed in the past. They will impress people with the powers they have concentrated, and they will alienate people. The only way ‘AI’ will not have that effect is, quite apart from the qualities of the technologies themselves, if there is a radical change in the nature of social relations.”
This section of Part II features the following essayists:
Brian Southwell: If AIs evolve to generate, appreciate, overcome and celebrate their mistakes, then we humans may welcome such entities as new companions in our world.
Nigel M. Cameron: A ‘human’ world in which the creatures of our technologies serve us and not use us must surely be our vision. How it turns out will depend not on them, but on us.
Caroline Haythornthwaite: As humans, our task will be to work with AI, and that will continue to require coming to an understanding of how it works and what it is good at.
Bernie Hogan: It may come to be that the machines have stopped being the tool of oppression and have started acting more like the agents of it.
Divina Frau-Meigs: Digital media and information literacy is more crucial to humanity’s success and more responsible for its failures than ever before.
Winston Ma: AI agents will bring much more than incremental improvements in business automation; they will represent a fundamental shift in how companies operate, grow and scale.
Ginger Paque: ‘Garbage in, garbage out’ is as true for AI as it is for human discernment, as shown
most obviously by contradictory information and hallucinations in AI-generated text.
Brian Southwell
If AIs Evolve to Generate, Appreciate, Overcome and Celebrate Their Mistakes, Then We Humans May Welcome Such Entities as New Companions in Our World
Brian Southwell, distinguished fellow and lead scientist for public understanding of science at RTI International, wrote, “Nearly 20 years ago, I gave a commencement speech in which I noted how some aspects of my work as a teacher – especially those related to being a purveyor of facts – had been made somewhat obsolete by the arrival of Internet search engines such as Google. Undaunted, I noted how much I still valued the opportunity to help students make sense of facts and ask questions and become aware of personal values. I also remember walking away from the auditorium that day wondering how much I believed that human teachers would continue to matter, especially at a moment when possibilities for asynchronous instruction and podcasts seemed more promising for some administrators than brick-and-mortar classrooms.
“Two decades later, many people still find meeting in person to hear human beings talk compelling and helpful, although of course we also have many alternatives for training. Even in those alternative forms, though, human students often benefit most from human narratives and interaction. A compelling podcast still tends to involve human language and story forms honed by our human experiences. Human beings are likely to find the experience of perfectly replicated environments built of automated prediction of past human experiences to be tempting and sometimes even soothing but also ultimately unsatisfying as the sole content on which they subsist. Live sports involving human competitors draw audiences even though simulated games between robots could be programmed and presented even now.
“We likely will continue to value opportunities to witness human beings acknowledge and attempt to overcome their own frailties, mistakes and limitations as they face less-than-guaranteed success. From that perspective, human beings will likely gravitate toward interactions with other people as a core activity during their biological lives. If artificial intelligences evolve to generate, appreciate, overcome and celebrate mistakes, then we may welcome such entities as new companions in our world, just as we have welcomed canines and felines and plants that seem capable of adapting to their worlds as we do.
“We face a future in which many processes of computation and material construction will essentially be invisible to most human beings, and yet that aspect of the future – the threat of invisible mechanisms – is not incredibly different than the past instances in which people used but did not necessarily fully understand telegraphs or radios or even the use of sparks to make fires. Many readers will engage with the comments I am typing via a computer screen connected to the Internet. How many comprehensively grasp the technology used to transmit my keystrokes to the words they are reading on the screen? Does that necessarily matter?
“We should be grateful for the early stages of new technologies in which many tools do not work perfectly. We can learn how tools operate and how or if we can operate when they break. When algorithms and prediction tools provide a plausible narrator articulating an eloquent paragraph in response to a search query, though, as soon will be the case, humans are likely to gain less practice in developing logic skills and in improving or changing patterns of information in their environment. We should be careful as we approach new thresholds of seamlessness and efficiency.
“Advantages of artificial intelligences will be apparent. Much mundane work will be automated. As communities, we may get better at generating long-term decisions which are consistent with our expressed values in situations involving dynamics beyond the scale of individual people. Any single individual with access to new technologies also will be able to accomplish a pace and scale of information production much greater than previously was the case for human beings.
“Beyond the advantages, though, we will continue to be beings who benefit from mistakes and failures and who probably have evolved to enjoy witnessing humans overcome those mistakes and failures. We collectively learn from those situations. We enjoy those situations, and human emotion such as enjoyment is a gift (and a burden). We appreciate people making honorable choices when they have the option of making dishonorable choices, even if the honor codes we develop are not technically required in the world. Predetermined environments will sometimes be attractive, and we may fool ourselves into thinking that such environments are enough, but the beating hearts of at least some human beings also likely will always be drawn to the value of our imperfections.”
Nigel M. Cameron
A ‘Human’ World in Which the Creatures of Our Technologies Serve Us and Not Use Us Must Surely Be Our Vision. How It Turns Out Will Depend Not On Them, But On Us
Nigel M. Cameron, president emeritus of the Center for Policy on Emerging Technologies in Washington, DC, wrote, “My core concern in addressing this set of questions is the difficulty humans find in addressing issues in a risk frame of reference. As I argued nearly a decade ago in my book ‘Will Robots Take Our Jobs?’ (which addresses a subset of the current issue) the key response is that we do not know. So, our response, as individuals, families, communities and governments, needs to be framed in terms of preparation for challenges that may not arise, and in terms of welcome for benefits that are also uncertain.
“Certain facts seem plain, from a policy angle. First, the rush to push people into STEM (science, technology, engineering and math) education and jobs is dumb. The more STEM, the simpler the roboticization. Second, the worldwide push (even Russia tried it) to extend working lives and raise retirement ages is dumb. Whatever happens, human employment will be under increasing threat. I suggested in an article for UnHerd that governments would do well to use the retirement age as a ‘governor,’ to be raised or lowered in order to maintain full employment as automation brings jobs under threat. Of course, employment is a subset of the human experience, if a vital one. If ‘full employment’ is relegated to being a fantasy from the 20th century, democracy will destabilize, and our notions of the normal life, the good life and the family life will become oddities.
“Of course, there is much more at stake than employment. Cell phones are a nuisance, but Sherry Turkle notwithstanding (whom I know and admire) their impact on families and individuals has so far been marginal. I once planned, though did not write, a book about all the new friends we shall have down the line – not just AIs, as assistants, colleagues, putative friends and lovers; but birds and animals whose extraordinary intelligence may be released to communicate with us by our technologies; and the extra-terrestrials with whom they may yet connect us.
“Is Ray Kurzweil’s Singularity waiting down the road to ambush the human race? Is the Moore’s Law curve really all there is? I’m not convinced.
“The nostrum that tech change always takes a lot longer than expected but ultimately has a bigger impact may yet prove true here. But I’ve a suspicion that the human dimension is incapable of mechanical replica. I believe the creative drive of the human mind, as well as its emotional sensitivity and subtlety of judgment, will prove incapable of replication by a string of 1s and 0s. Those fancy pocket calculators may indeed replicate literature reviews and search functions far above the Google level and indeed aid scientific discovery. But as to poetry and painting and a lively family dinner? I’m hopeful, at least, that replicas will not actually replicate anything.
“I spoke on ‘The Human Question’ at the Champalimaud Foundation’s conference a decade back on what the world might be like in 2115, and I reflected on the wondrous dinner we had enjoyed the night before at a table for a hundred diners in the extraordinary old library of the Jerónimos Monastery in Lisbon. I expressed the hope that in 2115 the Foundation would still wish to bring people together for dinner – and that, indeed, they might invite my latest three grandsons, Lincoln, Euan and Gideon, whose lives will likely continue well into the 22nd century.
“A ‘human’ world in which the creatures of our technologies serve us and not use us must surely be our vision. How it turns out will depend not on them, but on us.”
Caroline Haythornthwaite
As Humans, Our Task Will Be to Work With AI, and That Will Continue to Require Coming to an Understanding of How It Works and What It Is Good At
Caroline Haythornthwaite, professor emerita at Syracuse University School of Information Studies, wrote, “I find it difficult to think of one category of human. Let’s think about age, gender, race, socioeconomic status and regional differences. One observation I have made is that past IT revolutions have affected some people and not others, but not the same set of people each time.
“When information technologies entered the workforce, use of the new tools lagged for older people until the older-age category was taken up by the younger IT-trained people. Use also lagged for those with lower incomes until IT became more affordable, and indeed, absolutely necessary for work, education and social connectivity.
“Smartphones and mobile phones have filled gaps in need for connectivity in lower-income and non-hardwire-connected locations and countries.
“Social media is a revolution led by young users – rapidly adopting new connectivity and means of expression. My speculation is that conversing with AI will seem no more odd to today’s young users than using social media or searching the web does.
“AI will annoy some of us. ‘Google, why do you push an AI summary to the top of my search when I am looking for something, not a summary, and not by AI?’ We’ll all be quoting our AI instead of looking at Wikipedia for basic definitions. And academics will have to replay the ‘Wikipedia as a real source’ game. But who is behind the AI compilation? Some people have come to know, and trust in general, the collaborative Wikipedia entries, created by humans. I don’t even know where to begin in understanding how these AI definitions are compiled. Am I/Are we going to have open access to AI source code – is that even possible?
“It will provide great opportunities for new approaches, thinking, etc., by being like the Industrial Revolution, automating mundane tasks of aggregating and analyzing data sets, rewriting texts for clarity or putting them in the appropriate jargon, even driving a car in traffic. But only if we can come to some confidence in the generation of the AI. If AI sources, procedures, (re)production processes are not available for examination, who knows what biased and limited knowledge will go into the results.
“As humans, our next-generation task is to work with AI, and that will continue to entail understanding how AI works and what it is good at. Oh, and then we need laws to govern its use.”
Bernie Hogan
It May Come to Be That ‘the Machines Have Stopped Being the Tool of Oppression and Have Started Acting More Like the Agents of It’
Bernie Hogan, associate professor and senior research fellow at the Oxford Internet Institute, shared the following potential-2035 first-person scenario: “Before we spoke to the dolphins we sincerely thought we were the only intelligent species capable of language. The new translators trained on multimodal communication, fed live through aquatic drones and produced with continual feedback changed all of that. We found similar successes with some monkeys and with elephants.
“Perhaps what challenged us the most was confronting these non-human actors as shared members of the same planet. We discovered their petty squabbles and their fascination with humans. Many didn’t believe the translators at first, claiming it was all smoke and mirrors, but when the translators started working with their pets many people became not only enamoured but truly felt humbled.
“The wonder of the translators was combined with continued unease over the proliferation of smart machines. After a mega-disaster caused by a rogue LLM fine-tuned by malicious actors, and continued tensions among educators and governments about whether we are creating too much dependency, public conversation has become preoccupied with these questions. Some individuals appear to have leveraged these tools to great effect, almost as if they have discovered a second, super brain, able to facilitate and support learning and convenience. But the machines still require a lot of energy and they aren’t available to everyone.
“While a few have leaned into a sort of neo-arts-and-crafts movement, most people simply grow to loathe the machines. They provide comfort with novel programming, pornography and entertainment, but there persists a feeling, a sense, that using them is not really in our best interest – that maybe we, too, are as beholden to the translators as the animals. People feel a sense of fear that they are being watched at all times. Where before there was a concern that anything could be photographed, now there is the concern that anything could be modelled, including their own personalities.
“Life in 2035 is a little more comfortable for many but for just as many the machines have stopped being the tool of oppression and have started acting more like agents of it. Despite their intelligence, many still believe they are neither conscious nor capable of it. Others question whether humans are nothing more than wet machines.”
Divina Frau-Meigs
Digital Media and Information Literacy Is More Crucial to Humanity’s Success and More Responsible for Its Failures Than Ever Before
Divina Frau-Meigs, professor at Sorbonne Nouvelle University, Paris, and UNESCO chair in Savoir Devenir in sustainable digital development, wrote, “By 2035, the key points I emphasized in a recent media and information literacy and AI policy brief for UNESCO will be validated. They emphasize the fact that digital media and information literacy is more crucial today than it has ever been, and it will continue to be a primary factor in humanity’s successes and failures:
- Artificial Intelligence and generative AI are having significant impact on people’s engagement with information, technology and media. This raises major concerns in regard to control, human agency, knowledge, independent decision-making and freedom in general.
- User-empowerment through media and information literacy in response to generative AI’s challenges and opportunities is not well-enough funded and supported by governments, non-governmental organizations and other parties that can take a role in assisting in strengthening results and broadening its reach.
- Among the societal opportunities being deepened by generative AI for those who understand how to use it and have access to it are access to information, participation, employability, creativity, peacebuilding, lifelong learning and participation in creative industries.
- Among the leading societal challenges being deepened by generative AI are disinformation, loss of data privacy, threats to the integrity of elections, surveillance, intellectual property rights and source reliability.
- Building on familiarity in the face of urgency, AI literacy can be embedded in media and information literacy efforts that are essential to the teaching and training of all sorts of communities (educators, librarians, youth workers, workplaces, senior centers, etc.).
- Media and information literacy is necessary to build people’s ethical uses of synthetic media – i.e., video, text, image or voice content – fully or partially generated by AI-systems.
- Media and information literacy helps people to critically assess the current myths tied to AI (its purported ‘intelligence’ and the potential for apocalyptic existential risks) and ensure that marketing or political ploys do not detract attention from crucial issues about digital divide and public oversight to assure human agency and equal opportunity.
- The development and rollout of explainable AI is key both to the design of media and information literacy curricula and to the design of policy and governance about generative AI.
- To build trust in information and education, source reliability must be overhauled to encompass the different types of evidence provided by generative AI.
- Media and information literacy can help bridge the digital divide by providing solutions between STEM and non-STEM sectors, training technical and non-technical people to master the basic concepts needed to develop and to use AI proficiently, safely and responsibly.
- Media and information literacy experts and civil society organizations are not sufficiently involved in the oversight of AI standards in the multistakeholder settings now emerging to establish the best practices for human-AI opportunity.
- Informed people from outside of the technology industry should be equal participants in the design, implementation and regulation of AI in a manner that remains human-centered and mindful of the public interest.
- Governments and institutions of higher education have a duty to ensure that media and information literacy policy actions are sustained and solidified over time in order to make them as future-proof as possible in the face of continuously evolving AI.
“The ultimate goal for humanity is to assure that the systems we construct are affording everyone the ability to tap into collective intelligence within safe, viable and sustainable digital knowledge societies.”
Winston Ma
AI Agents Will Bring Much More Than Incremental Improvements in Business Automation; They Will Represent a Fundamental Shift in How Companies Operate, Grow and Scale
Winston Ma, director of the Global Public Investment Funds Forum and adjunct professor at New York University School of Law, wrote, “Now, in 2025, agentic AIs – self-governing software programs that perceive their environment, make decisions and act to achieve specific goals – are set to go mainstream. This could mean the start of human beings losing touch with the fundamentals of their daily lives.
“Unlike general-purpose (‘horizontal’) AI systems like ChatGPT, vertical AI agents are purpose-built AI tools designed to perform specific tasks or serve specific industries with a high level of accuracy and efficiency. The rise of foundation models like GPT, Claude and open-source counterparts like Llama has created a fertile ecosystem for vertical AI agents.
“Companies globally increasingly require AI solutions that understand the nuances of their specific industry and can support their unique business processes. As the true potential of AI lies not only in its technological breakthroughs but also in its strategic deployment across industry verticals and business functions, we are now witnessing the transition from general-purpose horizontal AI to Vertical AI, which represents the next logical step in AI technology.
“While the first iteration of copilots augmented human tasks, this next generation is poised to fundamentally change how businesses operate.
“Take a global trade system, for example. Global companies face significant information overload throughout the sourcing and logistics process. Companies can access a nearly unlimited network of global supply chains, but with that access comes information overload, so they need to spend more time verifying, comparing and making decisions.
“Why might AI agents be a game changer? The traditional way of doing global trade is hiring experts and agencies. Hiring someone with expertise sounds simple, but the downside is that human agents’ connections and resources are limited. AI agents can take the role of ‘digital colleagues’ that help you plan, problem-solve and act to achieve a goal.
“In global trade, AI agents do not conduct a passive search like traditional search engines but rather perform as active guides. AI tools can synthesize the information into a request for quotation (RFQ) that can then be issued to potential sourcing partners, simplifying the typically complex and time-consuming RFQ process for business owners.
“Complementing all the above, AI agents can also integrate all the existing digital tools mentioned at the beginning of this article with the new AI intelligence to create a unified solution. With the AI agent in global trade as an example, we can see three critical markers of genuine agentic AI (a brief illustrative sketch follows the list below):
- Autonomous Decision-Making: True agents don’t just process requests – they evaluate situations, weigh options, and make independent decisions within their operational parameters.
- Purposeful Action: Genuine agents maintain persistent goals and work proactively toward them, even when not directly prompted. They don’t wait for instructions; they pursue objectives.
- Integration with Domain Knowledge: Built with in-depth understanding of niche processes, compliance standards, and workflows.
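To make these three markers concrete, here is a minimal, hypothetical sketch in Python of a vertical sourcing agent’s decision step. Nothing in it is drawn from Ma’s essay or from any real product: the `Quote` type, the certification rule and the scoring heuristic are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    """A supplier's response to a request for quotation (RFQ). Illustrative only."""
    supplier: str
    unit_price: float
    lead_time_days: int
    certified: bool  # stand-in for embedded domain/compliance knowledge

def score_quote(q: Quote) -> float:
    # Toy heuristic: prefer lower price and shorter lead time.
    return -q.unit_price - 0.5 * q.lead_time_days

def sourcing_agent(quotes: list[Quote], budget: float) -> Quote | None:
    """Autonomous decision-making within operational parameters (the budget),
    pursuing a persistent goal (pick the best compliant supplier) without
    step-by-step instructions."""
    # Domain knowledge applied as a hard constraint: uncertified suppliers are rejected.
    viable = [q for q in quotes if q.certified and q.unit_price <= budget]
    if not viable:
        return None  # the agent may decline to act rather than act badly
    return max(viable, key=score_quote)

if __name__ == "__main__":
    rfq_responses = [
        Quote("Supplier A", 9.80, 21, certified=True),
        Quote("Supplier B", 8.90, 35, certified=False),  # cheapest, fails compliance
        Quote("Supplier C", 10.10, 12, certified=True),
    ]
    print(sourcing_agent(rfq_responses, budget=10.50))  # selects Supplier C
```

The point of the sketch is only structural: the agent decides within its parameters, pursues a standing goal and applies domain rules itself rather than waiting for a human to apply them.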
“By 2035, AI agents will not simply be incremental improvements in automation; they will represent a fundamental shift in how companies operate, grow and scale. They will easily beat human pros.
“The rise of personalized use of AI agents by individuals could see humans losing touch with the fundamentals of their daily lives. They may be in touch with people, places and things in the physical world less and less. They risk missing out on the varied ways in which the rich variety of the world around them can deepen them and touch their souls.”
Ginger Paque
‘Garbage In, Garbage Out’ Is as True for AI as It Is for Human Discernment, as Shown Most Obviously By Contradictory Information and Hallucinations in AI-Generated Text
Ginger Paque, senior policy editor at the Diplo Foundation, wrote, “Digital connections, especially in social media, magnify and exaggerate, and, notably, distort our information-processing characteristics. How we receive and assimilate information is often, but not always, impossible to separate from the importance of AI for humanity. For example, are we as a society affected by the possibility and reality of forming a personal relationship with an AI chatbot, or is this an anomaly? Or are we more affected by the online news and stories about those relationships?
“Perhaps the range of reports and analyses balances the information available and helps observers decide how they want to use AI in their lives. In this case, does AI just make an imaginary friend more coherent, or is it fundamentally different?
“AI helps create and spread misinformation. Experience has taught us that we must fact-check AI-generated responses. For some of us, this realization has caused us to be more cynical about all news and information sources, not just AI or other sources we don’t trust. The discerning reader or researcher will probably improve their information processing and a non-discerning one is unlikely to change quickly or easily. That’s still a net positive but doesn’t change our humanity or experience of being human.
“Google and other search engines are run on AI. Most everyone who uses Amazon search knows the results are not objective, reminding us that AI – or at least algorithms, depending on how one defines AI – is no more trustworthy than its coding. ‘Garbage in, garbage out’ is as true for LLMs and AI coding as it is for human discernment, as shown most obviously by contradictory information and hallucinations in AI-generated text.
“Consulting the ready-reference librarian improved my homework as a child (or at least made it easier). Today, Grammarly spelling and grammar checks and even AI suggestions for my writing still require me to accept or reject suggestions. If I use AI for research, I always cite it as a source. I know AI is a resource, just as the human ready-reference librarian and human research assistants have been.
“What could affect AI’s role in the human experience is if developers and users disguise AI to the extent that it is difficult to discern what is AI and what is not. Clear, agreed and well-disseminated definitions of AI-related terms are important to this process. There used to be a clear understanding that an if-then program was not AI; now the term AI is applied to a wide range of applications, including simple if-then programs. Research and writing on this topic will help us use AI more effectively and better understand our strengths as creators and our humanity.
“As far as I know, the only crystal-clear differentiation between an AI and a human is that a human has human DNA.”
The following section of Part II features these essays:
Daniel Pimienta: Automated language translation will transform global communication; just 20% of humanity can use English; soon people may easily be understood using any language.
Robert Seamans: AI will inspire new jobs as others disappear; health options will improve; but individualization can lead to the fraying of social relationships and mental health.
Matt Belge: AI must not be a master, but rather a faithful servant. It’s up to the people to recognize the bad and put regulations and rules in place to properly govern it.
David A. Bray: We need to enable adaptive and positive ‘change agents’ in public service during this time of revolutionary advances in technology and globalization.
Keram Malicki-Sanchez: One great side effect of the advancing influence of AI will be an increased appreciation for the distinct beauty and value of naturally-derived human creations.
Philippa Smith: AI will reshape the world for humankind in extraordinary ways, but every world-changing technology has its dark side.
Sandra Leaton-Gray: Four vignettes demonstrate a day in 2035 in the lives of four British school children whose formal learning is augmented by AIs programmed to serve their needs.
Daniel Pimienta
Automated Language Translation Will Transform Global Communication; Just 20% of Humanity Can Use English; Soon People May Easily Be Understood Using Any Language
Daniel Pimienta, leader of the Observatory of Linguistic and Cultural Diversity on the Internet, based in the Dominican Republic, wrote, “I’d like to focus my contribution on the specific subject of linguistic diversity and examine the predictable outcomes of AI’s influence on it by 2035. As a pioneer in this field, I have, with my center, conducted numerous experiments since 1992 exploring the use of automatic translation to support mutual inter-comprehension.
“Our efforts have evolved over time, including projects such as discussion lists for civil society during the World Summit on the Information Society. Most of these services were limited to major languages (English, French, Spanish and Portuguese). However, the last experiment, in 2012, called ‘Goodle,’ integrated into Moodle an automatic link to Google Translate, which then supported around 50 languages. (Read details about these early experiments here.)
“These experiments focused on aiding inter-comprehension rather than achieving translation. Within this particular framework, today’s advancements in AI represent a tremendous leap forward. Tasks that were once costly and difficult to implement are now accessible and inexpensive, offering significant productivity gains:
- “Generating initial translations of documents without losing formatting, reducing the time required for translation by up to 80%.
- “Creating multilingual versions of websites, with embedded automatic translation during content creation, offers substantial productivity boosts (a brief sketch of this workflow follows this list). While human intervention is still needed, the process has become far more efficient.
- “Organizing videos on platforms like YouTube, where viewers can easily set subtitles in their preferred language (among the 249 supported by Google Translate), broadens outreach. Although translations are very approximate, this capability is fast enough to deal with the speed of speech and greatly aids inter-comprehension. Furthermore, it opens the door to extending services from translation to interpretation.
- “Integrating automatic interpretation into platforms like Zoom provides another layer of inter-comprehension, even if it falls short of real-time professional interpretation.
- “Expanding these capabilities to face-to-face conferences with devices that enable participants to choose their preferred language represents a breakthrough for accessibility and inclusivity.
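As a rough illustration of the embedded-translation workflow in the second bullet above, the sketch below generates draft multilingual versions of content at creation time. It is a hypothetical outline, not any product’s API: `machine_translate` is a stub standing in for whatever machine-translation service is used, the language list mirrors the major languages named earlier, and human review of the drafts is still assumed.

```python
SUPPORTED_LANGS = ["en", "fr", "es", "pt"]  # the major languages named above

def machine_translate(text: str, target_lang: str) -> str:
    # Stub: a real implementation would call an external MT service here.
    return f"[draft {target_lang}] {text}"

def publish_multilingual(source_text: str, source_lang: str = "en") -> dict[str, str]:
    """Generate draft versions in every supported language as content is created.
    The drafts aid inter-comprehension; they are not final translations."""
    versions = {source_lang: source_text}
    for lang in SUPPORTED_LANGS:
        if lang != source_lang:
            versions[lang] = machine_translate(source_text, lang)
    return versions

if __name__ == "__main__":
    for lang, text in publish_multilingual("Welcome to the discussion list.").items():
        print(lang, "->", text)
```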
“This is a genuine revolution that will transform international meetings, potentially diminishing the dominance of English as a lingua franca and therefore removing the unfair disadvantages for those with limited or no proficiency in English (a language understood by less than 20% of humanity).
“By 2035, we can expect further refinements and widespread adoption of these tools, leading to a paradigm shift in linguistic diversity. This includes extending these services to more languages and improving the quality of translations for less commonly spoken languages, which today may fall below the threshold of usability, as some studies have suggested.
“In the same vein as AI advancements in other fields, automatic translation will not replace skilled professionals. Instead, it will serve as a valuable tool to enhance their productivity. However, it may significantly challenge mediocre practitioners and compete effectively with non-professionals.
“AI-assisted translation and interpretation will not eliminate the need for highly competent interpreters and translators. Instead, it will provide extraordinary, low-cost and easy-to-use support for mutual inter-comprehension. Once quality thresholds improve across all languages, the reach of these tools will expand further.
“The ‘Babel-AI Tower’ may not reach the heavens, but it is bringing people closer together by bridging language barriers. In professional settings, AI acts as a spectacular tool; however, as its use becomes routine, the initial sense of magic may diminish. Consequently, the distinction between artificial and human intelligence may become a non-subject, highlighting that the term ‘artificial intelligence’ might be a misnomer.
“Many misconceptions stem from the inappropriate use of the word ‘intelligence’ in AI. A more accurate term, such as “augmented intelligence,” offers two advantages:
- It retains the familiar “AI” abbreviation.
- It acknowledges that true intelligence resides in the human mind, positioning AI tools as amplifiers and aids to human cognition.
“As with any technology, there are risks associated with misuse or unintentional biases. Ethical considerations must evolve alongside technological advancements. In the context of language translation, it is crucial to distinguish between full translation and aids to inter-comprehension to prevent misunderstandings – a challenge again rooted in wrong terminology.”
Robert Seamans
AI Will Inspire New Jobs As Others Disappear; Health Options Will Improve; But Individualization Can Lead to the Fraying of Social Relationships and Mental Health
Robert Seamans, professor of game theory and strategy at New York University’s school of business, co-author of “How Will Language Modelers Like ChatGPT Affect Occupations and Industries?” wrote, “AI will affect how we do work, and also how we interact with ourselves and others outside of work. I suspect that most of the changes at work won’t be very noticeable. There won’t be massive job losses; sure, some jobs will disappear but new ones will be created. Overall, work will change in subtle ways to take advantage of the new technology.
“The changes in how we interact in the world for ourselves and with others will be more noticeable. A leading positive moving forward is that AI can provide us with personalized suggestions, predictions, etc. This will be most noticeable with health, and we are already seeing this (e.g., personalized sleep schedules, diets, etc.).
“However, as we get more and more personalized suggestions for everything from health to food to exercise to entertainment to travel, we risk creating a bubble that is optimized for our own personal enjoyment, not for group or family or couple enjoyment. The risk is that all of this individualization can change the frequency of our personal connections and make social contact harder, fraying relationships and mental health.
“Technology isn’t inherently ‘good’ or ‘bad.’ Its impact depends upon how we use it, and how society addresses downsides related to the technology.”
Matt Belge
AI Must Not Be a Master, But Rather a Faithful Servant. It’s Up to the People to Recognize the Bad and Put Regulations and Rules in Place to Properly Govern It
Matt Belge, the founder of Vision & Logic LLC and a senior product designer at Imprivata before retiring in December, responded, “As AI extends more and more into our lives, I see a bifurcation of good and evil in roughly equal measures. I think AI will be beneficial in fields where creativity is essential, such as photography and image-making, where AI-powered devices will make it easier to create images with less technical skill than before.
“This is already happening in smartphones, where the camera senses the conditions and the subject matter and changes the parameters of the lens, shutter and sensor to optimize the image. It is already happening in photo-editing tools where it is possible to smoothly integrate images from reality with virtually generated images. And, of course, in image-making tools where the only source is the computer.
“This sort of change is both good and bad. Previously, artists would spend thousands of hours perfecting their skill and their vision simultaneously. With AI tools, the technical skill will become diminished, making it easier to create. But without the necessity to spend time crafting a vision, image-making will also become banal and without real meaning. In the hands of skilled artists who have taken the time to build their craft, AI can become an assistant to speed their process and give them a chance to consider hundreds of alternatives they would not have had the chance to do. This is a positive change. But these artists will have to compete with and be outnumbered by unskilled people who are simply exploiting the technology with little sense or vision.
“In the fields of computing and medicine (two not terribly related fields), AI will help make assisted decision-making much faster. AI is already helping coders write better code by providing examples to start from. A good coder will consider these alternatives and choose the best option. Similarly, in medicine, AI will help practitioners make better decisions by giving them alternative diagnoses and treatment plans. Skilled practitioners will use this information to choose the best outcome, blending human skill with computer-driven insights. In finance, computers are already making trade decisions much faster than humans. This will accelerate.
“The key to a good future and a good outcome is to keep the humans in control, and to view AI as an assistant, not a master. The human must be able to make meaningful changes, and to iterate on the results by first understanding how to direct the AI to make changes in a direction the human wants it to go. This has been one of the big challenges of AI and it must be solved – make it possible for the human to control and direct the outcome.
“On the negative side of the AI equation are two very powerful forces – capitalism and government. In the short term of the next 10 years, I am not optimistic about either of these forces helping the overall good. In the U.S., government is more and more becoming owned by the rich. And the rich will see many chances to make lots of money from AI (such as mining individuals’ personal data for their own greed). Government, as currently concocted, is neither skilled nor motivated enough to regulate AI in ways that will promote innovation while also being true to a sense of helping the common good. In the short term I expect the capitalists to win, using AI to exploit people and take advantage of weak governments that are unmotivated to stop it.
“In the longer term, I expect humans will wake up and demand more of their government – to take control of AI and to limit its evil side. But I think it will take some rather bad outcomes to wake the populace up. AI will have both good and bad influences on society. The good will be increased creativity and experimentation among those in the ‘creative class’ as well as in the field of medicine. The bad will be capitalism run amok, driven by greed and unchecked by inept government.
“It will be up to the people to recognize the bad and to put in place regulations and rules to properly govern it. This is the most significant challenge that AI presents to humans.”
David A. Bray
We Need to Enable Adaptive and Positive ‘Change Agents’ in Public Service During This Time of Revolutionary Advances in Technology and Globalization
David A. Bray, principal at LeadDoAdapt Ventures and distinguished fellow with the nonpartisan Stimson Center, responded, “We stand in an era in which our tool-making allows us to produce new tools that can shape the planet. Some of the tools we make can be given broad scope in what they do, a degree of autonomy similar to our own regarding problem-solving. What might they produce in the world? Some of these tools can alter not only the Earth’s biological processes but also our own. What then will these tools produce when we can ask them to change ourselves?
“Reflecting on human nature, science has shown that all of us – as humans – are subject to confirmation bias. Once we have a set view in our minds we often interpret data and narratives to reaffirm our set view, and dismiss data and narratives that challenge that view, which means it is very hard to change our minds once our minds are set.
“Looking back at human history, there are examples of us humans doing wonderful things as a species, doing awful things as a species and doing everything on the spectrum in between. I find beauty in striving to encourage both productive adaptation and positive ‘change agents’ in public service during this time of rapid advances in technology and globalization. I hope that, recognizing human nature for what it is and that we all have human biases, collectively we might be able to push for results more on the side of wonderful for us all vs. less beneficial outcomes.
“The first step in understanding whether humans can trust AI is to define trust. Individuals will readily give their trust to either a person or organization if they perceive benevolence, competence and integrity. While well-programmed AI can imitate having these traits, it does not possess them. It is better to think of AI as an alien interaction rather than a human interaction. It is not surprising that humans have tried to attribute real intelligence to AI, given their tendency to anthropomorphize objects and animals.
“Undoubtedly the future will include intense debates across the political spectrum, and there will be times when we each have either a confirmation bias, mentally filtering in information that only reinforces our existing views, or a sunk-cost bias that makes us reluctant to make changes because we have already spent time or resources on a previous path.
“If we accept the beauty, as well as the flaws and biases, present in human nature, then by extension there will be beauty as well as potential flaws and biases in any human endeavor that we choose. What then does this mean for a future in which technologies once previously available only to sophisticated nation-states and large corporations are becoming increasingly affordable and available to individuals?”
Keram Malicki-Sanchez
One Great Side Effect of the Advancing Influence of AI Will Be an Increased Appreciation for the Distinct Beauty and Value of Naturally-Derived Human Creations
Keram Malicki-Sanchez, Canadian founder and director of VRTO Spatial Media World Conference and the Festival of International Virtual and Augmented Reality Stories, wrote, “As we move into an advanced era of social media, we have to divest ourselves of centralized platforms that can be weaponized by hostile parties and find our own democratic town squares. The Fediverse is an example of how this could work. But we can likely do better.
“AI, by contrast, is not the magic oracle in a black box that people believe it to be. It is, in fact, an accumulation (harvesting) and tuning of our collective knowledge, regardless of copyright, trademark and other concerns; it is our output that we are now potentially benefiting from. But that has to be carefully maintained because once it becomes a complete ouroboros the data will collapse. So-called ‘AI,’ in the context of large language models that we can converse with, can expose new avenues of inquiry for many more people to draw upon and contribute to.
“This should not be overlooked: The problem with search engines is that they are driven by algorithms that favor optimization and too many hidden factors and, like social media, reinforce our present ideology and formulated, carefully engineered tastes. LLMs can be programmed to reveal uncharted territory if we are well-versed in interacting with them effectively to harness that potential. And they do not preclude the teaching of curiosity and fundamentals. The present tech ‘broligarchy,’ the people in power over digital innovation and diffusion for the better part of a century or longer, are now more fine-tuned and dangerous than ever. But rather than give up hope of any influence over the future of these emergent technologies, we have to become involved in their positive development to ensure that they are indeed representative of many different voices, perspectives, cultures, those who value ‘being human,’ connecting socially, preserving people’s mental and physical well-being and the ability to gain knowledge.
“We must fight to preserve this humanity through truth and integrity. Interaction with these tools – for that is what they are – can engender new energy within humans toward the exploration and iterative development of new ideas. The offshoot side effect of creativity inspired by working with AI models can increase our appreciation for the distinct beauty and value of naturally-derived human output.”
Philippa Smith
AI Will Reshape the World for Humankind in Extraordinary Ways, But Every World-Changing Technology Has Its Dark Side
Philippa Smith, communications and digital media expert, research consultant and commentator, wrote, “Less than a decade ago, I interviewed individuals with various physical and cognitive disabilities to understand how the internet transformed their lives. The response was overwhelmingly positive: technology provided newfound independence and empowerment. Screen readers, subtitles, online support networks and text-to-speech capabilities proved how innovation can redefine daily life. One visually impaired participant eloquently described the internet as a ‘Gutenberg moment,’ a term that reflects its life-changing power, not just for individuals but for society as a whole.
“As we look to 2035 and the predicted evolution of AI, we are once again on the brink of another ‘Gutenberg moment.’ With capabilities such as voice synthesis, text and video generation, real-time translation and transformative potential in health, social services, business and education, AI will reshape the world for humankind in extraordinary ways.
“While it may be tempting to adopt a technologically deterministic perspective when considering AI’s impact on the future social, political and economic landscape, it is vital to consider the complex factors influencing behavioural change. Understanding how societal conditions shape and reshape technological design – what problems do we need to solve, and why? – is essential. At the heart of this shift, core human values such as ethical behaviour, respect for human rights, inclusivity, creativity, curiosity and the sense of belonging and community must be safeguarded.
“In the early days of the internet, there was widespread optimism about its promise to connect people and provide instant access to information. Few foresaw the darker realities that would emerge, such as trolling, hate speech, identity theft, misinformation, scams, the dark web and online radicalisation. AI is a similarly complex tool. It, too, will bring both opportunities and challenges and we are already seeing efforts to counter negative aspects such as deepfake technology and algorithmic bias.
“By 2035, the hope is that we will be in a better place by drawing on past experiences with the internet revolution and staying ahead of the game by focusing on key priorities:
- Invest in education, upskilling and lifelong learning to ensure no one is left behind.
- Establish consistent institutional responses to AI use across workplaces, schools and governments, enabling society to adapt effectively.
- Address ethical concerns, including intellectual property rights, transparency of AI-generated content and accountability for misuse.
- Commit to equitable access to AI tools and education, actively working to bridge digital divides alongside those affected so that everyone has a voice.
“The AI revolution offers immense opportunities for empowerment and innovation. While I remain optimistic about a brighter future, the trajectory of AI will ultimately depend on the ethical and intentional choices we make today. If guided responsibly, AI could become another transformative ‘Gutenberg’ moment.”
Sandra Leaton-Gray
Four Vignettes Demonstrate a Day in 2035 in the Lives of Four British School Children Whose Formal Learning Is Augmented by AIs Programmed to Serve Their Needs
Sandra Leaton-Gray, chair of the Artificial and Human Intelligence group of the British Educational Research Association and advisor to the Government of the UK, wrote, “I wish to share four alternative futures for artificial intelligence and education excerpted from the book I recently co-wrote with Andy Phippen, ‘Digital Children: A Guide for Adults’ (John Catt Publications). These futures are described as vignettes about fictional children named Alfie, Bella, Carter and Daisy.
Chapter One: Alfie’s day of learning is mostly online, with select times for social interaction
“Alfie is sitting in his space at the Woodcote Community Learning Centre feeling rather hungry, swinging his legs in eager anticipation of a hot dinner and a run through the play yard. He has worked his way through the new maths tasks and he is really pleased, because he thinks he has at last got to grips with the simulated science experiment, as well as finally learning his seven times table properly, but he definitely would like some food. His machine hasn’t bleeped yet, though, so it probably hasn’t worked out how he is feeling, meaning it is out of step with his biorhythms again. This has happened before, and his mum has come in to speak to the school administrators about sending him for lunch late.
“Alfie looks around the room and notices all the other children have gone already, and he is the only one there. He decides to sneak out anyway, pressing the ‘Away’ button on his workstation first. He can always make up the time at the end of the day. When he gets to the lunch pod, there isn’t much left to choose from, so he selects the tofu fritters again. He doesn’t really like the tofu fritters, but it’s not the worst option. If you get in early, they have things like sweet potato fries, but that has only happened to him once. While he is on his way the sole of his shoe starts flapping about. The tape has come off, and it’s annoying him as he walks towards the school yard.
“Alfie decides that running is probably not a good idea, so he strolls towards the buddy bench, where he sees Jacob, another boy from his learning group, sitting and watching the other children as they finish their games and pack up their balls and ropes. Jacob is a couple of years younger than Alfie and they often meet on the buddy bench. They have a lot in common because they are both at a similar stage of their learning on the computer system, and they both get out for lunch late most days. The boys have a chat about football, the first conversation they have had with another human being since their parents dropped them off at the learning centre that morning. The sky darkens; they look up and notice the first raindrops falling. The boys decide to head back to the computer block for another few hours’ work.
Chapter Two: Bella’s robot tutor-led drama lesson
“Bella has fallen out with her friend Lilly during the drama lesson. They were working very well on the improvisation project together with the other girls, and then suddenly things went bad when someone accidentally hit someone else with their elbow, and it looked like it was on purpose. Lilly’s brand-new wool blazer has been slightly ripped near the pocket, and she’s worried that means getting in trouble at home. Work on the project stops completely. The electronic tutor trundles up to the group and asks what is wrong. Both girls try to explain their side of the story at the same time, with a lot of hand waving and pointing, and occasionally raised voices.
“The electronic tutor tries to make sense of the accounts, but it is no good. There isn’t enough data. The drama teacher is electronically paged and comes over to take charge of the situation. She calms the group down and patiently listens to each member explain what happened from different points of view. Bella and Lilly are quieter now and look at each other, each trying to judge what the other one is thinking.
“The teacher beckons the electronic tutor over again and asks it to replay what it saw happening in the drama improvisation. The angle isn’t very helpful, so that information doesn’t get the group anywhere, but the teacher points out that Bella and Lilly have been approached negatively by the electronic tutor more times that term than any of the other young people in their class and suggests that they need to work on their relationship skills. She sets them to work together, helping a group of younger pupils on the other side of the room.
“Bella and Lilly shuffle reluctantly to the group as instructed. They don’t see the point of this, and it means they can’t finish their improvisation exercise. It needs to be as good as they can make it, otherwise their grade average will fall too far. This could have a bad impact on their applications to college later on, as their files will go through a machine-based sift based on a grade average before they end up with a human admissions tutor. This makes them very nervous about school in general. Meanwhile the drama teacher flags up their files on the learning system, so that they are invited to attend a group discussion at lunchtime about peaceful cooperation in the drama studio. Relationships matter a lot at St Hilda’s school.
Chapter Three: Carter’s academic journey is mapped out by AI
“Carter is on a mission to complete the entire Winterton Academy middle years syllabus before Christmas, so he can get onto learning more about DeepSpace, his favourite computer game, as it is rumoured amongst the pupils that this is one of the choices when you’ve scaled the top level of the usual tasks. He is thinking of becoming a games designer when he leaves school.
“What he doesn’t realise is that the computer system has great plans for him in terms of its personalised learning offer, and after he has finished the cross-curricular project on the Babylonians he is going to be introduced to the history of mathematics and its early relationship with cuneiform script.
“Despite trying to resist, Carter is completely drawn in and before he knows it, he is calculating proficiently in base 60 using special tables and recording this in a rudimentary manner on a virtual clay tablet. The afternoon passes quickly as he watches breathtaking reconstructions of Babylonian life in high definition, rotates 3D representations of museum objects and archaeological finds, listens to simulations of early Babylonian musical instruments, and logs into a real-time, live-streamed film of new work taking place right then and there on the Babylonian archaeological sites in modern-day Iraq. The system even allows him to have a couple of screens open at once, a rare treat at school, so he can keep an eye on the excavation as it happens. It’s important not to miss any exciting moments when finds come out of the ground, after all.
“He also spends time practising different calculations until he masters the Babylonian mathematical process. Just before home time, the screen bursts into life with virtual confetti, and Carter is invited to see some cuneiform clay tablets for real in the British Museum the following day, sharing a driverless car with three other pupils who have similar educational trajectories and interests. Carter is pleased and cross at the same time. He gets a great trip, but yet again his plan to explore the deepest recesses of DeepSpace at school has been sabotaged.
Chapter Four: Daisy’s academic path is adjusted by AI to fit new needs
“Daisy is sitting in the head teacher’s office with her parents and the school’s Special Educational Rights Coordinator, and everyone is looking very earnest. It has been a long day. The head teacher is showing them some graphics on the tabletop display. The system has picked up some problems that emerged after Daisy’s earlier bout of the Covid virus by comparing her progress to the typical trajectory of other final-year female pupils nationally of the same chronological age and genotype who have contracted the same disease.
“The system has already adjusted Daisy’s learning path and exam entries in response to a reduced timetable during the last couple of months on account of her chronic tiredness. Now it wants to go further. It is suggesting that her ability to focus on studying is in the bottom 10% of her recovery group nationally and that this figure is likely to fall further in the coming weeks. This means that the adjustment isn’t working sufficiently well and that further steps are needed.
“It has mapped a new course of study against the times of day when Daisy seems to be at her most alert. It has set the duration carefully according to the latest published evidence on the mitochondrial dysfunction associated with post-viral fatigue, which negatively impacts energy levels. The system has also alerted the local family doctor and occupational therapy service that Daisy will need a review in the next fortnight.
“As it may take some time for the other services to respond, due to a local outbreak of influenza and associated additional pressure on health facilities, it has also suggested that Daisy take the next week off school to attend a teen ‘Long Covid’ intensive therapy group at the local hospital; a referral can be triggered as soon as the family gives consent, along with transport and follow-up services. Despite the bad news, Daisy feels relieved. She knew something wasn’t right.”
The next section of Part II features the following authors’ responses:
John Hartley: The problem is how knowledge is made and deployed at ever-more-abstract planetary scale and who controls it.
Sean McGregor: Contextualized instant-answer devices will be more advanced in 2035 than today’s conversational agents, ready to quick-scan most of human knowledge and respond.
James Kunle Olorundare: AI-enabled humans will enhance their performance in many regards, but AI may also foster an array of mental health issues such as identity crises and delusional thinking.
An Informatics Journal Editor: Will we see a sustained willingness and effort to create and support significant, socially oriented AI systems, or will we simply sustain capital-oriented approaches?
Jeff Johnson: In a worst-case scenario most cars will be self-driving and traffic jams will worsen; individuals will be tracked constantly by corporations and governments; robots will arise.
Dhanaraj Thakur: There is great potential for large language models to shape the use of language across the world and influence the training and development of LLMs and of children and others.
Jelle Donders: There’s a fair probability we’ll have reliable recursive self-improving AIs by 2035; if so, work will be transformed and many may lose economic leverage and social mobility.
Jamie Woodhouse: AI will reframe what we know about ourselves; moral consideration should include all sentient beings, human, non-human animals or even sentient AIs themselves.
John Hartley
The Problem Is How Knowledge Is Made and Deployed at Ever-More-Abstract Planetary Scale and Who Controls It
John Hartley, professor of digital media and culture, University of Sydney, Australia, wrote, “There is certainly room for doubt as to whether being human has an essence, since experiencing it requires language, of which there have been many thousands, all different, over the various historical epochs. The essence of being human can only refer to our animality, which we share with millions of species, many extinct. Perhaps animals don’t experience their animality in the same way, since their efforts at drama, narrative, and thought are untranslatable by us. But they’re up to something in the show-and-tell department, for sure.
“However, one very longstanding human experience is the externalisation of human capabilities via tools and machines. At some point (Neolithic, perhaps), thinking was externalised, via structures, cave-painting, grave-artefacts and, presumably, many devices of which we are ignorant. Human ‘artificial intelligence’ was projected into the non-human world via religion, the gods being a thinking machine for human hierarchies, uncertainties and rules for collective action. In the Bronze Age, ‘artificial intelligence’ as we know it today was invented (the Antikythera Mechanism).
“Has the essence of being human changed in that sequence? Unlikely, but the scale and scope of human knowledge have. So, as ever, the problem is not how ‘the essence’ of the human animal is faring, but how knowledge is made and deployed at ever more abstract planetary scale, and who controls that. The human experience is more profoundly changed (if it is changed at all) by states, empires and lethal weaponry.”
Sean McGregor
Contextualized Instant-Answer Devices Will Be More Advanced in 2035 Than Today’s Conversational Agents, Ready to Quick-Scan Most of Human Knowledge and Respond
Sean McGregor, founding director of the Digital Safety Research Institute at UL Research Institutes and member of the OECD’s AI Experts Network, wrote, “The printing press, radio and the Internet are just a few transformative technologies that changed the topics of conversation, how those topics are communicated and what is considered factual in the world. Now the ‘Cliff Clavins’ of the world (a character on the 1980s TV sitcom ‘Cheers’ whose actor also played a similarly ill-informed, know-it-all piggy bank in the ‘Toy Story’ film series) have been replaced with answer boxes we carry in our pockets. The major change of the moment is we can now ask the virtual Cliff Clavin to answer back not with a search result, but with conversation representing a synthesis of the entire Internet. Many companies are working to make electronic versions of the old barfly that are always listening in order to enter into people’s daily lives more seamlessly. By 2035, we are likely to see far more of these contextualized machines ready to answer any query posed.”
James Kunle Olorundare
AI-Enabled Humans Will Enhance Their Performance in Many Regards, But AI May Also Foster an Array of Mental Health Issues Such as Identity Crises and Delusional Thinking
James Kunle Olorundare, president of Nigeria’s chapter of the Internet Society, wrote, “By and large, the human dependence on AI will grow to such a degree by 2035 that ordinary humans may not know how to function in many settings without the guiding influence of AI. This is a challenge to humanity as we begin to heavily rely on AI.
“In the 2030s, human and artificial intelligence integration may have advanced significantly beyond experimental neural links. Direct brain-computer interfaces might eventually enable individuals to work seamlessly with AI, potentially enhancing cognitive abilities, accelerating complex analyses and facilitating the widespread use of brain-AI-generated solutions. This profound integration of human and machine could be quite jarring and will also present new challenges.
“Regardless of the timing of the arrival of embedded direct brain-computer interfaces as a broadly adopted technology, uses of AI in the next decade will continue to reduce human-environment interaction, as individuals increasingly rely on AI for both work and personal interactions. In addition to the potential for lack of in-person social contact and loss of social skills, it could lead to mental health issues such as identity crises and delusions.
“AI systems trained on existing data inevitably inherit the biases and inaccuracies inherent in that data, potentially leading to erroneous systemic outputs. Furthermore, human biases embedded during AI development and training can significantly impact the AI’s fairness and effectiveness. Addressing these challenges requires robust AI governance frameworks that prioritize ethical development and deployment. The implementation of deontological principles within AI systems can help mitigate ethical concerns.
“The integration of AI also necessitates careful consideration of its social and economic implications. Job displacement is likely to occur, requiring widespread reskilling and retooling initiatives to prepare the workforce for the changing job market. There will also be a paradigm shift in societal problems concerning data integrity, algorithmic bias, and computational speed and techniques.”
An Informatics Journal Editor
Will We See a Sustained Willingness and Effort to Create and Support Significant, Socially Oriented AI Systems, or Will We Simply Sustain Capital-Oriented Approaches?
The editor of a global informatics journal wrote, “By 2035 the digital inequality gap is likely to have widened. While many people will enjoy the advantages created by AI systems and tools, many others will enjoy few, if any, such advantages because they live in a diminished environment of opportunity. AI systems have been developed around the very specific needs of those who can afford and have access to the latest digital tools, needs far different from the basic human needs of many others. Many people living in less-‘developed’ regions like Latin America have to choose between immediate needs (food, water and health) and competitive needs such as having access to and paying for high-end digital tools. Their priorities are not technological; they are economic. They see no immediate advantage to digital technologies beyond using them for entertainment or consumption of media. This means that a pattern similar to what has happened with the Internet may be reproduced with AI tools designed to capture and sustain attention.
“The question is: Are we going to design systems that simply reproduce the same capital-oriented approaches we see in the networked digital platforms of today, or is there going to be some kind of sustained willingness and effort to actually provide the resources needed to create and support significant, socially oriented AI systems? There is no evidence that the latter avenue will be pursued. Instead, the same patterns of tech development that have created the global expansion of consumerism and entertainment-based options remain the norm.
“An even darker speculation is that by 2035 the effects of the climate crisis will increase significantly and many people who seek to consult AI about it will not find solutions to the issues but just more commercialism and entertainment. In the worst case, the knowledge resources will continue to propagate arguments urging that people stop believing in science. This is not trivial. At the same time, the consumption of resources that AI demands is increasing the risks associated with the climate crisis.”
Jeff Johnson
In a Worst-Case Scenario Most Cars Will Be Self-Driving and Traffic Jams Will Worsen; Individuals Will Be Tracked Constantly By Corporations and Governments; Robots Will Arise
Jeff Johnson, founding chair of Computer Professionals for Social Responsibility, wrote, “To counter the likely Pollyanna-ish predictions that some respondents will provide, I am intentionally providing a worst-case scenario (similar to one I wrote in 1996).
“People in the U.S. will be tracked constantly, not only by the government but by commercial companies. We will be bombarded throughout our waking lives with ads based on our online activity. Our email, texting and social media accounts will be so full of spam and semi-spam (donation requests) that we will usually ignore them and seek new ways to communicate with family and friends.
“Most cars will be self-driving. However, a few human-driven cars will still be on the road, causing accidents. Traffic jams will be massive, caused not only by accidents but also by outages of the networks connecting self-driving cars, as occurred on a minor scale two years ago in San Francisco, when a cellular outage froze Cruise vehicles in place throughout the city. Even minor, local traffic jams will often escalate into systemic ones, as networks divert hundreds or thousands of autonomous vehicles from jammed freeways onto small neighborhood streets.
“Robots will be everywhere, but few, if any, of them will be programmed to follow Isaac Asimov’s Three Laws of Robotics. Incidents of robots harming humans will be common. 3D printing will make guns easy to obtain, so many people will be armed. Those who don’t have guns will have strong lasers or tear-gas sprayers that can blind or even burn people. Therefore, minor conflicts will escalate into dangerous battles even more than they do now.”
Sam Lehman-Wilzig
In 2035 Humans Will Remain Basically the Same as We Always Have Been; However, AI Will Shift the Meaning of ‘Human Work’ from Labor to Leisure
Sam Lehman-Wilzig, head of the communications department at the Peres Academic Center in Rehovot, Israel, and author of “Virtuality and Humanity,” wrote, “Human psychology and behavior don’t change very quickly; in most respects they hardly change at all. For instance, it has taken us hundreds of years to reduce societal violence, and wars still continue. Thus, to think that in the space of 10 years we will change our value system or the way we view ourselves – just because we have a terrific new ‘helper’ (AI) – ignores what it is to be human. The one area in which I do see change occurring is in the value we place on work or career; this characteristic will become devalued over time as AI takes on more of society’s ‘work.’ Homo Labor will continue to evolve into Homo Ludens – using our ‘Sapiens’ for play/leisure instead of for work/payment. As it is, we have reduced our lifetime workload drastically in the past 150 years. AI will continue that trend.
“The major problem regarding AI’s effect will be on the macro level: How will we deal economically with increasing unemployment at a societal level? If an economic solution is found for that – a huge assumption for the short and mid-term – the micro-level psychological effect may be limited. If humans can ‘love’ their pets and enjoy spending lots of time with them, there’s little reason to think that they can’t similarly enjoy their AIs as well. That might lead to less intra-human interaction, but as long as people are comfortable and enjoy interacting with their AI ‘companions,’ so what? Is that change? It is not much different than people spending hours in front of the TV screen or on their smartphone – or, in the past, not interacting with neighbors because back then we had to work 12 hours a day.
“In sum, too much can be made of the potential ‘revolutionary change’ in human behavior or psychology in an age of AI. Until we genetically engineer ourselves, humans will remain basically the same as we always have been.”
Dhanaraj Thakur
There is Great Potential for Large Language Models to Shape the Use of Language Across the World and Influence the Training and Development of LLMs and of Children and Others
Dhanaraj Thakur, research director at the Center for Democracy and Technology, previously at the World Wide Web Foundation, wrote, “As we address this question, which humans are we referring to? Of course, the benefits and costs of greater partnership and dependence will not be equally distributed across the global population.
“We live in a world of hyper-inequality, which means that the privileged few will benefit most, particularly where some functions of AI systems cost more (e.g., the subscription model that many AI companies use for their chatbots, or access to AI agents). For now, at least, this is quite different from, say, the impact and benefits of the mobile phone, which started off as a tool for the wealthy and soon became ubiquitous globally. AI systems vary much more in scope and function.
“Another area of impact will be language. Large language models perform less effectively on so-called ‘low-resource’ languages (for reasons of historical and contemporary inequality such as colonialism). Yet, given their increased use, particularly among wealthier populations, there is potential for interaction with LLMs to shape our use of language. For high-resource languages (such as European languages), we have to think about how machine-to-machine communications in a language can influence the training and development of future LLMs, and then what that might mean for the language development of children and others learning that language. For low-resource languages, we have to consider what it means for people to interact with LLMs that are less robust (built on limited data) and how that can impact language and cultural development.”
Jelle Donders
There’s a Fair Probability We’ll Have Reliable Recursive Self-Improving AIs by 2035; If So, Work Will Be Transformed and Many May Lose Economic Leverage and Social Mobility
Jelle Donders, a philosophy of data and digital society student at Tilburg University in the Netherlands, commented, “If society somehow survives many years of turbulence, AI might usher in an age of abundance and prosperity! In many ways, I’m a techno-optimist. However, we can only realize the benefits of AI if we avoid disaster. If AI goes wrong, we might not get to try a second time. Society and government are, unfortunately, not prepared for this, or even awake to the facts behind that dismal potential future. AI is an existential risk to humanity.
“If we build something smarter than ourselves, humanity shouldn’t expect to stay in control long-term unless we really know what we’re doing. Many scientists have warned about this, including the three most-cited AI researchers in history (Geoffrey Hinton, Yoshua Bengio and Ilya Sutskever). Big tech and AI companies are in a race to the bottom to develop advanced AI, safety be damned.
“As for how things will change by 2035, I think there’s about a 50% probability that we will have recursively self-improving AIs. Most people who have rigorously thought about AI timelines expect us to have them in even less time.
“If we do have self-improving AI, everything will change. Jobs as we know them will no longer exist. Human labor will have little value anymore, meaning the masses lose their leverage in the economy and their ability for social mobility. There will be a massive concentration of power and wealth. War will be automated, and if AI can be used for a large first-mover advantage favoring the attacker, (nuclear) deterrence might lose its effect.”
Jamie Woodhouse
AI Will Reframe What We Know About Ourselves; Moral Consideration Should Include All Sentient Beings, Human, Non-Human Animals or Even Sentient AIs Themselves
Jamie Woodhouse, founder of Sentientism, a group promoting a philosophy employing the application of evidence, reason and compassion, wrote, “If humans continue to focus only on human experiences and values as innovation moves forward I fear the future of AI will go badly. We’re currently training AIs on default human thinking and trying to align AI values with default human ethics. Neither are good targets.
“These human defaults often include broken epistemologies leading to poorly founded, sometimes dangerous, beliefs and credences. They also discriminate against or exclude vast numbers of valid subjects of moral concern from consideration and all-too-easily justify exploitation, harm and killing. If AI implements these defaults things will likely turn out badly for both us humans and the wider world we care about. Imagine powerful AIs treating humans the way we treat less powerful sentient beings. [Other living beings capable of experiencing and able to suffer or flourish.]
“To address these problems, we need to:
- “Extend our (AI and human) scope of moral consideration to include all sentient beings impacted whether they are human animals, non-human animals or even sentient AIs themselves
- “Explicitly embed a naturalistic epistemology that uses evidence and reason, in good faith and with humility, to continuously improve our understanding of our shared world. There should be no space for unchallengeable fideism, revelation, authority or dogma, particularly where those motivate needless harm to others.
“The sentientism worldview supports ‘evidence, reason and compassion for all sentient beings.’ Given that the powerful AIs of our future won’t be human but might be sentient, they may find, as we continue to develop them, that the application of Sentientism is a more compelling and more coherent approach to morality than any of the default, overwhelmingly anthropocentric, human worldviews, whether religious or secular.”
The following respondents shared briefer observations and insights:
Humans Will Be Sidelined, Become Depressed and Give Up
Rich Salz, principal engineer at Akamai Technologies, wrote, “I don’t know what will happen. I do think that most people – those of average intelligence or less – will be sidelined, become depressed and give up.”
The Scariest Thing in the World Will Continue to Be Other Humans
Mícheál Ó Foghlú, engineering director and core developer at Google, based in Waterford, Ireland, wrote, “Much like the Internet has been mostly a boon to humans over the past three decades, and just as the Web and then mobile phones made computing and networking more useful and popular, I see AI being a cross-cutting technology that helps many human endeavours and many academic disciplines. Humans will still be humans. Some things we’re not very good at can be more automated. We’ll figure out suitable limits as needed. The scariest thing in the world will continue to be other humans.”
AI Will Be Able to Take on Political Narrating By 2035, but Not Negotiating
Michael Cornfield, associate professor of political management and director of the Global Center for Political Engagement at George Washington University, wrote, “I will confine my response to the aspect of human activity I know best: the political life. Two activities sit at the heart of politics: negotiating and narrating. We negotiate to form coalitions, manage conflict, pursue policy goals and accumulate and wield authority. It is in essence a social activity which encompasses one-on-one, group, group-to-group and larger dimensions at the convention and congress level. Just as important, we narrate accounts of negotiations to engage or disengage people, again at multiple levels, but here at the mass level as well. AI will supplement and on occasion supplant political narrating. This already occurs with the generation and distribution of messages, and it will spread over the next 10 years. Negotiating is a trickier proposition to project. It contains an ineluctable component of emotional perception and interaction which I don’t think can be synthesized yet and don’t see on the decade horizon. Of course, negotiators will draw on AI-constructed models of anticipated behavior as they make, modify, or reject deals. But these decisions ultimately depend on dynamic, situational and psychological factors beyond the capacity of AI systems to execute.”
Education Systems Are Not Doing Enough to Teach Discernment Skills
Glenn Ricart, founder and CTO of U.S. Ignite, an organization driving the smart communities movement, who previously served as DARPA’s liaison to the Clinton White House, wrote, “The education systems are not expert at teaching discernment, a core human skill, and that will be a primary difference, individual to individual, between AI being additive and AI being misleading. People who think before they speak will still do so, and in a human fashion. Their thoughts may have been expanded by what they’ve seen/heard from AIs, but the end results will still be human. On the other hand, people who accept what others say literally and largely as fact will probably do the same with AIs, and that could end up being a self-reinforcing pattern drifting away from reality. Those who unquestioningly accept AI outputs may lose trust in their own reasoning, drifting from reality and weakening their native intelligence. Critical thinkers will retain human agency. An AI will always have a more complete and detailed memory of events and facts than I will, but I intend to take advantage of that as long as I can trust the AI’s ‘memory’ and reasoning. And I feel confident that, over time, the sum total of my reasoning and the inspiration I receive from the AIs will be positive for me. However, I’m also sure this won’t be the same for everyone.”
AI Is Simply an Extension of Our Cognitive Capacities, Merely a Tool for Deeper Reflection
Jonathan Baron, professor of psychology, author of “Thinking and Deciding” and an expert on the cognitive styles of citizens and their moral judgment, wrote, “I see AI as yet another extension of our cognitive capacities: an early extension was human language itself, later came the invention of reading and writing (which enabled changes in many institutions), and – more recently – the arrival of the Internet. All of these changes were mostly for the better but also abused. The same will surely happen with AI. It will be able to solve some problems better and faster than humans but it is merely an extension of the function of computer hardware and software. The fact that computers can easily do statistical tests that were deemed nearly impossible 100 years ago does not make us feel stupid. Chess tournaments will not go away just because AI software can beat grand masters. Of course, AI can be used by bad people for bad ends. I have trouble seeing how this can be prevented by high-level agreements or the major AI creators (large corporations). They may agree to not enable bad things to occur, but bad people can still do it on their own, just as happens with the Internet. The cat is already out of the bag.”
AI Will Take Jobs and Create New Divisions; It Will Also Be an Efficient, Inspiring Companion
Stephan Humer, internet sociologist and computer scientist at Hochschule Fresenius University of Applied Sciences, Berlin, wrote, “Once again new technology creates a divide. There will be people who benefit from AI and people who are more or less ‘disconnected.’ Those who benefit will benefit strongly, with a massive change in their private and business lives. But there will be more people who are ‘left behind’ and who need even more help from institutions and other people than they do today. Some of these people will suffer disastrous change due to specific AI developments, e.g., because they lose their jobs to AI. This will be deeply and permanently damaging for many people; it will have a massive impact on their self-esteem. Computers will then no longer be seen by them as subordinate machines; they will be seen as competitors – superior, invincible, robbing them of their individual perspectives. We have only one chance: AI must support us, not replace us. If this happens, inspiration and efficiency will be the two most important aspects of advances in AI and their impact on the experience of being human for most of us. This will then lead AI to become a useful 24/7 companion of humans, broadening our knowledge, self-understanding and power. Efficiency will be the major force for improvements of all kinds, becoming better, faster, cheaper, etc., in a business context but also in the private realm.”
AI Will Remain a General-Purpose Technology, Effecting Little Change in Being Human
Lloyd J. Whitman, senior advisor at the Atlantic Council, previously chief scientist at the U.S. National Institute of Standards and Technology, wrote, “AI will be another general-purpose technology, like electrical power, computing, etc., woven into many aspects of people’s lives. As with previous industrial innovations, it will, on average, improve the standard of living and people’s lives. But there will be winners and losers, especially if access is not equitable and if changes in jobs and the nature of work are not proactively addressed through education and training. Overall, I do not think AI, including expanding interactions with humans, will change what are core human traits and behaviors. People who create will create differently with AI (just as they did with other tech developments). People who make decisions will do so but aided by AI. People involved in science, tech and innovation will use AI as another tool to do so. People will interact with each other, sometimes with AI involved, but they will still love, hate, fight, etc.”
AI Will Homogenize Facts and Dumb Down Society
Douglas Dawson, owner and president of CCG Consulting and president of the non-profit NC Broadband Matters, wrote, “AI is becoming a tool for the rich and corporations. AI companies have said that charging high monthly fees to a relatively small group of people worldwide is the most viable business model – and that means AI for corporations, but not for the rest of us. This almost certainly means a more focused and predatory marketing of goods and services aimed at those who can be convinced to buy. It has also brought the homogenization of facts. Witness an early version of this in Google search, which now provides its own answer briefing in response to complex questions. Because people are unlikely to search beyond the summary, the blurb answers become the fact. Most people will take the easy answers supplied by AI as the truth, and they won’t think beyond that. Not only are they likely to often be misinformed or ill-informed in doing this, but it also means they will read fewer news and research articles, blogs and opinion pieces. Relying on the easy answer will lead to a further dumbing down of society. It’s the natural consequence of AI always having an easy answer for every question.”
Humans’ Behavior and Cognition Are Embedded in Their Technologies
Yasmin Ibrahim, professor of digital economy and culture at Queen Mary University of London, wrote, “There has always been a humanisation of technologies – the embedding of elements of human behaviour and cognition, presenting them as ‘smart’ technologies designed to address human needs and predicate human responses. The notion of ‘intelligence’ has historically been a problematic concept, delineated through the context of coloniality and inequalities, in which some knowledges can be constructed as superior to others. Machines and technologies have always played a key role in the construction of how nations and civilizations perceive themselves. Human dependence, adaptations and appropriations of technologies will evolve through time and will be tested in terms of their relevance, social harms, effacement of human norms, empathy and rights. Machine learning and algorithms will be cued through human behaviour, and conversely these will in time cue our responses on platforms, using technological interfaces to manipulate human senses. There is an iterative process at play.”
Overdue Democratic Regulation and Oversight is Essential to Protect Human Agency
A well-known cybersecurity professional based in Europe wrote, “The AI revolution echoes previous technological shifts in history. Just as the British application of machine guns concentrated power and restricted freedoms in Africa, AI without proper oversight threatens similar imbalances. As in any competitive game played without referees, those with the most power will inevitably bend and break rules to their advantage. By 2035, unregulated AI could create stark divides between those controlling the technology through centralized systems and those subject to it, affecting everything from job access to social freedoms. Strong democratic regulation and oversight are essential to prevent AI from becoming yet another tool that primarily serves concentrated power while diminishing broader human rights and agency. Technical solutions already exist, such as the Solid protocol, but the adoption of decentralized and federated communications faces significant obstacles.”
As AI Replaces Mundane Tasks It Creates a Path Toward More Time for Meaningful Activities
Ravi Iyer, managing director of the Center for Ethical Leadership and Decision-Making at the University of Southern California, wrote, “AI will help us do many mundane tasks that are currently largely unsatisfying for people. However, it is an open question whether we end up with a society that just does ‘more’ stuff or one that displaces such tasks for better options. We may replace the mundane tasks that AI does for us with potentially meaningless interactions with AI-generated people and AI-generated content, or we may more thoughtfully use our newfound free time to do things that are truly meaningful and that cater to people’s aspirations, which likely are not about spending more time with AI. In an ideal world, AI would enable a broad set of people to connect more with the people they love and achieve their aspirational goals – not just be more entertained throughout their day.”
AI Will Help Humans to Be More ‘Intelligent as a Civilization’
Jose Luis Cordeiro, a vice president of Humanity Plus based in Madrid, Spain, commented, “Humanity needs AI to solve the big world problems. I am not afraid of AI, but I am afraid of human stupidity; that is why we need AI, more AI, mucho more AI, so that we can finally become intelligent as a civilization. AI will be the savior of humanity!”
Success in the Age of AI Requires More Focus on Media and Information Literacy Education
Drissia Chouit, co-chair of UNESCO’s Media and Information Literacy Alliance and professor of linguistics and communication at the University of Moulay Ismail, based in Meknès, Morocco, commented, “It is my firm belief that human beings will be able to humanize technology, keeping human agency and oversight for inclusive and ethical AI as a public good. This will require quality, transformative 21st-century education that ensures media, information and digital technology literacy for all, across curricula and age groups, in line with the UNESCO Global Initiative of Media and Information Literacy for All and By All and its proactive measures for a human-rights-based, people-centered, ethical AI.”
Leaders Need to Focus More on Encouraging Stability, Emotional Intelligence and Kindness
A professor of writing, rhetoric and cultures wrote, “If we continue on our current course, I can only see negative effects of AI. This is not the fault of the technology, per se, but of the ways in which social media platform governance has sent us down so many dangerous paths. For our society, governments and communities to flourish, we need the kind of stability, emotional intelligence and kindness that we seem to have been completely incapable of over the last 10 years.”
Collaboration between Humans and AIs Will Redefine Work, Education and Creativity
Aleksandra Przegalinska, head of the Human-Machine Interaction Research Center and leader of the AI in Management program at Kozminski University in Warsaw, Poland, wrote, “By 2035, AI will likely become seamlessly integrated into every aspect of our lives, evolving beyond narrow applications into systems that can understand and adapt to complex human contexts. We might see a proliferation of specialized, smaller language models trained on diverse, multilingual datasets, addressing biases and expanding accessibility globally. In this scenario, AI won’t replace humans but enhance their capabilities. Collaboration between humans and AI will redefine work, education and creativity. Imagine AI co-pilots in daily life, helping us make informed decisions, automate routine tasks and unlock deeper insights. Ethical considerations will remain crucial, ensuring these technologies support fairness, privacy and sustainability.”
The Experience of Being Human Will Continue to Be Driven By Our Souls
Zizi Papacharissi, professor and head of the communication department at the University of Illinois-Chicago, wrote, “People will feel both more and less human in 2035. This will depend primarily on political, economic, social and cultural developments, and less on AI-related ones. The trajectory of AI developments will be defined by economic policy and interest. The experience of being human will be driven by our souls and mediated by technology. This has always been the case and always will be.”
AI Will Increase Productivity, Creativity and Inequality
Carol Chetkovich, longtime professor of public policy at Harvard University and Mills College, now retired, commented, “What is ‘the experience of being human’ now? I doubt there’s strong consensus on that among those responding here. If you ask instead whether the effects of AI are likely to be socially productive or destructive, I can hazard a guess. Like the effects of any shock to society – technological, natural or social – the effect on humans will be heavily mediated by our distribution of wealth and power. AI will tend to increase productivity and creativity among those with greater wealth and power, and reduce productivity or creativity among those with less, with some less-predictable outcomes at the margin. I would be more sanguine about the overall effects of AI if we lived in a society with a more equitable distribution of wealth; but even in a society in which wealth did not translate so easily into political power, the negative effects could be mitigated.”
Humans May Lose Touch with Important Skills Unless They Are Bolstered By Education
John Paul Nkurunziza, online tutor and expert moderator with the Internet Society, based in Burundi, wrote, “Given the fact that technology and AI are coming, we can’t avoid collaboration between humans and AI. Yes, humans will be deepening their partnership with AI, but the risk is that, in the absence of AI, humans could be unable to solve even simple issues because they have become used to relying on AI. Therefore, there is a need to rethink the educational system so that every human is provided with basic skills they can use without calling upon the assistance of AI.”
The Cultural Impact of Generative AI is Extremely Complex; It Requires Careful Study
Jillian C. York, director of international freedom of expression at the Electronic Frontier Foundation, based in Berlin, wrote, “While there are valuable applications of AI for medical, engineering and other industrial uses, the cultural impact of generative AI cannot be overstated. As we think about what it means to be human in the 21st century, we must consider the impact of AI-generated ideas, particularly on young people. Growing up as an aspiring writer, I relished the ways in which my mistakes enhanced my learning. Autocorrect, an early form of AI, would often provide less-than-helpful suggestions for my creative writing. Eventually, I turned it off, choosing instead to proofread my own papers. Teaching English as a foreign language in my early twenties helped me to understand some of the things that had come so naturally to me. I worry that over-reliance on AI for writing will create a uniformity of structure, dulling our senses. On the other hand, I must admit that AI for language learning has been a triumph for me, personally, and I’m sure this is true for countless others who can’t afford the time or money to sit in a language classroom. In other words, the impact of AI on society is complex and thus requires careful study.”
Continue to Part III of the experts’ essays. The eight experts whose work is featured in this brief closing section of the essays take a look at the big picture as humanity moves forward: Ray Schroeder, Andy Opel, Mauro D. Rios, Jim Dator, Anriette Esterhuysen, Warren Yoder, Jan Hurwitch and Frank Kaufmann.