Essays Part III – Considering what may lie ahead

Being Human in 2035 Elon University Imagining the Digital Future Center report

“Imagine digitally connected people’s daily lives in the social, political, and economic landscape of 2035. Will humans’ deepening partnership with and dependence upon AI and related technologies have changed being human for better or worse? Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors?'”

This is the fourth of four web pages carrying experts’ essay-length responses to the question above. The 10 concluding essays in this section consider a wide range of issues tied to the future of humans as artificial intelligence begins to emerge more fully across broad swaths of society.


This section features the following essays:

Jonathan Grudin: To interact with technology, humans may become drones. The current trajectory of technology-driven change is already changing human behavior.

Ray Schroeder: Humans will become more responsive to others and our environment; ‘It will be the moment of the beginning of the more ethical and other-oriented human.’

Andy Opel: AI can help us redefine what it is to be human, what our shared, intrinsic values are and how we can help as many people as possible benefit from the knowledge underpinning it.

Mauro D. Rios: ‘Our goal should be to create simulated cognitive abilities that are complementary to human ones, to build what can expand our natural abilities, working to satisfy our needs’

Thomas Gilbert: Dwindling support for today’s AI systems constitutes a form of failure. The problem isn’t that AI is bad, or progress is too slow; it’s that the public doesn’t get to decide what’s built.

Jim Dator: AI and human intelligence are just fleeting-fancy steps in an ongoing evolutionary waltz; we are constantly mutating via natural and artificial evolution.

Anriette Esterhuysen: Humans are creative, competitive, destructive and caring. AI will amplify both good and bad; it seems unlikely to close the divide between rich and poor.

Warren Yoder: The valorization of science fiction has opened the way for tech leaders to recast puffery as serious prediction, thus boosting hype cycles; ‘humans are more than intelligence.’

Jan Hurwitch: Empathy and moral judgment must be strengthened; we must challenge everyone to evolve into a more conscious and considerate species; it’s the key to our survival.

Frank Kaufmann: Humans should start exploring now to discover their meaning in the post-work era; will these advances allow us to still live lives that are meaningful?


Jonathan Grudin
To Interact With Technology, Humans May Become Drones. The Current Trajectory of Technology-Driven Change Is Already Changing Human Behavior

“In January 2025, Sam Altman predicted that AGI would appear well before 2035. MIT economics professor and 2024 Nobel laureate Daron Acemoglu said an AI collapse would likely take down tech companies and perhaps the global economy. Yet a trajectory of technology-driven change is already changing human behavior and may not be reversible, whatever materializes.

“People are unhappy. Incumbent governments have been rejected across the globe, with limited enthusiasm for their successors. Is technology a factor? Is social media abetting polarization? Does technology expose flaws in political leaders, or complicate their work beyond that which is humanly manageable? Governments can try to control media, but all governments are at risk. They are artificial constructs, trying to elicit allegiances that were designed for life in tribes.

“One factor in discontent is our rapidly rising personal indebtedness. Marketing draws on machine learning to convince people to buy things they can’t afford and don’t need. Perhaps arriving sooner than you expect: you receive a product you never asked for, with free returns; the sender is confident you will purchase it. Debt produces unhappiness and resentment of taxes, prices and other people. It forces us to postpone retirement, often a source of happiness, and makes some reluctant to have children, also often a source of happiness.

“Children are told in school that, because of technology-driven change, their future jobs don’t exist today. In short, neither teachers nor parents can prepare them well – a principal role for adults. Kids are told to expect several jobs over their careers: life-long learning. That is also unnatural – Homo sapiens is built to sponge up knowledge when young, use it throughout life and pass it on to the next generation. Small wonder that anxiety, loneliness and other mental health issues are endemic. This doesn’t seem reversible by 2035.

“Digital technology is considered a significant factor in rising income and wealth inequality. Tech billionaires bowing and scraping to a politician are a sign that more money is the focus. If AGI arrives and doesn’t take over, those who control AI will prosper the most. If progress is slower, inequality will continue to rise. In the past, rising inequality ended with corrections. Years ago, in England, I saw scattered blocks of worn stone in a park in Bury St. Edmunds. I learned they were remains of an abbey that was attacked in 1381, when the Peasants’ Revolt swept England. The wealthy prior, who had taxed the local serfs, was killed. Castles were ransacked; high officials and clerics, including the chancellor of Cambridge University and the Treasurer of England, were killed. Prisons were emptied. The frightened king freed the serfs. Once safe, he reneged on that promise, but taxation and wealth disparity declined, abetted by bubonic plague, which had reduced the labor force.

“In our past, taxation enacted by wealthy nobility or colonial powers often led to violent corrections, including the American and Russian revolutions. A peaceful reduction in inequality in the United States followed the Great Depression, which affected everyone and generated empathy. The wealthy tolerated a 90% tax bracket for top earners and a social safety net for everyone. If a major AI collapse occurs and affects everyone, it will be bad, but it could reduce inequality without violence. Of course, we would not have AGI to help us solve the threats of climate change, environmental pollution and destruction and the digital military arms race.

“I can’t envision 2035 in a future of real AGI (not the proposed financial AGI) or following a global financial collapse. However, consider more incremental progress, with recently released less costly LLMs enabling construction of generative AI applications. Specialized apps are more difficult to build than developers expect, but some will succeed and have an impact on ‘core human traits and behaviors.’

“Solid studies report that AI can increase the productivity of skilled workers while taking away enjoyable aspects of their work. Those who can work mechanically on assigned tasks, who do not miss creative work or collaboration with humans, will prosper.

“In the long term, Darwin will move humans from tribal forms of interaction to the efficient impersonal interaction of an ant colony. To interact with technology, humans may become drones – originally a word for insects. Human drones could be more successful than members of a tribe who seek respect in the global village.”


Ray Schroeder
Humans Will Become More Responsive to Others and Our Environment; ‘It Will Be The Moment of the Beginning of the More Ethical and Other-Oriented Human’

Ray Schroeder, professor emeritus and former associate vice chancellor for online learning at the University of Illinois-Springfield, wrote, “We are entering an age of enhanced human thought that will significantly expand our access to information, logic, collaboration and ethics. Our partnership with AI and related technologies will enable us to become more efficient, productive, insightful and creative than at any previous point in human history.

“This could be a dawn of a new enlightenment that expands our perspectives beyond the individual and the species to a worldwide and perhaps universe-wide perspective. Our emotions and motivations will embrace more than the person and the family, extending to understanding, considering and encompassing the greater good for all.

“I am particularly interested in the impact of the broadening of our awareness and knowledge beyond ourselves to ‘others.’ I am hopeful that this will bring about a much greater sensitivity to the ethics and ramifications of our actions beyond our immediate wants to seek inclusive progress in the human condition and beyond. My expectation is that we will become consistently aware of, and responsive to, the environment of the Earth and our celestial neighbors. AI is positioned to remind us that every action causes some sort of a reaction. It can guide us to find the best action that will serve the interests of all beings. 

“I do believe we will become less selfish and more oriented toward finding solutions to problems or opportunities that serve not only our personal needs and wants but also those of others. The addition of a broadly shared conscience will help accelerate the improvements felt by others. The synergies will create a sea change in the way people treat one another and support the collective good.

“In each decision in which we engage AI, the values of the greater good for all will be considered. This will not often mean sacrificing the good of the individual; rather, AI will seek to help us find the action that advances the individual without substantial harm to others – and, more ideally, the action that advances the condition of the individual while also advancing the condition of others.

“The advancements enabled by AI-enhanced cognition and decision-making will become the engine of advancing the human condition, the living being condition and the condition of our solar system, galaxy and beyond. A deeply ethical and thoughtful approach will not diminish our personal conditions but rather advance the conditions for all. I foresee incremental advancements over the next decade as an ethical AI permeates our decision-making processes. This marks the dawn of a new era in the history of humans. It is the moment of the beginning of a more ethical and other-oriented human. We and our neighbors on this planet and beyond will be better for the advances that AI enables us to achieve.”


Andy Opel
AI Is Our Opportunity to Redefine What It Is to Be Human, What Our Shared, Intrinsic Values Are and How We Can Help as Many People as Possible Benefit from the Knowledge Underpinning it

Andy Opel, professor of communications at Florida State University, wrote, “From the vantage point of January 2025, with significant political upheaval in the U.S. and the elevation of a small pool of technology billionaires into new political prominence, predictions beyond this historical moment are challenging. Given these challenges, my observations about the impacts of AI are grounded in two major tensions that could break in catastrophic directions or resolve with unexpected and inspiring results.

“The two dominant tensions that will shape AI’s influence on the human condition are the environment and the labor economy.

“The climate crisis, coupled with the collapse of biodiversity, presents existential challenges that appear increasingly intractable. While the Paris Agreement offered a moment of hope, no major industrial country is on track to meet its carbon-reduction commitments and, according to the Climate Action Tracker, current emissions are projected to set the world on a path toward record warming by 2100. This level of warming will have global impacts on agriculture and on terrestrial and marine ecologies, further stressing biological cycles that are essential to our survival.

“While AI will assist with our understanding of our planetary conditions, how this knowledge is translated into environmental policy will remain a political question, subject to the same forces of disinformation and consolidated corporate media systems overdetermined by black box algorithms. At the very moment when we are producing technologies capable of transitioning away from fossil fuel-based energy systems, we are experiencing a resurgence of human impulses to turn inward, protect ingroups and blame outsiders. Calls for nativist returns to cultural homogeneity are fueled by the environmental changes that are impacting food and fuel prices around the globe. Whether AI will be able to counter these political forces is an open question, one that will determine our response to the ecological crisis.

“The environment and the economy have always been deeply connected and, in this moment, AI is going to have major impacts on how our economy is structured. Labor has a long history of structuring human time and identity and AI is going to play a significant role in restructuring human labor. As AI merges with robotics, everything from routinized manual labor to complex software coding will be reshaped and potentially replaced by automation. Decoupling our identities from our labor opens up an opportunity to expand the fundamental human values of relationship, care, creativity and nourishment. Taking care of our families and friends, our children and our elderly, and the many species we share the planet with is important work that has been eclipsed in many cases by the wage labor imperative. Moving away from wage labor is a radical shift that AI might facilitate by reinforcing the deeply human values that connect us to one another and to our ecological spaces.

“These values can only expand if basic needs for food, housing, healthcare and education are met for all. Establishing a universal basic income funded by the expansion of AI and robotics could create the conditions for unprecedented human flourishing. This will require the benefits of AI to be broadly distributed and not concentrated in the hands of a few politically connected billionaires. Given the current concentration of wealth, the pathway to broad-based sharing of AI benefits is not clear, though history is punctuated by unexpected turns that yield revolutionary results.

“From the work of Mary Shelley in the early 1800s to Jules Verne, to H.G. Wells, Isaac Asimov, Philip K. Dick, the ‘Black Mirror’ television series and many others, we have more than 200 years of cautionary tales about the perils of technology. As many of the imagined tools and technologies are coming into existence, we can draw on this rich literature to help us navigate the transition AI is presenting to us. A global audience is familiar with dystopian narratives dominated by arch villains and the nexus of corporate and political corruption. These widespread warnings may serve as the bulwark that prevents humanity’s descent down Mad Max’s ‘Fury Road’ and nurtures the imaginative visions that begin to move us toward a more sustainable, equitable planet where human flourishing is the goal of our systems, not the byproduct for a limited number of ‘winners.’

“What we do know is that there is not an inevitable future. Rather, AI is going to present us with the opportunity to redefine what it is to be human, what our shared, intrinsic values are and how we can help as many people as possible benefit from the collected knowledge that is the basis of AI. Given our conscripted participation in the training of AI models, global citizens deserve to share in the equitable benefits of AI. The technologies of the near future may well be the tools that help us reconnect to a deep human past.”


Mauro D. Rios
‘Our Goal Should Be to Create Simulated Cognitive Abilities That Are Complementary to Human Ones, to Build What Can Expand Our Natural Abilities, Working to Satisfy Our Needs’

Mauro D. Rios, secretary general of the Uruguayan chapter of the Internet Society and a co-founder of Uruguay’s Electronic Government Agency, wrote, “The discussion about AI’s future direction is at a critical point. The next evolutionary steps must be determined and appropriate regulatory models must be found. AI requires a new legal approach not currently being tested or implemented. Governments play a crucial role.

“The days of the trend toward auditable models of algorithms are numbered. It is futile to demand prior transparency of an algorithm generated in real time by another algorithm or AI system; it is impossible to demand transparency of something that has not yet been created.

“Ideally, you would like to identify the responsible parties in the development and production chain of each algorithm and establish the chain of responsibility on which to base punitive norms. It is a complex process that requires a specific methodology covering the quality, effectiveness and ethics of the algorithm. You need to look at all aspects of the algorithm’s design, development, implementation and final use. You have to be able to examine and record every link in the chain. Nobody does that now.

“Our goal for AI should be to create simulated cognitive abilities that are complementary to humans’, to build what can expand our natural abilities, working to satisfy our needs. One promising area lies in the way AI can expand our cognitive capacity. It is clear that AI systems remember better than we humans do. They just need to have access to information – they don’t need to reconstruct a memory as humans do. (For now, humans have the advantage.) One example is the spread of AI in Decision Support Systems (DSS). These systems’ aim is to improve human beings’ practical wisdom – what is known as ‘phronēsis.’ Such systems are already being developed and used in medicine, law, education, etc.

“In the next few years, I believe the world will be divided into three blocs globally – each with a different model of regulation. They will share common beliefs in regard to chosen purposes and intentions for AI:

  • The first bloc will be a grouping of nation-states with solid commercial growth projections and good institutional health. It will have incorporated AI into its public, private and academic processes and fully supported and encouraged AI research and development. It will have open and competitive regulations that encourage innovation and creativity.
  • The second bloc will include the countries that are neutral toward AI. Their development and growth will happen through inertia, basically just accepting the systems that work in the first bloc of countries. Their regulations will be restrictive, oriented not technically but socioeconomically. These countries will limit the use of artificial intelligence and protect the parts of society they worry may be harmed by AI.
  • The third bloc will include the countries that are confrontational toward the evolution of AI. These nation-states will have rejected AI or adopted a critical stance toward it. Although it is against their wishes, many in this bloc will incorporate AI to some extent, because there is no way to avoid it and still operate fully on the global scene.

“AI will permeate every aspect of humans’ lives, whether we like it or not. The third bloc will have retrograde, outdated regulations that will hinder the development and adoption of artificial intelligence.

“Foresight around AI is a huge challenge. The evolution and development of AI create new paradigms for governance. Looking at the big picture, the spread of AI poses major questions for human beings about their role in the world, their autonomy and their behavior as social actors. The future is difficult to read. Still, one thing is certain: It will be exciting.”


Thomas Gilbert
Dwindling Support for Today’s AI Systems Constitutes a Form of Market Failure. The Problem Isn’t That AI is Bad, or That Progress is Too Slow; It’s That the Public Doesn’t Get to Decide What’s Built

Thomas Gilbert, founder and CEO of Hortus AI, wrote, “Since 2016, AI has gone from beating us at board games to becoming our work assistant, news reporter, friend, therapist, even lover. While the convenience offered is unprecedented, the stakes have become existential. Experts now estimate that as much as 90% of online content will be AI-generated by 2026. And every month or two, a major new AI model is released, often accompanied by claims that it blows its competitors out of the water.

“According to a recent Gallup poll, teens now spend an average of 4.8 hours per day on social media while suicide rates have skyrocketed, prompting the Surgeon General to call for warning labels. Cruise, Uber and Tesla have deployed self-driving cars that have harmed unsuspecting human drivers and pedestrians. And the risks of generative AI have come into focus: more misleading content, election misinformation and chatbots that tell people to end their lives to slow climate change or give unsolicited romantic advice. As AI gets stronger, digital systems are learning to take advantage of – and amplify – our distinctly human vulnerabilities.

“It’s a matter of trust. Present AI development practices depend on three things: capital, data and public goodwill. Beyond user trust, which focuses on individual use of AI tools, public goodwill is about our collective acceptance of how those tools – and their developers – are changing how we work, play and rest. But public goodwill is finite and dissolving: Just 35% of the American public trusts companies that build and sell AI tools. The consequences are severe, as the balance between company incentives and consumer demand depends on the public’s collective willingness to keep playing with what is deployed. As such, dwindling public support for leading GenAI providers constitutes a major form of market failure.

“The present analog to this approach is AI ‘alignment’ – i.e., training AI to share human objectives, values and goals. Unfortunately, companies pursue alignment by extracting and inferring from user data, rather than through voluntary and active public participation or feedback. Take the technical method du jour for aligning AI responses: Reinforcement learning from human feedback (RLHF). In RLHF, AI learns to behave better based on revealed human preferences between different model outputs. These preferences are typically provided by a small sample of humans who have little or no stake in the model’s training. In reality, RLHF manifests the preferences of model developers and the human annotators who follow developers’ guidelines; it neither solicits nor expresses public needs or wants. It defers key questions that ought to be in scope for alignment: Who is the AI designed for? For what purpose will this ‘intelligence’ be used? Why should society pour its limited, finite resources into adapting to this intelligence?
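Gilbert’s description of RLHF learning from “revealed human preferences between different model outputs” can be made concrete. Reward models in RLHF pipelines are commonly trained on annotator preference pairs with a Bradley-Terry-style objective; the minimal sketch below (the function name and toy scores are illustrative, not drawn from any particular system) shows how a pair of scalar reward scores becomes a training loss:

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry-style loss used in RLHF reward modeling: the
    reward model is pushed to score the annotator-preferred response
    higher than the rejected one."""
    # -log sigmoid(r_chosen - r_rejected): near zero when the model
    # already agrees with the annotator, large when it disagrees.
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# The loss shrinks as the margin between chosen and rejected grows...
assert preference_loss(2.0, 0.0) < preference_loss(0.5, 0.0)
# ...and with no margin at all, it equals ln(2).
print(round(preference_loss(1.0, 1.0), 4))  # 0.6931
```

Note how nothing in this objective asks who the annotators are or whose interests the preferences represent, which is precisely the gap Gilbert identifies.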

“RLHF is a method of fine-tuning pre-trained AI models. The metaphor suggests that, like a lead oboe tuning up before a concert, an AI model needs only a final check to ensure a good performance and mitigate foreseeable risks. But this metaphor is misguided. In practice, fine-tuning allows companies to bake in unwarranted assumptions and opaque presumptions about the contexts in which human interests and values operate. As we grow numb to the ways automated systems reshape our lives, we lose the ability to rein them in. How did we get here? Major AI companies have created a state of play in which they use AI-infused products and services to nudge people into behaviors that align with the companies’ own goals of achieving competitive, technological and financial gains. Our lives serve as sandboxes in which AI learns to ‘behave well.’ The goal of this game is to generate more revenue and more human data with which to train ever-more-capable – but not more desirable – agents.

“For years, social media companies engineered their platforms with ‘dark patterns’ of user experience to prioritize shareholders’ interests over users. Examples include hard-to-cancel subscriptions, infinite scrolling, randomized reward schedules and push notifications. Such user-experience patterns manipulate the same psychological features that addict people to gambling. When AI trained this way becomes agentic it is likely to apply similar strategies to all areas of social life. There is a palpable risk that society could transform into a mere ‘environment’ for AI agents to manipulate as they see fit. Also of great importance is the fact that the use of RLHF opens up the risk of human values being reconstituted based on what can be automated rather than on what the public wants and needs.

“Stepping into an AI-powered world means adopting new rules. Under today’s AI design rules, humans are increasingly passive, and greater automation makes us cede more and more control over our lives. But these rules can be changed. The problem isn’t that AI is intrinsically bad, or that progress is too slow – it’s that we don’t get to decide what gets built. To solve that, we need to abandon the project of alignment as passively matching human behaviors with AI models. Instead, AI capabilities must be shaped through active public participation.”


Jim Dator
AI and Human Intelligence Are Just Fleeting-Fancy Steps in an Ongoing Evolutionary Waltz; We Are Constantly Mutating Via Natural and Artificial Evolution

Jim Dator, futurist and professor emeritus at the University of Hawaii, wrote, “Overall, I believe the change ahead will be considerable, with much more to unfold as time goes by. AI is related to human cognition but is rapidly becoming its own mode of consciousness and decision-making. We should neither ignore it nor fear it but embrace it. Moreover, the possible existence of many more forms of cognition and action than human and/or artificial is becoming manifest.

“I look at present developments of AI from a long evolutionary perspective. Human capacities and behavioral possibilities at the present time are not eternally fixed. They are just one minuscule point in an interactive fluid process. What our deep ancestors could think and do was similar in some ways but quite different from what and how we now can think and do. And it is different still from what and how our deep descendants will be able to think and do. 

“Homo sapiens are and always have been dynamic ‘human becomings’ – not static human beings. We are constantly mutating via processes of artificial as well as ‘natural’ evolution. A major feature has been our invention and use of ‘technologies’ (the hardware, software and orgware thereof – not just mere tools) that then transform us.

“All technology is mutative. AI is in no way unique in that. And AI did not suddenly appear recently – as with ChatGPT, for example. Many current discussions are rather boring because they are considering – with irrational alarm or enthusiasm – issues that have been discussed for many decades. We seem to have learned little from previous discussions and experiences. We seem to be caught in a vicious cycle of fears, foibles and fantasies while AI proceeds in its own merry, inadvertent way.

“Think of what occurred over the Holocene Epoch, when Homo sapiens sapiens achieved cosmic hegemony. Speech, language and writing all evolved in ways that both facilitated and froze thoughts.

“Religions arose that circumscribed beliefs and behavior. Schools were created that taught students truth and encouraged the production of delicious fictions. Governments were created that enforced obedience via killing force and through radio, movies, television, computers, simulations, multimedia and social media. All of these together have been at least as mutative for our species as current AI is.

“AI and human intelligence are just fleeting fancy steps in an ongoing evolutionary waltz. There is no ‘better’ or ‘worse’ to this evolution. Things appear, interact, persist, change, die – and lifeforms either adapt, die or hunker down until their time comes in some future. The manifold novel challenges and opportunities of the Anthropocene Epoch – not merely all the impacts of climate change – might either stop AI (and other) development in its tracks or propel it in unimaginable directions.

“The 20th Century might be called the Electronic Age (vide AI as constructed now). So also, the 21st Century might be the Bionic Age. Though I suspect that it will become controversial, recent research into basal cognition and other evidence of plant and animal cognition and communication via electrochemical mechanisms may sweep our current AI and human notions into the rubbish bin of history. Michael Levin reminds us that ‘evolution does not produce specific solutions to specific problems. It produces problem-solving machines,’ and that humans need to learn to ‘speak cell’ – to coordinate cells’ behavior through bioelectricity.

“Finally, no one can think responsibly about the next 10 years and beyond without also considering that the world may be moving from an information society where reason, literacy and facts were important to a dream society in which performance, schtick and make-believe rule. This shift is also said to have ‘good’ or ‘bad’ consequences, but due to the worldwide ascendance of authoritarianism, AI may never have a chance against an ever-more rampant reign of human fantasies.”


Anriette Esterhuysen
We Are Creative, Competitive, Destructive and Caring. AI Will Amplify Both Good and Bad, Human Strengths and Human Weaknesses. It Seems Unlikely to Close the Divide Between Rich and Poor

Anriette Esterhuysen, South African Internet pioneer, Internet Hall of Fame member and longtime executive director at the Association for Progressive Communications, wrote, “AI will bring out major changes but not ‘fundamental’ changes to either the experience of being human or how humans behave. As a species, humans are not innately good or bad but capable of being both. We are creative, competitive, destructive and caring.

“AI will no doubt amplify human trends. AI is so much part of how tech has evolved already, and we already see how the use of digital tools amplifies both good and bad outcomes. For now, these outcomes are still generated, at their core, by humans. Will this change? I don’t know.

“We can already see that AI, like other digital innovations before it, tends to increase the gap between those who have the ability and resources to deploy it in their own interest and those who do not. Can AI be a disruptor of digital inequality between the rich and the poor, the global ‘North’ and ‘South’ and create a more equal digital future? It’s very unlikely, but perhaps at the margins there will be some positive disruption. There is also likely to be increased marginalisation and a more-focused concentration of power in big companies in rich countries that already control so much of the world’s economy.

“Many fear that machines will create their own culture and ethos. I am not fully convinced of that, but if it does happen it will be intertwined with the evolving social, environmental and economic ecosystems that we live in, create, destroy and re-create. In my view, the state of our planet in regard to global warming and the expansion of models of growth that destroy and harm our natural environment loom bigger than AI and the changes it will bring. A greater concern about AI is how it is going to increase energy consumption and revive investment in nuclear options of all kinds as opposed to renewable energy, which is less suited to the high levels of power used by AI.

“The big question is: ‘How will the expanding interactions between humans and AI affect the sustainability, (bio)diversity and well-being of our entire ecosystem?’”


Warren Yoder
The Valorization of Science Fiction Has Opened the Way for Tech Leaders to Recast Puffery as Serious Prediction, Thus Boosting Hype Cycles; ‘Humans Are More Than Intelligence’

Warren Yoder, longtime director at the Public Policy Center of Mississippi, now an executive coach, wrote, “Philosophy may be the discipline most transformed in the next decade by the exploding interaction between humans and AIs. Now that we are not the only beings who can ask what kind of beings we are, old questions will be reframed and new questions asked.

“What does it mean to be human? Are we fundamentally thinking stuff, as René Descartes (‘I think, therefore I am’) proposed, or is there more to being human than just intelligence? When AI is roughly as intelligent as a human individual, will capitalism inevitably drive AGI to subjugate human culture? Is there a better way? Many of the answers we have now do not serve us well. The task of philosophy, both professional and popular, is to make sense of the sense we make. Engineers can think of philosophy as a stress test for ideas. Until we cooperatively come up with better ideas, let us avoid these four simple misconceptions:

“Naive communication theory: When we communicate, we are trying to understand something someone somewhere created to express their own understanding. When we query an AI, we create all the understanding ourselves. The public large language models today are correlation engines that do not have human-level understanding. Querying an AI, in a real sense, is communicating with the Zeitgeist. The biases, fabrications and incitements to violence of raw AI are all-too-honest reflections of the spirit of our times. Thank goodness for the heavy overlay of human engineering that teaches AI the social mores required for polite company. Expect this human engineering, including your own query engineering, to become ever more essential.

“Exponential expectations: Exponential functions are a delightful part of pure mathematics. They don’t exist in the natural world. Any exponential function let loose in the natural world would soon turn the whole universe into its output. Paper clips, say. That obviously hasn’t happened. Instead, rapid growth is usually driven by sigmoidal S curves: exponential growth followed by exponential slowing. Continued growth can be achieved by stacking sigmoidal functions, but that runs into its own constraints. Anyone using exponential language to describe artificial intelligence isn’t thinking clearly.

“Pure puffery: Smart phones aren’t actually ‘smart.’ The neural nets in AI models only superficially resemble the living neural connectomes in our brains. These neologisms are puffery: exaggerated statements not amenable to disproof. Marketing puffery is allowed by the commercial legal code, but it is always the enemy of clear thought. The valorization of science fiction has opened the way for tech leaders to recast puffery as serious prediction, thus boosting hype cycles to support their venture capital. Think through big claims, step by step, for yourself.

“Crumbling assumptions: Ideas we use to explain our world were all created in other times for other uses. We are constantly repurposing old ideas as we struggle to understand our rapidly changing reality. Some of these ideas cannot bear the added weight of new meaning. Intelligence is a good example. It had one meaning in Latin, another in the Middle Ages, only to be deprecated as unusable by early modernists.

“Intelligence was repurposed in the early 1900s by newly minted psychologists, first for the military, then academia, now for the rest of the world. We know higher scores on intelligence tests are correlated with success in some tasks and professions. But we have never agreed what intelligence means exactly. Some try to shoehorn social and emotional intelligence into the idea. We could even describe human culture as a super intelligence transcending generations and geographies.

“The creative intelligentsia obviously prize intelligence, and their work trained and named early AI. But humans are clearly more than intelligence. We are only now realizing what it means to repurpose a concept we never clearly defined to describe a thing we barely understand.

“How we think of intelligence is falling apart in our hands, too vague to help us decide if we have achieved artificial general intelligence. Honesty requires us to frankly acknowledge the inherent limits of our assumptions.

“The next 10 years will be a contentious time as we think through what it means to rely on AI. There will be countless misleading, thoughtless and even impossible claims made by people who should know better. Philosophy, the love of wisdom, will be essential as we struggle to understand our new realities.”


Jan Hurwitch
Empathy and Moral Judgment Must be Strengthened; We Must Challenge Everyone to Evolve Into a More Conscious and Considerate Species; It’s the Key to Our Survival

Jan Hurwitch, director of the Visionary Ethics Foundation, wrote, “To begin this reflection, it is important to clarify that two-thirds of humanity currently is living on $2 a day or less. Unless we provide access to electricity and clean water to all these people, their lives in 2035 will continue to be filled with hardship, suffering and little hope for the future. And migration, which is now projected at 500 million by 2030, will continue to surge. So, those directly impacted constitute the one-third of humanity that has access to AI and related technologies.

“Consider three differentiated segments of humanity: different cultures, different generations and different cognitive abilities. Culturally, I suspect that the more family-oriented societies will emphasize family ties in order to compensate for the distancing created by these new technologies; this is becoming more prevalent in Latin America where I reside.

“The reverse is likely in less family-oriented cultures. Intergenerationally, I view a greater distancing taking place now; however, as increased efforts are made to bridge generations, new ways to relate should hopefully emerge. Different cognitive abilities and personality types play an important role in this process because a highly introverted intellectual person will likely have more interest in what AI technologies have to offer, while extroverts with strategic minds will gather teams and brainstorm with other humans to keep real relationships alive.

“Empathy and moral judgment taught in family, schools and church must be strengthened. Having lived in 11 different countries, I remain impressed with the very strong emphasis placed on ‘respecting one another and finding peaceful solutions to conflict’ in Costa Rica, where I now reside. This is also connected to our social and emotional intelligence. Tech-based games could be an interesting answer to teaching compassion; hopefully more of these will supplant the games that stimulate competition and encourage being a winner and not a loser.

“We must challenge everyone to evolve into a more conscious and considerate species as the key to our survival.”


Frank Kaufmann
Humans Should Start Exploring Now to Discover Their Meaning in the Post-Work Era; Will These Advances Allow Us to Still Live Lives That Are Meaningful?

Frank Kaufmann, president of the Twelve Gates Foundation, wrote, “My goal in life is to do only what I alone, uniquely can do. I believe AI, AGI, machine learning and robotics can evolve in such a way as to eventually be able to ask me: ‘Just what is it, Frank, that you alone, uniquely can do?’ And assist in that endeavor.

“We can safely speculate that there will come a time sooner or later that tech and AI progress will arrive at the point at which it can do almost everything I can do, and do it better, faster, more completely and more reliably. This likelihood gives us a present-day window into human ‘traits and behavior.’ How do humans respond when someone shows up for their team in the office, their choir, their painting class or their basketball team who can do everything they do better, faster, more completely and more reliably?

“I am faced with a range of choices: I could welcome them, hate them, learn from them, oppose them, befriend them, try to undermine them, etc. Additionally, I could go lazy (‘OK, if you’re so great you do it’). Or I could get inspired (‘Wow, with this person around we can do a hundred times more’). These reactions and choices are those that will be a part of the 2035 question.

“Tech and AI progress will relieve us of thousands, perhaps tens of thousands or millions of burdens of labor that until just this past year or so we thought humans were required to do. This gives us access to another presently observable human trait and behavior from which we can extrapolate. How do we react when staring at a mountain of tedious or exhausting work and someone comes along and says, ‘I’ll do that. Take the rest of the day off.’ Very few or none will say, ‘No no. I demand that I spend the next eight hours slogging through tedium and physical wear and tear.’

“But more important than the delight is the question, ‘What do you plan to do with these next eight hours that up until a minute ago you never had?’

“This is the essential question. Not, ‘What are you going to do with the sudden and unexpected gift of eight hours added to your life?’ but rather, ‘What are you going to do with a sudden and unexpected whole life?’

“You mean I don’t have to shovel? No. You don’t have to shovel. You mean I don’t have to type? No. You don’t. You mean I don’t have to learn biology? No, you don’t. You mean I don’t have to go to the store? No, you don’t. You mean I don’t have to do brain surgery? No, you don’t.

“Then what am I supposed to do? Or even more frightening, what am I even good for?

“That is the question. But please try to figure this out on your own, and don’t leave it to people like Yuval Harari, John D. Rockefeller III (chair of Nixon’s Commission on Population Growth and the American Future) or Reimert Ravenholt – just to name a few, present and past – to answer that for you. These men and a great many others have great difficulty coming up with ideas about what human beings are good for.

“A next 2035 question: Is leisure enough to keep humans happy? To keep us occupied? If so, then we may have an answer for the 2035 being-human and traits-and-behaviors question. TikTok, Xgolf, sex with robots?

“If, on the other hand, leisure and/or pleasure is not enough to keep humans happy, if rather we can expect to hear, ‘I am tired of all this leisure and pleasure. It is actually beginning to nauseate me. I have to do something meaningful. I have to make a difference. I have to do something creative,’ then the 2035 question becomes truly engaging.

“‘OK. I get it. You want to do something meaningful, helpful and creative. What do you have in mind?’ ‘I want to draw pictures for children in hospitals.’ ‘I see. I’m sorry, but we already have tens of thousands of those. Gemini draws hundreds of these pictures per minute. They are perfect. The children love them.’

“’Then I want to volunteer twice a week to help elderly in their homes.’ ‘That is certainly very thoughtful of you. Unfortunately, this presently is managed by home-help robot services. Each robot is programmed in over 3,600 metrics to be an exact match of each elderly person it serves and cares for.’

“The question we must reflect on is not if tech, AI, machine learning and AGI will be able to solve every human problem and be able to lift from us every bit of labor and tedium from digging ditches to performing neurosurgery. The question is, will these advances allow us to still live lives that are meaningful, ‘make a difference’ and be genuinely creative?

“Is there something insuperably elevated and transcendent about humans that no machine can ever attain (no matter how smart, how strong, how fast)? If there are such things in being human, life in 2035 will be wondrous beyond our wildest speculation. If there are not, a tiny, vile elite will manage an enslaved human population that will be maintained to provide some utilitarian complement from our biological physicality to go along with the efficient functioning of non-human entities.

“Efforts to identify if there does exist something elevated and transcendent about being human should begin in earnest right away. If we find such a thing, it would be wise to invest in developing that with great focus and intensity.

“I would recommend that something related to love is the best place to start.”


UP NEXT – Our closing section. A thank-you to and listing of primary contributors, plus details on the research methodology and topline findings.