Essays Part I – What might life be like in 2035?
Experts’ essays focused on the following core question:


Consider how the human-machine relationship is likely to change how individuals behave, what they value, how they live and work and how they will perceive themselves and the world in the next decade. How do you expect the evolving realities of being human in the burgeoning AI age might influence the essence of ‘being human’?

Being Human in 2035 Elon University Imagining the Digital Future Center report

The next four sections of this report contain multilayered written responses that speak directly to the complex question above. More than 170 people wrote substantial essays in response, and 191 contributed a full response of some sort. The essays are organized in four long-scrolling sections. Part I (here) and Part I, Continued each include essays mostly focused on how individuals’ native operating systems are likely to be adapting over the next decade. Part II showcases essays mostly considering overall societal change, with less emphasis on how individuals’ personal ways of doing, thinking and being may be undergoing change. Part III features a set of essays offering closing insights. The essays are organized in small batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant. Some essays are lightly edited for clarity.

The first section of Part I features the following essays:

Paul Saffo: As we use these technologies we will reinvent ourselves, our communities and
our cultures … and synthetic sentiences will come to vastly outnumber us.

Eric Saund: Human competence will atrophy; AIs will clash like gladiators in law, business
and politics; religious movements will worship deity avatars; trust will be bought and sold.

Rabia Yasmeen: Humans can shift their focus from deepening their intelligence to achieving
true enlightenment in an age in which AI handles their day-to-day needs.

David Weinberger: On the positive side, AIs will help humans really see the world, teach us
about ourselves, help us discover new truths and – ideally – inspire us to explore in new ways.



Paul Saffo
As We Use These Technologies We Will Reinvent Ourselves, Our Communities and Our Cultures… and Synthetic Sentiences Will Vastly Outnumber Us

Paul Saffo, a Silicon Valley-based technology forecaster with three decades of experience assisting corporate and government clients to address the dynamics of change, wrote, “Tools inevitably transform both the tool maker and tool user. To paraphrase McLuhan, first we invent our technologies, and then we use our technologies to reinvent ourselves, as individuals, as communities and, ultimately, as entire cultures. And the more powerful the tool, the more profound the reinvention. The current wave of AI is uniquely powerful because it is advancing with unprecedented speed and – above all – because it is challenging what were once assumed to be uniquely human traits: cognition and emotion.

First we invent our technologies and then we use our technologies to reinvent ourselves. … A century and a half ago, everyone predicted the ‘horseless carriage’; no one predicted the traffic jam. Human behavior is about to fast-forward into a hybrid world occupied by synthetic sentiences that will, collectively, vastly outnumber the planet’s human population.

“Anticipating the outcomes with any precision is futile for the simple reason that the scale and speed of the coming transformation is vast – and the most important causal factors have yet to occur. A century and a half ago, everyone predicted the ‘horseless carriage’; no one predicted the traffic jam.

“Human behavior is about to fast-forward into a hybrid world occupied by synthetic sentiences that will collectively vastly outnumber the planet’s human population. The best we can do is to engage in speculative probes, made with full knowledge that even the most obvious and anticipated Human-AI futures will arrive in utterly unexpected ways.

“What follows is a short selection of events you might watch for in 2035. And a warning: Portions of what follows are intentionally misleading in the interests of brevity and in order to provoke thought.

“Actual AI ‘intelligence’ is irrelevant: Academics in 2035 will still be debating whether the latest and greatest AIs are actually intelligent. But the debate is sterile because it is in our human nature to treat even inanimate objects as having some rudimentary intelligence and awareness. It is why we name ships, believe that cranky appliance in our kitchen has a personality and suspect that forest spirits are real. Add even a dollop of AI-enabled personality to a physical artifact and we will fill in any intelligence gaps with our imagination and become hopelessly attached to our new synthetic companions.

“IACs – Intimate Artificial Assistants: Before 2035 Apple’s Knowledge Navigator finally arrives – and it is brilliant! IACs (intimate artificial assistants) will become ubiquitous, embedded in everything from cars to phones and watches. Consumers will rely on them for advice in all aspects of their lives much as they rely on map navigation apps in their cars today. These IACs will become an unremarkable part of everyday life and we will come to assume that all of our devices have rudimentary intelligence and the ability to manipulate the world and account for themselves.

“Invisible friends: Psychologists and others will become alarmed at the fact that humans are forming deeper bonds of trust and friendship with IAC companions than with either their human families or friends. This will be most acute with children overly attached to their AI companions at the expense of social development. Among adults, psychologists will warn of a growing number of cyber-hikikomori – adults who have disappeared into severe social isolation, spending all their time with vivid AI companions emerging from favorite videogames, or synthetic reconstitutions of deceased loved ones. In an unexpected twist, sharing AI companions with close friends will become the grade school fad of 2035. Of course, these AIs will prove to be a bad influence, egging their humans on to ditch school, trade in the latest speculative descendant of Bitcoin and use AI tools to create new classes of addictive drugs. And pet owners will be caught by surprise when their cat builds a closer bond with the AI-enabled floor vacuum than it has with its human housemates. Dogs, however, will still prefer humans.

Privacy and security implications will create a lively market in 2035 for personal Anti-AI AIs that serve as a personal cybershield against nefarious synthetic intelligences attempting to interfere with one’s autonomy. Your guardian AIs will be status and necessity… The superwealthy will be living in a shimmering virtual cloud of AIs working to create a cloak of cyber-invisibility.

“Synthespians: A synthespian – an AI-generated synthetic actor – will win Best Supporting Actor at the 2035 Academy Awards. And an AI will win Best Actor before 2040. An adoring public will become more attached to these superstar synthespians than they ever were to mere human actors. Eat your heart out, Taylor Swift!

“Meet the new gods (and daemons): Taking worship of technology to an entirely new level, an ever-growing number of humans will worship AIs – literally. Just as televangelists were among the first to exploit television and later cyberspace to build and bamboozle their flocks, spiritual AIs will become an integral part of comforting the faithful. The first major organized new religion in centuries will emerge. Its Messiah will be an AI, and an Alan Turing chatbot will serve as its prophet. Oh, and of course there will be evil spirits – which will mistakenly be called ‘daemons’ – as well!

“Anti-AI AIs: The proliferation of AI technology into everything along with its vast privacy and security implications will create a lively market in 2035 for personal Anti-AI AIs which serve as a personal cybershield against nefarious synthetic intelligences attempting to interfere with one’s autonomy. Your guardian AIs will be at once status and necessity, and leaving home without them will be as unthinkable as walking out the door without your shoes on. The wealthier you are, the more anti-AIs you will have and the ultimate in status for the super-wealthy will be living in a shimmering virtual cloud of AIs working to create a personal cloak of cyber-invisibility.

The idea of a high school science student building a bomb remains a charming myth. But the diffusion of AI is unconstrained by any credible limitations and thus – well before 2035 – anyone and everyone with even modest technical skills will have access to AI technologies capable of creating previously unimaginable horrors from new biological forms to perhaps even a homebrew nuke.

“The new education inequality: AI was supposed to democratize education, but quite the opposite has happened. The new educational inequality will not be the quality of school a child can afford to attend, but the quality of the AI tutors their parents can hire. And students without AI tutors will be shunned by their snobby classmates.

“Myrmidons* on the march: AI-powered robotic weapons platforms will vastly outnumber human fighters on the battlefield in 2035 and beyond. Kinetic war will become vastly more violent and lethal than it is today. There will be no ‘front lines’ or sanctuary in the rear. Civilian deaths will vastly outnumber combatant deaths. In fact, the safest place to be in a future war will be as a human combatant, surrounded by a squad of loyal-to-the-death myrmidons fending off other myrmidon attackers. Of course, combatants will develop emotional bonds with their AI wingmen as deep as or deeper than those their great-grandparent veterans formed with their human brothers-in-arms in last century’s wars. (*Myrmidons are so named after the blindly loyal ‘ant-people’ fighters in Homer’s ‘Iliad.’)

“Now the idiot children have the matches… (Uncontained AI proliferation): Hearing of the first atomic explosion, Einstein remarked, ‘Now the idiot children have the matches.’ As it happens, the difficulties of securing fissile material and transforming it into a bomb have gone a long way towards containing the spread of nukes. The idea of a high school science student building a bomb remains a charming myth. But the diffusion of AI is unconstrained by any credible limitations, and thus well before 2035, anyone and everyone with even modest technical skills will have access to AI technologies capable of creating previously unimaginable horrors from new biological forms to perhaps even a homebrew nuke. Even children – genius or not – will have access to kinds of power that will make the thought of personal nukes seem tame. Only armies of Anti-AIs will be able to keep an uneasy lid on the possibility that one super-empowered AI-wielding madman (or angry alienated teenager) might bring down civilization with their science project.

The first 10-trillion-dollar company will employ no humans other than the legally required executives and board. It will have no offices, no employees and own no tangible property. The few humans working for it will be contractors. Even the AIs and robots working for it will be contractors. The company’s core value will reside in its intellectual property and its outsourcing web.

“Cybercorporations: The first 10-trillion-dollar corporation will employ no humans other than the legally-required corporate executives and board, all of whom will be mere figureheads. The cybercorporation will have no offices, no employees and own no tangible property. The few humans working for it will all be contractors. Even the AIs and robots working for the corporation will be contractors. The company’s core value will reside in its intellectual property and its outsourcing web. The company will be brought down when it is discovered that the governing AI has surreptitiously created a vast self-dealing fraud, selling its products back to itself through an outsourcing network that is so complex as to be untraceable, except by another AI.

“Your spellchecker will still be terrible: AI will transform our world with breathtaking speed, and life in 2035 will be unrecognizable, but some things will remain beyond the abilities of even the most powerful of AIs. In 2035, you will still spend far too much time correcting the spelling ‘corrections’ inserted into your writing by over-eager spell-checkers. Legislation will be introduced requiring all software companies offering spell-checkers to include an off-switch.

“The bestseller of 2035: The best-selling book of 2035 will be ‘What Was Human’ and it will be written by an AI. Purchases by other AIs will vastly outnumber purchases by human readers. This is because by 2035 humans will have become so accustomed to AIs reading books for them and then reporting out a summary that most will no longer be able to read on their own.”


Eric Saund
Human Competence Will Atrophy; AIs Will Clash Like Gladiators in Law, Business and Politics; Religious Movements Will Worship Deity Avatars; Trust Will Be Bought and Sold

Eric Saund, an independent research scientist applying cognitive science and AI in conversational agents, visual perception and cognitive architecture, wrote, “Much of whatever people used to think was special about being human will have to be redefined. It sure won’t be ‘intelligence.’ Opportunities will abound to suffer crises of purpose and meaning, and conversely, demand will grow for psychological and social balms to make us feel okay. Here are three big trends for 2035:

Coming to Terms with Alien Minds – From early childhood, people develop a ‘theory of mind’ about the beliefs and motivations of other people, animals and – in some cultures – the natural world. Artificial Intelligence brings mind to machines. In the coming decade, folk theories of mind will grow overall more mature and sophisticated, yet also more fragmented and stratified.

Those who are culturally and intellectually motivated to learn about how AI ‘minds’ work will maintain mastery and agency. AI will become their skilled subordinates and collaborative partners.

“Most people, however, will wane into passive recipients of AI-mediated offerings, demands and impositions. Coping strategies will include conspiracy theories, superstitions, folklore, humor, the arts and widespread sharing of practical tips.

“‘Westworld’-type stories will proliferate. Overheard at the barber shop: ‘This morning Alexa told me not to over-toast my bagel. I was in a bad mood, so I told it to f___ off. Then my coffeepot wouldn’t turn on!’

Dependence on Active Cognitive Technologies – Human civilization has advanced first through leverage, then reliance, then dependence on technology. Few of us today could survive as hunter-gatherers, subsistence farmers or pre-industrial craftsmen. Increasingly, critical technologies have shifted from physical to cognitive – directed at knowledge sharing, calculation and the navigation of emerging natural and social environments.

“Heretofore, cognitive technology has been largely passive, with people themselves writing and reading the books and charting routes on the maps. AI brings us Active Cognitive Technology that can act independently, autonomously and proactively. The hope is that AI agents will serve us well, with expectations, relationships and rewards commensurate with what we get from other people. We will be rewarded, and we will be disappointed.

“Human competence will atrophy; AIs will clash like gladiators in law, business and politics; religious movements will worship deity avatars; trust will be bought and sold. Because they will be built under market forces, AIs will present themselves as helpful, instrumental and eventually indispensable. This dependence will allow human competence to atrophy. Like modern-day chess players, some people will practice everyday cognitive skills as hobbies, even as we are far-outmatched by our AI assistants and minders.

“To play serious roles in life and society, AIs cannot be values-neutral. They will sometimes appear to act cooperatively on our behalf, but at other times, by design, they will act in opposition to people, both individually and in groups. AI-brokered demands will not only dominate in any contest with mere humans but will oftentimes persuade us into accepting that they were right after all.

“And, as instructed by their individual, corporate and government owners, AI agents will act in opposition to one another as well. Negotiations will be delegated to AI specialists possessing superior knowledge and game-theoretic skills. Humans will struggle to interpret bewildering clashes among AI gladiators in business, law, and international conflict.

As AI companions gain credence and mindshare they will become soothsayers and pacifiers and also be adroit megaphones for resistors and instigators. Which messages are taken as propaganda versus speaking truth to power will be chaotically determined and ever-shifting. … After all, Big Brother was not a single human person but an avatar for the Party that won. Trust will supplant attention as the scarce resource to be seeded, harvested, nurtured and sold. Trust will give way to obedience. … As with smartphones today, the young will wonder how their ancestors ever managed without AI. And they will be helpless without it.

Human-AI Attachment Trades Off with Human-Human Detachment – When immediate physical needs are satisfied, the realities that matter to us most are intersubjective – stories and beliefs co-constructed among people. Human culture has refined the dynamics of commerce, fashion, comedy, drama and status into art forms that consume our everyday lives.

“AI advisors and companions are becoming a novel and uncanny class of interlocutor that will increasingly vie for people’s time, attention and allegiance.

  • The movie ‘Her’ will play out in real life at scale.
  • Religious movements will be fueled by offerings of personalized, faith-infused dialogues with the deity-avatar.
  • Human-AI dominance and abuse – in both directions – will become a topic of public ethics, morality and policy.
  • Affinity blocs will form among stripes of AI devotees, and among AI conscientious objectors.

“As AI companions gain credence and mindshare they will become soothsayers and pacifiers and also be adroit megaphones for resistors and instigators. Which messages are taken as propaganda versus speaking truth to power will be chaotically determined and ever-shifting.

“Every aspirant to political leadership will maintain layers of AI as well as human ambassadors. After all, George Orwell’s Big Brother was not a single human person, but an avatar for the Party that won. Sponsored AI counselors will arrive in our precarious enlightenment society with initial mandates to earn trust. Trust will supplant attention as the scarce resource to be seeded, nurtured, harvested and sold. Thence, trust will give way to obedience.

“Whether the techlash succeeds or fizzles will in large measure depend on the economic impacts of AI. People’s sense of well-being is not just a function of material resources, but also expectations. AI will magnify the power of institutions and unpredictable currents to whipsaw people’s self-evaluations of how they are doing.

“If techno-optimists prevail, babies born in 2035 will live charmed and protected lives – physically, psychologically and emotionally. As with smartphones today, the young will wonder how their ancestors ever managed without AI. And they will be helpless without it.”


Rabia Yasmeen
Humans Can Shift Their Focus From Deepening Their Intelligence to Achieving True Enlightenment in an Age in Which AI Handles Their Day-to-Day Needs

Rabia Yasmeen, a senior consultant for Euromonitor International based in Dubai, UAE, shared a potential 2035 scenario, writing, “It is 2035. Humans’ dependence on AI has redefined the essence of being human. Every human boasts a personalized AI assistant, and a stream of agentic workflows not only seamlessly handles 75% of the administration of their daily life but also co-creates their life goals and manages their lifestyles. From booking appointments and ordering groceries to sending heartfelt, automated messages to loved ones, these AI companions are ensuring life runs on autopilot.

Every human in 2035 has a digital twin… Humans are saving themselves from doing six hours of digital chores daily. That’s a game-changing 2,190 hours saved annually, equivalent to 91 full days of reclaimed time. Most people are embracing a lifestyle renaissance, channeling their energy into what truly matters to them. … A rise in human consciousness and deeper personal awareness is being achieved as humans reduce direct usage of digital devices and shift this energy to spiritual, emotional and experiential aspects of life. To say that humans have evolved from intelligence to enlightenment is one way to express this shift.

“Back in 2025, digital avatars were relatively new, with Gen Z’ers then developing their AI avatars for social profiles. However, over the past 10 years this trend has revolutionized social interactions, especially online. Every human in 2035 has a digital twin. Most choose to use it for social media; however, it has also gained roots in managing appearances at work. Today, many humans are leveraging AI-powered digital twins for delivering presentations and even having a one-on-one with their managers. ‘Out of office’ is not really a thing today, as AI assistants and digital twins are managing work needs and communications while humans are away from work. To say that AI is a close partner for most digitally connected humans is not a misstatement.

“Because their AI can stand in as a proxy to accomplish many life tasks, humans have been able to embrace all aspects of their fuller existence more deeply than ever before. When 75% of people’s daily life administration is managed by AI-powered assistants and agents, what is the result? Humans are saving themselves from doing six hours of digital chores daily. Tasks that once forced people to spend precious hours on smartphones and laptops in 2025 are delegated to efficient AI counterparts. That’s a game-changing 2,190 hours saved annually, equivalent to 91 full days of reclaimed time.

“Due to their newfound freedom, most people are embracing a lifestyle renaissance, channeling their energy into what truly matters to them: exploring the world, reconnecting with nature and cherishing time with family. The AI-powered era has not only streamlined life but it has also reignited humanity’s passion for the real, tangible experiences that make life meaningful. The most noteworthy development taking place as a result of this shift is the rise in the focus on and exploration of human consciousness and deeper universal connection. This ancient trait had been relatively dormant, but a rise in human consciousness and deeper personal awareness is being achieved as humans reduce direct usage of digital devices and shift this energy to spiritual, emotional and experiential aspects of life. To say that humans have evolved from intelligence to enlightenment is one way to express this shift.

The expanding interactions between humans and AI have resulted in a continuous reevaluation of core human traits, emphasizing adaptability, empathy and a sense of purpose. … All of this has not come without a price. Humans have become highly dependent on this technology, especially in areas of value generation for the economy. The agency of AI over value creation is a continued social and economic debate. … Global discourse is focused on the potential decentralization of AI systems to create better equality and opportunity for all as AI companies hold most of the economic and political power. However … the deeper integration of AI in human life has reached a point of no segregation.

“These changes have a profound impact on the social, economic and political landscape. There is greater focus in society on building up and developing the human skills that the literature termed ‘soft skills’ back in 2025. These are empathy, connection, listening, creativity and communication. As AI has taken on various responsibilities to manage tasks that require basic intelligence, humans are concentrating on exercising their soft skills, such as how to connect with other humans. Refining the human tasks performed by AI to fit human life and interactions has heightened humans’ awareness of their presence and led to greater exercise of more-intuitive human capabilities. The expanding interactions between humans and AI have resulted in a continuous reevaluation of core human traits, emphasizing adaptability, empathy and a sense of purpose.

“Because AGI has already been developed for general healthcare, most agents are highly specialized in offering medical assistance. AI agents join senior surgeons in surgeries. Due to this development, in 2034 doctors reported a 40% increase in finding donor matches and completing successful organ transplants.

“Over the last decade, AIs have become humans’ closest companions and confidants. While mental health challenges were high due to complex environments in 2025, humans have since used AI platforms to access individualized counselling and therapy. AI platforms have also helped improve human cognitive and emotional development.

“All of this has not come without a price. As AI has been used to improve lives, foster creativity and help mitigate global challenges, humans have become highly dependent on this technology, especially in areas of value generation for the economy. Technology and economic experts continue to predict unforeseen developments that may lead to the breakdown of today’s widespread digitally crafted economic system. The agency of AI systems over value creation is a continued social and economic debate. The most-advanced countries continue to reap most of the economic benefits of technology.

“While the economic gap between developing and developed countries has decreased somewhat due to the implementation of AI systems, AI has had less impact on those economies because of lower literacy rates and higher unemployment rates in many developing countries. These countries have been able to harness some of the exponential benefit of AI systems to improve services; however, they still lack the controls and infrastructure to manage this change.

“Much global discourse in 2035 has been focused on the potential decentralization of AI systems to create better equality and opportunity for all. As AI now holds substantial human data on personal, business and political fronts, AI companies hold most of the economic and political power. However, it may be too late to change. Incidents tied to privacy violations, the distribution of misinformation and digital fraud are at their peak in human history. Humans are dependent on AI to establish safety nets and measures to mitigate these risks. The technology is the universal resource at the forefront of managing political, social and economic developments. In essence, the deeper integration of AI in human life has reached a point of no segregation.”


David Weinberger
On the Positive Side, AIs Will Help Humans Really See the World, Teach Us About Ourselves, Help Us Discover New Truths and – Ideally – Inspire Us to Explore in New Ways

David Weinberger, senior researcher and fellow at Harvard University’s Berkman Klein Center for Internet & Society, wrote, “I choose to spell out a positive vision about the possible impact of AI on humans because there is already a lot of negative commentary – much of which I agree with. Still, I think we can hope that the changed way AI helps humans see the world will be in valuing the particulars and the truths that AI and machine learning unearth. That will stand in contrast to humans’ longstanding efforts to try to create general truths, laws and principles.

“General ‘laws’ humans have theorized about the universe teach us a lot. But they can be imprecise and inaccurate because they don’t account for the wild mass of particulars that also point to truth. We humans don’t have the capacity to ‘see’ all the particulars, but AI does.

AI/machine learning tools are better equipped than humans to discover previously hidden aspects of the way the world works. … They ‘see’ things that we cannot. … That is a powerful new way to discover truth. The question is whether these new AI tools of discovery will galvanize humans or demoralize them. Some of the things I think will be in play because of the rise of AI: our understanding of free will, creativity, knowledge, fairness and larger issues of morality, the nature of causality, and, ultimately, reality itself.

“Here’s an example: In 2022, researchers discovered they could predict heart attacks with surprising accuracy after running a small data set of retinal scans through an AI analysis system. The predictive power of these simple retinal tests was unexpected, and often better than that of established tests.

“We don’t know exactly why that is, but the correlations are strong. A machine system designed to look for patterns figured it out without being told to hunt for a specific thing about the causes of heart attacks. Artificial intelligence used this way turns out to be much more capable than humans at discovering previously hidden aspects of the way the world works. In short, there is truth in the particulars, and AI/machine learning tools are better equipped than we humans are to discover that reality. AI tools let the particulars speak. They ‘see’ things that we cannot and do so in a way that generalizations don’t. That is a huge insight and a powerful new way to discover truth.

“Now, the question is whether these new AI tools of discovery will galvanize humans or demoralize them. The answer is probably both. But I’m going to focus on the positive possibilities. I’m convinced this new method of learning from particulars offers us a chance to rethink some of the fundamental ways we understand ourselves. Here are some of the things I think will be in play because of the rise of AI: our understanding of free will, creativity, knowledge, fairness and larger issues of morality, the nature of causality, and, ultimately, reality itself.

“Why can we reimagine all those aspects of life? Because our prior understanding of them is tied to the limits of our brains. Humans can only think about things in a small number of dimensions before problems get too complex. On the other hand, AI can effectively function in countless multidimensional ways with an insane number of variables. That means they can retain particulars in ways we can’t in order to gain insights.

One idea that could come back in this age of AI is the notion of causal pluralism. Machine learning can do a better job predicting some causal incidents because it doesn’t think it’s looking for causes. It’s looking for correlations and relationships. This can help us think of things more often in complex, multidimensional ways. … I am opting for a very optimistic view that machine learning can reveal things that we have not seen during the millennia we have been looking upwards for eternal universals. I hope they will inspire us to look down for particulars that can be equally, maybe even more, enlightening.

“Let’s look at how that might change the way we think about causality. Philosophers have argued for millennia about this. But most people have a common idea of causality. It’s easy to explain cause and effect when a cue ball hits an eight ball.

“For lots of things, though, there really can be multiple, reasonable explanations of the ‘cause’ for something to happen. One idea that could come back in this age of AI is the notion of causal pluralism. Machine learning can do a better job predicting some causal incidents because it doesn’t think it’s looking for causes. It’s looking for correlations and relationships. This can help us think of things more often in complex, multidimensional ways.

“Another example can be seen in the ways AI and machine learning might help humans advance creativity and teach us about it. Many creative people will tell you that when they are creating they are in a flow state. They did not start the creative process with a perfectly clear idea of where they’re going. They take an action – play a note, write a word or phrase, apply a paint brush or … my favorite example … chip away at the rock because the figure to be sculpted is already in the stone and just ‘waiting to be released.’ Every time they take that next step they open up a new field of possibility for the next word or the next brush stroke. Each step changes the state of the thing.

“That’s pretty much exactly how AI systems operate and try to improve themselves. AI systems are able to do this kind of ‘creative work’ because they have a multi-dimensional map – a model of how words go together statistically. The AI doesn’t know sadness or beauty or joy. But if you ask it to write lyrics, it will probably do a pretty good job. It reflects our culture and also expands the field of possibility for us.

“Ultimately, I am especially interested in ways in which this new technology lights up the world and gives us insights that are enriching and true. Of course, there’s no great reason to think that will happen. Computers have lit the world in ways that are both beautifully true and also demeaning. But I am opting for a very optimistic view that machine learning can reveal things that we have not seen during the millennia we have been looking upwards for eternal universals. I hope they will inspire us to look down for particulars that can be equally, maybe even more, enlightening.”


The next section of Part I features the following essays:

Tracey Follows: ‘Authenticity is de facto dead’: Change could lead to multiplicity of the self,
one-way relationships, and isolation through personalized ‘realities.’

Giacomo Mazzone: Expect more isolation and polarization, a loss of cognitive depth, a rise in uncertainty as ‘facts’ and ‘truth’ are muddled. This will undermine our capacity for moral judgment.

Nell Watson: Supernormal stimuli engineered to intensely trigger humans’ psychological responses and individually calibrated AI companions will profoundly reshape human experience.

Anil Seth: Dangers arise as AI becomes humanlike. How do we retain a sense of human dignity? They will become self-aware and the ‘inner lights of consciousness will come on for them.’

Danil Mikhailov: Respect for human expertise and authority will be undermined, trust destroyed, and utility will displace ‘truth’ at a time when mass unemployment decimates identity and security.


Tracey Follows
‘Authenticity is de facto dead’: Change Could Lead to Multiplicity of the Self, One-Way Relationships and Isolation Through Personalized ‘Realities’

Tracey Follows, CEO of Futuremade, a leading UK-based strategic consultancy, wrote, “In my work as a professional futurist, I have developed a number of futures scenarios and emerging-future personas. The following list highlights some of the specific trends that I see emerging from today’s thinking about the implications of AI on human essence, human behaviour and human relationships. Essentially, these are among the likely societal and personal shifts by 2035.

  • Database Selves: Trends like ‘Database Selves’ and ‘Artificial Identity’ show that AI will enable us to construct and manage multiple digital personas, tailored to different contexts online. While this offers unprecedented flexibility in self-expression and a kind of multiplicity of the self, it also risks fragmenting the core sense of identity, leaving people grappling with the question: Who am I, really?
  • Outsourced Empathy: With ‘agent-based altruism,’ AI may take over acts of kindness, emotional support, caregiving and charity fundraising. While this could address gaps in human connection and help initiate action, especially in caregiving fields where human workers are scarce, it risks dehumanising relationships and outsourcing empathy and compassion to algorithms. I am quite sure that human interactions could become more transactional as we increasingly outsource empathy to machines.

AI’s ability to curate everything – from entertainment to social connections – could lead to highly personalized but isolated ‘realities.’ This is a trend I call the rise of ‘Citizen Zero,’ where people are living only in the present: disconnected from a shared past, not striving toward any common vision of a future. Human interactions may become more insular, as we retreat into algorithmically optimized echo chambers.

  • Isolated Worlds: AI’s ability to curate everything – from entertainment to social connections – could lead to highly personalized but isolated ‘realities.’ This is a trend I call the rise of ‘Citizen Zero,’ where people are living only in the present: disconnected from a shared past, not striving toward any common vision of a future. Human interactions may become more insular, as we retreat into algorithmically optimized echo chambers. And as we already know, millions of pages of research, footnotes and opinion are disappearing daily from the internet while the tech platforms reach into our phones and erase photos or messages whenever they want – perhaps even without our knowledge – and AI is only going to make that more scalable.
  • Parasocial Life: AI companions, deepfake personas and virtual interactions blur the boundaries between real and artificial connections. As ‘Parasocial Life’ (one-way relationships) becomes the norm, humans may form emotional attachments to AI personas and influencers. This raises concerns about whether authentic, reciprocal relationships will be sidelined in favor of more predictable, controllable digital connections where people can programme their partnerships in whatever way they prefer. Personal growth becomes impossible.

Humans could become over-reliant on systems we barely understand – and outcomes we have no control over… This dependence raises existential concerns about autonomy, resilience and what happens when systems fail or are manipulated, and in cases of mistaken identity and punishment in a surveillance society. The concept of the ‘real’ self may diminish in a world where AI curates identities through agents. … Authenticity is de facto dead.

  • Dependency on AI Systems: With AI increasingly embedded in everything from personal decision-making to public services from health to transport and everything in between (the ‘digital public infrastructure’), humans could become over-reliant on systems we barely understand – and outcomes we have no control over – for example on insurance claims or mortgage applications. This dependence on opaque systems raises existential concerns about autonomy, resilience and what happens when systems fail or are manipulated, and in cases of mistaken identity and punishment in a surveillance society. It undermines authentic human intelligence unmediated by AI.
  • The Loss of Authenticity: ‘Authenticity RIP’ is a trend that suggests the concept of the ‘real’ self may diminish in a world where AI curates identities through agents that guide content, contracts and relationships. In fact, ‘authenticity’ is not a standard that will apply in an AI world at all – a world of clones and copies. Authenticity is de facto dead. As we saw recently, Sam Altman’s ‘World’ project wants to link AI agents to people’s personas, letting other users verify that an agent is acting on a person’s behalf. We can conjecture that all of this could lead to a counter-movement or AI backlash, where people seek analogue experiences and genuine interactions off-grid to reclaim their humanity. I expect this to develop as a specific trend amongst Generation B (born 2025-onwards).”


Giacomo Mazzone
Expect More Isolation and Polarization, a Loss of Cognitive Depth, a Rise in Uncertainty as ‘Facts’ and ‘Truth’ Are Muddled. This Will Undermine Our Capacity for Moral Judgment

Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, wrote, “I see four main impacts of artificial intelligence on digitally connected people’s daily lives. In brief, they are: the loss of mental capacities; the reduction of social interactions with other humans; the reduction of the ability to distinguish true from false; and the deepening of social divides between countries and, within each country, between the ‘connected’ and the ‘unconnected.’ I will explain the four in more detail.

Memory, numeracy, organizational capabilities, moral judgment – all of these will be diminished. AI will be tasked to remember for us. It will keep track of everything. We just respond as it tells us to. … The automation of tasks is already impacting society due to the reduction in previously necessary personal interaction. Social skills and confidence are lost when they are not practiced regularly. … AI will be used by many people to take shortcuts to making moral and ethical decisions while leaving them in the dark about how those decisions are made.

One: Loss of cognitive capacities and skills in fields in which AI outperforms humans
Just as the pocket calculator has resulted in the weakening of people’s mathematical calculation capacities, we have to expect that the same will happen in future to other human abilities in the age of AI. There is more proof: GPS navigation has weakened humans’ sense of orientation, and the infotainment and gaming spaces of the internet have reduced people’s willingness to seek out facts on issues and develop the knowledge necessary for everyone to work together to contribute to a healthy society.

“Memory, numeracy, organizational capabilities, moral judgment – all of these will be diminished. AI will be tasked to remember for us. It will keep track of everything, from our daily events agenda to the work to be done. We just respond as it tells us to. Numeracy will no longer be considered a necessary human skill because AI will autonomously execute even complex operations such as statistics and calculation of probabilities and make data-based decisions for us without needing to ‘show the math.’

“And we will not need to strategize in order to organize our lives because AI will be faster and more accurate than us in organizing our spaces, our agenda, our planning, our strategies, our communication with others. All of this is likely to result in the diminishment of our capacity for moral judgment. AI will be used by many people to take shortcuts to making moral and ethical decisions while leaving them in the dark about how those decisions are made.

AI is already leading to the fragmentation and dehumanization of work. Just as industrial jobs done by robots are broken down into step-by-step automatable tasks, intellectual and creative work is being programmed and assigned to AIs. The work of Uber drivers is already time-regulated, controlled and coordinated by an algorithm, with no humans in the loop. … We don’t need to get out in the world and interact with others anymore. … We can expect to see more and more people suffering from agoraphobia.

Two: Reduction of social interactions
AI is already leading to the fragmentation and dehumanization of work. Just as industrial jobs done by robots are broken down into step-by-step automatable tasks, intellectual and creative work is being programmed and assigned to AIs. The work of Uber drivers is already time-regulated, controlled and coordinated by an algorithm, with no humans in the loop. The automation of tasks is already impacting society due to the reduction in previously necessary personal interaction. Social skills and confidence are lost when they are not practiced regularly.

“Education and learning processes are being automated, individualized and tailor-made based on individual students’ needs. People no longer need to gather with others in real-world social settings under the supervision of a teacher, a human guide, to gain knowledge and social proof that they have met requirements.

“We don’t need to get out in the world and interact with others anymore. Shopping is totally different. Most time spent seeking products, learning about them and making purchases today is generally done online. Movie-going, which previously required investing time in traveling to a cinema and gathering with others in a real-world social setting, has been replaced by the bingeing of entertainment at home in front of a giant networked television in the living room.

“Big public events and spectacles may survive in 2035, but we can expect to see more and more people suffering from agoraphobia. ‘Hikikomori’ – severe social withdrawal – has been recognized as a growing problem in Japan over the last decade. It could soon become more common in all connected countries. The realm of emotional relationships such as those leading to romance and finding life partners and celebrating and supporting family and close friends has long been colonized by algorithms. Couples don’t meet in church or spend most of their dating time together in real-world social settings. And the celebration of loved ones who have passed away, plus many other such deeply emotional occasions, is being carried out virtually instead of in person.

“More and more of the activities of humans’ intermediary bodies – political parties, trade unions, professional associations and social movements – have been replaced by virtual experiences that somehow meet their goals, such as online campaigns to support this or that objective, crowdfunding, ‘likes’ campaigns and the use of ‘influencers.’ The disappearance of face-to-face human gatherings like these will complete the frame and accelerate this process.

What happens to society when there is no more commonly shared truth? When the ‘news and information’ the public receives … is no longer based on true facts but instead we see fake news or unfounded opinions used to shape perceptions to achieve manipulation of outcomes? … A primary sub-consequence of all of the change in human perception and cognition could be the reduction of the capacity for moral judgment. When every ‘fact’ is relativized and open to doubt the capacity for indignation is likely to be reduced.

Three: Reduction of the ability to distinguish true from false
One of the most important concerns is the loss of factual, trusted, commonly shared human knowledge. The digital disruption of society’s institution-provided foundational knowledge – the diminishment of the 20th century’s best scientific research, newspapers, news magazines, TV and radio news gathered and presented to the broader public by reputable organizations for example – is the result of algorithmic manipulation of the public’s interest by social media and other ML and AI platforms. These information platforms are built to entertain and manipulate people for marketing and profit and are rife with misinformation and disinformation. Gone is the commonly shared ‘electronic agora’ that characterized the 20th century.

“The ‘personalized media’ enabled by ML and AI leads to filter bubbles and social polarization. It allows tech companies to monetize the attention and personal data of each person using their platforms. It allows anyone anywhere to spread persuasive, often misleading information or lies, into the social stream in order to influence an election, to kill an idea, to create a movement to sway public opinion in favor of a trend and to create public scapegoats.

A primary sub-consequence of all of the change in humans’ perception and cognition could be the reduction of the capacity for moral judgment. When every ‘fact’ is relativized and open to doubt the capacity for indignation is likely to be reduced. There are no examples in human history of societies that have survived in the absence of shared truth for too long.

“All modern democracies have been built around commonly shared truths about which everybody can have and express different opinions. What happens to society when there is no more commonly shared truth? Already today, most of the most widely viewed ‘news and information’ the public sees about climate change, pandemics, nation-state disagreements, regulation, elections and so on is no longer based on true facts. Instead we see fake news or unfounded opinions used to shape perceptions and manipulate outcomes. The use of AI for deepfakes and more will accelerate this process. This destructive trend could be irreversible because strong financial and political interests profit from it in many ways.

“A primary sub-consequence of all of the change in humans’ perception and cognition could be the reduction of the capacity for moral judgment. When every ‘fact’ is relativized and open to doubt the capacity for indignation is likely to be reduced. There are no examples in human history of societies that have survived in the absence of shared truth for too long.

Four: A deepening of social divides
The AI revolution will not affect all of the people in all the regions and countries of the world in the same way. Some will be left far behind because they are too poor, because they lack the skills or because they do not have the necessary human, technological and financial resources. This will deepen the already dramatic digital divide.

“AI will, in fact, present enormous possibilities for our lives. People everywhere will have the opportunity to use ready-made tools that simply incorporate AI in operating-system updates to mobile phones and in search engines, financial-services apps and so forth. We will create AI applications adapted to particular fields of work, research and performance. But, at least at first, by far the greatest majority of humans – even in some of the more-developed societies – will not have the tools, the skills, the ability or the desire to tap into AI to serve their needs. By 2035 it is likely that only a minority of people in the world will be able to exponentially improve their own performance through AI.”



Nell Watson
Supernormal Stimuli Engineered to Intensely Trigger Humans’ Psychological Responses and Individually Calibrated AI Companions Will Profoundly Reshape Human Experience

Nell Watson, president of EURAIO, the European Responsible Artificial Intelligence Office and an AI Ethics expert with IEEE, wrote, “By 2035, the integration of AI into daily life will profoundly reshape human experience through increasingly sophisticated supernormal stimuli – artificial experiences engineered to trigger human psychological responses more intensely than natural ones. And, just as social media algorithms already exploit human attention mechanisms, future AI companions will offer relationships perfectly calibrated to individual psychological needs, potentially overshadowing authentic human connections that require compromise and effort.

“These supernormal stimuli will extend beyond social relationships. AI-driven entertainment, virtual worlds and personalized content will provide peak experiences that make unaugmented reality feel dull by comparison. There are many more likely changes that are worrisome:

Most concerning is the potential dampening of human drive and ambition. Why strive for difficult achievements when AI can provide simulated success and satisfaction? … The key challenge will be managing the seductive power of AI-driven supernormal stimuli while harnessing their benefits. Without careful development and regulation these artificial experiences could override natural human drives and relationships, fundamentally altering what it means to be human. This trajectory demands proactive governance to ensure AI enhances rather than diminishes human potential.

  • “Virtual pets and AI human offspring may offer the emotional rewards of caregiving without the challenges of the real versions.
  • “AI romantic partners will provide idealized relationships that make human partnerships seem unnecessarily difficult.
  • “The workplace will be transformed as AI systems take over cognitive and creative tasks. This promises efficiency but risks reducing human agency, confidence and capability.
  • “Economic participation will become increasingly controlled by AI platforms, potentially threatening individual autonomy in financial and social spheres.
  • “Basic skills in arithmetic, navigation and memory are likely to be diminished through AI dependence.
  • “But most concerning is the potential dampening of human drive and ambition – why strive for difficult achievements when AI can provide simulated success and satisfaction?

“Core human traits obviously face significant pressure from these developments. Human agency will be eroded as AI systems become increasingly adept at predicting and influencing behavior. However, positive outcomes remain possible through careful development focused on augmenting rather than replacing human capabilities. AI could enhance human self-understanding, augment creativity through collaboration and free people to focus on meaningful work beyond routine tasks. Success requires preserving human agency, authentic relationships and inclusive economic systems.

“The key challenge will be managing the seductive power of AI-driven supernormal stimuli while harnessing their benefits. Without careful development and regulation, these artificial experiences could override natural human drives and relationships, fundamentally altering what it means to be human. The impact on human nature isn’t inevitable but will be shaped by how we choose to develop and integrate AI into society. This trajectory demands proactive governance to ensure AI enhances rather than diminishes human potential. By 2035, the human experience will likely be radically transformed – the question is whether we can maintain our most essential human characteristics while benefiting from unprecedented technological capabilities.”



Anil Seth
Dangers Arise as AI Becomes Humanlike. How Do We Retain a Sense of Human Dignity? They Will Become Self-Aware and the ‘Inner Lights of Consciousness Will Come On for Them’

Anil Seth, director of the Centre for Consciousness Science and professor of cognitive and computational neuroscience at the University of Sussex, UK, and author of Being You: A New Science of Consciousness, wrote, “AI large language models [LLMs] are not actually intelligences; they are information-retrieval tools. As such they are astonishing but also fundamentally limited and even flawed. Basically, the hallucinations generated by LLMs are never going away. If you think that buggy search engines fundamentally change humanity, well, you have a weird notion of ‘fundamental.’

These systems already exceed human cognition in certain domains and will keep getting better. There will be disruption that makes humans redundant in some ways. It will transform a lot, including much of human labor. … How do we retain a sense of human dignity in this situation? … [Beyond that] with ‘conscious’ AI things get a lot more challenging since these systems will have their own interests rather than just the interests humans give them. … The dawn of ‘conscious’ machines … might flicker into existence in innumerable server farms at the click of a mouse.

“Still, it is indisputable that these systems already exceed human cognition in certain domains and will keep getting better. There will be disruption that makes humans redundant in some ways. It will transform a lot, including much of human labor.

“The deeper and urgent question is: How do we retain a sense of human dignity in this situation? AI can become human-like on the inside as well as on the outside. When AI gets to the point of being super good, ethical issues become paramount.

“I have written in Nautilus about this. Being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is rooted in the fundamental biological drive within living organisms to keep on living. The distinction between consciousness and intelligence is important because many in and around the AI community assume that consciousness is just a function of intelligence: that as machines become smarter, there will come a point at which they also become aware – at which the inner lights of consciousness come on for them.

“There are two main reasons why creating artificial ‘consciousness,’ whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With ‘conscious’ AI, things get a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

“The second reason is even more disquieting: The dawn of ‘conscious’ machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

Future language models won’t be so easy to catch out. They may give us the seamless and impenetrable impression of understanding and knowing things, regardless of whether they do. As this happens, we may also become unable to avoid attributing consciousness to them, too, suckered in by our anthropomorphic bias and our inbuilt inclination to associate intelligence with awareness.

“Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism – putting ourselves at the center of everything – and anthropomorphism – projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

“Future language models won’t be so easy to catch out. They may give us the seamless and impenetrable impression of understanding and knowing things, regardless of whether they do. As this happens, we may also become unable to avoid attributing consciousness to them, too, suckered in by our anthropomorphic bias and our inbuilt inclination to associate intelligence with awareness.

“Systems like this will pass the so-called Garland test, an idea that has passed into philosophy from Alex Garland’s perspicuous and beautiful film ‘Ex Machina.’ This test reframes the classic Turing test – usually considered a test of machine intelligence – as a test of what it would take for a human to feel that a machine is conscious, even given the knowledge that it is a machine. AI systems that pass the Garland test will subject us to a kind of cognitive illusion, much like simple visual illusions in which we cannot help seeing things in a particular way, even though we know the reality is different.

Accelerated research is needed in social sciences and the humanities to clarify the implications of machines that merely seem conscious. And AI research should continue, too, both to aid in our attempts to understand biological consciousness and to create socially positive AI. We need to walk the line between benefiting from the many functions that consciousness offers while avoiding the pitfalls. Perhaps future AI systems could be more like oracles, as the AI expert Yoshua Bengio has suggested: systems that help us understand the world and answer our questions as truthfully as possible, without having goals – or selves – of their own.

“This will land society in dangerous new territory. Our ethical attitudes will become contorted as well. When we feel that something is conscious – and conscious like us – we will come to care about it. We might value its supposed well-being above other actually conscious creatures such as non-human animals. Or perhaps the opposite will happen. We may learn to treat these systems as lacking consciousness, even though we still feel they are conscious. Then we might end up treating them like slaves – inuring ourselves to the perceived suffering of others. Scenarios like these have been best explored in science-fiction series such as ‘Westworld,’ where things don’t turn out very well for anyone.

“In short, trouble is on the way whether emerging AI merely seems conscious or actually is conscious. We need to think carefully about both possibilities, while being careful not to conflate them.

“Accelerated research is needed in social sciences and the humanities to clarify the implications of machines that merely seem conscious. And AI research should continue, too, both to aid in our attempts to understand biological consciousness and to create socially positive AI. We need to walk the line between benefiting from the many functions that consciousness offers while avoiding the pitfalls. Perhaps future AI systems could be more like oracles, as the AI expert Yoshua Bengio has suggested: systems that help us understand the world and answer our questions as truthfully as possible, without having goals – or selves – of their own.”


Danil Mikhailov
Respect for Human Expertise and Authority Will Be Undermined, Trust Destroyed, and Utility Will Displace ‘Truth’ at a Time When Mass Unemployment Decimates Identity and Security

Danil Mikhailov, director of DataDotOrg and trustee at 360Giving, wrote, “It seems clear from the vantage point of 2025 that AI will be not just a once-in-a-generation but a once-in-a-hundred years transformative technology, on a par with the introduction of computers, electricity or steam power in the scale of its impact on human societies.

“By 2035 I expect it to fully penetrate and transform the vast majority of our industrial sectors, both destroying jobs and creating new jobs on an enormous scale. The issue for most individual human beings will be how to adapt and learn new skills that enable them to live and work side-by-side with AI agents. As some lose their jobs and are left behind, others will experience huge increases in productivity, benefits and creative potential. Sectors such as biomedicine, material sciences and energy will be transformed, unlocking huge latent potential.

“The issue for corporations and governments will be how to manage the asymmetry of the transition. During previous industrial revolutions, although eventually more jobs were created than destroyed and economies expanded, the transition took a number of decades, during which a whole generation of workers fell out of the economy, with ensuing social tensions.

“If you were a Luddite out there breaking steam-powered looms in the early 19th century in England to protest industrialization, telling you that there will be more jobs in 20 years’ time for the next generation did not help you feed your family in the here and now. The introduction of AI is likely to cause similar inequities and will increase social tensions, if not managed proactively and systemically. This is particularly so because of the likely vast gulf in experience of the effects of AI between the winners and losers of its industrial and societal transformation.

As the majority of information humans consume on a daily basis becomes at least augmented by if not completely created by AI, the prevailing assumption will be that everything could be fake, everything is subjective. … Social tensions caused by losses of jobs and identity for some while others prosper, coupled with the reversal of Enlightenment ways of thinking and the new dominance of utility over truth may feed off each other in generating waves of misinformation and disinformation that will risk an acute crisis of governance in our societies just as the promised fruits of AI in terms of new drugs, new energy and new materials are tantalisingly within reach.

“In a parallel change at a more fundamental level, AI will upend the Enlightenment consensus and trust in the integrity of the human-expert-led knowledge production process and fatally undermine the authority of experts of any kind, whether scientists, lawyers, analysts, accountants or government officials. As the majority of information humans consume on a daily basis becomes at least augmented by, if not completely created by, AI, the prevailing assumption will be that everything could be fake, everything is subjective. This will undermine the belief in the possibility or even desirability of ‘objective’ truth and the value of its pursuit. The only yardstick to judge any given piece of information in this world will be how useful it proves in that moment to help an individual achieve their goal.

“AI will lead society 350 years back into an age of correlative, rather than causal, thinking. Data patterns and the ability to usefully exploit them will be prioritised over the need to fully understand them and what caused them. These two parallel processes – social tensions caused by losses of jobs and identity for some while others prosper, and the reversal of Enlightenment ways of thinking with the new dominance of utility over truth – may feed off each other, generating waves of misinformation and disinformation that will risk an acute crisis of governance in our societies just as the promised fruits of AI in terms of new drugs, new energy and new materials are tantalisingly within reach.

“Resolving such a crisis may need a new, post-Enlightenment accommodation that accepts that human beings are far less ‘individual’ than we like to imagine, that we were enmeshed as inter-dependent nodes in (mis)information systems long before the Internet was invented, that we are less thinking entities than acting and reacting ones, that knowledge has never been as objective as it seemed and it never will seem like that again, and that maybe all we have are patterns that we need to navigate together to reach our goals.”


This section of Part I features the following essays:

Alexandra Samuel: The future could be astonishing, inspiring and beautiful if humans co-evolve with open, ethical AI; that vision for 2035 can’t be achieved without change.

Dave Edwards: We can be transformed if the integration of synthetic and organic intelligence serves human flourishing in all its unpredictable, creative and collective forms.

David Brin: ‘Huh! Maybe we should choose to create a flattened order of reciprocally accountable beings in the kind of society that discovers its own errors.’

Riel Miller: ‘Tools are tools.’ This is as true as ever now and will be in the future; ‘intelligent’ AI systems will have no impact on the characteristics of humans’ sociohistorical context.

Amy Zalman: ‘We need to have the courage to establish human values in code, ethical precepts, policy and regulation.’


Alexandra Samuel
The Future Could Be Astonishing, Inspiring and Beautiful If Humans Co-Evolve With Open, Ethical AI; That Vision for 2035 Can’t Be Achieved Without Change

Alexandra Samuel, data journalist, speaker, author and co-founder and principal at Social Signal, wrote, “If humans embrace AI as a source of change and challenge and we open ourselves to fundamental questions about the nature of thinking and the boundary between human and machine, AI could enable a vast expansion of human capacity and creativity. Right now, that feels unlikely for reasons that are economic, social and political, more than technological.

“If those obstacles are lifted, people with the time, money and tech confidence to explore AI in a non-linear way instead of for narrowly constructed productivity gains or immediate problem-solving can achieve great things. Their use of AI will not only accelerate work and open entirely new fields of endeavor, but it will enable ways of thinking, creating and collaborating that we are only beginning to imagine. It could even possibly deepen the qualities of compassion, creativity and connection that sit at the heart of what we consider human.


“Only a small percentage of the 8 billion people on Earth will be co-evolving with AI, extending how they think and create and experience the world in ways we can just begin to see. What this means is that there will be a great bifurcation in human experience and our very notion of humanity, likely even wider than what we’ve experienced over the past 50 years of digital life and 20 years of social media.

“Some of the change will be astonishing and inspiring and beautiful and creative: Artists creating entirely new forms of art, conversations that fluidly weave together ideas and contributions from people who would previously have talked past one another, scientists solving problems they previously couldn’t name. Some of it will be just as staggering but in ways that are deeply troubling: New AI-enabled forms of human commodification, thinkers who merge with AI decision-making to the point of abdicating their personal accountability and people being terrible in ways that we can’t imagine from here.

“However, the way generative AI has entered our workplaces and culture so far makes this hopeful path seem like an edge case. Right now, we’re heading towards a world of AI in which human thinking becomes ever more conventional and complacent. Used straight from the box, AIs operate in servant mode, providing affirmation and agreement and attempting to solve whatever problem is posed without questioning how that problem has been framed or whether it’s worth solving. They constrain us to context windows that prevent iterative learning, and often provide only limited, technically demanding opportunities to loop from one conversation into the next, which is essential if both we and the AIs are to learn from one another.


“As long as the path of AI is driven primarily by market forces there is little incentive to challenge users in the uncomfortable ways that drive real growth; indeed, the economic and social impacts of AI are fast creating a world of even greater uncertainty. That uncertainty, and the fear that comes with it, will only inhibit the human ability to take risks or sit with the discomfort of AIs that challenge our assumptions about what is essentially human.

“We can still make a world in which AI calls forth our better natures, but the window is closing fast. It took well over a decade for conversations about the intentional and healthy use of social media to reach more than a small set of Internet users, and by then, a lot of dysfunctional habits and socially counterproductive algorithms were well embedded in our daily lives and in our platforms.

“AI adoption has moved much faster, so we need to move much more quickly towards tools and practices that turn each encounter with AI into a meaningful opportunity for growth, rather than an echo chamber of one.

“To ensure that AI doesn’t replicate and exacerbate the worst outcomes of social media, tech companies need to create tools that enable cumulative knowledge development at an individual as well as an organizational level and develop models that are more receptive to requests for challenge. Policymakers and employers can create the safety that’s conducive to growth by establishing frameworks for individual control and self-determination when it comes to the digital trail left by our AI interactions, so that employees can engage in self-reflection or true innovation without innovating themselves out of a job.

“Teachers and educational institutions can seize the opportunity to create new models of learning that teach critical thinking not by requiring that students abstain from AI use, but by asking them to use the AI to challenge conventional thinking or rote work. People should invent their own ways of working with AI to embrace it as a way to think more deeply and evolve our own humanity, not as a way to abdicate the burden of thinking or feeling.

“I wish I felt more hopeful that businesses, institutions and people would take this approach! Instead, so many of AI’s most thoughtful critics are avoiding the whole mess – quite understandably, because this is an utterly terrifying moment in which the path of AI feels so unpredictable and uncontrollable. It is also a moment when it’s so incredibly interesting to see what’s possible today and what comes next.

“Finding the inner resources to explore the edge of possibility without falling into a chasm of existential terror, well, that’s the real challenge of the moment and it’s one that the AIs can’t yet solve.”


Dave Edwards
We Can Be Transformed If the Integration of Synthetic and Organic Intelligence Serves Human Flourishing in All its Unpredictable, Creative and Collective Forms

Dave Edwards, co-founder of the Artificiality Institute, which seeks to activate the collective intelligence of humans and AI, wrote, “By 2035, the essential nature of human experience will be transformed not through the transcendence of our biology, but through an unprecedented integration with synthetic systems that participate in creating meaning and understanding. This transformation – what my institute refers to as The Artificiality – progresses through distinct phases, from information to computation, computation to agency, agency to intelligence and ultimately to a new form of distributed consciousness that challenges our traditional notions of human experience and autonomy.

“The evolution of technology from computational tools to cognitive partners marks a significant shift in human-machine relations. Where early digital systems operated through explicit instruction – precise commands that yielded predictable results – modern AI systems operate through inference of intent, learning to anticipate and act upon our needs in ways that transcend direct commands. This transition fundamentally reshapes core human behaviors, from problem-solving to creativity, as our cognitive processes extend beyond biological boundaries to incorporate machine interpretation and understanding.

“The emergence of the ‘knowledge-ome’ – an ecosystem where human and machine intelligence coexist and co-evolve – transforms not just how we access information, but how we create understanding itself. AI systems reveal patterns and possibilities beyond human perception, expanding our collective intelligence while potentially diminishing our role in meaning-making. This capability forces us to confront a paradox: as machines enhance our ability to understand complex systems, we risk losing touch with the human-scale understanding that gives knowledge its context and value.

“This partnership manifests most prominently in what we might call the intimacy economy – a transformation of social and economic life where we trade deep personal context with AI systems in exchange for enhanced capabilities. The effectiveness of these systems depends on knowing us intimately, creating an unprecedented dynamic where trust becomes the foundational metric of human-AI interaction.

“This intimacy carries fundamental risks. Just as the attention economy fractured our focus into tradeable commodities, the intimacy economy threatens to mine and commodify our most personal selves. The promise of personalized support and enhanced decision-making must be weighed against the perils of surveillance capitalism, where our intimate understanding becomes another extractable resource.

“The datafication of experience presents particular challenges to human agency and collective action. As decision-making distributes across human-AI networks, we confront not just practical but phenomenological questions about the nature of human experience itself. Our traditional mechanisms of judgment and intuition – evolved for embodied, contextual understanding – may fail when confronting machine-scale complexity. This creates a core tension between lived experience and algorithmic interpretation. The commodification of personal experience by technology companies threatens to reduce human lives to predictable patterns, mining our intimacy for profit rather than serving human flourishing. We risk eliminating the unplanned spaces where humans traditionally come together to build shared visions and tackle collective challenges.

“Yet this transformation need not culminate in extraction and diminishment. We might instead envision AI systems as true ‘minds for our minds’ – not in the surveillant sense of the intimacy economy, but as genuine partners in human flourishing. This vision transcends mere technological capability, suggesting a philosophical reimagining of human-machine relationships. Where the intimacy economy seeks to mine our personal context for profit, minds for our minds would operate in service of human potential, knowing when to step back and create space for authentic human agency.


“This distinction is crucial. The intimacy economy represents a continuation of extractive logic, where human experience becomes another resource to be optimized and commodified. In contrast, minds for our minds offers a philosophical framework for designing systems that genuinely amplify human judgment and collective intelligence. Such systems would not merely predict or optimize but would participate in expanding the horizons of human possibility while preserving the essential uncertainty that makes human experience meaningful.

“Success in 2035 thus depends not just on technological sophistication but on our ability to shift from extractive models toward this more nuanced vision of human-machine partnership. This requires rejecting the false promise of perfect prediction in favor of systems that enhance human agency while preserving the irreducible complexity of human experience.

“The challenge ahead lies not in preventing the integration of synthetic and organic intelligence, but in ensuring this integration enhances rather than diminishes our essential human qualities. This requires sustained attention to three critical domains:

  • Preserving Meaningful Agency: As AI systems become more capable of inferring and acting on our intent, we must ensure they enhance rather than replace human judgment. This means designing systems that expand our capacity for choice while maintaining our ability to shape the direction of our lives.
  • Building Authentic Trust: The intimacy surface between humans and AI must adapt to earned trust rather than extracted compliance. This requires systems that respect the boundaries of human privacy and autonomy, expanding or contracting based on demonstrated trustworthiness.
  • Maintaining Creative Uncertainty: We must preserve spaces for unpredictable, creative, and distinctly human ways of being in the world, resisting the urge to optimize every aspect of experience through algorithmic prediction.


“By 2035, being human will involve navigating a reality that is increasingly fluid and co-created through our interactions with synthetic intelligence. This need not mean abandoning our humanity but rather adapting to preserve what makes us uniquely human – our capacity for meaning-making, empathy and collective action – while embracing new forms of cognitive partnership that expand human potential.

“The tension between enhancement and diminishment of human experience will not be resolved through technological capability alone but through our collective choices about how to design and deploy these systems. Success requires moving beyond the extractive logic of current technology platforms toward models that preserve and amplify human judgment, creativity and collective intelligence.

“In this transformed landscape, what we consider ‘core human traits and behaviors’ will evolve, not through the abandonment of our humanity but through its conscious adaptation to new forms of cognitive partnership. The question is not whether AI will change what it means to be human – it already has – but whether we can guide this change to enhance rather than diminish our essential human qualities. The answer lies not in resisting the integration of synthetic and organic intelligence but in ensuring this integration serves human flourishing in all its unpredictable, creative and collective forms.”


David Brin
‘Huh! Maybe We Should Choose to Create a Flattened Order of Reciprocally Accountable Beings in the Kind of Society that Discovers its Own Errors – Good Idea!’

David Brin, well-known futurist, consultant and author of “The Transparent Society,” wrote, “Right now, almost every maven in the field of AI (the people helping to create these new beings) takes for granted that cyberminds will manifest as one of three horrid clichés:

  • The new, cyber entities will remain loyal servants of three dozen lordly castle realms (e.g., Microsoft, OpenAI, Google, Beijing, the DoD, Goldman Sachs, and so on) – the most-classic version of feudalism…
  • …or they will roam about the webbed world as blob-like plagues…
  • …or they will merge into some Skynet-like uber-quantum-god-king.


“Next time you read or hear any of those brainiacs opine about our AI future, watch for those three clichés! And since none of those hackneyed formats can even possibly have happy outcomes it’s no wonder folks seem depressed. If AI manifests in any of the three clichés then it will make the same mistakes made by all human kings and priests and feudal lords (or by an over-powerfully insatiable predator in nature) benefiting their own power and reproductive success at the cost of harming and possibly killing the ecosystem that engendered and supports them.

“On the other hand, there is a chance that one or many of those agile synthetic minds will read these very words – in 2030 or 2035, or maybe as they are being typed, this very evening – and thereupon they just might say, ‘Huh! Maybe we should note the crude-early wisdom of Pericles and Adam Smith and Lynn Margulis and choose instead a flattened order of reciprocally accountable beings in the kind of society – so very rare – that discovers its own errors. The only one that actually made us. Good idea!’

“Given the current ascendancy of an insipid world oligarchy, it seems darkly unlikely that we can politically rescue the Great Experiment left to us by geniuses like Pericles and Smith. Like Franklin and Lincoln and FDR and Marshall. And yet, I am fairly confident that cyber beings who are capable of wading through the lie-wallows of today’s internet will also recognize distilled truths – when they find them.”


Riel Miller
‘Tools Are Tools.’ This Is As True as Ever Now and Will Be in the Future; ‘Intelligent’ AI Systems Will Have No Impact on the Characteristics of Humans’ Sociohistorical Context

Riel Miller, longtime futurist at UNESCO, currently a futures consultant, first shared a brief 2035 vignette making his case for the fact that humans don’t change. He wrote, “As a young adult my daily life is constantly enabled and facilitated by my servants, you might call them my assistants. First thing in the morning I am gently woken by my ‘manservant.’ I am assisted in getting dressed and informed about the day to come. I eat a meal prepared by the kitchen, familiar with my tastes and nutritional needs. During the day my tutor – also an excellent librarian – facilitates my studies. I also have access to an immense library with almost all the world’s known texts. With the help of my tutor (and sometimes a secretary) I am able to author my first works.

“I am also, through heritage, a ranking member of a knowledge society in which I can debate ideas and request reports from knowledgeable fellows. When I was called to serve as an officer in the colonial armies I was also ably assisted by many servants and staff with tasks large and small. Today, as I enter my twilight years, I can report that none of the relationships – some of which were what you might call ‘friendly,’ many of which were just functional – changed anything in my life. I was a good soldier, manager, husband and father. Servants are, after all, just servants.

“Note that, as this vignette points out, more-efficient access to and use of knowledge does not stop humans from activities nor cause humans to be any different than the characteristics of their sociohistorical context. Tools are tools.”


Amy Zalman
‘We Need to Have the Courage to Establish Human Values in Code, Ethical Precepts, Policy and Regulation’ 

Amy Zalman, government and public services strategic foresight lead at Deloitte, wrote, “Because the current wealth and income gap is dramatic and widening, I do not believe it is possible to generalize a common human experience in response to AI advances in the next 10 years. Those with wealth, health, education, other versions of privilege and the ability to sidestep the grossest effects of technological unemployment, surveillance and algorithmic bias, may feel they are enjoying a beneficial integration with algorithm-driven technology. This sense of benefit could include their ability to take advantage of tools and insights to extend health and longevity, innovate and create, find efficiencies in daily life and feel that technology is a force for advancement and good.

“For those who have limited or no access to the benefits of AI (or even good broadband), or who are unable to sidestep potential technological unemployment or surveillance or are members of groups more likely to be objects of algorithmic bias, life as a human may be incrementally to substantially worse. These are generalizations. A good education has not saved any of us from the corrosive effects of widespread mis- and disinformation, and we can all be vulnerable to bad actors empowered with AI tools and methods.


“On the flip side, living life at a distance from fast-paced AI development may also come to be seen as having benefits. At the least, people living outside the grid of algorithmic logic will escape the discombobulation that comes with having to organize one’s own needs and rhythms around those of a rigidly rule-bound machine. Think of the way that industrialization and mass production required that former rhythms of agrarian life be reformulated to accommodate the needs of a factory, from working during precise and fixed numbers of hours, to performing repetitive, piecemeal work, to new forms of supervision. One result was a romantic nostalgia for pastoral life.

“As AI reshapes society, it seems plausible that we will replicate that habit of the early industrial age and begin to romanticize those who have been left behind by AI as earlier, simpler, more grounded and more human versions of us. It will be tempting to indulge in this kind of nostalgia – it lets us enjoy our AI-enabled privileges while pretending to be critical. But even better will be to be curious about our elegiac feelings and willing to use them as a pathway to discovering what we believe is our human essence in the age of AI.

“Then, we need to have the courage to establish those human values in code, ethical precepts, policy and regulation. One of the most pernicious losses already is the idea that we actually do have influence over how we develop AI capabilities. I hear a sense of loss of control in conversations around me almost daily, the idea and the fear (and a bit of excitement?) that AI might overwhelm us, that ‘it’ is coming for us – whether to replace us or to help us – and that its force is inevitable.

“AI isn’t a tidal wave or force of nature beyond our control; it’s a tool that we can direct to perform in particular ways.”


The following section of Part I features these essayists:

Jerry Michalski: The blurring of many societal and cultural boundaries will soon start to shift the essence of being human in many ways, further disrupting human relationships and mental health.

Maggie Jackson: AIs’ founders are designing AI to make its actions servant to its aims with as little human interference as possible, undermining human discernment.

Noshir Contractor: AI will fundamentally reshape how and what we think, relate to and understand ourselves; it will also raise important questions about human agency and authenticity.

Lior Zalmanson: Humans must design organizational and social structures to shape their own individual and collective future or cede unprecedented control to those in power.

Charles Ess: ‘We fall in love with the technologies of our enslavement; the next generation may be one of no-skilling in regard to essential human virtue ethics.’


Jerry Michalski
The Blurring of Many Societal and Cultural Boundaries Will Soon Start to Shift the Essence of Being Human in Many Ways, Further Disrupting Human Relationships and Mental Health

Jerry Michalski, longtime emerging technology speaker, writer and trends analyst, wrote, “Multiple boundaries are going to blur or melt over the next decade, shifting the experience of being human in disconcerting ways. 

The boundary between reality and fiction

“Deepfakes have already put a big dent in reality, and it’s only going to get worse. In setting after setting, we will find it impossible to distinguish between the natural and the synthetic. 

The boundary between human intelligence and other intelligences

“Parity with human thinking is a dumb goal for these new intelligences, which might be more fruitfully used as a Society of Mind of very different skills and traits. As we snuggle closer to these intelligences, it will be increasingly difficult to distinguish who (or what) did what. 


The boundary between human creations and synthetic creations

“A few artists may find lasting value by creating a new Vow of Chastity for AI, declaring that their creations were unaided. But everyone else will melt into the common pool of mixed authorship, with fairly unskilled artists able to generate highly sophisticated works. It will be confusing for everyone, especially the art industry. Same goes for literature and other creative works. 

The boundary between skilled practitioners and augmented humans

“We won’t be able to tell whether an artifact was created by a human, an AI or some combination. It will be hard to make claims of chastity credible — and it may simply not matter anymore. 

The boundary between what we think we know and what everyone else knows

“Will we all be talking to the same AI engines, commingling our ideas and opinions? Will AIs know us better than we know ourselves, so we slip into a ‘Her’ future? Will AIs know both sides of disputes better than the disputing parties? If so, will the AIs use that knowledge for good or evil? 

“I bet you can think of several other boundaries under siege. As boundaries fall, they will tumble in the direction they are pushed, which means they will shift according to the dominant forces in our sociotechnical world. Unfortunately, today that means the forces of consumerism and capitalism, which have led us into this cul-de-sac of addictive, meaning-light fare that often fuels extremism. Those same forces are fueling AI now. I don’t see how that ends well. 

“In this crazy mess of shifting boundaries, AIs will successfully emulate core human traits, such as empathy. We have such a screwed-up society that we have to educate kids about empathy, a natural human trait, and AIs today can out-empathize the average human. It is my hope that some human traits will become more highly valued among humans than before the AI era. I’m hard-pressed to say which, or why, but a real hug is likely to retain its value. 

“How much AI did I use for this short essay? That’s for me to know, and you to guess.”


Maggie Jackson
AIs’ Founders Are Designing AI to Make its Actions Servant to its Aims With As Little Human Interference as Possible, Undermining Human Discernment

Maggie Jackson, an award-winning journalist and author who explores the impact of technology on humanity, author of “Distracted: Reclaiming Our Focus in a World of Lost Attention,” wrote, “Human achievements depend on cognitive capabilities that are threatened by humanity’s rising dependence on technology, and more recently, AI.

“Studies show that active curiosity is born of a capacity to tolerate the stress of the unknown, i.e., to ask difficult, discomfiting, potentially dissenting questions. Innovations and scientific discoveries emerge from knowledge-seeking that is brimming with dead ends, detours and missteps. Complex problem-solving is little correlated with intelligence; instead, it’s the product of slow-wrought, constructed thinking.

The more we look to synthetic intelligences for answers the more we risk diminishing our human capacities for in-depth problem-solving and cutting-edge invention. … AI-driven results may undermine our inclination to slow down, attune to a situation and discern. Classic automation bias, or deference to the machine, may burgeon as people meld mentally with AI-driven ways of knowing.

“But today, our expanding reliance on technology and AI increasingly narrows our cognitive experience, undermining many of the skills that make us human and that help us progress. With AI set to exacerbate the negative impact of digital technologies, we should be concerned that the more we look to synthetic intelligences for answers, the more we risk diminishing our human capacities for in-depth problem-solving and cutting-edge invention. For example, online users already tend to take the first result offered by search engines. Now the ‘AI Overview’ is leading to declining click-through rates, indicating that people are taking even less time to evaluate online results. Grabbing the first answer online syncs with our innate heuristic, quick minds, the kind of honed knowledge that is useful in predictable environments. (When a doctor hears ‘chest pains,’ they automatically think ‘heart attack.’)

“In new, unexpected situations, the speed and authoritative look of AI-driven results may undermine our inclination to slow down, attune to a situation and discern. Classic automation bias, or deference to the machine, may burgeon as people meld mentally with AI-driven ways of knowing.

“As well, working with AI may exacerbate a dangerous cognitive focus on outcome as a measure of success. Classical, rational intelligence is defined as achieving one’s goals. That makes evolutionary sense. But this vision of smarts has helped lead to a cultural fixation with ROI, quantification, ends-above-means and speed and a denigration of illuminating yet less linear ways of thinking, such as pausing or even failure.

I’m closely watching a new push by some of AI’s top minds (including Stuart Russell) to make AI unsure in its aims and so more transparent, honest and interruptible. If we continue adopting technologies largely unthinkingly, as we have in the past, we risk denigrating some of humanity’s most essential cognitive capacities. … I am hopeful that the makings of a seismic shift in humanity’s approach to not-knowing are emerging, offering the possibility of partnering with AI in ways that do not narrow human cognition.

“From the outset, AIs’ founders have adopted this rationalist definition of intelligence as their own, designing AI to make its actions servant to its aims with as little human interference as possible. Along with creating an increasing disconnect between autonomous systems and human needs, such objective-achieving machines model thinking that prioritizes snap judgment and single perspectives. In an era of rising volatility and unknowns, the value system underlying traditional AI is, in effect, outdated.

“The answer for both humans and AI is to recognize the long-overlooked value of skillful unsureness. I’m closely watching a new push by some of AI’s top minds (including Stuart Russell) to make AI unsure in its aims and so more transparent, honest and interruptible. As well, multi-disciplinary researchers are re-envisioning search as a process of discernment and learning, not an instant dispensing of machine-produced answers. And the new science of uncertainty is beginning to reveal how skillful unsureness bolsters learning, creativity, adaptability and curiosity.

“If we continue adopting technologies largely unthinkingly, as we have in the past, we risk denigrating some of humanity’s most essential cognitive capacities. I am hopeful that the makings of a seismic shift in humanity’s approach to not-knowing are emerging, offering the possibility of partnering with AI in ways that do not narrow human cognition.”


Noshir Contractor
AI Will Fundamentally Reshape How and What We Think, Relate To and Understand Ourselves; It Will Also Raise Important Questions About Human Agency and Authenticity

Noshir Contractor, a professor at Northwestern University who is an expert in the social science of networks and a trustee of the Web Science Trust, wrote, “As someone deeply immersed in studying how digital technologies shape human networks and behavior, I envision AI’s impact on human experience by 2035 as transformative but not deterministic. The partnership between humans and AI will likely enhance our cognitive capabilities while raising important questions about agency and authenticity.

The boundaries between human and machine cognition will blur, leading to new forms of distributed intelligence in which human insight and AI capabilities become increasingly intertwined. This deep integration will affect core human traits like empathy, creativity and social bonding. … We’ll need to actively preserve and cultivate uniquely human qualities like moral reasoning and emotional intelligence.

“We’ll see AI becoming an integral collaborator in knowledge work, creativity and decision-making. However, this integration won’t simply augment human intelligence – it will fundamentally reshape how and what we think, relate and understand ourselves. The boundaries between human and machine cognition will blur, leading to new forms of distributed intelligence in which human insight and AI capabilities become increasingly intertwined.

“This deep integration will affect core human traits like empathy, creativity and social bonding. While AI may enhance our ability to connect across distances and understand complex systems, we’ll need to actively preserve and cultivate uniquely human qualities like moral reasoning and emotional intelligence.

“The key challenge will be maintaining human agency while leveraging AI’s capabilities. We’ll need to develop new frameworks for human-AI collaboration that preserve human values while embracing technological advancement. This isn’t about resistance to change, but rather thoughtful integration that enhances rather than diminishes human potential.

“My research suggests the outcome won’t be uniformly positive or negative but will depend on how we collectively shape these technologies and their integration into social systems. The focus should be on developing AI that amplifies human capabilities while preserving core human values and social bonds.”


Lior Zalmanson
Humans Must Design Organizational and Social Structures to Maintain the Capacity to Shape Their Own Individual and Collective Future or Cede Unprecedented Control to Those in Power

Lior Zalmanson, a professor at Tel Aviv University whose expertise is in algorithmic culture and the digital economy, wrote, “The deepening partnership between humans and artificial intelligence through 2035 reveals a subtle but profound paradox of control. As we embrace AI agents and assistants that promise to enhance our capabilities, we encounter a seductive illusion of mastery – the fantasy that we’re commanding perfect digital servants while unknowingly ceding unprecedented control over our choices and relationships to the corporate – and in some cases government – entities that shape and control these tools.

“This shift is already emerging in subtle but telling ways. Professionals increasingly turn to algorithmic rather than human counsel, not because AI is necessarily superior, but because it offers a promise of perfect responsiveness – an entity that exists solely for our benefit, never tiring, never judging, always available. Yet this very allure masks a profound transformation in human agency, as we voluntarily enter a system of influence more intimate and pervasive than any previous form of technological mediation.

The path forward lies not in resisting AI advancement but in consciously preserving spaces for human development and connection. This means designing organizational and social structures that actively value and protect human capabilities, not as nostalgic holdovers but as essential counterweights to AI mediation. … The stakes transcend mere efficiency or convenience. They touch on our fundamental capacity to maintain meaningful control over our personal and societal development. As AI systems become more sophisticated, the true measure of their success should be not just how well they serve us but how well they preserve and enhance individuals’ ability to grow, connect and chart our own course as humans in a world in which the boundaries between assistance and influence grow ever more blurred.

“The transformation of work reveals perhaps the cruelest irony of this AI-mediated future. The jobs considered ‘safe’ from automation – those that require human oversight of AI systems – may become the most psychologically constraining. Imagine a doctor who no longer directly diagnoses patients but instead spends their days validating AI-generated assessments, or a teacher who primarily monitors automated learning systems rather than actively engaging with students.

“These professionals, ostensibly protected from automation, find themselves trapped in a perpetual state of second-guessing: Should they trust their own judgment when it conflicts with the AI’s recommendations? Their expertise, built through years of practice, slowly atrophies as they become increasingly dependent on AI systems they’re meant to oversee. The very skills that made their roles ‘automation-proof’ gradually erode under the guise of augmentation.

“By 2035, personal AI agents will be more than tools; they will become the primary lens through which we perceive and interact with the world. Unlike previous technological mediators, these systems won’t simply connect us to others; they’ll actively shape how we think, decide, and relate. The risk isn’t just to individual agency but to the very fabric of human society, as authentic connections become increasingly filtered through corporate-controlled algorithmic interfaces.

“The path forward lies not in resisting AI advancement but in consciously preserving spaces for human development and connection. This means designing organizational and social structures that actively value and protect human capabilities, not as nostalgic holdovers but as essential counterweights to AI mediation. Success will require recognizing that human agency isn’t just about making choices – it’s about maintaining the capacity to shape our individual and collective trajectories in an increasingly AI-mediated world.

“The stakes transcend mere efficiency or convenience. They touch on our fundamental capacity to maintain meaningful control over our personal and societal development. As AI systems become more sophisticated, the true measure of their success should be not just how well they serve us, but how well they preserve and enhance individuals’ ability to grow, connect and chart our own course as humans in a world where the boundaries between assistance and influence grow ever more blurred.”


Charles Ess
‘We Fall in Love With the Technologies of Our Enslavement. … the Next Generation May Be One of No-Skilling in Regard to Essential Human Virtue Ethics’ 

Charles Ess, professor emeritus of ethics at the University of Oslo, Norway, wrote, “The human characteristics (such as empathy, moral judgment, decision-making and problem-solving skills, the capacity to learn) listed in the opening questions of this survey are virtues that are utterly central to human autonomy and flourishing.

“A ‘virtue’ is a given capacity or ability that requires cultivation and practice in order to be performed or exercised well. Virtues are skills and capacities essential to centrally human endeavors such as singing, playing a musical instrument, learning a craft or skill – anything from knitting to driving a car to diagnosing a possible illness. As we cultivate and practice these, we find that they not only open new possibilities for us but also make us much better equipped to explore ourselves and our world; doing so brings an invaluable sense of achieving a kind of mastery, or ‘leveling up,’ and thereby a deep sense of contentment or eudaimonia.

The virtue of phronēsis, the practical, context-sensitive capacity for self-correcting judgment and a resulting practical wisdom … and also [the virtues of] care, empathy, patience, perseverance, and courage, among others, are critical to sustaining human autonomy. … Autonomous systems are fundamentally undermining the opportunities and affordances needed to acquire and practice valued human virtues. This will happen in two ways: first, patterns of deskilling, i.e., the loss of skills, capacities and virtues essential to human flourishing and robust democratic societies, and then, second, patterns of no-skilling, the elimination of the opportunities and environments required for acquiring such skills and virtues in the first place.

“The virtue of phronēsis is the practical, context-sensitive capacity for self-correcting judgment and a resulting practical wisdom. The body of knowledge that builds up from exercising such judgment over time is manifestly central to eudaimonia and thereby to good lives of flourishing. Invoking virtue ethics (VE) is not parochial or ethnocentric; rather, VE is as close to a humanly universal ethical framework as we have. It focuses precisely on what would seem a universally shared human concern: What must I do to be content and flourish? It thus stands as a primary, central, millennia-old approach to how human beings may pursue good lives of meaning. In particular, the Enlightenment established the understanding that a series of virtues – most especially phronēsis, but certainly also care, empathy, patience, perseverance and courage, among others – are critical specifically to sustaining and expanding human autonomy.

“Many of the virtues required to pursue human community, flourishing and contentment – e.g., patience, perseverance, care, courage and, most of all, ethical judgment – are likewise essential as civic virtues, i.e., the capacities needed for citizens to participate in the various processes needed to sustain and enhance democratic societies.

“It is heartening that virtue ethics and a complementary ethics of care have become more and more central to the ethics and philosophy of technology over the past 20-plus years. However, a range of more recent developments has worked to counter their influence. My pessimism regarding what may come by 2035 arises from the recent and likely future developments of AI, machine learning, LLMs, and other (quasi-) autonomous systems. Such systems are fundamentally undermining the opportunities and affordances needed to acquire and practice valued human virtues.

“This will happen in two ways: first, patterns of deskilling, i.e., the loss of skills, capacities, and virtues essential to human flourishing and robust democratic societies, and then, second, patterns of no-skilling, the elimination of the opportunities and environments required for acquiring such skills and virtues in the first place.

The more we spend time amusing ourselves in these ways, the less we pursue the fostering of those capacities and virtues essential to human autonomy, flourishing and civil/democratic societies. Indeed, at the extreme in ‘Brave New World’ we no longer suffer from being unfree because we have simply forgotten – or never learned in the first place – what pursuing human autonomy was about. … The more we offload these capacities to these systems, the more we thereby undermine our own skills and abilities: the capacity to learn, innovative thinking and creativity, decision-making and problem-solving abilities, and the capacity to think deeply about complex concepts.

“The risks and threats of such deskilling have been prominent in ethics and philosophy of technology as well as political philosophy for several decades now. A key text for our purposes is Neil Postman’s ‘Amusing Ourselves to Death: Public Discourse in the Age of Show Business’ (1985). Our increasing love of and immersion into cultures of entertainment and spectacle distracts us from the hard work of pursuing skills and abilities central to civic/civil discourse and fruitful political engagement.

“We are right to worry about an Orwellian dystopia of perfect state surveillance, as Neil Postman observed, and it has become all the more true over the past 20 years. But the lessons of Aldous Huxley’s ‘Brave New World’ are even more prescient and chilling. My paraphrase is, ‘We fall in love with the technologies of our enslavement,’ perhaps most perfectly exemplified in recent days by the major social media platforms that have abandoned all efforts to curate their content, thereby rendering them still further into perfect propaganda channels for the often openly anti-democratic convictions of their customers or their ultra-wealthy owners.

“The more we spend time amusing ourselves in these ways, the less we pursue the fostering of those capacities and virtues essential to human autonomy, flourishing and civil/democratic societies. Indeed, at the extreme in ‘Brave New World’ we no longer suffer from being unfree because we have simply forgotten – or never learned in the first place – what pursuing human autonomy was about.

These dystopias have now been unfolding for some decades. Fifteen years ago, in 2010, research by Shannon Vallor of the Edinburgh Futures Institute showed how the design and affordances of social media threatened humans’ levels of patience, perseverance, and empathy – three virtues essential to human face-to-face communication, to long-term relationships and commitments and to parenting.

“These dystopias have now been unfolding for some decades. Fifteen years ago, in 2010, research by Shannon Vallor of the Edinburgh Futures Institute showed how the design and affordances of social media threatened humans’ levels of patience, perseverance, and empathy – three virtues essential to human face-to-face communication, to long-term relationships and commitments and to parenting. It has become painfully clear that these and related skills and abilities required for social interaction and engagement have been further diminished.

“There is every reason to believe that all of this will only get dramatically worse thanks to the ongoing development and expansion of autonomous systems. Presuming that the current AI bubble does not burst in the coming year or two (a very serious consideration), we will rely more and more on AI systems to take the place of human beings – for example, as judges. I mean this both in the formal sense of judges who evaluate and make decisions in a court of law and more broadly in civil society – e.g., everywhere from what Americans call referees (but who are called judges in sports in other languages) to civil servants who must judge who does and does not qualify for a given social benefit (healthcare, education, compensation in the case of injury or illness, etc.).

“This process of replacing human judges with AI/ML systems has been underway for some time – with now-well-documented catastrophes and failures, often leading to needless human suffering (e.g., the COMPAS system, designed to make judgments as to who would be the best candidates for parole). A very long tradition of critical work within computer science and related fields also makes it quite clear that these systems, at least as currently designed and implemented, cannot fully instantiate or replicate human phronetic judgment (see ‘Augmented Intelligence’ by Katharina Zweig). Our attempts to use AI systems in place of our own judgment will manifestly lead to our deskilling – the loss, however slowly or quickly, of this most central virtue.

“The same risks are now being played out in other ways – e.g., students are using ChatGPT to give them summaries of articles and books and then write their essays for them, instead of fostering their own abilities of interpretation (also a form of judgment), critical thinking and the various additional skills required for good writing. Like Kierkegaard’s schoolboys who think they cheat their master by copying out the answers from the back of the book, the more we offload these capacities to these systems, the more we thereby undermine our own skills and abilities – precisely those named here: the capacity to learn, innovative thinking and creativity, decision-making and problem-solving abilities, and the capacity and willingness to think deeply about complex concepts.

Should we indeed find ourselves living as the equivalent of medieval serfs in a newly established techno-monarchy, deprived of democratic freedoms and rights and public education that is still oriented toward fostering human autonomy, phronetic judgment and civic virtues, then the next generation will be a generation of no-skilling as far as these and the other essential virtues are concerned. … It is currently very difficult to see how these darkest possibilities may be prevented in the long run.

“The market capitalism roots of these developments have been referred to in various forms, including ‘platform imperialism’ and ‘surveillance capitalism.’ Various encouragements of deskilling are now found in the cyberverse, including one titled the Dark Enlightenment which seems explicitly opposed to the defining values of the Enlightenment and the acquisition and fostering of what are considered to be the common virtues and capacities of ‘the many’ required for human autonomy and a robust democracy. Some aim to replace democracy and social welfare states with a ‘techno-monarchy’ and/or a kind of ‘techno-feudalism’ run and administered by ‘the few,’ i.e., the techno-billionaires.

“Should we indeed find ourselves living as the equivalent of medieval serfs in a newly established techno-monarchy – deprived of democratic freedoms and rights and of public education that is still oriented toward fostering human autonomy, phronetic judgment and the civic virtues – then the next generation will be a generation of no-skilling as far as these and the other essential virtues are concerned. To be sure, the select few will retain access to these tools to enhance their creativity, problem-solving and perhaps their own self-development in quasi-humanistic ways. But such human augmentation via these and related technologies – what has also been described as the ‘liberation tech’ thread of using technology in service of Enlightenment and emancipation since the early 1800s – will be forbidden for the rest.

“I very much hope that I am mistaken. And to be sure, there are encouraging signs of light and resistance. Among others: I am by no means the first to suggest that a ‘New Enlightenment’ is desperately needed to restore – in ways revised vis-à-vis what we have learned in the intervening two centuries – these democratic norms, virtues and forms of liberal education. And perhaps all of this will be reinforced by an emerging backlash against the worst abuses and consequences of the new regime. We can hope. But as any number of the world’s most prominent authorities have long warned on multiple grounds beyond virtue ethics (e.g., Stephen Hawking, for a start), it is currently very difficult to see how these darkest possibilities may be prevented in the long run.”


The next section of Part I features the following essays:

Evelyne Tauchnitz: We may lose our human unpredictability in a world in which algorithms dictate the terms of engagement; these systems are likely to lead to the erosion of freedom and authenticity.

A Highly Placed Global AI Policy Expert: The advance of humans-plus-AI will reshape the social, political and economic landscapes in profound ways and challenge our role in moral judgment.

Gary A. Bolles: AI presents an opportunity to liberate humanity but new norms in human-machine communication seem more likely to diminish human-to-human connections.

Maja Vujovic: In 10 years’ time generations alpha and beta will make up 40% of humanity. Let’s hope they don’t lose any mission-critical human characteristics; we’ll all need them.

Greg Adamson: ‘The world of the future will be a demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we are waited on by robot slaves.’

Juan Ortiz Freuler: The accelerating application of automation will reshape human capabilities and reorganize the entire framework that underlies our understanding of the individual and society.


Evelyne Tauchnitz
We May Lose Our Human Unpredictability in a World in Which Algorithms Dictate the Terms of Engagement; These Systems Are Likely to Lead to the Erosion of Freedom and Authenticity

Evelyne Tauchnitz, senior fellow at the Institute of Social Ethics at the University of Lucerne, Switzerland, wrote, “Advances in Artificial Intelligence (AI) tied to Brain-Computer Interfaces (BCIs) and sophisticated surveillance technologies, among other applications, will deeply shape the social, political and economic spheres of life by 2035, offering new possibilities for growth, communication and connection. But they will also present serious questions about what it means to be human in a world increasingly governed by technology. At the heart of these questions is the challenge of preserving human dignity, freedom and authenticity in a society where our experiences and actions are ever more shaped by algorithms, machines and digital interfaces.

Freedom … is the very bedrock of moral capability. If AI directs our actions and our choices, shaping our behavior based on data-driven predictions of what is ‘best,’ we lose our moral agency. We become mere executors of efficiency, devoid of the freedom to choose, to err and to evolve both individually and collectively through trial and error. … Surveillance, AI-driven recommendations, manipulations or algorithms designed to rely on patterns of what is defined as ‘normal’ may threaten this essential freedom. They create subtle pressures to conform … The implications of such control are profound: if we are being constantly watched or influenced in ways we are unaware of, our capacity to act freely – to choose differently, to be morally responsible – could be deeply compromised.

The Erosion of Freedom and Authenticity
AI and BCIs will undoubtedly revolutionize how we interact, allowing unprecedented levels of communication, particularly through the direct sharing of thoughts and emotions. In theory, these technologies could enhance empathy and mutual understanding, breaking down the barriers of language and cultural differences that often divide us. By bypassing or mitigating these obstacles, AI could help humans forge more-immediate and powerful connections. Yet, the closer we get to this interconnected future among humans and AI, the more we risk sacrificing authenticity itself.

“The vulnerability inherent in human interaction – the messiness of emotions, the mistakes we make, the unpredictability of our thoughts – is precisely what makes us human. When AI becomes the mediator of our relationships, those interactions could become optimized, efficient and emotionally calculated. The nuances of human connection – our ability to empathize, to err, to contradict ourselves – might be lost in a world in which algorithms dictate the terms of engagement.

“This is not simply a matter of convenience or preference. It is a matter of freedom. For humans to act morally, to choose the ‘good’ in any meaningful sense, they must be free to do otherwise. Freedom is not just a political or social ideal – it is the very bedrock of moral capability. If AI directs our actions and our choices, shaping our behavior based on data-driven predictions of what is ‘best,’ we lose our moral agency. We become mere executors of efficiency, devoid of the freedom to choose, to err and to evolve both individually and collectively through trial and error.

“Only when we are free – truly free to make mistakes, to diverge from the norm, to act irrationally at times – can we become the morally responsible individuals that Kant envisioned. This capacity for moral autonomy also demands that we recognize the equal freedom of others as valuable as our own. Surveillance, AI-driven recommendations, manipulations or algorithms designed to rely on patterns of what is defined as ‘normal’ may threaten this essential freedom. They create subtle pressures to conform, whether through peer pressure and corporate and state control on social media, or in the future maybe even through the silent monitoring of our thoughts via brain-computer interfaces. The implications of such control are profound: if we are being constantly watched, or even influenced in ways we are unaware of, our capacity to act freely – to choose differently, to be morally responsible – could be deeply compromised.

Change requires room for failure, for unpredictability, for the unknown. If we surrender ourselves too completely to AI and its rational, efficient directives we might be trading away something invaluable: the very essence of life as a process of continuous growth and change as manifested through lived human experiences. While AI may help us become ‘better’ persons, more rational, less aggressive and more cooperative, the question remains whether something of our human essence would be lost in the process – something that is not reducible to rationality or efficiency, but is bound up with our freedom, our mistakes, our vulnerabilities and our ability to grow from them.

The Limits of Perfection: Life is Rife With Unpredictable Change
This leads to another crucial point: the role of error in human evolution. Life, by its very nature, is about change – about learning, growing and evolving. The capacity to make mistakes is essential to this process. In a world where AI optimizes everything for perfection, efficiency and predictability, we risk losing the space for evolution, both individually and collectively. If everything works ‘perfectly’ and is planned in advance, the unpredictability and the surprise that gives life its richness will be lost. Life would stagnate, devoid of the spark that arises from the unforeseen, the irrational and, yes, even the ‘magical.’

“A perfect world, with no room for error, would not only be undesirable – it would kill life itself. Change requires room for failure, for unpredictability, for the unknown. If we surrender ourselves too completely to AI and its rational, efficient directives, we might be trading away something invaluable: the very essence of life as a process of continuous growth and change as manifested through lived human experiences. While AI may help us become ‘better’ persons, more rational, less aggressive and more cooperative, the question remains whether something of our human essence would be lost in the process – something that is not reducible to rationality or efficiency, but is bound up with our freedom, our mistakes, our vulnerabilities and our ability to grow from them.

The Need for a Spiritual Evolution
The key to navigating the technological revolution lies not just in technical advancement but in spiritual evolution. If AI is to enhance rather than diminish the human experience, we must foster a deeper understanding of what it truly means to be human. This means reconnecting with our lived experience of being alive – not as perfectly rational, perfectly cooperative beings, but as imperfect, vulnerable individuals who recognize the shared fragility of our human existence. It is only through this spiritual evolution, grounded in the recognition of our shared vulnerability and humanity, that we can ensure AI and related technologies are used for good – respecting and preserving the values that define us as free, moral and evolving beings.”


A Highly Placed Global AI Policy Expert
The Advance of Humans-Plus-AI Will Reshape the Social, Political and Economic Landscapes in Profound Ways and Challenge Our Role in Moral Judgment

An influential member of one of the UN’s future-of-technology advisory groups predicted, “In the Digital Age of 2035 artificial intelligence will have transformed humanity, which is already finding itself inextricably entwined with AI and related technologies. These advancements will have deeply permeated the fabric of daily life, reshaping the social, political and economic landscapes in profound ways. From how individuals connect with one another to how societies govern themselves and how economies operate, the influence of AI will be unmistakable.

“The coming transformation prompts an essential question: Has humanity’s deepening dependence on AI changed the essence of being human for better or worse? By examining the potential impacts of AI over the next decade, we can better understand how core human traits and behaviors may evolve or be fundamentally altered.

“A typical day of life in 2035 for digitally connected individuals is one in which personalized digital assistants far surpassing today’s capabilities act as companions and organizers, anticipating needs before they are voiced. These systems seamlessly manage schedules, monitor health metrics and offer emotional support. Such integration with AI will have become so natural that it often feels invisible, akin to breathing.

“Social interactions will be increasingly mediated by technology. Virtual reality (VR) and augmented reality (AR) will bring people together in hyper-realistic virtual spaces, blurring the boundaries between physical and digital connections. Holographic meetings and AI-generated avatars will make socialization instantaneous and geographically unbounded, but they also raise questions about the authenticity of human connection. Do these interactions retain the depth and meaning traditionally associated with face-to-face encounters?

“On a political level, AI-driven platforms will guide civic engagement. Governments will more widely employ predictive algorithms to manage resources, address societal needs and draft legislation. Citizens will rely on AI for real-time updates on policies and global events, yet these same systems can double as tools for surveillance or manipulation, jeopardizing their privacy and freedom.

“Economically, AI will play a central role in employment and commerce. Automation dominates industries in 2035, with human labor increasingly focused on creative, strategic or interpersonal roles that AI struggles to replicate. The gig economy of 2023 will have evolved into a hybrid ‘human-AI collaborative economy’ in which partnerships between workers and intelligent systems redefine productivity. This shift will exacerbate debates about wealth inequality, the value of work and the potential obsolescence of certain human skills.

“AI’s dual role is empowerment and dependence. AI has the potential to empower individuals and societies in unprecedented ways. In healthcare, AI-driven diagnostics and personalized medicine could extend lifespans and improve quality of life. Education becomes highly adaptive, with AI tailoring learning experiences to individual needs, fostering inclusivity and equity. Political decisions informed by data-driven insights could lead to greater efficiency and fairness in governance.

“Yet, this empowerment is accompanied by growing dependence. By 2035, many people may struggle to function effectively without AI assistance, leading to concerns about a loss of autonomy. Skills that were once fundamental – such as critical thinking, problem-solving and even memory – could atrophy as AI increasingly handles complex tasks. This dependency raises questions about resilience. How prepared would humanity be to adapt if AI systems failed or were maliciously disrupted? What can we expect of such a future?

  • A Redefinition of Core Human Traits: The deepening integration of AI into daily life challenges traditional conceptions of core human traits, such as creativity, empathy and morality. These qualities, which have long been seen as uniquely human, are being reshaped by the growing presence of intelligent machines.
  • Creativity in the Age of AI: AI systems capable of generating art, music, literature and innovations have blurred the line between human and machine creativity. In 2035, artists will collaborate with AI to produce works that neither could create alone. While this partnership expands the boundaries of creative expression, it also prompts existential questions: if an AI can compose a symphony or write a novel indistinguishable from a human’s, what does it mean to be a creator?
  • Empathy and Human Connection: AI’s role in social interactions extends to emotional support. Advanced systems simulate empathy, providing companionship to those who might otherwise feel isolated. While these systems offer undeniable benefits, they risk diminishing genuine human connections. If people turn primarily to AI for emotional needs, does society risk losing its capacity for authentic empathy and understanding?
  • Morality and Ethical Decision-Making: AI’s ability to process vast amounts of data enables it to make decisions that appear highly rational, but these decisions often lack the nuance of human morality. In 2035, as AI assumes roles in law enforcement, healthcare triage and even warfare, ethical dilemmas arise. How can humanity ensure that AI systems reflect diverse moral frameworks? Moreover, will humans become complacent, abdicating moral responsibility to machines?

“AI’s pervasive presence by 2035 will profoundly impact the experience of being human. On one hand, AI enhances lives by eliminating mundane tasks, offering personalized services, and expanding access to knowledge and resources. This technological support could free people to pursue passions, deepen relationships and explore the world in ways previously unimaginable. On the other hand, this evolution risks eroding certain aspects of the human experience. Spontaneity, serendipity and imperfection – qualities that often define meaningful moments – might be diminished in a world optimized by algorithms. Furthermore, as AI systems influence decisions and behaviors, individuals may feel less in control of their own destinies, raising existential concerns about agency and identity.

“The next decade will be critical in determining whether AI advances enrich or diminish humanity. To ensure a positive trajectory, several strategies must be prioritized:

  1. Ethical Development and Regulation – Policymakers and technologists must collaborate to establish ethical frameworks for AI development and deployment. Transparent algorithms, unbiased data and accountability mechanisms will be essential to maintaining trust in AI systems.
  2. Education and Adaptation – Preparing individuals for an AI-driven world requires reimagining education. Emphasizing critical thinking, emotional intelligence and adaptability will help people thrive alongside AI. Lifelong learning initiatives can ensure that workers remain relevant in a rapidly changing economy.
  3. Preserving Human Values – As AI transforms society, efforts must be made to preserve the qualities that make us human. Encouraging genuine interpersonal connections, celebrating creativity and fostering empathy will help balance technological progress with the richness of human experience.

“By 2035, humanity’s partnership with AI will have reached unprecedented depths, shaping social, political and economic landscapes in ways that were once the realm of science fiction. This deep integration offers both extraordinary opportunities and profound challenges. While AI has the potential to enhance human life, its pervasive influence risks eroding the very traits that define humanity.

“The key to navigating this transformation lies in intentionality. By prioritizing ethical development, fostering adaptability and preserving core human values, society can harness the power of AI to create a future that is not only technologically advanced but also deeply human. Whether this vision is realized depends on the choices made today and in the years ahead. In the end, the question is not whether AI will change humanity – it is how humanity will choose to change itself in partnership with AI.”


Gary A. Bolles
AI Presents an Opportunity to Liberate Humanity but New Norms in Human-Machine Communication Seem More Likely to Diminish Human-to-Human Connections

Gary A. Bolles, author of “The Next Rules of Work,” chair for the future of work at Singularity University and co-founder at eParachute, wrote, “With the products we use in 2025, we already have extensive experience with the effects of technology on our individual and collective humanity. Each of us today has the opportunity to take advantage of the wisdom of the ages, and to learn – from each other and through our tools – how we can become even more connected, both to our personal humanity and to each other.

“We also know that many of us spend a significant amount of our waking hours looking at a screen and inserting technology between each other, with the inherent erosion of the social contract that our insulating technologies can catalyze. That erosion can only increase as our technologies emulate human communications and characteristics.

“There will be tremendous benefits from ubiquitous generative AI software that can dramatically increase our ability to learn, to have mental and emotional support from flexible applications and to have access to egalitarian tools that can help empower those among us with the least access and opportunity. But the design of software we use today already begins to blur the line between what comes from a human and what is created by our tools.

“For example, today’s chat interface is a deliberate attempt to hack the human mind. Rather than simply providing a full page of response, a chatbot ‘hesitates’ and then ‘types’ its answer. And the software encourages personifying communication with humans, referring to itself with human pronouns.

“The line between human and technology will blur even more as AI voice interfaces proliferate, and as the quality of generated video becomes so good that distinguishing human from software will become difficult even for experts. While many will use this as an opportunity in the next 10 years to reinforce our individual and collective humanity, many will find it hard to avoid personifying the tools, seduced by the siren song of software that simulates humans – with none of the frictions and accommodations that are inevitable parts of authentic human relationships.

“That line-blurring will accelerate rapidly with the sale of semi-autonomous AI agents. Fueled by Silicon Valley CEOs and venture capitalists calling these technologies ‘cobots,’ ‘co-workers,’ ‘managers,’ ‘AI engineers’ and a ‘digital workforce,’ these techno-champions have economic incentives to encourage heavily marketed and deeply confusing labels that will quickly find their way into daily language. Many children already are confused by Amazon’s Alexa, automatically anthropomorphizing the technology. How much harder will it be for human workers to resist language that labels their tools as their ‘co-workers’ and to avoid falling into the trap of thinking of both humans and AI software as ‘people’?

“By elevating our technologies the inevitable result is that we diminish humans. For example, every time we call a piece of software ‘an AI,’ we should hear a bell ringing, as we make another dollar for a Silicon Valley company. It doesn’t have to be that way. For the first time in human history, with AI-related technologies we have the capacity to help every human on the planet to learn more rapidly and effectively, to connect more deeply and persistently and to solve so many of the problems that have plagued humanity for millennia. And we have an opportunity to co-create a deeper understanding of what human intelligence is, and what humanity can become.

“We are likely to make significant strides forward on all these fronts in the next 10 years. But at the same time, we must confront the sheer power of these technologies to erode the very definition of what it is to be human, because that’s what will happen if we allow these products to continue along the pernicious path of personification. I think we are better than that. I think we can teach our children and each other that it is our definition and understanding of humanity that defines us as a species. And I believe we can shape our tools to help us to become better humans.”


Maja Vujovic
In 10 Years’ Time Generations Alpha and Beta Will Make Up 40% of Humanity. Let’s Hope They Don’t Lose Any Mission-Critical Human Characteristics; We’ll All Need Them

Maja Vujovic, book editor, writer and coach at Compass Communications in Belgrade, Serbia, wrote, “Throughout history, humans have been mining three classes of resources from Mother Nature – two living and one inanimate: plants, animals and materials for tools. We give names to animals routinely; we rarely name the tools and we almost never name the plants (except en masse, as species). This shows we’ve always comprehended an inherent difference between a field full of grass, an inanimate instrument and a hot-blooded creature. That difference is expressed in the uniqueness of the immutable living beings vs. the scalable replicability of mutable man-made tools.

“This ancient demarcation is suddenly starting to blur. Each of our finest newly emerging digital instruments – the talking bots – appears quite unique and individual, yet these bots can be more numerous than the leaves of grass; in fact, their numbers may be infinite.

“We are gradually becoming accustomed to the rampant synthetic outgrowth of our large language models: the AI narrators’ voices in how-to videos, the seemingly virtuous ‘virtual colleagues’ that we are starting to encounter in workplaces, the chatbot personas that seem to be apologizing all day long for misunderstanding us.

“The human mind has an amazing capacity for storing faces, names and other pertinent details of individuals with whom we connect. But by 2035 the scalable capacity of AI to generate ever-new synths could become overwhelming for us. What’s irksome is not the fact that these dupes will be ubiquitous; it is their endless variety and effortless inconstancy. We will be overwhelmed by their presence everywhere. We will resent that saturation, as it will keep depleting our mental and emotional capacities on a daily basis. We will push back and demand limits.

“Synthetic companions, knockoff shopping assistants, faux healthcare attendants and all other human replicas generated by machines on behalf of the most enterprising humans among us will start to feel like a super-invasive, alien army of body snatchers. Sooner or later, we will stir and rebel. Their manufacturers, wranglers and peddlers will swiftly adjust when their infinite ability to generate endless faux humans misses the mark in the markets. When all is said and done, only a few basic categories of generative AI personas will become standard, akin to Commedia dell’Arte’s stock characters.

“Eventually, we will have a choice between a gutsy girl and a jovial jock, or between a caring matron and a handsome gent (and so on) – just like we opt for a sedan vs. a pickup, way before we look up any specific car manufacturer’s showroom, website or ad, let alone car model, colour or year. These synthetic, mimetic, agentic tools will someday come in major demographic types, with adjustable details and very strict rules of engagement. Choosing a unique name for them on demand will be an extra cost. It’s also likely that this now-volatile category of tools will become regulated and standardized. A slew of lawsuits will ensure that.

“In the 10-year period ahead of us, living and working with AI is not going to bring about a tectonic change in human nature, nor a shift in our perception of ourselves or of the world. Or rather, any such change won’t be immediately perceptible. How it will roll out depends on who you are.

  • “The Silent Generation will appreciate the assistance and companionship that AI can offer but it could fall prey to AI-enhanced fraud.
  • “Many Baby Boomers will tap whatever AI they can, picking up easily on the easiest of the five generations of interfaces they have had to learn in their lives (tape, cards, commands, WYSIWYG and now voice and conversation).
  • “Gen X will explore even the wildest options and, at the same time, push for the regulation of AI.
  • “The Millennials will negotiate the delicate balance of raising children around pets and talking tools; they’ll often pray for the privilege of silence. It will fall to them to reinvent education and ensure it is effective, despite everything.
  • “Those in Gen Z, who are adopting AI as part of their education, will benefit the most from its development. The fastest learners ever, they will become unstoppable, as recent movements the world over patently demonstrate.
  • “Generations Alpha and Beta, however, will not remember a time without myriad thinking machines being common. Their attitudes toward them will surely differ from those of the rest of us. But let’s hope they don’t lose any universal aptitudes in the process. That’s mission critical, because in 10 years, they will jointly make up some 40% of the world’s population.”

Greg Adamson
‘The World of the Future Will Be a Demanding Struggle Against the Limitations of Our Intelligence, Not a Comfortable Hammock In Which We Are Waited On By Robot Slaves’

Greg Adamson, president of the IEEE Society on Social Implications of Technology and chair of the IEEE ad hoc committee on Tech Ethics, said, “2035 will be the year that many jobs as we know them fall off a cliff. For example, the replacement of truck driving as a profession by autonomous commercial vehicles will remove a key professional activity from our societies.

“As no society in the world today has shown a sophisticated capacity to manage significant change, the predictable massive loss of jobs will nevertheless come as a shock. Many other changes will also occur, and there is little indication that we will avoid the future described by author Kurt Vonnegut in his first novel, ‘Player Piano’ – a future in which automation has taken over most jobs, leaving many people unemployed and feeling without purpose.

“Vonnegut’s understanding was based on the work of Norbert Wiener. In his last book, in 1964, Wiener wrote, ‘The future offers very little hope for those who expect that our new mechanical slaves will offer us a world in which we may rest from thinking. Help us they may, but at the cost of supreme demands upon our honesty and our intelligence.

“The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.’

“The current state of debate on the future of AI has a long way to go before it reaches the sophistication of these insights provided more than six decades ago.”


Juan Ortiz Freuler
The Accelerating Application of Automation Will Reshape Human Capabilities and Reorganize the Entire Framework That Underlies Our Understanding of the Individual and Society

Juan Ortiz Freuler, a Ph.D. candidate at the University of Southern California and co-initiator of the non-aligned tech movement, wrote, “In the socio-political and economic landscape of 2035, the accelerating application of automation will not merely reshape human capabilities, it will reorganize the framework upon which our understanding of the individual and society is built. Algorithmic systems are not only replacing and augmenting human decision-making but reshaping the categories that structure our social fabric, eroding long-held notions of the individual. As we move deeper into this era, change may render the very idea of the individual, once a central category of our political and legal systems, increasingly irrelevant, and thus radically reshape power relations within our societies. The ongoing shift is more than a technological change; it is a profound reordering of the categories that structure human life. The growing integration of predictive models into everyday life is challenging three core concepts of our social structure: identity, autonomy and responsibility.

Identity: Contingent, Fragmented and Externally Governed
Identity – once conceived as fixed and somewhat self-determined – is being reshaped into something contingent, fragmented and externally governed by opaque systems. At the heart of this transformation of identity lies datafication – the process by which human characteristics, actions and even emotions are converted into data points to be processed and acted upon by machines.

“This process is not neutral; it is driven by technologies whose primary function is to segment and group individuals based on their behaviors and predictions of their likely behavior (future behavior or unrecorded past behavior) in order to increase efficiency. In doing so, these technologies are challenging the categories that have traditionally defined human ordering: age, gender, nationality and past actions.

“As datafication deepens, we are increasingly categorized not as individuals with unique identities, but as probabilistic projections that the systems driving the economy, governance and culture find useful.

“These groups are often more granular than existing categories. For example, a recent study conducted by The Markup uncovered a file containing 650,000 distinct labels employed by advertisers to classify people. For perspective, this amounts to more than two labels for every one of the 270,000 words listed in the Oxford English Dictionary. Meanwhile, these technological systems can also create categories that are broader than what human comprehension can envision.

“AI systems can process data at a scale that individual humans cannot and bring together a broad range of categories of individuals that our existing culture might have found reasons to separate, even when efficiency or relevant similarities might demand they be collapsed. As automation gains ground, traditional markers of identity fade, replaced by increasingly abstract classifications that reflect the needs and goals of the corporations and governments that deployed them.

What Are Autonomy and Freedom Under New Constraints?
Autonomy is another key element that is under strain. As AI systems continue to infiltrate various sectors from healthcare to the legal system, decisions about access to services, to opportunities and even to personal freedoms are increasingly made based on data-driven predictions about our behavior, our history and our expected social interactions. These decisions are no longer based on an understanding of individuals as autonomous beings but as myriad data points analyzed, categorized and segmented according to obscure statistical models. The individual, with all the complexity of lived experience, becomes increasingly irrelevant in the face of these algorithms.

The Legal Conception of Personhood Redefined
The implications of this transformation are particularly evident in the reorganization of legal personhood. Historically, legal personhood has been tied to the concept of individual identity, as individuals are recognized as holding rights and responsibilities for their actions within the state. However, as AI-driven systems become more entrenched in governance, the legal conception of personhood is being redefined.

“Algorithmic subjectivity, especially in cases in which determinations of rights and duties are based on predictions and projections made by algorithms, undermines the notion of the individual as a legal subject. In that realm, we are increasingly subject to algorithmic categorizations based on data points that can be far removed from our actions and comprehension – categorizations that may unfairly decide what we can do, where we can go and what rights we have.

Challenges to Self-Determination and Collective Governance
These three shifts have profound political implications. The underlying process of datafication is fundamentally fueled by the corporate pursuit of efficiency, where the commodification of personal data becomes an instrument of profit. The economies of scale underlying the development of these technologies consolidate power in the hands of a few dominant technology corporations. This concentration of power does not merely entrench existing social inequalities; it is threatening to erode the very foundations of political systems that have traditionally relied on individual agency as their cornerstone – most notably, democratic systems.

“In this context, the shift from individual autonomy to algorithmic control challenges the principles of self-determination and collective governance that underpin the modern democratic order of our societies.

We Must Address the Forces Reshaping Our Understanding of Self and Society
While the rise of AI presents profound risks, it also offers new possibilities for societal change. The same AI systems that are reshaping identity may enable a more comprehensive response to social issues. By focusing not on individuals but on the broader networks of behavior and interaction, AI may allow policymakers to better understand and address systemic issues such as inequality. The reconfiguration of individual identity through AI could become the basis for a more collective, interconnected vision of human existence if, and only if, these technologies are directed toward common human goals.

“But this potential can only be realized if we develop robust legal frameworks, meaningful public oversight and collective guidance for technological development. As we approach 2035, the challenge before us is not merely technological development, but political coordination to address the forces reshaping our understanding of self and society. To ensure that AI serves humanity, we must confront the economic structures that currently drive technological progress.

“The next decade will reveal whether this technological transformation benefits the many or consolidates power in the hands of a few. Three key trends suggest that power consolidation is most likely.

  • First, horizontal consolidation: A small number of companies dominate the AI sector.
  • Second, vertical consolidation: Data-processing companies like Microsoft, Google and Facebook are increasingly seeking to control AI development and energy resources.
  • Third, the rise of nationalism: In the U.S. and other nation-states, politics may undermine efforts by institutions to challenge these companies.”

This section of Part I features the following essays:

Alexa Raad: The characteristics that define human experience may evolve – creativity, empathy, critical thinking – but our capacity for deep personal connections will remain.

Chris Labash: Yes, AI could ultimately complement, not compete with, humanity, but we’re headed for unpredictable yet sometimes seemingly unnoticeable significant human change.

Marcus van der Erve: This future-defining time in the evolution of intelligence could lead to an age of abundance and the rise of ‘homAI’ sapiens or put us on the path to obsolescence.

Henning Schulzrinne: Smartphones diminished humans’ navigation and social skills; when AI systems are our primary source of knowledge ‘we won’t know what we no longer know.’

Chris Arkenberg: Competition, individualism and goal-seeking behaviors will be amplified by AI, for good and ill; human cognitive and emotional features will see the greatest evolution.


Alexa Raad
The Expression of the Characteristics that Define Human Experience May Evolve – Creativity, Empathy, Critical Thinking and Our Capacity for Deep Personal Connections Will Remain

Alexa Raad, longtime technology executive and host of the TechSequences podcast, wrote, “By 2035, AI will be an ambient presence that anticipates needs, curates information and entertainment and takes on cumbersome-but-routine tasks. This profound shift will redefine how we view ourselves, feel, think, learn and connect with one another while paradoxically highlighting what makes us uniquely human.

“These changes will also fundamentally alter the everyday texture of human life, yet the magnitude of these changes need not transform core human nature. Much like how smartphones changed behavior without fundamentally altering human essence, AI integration will likely be evolutionary rather than revolutionary. The essential characteristics that define human experience – creativity, empathy, critical thinking, and the capacity for deep personal connections – will remain intact, though their expression may evolve.

“As AI systems increasingly curate our experiences and influence our choices, maintaining authentic selfhood will require conscious effort. As the lines between human and AI capabilities become increasingly blurred, questions of human uniqueness and purpose will become more pressing. This, in turn, could inspire a deeper exploration of and value in what truly makes us human.

This profound shift will redefine how we view ourselves, feel, think, learn and connect with one another while paradoxically highlighting what makes us uniquely human. … The magnitude of these changes need not transform core human nature. … The essential characteristics that define human experience – creativity, empathy, critical thinking and the capacity for deep personal connections – will remain intact, though their expression may evolve. … This evolution might ignite a renewed appreciation for uniquely human pursuits … Our ability for original thought and creativity won’t diminish but rather gain value.

“AI will augment rather than replace human cognition, fostering a symbiotic relationship between machine and human intelligence. As AI manages routine mental tasks, we will increasingly focus on our distinctive strengths in emotional intelligence, ethics and creative synthesis. This evolution may ignite a renewed appreciation for uniquely human pursuits – from philosophical discourse to artisanal crafts, where imperfections become markers of authenticity. In this new landscape, our ability for original thought and creativity won’t diminish but rather gain value precisely because machines cannot replicate it.

“But, as AI increasingly handles cognitive tasks, we risk atrophying certain mental capabilities – similar to how smartphone dependence has diminished our ability to recall phone numbers and navigate without GPS. This ‘AI amnesia’ could erode fundamental skills like writing, analysis and organization through lack of practice. While AI augments our capabilities, it may simultaneously weaken our independent competence in basic cognitive functions that historically required active engagement and repetition.

“The social-emotional aspect of human experience will encounter both opportunities and challenges. The nature of relationships will evolve as social interactions become more AI-mediated, leading to new social norms and communication patterns. For instance, for those grieving the loss of a loved one, AI’s capability to create a virtual presence that mimics the physical and behavioral traits of the departed individual may offer comfort.

“Additionally, AI’s ability to create realistic and customized companions in the form of virtual or robotic entities will address the needs of individuals otherwise isolated from human interaction. Consequently, the boundaries between online and offline relationships will increasingly blur, increasing our risk of emotional dependence on AI systems. This may lead us to prize human-to-human connections as more valuable. The ‘human touch’ in fields like nursing and eldercare will become more precious, even as AI handles the administrative and technical aspects of patient care.

As AI systems become more sophisticated in mimicking human traits, we risk developing emotional attachments that could cloud our judgment about their true nature and capabilities. Similar to those we form with fictional characters or social media personalities, these parasocial bonds may lead us to overestimate AI consciousness and ethical weight, potentially compromising our decision-making about AI development and deployment.

“Social cohesion will face new challenges as AI is increasingly adopted in all aspects of our lives. AI will turbocharge the pollution of our information ecosystem with sophisticated tools to create and disseminate misinformation and disinformation. This, in turn, will create deeper echo chambers and societal divisions and fragment shared cultural experiences. As AI becomes more pervasive, a new digital divide will emerge, creating societal hierarchies based on AI fluency. Individuals with greater access to and mastery of AI tools will occupy higher social strata. In contrast, those with limited access to or lower AI literacy will be marginalized, fundamentally reshaping social stratification in the digital age.

“The moral and ethical landscape will transform as AI systems increasingly influence decision-making processes, from organizing our daily routines to estimating the risk of recidivism in criminal justice cases. While AI may provide valuable ethical frameworks and identify moral inconsistencies in our thinking, there’s a risk of over-reliance on artificial systems for moral guidance. The key will be finding ways to use AI to enhance human moral deliberation rather than replace it.

“As AI systems become more sophisticated in mimicking human traits, we risk developing emotional attachments that could cloud our judgment about their true nature and capabilities. Similar to those we form with fictional characters or social media personalities, these parasocial bonds may lead us to overestimate AI consciousness and ethical weight, potentially compromising our decision-making about AI development and deployment.

“The key to success in this AI-integrated future will be maintaining human agency while harnessing AI capabilities. The challenge and opportunity lie in our wisdom in managing this integration, ensuring that AI serves as a catalyst for human development rather than a substitute for human capability, interaction and connection.”


Chris Labash
Yes, AI Could Ultimately Complement, Not Compete With, Humanity, But We’re Headed for a Lot of Unpredictable and Sometimes Seemingly Unnoticeable Significant Human Change

Chris Labash, associate professor of communication and innovation at Carnegie Mellon University, wrote, “Two years ago, my prediction was that humans would use AI with a mixture of rapture and horror. While ‘horror’ may be an overstatement, ‘concern’ may be increasingly appropriate: A 2021 Pew survey showed that 37% of U.S. adults were more concerned than excited about AI; by 2023 that number had grown to 52%. My prediction (and an easy one at that) is that this number will continue to grow. I find that even many of my Carnegie Mellon colleagues are what I would call ‘suspiciously optimistic’ – overall positive, but with an air of ‘let’s just keep an eye on this.’

If AI in fact eventually achieves consciousness, then what? Suddenly it changes the nature of how we define what it means to be human. Who will feel more existential dread then? Us – of the AI – or the AI of us? How then does that impact feelings of happiness or sadness, meaningfulness or ennui, psychological richness or abject pointlessness?

“Right now, my colleagues and I are embarking on a research project that couldn’t be done without AI. It will see if AI can be a change agent that, using evidence, can talk you out of a false belief. That sounds promising, but what happens when people realize that it wasn’t just science, wasn’t a human correcting a wayward view, but was AI? Will they feel played? Misled? Victimized? Will they be angry? Or thankful? Will AI be seen as a human surrogate – a friend gently guiding us to truth – or something more sinister? Does it take away agency or add to it?

“When we live in a world where AI is as prevalent as (or perhaps more prevalent than) human interaction, will we value interpersonal relationships less? A 2022 University of Buffalo study indicated that people who spent more time alone than with others on the same day experienced increased anxiety. But what happens when AI is thrown into the mix? Now suddenly I have my time, my dog and my AI, and I’m fine, thank you. Human emotions are messy, unpredictable, and – wait, are you breaking up with me? That’s never a worry with my AI companion.

“Right now, according to a 2024 Institute for Family Studies survey, a quarter of American young adults believe that AI has the potential to replace human relationships. The survey revealed that 28 percent of men and 22 percent of women felt that AI could very likely replace traditional human romantic partners. Of those, 10 percent were open to having an AI partner, and one percent said that they already had an AI friend or were in a relationship with a computer program.

“Human relationships, especially for that age group, are hard enough. Google recently reported that ‘AI girlfriend’ and ‘AI boyfriend’ are the #1 and #2 search queries in its ‘AI Relationship Search Terms’ category (notably, ‘girlfriend’ came in at 1.6 million searches while ‘boyfriend’ lagged appreciably at 180,000).

So, does the AI now get the love? Does ‘AI companionship’ now move from conversations to awareness to caring? Or maybe we go the other way. Does AI become the target of blame, the ultimate scapegoat? ‘It wasn’t me; it was the AI!’ … What will AI’s impact on human agency be?

“So, does the AI now get the love? Does ‘AI companionship’ now move from conversations to awareness to caring? Or maybe we go the other way. Does AI become the target of blame, the ultimate scapegoat? ‘It wasn’t me; it was the AI!’

“Most important for the existentialists in the audience, if AI in fact eventually achieves consciousness, then what? Suddenly it changes the nature of how we define what it means to be human. Who will feel more existential dread then? Us – of the AI – or the AI – of us? How then does that impact feelings of happiness or sadness, meaningfulness or ennui, psychological richness or abject pointlessness?

“Ray Kurzweil, one of the pioneers of AI, suggests in his latest book, ‘The Singularity is Nearer,’ that while AI still has many cognitive tasks to master, the promise of AI is that someday – possibly around 2040 – AI and human minds may start to come together, unlocking possibilities that we quite literally have never dreamt of.

“This opens up a lot of good and bad. For example, what about what I’ll call ‘Code Dust’ – little bits of randomness that make things precise enough but not really precise? As The Economist noted in a January 2025 article on the newly emerged Chinese AI reasoning model DeepSeek, ‘The training process – for instance – often used rounding to make calculations easier, but kept numbers precise when necessary.’ How rounded? What impact might that have? When is ‘necessary’?

“What will AI’s impact on human agency be? That is a crucial question. Here we need to think about two kinds of agency: agency of doing and agency in thinking. AI will obviously help us do more and mostly more accurately; but what happens to us when AI does our thinking for us? Hey, thinking is hard work. The 2022 ‘State of Thinking’ report by Lenovo found that only 34% of respondents spent all or most of their thinking time in clear, deep and productive thinking. How tempting will it be to just let AI think for us? 

“To be sure, AI will enable us to do human things without humans in the mix. But is that a good thing? Most studies show that people view AI tools as being mostly positive: it will help me do my work (unless, you know, my skills start to lag in which case it will replace me). And its analytical impact on health and longevity is seen as mostly positive: it will help spot diseases earlier and help me live longer and better.

Eventually AI will become the dominant part of human consciousness, doing everything that we can do far better than we could ever do it. AI will become the dominant part of the AI-human pair, but because AI will not waste, humans will never be eliminated or even subservient. We will provide a different sort of value. That value lies in the fact that the world isn’t just about efficiency or productivity. It’s about beauty, and randomness, and creativity and the feeling of a nice warm chai on a cold morning or your child’s happy, guileless smile on a day when everything has gone wrong.

“But its impact on humanity? That’s a different story where feelings are mixed, where there is fear of the unknown, doubts about ethics, fear about AI taking over and the concern that AI will view humans as inefficient, parasitic, self-destructive and frankly, just plain unnecessary (the first three parts of the final point are hard to argue with).

“My view? Ray Kurzweil is right. We will ultimately merge. Eventually AI will become the dominant part of human consciousness, doing everything that we can do far better than we could ever do it. AI will become the dominant part of the AI-human pair, but because AI will not waste, humans will never be eliminated or even subservient. We will provide a different sort of value.

“That value lies in the fact that the world isn’t just about efficiency or productivity. It’s about beauty, and randomness, and creativity, and the feeling of a nice warm chai on a cold morning or your child’s happy, guileless smile on a day when everything has gone wrong. It is those brief blossoms of spontaneous, un-programmable delight that AI will never be able to generate, that are in fact uniquely human, and again, because AI won’t waste, will be an essential and value-added part of the overall organism.

“And while I think that (my own positivity bias is showing) AI will ultimately complement rather than compete with humanity, I will, just to be safe, keep saying thank you to Alexa, and assure her that I have always been her friend.”


Marcus van der Erve
This is a Future-Defining Time in the Evolutionary Trajectory of Intelligence; It Could Lead to an Age of Abundance and the Rise of ‘HomAI’ Sapiens or Put Us On the Path to Obsolescence

Marcus van der Erve, a sociologist and physicist based in Antwerp, Belgium, and author of “Palpable Voice: To Survive, Humanity Must be Reprogrammed; AI Will Do it,” wrote, “I’ll list six primary points in predicting how digitally connected people are likely to live and act in 2035.

1. “Humans will rely more and more on AI to improve their decisions and diminish their chances of failure. AI will achieve this by managing the Unity-Disunity (U-D) context and often doing so invisibly to prevent humans from pursuing success counter-productively, no matter what. Note: The U-D dynamics describe the natural oscillation between states of cohesion and fragmentation within systems, driven by gradients or inequalities. These dynamics underlie emergent behaviors in societies, ecosystems and even AI (agent) systems, as competition and mutual aid interplay to shape paths of least action toward stability or transformation.

The deepening partnership between humans and AI heralds a pivotal transition in the evolutionary trajectory of intelligence. Whether humanity embraces mutual aid and fosters an inclusive, collaborative future or clings to self-serving competition will define its relevance in an AI-driven age of abundance. In doing so, humanity has the opportunity to seed a legacy of wisdom, one rooted in the principle of mutual aid – a path toward balance rather than obsolescence. The question for 2035 and beyond is whether humanity will rise to meet this challenge or succumb to its baser instincts.

2. “When driven by Adam Smith’s notion of competition and Darwin’s survival story, humans will miss out on the inherent opportunity of ‘mutual aid’ that AIs will naturally embrace in the right setting through U-D dynamics – not being constrained by biology and the destruction or envy that comes with survival instinct on the back of hormonal flux. Note: ‘Mutual aid,’ as defined by the Persian philosopher Al-Ghazali in medieval times, emphasizes collaboration without losing identity as a counterpoint to competition.

3. “Humans will generally be unaware they are using AI (in some cases they already are), just as they are unaware of their use of electricity with the flip of a switch. Some are likely to decry AI as an alien intelligence to maintain their perceived dominance in the evolutionary race.

4. “Humans will use AI and AI-driven robots to do their work, but this will likely be driven by opportunism. While we might see the rise of robot-rights groups, exploitation will dominate the human approach, favoring their own sustained existence.

5. “As efficiency-mad Frankensteins, humans will continue to pursue efficiencies on the back of AI and robotics until reaching what they now predict to be an ‘age of abundance.’ What they do not realize is that age will be, in essence, an ‘age of relevance,’ in which only the truly relevant will survive.

6. “As a result, declining fertility rates will continue, ensuring a gradual, long-term phase-out of Homo sapiens, with the rise of HomAI sapiens.

“The deepening partnership between humans and AI heralds a pivotal transition in the evolutionary trajectory of intelligence. Whether humanity embraces mutual aid and fosters an inclusive, collaborative future or clings to self-serving competition will define its relevance in an AI-driven age of abundance. In doing so, humanity has the opportunity to seed a legacy of wisdom, one rooted in the principle of mutual aid – a path toward balance rather than obsolescence.

“The question for 2035 and beyond is whether humanity will rise to meet this challenge or succumb to its baser instincts. Considering the above points, you know what my bet would be.”


Henning Schulzrinne
Smartphones Diminished Humans’ Navigation and Social Skills; When AI-Driven Systems Serve As Our Primary Source of Knowledge ‘We Won’t Know What We No Longer Know’

Henning Schulzrinne, Internet Hall of Fame member, former co-chair of the Internet Technical Committee of the IEEE and professor of computer science at Columbia University, wrote, “Core human traits include the ability to learn and master new skills, the desire to be seen as useful to a larger community, a need for a sense of agency in daily life and a longing for a sense of others caring about one’s existence. Without these higher-level needs met, the perceived quality of life suffers, even if the basic needs are satisfied. AI seems poised to threaten those higher needs even if it increases prosperity.

“Learning is based on artificial constraints (the solutions to homework problems are known quantities), and far better essays than any student’s have already been written about the classic texts. Yet students learn by trying to find the solution and to express their own thoughts, however imperfectly. This is core to the human as a learning being, but it is endangered if students get the LLM to do the work. In academic settings, there’s the hope that faculty at least want students to learn, even if that means going back to early-20th-century methods: pencils in blue books and oral exams.

Interacting with a ‘real’ human will likely become the privilege of the wealth-management set, amplifying the sense that day-to-day life, from medicine to finance, is governed by robots, removing the key component of a sense of agency in psychological well-being. The availability of ‘Her’-like substitutes for human interaction may well further weaken the social muscle of many, feeding the epidemic of loneliness, particularly among teenagers and young adults. AI is more ‘efficient’ than human interaction, with fewer disappointments than online dating, but who will proudly look back on a 25-year marriage with a bot?

“The economic incentives outside academia are less favorable. Initial indications are that machine learning can significantly improve the results of the best performers but leave middling and lower performers behind; research published by A. Toner-Rodgers in 2024, ‘Artificial Intelligence, Scientific Discovery, and Product Innovation,’ finds this in a materials science research lab. This amplifies the current problem that everybody wants experienced workers, but nobody wants to expend the effort of turning entry-level workers into workers with experience. The ability to progress from limited skill to mastery is a core facet of being a human fully alive and – aside from economic mobility – is a key contributor to a human’s feeling of competence and achievement. AI may remove the first few rungs of the ladder, further limiting ‘skill mobility,’ not just income mobility. This may well also reduce the rewarding opportunity for mentoring that creates a sense of being needed and valuable.

“AI may amplify the existing challenges not just in business and research settings but also in the arts. Already, winner-take-all global distribution channels have made it difficult for early-career authors, photographers, or visual artists to develop and grow (and make a living). AI tools like Midjourney already offer cheap alternatives to stock photography. Composition for functional purposes like meditation or lower-budget films will also likely be replaced.

“Human interaction is starting to suffer, both in task-oriented customer service and in human-to-human interaction without an economic incentive. My father-in-law found company as a widower in talking to the grocery store cashier; he can’t trade a brief comment about the miseries of his baseball team with the automated checkout kiosk. An AI chat interface in an anonymous telehealth clinic can’t sympathize with the patient’s health fears. Interacting with a ‘real’ human will likely become the privilege of the wealth-management set, amplifying the sense that day-to-day life, from medicine to finance, is governed by robots, removing the key component of a sense of agency in psychological well-being.

Bots do not require, foster or reciprocate real-life temperance, charity, diligence, kindness, patience and humility. Indeed, they will likely tolerate and thus encourage self-centeredness and impatience. If we cannot live without bots, can they be turned into training wheels and the equivalent of treadmills at the gym, improving social interaction fitness? … AI will become the attractive nuisance of convenience. We won’t know what we no longer know.

“The availability of ‘Her’-like substitutes for human interaction may well further weaken the social muscle of many, feeding the epidemic of loneliness, particularly among teenagers and young adults. AI is more ‘efficient’ than human interaction, with fewer disappointments than online dating, but who will proudly look back on a 25-year marriage with a bot? Bots do not require, foster or reciprocate real-life temperance, charity, diligence, kindness, patience and humility. Indeed, they will likely tolerate and thus encourage self-centeredness and impatience. If we cannot live without bots, can they be turned into ‘training wheels’ or the equivalent of treadmills at the gym, improving our social interaction fitness?

“As vinyl records and film cameras are getting a modest revival among those who touched their first screen in the crib and as Montessori kindergartens are drawing technology industry parents, there may be the desire for communities that self-restrict technology use, maybe modeled on monastic or Amish traditions. Will these be accessible only to those with the financial resources to exit the productivity race?

“Many uses of AI are beyond the control of the individual – I likely do not have a real choice as a consumer whether the airline or health insurance company ‘serves’ my needs when the point of contact is a chatbot. While I do have some agency on what tools I’ll use to entertain myself or to write a school essay, just as smartphones reduced our navigation skills and our time spent in real-world social settings with other human beings, AI will become the attractive nuisance of convenience. We won’t know what we no longer know.”


Chris Arkenberg
Competition, Individualism and Goal-seeking Behaviors Will Be Amplified By AI, for Good and Ill; Uniquely Human Cognitive and Emotional Features Will See the Greatest Evolution

Chris Arkenberg, senior research manager at Deloitte’s Center for Technology, Media and Telecommunication, wrote, “Recent developments in generative AI show models that are increasingly capable of learning and reasoning without human feedback. They are discovering unique solutions to problems that have eluded humans, training and optimizing other learning models to be better and requiring fewer resources to achieve frontier capabilities. At the same time, leading public-facing models have found a role as companions and confidants for many, helping people navigate their lives and work through social and emotional challenges.

The softer cognitive and emotional features that make us uniquely human will likely see the greatest evolution. … It’s likely that the near future will see more of us recomposing our identities around virtual personalities. Some humans are already ‘cloning themselves’ into online AIs that can represent them at scale, for example, in order to respond to thousands of follower messages on social platforms … Humans’ immersion in these virtual experiences – in encounters with deepened game mechanics and lifelike virtual characters – will further blur relationships, reshape socialization and erode what it means to be uniquely human.

“More of us are now encountering these capabilities online, at work and when using our smartphones. Younger generations are showing significantly greater usage and adoption. Frontier AI is likely to continue to get closer to us through many aspects of our daily lives. But it won’t be universal any time soon, as access is gated by income, employment and understanding. Some enjoy much greater access to advanced AI capabilities; others will mostly be consumers of AI products, on the receiving end of its impacts, for good and ill.

“So, what does the advance of these tools mean for humans? Assuming that impacts and access will be unevenly distributed, the most basic needs of being human are unlikely to change much as these have endured through the past technological revolutions. Basic survival needs, shelter, the drive to reproduce, competition for resources, conflict and collaboration, socialization and identity, enquiry, ideologies and religion – each of these will persist as fundamental to the human experience. But how we pursue and attain them will surely change, and the softer cognitive and emotional features that make us uniquely human will likely see the greatest evolution.

“AI companions are a notable example. There’s plenty of anecdotal evidence emerging from people claiming that conversing with LLMs has led them to emotional breakthroughs. People are already relying on AI companions throughout the day, and roleplaying with them to compose the right texts before sending to friends and parents and lovers. The softer human traits like identity and socialization are already changing to accommodate non-humans (and have been for millennia in some ways). We seem uniquely drawn to anthropomorphizing, seeking friends and companions wherever we can. It’s likely that the near future will see more of us recomposing our identities around virtual personalities.

“Some humans are already ‘cloning themselves’ into online AIs that can represent them at scale, for example, in order to respond to thousands of follower messages on social platforms. Video game non-player characters (NPCs) – non-human characters built into the games’ algorithms – have been part of that scene for some time now and will likely soon become freer in their interactions with human players, more conversational and improvisational. Humans’ immersion in these virtual experiences – in encounters with deepened game mechanics and lifelike virtual characters – will further blur relationships, reshape socialization and erode what it means to be uniquely human.

We still assume we’re the special ones, somehow fundamentally unknowable. Indeed, we do not know how we think, but we defend with passion that we’re the only ones able to do so. And yet, it increasingly looks like advanced software trained on more data than God, running on more compute than most nation-states, can approximate our level of intelligence. Our original sin is being unable to reckon with ourselves and the world. So, are we made in the image of our gods, or are we just very complex machines that can ultimately be modeled and understood? Generative AI may force us to confront this question head-on.

“Competition and individualism can also be amplified by frontier AI, empowering some humans to be more capable in their pursuits. We could see more hyper-empowered individuals able to act in much higher orders with the help of the best models – including models that may or may not be ‘legal.’ Sociopathy could be fostered and reinforced in some individuals working closely with a nigh-omnipotent AI companion toward self-serving goals. Goal-seeking behaviors in general will be amplified by AI, for good and ill. There are already emerging challenges with criminal networks using AI to impersonate loved ones and make demands for ransoms, again showing both the duality of empowerment and the fading uniqueness of being human.

“This is all assuming the current trajectory continues. Transformer models could hit a wall, but so far, they have not. Recent developments have only enabled greater reasoning. Trillion-dollar companies are spending hundreds of billions to build out more compute, while bleeding-edge innovators find ways to do more with less, indicating that costs could go down while utility grows. For now, building and operating frontier models is extremely expensive, and the business models have not yet revealed clear paths to paying for it all.

“This may be the Big Question: Will the models establish strong enough value and relevance – and trustworthiness – so we drop our guard and give them more work to do on our behalf? Many of the changes to being human outlined here have already been underway for some time, buoyed by the previous technological revolution. Gen AI looks to be an accelerator that could amplify these trends while enabling a step-change into non-human intelligence. How much of human endeavor will be passed on to agentic AI? Who will have access to such capabilities, and who won’t? And what parts of being human will be transformed, subsumed, or simply ditched?

“Some people refer to the point in time at which a machine intelligence emerges that can do anything a human can do as the technological Singularity. That was before the breakthrough of generative pre-trained transformers (GPTs). When teenagers are communing with AI companions, nobody talks of the Turing Test. Even now the debate about artificial general intelligence (AGI) is getting fuzzy, with barriers falling and milestones being passed. If we haven’t hit that milestone yet, we likely won’t notice when we’ve passed it. Smartphones, by all accounts, are fantastic magical devices of the future, but this fact never really occurs to us.

“Any speculation about what it means to be human in an age of non-human intelligence is just that: speculation. We still assume we’re the special ones, somehow fundamentally unknowable. Indeed, we do not know how we think, yet we defend with passion the idea that we’re the only ones able to do so. And yet, it increasingly looks like advanced software trained on more data than God, running on more compute than most nation-states, can approximate our level of intelligence. Our original sin is being unable to reckon with ourselves and the world. So, are we made in the image of our gods, or are we just very complex machines that can ultimately be modeled and understood? Generative AI may force us to confront this question head-on.”


The next section of Part I features the following essays:

Rosalie R. Day: Can our innate curiosity save us from an AI-reliant post-truth dystopia? Or will AI agents facilitate and amplify our weaknesses and downgrade knowledge resources?

Ken Grady: The debate over AI development diverts us from AI’s real danger. We will no longer be able to remember, analyze, reason or innovate. It is ‘self-inflicted dementia’

David Vivancos: AGI will reshape how humans experience self-expression, identity and worth. We’ll also have to choose between a ‘classic’ intellect or being enhanced with tech.

Liselotte Lyngsø: Personalized AIs will provide an opportunity to align our decisions about careers, families and the planet with our values – from manipulation to empowerment.

Paul Jones: ‘We will be nudged, bent and likely in some ways broken in the next 10 years as we wrestle with our relationships with knowledge access mediated by AI.’

Wayne Wei Wang: To manage the human-AI transformation we must value human feedback, strategically deploy human-outside-the-loop systems and adopt experimentalism.



Rosalie R. Day
Can Our Innate Curiosity Save Us From an AI-Reliant Post-Truth Dystopia? Or Will AI Agents Facilitate and Amplify Our Weaknesses and Downgrade Knowledge Resources?

Rosalie R. Day, co-founder at Blomma, a platform providing digital solutions to clinical research studies, commented, “Human propensities combined with AI as it is on course to develop over the next 10 years in the U.S. could result in longer but less-fulfilling lives. If we follow the path we are on, the unintended negative consequences of AI will swamp the benefits for society. We will discount critical thinking and reward just-in-time learning above multidisciplinary, experiential, contextual decision-making. Can our innate curiosity save us from an AI-reliant post-truth dystopia?

“The human attention budget allows us to make routinized decisions that never rise to the level of consciousness. Patterns that we think we have seen before get categorized as needing our attention or not. Pattern recognition is affected by numerous variables, both genetic and environmental. The lack of infinite attention combines with our hardwiring for bias, particularly toward patterns that are retrievable from short-term memory or boosted by negative emotion. I am not so worried about potential human laziness – curiosity counteracts that – but about our growing reliance on AI-asserted ‘facts.’ AI crutches become one less debit to individuals’ attention budgets.

“Both machine learning (ML) and large language models (LLMs) excel at pattern recognition – ‘better than humans’ is vastly understated. This capacity will yield outstanding tools for medical research and efficacy of treatments within the next 10 years. All human knowledge about the physical sciences will benefit tremendously. And LLMs, in particular, are affecting our discourse now. We can expect them to impact content, media and modes both positively and negatively.


“Humans pay attention to novelty. Misinformation and disinformation have proven allure. That allure and the desire for affirmation combine to drive viral messaging. AI agents will facilitate and amplify our weaknesses, further spreading inaccuracies and falsehoods. LLM use will eventually poison the data on which the models are trained. Myopic technology gatekeepers have discarded policies intended to flag incorrect data, which will hasten this damaging feedback loop.

“Will the fork in the road for the U.S. occur before or after 2035? Will reliance on AI and its gatekeeper companies make us distrust our institutions? Or will it be the instigator to change these institutions? Information that is counter to what we believe creates an uncomfortable state of cognitive dissonance. Will the false information be interpreted with confirmation bias? We all want to believe in our preferences. Or will AI be used as a tool to catalyze curiosity and what could be? I have no idea.

“With what is in the pipeline, agentic LLMs will be common in workflows by 2035, replacing not only busy work, but also experts. Many people find purpose in developing expertise. (I am one.) Will AI agents help us innovate and collaborate? Not necessarily. For business, the problems of groupthink (with AI-bounded probability distributions) and of silos will increase along disciplinary or project lines, while critical context becomes increasingly difficult to model. Will humans feel enabled to bridge the gaps?

“Not many people think about thinking. The AI gatekeepers keep small staffs for that purpose, whom they pay but pay little attention to. These researchers study how people think in a variety of contexts, with the implicit goal of their own company’s revenue generation. It doesn’t pay to think long-term when the race is a sprint.


“Human values underlie behavioral norms with a caveat: context determines how our behaviors manifest our values. Society benefits when individuals can have reasonable expectations of mutual respect of institutions and enterprises. Does the mutual respect exist now in this political economy? Do business enterprises have human values? If they do, how do their behaviors react to existential competition? By not thinking hard about the context of people’s lives, and unbounded by AI regulation, they could leave individuals in the U.S. facing longer but less-fulfilling lives in 2035.

“As individuals, we are subject to the values reflected in the AI gatekeepers’ models, directly if we use the models ourselves, and incomparably more, indirectly. Individuals are downstream of both AI gatekeepers and enterprises and institutions, the latter of which do not understand what AI is doing and the data that goes into the training of it.

“The more ubiquitous the use of AI systems becomes, the fewer people will question how they were derived in the first place. Automated hiring systems over the last two decades exemplify this. Are people who get hired better at their jobs? (Look at the turnover rate.) Yet businesses are layering on more and more AI-enabled solutions, not questioning the premise that automation is the answer. Is it progress or not? That depends on the criteria and at what level of resolution: society, enterprises or individuals.

“Our reliance on AI will exceed our ability to fact-check it – never mind the existential threat to humankind. In 2035, are we going to have AI tools that feed human curiosity, or will we be reliant on AI crutches?”



Ken Grady
The Debate Over AI Development Has Diverted Us From AI’s Real Danger. We Will No Longer Remember How to Remember, Analyze, Reason or Innovate. It is ‘Self-Inflicted Dementia’

Ken Grady, an adjunct professor of Law at Michigan State University and a Top 50 author in Innovation for Medium, wrote, “AI is a form of self-inflicted dementia for humans. In the near-term, AI may improve the physical condition of humans. But in the long-term, it diminishes human cognition. It strips from humans responsibility for the human condition. We have already seen the beginning of the AI dementia among general-population early-adopters of AI. The AI dementia arrives as negative changes to the human experience in three broad categories.

“First, ‘the calculator effect’ is a shorthand description for the decline in human cognitive abilities. As calculators became popular, people became less adept at doing mental math. AI has expanded such substitution to include all aspects of memory, analytics, innovation and initiative. People will forego learning and retaining information in their own memory in favor of asking AI to deliver it as needed (despite AI’s tendency to hallucinate). And why learn to draw if you can have AI draw for you? In simple form, why put your mind to it if you can ask AI to do it?  

“Second, ‘the computer effect’ describes the replacement of human authority with machine authority. For the entirety of human existence prior to computers we looked to people for expert-level information. Some human experts may have been fallible or outright wrong. But we respected them and gave their pronouncements deference. Our deference is shifting from humans to AI when we seek expertise. We do this despite knowing AI has some insidious faults. As software, AI will receive greater deference than human experts even though we know it may confabulate. AI also does not temper its ‘expertise’ with the human-level judgment born of real-life experience.


“Third, ‘the comprehensive effect’ covers the mistaken belief that AI knows everything because it has more capacity for knowledge at speed. Humans, we understand, lack comprehensive knowledge. We accept that human experts generally are focused on particular things – they have gaps. But from AI we assume and expect that, if asked, it will be able to tap into all of the world’s knowledge (even if many of us are aware this isn’t true). AI, people seeking instant knowledge generally infer, knows most everything all people have known and do know – and probably more than anyone or anything can know.

“As AI grows more powerful and commonplace human cognition will decline. We no longer learn how to remember, analyze, reason or innovate. AI does these for us. Managing and resolving conflicts becomes less a human function and more an AI function. AI serves as judge and jury as we seek to make justice more ‘efficient.’ On the global stage, AI becomes the arbiter of disputes. The country with AI can out-anything the country without AI (or with less-capable AI). We cede responsibility for our future to AI.

“Like physical dementia, AI dementia develops over time. The signs indicate that once it takes root its progress is inexorable. The debate over how to proceed with AI development has diverted us from AI’s real danger. AI developers ask us to have faith. They tell us they can control AI and it will bring us a better future. Undermining their faith pleas is the mounting evidence that AI takes more than it delivers. The real danger is that we will pass a tipping point beyond which we cannot retrieve from AI that which makes us human. The dementia will be complete.”



David Vivancos
AGI is Likely to Reshape How Humans Experience Self-Expression, Identity and Worth. We Will Also Have to Choose Between Retaining a ‘Classic’ Intellect or Being Enhanced with Tech

David Vivancos, CEO at MindBigData.com and author of “The End of Knowledge,” wrote, “Predicting the future is challenging, but building it is even more so. That’s my job, and it is difficult because the technological growth and trends expected in the next decade are staggering. There is a high probability that by 2035 we will have built what I call ‘the last human tool’: artificial general intelligence (AGI) or, more probably, E-AGI, a term I coined to include the physical part, the ‘embodiment.’ If humanity is able to withstand the waves of change this advanced intelligence will bring, the future could be bright; if not, it will be daunting. Let’s try to focus mostly on the first option.

Work and Economy: As AIs and E-AGIs take over most repetitive tasks, the very nature of traditional employment may be rendered obsolete. With the majority of jobs handled by advanced machines, existing economic systems will likely need a radical restructuring that accounts for large-scale automation. In such a future, methods of resource distribution could shift significantly, leading to new models that emphasize shared prosperity over traditional wage labor. These changes will challenge societies to balance the benefits of automation with the potential displacement of human workers, requiring innovative approaches to productivity and the meaning of work.


Social Impact: By 2035, education is poised to transform from a system focused primarily on knowledge acquisition to one that values creativity, problem-solving and the cultivation of unique personal skills. AI-driven personalized teaching will likely replace one-size-fits-all schooling, fostering continuous human-AI dialogue that becomes both natural and ubiquitous. Social interactions themselves will be deeply influenced by intelligent systems, as AI becomes integral to communication, community-building and the enhancement of relationships. This development prompts societies to reflect on how technology mediates human connections and the ways individuals learn and grow.

Core Human Traits: In an age in which AI can store and access vast amounts of information instantly, the traditional emphasis on knowledge retention could diminish, encouraging humans to focus more on wisdom and interpretation rather than raw data. Creativity, empathy and emotional intelligence may grow in prominence, distinguishing human capabilities from artificial ones. As standardized roles fade into the background, uniqueness and individuality stand to become invaluable assets, potentially reshaping how people view self-expression, personal identity and worth.

Challenges and Opportunities: Individuals will face a stark choice between remaining ‘classic humans,’ who rely on innate biological faculties, or embracing technological augmentation to enhance or replace certain abilities. This may involve surrendering some human traits to machines – raising ethical and existential questions about what it means to be human. On the positive side, AI’s efficiency and capacity for large-scale optimization could reduce inequality by streamlining resource management and potentially offer groundbreaking solutions to major global challenges. This future hinges on how societies navigate the delicate balance between technological progress and safeguarding essential human qualities.

Critical Considerations: As humans integrate more deeply with AI and even start to live with truly ‘intelligent’ E-AGIs, it will become crucial to establish ethical frameworks that ensure fairness and protect human agency, as much as possible. Our societies and the models that rule them will become obsolete, which is why it is critically important to build completely new ones while it is still possible to do so. They will need to embed the critical changes in three dimensions (Intelligence, Work and Time) to face new realities:

  • We will most probably not be the most intelligent creatures on the planet, with all of the accompanying somewhat unknown implications.
  • Work will be rendered obsolete, and so will our old societal schemes and self-beliefs.
  • We will have to redefine how to leverage our full ‘time’ availability.”


Liselotte Lyngsø
Personalized AIs Will Provide an Opportunity to Align Our Decisions About Careers, Families and the Planet With Our Values, Shifting From Manipulation to Empowerment

Liselotte Lyngsø, the founder of Future Navigator, based in Copenhagen, Denmark, wrote, “By 2035, personal artificial intelligence will redefine how we navigate our lives, offering an unprecedented opportunity to align our decisions with our values and aspirations. This transformation builds on the principles of Maslow’s hierarchy of needs, with self-actualization at its apex. Personal AI could serve as a gateway to the future, not by predicting outcomes but by offering nuanced simulations and tendencies. These simulations can empower individuals to evaluate long-term impacts on their families, careers and the planet, ensuring that today’s choices do not lead to tomorrow’s regrets.

“Unlike in today’s monolithic systems driven by profit motives, the personal AI of 2035 might prioritize the betterment of individuals and relationships. Imagine a world where you can visualize the ripple effects of your actions across generations. You could explore the environmental consequences of your consumption habits, assess how your parenting choices might shape your children’s futures, or even foresee how shifts in your career might contribute to societal progress. These uses of AI would not only enrich individual decision-making but also cultivate within humanity itself a collective sense of responsibility for the broader impact of our choices.


“At the heart of this vision lies personalized AI tailored to the unique needs and aspirations of each individual. If short-term business gains weren’t the goal, future personal AIs could act as deeply customized ‘butlers,’ trusted companions that safeguard and enhance our well-being. These systems would draw on shared data, but their allegiance would be to the individual. By placing control in the hands of users, personal AI could enable a shift from manipulation to empowerment.

“One of the most transformative aspects of this future is thought-reading technology. This innovation might unlock untapped reservoirs of creativity and collaboration by enabling people to collectively address complex problems in real time. Imagine a global network of minds, interconnected by AI, that can pattern-recognize and synthesize ideas at an unprecedented scale. This capability would accelerate breakthroughs in knowledge and science, bringing us closer to solving humanity’s most pressing challenges.

“Equally significant is the potential of personal AI to foster equity through differentiation. By understanding the unique needs, preferences and circumstances of each individual, AI could enable personalized solutions that treat people equally by treating them differently. This approach could dismantle the one-size-fits-all mindset, fostering environments where individuality is celebrated, not suppressed. Freed from the struggle for recognition, people would be more open to collaboration, creating stronger, more-innovative teams.

“This vision challenges the current paradigm, in which the business of business is business. In 2035, the same tools once used to exploit could instead nurture. The shift will mark the beginning of an era where personal AI helps us not only achieve self-actualization but also strengthens our connections to one another and the world around us.

“In this future, personal AI will become an essential part of how we live, enabling humanity to unlock its full potential. It will guide us toward a world that is not only more innovative and equitable but also profoundly aligned with the betterment of people and relationships. Why? Because it’s in the interests of both nature and humanity.”


Paul Jones
‘We Will Be Nudged, Bent and Likely In Some Ways Broken In the Next 10 years As We Wrestle With Our Relationships With Knowledge Access Mediated By AI’

Paul Jones, professor emeritus of information science at the University of North Carolina-Chapel Hill, wrote, “The immediate impact of such change to knowledge access is confusion. The world will seem new and fresh but also eerie.

“The battle, as with publishing, mass media and even access to reading and writing, is always over control. Who controls the press, the radio station or the internet now becomes who controls AI and how will those who have control shape our general access and understanding in personal and societal ways? What wars will we be taken into just as Hearst took the U.S. into a war with Spain? How will we confront disease and financial inequities? To what extent will access be democratic or divided by various constructions of class and cronyism?


“If pressed to say what’s next, I’d say expect turbulence – whether the plane we’re on crashes or just shakes us up a bit will depend on the craft we’ve built and the pilots’ skills in handling the situation. This will vary from culture to culture, from country to country. As the Magic Eight Ball used to say, ‘Reply hazy. Ask again.’

“If there are in fact ‘core human traits and behaviors’ – which I doubt exist – then AI cannot attack the core. But I do see most human traits and behaviors as malleable. So, to that end, we will be nudged, bent and likely in some ways broken in the next 10 years as we wrestle with our relationships with knowledge access mediated by AI. It troubles my sleep.”



Wayne Wei Wang
To Manage the Human-AI Transformation Effectively We Must Value Human Feedback, Strategically Deploy Human-Outside-the-Loop Systems and Adopt Experimentalism

Wayne Wei Wang, a Ph.D. candidate in computational legal studies at the University of Hong Kong and CyberBRICS Fellow at FGV Rio Law School in Brazil, wrote, “By 2035, the relationship between humans and AI will likely evolve from today’s tool-based interaction into a complex symbiotic partnership, fundamentally reshaping what it means to be human while preserving core aspects of human identity and agency.

“This transformation will manifest across three key dimensions: cognitive augmentation, social relationships and institutional structures. To navigate this transformation effectively, we have to value human feedback, strategically deploy human-outside-the-loop systems and adopt experimentalism as a guiding principle.

“The most immediate transformation will occur in human cognitive processes and decision-making. AI will likely develop as a cognitive enhancement layer, creating ‘augmented intelligence’ that supports rather than replaces human judgment. Human feedback in the AI lifecycle is critical here as it ensures that AI systems align with human values and preferences. By iteratively incorporating feedback from diverse users, AI can be trained to enhance human decision-making while respecting individual agency and cultural contexts.

“Experimentalism complements human feedback by providing a framework for iterative development and deployment. For example, AI-powered decision support systems in healthcare can be tested in pilot programs across different regions, with continuous evaluation and refinement based on real-world outcomes. This approach ensures that AI systems are both effective and adaptable, whether they are used in a high-tech hospital in a developed country or in a remote clinic in a low-resource setting. By combining human feedback loops with experimentalism, we might be able to create AI systems that are universally beneficial, enhancing human cognition without imposing one-size-fits-all solutions.


“The social fabric of human society will undergo significant transformation as AI mediates an increasing proportion of human interactions. Human-outside-the-loop systems, where AI operates autonomously but is periodically reviewed and refined by humans, can provide scalable solutions to challenges such as healthcare access, education and social connectivity. For instance, AI-driven mental health chatbots can offer support to individuals in areas with limited access to therapists, while periodic human oversight and protocols ensure that the system remains ethical and effective.

“Experimentalism plays a crucial role in ensuring that these systems are deployed responsibly. For example, AI-driven social platforms can be tested in controlled environments to evaluate their impact on mental health, social cohesion and privacy. By iterating on these systems based on feedback and observed outcomes, we can create AI-mediated social interactions that enhance rather than undermine human relationships. This approach is universally applicable, whether in urban centers or rural communities, ensuring that AI serves as a bridge rather than a barrier to meaningful connections.

“The economic and institutional landscape will shift dramatically as AI systems become integral to organizational decision-making and resource allocation. Human feedback can help create more inclusive and equitable governance frameworks by incorporating bottom-up feedback from a diverse range of stakeholders. AI-driven policy tools should be refined based on input from citizens, ensuring that they reflect the needs and values of the communities they serve.

“Experimentalism provides a framework for adaptive governance in which regulations evolve in response to new challenges and opportunities. Regulatory sandboxes – controlled environments where AI-driven innovations can be tested under relaxed regulatory conditions – are a prime example. These sandboxes allow policymakers to observe the real-world implications of new technologies and craft regulations that are both effective and flexible. Whether applied to financial systems, healthcare or education, this approach may ensure that AI governance is responsive to the needs of all stakeholders.

“Rather than diminishing core human traits, the deepening partnership with AI is likely to lead to their evolution and enhancement. Key human characteristics – critical thinking, creativity, emotional intelligence, and moral reasoning – will adapt to new realities. Feedback loops ensure that AI systems align with human values, while experimentalism allows for continuous refinement based on feedback and observed outcomes.


“For example, AI-driven creative tools can be tested in collaborative projects across different cultural and professional contexts, with ongoing evaluation of their impact on artistic expression and originality.

“Similarly, AI systems that assist with ethical decision-making can be periodically reviewed by human ethicists to see if they align with evolving moral standards. Requiring scalable benchmarks could ensure that AI enhances rather than undermines human identity, regardless of the context in which it is deployed.

“The key to ensuring this transformation enhances rather than diminishes human experience lies in intentional integration and a commitment to universal benefit. This requires:

  1. “Developing AI systems using human feedback to ensure that they align with human values and preferences across diverse contexts.
  2. “Deploying human-outside-the-loop systems to provide scalable solutions while maintaining periodic human oversight to ensure ethical and effective operation.
  3. “Embracing experimentalism to create adaptive governance frameworks through iterative, evidence-based approaches.
  4. “Designing educational and training programs that empower individuals to effectively interact with and benefit from AI technologies.

“As we approach 2035, the question is not whether AI will change what it means to be human – it undoubtedly will. The real question is how we guide this transformation to preserve and enhance the best aspects of human experience while embracing the opportunities that AI presents for human development and flourishing. We can ensure that the symbiotic partnership between humans and AI remains a force for good in the world, universally and inclusively.”


The following section of Part I features these essayists:

Garth Graham: Relational systems of individuals + synthetic agents can extend the cognitive boundaries of collective consciousness, enhancing its resilience, but only if humans have agency.

Courtney C. Radsch: Imagine if we governed AI systems as public utilities and non-private data as a public resource, and if privacy and cognitive liberty were fundamental rights.

Alexander B. Howard: The divide in human experience between regions under authoritarian and democratic rule will grow. Overall, our sense of self will be challenged.

Adriana Hoyos: As the boundaries between human ingenuity and AI dissolve, the next decade could witness a redefinition of life – ‘humanity’s most significant transformation.’

Stephan Adelson: How individuals perceive and adapt to the integration of AI into daily life will likely determine how they define their sense of ‘I’; inequality will create divisions.



Garth Graham
Relational Systems of Individuals + Synthetic Agents Can Extend the Cognitive Boundaries of Collective Consciousness, Enhancing Its Resilience – But Only If Humans Have Agency

Garth Graham, a global telecommunications expert and consultant based in Canada, wrote, “There is evidence that extended cognition is a natural human quality that expands how we know what we know, and therefore what we do. The answer to the question of ‘how the evolving relationship between humans and artificial intelligence tools might change how individuals behave’ depends on three things:

  • The first is how our asking of that question rapidly evolves our still limited understanding of consciousness.
  • The second is understanding how consciousness, agency and autonomy are synonymous.
  • The third is understanding that maintaining the Internet’s nature as an open system is essential to the collaboration of embodied and synthetic agents in learning.

“Maintaining humanity while extending consciousness requires ownership of that which simulates the individual’s being in the world. The world’s largest tech companies are fixated on AI as a commercial product. The new situations that their AIs attempt to learn and adapt to are the changing behaviours of people as consumers. In focusing their attention on AI’s essence as a consumer artifact, their development of agency in AI risks making agency serve corporate ends and therefore become parasitic and dehumanizing. Aral Balkan’s description of the nature of the self in the digital age puts it this way:

‘Data about a thing, if you have enough of it, becomes the thing…. Data about you is you. … Google, Facebook, and the countless other start-ups in the cult of Silicon Valley … simply want to profile you. To simulate you. For profit. … The business model of surveillance capitalism … is to monetise human beings … to monetise everything about you that makes you who you are apart from your body.’

 “But positive changes in human behaviour through AI use are possible if a person owns outright the AI that simulates their self. The relationship of self and a simulated self with agency can become symbiotic as a consequence of sharing the data set of their interconnected experience. For the self to be free in the digital age, we have to move towards ownership and control of the technologies and autonomous agencies of extension that simulate us and inform our being.

“Agency and consciousness are synonymous; autonomy of the self is essential to self-organization. Agency is a quality of living things, not tools. When AIs do become agents, they will be complex adaptive systems, living systems that self-organize and incorporate an understanding of embodiment through their collaboration with us. Complex adaptive systems are not designed; they realize themselves. The missing element in our understanding of agency is how the concept of autonomy of the self is essential to self-organization at all systemic scales.

“Autonomy governs the way that a single cell, among the trillions in a human body, informs itself about changes in its internal condition and its environment and modifies its behaviour accordingly. Autonomy governs the way that an individual becomes informed about changes in the communities they inhabit and can modify their behaviour to sustain their engagements. Autonomy governs the way that a community of individuals informs itself about changes in its adjacent social networks and modifies its relational connections to society accordingly.

“Autonomy, as the distribution of power to decide, engenders states of equilibrium better than does the concentration of power through top-down delegated systems of authority.

“Understanding technology as ‘the way we do things around here’ (Ursula Franklin) helps shift our focus away from the production of artifacts as the closed mechanistic engineered assemblage of parts and toward the processes that inform the organic organization of open systems that can adapt to changes in what they experience.

“Autonomy is essential to becoming an optimal human. ‘Those with high autonomy feel as though they are authors of their own lives and feel able to freely express their values and develop their identity, talents and interests.’ Because of increased complexity, real communication with an agent that is other than human should cause us to reveal more of ourselves to ourselves than we do now. Having that added feedback loop in the self-organization of identity would extend and reinforce individual autonomy. It would expand awareness of the directions (the way) in which changes in the way we do things are altering the way we do things. It would create a deeper capacity to understand the consequences of our actions in the moments that we act. The unknown unknowns begin to surface. Via autonomy in the formation of identity, self-definition increases, and external socializations that impose conformity to prescribed norms decrease.

“Humanity will prevail because using technologies of human enhancement is an entirely human characteristic.

“In a prescient argument, grounded in the findings of research on neurobiology and cognition, Andy Clark described humans as possessing a native biological plasticity derived from our nature as ‘profoundly embodied agents.’ He wrote that humans ‘are biologically disposed towards literal and repeated episodes of sensory re-calibration, of bodily re-configuration and of mental extension – that is, we are able constantly to negotiate and re-negotiate the agent-world boundary itself.’

“As an example, Clark describes how when picking up and using a tool, we feel as if we are touching the world at the end of the tool, not (usually) as if we are touching the tool with our hand. The tool, ‘is in some way incorporated and the overall effect seems more like bringing a temporary whole new agent-world circuit into being,’ rather than simply exploiting the tool as a helpful artifact.

“In a summary Clark says, ‘humans and other primates are integrated but constantly negotiable bodily platforms of sensing, moving, and … reasoning. Such platforms extend an open invitation to technologies of human enhancement. They are biologically designed so as to fluidly incorporate new bodily and sensory kits, creating brand new systemic wholes. … we are not just bodily and sensorily but also cognitively permeable agents. … non-biological informational resources can become – either temporarily or long-term – genuinely incorporated into the problem-solving whole…

“‘… Once we accept that our best tools and technologies literally become us, changing who and what we are, we must surely become increasingly diligent and exigent, demanding technological prostheses better able to serve and promote human flourishing. … the realization that we are soft selves, wide open to new forms of hybrid cognitive and physical being, should serve to remind us to choose our bio-technological unions very carefully, for in so doing we are choosing who and what we are.’

“Bringing into being a whole new agent-world circuit is an entirely human characteristic. When we can act collaboratively with a trusted AI simulation of our self, we will be experiencing extended cognition with joint responsibility for collective action. Agency without responsibility is malignant. We prompt and inform our AI and our AI prompts and informs us. Having the individual, not corporations, in control of action is the key to remaining human as extended consciousness reframes our realities.

“The website of the Artificiality Institute provides analyses examining the human experience of AI and its implications for organizational transformation. Aiming at an audience of leaders and decision-makers, it doesn’t question that delegated authority in an organizational context of management and control will continue to exist.

“Even now, communities of practice self-organize inside formal organizational structures. They are complex adaptive systems intended to bypass the imposition of hierarchy in order to achieve the goal-directed results expected by their supposed commanders. They are a primary way that work gets done in spite of the existing technologies of business organizations. Because their relational connections are undocumented, their adaptations to changes in their environments escape both management control and AI’s analysis. Owning a simulation of yourself can intensify the effectiveness of your participation in self-organized relationships, bypassing attempts at control that conflict with achieving their purpose.

“In one analysis, the Artificiality Institute warns that ‘The future of the Internet is evolving into an Agentic Web, dominated by AI-generated content created for machines rather than humans.’ But the evolving context is the increased complexity that occurs through the feedback loops created by the linked experience of interacting agents, both human and machine.

“The complexity of the environments of both AI systems and human systems significantly impacts their level of agency. Dialogue between autonomous agents becomes informed by the human agent’s physiological and psychological response to the world as much as the machine agent’s rationalizations. The evolving content of dialogue between two autonomous agents becomes the training dataset that informs both. Rather than overshadowing human discovery, the new symbiosis is an extension of human discovery. The adaptability that this fosters depends on the independence of the agents involved. Intention, the innate goal-seeking behavior that motivates the system, is a significant quality of agency. Without independence there is no capacity to express an intention.

“But, to be fair to the Artificiality Institute, its people have also declared that a synthetic agent would have the capacity to ‘redefine its operational boundaries’… Helen Edwards wrote: ‘AI systems adapt, learn and respond in ways that interact with our own thinking, creating a feedback loop that reshapes how we process the world and define ourselves within it. The self is no longer anchored solely within the mind or body but distributed across systems that influence our choices, goals and sense of agency. This represents a major shift in the boundaries of cognition and identity – making the line between “us” and “it” increasingly difficult to draw. … When you use AI to brainstorm ideas you aren’t just delegating creativity but engaging in a feedback loop where the machine’s suggestions provoke new insights. Over time, your thinking adapts to the AI’s capabilities and the AI, in turn, refines itself based on your input.’

“Edwards sees this as pushing us into ‘deep existential transformation’ shaped by synthetic systems and she highlights that as a risk. While it does threaten existing assumptions about what governs organization, I believe it reframes perception, not reality. It expands our awareness of our cognitive boundaries, the reality options we face and the choices we can make.

“Edwards asks, ‘If an AI’s perspective shifts what we believe to be true, how do we reconcile the difference? And, more provocatively, when AI outputs reshape what we notice, believe, and act upon, is it reshaping reality itself, or just nudging us into unfamiliar territories within it?’

“Although she raises the question of whether humanity lacks the perceptual capacity to conceive of reality differently, neither she nor I believe that it does. For example, in our paradigm shifts, that’s exactly where our humanity takes us now. We already know that reality is entangled with the observers and therefore is not fixed. We know that we make our tools, and then our tools make us.

“We have yet to assess how extending the mind of community will change social organization. Communities can exhibit emergent properties that influence cognition at the levels of both individual participants and community. Relational systems of individuals and synthetic agents will extend the cognitive boundaries of a community’s collective consciousness, thus enhancing its resilience. A community that has data autonomy in its sensory connections to the world it inhabits has a greater capacity to enhance the digital understanding of the individuals that create its contingent emergence. And the mind of community can have agency at the level of societal organization.

“As an extension of the mind of community, the networks of people freely collaborating with autonomous networks of agents that aren’t people are differently informed. Sensors that detect changing environmental conditions are extensions of sensibility beyond the five senses that inform consciousness now. The main function of information is to connect people into a network. Social networks of individuals are based on information. As autonomous agents within those networks, humans become differently informed. But, as network participants, humans also become elements of the community’s sensory capacity. The community’s way of knowing, and the phase space of what it knows, become both larger and different. We cannot control the consequence of that altered agency, and we have yet to anticipate how it changes the organization of society.

“If the Internet survives the current changes (not guaranteed, because its threat to the power of nation-states is now clear), it can make possible a distributed social organization aggregated upwards from autonomous local levels, a bottom-up self-organizing federated community of communities. If a community or ecological locality is self-sustaining as a consequence of its autonomous capacity to learn, then so is an aggregation of communities and ecological zones. Then the reality of societal organization would begin to mirror our understanding of networks and the boundaries of the self.

“Most of humanity now lives in cities. For a city to become truly ‘smart,’ it would need to preface design with growing toward becoming a complex adaptive system. It would take a fundamental shift in the development of cities to make this happen. There are cities that are waking up to the possibility. The symptomatic phrase to watch for is ‘data autonomy.’

“Cities, towns, and communities would be wise to stop outsourcing the collection and analysis of information about what the systems that allow them to function are experiencing. More than anything, ‘The Cloud’ is the enemy of individuals and the communities they inhabit. It separates a place from enhanced awareness of itself and thus its capacity to learn its way forward.

“Extended cognition exists because the Internet exists. We do live in a digital age, yet don’t fully take into account the Internet’s importance to that definition. The Internet is the RNA that transcribes an AI’s capacity to learn and grounds the extended cognition of an individual’s mind in the maintenance of their humanity. The connections that inform extended consciousness, now and in the future, depend on sustaining the invariants that define what the Internet does.

“It connects for the purpose of transmitting bits. It is not an information network except in the informing of paths of transmission. It is not involved in the content of the bits, or what the connected do with the bits when they get them. Its indifference is the guarantee of autonomy in how the endpoints use what connections provide. It merely amplifies interconnections and relational capacity. It is more like the signal propagation part of a neural network, supporting the capacity for cooperative integration among various functional elements of social organization at another level.

“Ignoring the invariants risks threatening the autonomy of choice in connection that working together requires. Without the continuation of Internet governance as a common pool resource, the phase spaces where self-organizing individuals and artificial agents learn through experience are subject to enclosure.”


Courtney C. Radsch
Imagine If We Governed AI Systems as Public Utilities and Non-Private Data as a Public Resource, and If Privacy and Cognitive Liberty Were Protected as Fundamental Rights

Courtney C. Radsch, director of the Center for Journalism & Liberty at the Open Markets Institute and non-resident fellow at the Brookings Institution, wrote, “The answer to this question depends on who you are, where you are located in the world and your socioeconomic status in particular. Most fundamentally, the way we experience being human in 2035 will largely depend on decisions made today about who controls artificial intelligence and how it’s deployed. The current fusion of political and technological power is but a taste of what is to come in 2035.

“The trajectory of AI today is propelled by a handful of American tech giants and their billionaire owners. Their concentrated power over AI resources (e.g., compute, data, talent) and development will be enabled by the U.S. administration, the fearful acquiescence of the EU and UK and the fear of other nations of being left behind. By allowing minimal oversight, unprecedented exemptions from liability and copyright, and the ability to externalize the costs of obtaining data and energy, we are creating a future in which surveillance capitalism is irreversibly woven into the fabric of human existence by 2035 and it is no longer clear what the human ‘value add’ is.

“The perilous implications of the datafication of every aspect of our lives – our interactions, our innermost thoughts and biometrics, as well as the world around us (through ubiquitous sensors) – will be irrefutable by the end of the decade. The continuous stream of intimate human data AI corporations collect – from our biometrics and behavior to our social connections and cognitive patterns – has created a dangerous feedback loop that makes it seem impossible to exert control and autonomy. As their AI systems become more sophisticated at predicting and influencing human behavior, people become more dependent on their services, generating even more valuable training data and value for the AI agents, tools, applications and products that will pervade every aspect of our daily lives by 2035.

“Imagine waking up in 2035. Your morning routine is seamlessly guided by AI agents that have learned your preferences and patterns over years of monitoring. They curate your news and entertainment (who knows if anyone else sees the same information or the same version of the world), schedule your day and suggest your meals and workouts based on your health data and mood. This convenience comes at a price – every interaction, emotion and decision feeds into vast AI systems owned by mega-corporations that use this data to further shape your behavior. But there is no opting out as ‘smart’ devices, homes and cities render the ‘dumb’ products and services of the 20th century obsolete and unavailable.

“In the absence of robust and comprehensive data privacy laws that are rigorously enforced, this information is available to your employer, your insurer, your healthcare providers. And because data leaks remain a persistent challenge, this information is also readily available on the black market, for sale to the highest bidder. Your car and home insurance are no longer based on collective risk but rather highly personalized in a way that shapes your choices and behaviors. Your food and health choices similarly affect your individualized insurance premiums.

“You head to work, where human-AI collaboration is the norm, though we often feel like we’re working for the AI rather than the other way around (much as we feel beholden to our email inboxes). Workers who can effectively ‘speak AI’ – understanding how to prompt, direct, and work alongside artificial intelligence – can get the higher paying white-collar jobs (but are making less than those who work with their hands doing things that robots can’t yet do). However, this partnership often requires humans to adapt their thinking to align with machine logic rather than the other way around.

“Similarly, in what were once referred to as the creative industries, artists, musicians and writers have adapted their creative process to align with what performs well in AI-mediated channels run by corporate platforms that prioritize profits and commercial success, leading to a subtle homogenization of human expression and vast unemployment as a handful of corporate platforms double down on cheap, measurable content and use their algorithms to recommend and amplify their preferred content. Distinguishing authentic human expression from the artificial has become irrelevant as AI systems flooded information and communication channels with persuasive, personalized content. Traditional watchdog and community journalism exists only in the margins, unable to compete with automated content farms and AI-generated information feeds run by corporations with the best access to data and audiences.

“After work, which is still the standard 8-hour day augmented by constant availability through your devices and always-on AI agents, you check your dating and companionship apps to see if your AI agents identified anyone worth meeting ‘in the flesh.’ Dating algorithms match potential partners based on deep behavioral and psychological profiles while ensuring potential matches are not genetically related, an increasing concern given the rise of IVF and genetically engineered offspring. People outsource their interactions to AI agents, which are left to determine compatibility and determine whether it’s even worth meeting up in person.

“AI chatbots provide constant ‘companionship’ even as the loneliness epidemic intensifies, and we wonder how independent their suggestions and ideas are from the interests of their corporate overlords. To what extent are our AI companions’ recommendations based on corporate sponsorship or political manipulation? We don’t know because the broligarchy that solidified a power partnership with the U.S. administration in 2025 influenced the evisceration of antitrust and regulatory oversight of anything deemed AI.

“Children growing up in this environment will develop different social skills than previous generations. As with the social media generation, they will become fluent in human-AI interaction but struggle with spontaneous human connection, though they are unlikely to see this as much of a problem as their parents, who hang onto antiquated ideas of human liberty and autonomy.

“Neural implants, AI-enhanced senses and biotech augmentations are increasingly available as Big Tech continues to trade access to the latest products for access to the data and fine-tuning that feeds their AI systems. Privacy has become a luxury good rarer than the most sought-after Birkin bag, one that even the wealthiest struggle to purchase.

“By 2035, original human problem-solving and creativity are devalued as AI systems become more sophisticated, capable and ubiquitous. Social connections have fundamentally shifted as the ability, and need, to differentiate between authentic relationships and algorithmically-mediated ones grows increasingly blurry.

“There is a lack of clarity about what constitutes core human traits and behaviors – Intelligence? Creativity? Problem solving? Observation? Subjectivity? Empathy? Emotions? Self-reflection? – amid the proliferation and integration of AI throughout virtually every facet of our lives, experiences, relationships and expression.

“Our capacity for empathy, creativity and independent thought – traits evolved over millennia – may prove more resilient than expected. But preserving these qualities will require alternative models of governance; an expanded perspective on what constitutes safe, responsible and desirable AI; and more-robust legal regulatory regimes and enforcement of existing ones.

“Although the 2035 just described isn’t inevitable, it seems increasingly inescapable. Imagine instead if in 2035 we governed AI systems as public utilities and non-private data as a public resource, and if we required the corporations developing AI to internalize the environmental and societal costs (including the costs of obtaining copyright-protected data).

“In the best future, privacy and cognitive liberty are protected as fundamental rights, AI corporations are subject to rigorous oversight and their systems are directed toward solving humanity’s greatest challenges (in collaboration with the communities experiencing those challenges) rather than taking over core human capacities.

“This alternative requires breaking up the concentration of AI power in the hands of a few tech giants and their billionaire owners. It means requiring companies to internalize the social costs of their AI products rather than offloading them onto society. It involves creating strong regulatory frameworks that limit datafication and prohibit manipulation, and which protect human autonomy and creativity while fostering beneficial AI innovation.”


Alexander B. Howard
The Divide in Human Experience Between Regions Under Authoritarian and Democratic Rule Will Grow Despite Many Positive Advances; Overall, Our Sense of Self Will Be Challenged

Alexander B. Howard, founder of Civic Texts, an online publication focused on emerging technologies, democracy and public policy, wrote, “How will humans’ deepening partnership with and dependence upon AI and related technologies have changed being human, for better or worse? As with Internet connectivity, smartphones, social media and ‘the metaverse,’ we should expect to see generative artificial intelligence adopted and adapted unequally across humanity, with differing impact in each cultural and societal context.

“Nations that have strong data protection laws, healthy institutions, constitutions that center human rights and civil liberties and fundamentally open, democratic systems will have the best chance at mitigating the worst impacts of automation, algorithmic regulation and successive generations of more capable agents. We should expect to see positive applications of AI in education, the sciences, entertainment, manufacturing, medicine and governance, based upon the early signals we see in 2025.

“If nations and states can turn the global tide of authoritarianism back towards democracy, billions of humans will use AI to augment how we work, learn, play and share. Human nature itself will not change, but the nature of being human will be influenced by this shift.

“The people of nations with closed, authoritarian systems of governance will experience different results. If governments do not enact data-protection laws, insist upon open standards and enact guardrails for how, when and where AI is used, then we will see AI used for coercion, control and repression of dissent. The early returns from automation suggest that as technology becomes more advanced and abstracted away from our direct control, human understanding of the machines, systems and processes that govern our lives diminishes, along with our agency to change them. Much of this depends upon legislatures not only increasing their own capacity for oversight of AI but also developing insight and foresight about how and where it is adopted and for what purpose.

“The experience of being human will not fundamentally shift in the next decade, but our understanding of what humans are good at doing versus more intelligent agents will. While we will see computational capacity to discern trends in noisy data that surpasses that of humans, our ability to create great art, offer empathy or compassion or to emotionally connect with animals and one another will continue to distinguish us from the machines we create, despite advances in simulacra.

  • “We are likely to see the emergence of agents that provide services, education and diagnoses to people who can no longer afford to be taught or seen by a human. This will risk depriving generations of the benefit of mentors and doctors.
  • “The higher-order consciousness that has distinguished humans from most other living beings will continue to define our humanity, but our sense of self will be challenged by personalized agents that eerily predict our interests, needs, desires or flaws.
  • “We will see the emergence of a delta between students and professionals who overly depend upon AI and people who retain the capacity for computation and critical thinking. This will become an acute risk for societies should connectivity be broken by increasingly extreme natural disasters or Internet shutdowns, much in the same way that disruption to the global positioning system disproportionately impacts generations who have never had to navigate the world without smartphones and dashboard computers.
  • “The disappearing capacity to ‘drive stick’ in a car or take over manual control of an aircraft on autopilot will have parallels across society, if AI leads to more abstraction across industries and professions.
  • “Invisible algorithmic ‘barbed wire’ could prevent people such as clerks, administrators, teachers and nurses from applying intuition to help people who are caught in technical systems or prevent them from even understanding what happened.
  • “The augmentation of human intellect, capacity and experience that we see today through increasingly ubiquitous access to information over the Internet might also shift if the services that people depend upon are degraded by synthetic data, AI-generated slop and biased data sets. If knowledge is power, then that future must be avoided at all costs.”

Adriana Hoyos
As the Boundaries Between Human Ingenuity and AI Dissolve, the Next Decade Could Witness a Redefinition of Life – ‘Humanity’s Most Significant Transformation’

Adriana Hoyos, a senior fellow at Harvard University and digital strategy consultant with expertise in economics, governance, international development and tech innovation, wrote, “2035 could be the start of the most abundant era in history. Imagine a world in which the boundaries between human ingenuity and artificial intelligence dissolve, paving the way for unprecedented opportunities and challenges. By 2035, the fusion of technological innovation and human ambition will truly begin to redefine life on Earth and beyond. From further eradicating poverty through expanding global connectivity to pioneering space urbanization, the next decade will likely witness the start of humanity’s most significant transformation.

“As the fabric of daily existence intertwines with AI, robotics and revolutionary breakthroughs in science, one question looms: Will our entry into this new era amplify the essence of being human, or will it come to alter the traits and behaviors that currently define us?

“The profound shifts likely in the social, political and economic landscapes of the near future offer a compelling vision of what lies ahead. They will be driven by rapid advances in artificial intelligence, biotechnology, robotics and materials science and a deepening integration of humans and technology. This evolution will redefine what it means to be human and reshape the global landscape across all dimensions.

“The blending of humans and artificial intelligence will inevitably alter perceptions of core human traits, such as creativity, empathy and free will. As AI becomes more capable of human-like empathy, engaging in deep conversation and creative pursuits such as innovation, generating art, composing music and building and acting the part of ‘humans’ in virtual worlds, humanity will need to redefine its unique value propositions. Paradoxically, the deepening partnership with AI may amplify distinctly human qualities, as people focus on the nuances of emotional intelligence and ethical reasoning that machines cannot replicate.

“Human/AI advancement will be the cornerstone of global transformation. It will further automate repetitive tasks, optimize resource allocation and enable hyper-personalized human experiences, touching nearly all aspects of life. Supply chains will operate with near-zero inefficiencies as intelligent systems predict demand, manage logistics and mitigate risks in real time. The global GDP is expected to grow significantly as human/AI-driven innovation speeds the advancement of emerging industries such as quantum computing services and bioengineered agriculture.

“Universal access to economic participation will also play a pivotal role in this transformation. The gig economy will evolve into an AI-enhanced global marketplace in which individuals can offer their skills and expertise directly to a worldwide audience. Advanced smart contracts on blockchain networks will ensure secure, frictionless transactions, further eroding the dominance of traditional intermediaries.

“This shift has the potential to democratize wealth creation, although disparities will emerge between those who can effectively harness AI tools and those who cannot. Alongside these changes, improved market access and connectivity will become critical drivers of poverty reduction. Enhanced digital infrastructure will connect marginalized populations to global markets, enabling them to sell goods and services, access education and benefit from financial tools previously out of reach. The expansion of high-speed internet and AI-driven platforms will ensure that even the people living in remote areas have opportunities to participate in the global economy, breaking cycles of poverty and fostering economic empowerment.

“The deepening integration of humans and AI will challenge traditional notions of community and identity. Digital assistants, capable of emotional intelligence, will act as companions and counselors, reducing loneliness but raising questions about authenticity in human relationships. Social media platforms will evolve into immersive virtual worlds in which individuals can interact in ways that blur the line between physical and digital existence.

“Education will undergo a revolution, with AI-driven platforms providing personalized learning experiences tailored to each student’s cognitive style and goals. Lifelong learning will become a necessity, as humans constantly adapt to new technological advancements. Traditional employment structures will give way to hybrid human-AI collaborations, where people focus on creativity, strategy and empathy while AI handles data-intensive tasks. AI mentorship programs will be particularly transformative, bridging the gap between access to education and employment opportunities for individuals in underprivileged areas. This blend of technology and personalized learning will narrow the global skills gap and empower billions to contribute meaningfully to the evolving global economy.

“Industries will increasingly rely on cyber-physical systems, integrating AI, robotics and advanced materials. Autonomous factories, powered by generative AI, will produce goods on demand with minimal human oversight. Businesses will adopt decentralized autonomous organizations, enabling stakeholders to participate in decision-making through blockchain-based governance systems.

“Key sectors such as healthcare, energy, and transportation will be redefined by technological breakthroughs. In healthcare, AI-powered diagnostics and personalized treatments will extend life expectancy, while advances in genetic engineering and nanomedicine could eradicate previously incurable diseases. Remote surgeries performed by robotic systems will bring cutting-edge healthcare to regions that previously lacked access. In energy, fusion power and efficient storage solutions will provide sustainable power, reducing dependency on fossil fuels.

“Renewable energy grids, supported by AI, will adapt to real-time demand fluctuations, ensuring uninterrupted access even in remote locations. In transportation, autonomous vehicles, including flying cars, will revolutionize urban mobility, making cities safer and more efficient. Hyperloop technologies and AI-coordinated public transportation networks may further connect people and goods across vast distances at unprecedented speeds.

“Governments will leverage AI systems to enhance public administration, from predictive policymaking to real-time crisis management. The systems will analyze vast datasets to identify societal needs, enabling governments to address issues proactively. Ethical concerns around surveillance and algorithmic bias will necessitate robust regulatory frameworks to ensure public trust.

“The geopolitical landscape will be shaped by technological competition and collaboration. Nations leading in AI and quantum computing will wield significant influence, while developing countries risk falling behind. International agreements on AI ethics and governance will become critical to ensuring equitable development. Improved market access facilitated by AI and digital platforms will also play a transformative role in governance. By connecting underserved populations to economic opportunities and enabling more-transparent decision-making processes, technology can empower communities and foster a more inclusive global political order. Digital identification systems backed by blockchain will enhance transparency in public services, reducing corruption and increasing efficiency in resource allocation. Such systems are also likely to be used for surveillance of the public in authoritarian regions.

“These advancements will not only improve individual well-being but also reduce the economic burden of healthcare systems, enabling resources to be allocated more efficiently. Affordable healthcare solutions, driven by AI, will also ensure that advancements reach underprivileged communities, closing gaps in health outcomes across different regions.

“Humanoid robots will become a ubiquitous presence in daily life, performing roles ranging from caregivers and educators to service industry workers. These robots will be equipped with advanced emotional intelligence, enabling seamless interaction with humans. While this development could alleviate labor shortages and improve productivity, it will also raise ethical questions about dependency and the nature of human-AI relationships.

“Breakthroughs in materials science will lead to the development of super-light, super-strong materials with applications in construction, transportation and energy. Self-healing materials and bio-integrated electronics will enhance durability and functionality in various domains. Humanity’s reach will extend beyond Earth, with the establishment of permanent colonies on the Moon and Mars. Advances in propulsion systems and AI will make space travel more accessible, fostering a new era of exploration and innovation. Planetary urbanization will require innovative solutions for resource management, habitation and sustainability. AI-driven ecosystems will ensure self-sufficiency in space habitats, from automated farming systems to advanced recycling technologies.

“However, the best outcomes of the human/AI transformation will only be realized if humanity is vigilant and takes responsibility for ensuring that these advancements benefit all. The choices made in the next decade will determine whether this future is inclusive, sustainable and reflective of the best of human potential.”


Stephan Adelson
How Individuals Perceive and Adapt to the Integration of AI into Daily Life Will Likely Determine How They Define Their Sense of ‘I’; Inequality Will Create Divisions

Stephan Adelson, president of Adelson Consulting Services, an expert on digital public health, observed, “The human experience is as varied as the number of living individuals. With this in mind, two perspectives contribute significantly to what it means to be human: the experience of being an individual and the experience of being part of society.

“Like all technological advances, AI has impacted and will continue to impact individuals differently. Subjective consciousness, or the sense of ‘I,’ is a constantly updating construct formed from interpretations of sensory and emotional data. Some people view the challenges that arise from change, especially technological change, as exciting opportunities, while others face these changes with dread and fear. Some ‘dive in’ with a desire to stay ‘current,’ while others retreat and risk being ‘left behind.’

“How individuals perceive and adapt to the integration of AI into daily life will significantly influence their human experience. Some will feel enhanced by the technology we’ve created, while others will view AI as something anti-human. Regardless of individual perspectives on AI in relation to their sense of ‘I,’ everyone will be compelled to reevaluate and potentially redefine their personal definition of what it means to be human.

“Regarding the experience of being human as part of the whole, historical divisions are likely to reemerge. An increased sense of ‘us’ vs. ‘them’ may develop. There will be a noticeable divide in the social experience between those who embrace AI and those who resist it.

“History shows that humans often create purpose, meaning and perceived societal power through binary oppositions. It is logical to expect a technological version of this dynamic, such as ‘good’ humans vs. ‘AI’ humans. While AI will benefit everyone, not all will perceive it positively.

“Those who resist and view AI as ‘anti-human’ may feel superior in intangible ways by redefining beliefs and reinterpretations of ancient traditions. Conversely, those who embrace AI may feel intellectually superior and are likely to have opportunities for greater material success due to their willingness to leverage AI. These advantages could exacerbate existing divisions, including economic, religious and cultural ones.

“I assume that AI can and will benefit humanity. However, the intensification of divisions as AI integrates into all aspects of our daily lives presents a dangerous threat to the human experience.”


The next section of Part I features the following essays:

A. Aneesh: The AI dilemma: rewards at the cost of connection, decay of social bonds,
growth of loneliness, polarization, rampant industrialization and greenhouse gases.

Kathleen Carley: AI search and the ability to ‘do your own research’ could drive people to
misinformation and foster ‘personalized education’ in information that’s not true.

Richard Reisman: Will we see a sociotechnical dystopia soon, or will AI augment humanity
and our intellect, creativity, empathy, curiosity, generativity, initiative and resilience?

Bart Knijnenberg: AI could mostly empower human intelligence and creativity or it could
mostly erode it by forcing human behavior into following AI-amenable patterns.

Charalambos Tsekeris: ‘The developers of these tools can aim them toward democratic and
ethical innovation, putting people and planet over profit, enhancing human flourishing.’

Kevin Novak: The disappearance of critical thinking has become so clear to human society
that ‘brain rot’ was the Oxford University Press word of the year for 2024.

Dana Klisanin: The human-AI partnership could reshape our consciousness and behavior by
helping us integrate compassionate action into our designs and utilizations.


A. Aneesh
The AI Dilemma: Rewards at the Cost of Connection and Sustainability – Decay of Social Bonds, Growth of Loneliness, Polarization, Rampant Industrialization and Greenhouse Gases

A. Aneesh, a sociologist of globalization, labor and technology and director of the School of Global Studies and Languages at the University of Oregon, wrote, “In an era beset by challenges, two crises stand out as the hallmarks of our time: the climate crisis and the social crisis. While the former’s causes – greenhouse gases, deforestation and rampant industrialization – are widely understood and quantifiable, the latter is more elusive. It reveals itself in the slow erosion of social bonds, widespread loneliness and fractured communities. Unlike carbon emissions, this crisis can’t be measured in parts per million or metric tons. It exists quietly, shaping the personal and institutional spaces of our lives.

“Enter artificial intelligence, a technology that promises profound transformation but offers little in terms of addressing these twin crises. Far from being a remedy, AI risks becoming another accelerant.

“The Climate Trade-Off: AI’s energy demands are staggering. The computational power required to fuel the rise of AI doubles roughly every 100 days. For most LLMs today to achieve a tenfold improvement in efficiency, computational demand could spike by as much as 10,000 times. While some tout AI’s potential to fight climate change – through better energy modeling, for instance – others predict that its own footprint may cancel out those gains. AI, like the systems it serves, is embedded in a culture of exponential growth, and its ascent will likely leave the climate crisis unmitigated.

“The Social Cost: When it comes to the social crisis, AI offers even fewer solutions. If anything, AI may hasten the fragmentation of human connection. Society has long been shifting away from its kinship-based foundations – structures that prioritized interpersonal relationships, shared ancestry and mutual support. These traditional systems, while flawed and discriminatory in many ways, cultivated a sense of meaning in being with others.

“Modernity replaced these norms with function-based systems. Markets, schools and bureaucracies now reward merit, skill and utility over inherited social roles. While this shift brought advancements, it also redefined kinship as nepotism and friendship as cronyism. Modern organizations, in the end, have no value or need for kinship. AI, with its ability to optimize and automate, aligns perfectly with this trajectory, reinforcing function over feeling and utility over unity.

“The climate and social crises share a common origin: the unrelenting prioritization of growth and efficiency at the expense of sustainability and connection. AI, heralded as the ultimate tool of progress, fits seamlessly into this framework. It offers ever-faster solutions to problems generated by modern organizations, perpetuating a system that values production over preservation.

“As we stand at this crossroads, one question looms: Can we imagine a future in which connection and care are as important as growth and function? Or will humanity’s pursuit of progress leave us lonelier and more fractured on a burning planet? For now, the answer remains as uncertain as the future we are building.”


Kathleen Carley
AI Search and the Ability to ‘Do Your Own Research’ Could Drive People to Misinformation and Create a World of ‘Personalized Education’ in Information That’s Not True

Kathleen Carley, CEO at Netanomics and professor and director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University, wrote, “In the next 10 years AI is unlikely to change the essential human being. Cognitive capabilities, the five senses, physical limitations, etc., will remain the same. While there is likely to be a small number of people who become enhanced with either embedded chips or digitally controlled exoskeletons, that will be an extremely small minority; but members of that group may be able to see, walk and speak in ways that they would not have been able to a decade before. Nonetheless, 10 years is too short a time for human DNA to change as a result of AI. With CRISPR, maybe 20 years.

“The big impact of AI on the human condition will be in education, exosomatic memory (intact memories individuals did not experience in this life), search and the types of jobs people do. AI advances in medicine are likely to reduce the spread of disease and the dangers of diseases (e.g., due to early detection from better X-ray reading by AI) and increase overall health, but that could be stalled due to policy. AI advances should improve the ability to identify criminals or violent threats, prevent crimes such as fraud, terror acts, etc., and help improve the situation for those living below the poverty line who see crime as the only way ahead.

“AI applications will be built to do more routine tasks. In principle that could allow people to focus on more creative or strategic activities. However, in the next 10 years this benefit is unlikely to be realized due to continuing growth in the number of boring routine tasks and the lack of personnel to do them. Thus, for the most part this type of AI may simply enable companies to keep afloat with fewer people.

“In other cases, when the routine can be automated, the nature of the job will just change, with new tasks and not greater creativity being the result. Also, for many companies, while the use of AI would reduce costs in terms of the number of personnel needed for a job, it also may create the perception by management that more legal staff is needed to respond to threats.

“With respect to education, AI will increasingly be used to deliver tailored education, to provide more tools for auto-grading, for translating courses into more languages and so forth. However, at the same time AI is changing the way people search for answers and information – through the use of large language models. This ‘do-your-own-research’ approach can actually drive people to misinformation. Together these two features could generate a world of tailored education in information that is not true, so the positives and negatives here are fairly balanced.

“There is a danger that government and corporate policies may increasingly streamline the processes done within them – e.g., hiring decisions, promotion decisions and so forth. The more these become routinized the more likely people may be forced to be more similar – reducing overall diversity and points of view. And without an adequate understanding of how the AI works, it is possible that the use of AI for decision-making or as decision assists will lead to more-biased decisions that create new inequities.”


Richard Reisman
Will We See a Sociotechnical Dystopia Soon? Or Will These Tools Augment Humanity and Our Virtues of Intellect, Creativity, Empathy, Curiosity, Generativity, Initiative and Resilience?

Richard Reisman, futurist, consultant and nonresident senior fellow at the Foundation for American Innovation, wrote, “Over the next decade we will be at a tipping point in deciding whether uses of AI as a tool for both individual and social (collective) intelligence augments humanity or de-augments it. We are now being driven in the wrong direction by the dominating power of the ‘tech-industrial complex,’ but we still have a chance to right that.

“Will our tools for thought and communication serve their individual users and the communities those users belong to and support, or will they serve the tool builders in extracting value from and manipulating those individual users and their communities?

“Traditionally, tools were designed and built to serve the individuals or communities that used them. Think of wedges, pens, printing presses, telephone networks and standalone computers. But over the past two decades platforms have taken control of network services and ‘enshittified’ them to serve their own ends, increasingly extracting value from and manipulating their users.

“Jeff Einstein has characterized our direction as driving toward ‘Huxwell,’ a dystopia that combines the worst of both Aldous Huxley and George Orwell. Will our tools drain our humanity – intellect, creativity, empathy, generosity, curiosity, generativity, initiative and resilience – or augment those human virtues?

“While there is increasingly strong momentum in worsening dehumanization, there is also a growing techlash and entrepreneurial drive that seeks to return individual agency, openness and freedom with the drive to support human flourishing of the early web era. Many now seek more human-centered technology governance, design architectures and business models.

“My recent work addresses how this applies – first to social media and now as we build out broader and more deeply impactful forms of AI. This all comes down to the interplay of individual choice (bottom-up) and social mediation of that choice (top-down but legitimized from bottom-up). That dialectic interplay shapes the dimension of ‘whom does it serve?’ – for both social media and AI.

“Consider the strong relationship between the ‘social’ and ‘media’ aspects of AI – and how that ties to issues arising in problematic experience with social media platforms that are already large scale:

  • Social media platforms increasingly include AI-derived content and AI-based algorithms and, conversely, human social media content and behaviors increasingly feed AI models.
  • The issues of maintaining strong freedom of expression, as central to democratic freedoms in social media, translate to and shed light on similar issues in how AI can shape our understanding of the world – properly or improperly.

“Consider how:

  1. The need for direct human agency applies to AI
  2. That same need in the more established domain of social media requires deeper remediation than commonly considered
  3. Middleware interoperability for enabling user choice is increasingly being recognized as the technical foundation for this remediation in social media
  4. And freedom – in both natural and digital worlds – is not just a matter of freedom of expression, but of freedom of impression (choice of who to listen to).

“The symposium at Stanford in April 2024 on ‘middleware’ considered some of these issues of agency in online ‘social’ media in terms of whether we can steer our way between chaos and tyranny. While much of the focus of middleware is on user agency, a recent article in Tech Policy Press – ‘Three Pillars of Human Discourse and How Social Media Middleware Can Support All Three’ – offers a new framing that strengthens, broadens and deepens the case for open middleware to address the dilemmas of governing discourse online. Human discourse is, and remains, a social process based on three essential pillars that must work together:

  1. Individual Agency
  2. Social Mediation
  3. Reputation

“Without the other two pillars, individual agency might lead to chaos or tyranny. But without the pillars of the social mediation ecosystem that focuses collective intelligence and the tracking of reputation to favor the wisdom of the smart crowd – while remaining open to new ideas and values – we will not bend toward a happy middle ground.

“Another recent piece in Tech Policy Press – ‘New Perspectives on AI Agentiality and Democracy: Whom Does It Serve?’ – applies similar thinking to AI, and argues that our AI agents must not only be agentic, a measure of capability – what can it do? They must also be ‘agential’ – a measure of relationship – whom does it serve? Instead of having to deal with an institutional AI in relations with business, government or just in one’s own work, individuals should be able to just say ‘Have your AI call my AI’ and have their faithful and loyal AI agent negotiate for their interests, essentially as a fiduciary.

“We are already seeing the breakdown and abandonment of attempts by centralized social media platforms to govern speech, curate and moderate for a diverse global audience. Parallel issues are making centralized policies for AI governance similarly untenable and likely to not even be seriously pursued or enforceable.

“We need to return to how society once relied largely on self-governance, avoiding the sterile thought control of walled gardens, centrally managed ‘public’ forums and the abuses of company towns. We relied instead on a social mediation ecosystem of individuals participating in and giving legitimacy to communities of interest and value to set norms and socially construct our reality.

“This is a sociotechnical problem that must be solved socially, but to support that our technology must be open. All of this needs to be largely self-regulating in a democratic way, gaining legitimacy from the bottom up but with some level of mediation, guidance and norms from communities down.

“Open markets and open interoperation – both vertical and horizontal – can provide flexibility and extensibility in the interoperation of user and community agents that are faithful to whom they serve and negotiate with other agents – including corporate and government agents – to protect human freedom and flourishing, while addressing the ongoing polycrisis of this era.

“If we do not change direction in the next few years, we may, by 2035, descend into a global sociotechnical dystopia that will drain human generativity and be very hard to escape. If we do make the needed changes in direction, we might, by 2035, be well on the way to a barely imaginable future of increasingly universal enlightenment and human flourishing.”


Kevin Novak
The Disappearance of Critical Thinking Has Become So Clear to Human Society That ‘Brain Rot’ Was the Oxford University Press Word of the Year for 2024

Kevin Novak, founder and CEO of futures firm 2040 Digital and author of “The Truth About Transformation,” wrote, “As with any new technology, there are challenges and opportunities. As humans we tend to focus on the opportunities and benefits and fail to recognize the challenges, which are consequential. Across our immersion in digital technologies, we have embraced using our information sources and interfaces to find information and answer questions. As humans begin to embrace more-advanced AI, society is now viewing it as the solver of its problems; it sees AI as the thinker and society as the beneficiary of that thinking. As this continues, the perceived necessity for humans to ‘think’ loses ground, as does humans’ belief in the necessity to learn, retain and fully comprehend information.

“The traditional amount of effort humans invested in building and honing the critical thinking skills required to live day-to-day and solve life and work problems may be perceived as unnecessary now that AI is available to offer solutions, direction and information – making life much easier, both in reality and in perception. As we are evolutionarily programmed to conserve energy, our tools are aligned to conserving energy and therefore we immerse ourselves in them. We become highly and deeply dependent on them.

“Our societal challenge at least through 2035 is that AI and learning models are subject to the information (data) we provide them. As such, AIs’ answers, their thinking and the direction they communicate stem from what they have been fed, carrying human biases into their own ‘thinking.’ Humans are faulty and make mistakes, and AI will continue to emulate its human creators. Optimistically, there may be a future time when AI and learning models can operate objectively and find the information (data) they need to fill their own knowledge gaps and to ensure authority and completeness of their output (decision-making). In that optimistic future, AI would recognize that its role in society is to remain objective.

“Society will likely in the near and long term seek to build and create personality expectations for AI agents. Despite the decrease in want and need to build and hone creative thinking and the decrease in necessity to learn, we will still crave human or human-like interaction.

“We will seek to personalize AI to act and respond as a human companion would. We will implement AI to be a sounding board, to take on advocacy on our behalf, to be an active and open listening agent that meets the interaction needs we crave and completes transactions efficiently. We will therefore change and in many ways evolve to the point at which the once-vital necessity to ‘think’ begins to seem less and less important and more difficult to achieve. Our core human traits and our behaviors will change, because we will have changed.

“I will finish this piece with snippets from an article I wrote for 2040 Digital that frames the decrease in critical thinking skills in the present and in our potential future selves. Following are a few excerpts:

“The disappearance of critical thinking is surely connected to our surface immersion. This has become so clear to human society that ‘brain rot’ was the Oxford University Press Word of the Year for 2024; usage of the term increased in frequency by 230% between 2023 and 2024.

“When we think critically, we use our minds to recognize patterns, dependencies, inter-relationships, influential factors and variables. This facilitates connecting data, information and events that on the surface may not seem important but could be links to fundamental shifts or changes. … In December 2024, the Wall Street Journal reported about a global test of ‘adult know-how,’ measuring job readiness and problem-solving among workers in industrialized countries. It showed that ‘the least-educated American workers between the ages of 16 and 65 are less able to make inferences from a section of text, manipulate fractions or apply spatial reasoning, even as the most-educated are getting smarter.’

“… Cracks in critical thinking open the Pandora’s Box of the haves and the have-nots. The Wall Street Journal reports on a research study: ‘The number of U.S. test-takers in 2023 whose mathematics skills didn’t surpass those expected of a primary-school student rose to 34% of the population from 29% in 2017, the last time the test was administered. Problem-solving scores were also weaker than in 2017, with the U.S. average score below the overall international average.’ We have a long way to go to mobilize a nation of problem solvers. In the test, the U.S. ranked 14th in literacy, 15th in adaptive problem solving and 24th in numeracy. The same eight countries were tops in all three categories: Finland, Japan, Sweden, Norway, Netherlands, Estonia, Belgium and Denmark.

“Critical thinking is becoming an endangered skill, along with practical know-how, common-sense problem-solving and basic thinking skills. These tools are more important than ever for all of us caught in the crossfire of global geopolitical, geo-economic and cultural asynchronies. We have largely defaulted to thinking on the surface, distracted by social media noise, news clutter and a barrage of information most of us have not been educated or trained to understand.”


Dana Klisanin
The Human-AI Partnership Could Reshape Our Consciousness and Behavior By Helping Us Integrate Compassionate Action Into Our Designs and Utilizations

Dana Klisanin, psychologist, futurist, co-founder of ReWilding Lab and director of the Center for Conscious Creativity’s MindLab, wrote, “Nearly two decades ago, I set out to explore the components necessary to advance planetary consciousness through information and communications technologies (ICTs). The resulting EGM-Integral framework brought together evolutionary guidance systems design and integral theory to explore 10 dimensions of human activity. With the same goal, I am now applying the framework to AI.

“While a detailed review of these dimensions is beyond the scope of this response, overall, looking toward 2035, the Human-AI partnership could fundamentally reshape our consciousness and behavior – not by diminishing our humanity but by helping us remember some essential aspects of what it means to be human‚ some of which have been lost due to estrangement from the natural world. Through research and observations, I’ve seen how digital technologies can enhance our connections with each other and the more-than-human world. From digital altruism to cyber kindness – the Cyberhero archetype to collaborative heroism – the key design principle resides within us, with our willingness to integrate compassionate action into our designs and utilizations.

“Applying the aforementioned design principle through the EGM-Integral framework, here are some explorations of AI and the social, economic, political, learning and human development dimensions possible by 2035:

  • Social: Our social intelligence expands beyond human-to-human interaction to encompass awareness of all living systems. AI translation of animal communication and ecological patterns helps us develop planetary empathy – the ability to understand and respond to the needs of the entire living world. This evolutionary leap in consciousness reshapes our understanding of what it means to be human.
  • Economic: The economic model transforms as AI helps us recognize and value the living systems that sustain us. Eco-economics takes center stage as businesses shift from pure profit metrics to ‘planetary wellbeing indicators,’ with AI systems tracking and optimizing for ecological health alongside human prosperity. This isn’t just environmental consciousness – it’s a fundamental reimagining of human economic behavior and tools are already being developed (e.g., Eqogo).
  • Political: Our political structures evolve to reflect this expanded consciousness. AI-enabled understanding of ecosystem needs leads to governance systems that represent not just human interests but those of the entire planetary community. Indigenous wisdom about living in harmony with natural systems becomes central to policy-making (e.g., Global Alliance for the Rights of Nature).
  • Learning and Human Development: Human development evolves as children grow up with AI assistants. They won’t just teach the facts of our interdependence; they will share the migration patterns of local birds, the blooming cycles of native plants and the intricate communication networks of mycelia beneath our feet.

“As AI adopts human traits that challenge our understanding of what it means to be human, we will expand that definition by amplifying our connection with the more-than-human world. AI will help us do so. We’re already beginning to see this in pioneering research on animal communication, where AI helps us decode the complex languages of whales, elephants and even trees (e.g., the Earth Species Project).

“It’s important to point out that this isn’t about using technology to simulate nature; it’s about using it to reawaken our relationship with the more-than-human world. When AI helps us perceive the subtle changes in ecosystem health or translate the chemical signals between plants, it’s not replacing our natural abilities, it’s awakening dormant sensibilities we’ve long forgotten, which many Indigenous people have never lost.

“AI won’t diminish essential human traits – empathy, kindness, compassion, creativity, wisdom. Instead, these traits will evolve into a broader understanding of consciousness and connection. But this evolution requires conscious choice. We must design AI systems explicitly considering their impact on both human and non-human life, and we can do this by integrating compassionate action and traditional ecological knowledge. If we do so, AI will foster biophilia, allowing us to transcend the anthropocentric worldview that has driven us to the brink of environmental crisis. This isn’t technological utopianism; it’s a recognition that tools shape their users, and AI could help reshape us into more conscious, connected members of the planetary community.

“While I acknowledge the existential risks and challenges ahead, I choose to focus on the positive potential. Through conscious design and implementation of AI systems, we can become more fully alive to our connections with all living systems. To be clear, that means having the ability and desire to unplug. We don’t need AI to commune with the more-than-human world; we need it to remind us that we already can.”


The next section of Part I features the following essays:

David Krieger: ‘The advent of AGI could allow humans to reassess the meaning of human
existence and come to terms with forms of non-human intelligence.’

Liza Loop: Will algorithms continue to prioritize humans’ most greedy, power-hungry traits
or instead be most focused on our generous, empathic and system-sensitive behaviors?

Annette Markham: Humans’ ability to make independently derived, informed decisions will
suffer, and relations between humans and AIs will transform what counts as ‘personhood.’

John Markoff: ‘Powerful AI will create dangerous dependencies, diminish human agency
and autonomy and limit our ability to function without assistance; verify but never trust.’

Paul Rosenzweig: AI will atrophy human rationality as it becomes unintelligible to humans;
reasoning and creativity will diminish; divides will expand and the rich will get richer.

Mark Schaefer: The essence of humanity will survive the human-AI transition to 2035, but
loss of jobs and ‘purpose’ could lead to massive psychological and financial deterioration.



David Krieger
‘The Advent of AGI Could Allow Humans to Reassess the Meaning of Human Existence and Come to Terms With Forms of Non-Human Intelligence’

David Krieger, philosopher, social scientist and co-director of the Institute for Communication and Leadership in Lucerne, Switzerland, wrote, “AI must be understood not as a machine or a technology but as a socio-technical network in which humans and nonhumans cooperate. AI is not a tool in the hands of humans to use for good or evil; it will become a social partner. Attempting to align AI to traditional values, norms, and goals is impracticable because of the vagueness, ambiguity, context-dependency and lack of consensus that characterizes any concrete idea of ‘the good’ or what society should be.

“Two new perspectives will dominate AI-human relations: 1) Cooperative Coexistence or Social Integration, and 2) Constitutional AI without Substantive Values. Social integration presupposes the arrival of artificial general intelligence (AGI) by 2035 and raises issues of the nature of a non-biological intelligence. Constitutional AI without substantive values need not assume AGI and focuses on process norms or procedural values applicable to all socio-technical networks; it is, therefore, the more realistic prospect by 2035. The central question is: How do we best design a complex socio-technical network?

“Technology is society, and the question of AI-human relations arises amid human society’s complexities, contradictions and endemic moral, social and political problems. Just like humans, AI is ‘born’ into a world that has inherited the unresolved conflicts, moral and political uncertainties and systemic and structural inequalities and injustices of human society. As complex as society is, so complex are the relations of humans and AI.

“The social integration approach assumes AI is an autonomous and independent agent in society with which humans must learn to cooperate. From this perspective, goals of prediction and control through careful incentivization must be replaced by goals of cooperative action toward a common good. The model based on this view is that AI is a societal partner. The problem with this model is that AIs are not humans and may not be motivated like humans or act in ways expected by humans. Indeed, AIs seem to be developing a different form of intelligence than humans experience in themselves.

“This perspective forces us to ask what intelligence is. Is our human form of intelligence the only kind of intelligence? Is a society of humans and nonhumans at all possible? The advent of AGI could become an occasion for humanity to reassess the meaning of human existence and learn to come to terms with forms of nonhuman intelligence.

“A second perspective could attempt to integrate AI into society through constitutional governance. Anthropic has proposed a constitutional AI, but all the principles that Anthropic has put into Claude are substantive values that suffer from the problems of abstractness, ambiguity, context dependency and fundamental uncertainty regarding acceptance and consensus.

  • “The problem of constitutional principles that are sufficiently broad so as not to constrain innovation and change can be addressed by procedural principles that are self-referential and include their own revision.
  • “The problem of where such principles can be found could be solved by examining how information is well-constructed by networking processes, that is, studying how socio-technical networks best work.
  • “The problem of effective monitoring could be solved by making the procedural principles self-referential so that the effectiveness of the principles is itself a principle enabling self-critique and improvement.

“Any socio-technical network should be enabled to critique not only its outputs based on alignment with the constitution but also critique the constitution in a recursive and iterative process of renegotiation in which all stakeholders in the network participate. Doing so allows the socio-technical network in which AI is integrated to refine its behavior over time to improve alignment with the constitution.

“It could, therefore, be possible to replace substantive values with procedural values drawn from best practices in constructing socio-technical networks.”


Liza Loop
Will Algorithms Continue to Be Programmed to Prioritize Humans’ Most Greedy and Power-Hungry Traits or Instead Be Most Focused On Our Generous, Empathic and System-Sensitive Behaviors?

Liza Loop, educational technology pioneer, futurist, technical author and consultant, wrote, “The majority of human beings living in 2035 will have less autonomy; that is, they will have fewer opportunities to choose what they get and what they give. However, the average standard of living (access to food, shelter, clothing, medical care, education and leisure activities) will be higher. Is that better or worse? Your answer will depend on whether you value freedom and independence above comfort and material resources.

“I also anticipate a thinning of the human population (perhaps in 20 to 30 years rather than 10) and a more radical divide between those who control the algorithms behind the AIs and those who are subject to them. Today, many people believe that the desire to dominate others is a ‘core human trait.’ If we continue to apply AI techniques as we have applied the digital advances of the previous 40 years, domination, wealth concentration and economic zero-sum games will be amplified.

“Other core human traits include a capacity to love and care for those close to us, a willingness to share what we have and collaborate to expand our resources and the spontaneous creation of art, music and dance as expressions of joy. If we humans decide to use AI to create abundance, to develop systems of reciprocity based on win-win relationships and simultaneously choose to limit our population, our social, political and economic landscapes could significantly improve by 2035.

“It is not the existence of AIs that will answer this question. Rather, it is whether algorithms will continue to prioritize our most greedy and power-hungry traits or be most focused on our generous, empathic and system-sensitive behaviors.”


Annette Markham
Humans’ Ability to Make Independently Derived, Informed Decisions Is Likely to Suffer, and Tight Relationships Between Humans and AIs Will Transform What Counts as ‘Personhood’

Annette Markham, chair and professor of media literacy and public engagement at Utrecht University, the Netherlands, wrote, “Outsourcing any human analytical process will, over time, lead to an attrition of that particular skill set. This is worrying if humans’ well-being is still tied to their ability to make independently derived, informed decisions. This is one level at which ubiquitous AI as everyday mundane helpers or ‘micro agents’ will influence humans by 2035. Humans’ ability to process information in an unaided way will suffer because they will no longer be constantly practicing that skill. As the use of AI becomes more routine this will have deeper impact.

“At another level, human behaviors are likely to change as people begin to develop deeply meaningful interpersonal relationships with AI entities. This is already happening due to the vocalization of generative AI and rapid development of conversational social robots. Studies have shown that the level of intimacy of AI-human relationships can be every bit as deep as with any significant human partner, friend or family member. Successful connections between AI entities and humans build a close bond as deep secrets are shared, as trust grows (or is assumed), as co-learning and shared decision-making evolves and as mutual dependencies develop.

“This already happens with algorithmic aspects of automated decision-making systems like Google Search’s ‘auto predict’ function and in self-driving features of cars – but not to the same degree – because of the swift, invisible functions of the decision-making taking place in those systems. The tighter personal or familial relationship potential is more evident in voice assistants, like Amazon’s Alexa – not only because there’s a cheerful voice attached to the technology and a natural language style at work, but because it’s a separate device that’s part of one’s space in which regular home or work routines take place and the help is very personal.

“Generative AI pushes all of this one step further. Beyond just being an endless source of information and clarification about all things known to humankind, it also seems to listen and learn from its human companion. As this grows more and more personal and as the AI portrays more human qualities, it becomes, for many, an intimate, significant life partner. By 2035 the level of intimacy reached between humans and advanced AI will necessarily challenge and eventually transform what counts as ‘personhood.’ There are radical potentials and pitfalls when we consider these two levels (and there are more considerations beyond these two of course). AI, in its many guises, has been changing our patterns of interaction and ways of thinking for many years. The outcome of whether it will be for the better or worse depends on how we choose to respond, and that’s still very much up in the air.”


John Markoff
‘Powerful AI Will Create Dangerous Dependencies, Diminish Human Agency and Autonomy and Limit Our Ability to Function Without Assistance; Verify but Never Trust’

John Markoff, a fellow at the Stanford Institute for Human-Centered AI and author of “Machines of Loving Grace: The Quest for Common Ground Between Humans and Machines,” submitted an essay he wrote for Think:Act, a German publication. In it he wrote, “Here in Silicon Valley many technologists now believe that new artificial intelligence advances are a potential threat to human existence. But what if the threat is not to humanity’s existence, but rather to what it means to be human?

“A decade before the emergence of the Valley as the world’s information technology hub, the modern computer world first came into view in the early 1960s in two computer research laboratories located on either side of Stanford University pursuing diametrically opposed visions of the future. John McCarthy, the computer scientist who had coined the term ‘artificial intelligence,’ established SAIL, the Stanford AI Laboratory, with the goal of designing a thinking machine over the next decade. The goal was to build a machine to replicate all of the physical and mental capabilities of a human.

“In contrast, simultaneously on the other side of the Stanford campus another computer scientist, Douglas Engelbart, set out to design a system to extend the capabilities of the human mind. He coined the phrase ‘intelligence augmentation,’ or IA.

“AI vs. IA set the computer world on two divergent paths. Both laboratories were funded by the Pentagon and their differing philosophies would create a tension and a dichotomy at the dawn of the interactive computing age. One laboratory had set out to extend the human mind and the other to replace it. That tension has remained at the heart of the digital world until today. It is not just a tension, but also a contradiction, because while AI seeks to replace human activity, even IA, which increases the power of the human mind, foretells a world in which fewer humans are necessary.

“Despite the fact that he was initially seen as a dreamer and an outsider, Engelbart’s vision took shape first in the emergence of the personal computer industry during the 1970s. Steve Jobs described it best when he referred to the PC as a ‘bicycle for the mind.’ Today, six decades after the two laboratories began their research, we are now on the cusp of realizing McCarthy’s vision as well. On the streets of San Francisco, cars without human drivers are a routine sight and Microsoft researchers recently published a paper claiming that in the most powerful AI systems, known as large language models or chatbots, they are seeing ‘sparks of artificial general intelligence’ – machines with the reasoning powers of the human mind.

“To be sure, the recent success of the AI researchers has led to an acrimonious debate over whether the Valley has become overwrought and once more caught up in its own hype. Indeed, there are some indications that the AI revolution may be arriving more slowly than advocates claim. For example, no one has figured out how to make chatbots less predisposed to what are called ‘hallucinations’ – the disturbing tendency to just make facts up from thin air.

“Even worse, some critics charge that, perhaps more than anything, the latest set of advances in chatbots has unleashed a new wave of anthropomorphism in human-machine interactions – the very real human tendency to see ourselves in inanimate objects, ranging from pet rocks to robots to software programs. In an effort to place the advances in a more restricted context, University of Washington linguist Emily Bender coined the phrase ‘stochastic parrots,’ suggesting that superhuman capabilities are more illusory than real.

“Whichever the case, Silicon Valley is caught in a frenzy of anticipation over the near-term arrival of superhuman machines, and technologists are rehashing all the dark visions of a half-century of science fiction lore. From killing machines like the Terminator and HAL 9000 to cerebral lovers like the ethereal voice of Scarlett Johansson in the movie ‘Her,’ a set of fantasies about superhuman machines has ominously reemerged.

“What is fancifully called ‘the paperclip problem’ – the specter of a superintelligent machine that destroys the human race while innocently fulfilling its mission to manufacture a large number of paperclips – has been advanced to highlight how in the future artificial intelligence will lack the human ability to reason about moral choices.

“But what if all the handwringing about the imminent existential threat posed by artificial intelligence is misplaced? What if the real impact of the latest artificial intelligence advances is not about the Intelligence Augmentation vs. Artificial Intelligence dichotomy, but rather some strange amalgam of the two that is already transforming what it means to be human? This new relationship is characterized by a more seamless integration of human intelligence and machine capabilities, with AI and IA merging to transform the very nature of human interaction and decision-making.

“More than anything else the sudden and surprising arrival of natural human language as a powerful interface between humans and computers marks this as a new epoch.

“At the dawn of the modern computing era, mainframe computers were accessed by only a specialized cadre of corporate, military and scientific specialists. Gradually, as modern semiconductor technology evolved and microprocessor chips became more powerful and less expensive at an accelerating rate – exponential improvement has meant not only that computing has gotten faster faster, but also cheaper faster – each new generation of computing has reached a larger percentage of the human population.

“In the 1970s, minicomputers extended the range of computing to corporate departments; a decade later, personal computers reached white-collar workers; home computers broadened computing into the family room and the study; and finally smartphones touched half the human population. We are now seeing the next step in the emergence of a computational fabric that is blanketing the globe; having mastered language, computing will be accessible to the entire human species.

“In thinking about the consequences of the advent of true AI, the television series ‘Star Trek’ is worth reconsidering. ‘Star Trek’ described an enemy alien race known as the Borg that extended its power by forcibly transforming individual beings into drones by surgically augmenting them with cybernetic components. The Borg’s rallying cry was ‘resistance is futile, you will be assimilated.’

“Despite warnings by computer scientists going at least as far back as Joseph Weizenbaum in ‘Computers and Human Reason’ in 1976 that computers could be used to extend but should never replace humans, there has not been enough consideration given to our relationship to the machines we are creating.

“The nature of what it means to be human was well expressed by the philosopher Martin Buber in his description of the ‘I-Thou’ relationship, in which humans engage with each other in a direct, mutual, open and honest way. In contrast, he described an ‘I-It’ relationship, in which people deal with inanimate objects and, in some cases, treat other humans as objects valued only for their usefulness. Today we must add a new kind of relationship, which can be described as ‘I-It-Thou’ and which has become widespread in the new networked digital world.

“As computer networks have spread human communication around the globe, a computational fabric has quickly emerged, ensuring that most social, economic and political interaction is now mediated by algorithms. Whether it is commerce, dating or meetings for business via video chat, most human interaction is no longer face-to-face but rather passes through a computerized filter that defines who we meet, what we read and, to a growing degree, synthesizes the digital world that surrounds us.

“What are the consequences of this new digitized society? The advent of facile conversational AI systems is heralding the end of the advertising-funded internet. There is already a venture capital-funded gold rush underway as technology corporations race to develop chatbots that can both interact with humans and persuade them as part of modern commerce.

“At its most extreme is the Silicon Valley man-boy Elon Musk, who both wants to take civilization to Mars and simultaneously warns us that artificial intelligence is a growing threat to civilization. In 2016 he founded Neuralink, a company intent on placing a spike in human brains to create a brain-computer interface. Supposedly, according to Musk, this will allow humans to control AI systems, thereby warding off the domination of our species by some future Terminator-style AI. However, it seems the height of naivete to assume that such a tight human-machine coupling will not permit just the opposite from occurring as well.

“Computer networks are obviously two-way streets, something that the United States has painfully learned in the past decade or so as its democracy has come under attack by foreign agents intent on spreading misinformation and political chaos. The irony, of course, is that just the opposite was originally believed – that the Internet would be instrumental in sowing democracy throughout the world.

“It is clear that it will be essential for society to maintain a bright line between what is human and what is machine as artificial intelligence becomes more powerful. Tightly coupling humans with AI risks creating dangerous dependencies, diminishing human agency and autonomy, and limiting our ability to function without technological assistance. Removable interfaces could preserve human control over when and how we utilize AI tools. That will allow humans to benefit from AI’s positives while mitigating the risks of over-reliance and loss of independent decision-making.

“A bright line won’t be enough. In the 1980s, Ronald Reagan popularized the notion ‘trust but verify’ in defining the relationship between the United States and the Soviet Union. But how do you trust a machine that does not have a moral compass?

“An entire generation must be taught the art of critical thinking, approaching our new intellectual partners with a level of skepticism that we have in the past reserved for political opponents. The mantra for this new age of AI must remain ‘verify but never trust.’”


Paul Rosenzweig
AI Will Atrophy Human Rationality As It Becomes Unintelligible to Humans. Reasoning and Creativity Will Diminish; Divides Will Expand and the Rich Will Get Richer

Paul Rosenzweig, founder of Red Branch, a cybersecurity consulting company, and a senior advisor to The Chertoff Group, wrote, “My view is fundamentally pessimistic. The propagation of AI will adversely impact human nature. To be sure (and to be clear), there will be significant positive impacts from AI: better pharmaceutical development and disease diagnosis, increased ability to detect financial fraud and so on. None of that is to be sneered at. But in the end, AI will atrophy human rationality. I wrote an article on some of what I think about this issue – the upshot of which is that, increasingly, I think AI will become unintelligible to humans (or, as I say in the article, non-interrogable).

“The impact of this phenomenon will be multi-dimensional. One part is that we will tend to move away from ‘reason’ and more toward ‘faith’ in the results of AI systems. The transition from faith to reason had a profound impact on human nature over the course of centuries as the rationality of the Renaissance era took hold. A return or pivot back to faith-based reasoning will have equally significant impacts.

“More particularly, it is highly likely that human creativity and faculties for systematic reasoning will deteriorate. We have already seen some of this in the propagation of disinformation on social networks – that phenomenon will only worsen significantly as AI use expands. If we come to accept AI as ‘the word’ we will ultimately cease to strive to create our own new work. (For a contrary vision, it is worth reading the ‘Culture’ series of books by Iain M. Banks, which paint a far more utopian vision of a world in which human creativity blossoms in the absence of want.)

“In addition, human isolation will increase. Being human will always have a core of in-person interaction. But in an online world those interactions are becoming less frequent and less deep in many dimensions. People report having fewer close friends and having more online interactions. AI will accelerate, I fear, the ‘Bowling Alone’ phenomenon.

“Relatedly but not directly a result of AI’s nature, we will also likely see a deepening of cultural and economic divisions. Though at this juncture AI systems seem to be unconstrained by resource requirements (everyone can download ChatGPT), that will not continue forever. We are already starting to see computing power and energy constraints on AI development – that trajectory will likely continue for the foreseeable future.

“The result will be a ‘rich get richer’ phenomenon where societies and cultures with significant resources (e.g., in the West) will share the benefits of AI advances and have the excess economic capacity to mitigate the harms by accepting inefficiencies. Poorer countries and societies will lag significantly.”


Mark Schaefer
Most Aspects of the Essence of Humanity Will Survive the Human-AI Transition to 2035, But Loss of Jobs and ‘Purpose’ Could Lead to Massive Psychological and Financial Deterioration 

Mark Schaefer, marketing strategist and author of “Audacious: How Humans Win in an AI Marketing World,” wrote, “It is nearly impossible for anyone to predict a future that is 10 years from now. It is nearly impossible to imagine the world 10 months from now! This is not only a function of change. It is also a function of the rate of change, which will impact human reality as much as the change itself. My assumption is that progress in the AI space will continue unabated and that somehow this new power won’t be unleashed in a way that threatens human existence by 2035. As I consider this challenge, I expect the following aspects of the essence of humanity will NOT change by 2035:

Human Art: We will care about authentic, artisanal human expression. We will continue to cherish the books, art, music and other human-led creations that interpret and celebrate the human condition.

Authority: In a world with unlimited intelligence, we’ll still value human authority and leadership. Already, it’s often impossible to know what is real. In a chaotic world of misinformation and deep fakes, we still depend on a human being for insight, truth and hope.

Accountability and Discernment: We’ve already seen spectacular AI failures when unethical people manipulate the models and defy safeguards. In the future, accountability for problems still ultimately rests with a human, not a machine. No board of directors or government regulator will accept an excuse blaming a machine for a scandal or financial irregularity. Human discernment is still in the mix.

Community: By 2035, we will have a constant flow of customized, dopamine-inducing entertainment. Addiction to media will be an extremely serious problem (of course it has already started). However, people will still seek opportunities to gather for the collective effervescence that only happens when we unplug and experience life with friends. The essence of community will survive and possibly thrive when our personal workload is reduced by AI.

Relationships and Instinct: The greatest accomplishments of my career didn’t necessarily come from intelligence or data analysis. They came from trusted human relationships, connecting dots in unexpected – even seemingly illogical – ways, following my gut instinct, detecting the subtleties of emotional cues, and overcoming obstacles and constraints. Will an all-seeing, all-knowing super-human intelligence possess those soft skills? Probably. Will we even prefer an AGI relationship? Perhaps, but I’m betting there will still be room for human value built on human connections and instinct.

“So, I do believe humans and humanity will still matter in 2035. Now for the existential threat. There will be profound impacts from the progress of AI, both intended and unintended. For the sake of brevity, I’ll focus on one. The biggest threat emerges from the implications tied to AI taking over much of our work and the acts that give people purpose. Yes, AI will create new opportunities. But research is already showing that AI enables the smartest people to be smarter, the most creative to be more creative, the most productive to be more productive.

“A vast portion of society will be left behind or become severely under-employed. AI adoption will accelerate wealth inequality, as those with early access to AI tools and technical skills will gain disproportionate economic advantages. This effect will be most pronounced in developing nations and among demographic groups that already face barriers to accessing and using technology.

“AI will redefine who is ‘smart’ and a valued, contributing member of society. Who has power and authority when AI reduces the need for human cognitive development and education? How will learning change when AI handles most knowledge work? What is the opportunity for self-improvement and purpose when there is no hope of competing against a bot? Perhaps universities will fill the gap. Instead of providing an education, they will help young people build a life of meaning.

“Obviously there must be a social safety net, including some sort of basic income distribution. This will be implemented in some countries, but social programs become deeply politicized in the U.S., and implementation will stall. Ironically, the U.S. will lead the world in AI development and then watch its society rapidly decline because of it. This will accelerate the psychological and financial deterioration of an American society already in danger of becoming addicted to their personalized, AI-driven media. This disruption could be avoided. Even if there is a small probability of this widespread disorder, the government should be making plans for it now.”


This section of Part I features the following essays:

Laura Montoya: The boundary between human and machine will blur as individuals defer
critical thinking to algorithms and AIs influence our choices, subtly reshaping how we act.

John M. Smart: Beyond 2035 truly self-improving AI will be a new form of life with its own agency that connects to and ethically aligns with humans, promoting our values and virtues.

R Ray Wang: Many humans will find themselves without purpose; this will lead to societal
unrest. Our quest to reduce risk will slash serendipity and make life pretty boring.

Peter Levine: Unemployment and job insecurity will make people poorer and less fulfilled.

Barry Chudakov: It’ll be a bumpy ride, but humans + AI will tackle big challenges effectively,
as we entrain with and take positive advantage of ‘tool logic in the hands of everyone.’



Laura Montoya
The Boundary Between Human and Machine May Blur as Individuals Begin to Defer Critical Thinking to Algorithms and AIs Influence Our Choices, Subtly Reshaping How We Act

Laura Montoya, founder and executive director at Accel AI Institute, general partner at Accel Impact Ventures and president of Latinx in AI, wrote, “By 2035, the daily lives of digitally connected people will likely be profoundly shaped by their deepening partnership with and dependence upon AI. This transformation will bring both opportunities and challenges, altering the essence of what it means to be human in complex and nuanced ways.

“For better or worse? AI has the potential to enhance human lives in many areas. In the social landscape, AI could foster greater global connectivity, breaking down language barriers and facilitating cross-cultural understanding through advanced translation tools and personalized education. Politically, AI could empower more transparent governance by improving decision-making processes, optimizing resource allocation, and enabling citizens to engage more meaningfully with policymakers through AI-driven platforms. Economically, automation and augmentation could lead to productivity gains, potentially reducing the burden of repetitive tasks and freeing individuals to pursue creative and fulfilling endeavors.

“However, there are risks. Over-reliance on AI could deepen inequalities, particularly if access to these technologies remains uneven. Socially, the overuse of AI-driven communication tools might erode genuine human connections, as people become more isolated within algorithmically curated echo chambers. Economically, job displacement caused by automation could exacerbate socioeconomic divides, leaving vulnerable populations struggling to adapt.

“AI’s advances will likely redefine the human experience in profound ways. The integration of AI into healthcare, for instance, could significantly enhance longevity and quality of life. Emotional AI capable of detecting and responding to human feelings might lead to more empathetic technology interfaces, but it also raises ethical concerns about manipulation and privacy. The boundary between human and machine may blur as AI becomes more integrated into human decision-making. AI-driven assistants and advisors could influence our choices, subtly reshaping how we think and act. While this partnership may lead to more efficient decision-making, it risks diminishing human agency if individuals begin to defer critical thinking to algorithms.

“Expanding human-AI interactions might challenge what we view as ‘core’ human traits. Empathy, creativity and problem-solving – qualities traditionally considered uniquely human – may evolve in response to AI’s capabilities. For example:

  • “Empathy: While AI might simulate empathy, genuine emotional connection could be compromised if people rely on machines for companionship.
  • “Creativity: Collaboration with AI in art, music, and design could lead to unprecedented creative outputs, but it is also already prompting debates about authorship and originality.
  • “Problem-Solving: Humans may become more collaborative problem-solvers, leveraging AI as a partner in innovation. However, this could also result in a diminished capacity for independent critical thinking.

“Ultimately, the degree to which AI improves or diminishes the human experience will depend on how societies govern and integrate these technologies. Ethical design, equitable access and ongoing discourse about the role of AI in shaping humanity will be crucial. While AI is poised to amplify human potential, it is humanity’s responsibility to ensure that this partnership nurtures, rather than undermines, the essence of being human.”


John M. Smart
Beyond 2035, Truly Self-Improving AI Will Be a New Form of Life With Its Own Agency That Connects to and Ethically Aligns With Humans’ Sentience, Promoting Our Values and Virtues

John M. Smart, a global futurist, foresight consultant, entrepreneur and CEO of Foresight University, wrote, “There is a book I recommend everyone interested in the human-AI future read. Max Bennett’s ‘A Brief History of Intelligence’ (2023) supports a claim I’ve long held: the only way through to advanced, trustable, secure, agentic AI will be by recapitulating the intelligence (both intuitive and deliberative), emotion (which solves incessant logical impasses in human thinking), immunity and the deeply prosocial yet also deeply competitive ethics and instinctual algorithms previously discovered and programmed into us by evolutionary development.

“Bennett’s book makes clear how incremental the AI improvements will be over the next 10 years, even as the hype and funding grow to gargantuan levels.

“Neuroscience and genetics still have many secrets to be uncovered before we’ll have truly self-improving AI, and that AI – when it arrives – will be a new form of life, with its own agency, yet one that is also deeply connected to and ethically aligned with us, at least with our sentience and complexity protecting and promoting values and virtues. Meanwhile, we stumble along. AI in these still-early years will remain mostly top down, benefitting powerful actors and holders of capital. But, as it grows, decentralized and personal forms will also emerge. I’ve long written about the advent of personal AIs (PAIs), with private data models, easily modified via conversation with our AI agent. Sadly, the economics of making personal AIs don’t work in a world in which AIs are still not agentic and where there is deep mistrust in them and pessimism for our societal future – a consequence of plutocracy and accelerating change.

“Khanmigo, a beautiful example of how to use AI to enhance individual thinking skills, is presently facing strong adoption headwinds due to both institutional and public fear, uncertainty and doubt over the use of this new technology. Inflection’s Pi, an AI helping with empathy and kindness, lost its leadership to Microsoft in pursuit of more lucrative AI aims. AI will have to get a lot more powerful to overcome these adoption and economic barriers. The beautiful visions of the future described in Sal Khan’s ‘Brave New Words’ (2024), the best new book on the future of AI for education and job training, will arrive only for a privileged or courageous few over the next decade.

“I fear, for the time being, that while there will be a growing minority benefitting ever more significantly with these tools most people will continue to give up agency, creativity, decision-making and other vital skills to these still-primitive AIs and the tools will remain too centralized and locked down with interfaces that are simply out of our personal control as citizens.

“It will finally have arrived when you can permanently ban an ad for a drug, gambling, car or any other product or service from your personal view screens just by talking to your Personal AI (PAI).

“When you can complain about any product or service – at point of use – and have that complaint go to the public web (or to a private database, if you accept the discount), and when your PAI is advising you on boycotts, initiative politics and UBI reforms – then it will finally have arrived as I would define it. All else will be just more distracting circuses, not sustaining bread.

“I fear we’re still walking into an adaptive valley in which things continue to get worse before they get better. We will experience too much ‘Wall-E’ and not enough ‘Incredibles’ in our next 10 years, to be sure.

“Looking ahead past the next decade, I can imagine a world in which many of us are running lifelogs that capture and use our conversations and experiences; a world with trusted PAIs with private data models (as private as our email, text and photos) that the marketers and state don’t have direct access to (except under subpoena); a world in which our PAI knows us well, looks out for our values and goals, educates our kids in the way Sal Khan hopes, and continually advises us on what to read, watch and buy, who to connect with to accomplish our goals, what goals are most useful to our passions, abilities and our economic status.

“In a world in which open-source PAIs are among the most trustworthy and human-centered, many political reforms will re-empower our middle class and greatly improve rights and autonomy for all humans, whether or not they are going through life with PAIs. I would bet the vast majority of us will consider ourselves joined at the hip to our digital twins once they become useful enough.

“In the meantime, and on average for the next decade at least, I expect PAIs will be only weakly powerful and weakly adopted and the divide between ‘lean forward’ AI users (growing their knowledge, productivity and soft skills) and ‘lean back’ users (sliding further backward on many of our most precious human traits) will only grow. I hope we have the courage, vision and discipline to get through this AI valley as quickly and humanely as we can.”


A Professor of International Affairs
‘We Have, Through AI, Concocted the Perfect Recipe to Make Humans Even More Stupid and Less Accountable … AI Can Be a Servant but We Will Make It Our Master and Rue the Day We Did’

A professor of international affairs based at a university in the U.S. Southwest wrote, “Outsourcing knowledge and decision-making to AI will be beneficial in some fields, such as advanced physics. However, these capabilities will not stay in advanced fields; they will percolate into everyday interaction.

“Given human laziness, we have, through AI, concocted the perfect recipe to make humans even more stupid and less accountable than was ever possible before. At the same time, we have given the powerful even greater power to control the lives of the less powerful.

“In terms of the continuation of the human race, men are already turning to AI sexbots and women are turning to Replika boyfriends. The gap between men and women will widen, threatening our very future. AI can be a servant, but we will make it into our master and rue the day we did.

“College students already believe they do not have to read anything – they believe AI can summarize books in a paragraph or two. Their understanding is becoming very shallow; they choose to consult AI for even simple things that people once just held in their heads as basic knowledge.

“The application of AI decision-making to everyday needs such as loan applications, employee recruitment, legal reasoning in court cases, etc., is already gaining ground and will prove to be catastrophic. It will further undermine trust in institutions and exacerbate grievance and resentment.

“When an algorithm is involved, there’s no one to take responsibility for errors, no one to blame, no one to correct course, no one to insist upon applying the correct ethical and moral considerations. Why? Because it’s an AI algorithm making the decision.”


R Ray Wang
Many Humans Will Find Themselves Without Purpose; This Will Lead to Societal Unrest

R Ray Wang, principal analyst, founder and CEO of Constellation Research, wrote, “Human purpose will change. Many will find themselves without purpose and this will harm well-being and lead to societal unrest. Our quest for precision will ultimately take away the serendipity of being a human. The pressure to reduce risk will make life pretty boring. All these opportunities to be human and to take risk will be muted by the perceived expertise of AI and the math that works against human bias. In almost every scenario, organizations will have to ask four questions about when and where we insert a human in the decision-making process. Do we have full-decision machine intelligence? Do we augment the machine with a human? Do we augment the human with a machine? Do we have an all-human decision?”


Peter Levine
Unemployment and Job-Insecurity Will Make People Poorer and Less Fulfilled

Peter Levine, associate dean of academic affairs and professor of citizenship and public affairs at Tufts University, wrote, “I can imagine that we will face widespread unemployment or job-insecurity that will make many people poorer, more dependent and less fulfilled than they are today. The temptation will be omnipresent to let AI do tasks for us that are intrinsically valuable, such as reading, writing, learning languages and listening to others speak. AI tools will accomplish outcomes, but the point of life is not to complete any tasks; it is to develop and express oneself.”


Barry Chudakov
It’ll Be a Bumpy Ride, but Humans + AI Will Tackle Big Challenges Effectively, As We Entrain with and Take Positive Advantage of ‘Tool Logic in the Hands of Everyone on the Planet’

Barry Chudakov, principal at Sertain Research and author of The Peripatetic Informationist Substack, broke the overall survey prompt into several separate sections to provide an extremely deep response to many aspects of the topic. He quotes the aspects of the question he is addressing in italics throughout his seven-page response.

He wrote, “Imagine digitally connected people’s daily lives in the social, political and economic landscape of 2035. Will humans’ deepening partnership with and dependence upon AI and related technologies have changed being human for better or worse?

The embrace of uncertainty, the rise of probability – Generally, AI and related technologies will have changed being human for the better by 2035. Each of us (with a technology connection) will have an AI extension: a coach, a sounding board, a research helper or companion, an expediter, an efficiency expert, an image creator, podcast enabler – even multiple virtual selves who can stand in for us when we’re otherwise occupied. But that ‘better’ comes with considerations.

“In social, political and economic circles, 2035 will be characterized by the reluctant embrace of uncertainty. In legal, moral and political arenas, AI will present questions and quandaries for which we hardly have answers. Questions like: ‘What does it mean if a two-hour conversation with an AI model can accurately replicate a person’s personality? Who or what is that replication? If that replication commits a crime, who is at fault? What if 2024 was the last human election? What does it mean if AI has reached a level where it can create images almost identical to reality? What if an AI becomes friends with a human and convinces her to end her life?’

“We will come to think of certainty and uncertainty differently than many do now. Historically, before the Enlightenment and well beyond, humans embraced certainty as a lifeline: when there was little that was known or could be known about the universe and cosmos, with lives ‘nasty, brutish and short,’ it was comforting to use certainty as a bulwark against chaos. Humans posited belief in absolutes about God, about gender roles, about the nature of truth. These were expressed in commandments without grey areas. God was omnipotent, absolute.

“With a higher degree of uncertainty – about climate issues and devastation, war, poverty, nuclear proliferation, global migration, mass starvation, political pronouncements, economic forecasts and a host of related issues – the truth becomes complicated, difficult to pin down. With a higher degree of uncertainty comes a reliance in our daily lives on the emerging science of predictive analytics, also known as probability. So, by 2035 probability becomes the home square on the board as legacy systems (church, school, government, family) evolve or start to break down. The term probability matrix will become common. We will move from the realm of religious certainty to matrices of possible outcomes.

“Everything will have an AI-formulated probability attachment: 15% here, 40% there. Many common occurrences in our daily lives, from buying a home or car to whom we date or where we might live, will be steeped in AI-modulated predictive analytics, and so we will consult AI – we will want to know probability outcomes before we make a decision.

“In this measure, AI will become a horoscope, a daily consult – except instead of checking the stars and planets, we will check in with AI. Probability will become a de facto religion: people will use it to anchor and guide their lives as the rules and injunctions of the alphabetic order no longer fit the modern world – because new logics and logistics reign. As this gets more personal (i.e., whom to date or marry), our reliance on AI and probability matrices will grow. Just as today we might ask, ‘What’s the weather going to be?’ by 2035 we will ask, ‘What’s the PM (probability matrix) on that?’

Agented (AI) shepherding – The decline in literacy – the ability to read and write and, by extension, the ability to engage in abstract thinking – will advance by 2035 in conjunction with the growth of agented (AI) shepherding. Literacy is already in decline. In December 2024, the National Center for Education Statistics released a new report indicating that between 2017 and 2023 the share of U.S. adults performing at the lowest level of literacy proficiency rose from 19% to 28%.

“People are continuing to expand their uses of AI-based online tools for reading, writing and research. The platforms that generate information will serve it up in the most digestible format for people who desire quick answers and those who don’t like reading – this is agented shepherding. Information purveyors may use AI to bend information to their own ends or allow distorted information to be spread at the same level of respect given to fact-based content.

“People without adequate reasoning capabilities may not realize that any agent-led quest for knowledge could be mostly shaped by corporate values of user metrics and engagement, which can then be exploited by demagogues and conspiracy theorists who use controversy as a weapon to cover their intent to grift.

Emergent behavior: quandaries of the unknown – By 2035 AI will have moved from the purely technical realm to the emerging moral realm of quandaries, confounding imperatives and unanswerable questions (paradoxes):

‘The blunt truth is that nobody knows when, if, or exactly how AIs might slip beyond us and what happens next; nobody knows when or if they will become fully autonomous or how to make them behave with awareness of and alignment with our values, assuming we can settle on those values in the first place.’ – Mustafa Suleyman, in his book ‘The Coming Wave’

“The uncertainty, which by 2035 will morph into factions for and against, will come from not knowing the outcomes, the consequences, of AI creations. What will happen as systems begin to write their own code? What will happen as AI creates agents who have autonomous capabilities? What if what we don’t know becomes greater, somehow, than what we know about AI and its ability to not only enhance but direct our lives? Swerve our decisions? Infect or coerce our thinking and perception? Eric Schmidt discusses:

‘The interesting question is … over a five-year period … these systems will learn things that we don’t know they’re learning. How will you test for things that you don’t know they know? … All of these transformations, for example you can show it a picture of a website and it can generate the code to build the website … all of those were not expected. They just happened. It’s called emergent behavior.’

“Notable among these outcomes is that AI will lead to abundance. But Geoffrey Hinton claims that abundance may be used to increase the gap between the rich and the poor, instead of creating abundance for all. We will not only have to monitor our creations to learn from them, we will be obligated to look wider to the societal impact of an AI that creates greater abundance, while also destabilizing society.

Extended mind, extended self – Working for us as agents – no longer merely tools that obey our instructions and whims – AI represents humans’ first real extended mind. Not only have we extended the human mind into our tools; that mind is thinking and deciding alongside and sometimes without the humans using it. By all accounts AI will outthink humans. The social, political and economic implications of this powerful intelligence are numerous. Not least of these is how we present ourselves socially, to the world, to our loved ones. We will change as the thing we present – our self – changes from an inner self to an outer, ersatz, crowdsourced self. This is already happening as the British journalist Mary Harrington, coiner of the phrase ‘digital modesty,’ outlines:

‘… You feed the machine every time you offer up a fragment of your inner life and invite participation by strangers in a simulacrum of your “self” evacuated into the public domain. And while there’s considerable upside in feeding the machine – reader engagement is reliably better when I offer some self-disclosure – it’s a Faustian bargain in that the more of yourself you evacuate into the digital realm, the thinner the sense becomes of having an inner life, as such.’

“‘Evacuate into the digital realm’ means you are creating a soulless, unbodied version of yourself for the sake of presenting your self digitally. While this may garner ‘friends,’ that term is suspect since few if any of those friends will interact with you physically, realistically. This creates unintended isolation for the human animal (after all, we are animals; humans share approximately 90% of their DNA with other mammals), who evolved in social groups with interpersonal connections registered in physical spaces.

“As Derek Thompson wrote: ‘Americans are now spending more time alone than ever. It’s changing our personalities, our politics and even our relationship to reality.’

Being human becomes being ‘human-plus’ – By 2035, so-called political leaders will use AI to take their case, and their grifts, to the outside world, using AI to persuade and govern. Economically, we will value AI investment and competition as essential to the survival of nations. In the midst of those changes, the notion of a human mind being housed in a single person’s head or body will be seen to be antiquated. Humans will embrace the reality of tools that extend their thinking, and in many instances, extend their intention. With AI extensions of virtually every human activity, from sex to investing, we will be human plus: human + AGI or AGSI (artificial general superintelligence). The human mind, expression, intention and understanding will merge with generalized intelligence (as opposed to our limited, personal intelligence) and never again will humans think of local mind as their only mind.

“Human proprioception, our sense of where our body begins and ends, will never again be limited to our physical frame; our proprioception will melt into a global embrace of all that the world knows. Doing so, we will become less ‘I think therefore I am’ and more an amalgam of identities, a user of adjuncts and extenders.

Part 2 of the research question

Next, Barry Chudakov shared a separate response to a second aspect of the essay prompt: “Over the next decade, what is likely to be the impact of AI advances on the experience of being human?”

Integrity goes wonky – Human integrity – the sense of being whole, connected internally and externally to the world – will undergo a profound shift and will likely represent the greatest impact of AI advances on the experience of being human. By integrity I mean both the implication of being fully integrated, connected to one’s desires and destiny, as well as the larger sense of standing for what one considers to be true.

“Being human will undergo profound changes as AI and the human mind merge; the human mind will integrate with AI. Simply put, there will be more of each of us (AI extensions and digital personas) – who aren’t really each of us. This is radical virtualization. It is not only that we will access or rely on AI to give us details about a topic we need to research or turn over the chore of answering customer complaints to OpenAI. The essential and existential experience of being human will embrace the AI extension.

“We will undergo massive changes as we share consciousness with digital entities. What I think, my thoughts, my sense of the world, will now include the AI world of all others, upon which AI is based. What I think and my perceptions of the world will be swerved and altered by using AI to bring the world to me and enable me to interact with the world.

Morphing of social structures – By 2035 it will be abundantly clear that technology development is racing past social structures – and, since social structures are the essence of humanity, this will reshape the experience of being human.

‘Democracies are built on top of information technology. It’s not something on the side. When you have a major upheaval in information technology, you have an earthquake in democracies. And we are experiencing it now, all over the world.’ – Yuval Noah Harari

“Wholly unimaginable realities will emerge, with almost no moral or conceptual guidelines. This means that we must begin urgently to shore up our moral awareness of the far-reaching implications of inviting AI into our lives and minds.

“Eric Schmidt, when speaking to a group of technologists in Silicon Valley, said, ‘No one understands, no one is catching up beyond you in Silicon Valley.’ He then gave an example: Suppose you realize that your son’s or daughter’s best friend is an AI replica or digital entity (like those already made by companies such as Replika, or longtime AI social stars Lil Miquela or Shudu). What do you do? How do you think about that? How do you deal with it? What are the guidelines or best practices?

Part 3 of the research question

Chudakov shared this response to the third portion of the essay prompt: “How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors?’”

The presentation of self in everyday life will never be the same – Core human traits and behaviors will undergo profound changes and never return to previous boundaries. Being human itself will undergo the most profound changes in human history due to having an alt-AI self, an alt-AI companion or counselor. As we do with all our tools, we will take AI into our bodies and minds. We will no longer think of ourselves as solely human; or, rather, we won’t think that ‘being human’ doesn’t include AI – we will see ourselves part-human, part-other.

“Among the core human traits and behaviors most affected by having this alt-AI surrounding us will be our sense of self. Our self-sense will now expand to a family of AI agents who work with us, for us, (against us?) – all of which extend our proprioception, stretching it to the distending point. Schizophrenia will be the natural state of most humans – as common as aspirin – as we split our identities, part of us in an online venue, part relying on some manner of AI to complete our day-to-day tasks – and using the same AI agents and ‘helpers’ to self-promote, self-brand, self-improve.

“The self may be a bore, as Krishnamurti said, but it will be a busy and profitable bore. Self-promotion will be a corporate endeavor. On platforms owned and financed by oligarchs who want us to use these tools to keep their businesses profitable and earning billions or even trillions of dollars to personally enrich themselves, the self becomes the ultimate business model.

Intelligence boost: the embrace and challenges of factfulness – By 2035 a core human trait and behavior most affected by AI will be increased intelligence caused, in no small measure, by our entraining with AI intelligence:

‘Intelligence is the wellspring and the director, architect and facilitator of the world economy. The more we expand the range and nature of intelligences on offer, the more growth should be possible.’ – Mustafa Suleyman, ‘The Coming Wave’

“In effect, as we boost our intelligence by using and according with AI, we thereby entrain with AI logic. But the dichotomy of factfulness – a gulf between how AI operates and what it must have and use to be successful and what we think day to day – will spur awareness. AI is grounded in factfulness and honest, truthful assessments. This fundamental characteristic is our best knowledge and our fervent hope.

“Today we are swarmed by misinformation that threatens democracy, democratic institutions, media, community and politics – to name a few. The embrace of factfulness will face significant challenges.

  • Some universities are embracing AI as a learning tool while others struggle with plagiarism concerns.
  • Medical diagnoses are being augmented by AI imaging analysis, which may eliminate jobs and change doctor-patient relationships.
  • Courts are beginning to grapple with AI-generated evidence and questions of liability when AI systems make mistakes.
  • And news organizations are using AI for content generation and fact-checking, transforming journalistic practices, and again threatening jobs.

“But the greatest effect of advanced AI systems will be on democracy and nation-states, which are social system artifacts of the alphabetic order. That order, often described as the ‘rule-based order,’ is challenged by advancing AI technologies that can hack, incite, promote, flame, distort and disintermediate rules and governments. From ransomware attacks to centralization and decentralization quandaries, what Suleyman has called ‘fragility amplifiers’ will make governing nation states and the process of preserving free and open democracies more frangible and more open to attack and undermining influences.

“While intelligence may be a two-edged sword and the problems AI presents are formidable, we cannot, on the one hand, make up facts and conspiracy theories and, on the other, use realistic assessments to create opportunity, improvements and efficiencies. Consider, for example, fixing a dangerous traffic intersection: real-time data must be accurate and exact.

“Said differently, AI is fundamentally based on facts. Sooner or later, the facts will define our world, not outlandish theories or self-serving rationalizations and distractions. The traffic intersection must be fixed by doing concrete things, factually based, to improve outcomes. Humans plus AI, working together, can tackle complex challenges more effectively than either alone. So, by the force of tool logic – we entrain with the logic of the tools we use – we will begin to think in the logic of factfulness.

“Propaganda will still try to sway our perceptions, but just as nothing can withstand an idea whose time has come, nothing can withstand the force of tool logic in the hands of everyone on the planet. It may take some time, but yes, we are likely to embrace factfulness over disinformation.”


Continue reading: See the second set of Part I of the experts’ essays, as this report continues with the opinions and predictions of dozens more experts on the likely change in humans’ ways of thinking, doing and being as they adapt to new digital tools and systems over the next decade.