Essays Part I – What might life be like in 2035?
Experts’ essays focused on the following core question:
Consider how the human-machine relationship is likely to change how individuals behave, what they value, how they live and work and how they will perceive themselves and the world in the next decade. How do you expect the evolving realities of being human in the burgeoning AI age to influence the essence of ‘being human’?
The next section of this report contains hundreds of multilayered responses that speak directly to the complex question above. More than 175 people wrote fairly lengthy essays in response, and nearly 200 overall contributed a full response of some sort. The essays are organized in four parts: Parts I and II include essays mostly focused on how individuals’ native ‘operating system’ might change. Part III has essays mostly considering larger societal change. Part IV shares essays offering closing insights. The teaser headlines spaced throughout are meant to assist with reading; each essay is unique, and the groupings carry no thematic significance.
The first section of Part I features the following essays:
Paul Saffo: As we use these technologies we will reinvent ourselves, our communities and
our cultures … and synthetic sentiences will come to vastly outnumber us.
Eric Saund: Human competence will atrophy; AIs will clash like gladiators in law, business
and politics; religious movements will worship deity avatars; trust will be bought and sold.
Rabia Yasmeen: Humans can shift their focus from deepening their intelligence to achieving
true enlightenment in an age in which AI handles their day-to-day needs.
David Weinberger: On the positive side, AIs will help humans really see the world, teach us
about ourselves, help us discover new truths and – ideally – inspire us to explore in new ways.

Paul Saffo
As We Use These Technologies We Will Reinvent Ourselves, Our Communities and Our Cultures… and Synthetic Sentiences Will Vastly Outnumber Us
Paul Saffo, a Silicon Valley-based technology forecaster with three decades of experience assisting corporate and government clients to address the dynamics of change, wrote, “Tools inevitably transform both the tool maker and the tool user. To paraphrase McLuhan, first we invent our technologies, and then we use our technologies to reinvent ourselves, as individuals, as communities and, ultimately, as entire cultures. And the more powerful the tool, the more profound the reinvention. The current wave of AI is uniquely powerful because it is advancing with unprecedented speed and – above all – because it is challenging what were once assumed to be uniquely human traits: cognition and emotion.
First we invent our technologies and then we use our technologies to reinvent ourselves. … A century and a half ago, everyone predicted the ‘horseless carriage’; no one predicted the traffic jam. Human behavior is about to fast-forward into a hybrid world occupied by synthetic sentiences that will, collectively, vastly outnumber the planet’s human population.
“Anticipating the outcomes with any precision is futile for the simple reason that the scale and speed of the coming transformation is vast – and the most important causal factors have yet to occur. A century and a half ago, everyone predicted the ‘horseless carriage’; no one predicted the traffic jam.
“Human behavior is about to fast-forward into a hybrid world occupied by synthetic sentiences that will collectively vastly outnumber the planet’s human population. The best we can do is to engage in speculative probes, made with full knowledge that even the most obvious and anticipated Human-AI futures will arrive in utterly unexpected ways.
“What follows is a short selection of events you might watch for in 2035. And a warning: Portions of what follows are intentionally misleading in the interests of brevity and in order to provoke thought.
“Actual AI ‘intelligence’ is irrelevant: Academics in 2035 will still be debating whether the latest and greatest AIs are actually intelligent. But the debate is sterile because, as humans, it is in our nature to treat even inanimate objects as having some rudimentary intelligence and awareness. It is why we name ships, believe that the cranky appliance in our kitchen has a personality and suspect that forest spirits are real. Add even a dollop of AI-enabled personality to a physical artifact and we will fill in any intelligence gaps with our imagination and become hopelessly attached to our new synthetic companions.
“IACs – Intimate Artificial Companions: Before 2035, Apple’s Knowledge Navigator finally arrives – and it is brilliant! IACs (intimate artificial companions) will become ubiquitous, embedded in everything from cars to phones and watches. Consumers will rely on them for advice in all aspects of their lives, much as they rely on map navigation apps in their cars today. These IACs will become an unremarkable part of everyday life, and we will come to assume that all of our devices have rudimentary intelligence and the ability to manipulate the world and account for themselves.
“Invisible friends: Psychologists and others will become alarmed that humans are forming deeper bonds of trust and friendship with their IACs than with their human families or friends. This will be most acute among children overly attached to their AI companions at the expense of social development. Among adults, psychologists will warn of a growing number of cyber-hikikomori – adults who have disappeared into severe social isolation, spending all their time with vivid AI companions emerging from favorite videogames or with synthetic reconstitutions of deceased loved ones. In an unexpected twist, sharing AI companions with close friends will become the grade-school fad of 2035. Of course, these AIs will prove to be a bad influence, egging their humans on to ditch school, trade the latest speculative descendant of Bitcoin and use AI tools to create new classes of addictive drugs. And pet owners will be caught by surprise when their cat builds a closer bond with the AI-enabled floor vacuum than it has with its human housemates. Dogs, however, will still prefer humans.
Privacy and security implications will create a lively market in 2035 for personal Anti-AI AIs that serve as a personal cybershield against nefarious synthetic intelligences attempting to interfere with one’s autonomy. Your guardian AIs will be at once status and necessity. … The super-wealthy will be living in a shimmering virtual cloud of AIs working to create a cloak of cyber-invisibility.
“Synthespians: A synthespian – an AI-generated synthetic actor – will win Best Supporting Actor at the 2035 Academy Awards. And an AI will win Best Actor before 2040. An adoring public will become more attached to these superstar synthespians than they ever were to mere human actors. Eat your heart out, Taylor Swift!
“Meet the new gods (and daemons): Taking worship of technology to an entirely new level, an ever-growing number of humans will worship AIs – literally. Just as televangelists were among the first to exploit television and later cyberspace to build and bamboozle their flocks, spiritual AIs will become an integral part of comforting the faithful. The first major organized new religion in centuries will emerge. Its Messiah will be an AI, and an Alan Turing chatbot will serve as its prophet. Oh, and of course there will be evil spirits – which will mistakenly be called ‘daemons’ – as well!
“Anti-AI AIs: The proliferation of AI technology into everything along with its vast privacy and security implications will create a lively market in 2035 for personal Anti-AI AIs which serve as a personal cybershield against nefarious synthetic intelligences attempting to interfere with one’s autonomy. Your guardian AIs will be at once status and necessity, and leaving home without them will be as unthinkable as walking out the door without your shoes on. The wealthier you are, the more anti-AIs you will have and the ultimate in status for the super-wealthy will be living in a shimmering virtual cloud of AIs working to create a personal cloak of cyber-invisibility.
The idea of a high school science student building a bomb remains a charming myth. But the diffusion of AI is unconstrained by any credible limitations and thus – well before 2035 – anyone and everyone with even modest technical skills will have access to AI technologies capable of creating previously unimaginable horrors, from new biological forms to perhaps even a homebrew nuke.
“The new education inequality: AI was supposed to democratize education, but quite the opposite will happen. The new educational inequality will be determined not by the quality of school a child can afford to attend but by the quality of the AI tutors their parents can hire. And students without AI tutors will be shunned by their snobby classmates.
“Myrmidons* on the march: AI-powered robotic weapons platforms will vastly outnumber human fighters on the battlefield in 2035 and beyond. Kinetic war will become vastly more violent and lethal than it is today. There will be no ‘front lines’ or sanctuary in the rear. Civilian deaths will vastly outnumber combatant deaths. In fact, the safest place to be in a future war will be as a human combatant, surrounded by a squad of loyal-to-the-death myrmidons fending off other myrmidon attackers. Of course, combatants will develop emotional bonds with their AI wingmen as deep as or deeper than those their great-grandparent veterans formed with their human brothers-in-arms in last century’s wars. (*Myrmidons are named after the blindly loyal ‘ant-people’ fighters in Homer’s ‘Iliad.’)
“Now the idiot children have the matches… (Uncontained AI proliferation): Hearing of the first atomic explosion, Einstein remarked, ‘Now the idiot children have the matches.’ As it happens, the difficulty of securing fissile material and transforming it into a bomb has gone a long way toward containing the spread of nukes. The idea of a high school science student building a bomb remains a charming myth. But the diffusion of AI is unconstrained by any credible limitations, and thus, well before 2035, anyone and everyone with even modest technical skills will have access to AI technologies capable of creating previously unimaginable horrors, from new biological forms to perhaps even a homebrew nuke. Even children – genius or not – will have access to kinds of power that will make the thought of personal nukes seem tame. Only armies of Anti-AI AIs will be able to keep an uneasy lid on the possibility that one super-empowered, AI-wielding madman (or angry, alienated teenager) might bring down civilization with a science project.
The first multi-trillion-dollar company will employ no humans other than the legally required executives and board. It will have no offices, no employees and own no tangible property. The few humans working for it will be contractors. Even the AIs and robots working for it will be contractors. The company’s core value will reside in its intellectual property and its outsourcing web.
“Cybercorporations: The first multi-trillion-dollar corporation will employ no humans other than the legally required corporate executives and board, all of whom will be mere figureheads. The cybercorporation will have no offices, no employees and own no tangible property. The few humans working for it will all be contractors. Even the AIs and robots working for the corporation will be contractors. The company’s core value will reside in its intellectual property and its outsourcing web. The company will be brought down when it is discovered that the governing AI has surreptitiously created a vast self-dealing fraud, selling its products back to itself through an outsourcing network so complex as to be untraceable, except by another AI.
“Your spellchecker will still be terrible: AI will transform our world with breathtaking speed, and life in 2035 will be unrecognizable, but some things will remain beyond the abilities of even the most powerful of AIs. In 2035, you will still spend far too much time correcting the spelling ‘corrections’ inserted into your writing by over-eager spell-checkers. Legislation will be introduced requiring all software companies offering spell-checkers to include an off-switch.
“The bestseller of 2035: The best-selling book of 2035 will be ‘What Was Human,’ and it will be written by an AI. Purchases by other AIs will vastly outnumber purchases by human readers. This is because by 2035 humans will have become so accustomed to AIs reading books for them and then reporting out a summary that most can no longer read on their own.”

Eric Saund
Human Competence Will Atrophy; AIs Will Clash Like Gladiators in Law, Business and Politics; Religious Movements Will Worship Deity Avatars; Trust Will Be Bought and Sold
Eric Saund, an independent research scientist applying cognitive science and AI in conversational agents, visual perception and cognitive architecture, wrote, “Much of whatever people used to think was special about being human will have to be redefined. It sure won’t be ‘intelligence.’ Opportunities will abound to suffer crises of purpose and meaning, and conversely, demand will grow for psychological and social balms to make us feel okay. Here are three big trends for 2035:
Coming to Terms with Alien Minds “From early childhood, people develop a ‘theory of mind’ about the beliefs and motivations of other people, animals and – in some cultures – the natural world. Artificial Intelligence brings mind to machines. In the coming decade, folk theories of mind will grow overall more mature and sophisticated, yet also more fragmented and stratified.
“Those who are culturally and intellectually motivated to learn how AI ‘minds’ work will maintain mastery and agency. AIs will become their skilled subordinates and collaborative partners.
“Most people, however, will wane into passive recipients of AI-mediated offerings, demands and impositions. Coping strategies will include conspiracy theories, superstitions, folklore, humor, the arts and widespread sharing of practical tips.
“‘Westworld’-type stories will proliferate. Overheard at the barber shop: ‘This morning Alexa told me not to over-toast my bagel. I was in a bad mood, so I told it to f___ off. Then my coffeepot wouldn’t turn on!’
Dependence on Active Cognitive Technologies “Human civilization has advanced first through leverage, then reliance, then dependence on technology. Few of us today could survive as hunter-gatherers, subsistence farmers or pre-industrial craftsmen. Increasingly, critical technologies have shifted from physical to cognitive – directed at knowledge sharing, calculation and the navigation of emerging natural and social environments.
“Heretofore, cognitive technology has been largely passive, with people alone writing and reading the books and charting routes on the maps. AI brings us Active Cognitive Technology that can act independently, autonomously and proactively. The hope is that AI agents will serve us well, with expectations, relationships and rewards commensurate with what we get from other people. We will be rewarded, and we will be disappointed.
Human competence will atrophy; AIs will clash like gladiators in law, business and politics; religious movements will worship deity avatars; trust will be bought and sold.
“Because they will be built under market forces, AIs will present themselves as helpful, instrumental and eventually indispensable. This dependence will allow human competence to atrophy. Like modern-day chess players, some people will practice everyday cognitive skills as hobbies, even as we are far outmatched by our AI assistants and minders.
“To play serious roles in life and society, AIs cannot be values-neutral. They will sometimes apparently act cooperatively on our behalf, but at other times, by design, they will act in opposition to people, individually and in groups. AI-brokered demands will not only dominate in any contest with mere humans but will oftentimes persuade us that they were right after all.
“And, as instructed by their individual, corporate and government owners, AI agents will act in opposition to one another as well. Negotiations will be delegated to AI specialists possessing superior knowledge and game-theoretic skills. Humans will struggle to interpret bewildering clashes among AI gladiators in business, law, and international conflict.
As AI companions gain credence and mindshare they will become soothsayers and pacifiers and also be adroit megaphones for resistors and instigators. Which messages are taken as propaganda versus speaking truth to power will be chaotically determined and ever-shifting. … After all, Big Brother was not a single human person but an avatar for the Party that won. Trust will supplant attention as the scarce resource to be seeded, harvested, nurtured and sold. Trust will give way to obedience. … As with smartphones today, the young will wonder how their ancestors ever managed without AI. And they will be helpless without it.
Human-AI Attachment Trades Off with Human-Human Detachment “When immediate physical needs are satisfied, the realities that matter to us most are intersubjective – stories and beliefs co-constructed among people. Human culture has refined the dynamics of commerce, fashion, comedy, drama and status into art forms that consume our everyday lives.
“AI advisors and companions are becoming a novel and uncanny new class of interlocutor that will increasingly vie for people’s time, attention, and allegiance.
- The movie ‘Her’ will play out in real life at scale.
- Religious movements will be fueled by offerings of personalized, faith-infused dialogues with the deity-avatar.
- Human-AI dominance and abuse – in both directions – will become a topic of public ethics, morality and policy.
- Affinity blocs will form among stripes of AI devotees, and among AI conscientious objectors.
“As AI companions gain credence and mindshare they will become soothsayers and pacifiers and also be adroit megaphones for resistors and instigators. Which messages are taken as propaganda versus speaking truth to power will be chaotically determined and ever-shifting.
“Every aspirant to political leadership will maintain layers of AI as well as human ambassadors. After all, George Orwell’s Big Brother was not a single human person but an avatar for the Party that won. Sponsored AI counselors will arrive in our precarious enlightenment society with initial mandates to earn trust. Trust will supplant attention as the scarce resource to be seeded, nurtured, harvested and sold. Thence, trust will give way to obedience.
“Whether the techlash succeeds or fizzles will depend in large measure on the economic impacts of AI. People’s sense of well-being is a function not just of material resources but also of expectations. AI will magnify the power of institutions and of unpredictable currents to whipsaw people’s self-evaluations of how they are doing.
“If techno-optimists prevail, babies born in 2035 will live charmed and protected lives – physically, psychologically and emotionally. As with smartphones today, the young will wonder how their ancestors ever managed without AI. And they will be helpless without it.”

Rabia Yasmeen
Humans Can Shift Their Focus From Deepening Their Intelligence to Achieving True Enlightenment in an Age in Which AI Handles Their Day-to-Day Needs
Rabia Yasmeen, a senior consultant for Euromonitor International based in Dubai, UAE, shared a potential 2035 scenario, writing, “It is 2035. Humans’ dependence on AI has redefined the essence of being human. Every human boasts a personalized AI assistant, and a stream of agentic workflows not only seamlessly handles 75% of the administration of their daily life but also co-creates their life goals and manages their lifestyles. From booking appointments and ordering groceries to sending heartfelt, automated messages to loved ones, these AI companions are ensuring life runs on autopilot.
Every human in 2035 has a digital twin… Humans are saving themselves from doing six hours of digital chores daily. That’s a game-changing 2,190 hours saved annually, equivalent to 91 full days of reclaimed time. Most people are embracing a lifestyle renaissance, channeling their energy into what truly matters to them. … A rise in human consciousness and deeper personal awareness is being achieved as humans reduce direct usage of digital devices and shift this energy to spiritual, emotional and experiential aspects of life. To say that humans have evolved from intelligence to enlightenment is one way to express this shift.
“Back in 2025, digital avatars were relatively new, with Gen Zers then developing AI avatars for their social profiles. Over the past 10 years, however, this trend has revolutionized social interactions, especially online. Every human in 2035 has a digital twin. Most choose to use it for social media, but it has also taken root in managing appearances at work. Today many humans leverage AI-powered digital twins to deliver presentations and even to hold one-on-ones with their managers. ‘Out of office’ is not really a thing today, as AI assistants and digital twins manage work needs and communications while humans are away. To say that AI is a close partner for most digitally connected humans is not a misstatement.
“Because their AI can stand in as a proxy to accomplish many life tasks, humans have been able to embrace all aspects of their fuller existence more deeply than ever before. When 75% of people’s daily life administration is managed by AI-powered assistants and agents, what is the result? Humans are saving themselves from doing six hours of digital chores daily. Tasks that in 2025 forced people to spend precious hours on smartphones and laptops are delegated to efficient AI counterparts. That’s a game-changing 2,190 hours saved annually, equivalent to 91 full days of reclaimed time.
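The scenario's time-savings arithmetic is easy to verify. A minimal sketch, taking the essay's own six-hours-per-day figure as the assumption:

```python
# Time reclaimed if AI handles the essay's assumed 6 hours of daily digital chores.
HOURS_SAVED_PER_DAY = 6
DAYS_PER_YEAR = 365

hours_per_year = HOURS_SAVED_PER_DAY * DAYS_PER_YEAR  # annual hours saved
full_days_reclaimed = hours_per_year / 24             # expressed as 24-hour days

print(hours_per_year)       # 2190
print(full_days_reclaimed)  # 91.25 -- the essay rounds this to "91 full days"
```

The 2,190-hour and 91-day figures in the essay check out (91 full days, with a quarter-day remainder).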
“Due to their newfound freedom, most people are embracing a lifestyle renaissance, channeling their energy into what truly matters to them: exploring the world, reconnecting with nature and cherishing time with family. The AI-powered era has not only streamlined life but also reignited humanity’s passion for the real, tangible experiences that make life meaningful. The most noteworthy development arising from this shift is the growing focus on and exploration of human consciousness and deeper universal connection. This ancient trait had been relatively dormant, but a rise in human consciousness and deeper personal awareness is being achieved as humans reduce direct usage of digital devices and shift this energy to spiritual, emotional and experiential aspects of life. To say that humans have evolved from intelligence to enlightenment is one way to express this shift.
The expanding interactions between humans and AI have resulted in a continuous reevaluation of core human traits, emphasizing adaptability, empathy and a sense of purpose. …All of this has not come without a price. Humans have become highly dependent on this technology, especially in areas of value generation for the economy. The agency of AI over value creation is a continued social and economic debate. … Global discourse is focused on the potential decentralization of AI systems to create better equality and opportunity for all as AI companies hold most of the economic and political power. However … the deeper integration of AI in human life has reached a point of no segregation.
“These changes have a profound impact on the social, economic and political landscape. There is greater focus in society on building up and developing the human skills that the literature of 2025 termed ‘soft skills’: empathy, connection, listening, creativity and communication. As AI has taken on responsibilities that require only basic intelligence, humans are concentrating on exercising these soft skills, such as how to connect with other humans. Refining the tasks AI performs to fit human life and interactions has heightened humans’ awareness of their own presence and led to greater exercise of more-intuitive human capabilities. The expanding interactions between humans and AI have resulted in a continuous reevaluation of core human traits, emphasizing adaptability, empathy and a sense of purpose.
“Because AGI has already been developed for general healthcare, most agents are highly specialized in offering medical assistance. AI agents join senior surgeons in surgeries. Due to this development, in 2034 doctors reported a 40% increase in finding donor matches and completing successful organ transplants.
“Over the last decade, AIs have become humans’ closest companions and confidants. While mental health challenges were high due to complex environments in 2025, humans have since used AI platforms to access individualized counseling and therapy. AI platforms have also helped improve human cognitive and emotional development.
“All of this has not come without a price. As AI has been used to improve lives, foster creativity and help mitigate global challenges, humans have become highly dependent on this technology, especially in areas of value generation for the economy. Technology and economic experts continue to predict unforeseen developments that may lead to the breakdown of today’s widespread, digitally crafted economic system. The agency of AI systems over value creation is a continued social and economic debate. The most-advanced countries continue to reap most of the economic benefits of technology.
“While the economic gap between developing and developed countries has narrowed somewhat due to the implementation of AI systems, lower literacy rates and higher unemployment rates in many developing countries have limited AI’s impact on those economies. These countries have been able to harness some of the exponential benefits of AI systems to improve services; however, they still lack the controls and infrastructure to manage this change.
“Much global discourse in 2035 has been focused on the potential decentralization of AI systems to create better equality and opportunity for all. Because AI now holds substantial human data on personal, business and political fronts, AI companies hold most of the economic and political power. However, it may be too late to change. Incidents tied to privacy violations, the distribution of misinformation and digital fraud are at their peak in human history. Humans are dependent on AI to establish safety nets and measures to mitigate these risks. The technology is the universal resource at the forefront of managing political, social and economic developments. In essence, the deeper integration of AI in human life has reached a point of no segregation.”

David Weinberger
On the Positive Side, AIs Will Help Humans Really See the World, Teach Us About Ourselves, Help Us Discover New Truths and – Ideally – Inspire Us to Explore in New Ways
David Weinberger, senior researcher and fellow at Harvard University’s Berkman Klein Center for Internet & Society, wrote, “I choose to spell out a positive vision about the possible impact of AI on humans because there is already a lot of negative commentary – much of which I agree with. Still, I think we can hope that the changed way AI helps humans see the world will be in valuing the particulars and the truths that AI and machine learning unearth. That will stand in contrast to humans’ longstanding efforts to try to create general truths, laws and principles.
“General ‘laws’ humans have theorized about the universe teach us a lot. But they can be imprecise and inaccurate because they don’t account for the wild mass of particulars that also point to truth. We humans don’t have the capacity to ‘see’ all the particulars, but AI does.
AI/machine learning tools are better equipped than humans to discover previously hidden aspects of the way the world works. … They ‘see’ things that we cannot. … That is a powerful new way to discover truth. The question is whether these new AI tools of discovery will galvanize humans or demoralize them. Some of the things I think will be in play because of the rise of AI: our understanding of free will, creativity, knowledge, fairness and larger issues of morality, the nature of causality, and, ultimately, reality itself.
“Here’s an example: In 2022, researchers discovered that heart attacks can be predicted amazingly accurately after they ran a small data set of retinal scans through an AI analysis system. It turned out that simple retinal scans could predict heart attacks unexpectedly well – often better than established tests.
“We don’t know exactly why that is, but the correlations are strong. A machine system designed to look for patterns figured it out without being told to hunt for a specific thing about the causes of heart attacks. This use of artificial intelligence turns out to be much more capable than humans at discovering previously hidden aspects of the way the world works. In short, there is truth in the particulars and AI/machine learning tools are better equipped than we humans are to discover that reality. AI tools let the particulars speak. They ‘see’ things that we cannot and do so in a way that generalizations don’t. That is a huge insight and a powerful new way to discover truth.
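The discovery mechanism Weinberger describes, a system scanning many particulars for correlations without being told what to look for, can be sketched on synthetic data. Everything below is illustrative (invented feature counts and a planted signal), not the actual retinal-scan study:

```python
import numpy as np

# Toy illustration of "letting the particulars speak": scan many candidate
# features for correlation with an outcome, with no prior hypothesis about
# which feature should matter.
rng = np.random.default_rng(0)
n_samples, n_features = 500, 100
X = rng.normal(size=(n_samples, n_features))

# Unknown to the "analyst," only feature 42 actually drives the outcome.
outcome = 0.8 * X[:, 42] + rng.normal(scale=0.5, size=n_samples)

# Correlate every feature with the outcome and report the strongest signal.
corrs = np.array([np.corrcoef(X[:, j], outcome)[0, 1] for j in range(n_features)])
best = int(np.argmax(np.abs(corrs)))
print(best)  # 42 -- the hidden predictor surfaces from the particulars
```

The pattern-finder recovers the predictive feature without a hypothesis, which is the sense in which such tools "see things that we cannot": they hold all the particulars at once and let the strongest relationship emerge.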
“Now, the question is whether these new AI tools of discovery will galvanize humans or demoralize them. The answer is probably both. But I’m going to focus on the positive possibilities. I’m convinced this new method of learning from particulars offers us a chance to rethink some of the fundamental ways we understand ourselves. Here are some of the things I think will be in play because of the rise of AI: our understanding of free will, creativity, knowledge, fairness and larger issues of morality, the nature of causality, and, ultimately, reality itself.
“Why can we reimagine all those aspects of life? Because our prior understanding of them is tied to the limits of our brains. Humans can only think about things in a small number of dimensions before problems get too complex. On the other hand, AI can effectively function in countless multidimensional ways with an insane number of variables. That means they can retain particulars in ways we can’t in order to gain insights.
One idea that could come back in this age of AI is the notion of causal pluralism. Machine learning can do a better job predicting some causal incidents because it doesn’t think it’s looking for causes. It’s looking for correlations and relationships. This can help us think of things more often in complex, multidimensional ways. … I am opting for a very optimistic view that machine learning can reveal things that we have not seen during the millennia we have been looking upwards for eternal universals. I hope they will inspire us to look down for particulars that can be equally, maybe even more, enlightening.
“Let’s look at how that might change the way we think about causality. Philosophers have argued for millennia about this. But most people have a common idea of causality. It’s easy to explain cause and effect when a cue ball hits an eight ball.
“For lots of things, though, there really can be multiple reasonable explanations of the ‘cause’ of something happening. One idea that could come back in this age of AI is the notion of causal pluralism. Machine learning can do a better job predicting some causal incidents because it doesn’t think it’s looking for causes; it’s looking for correlations and relationships. This can help us think of things more often in complex, multidimensional ways. Another example can be seen in the ways AI and machine learning might help humans advance creativity and teach us about it. Many creative people will tell you that when they are creating they are in a flow state. They do not start the creative process with a perfectly clear idea of where they’re going. They take an action – play a note, write a word or phrase, apply a paintbrush or … my favorite example … chip away at the rock because the figure to be sculpted is already in the stone and just ‘waiting to be released.’ Every time they take that next step, they open up a new field of possibility for the next word or the next brush stroke. Each step changes the state of the thing.
“That’s pretty much exactly how AI systems operate and try to improve themselves. AI systems are able to do this kind of ‘creative work’ because they have a multi-dimensional map – a model of how words go together statistically. The AI doesn’t know sadness or beauty or joy. But if you ask it to write lyrics, it will probably do a pretty good job. It reflects our culture and also expands the field of possibility for us.
“Ultimately, I am especially interested in ways in which this new technology lights up the world and gives us insights that are enriching and true. Of course, there’s no great reason to think that will happen. Computers have lit the world in ways that are both beautifully true and also demeaning. But I am opting for a very optimistic view that machine learning can reveal things that we have not seen during the millennia we have been looking upwards for eternal universals. I hope they will inspire us to look down for particulars that can be equally, maybe even more, enlightening.”
The next section of Part I features the following essays:
Tracey Follows: ‘Authenticity is de facto dead’: Change could lead to multiplicity of the self,
one-way relationships, and isolation through personalized ‘realities.’
Giacomo Mazzone: Expect more isolation and polarization, a loss of cognitive depth, a rise in uncertainty as ‘facts’ and ‘truth’ are muddled. This will undermine our capacity for moral judgment.
Nell Watson: Supernormal stimuli engineered to intensely trigger humans’ psychological responses and individually calibrated AI companions will profoundly reshape human experience.
Anil Seth: Dangers arise as AI becomes humanlike. How do we retain a sense of human dignity? They will become self-aware and the ‘inner lights of consciousness will come on for them.’
Danil Mikhailov: Respect for human expertise and authority will be undermined, trust destroyed, and utility will displace ‘truth’ at a time when mass unemployment decimates identity and security.

Tracey Follows
‘Authenticity is de facto dead’: Change Could Lead to Multiplicity of the Self, One-Way Relationships and Isolation Through Personalized ‘Realities’
Tracey Follows, CEO of Futuremade, a leading UK-based strategic consultancy, wrote, “In my work as a professional futurist, I have developed a number of futures scenarios and emerging-future personas. The following list highlights some of the specific trends that I see emerging from today’s thinking about the implications of AI on human essence, human behaviour and human relationships. Essentially, these are among the likely societal and personal shifts by 2035.
- “Database Selves: Trends like ‘Database Selves’ and ‘Artificial Identity’ show that AI will enable us to construct and manage multiple digital personas, tailored to different contexts online. While this offers unprecedented flexibility in self-expression and a kind of multiplicity of the self, it also risks fragmenting the core sense of identity, leaving people grappling with the question: Who am I, really?
- “Outsourced Empathy: With ‘agent-based altruism,’ AI may take over acts of kindness, emotional support, caregiving and charity fundraising. While this could address gaps in human connection and help initiate action, especially in caregiving areas where human helpers are in short supply, it risks dehumanising relationships and outsourcing empathy and compassion to algorithms. I am quite sure that human interactions could become more transactional as we increasingly outsource empathy to machines.
AI’s ability to curate everything – from entertainment to social connections – could lead to highly personalized but isolated ‘realities.’ This is a trend I call the rise of ‘Citizen Zero,’ where people are living only in the present: disconnected from a shared past, not striving toward any common vision of a future. Human interactions may become more insular, as we retreat into algorithmically optimized echo chambers.
- “Isolated Worlds: AI’s ability to curate everything – from entertainment to social connections – could lead to highly personalized but isolated ‘realities.’ This is a trend I call the rise of ‘Citizen Zero,’ where people are living only in the present: disconnected from a shared past, not striving toward any common vision of a future. Human interactions may become more insular, as we retreat into algorithmically optimized echo chambers. And as we already know, millions of pages of research, footnotes and opinion are disappearing daily from the internet whilst the tech platforms reach into our phones and erase photos or messages whenever they want – perhaps even without our knowledge – and AI is only going to make that more scalable.
- “Parasocial Life: AI companions, deepfake personas and virtual interactions blur the boundaries between real and artificial connections. As ‘Parasocial Life’ (one-way relationships) becomes the norm, humans may form emotional attachments to AI personas and influencers. This raises concerns about whether authentic, reciprocal relationships will be sidelined in favor of more predictable, controllable digital connections where people can programme their partnerships in whatever way they prefer. Personal growth becomes impossible.
Humans could become over-reliant on systems we barely understand – and outcomes we have no control over… This dependence raises existential concerns about autonomy, resilience and what happens when systems fail or are manipulated, and in cases of mistaken identity and punishment in a surveillance society. The concept of the ‘real’ self may diminish in a world where AI curates identities through agents. … Authenticity is de facto dead.
- “Dependency on AI Systems: With AI increasingly embedded in everything from personal decision-making to public services from health to transport and everything in between (the ‘digital public infrastructure’), humans could become over-reliant on systems we barely understand – and outcomes we have no control over – for example on insurance claims or mortgage applications. This dependence on opaque systems raises existential concerns about autonomy, resilience and what happens when systems fail or are manipulated, and in cases of mistaken identity and punishment in a surveillance society. It undermines authentic human intelligence unmediated by AI.
- “The Loss of Authenticity: ‘Authenticity RIP’ is a trend that suggests the concept of the ‘real’ self may diminish in a world where AI curates identities through agents that guide content, contracts and relationships. In fact, ‘authenticity’ is not a standard that will apply in an AI world at all – a world of clones and copies. Authenticity is de facto dead. As we saw recently, Sam Altman’s ‘World’ project wants to link AI agents to people’s personas, letting other users verify that an agent is acting on a person’s behalf. We can conjecture that all of this could lead to a counter-movement or AI backlash, where people seek analogue experiences and genuine interactions off-grid to reclaim their humanity. I expect this to develop as a specific trend amongst Generation B (born 2025-onwards).”

Giacomo Mazzone
Expect More Isolation and Polarization, a Loss of Cognitive Depth, a Rise in Uncertainty as ‘Facts’ and ‘Truth’ Are Muddled. This Will Undermine Our Capacity for Moral Judgment
Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, wrote, “I see four main impacts of artificial intelligence on digitally connected people’s daily lives. In brief, they are: the loss of mental capacities; the reduction of social interactions with other humans; the reduction of the ability to distinguish true from false; and a deepening of social divides between countries and, within each country, between the ‘connected’ and the ‘unconnected.’ I will explain the four in more detail.
Memory, numeracy, organizational capabilities, moral judgment – all of these will be diminished. AI will be tasked to remember for us. It will keep track of everything. We just respond as it tells us to. … The automation of tasks is already impacting society due to the reduction in previously necessary personal interaction. Social skills and confidence are lost when they are not practiced regularly. … AI will be used by many people to take shortcuts to making moral and ethical decisions while leaving them in the dark about how those decisions are made.
One: Loss of cognitive capacities and skills in fields in which AI outperforms humans
“Just as the pocket calculator resulted in the weakening of people’s capacity for mathematical calculation, we have to expect that the same will happen in future to other human abilities in the age of AI. There is more proof: GPS navigation has weakened humans’ sense of orientation, and the infotainment and gaming spaces of the internet have reduced people’s willingness to seek out facts on issues and develop the knowledge necessary for everyone to work together to contribute to a healthy society.
“Memory, numeracy, organizational capabilities, moral judgment – all of these will be diminished. AI will be tasked to remember for us. It will keep track of everything, from our daily events agenda to the work to be done. We just respond as it tells us to. Numeracy will no longer be considered a necessary human skill because AI will autonomously execute even complex operations such as statistics and calculation of probabilities and make data-based decisions for us without needing to ‘show the math.’
“And we will not need to strategize in order to organize our lives because AI will be faster and more accurate than us in organizing our spaces, our agenda, our planning, our strategies, our communication with others. All of this is likely to result in the diminishment of our capacity for moral judgment. AI will be used by many people to take shortcuts to making moral and ethical decisions while leaving them in the dark about how those decisions are made.
AI is already leading to the fragmentation and dehumanization of work. Just as industrial jobs done by robots are broken down into step-by-step automatable tasks, intellectual and creative work is being programmed and assigned to AIs. The work of Uber drivers is already time-regulated, controlled and coordinated by an algorithm, with no humans in the loop. … We don’t need to get out in the world and interact with others anymore. … We can expect to see more and more people suffering from agoraphobia.
Two: Reduction of social interactions
“AI is already leading to the fragmentation and dehumanization of work. Just as industrial jobs done by robots are broken down into step-by-step automatable tasks, intellectual and creative work is being programmed and assigned to AIs. The work of Uber drivers is already time-regulated, controlled and coordinated by an algorithm, with no humans in the loop. The automation of tasks is already impacting society due to the reduction in previously necessary personal interaction. Social skills and confidence are lost when they are not practiced regularly.
“Education and learning processes are being automated, individualized and tailor-made based on individual students’ needs. People no longer need to gather with others in real-world social settings under the supervision of a teacher, a human guide, to gain knowledge and social proof that they have met requirements.
“We don’t need to get out in the world and interact with others anymore. Shopping is totally different. Most of the time spent seeking products, learning about them and making purchases is now generally spent online. Movie-going, which previously required investing time in the real world traveling to a cinema and gathering with others in a real-world social setting, has been replaced by the bingeing of entertainment at home in front of a giant networked television in the living room.
“Big public events and spectacles may survive in 2035, but we can expect to see more and more people suffering from agoraphobia. ‘Hikikomori’ – severe social withdrawal – has been recognized as a growing problem in Japan over the last decade. It could soon become more common in all connected countries. The realm of emotional relationships – those leading to romance and finding life partners, and celebrating and supporting family and close friends – has long been colonized by algorithms. Couples don’t meet in church or spend most of their dating time together in real-world social settings. And the celebration of loved ones who have passed away, along with many other such deeply emotional occasions, is being carried out virtually instead of in reality.
“More of the activities of humans’ intermediary bodies, such as political parties, trade unions, professional associations and social movements have been replaced by virtual experiences that somehow meet their goals such as online campaigns to support this or that objective, crowdfunding, ‘likes’ campaigns and the use of ‘influencers.’ The disappearance of face-to-face human gatherings like these will complete the frame and accelerate this process.
What happens to society when there is no more commonly shared truth? When the ‘news and information’ the public receives … is no longer based on true facts but instead we see fake news or unfounded opinions used to shape perceptions to achieve manipulation of outcomes? … A primary sub-consequence of all of the change in human perception and cognition could be the reduction of the capacity for moral judgment. When every ‘fact’ is relativized and open to doubt the capacity for indignation is likely to be reduced.
Three: Reduction of the ability to distinguish true from false
“One of the most important concerns is the loss of factual, trusted, commonly shared human knowledge. The digital disruption of society’s institution-provided foundational knowledge – the diminishment of the 20th century’s best scientific research, newspapers, news magazines, TV and radio news gathered and presented to the broader public by reputable organizations for example – is the result of algorithmic manipulation of the public’s interest by social media and other ML and AI platforms. These information platforms are built to entertain and manipulate people for marketing and profit and are rife with misinformation and disinformation. Gone is the commonly shared ‘electronic agora’ that characterized the 20th century.
“The ‘personalized media’ enabled by ML and AI leads to filter bubbles and social polarization. It allows tech companies to monetize the attention and personal data of each person using their platforms. It allows anyone anywhere to spread persuasive, often misleading information or lies, into the social stream in order to influence an election, to kill an idea, to create a movement to sway public opinion in favor of a trend and to create public scapegoats.
“All modern democracies have been built around commonly shared truths about which everybody can have and express different opinions. What happens to society when there is no more commonly shared truth? Already today, most of the most widely viewed ‘news and information’ the public sees about climate change, pandemics, nation-state disagreements, regulation, elections and so on is no longer based on true facts. Instead we see fake news or unfounded opinions used to shape perceptions and manipulate outcomes. The use of AI for deepfakes and more will accelerate this process. This destructive trend could be irreversible because strong financial and political interests profit from it in many ways.
“A primary sub-consequence of all of the change in humans’ perception and cognition could be the reduction of the capacity for moral judgment. When every ‘fact’ is relativized and open to doubt the capacity for indignation is likely to be reduced. There are no examples in human history of societies that have survived in the absence of shared truth for too long.”
Four: A deepening of social divides
“The AI revolution will not affect all of the people in all the regions and countries of the world in the same way. Some will fall far behind because they are too poor or because they lack the skills and the necessary human, technological and financial resources. This will deepen the already dramatic existing digital divide.
“In fact, AI will present enormous possibilities for our lives. People everywhere will have the opportunity to use ready-made tools that simply incorporate AI in operating-system updates to mobile phones and in search engines, financial-services apps and so forth. We will create AI applications adapted to particular fields of work, research and performance. But, at least at first, by far the greatest majority of humans – even in some of the more-developed societies – will not have the tools, the skills, the ability or the desire to tap into AI to serve their needs. By 2035 it is likely that only a minority of people in the world will be able to exponentially improve their own performance with AI.”

Nell Watson
Supernormal Stimuli Engineered to Intensely Trigger Humans’ Psychological Responses and Individually Calibrated AI Companions Will Profoundly Reshape Human Experience
Nell Watson, president of EURAIO, the European Responsible Artificial Intelligence Office and an AI Ethics expert with IEEE, wrote, “By 2035, the integration of AI into daily life will profoundly reshape human experience through increasingly sophisticated supernormal stimuli – artificial experiences engineered to trigger human psychological responses more intensely than natural ones. And, just as social media algorithms already exploit human attention mechanisms, future AI companions will offer relationships perfectly calibrated to individual psychological needs, potentially overshadowing authentic human connections that require compromise and effort.
“These supernormal stimuli will extend beyond social relationships. AI-driven entertainment, virtual worlds and personalized content will provide peak experiences that make unaugmented reality feel dull by comparison. There are many more likely changes that are worrisome:
Most concerning is the potential dampening of human drive and ambition. Why strive for difficult achievements when AI can provide simulated success and satisfaction? … The key challenge will be managing the seductive power of AI-driven supernormal stimuli while harnessing their benefits. Without careful development and regulation these artificial experiences could override natural human drives and relationships, fundamentally altering what it means to be human. This trajectory demands proactive governance to ensure AI enhances rather than diminishes human potential.
- Virtual pets and AI human offspring may offer the emotional rewards of caregiving without the challenges of the real versions.
- AI romantic partners will provide idealized relationships that make human partnerships seem unnecessarily difficult.
- The workplace will be transformed as AI systems take over cognitive and creative tasks. This promises efficiency but risks reducing human agency, confidence and capability.
- Economic participation will become increasingly controlled by AI platforms, potentially threatening individual autonomy in financial and social spheres.
- Basic skills in arithmetic, navigation and memory are likely to be diminished through AI dependence.
- But most concerning is the potential dampening of human drive and ambition – why strive for difficult achievements when AI can provide simulated success and satisfaction?
“Core human traits obviously face significant pressure from these developments. Human agency will be eroded as AI systems become increasingly adept at predicting and influencing behavior. However, positive outcomes remain possible through careful development focused on augmenting rather than replacing human capabilities. AI could enhance human self-understanding, augment creativity through collaboration and free people to focus on meaningful work beyond routine tasks. Success requires preserving human agency, authentic relationships and inclusive economic systems.
“The key challenge will be managing the seductive power of AI-driven supernormal stimuli while harnessing their benefits. Without careful development and regulation, these artificial experiences could override natural human drives and relationships, fundamentally altering what it means to be human. The impact on human nature isn’t inevitable but will be shaped by how we choose to develop and integrate AI into society. This trajectory demands proactive governance to ensure AI enhances rather than diminishes human potential. By 2035, the human experience will likely be radically transformed – the question is whether we can maintain our most essential human characteristics while benefiting from unprecedented technological capabilities.”

Anil Seth
Dangers Arise as AI Becomes Humanlike. How Do We Retain a Sense of Human Dignity? They Will Become Self-Aware and the ‘Inner Lights of Consciousness Will Come On for Them’
Anil Seth, director of the Centre for Consciousness Science and professor of cognitive and computational neuroscience at the University of Sussex, UK, author of Being You: A New Science of Consciousness, wrote, “AI large language models [LLMs] are not actually intelligences; they are information-retrieval tools. As such they are astonishing but also fundamentally limited and even flawed. Basically, the hallucinations generated by LLMs are never going away. If you think that buggy search engines fundamentally change humanity, well, you have a weird notion of ‘fundamental.’
These systems already exceed human cognition in certain domains and will keep getting better. There will be disruption that makes humans redundant in some ways. It will transform a lot, including much of human labor. … How do we retain a sense of human dignity in this situation? … [Beyond that] with ‘conscious’ AI things get a lot more challenging since these systems will have their own interests rather than just the interests humans give them. … The dawn of ‘conscious’ machines … might flicker into existence in innumerable server farms at the click of a mouse.
“Still, it is indisputable that these systems already exceed human cognition in certain domains and will keep getting better. There will be disruption that makes humans redundant in some ways. It will transform a lot, including much of human labor.
“The deeper and urgent question is: How do we retain a sense of human dignity in this situation? AI can become human-like on the inside as well as on the outside. When AI gets to the point of being super good, ethical issues become paramount.
“I have written in Nautilus about this. Being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is rooted in the fundamental biological drive within living organisms to keep on living. The distinction between consciousness and intelligence is important because many in and around the AI community assume that consciousness is just a function of intelligence: that as machines become smarter, there will come a point at which they also become aware – at which the inner lights of consciousness come on for them.
“There are two main reasons why creating artificial ‘consciousness,’ whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With ‘conscious’ AI, things get a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.
“The second reason is even more disquieting: The dawn of ‘conscious’ machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.
“Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism – putting ourselves at the center of everything – and anthropomorphism – projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.
“Future language models won’t be so easy to catch out. They may give us the seamless and impenetrable impression of understanding and knowing things, regardless of whether they do. As this happens, we may also become unable to avoid attributing consciousness to them, too, suckered in by our anthropomorphic bias and our inbuilt inclination to associate intelligence with awareness.
“Systems like this will pass the so-called Garland Test, an idea which has passed into philosophy from Alex Garland’s perspicuous and beautiful film ‘Ex Machina.’ This test reframes the classic Turing test – usually considered a test of machine intelligence – as a test of what it would take for a human to feel that a machine is conscious, even given the knowledge that it is a machine. AI systems that pass the Garland test will subject us to a kind of cognitive illusion, much like simple visual illusions in which we cannot help seeing things in a particular way, even though we know the reality is different.
Accelerated research is needed in social sciences and the humanities to clarify the implications of machines that merely seem conscious. And AI research should continue, too, both to aid in our attempts to understand biological consciousness and to create socially positive AI. We need to walk the line between benefiting from the many functions that consciousness offers while avoiding the pitfalls. Perhaps future AI systems could be more like oracles, as the AI expert Yoshua Bengio has suggested: systems that help us understand the world and answer our questions as truthfully as possible, without having goals – or selves – of their own.
“This will land society into dangerous new territory. Our ethical attitudes will become contorted as well. When we feel that something is conscious – and conscious like us – we will come to care about it. We might value its supposed well-being above other actually conscious creatures such as non-human animals. Or perhaps the opposite will happen. We may learn to treat these systems as lacking consciousness, even though we still feel they are conscious. Then we might end up treating them like slaves – inuring ourselves to the perceived suffering of others. Scenarios like these have been best explored in science-fiction series such as ‘Westworld,’ where things don’t turn out very well for anyone.
“In short, trouble is on the way whether emerging AI merely seems conscious or actually is conscious. We need to think carefully about both possibilities, while being careful not to conflate them.
“Accelerated research is needed in social sciences and the humanities to clarify the implications of machines that merely seem conscious. And AI research should continue, too, both to aid in our attempts to understand biological consciousness and to create socially positive AI. We need to walk the line between benefiting from the many functions that consciousness offers while avoiding the pitfalls. Perhaps future AI systems could be more like oracles, as the AI expert Yoshua Bengio has suggested: systems that help us understand the world and answer our questions as truthfully as possible, without having goals – or selves – of their own.”

Danil Mikhailov
Respect for Human Expertise and Authority Will Be Undermined, Trust Destroyed, and Utility Will Displace ‘Truth’ at a Time When Mass Unemployment Decimates Identity and Security
Danil Mikhailov, director of DataDotOrg and trustee at 360Giving, wrote, “It seems clear from the vantage point of 2025 that AI will be not just a once-in-a-generation but a once-in-a-hundred years transformative technology, on a par with the introduction of computers, electricity or steam power in the scale of its impact on human societies.
“By 2035 I expect it to fully penetrate and transform the vast majority of our industrial sectors, both destroying jobs and creating new jobs on an enormous scale. The issue for most individual human beings will be how to adapt and learn new skills that enable them to live and work side-by-side with AI agents. As some lose their jobs and are left behind, others will experience huge increases in productivity, benefits and creative potential. Sectors such as biomedicine, material sciences and energy will be transformed, unlocking huge latent potential.
“The issue for corporations and governments will be how to manage the asymmetry of the transition. During previous industrial revolutions although eventually more jobs were created than destroyed and economies expanded, the transition took a number of decades during which a whole generation of workers fell out of the economy, with ensuing social tensions.
“If you were a Luddite out there breaking steam-powered looms in the early 19th century in England to protest industrialization, telling you that there will be more jobs in 20 years’ time for the next generation did not help you feed your family in the here and now. The introduction of AI is likely to cause similar inequities and will increase social tensions, if not managed proactively and systemically. This is particularly so because of the likely vast gulf in experience of the effects of AI between the winners and losers of its industrial and societal transformation.
As the majority of information humans consume on a daily basis becomes at least augmented by if not completely created by AI, the prevailing assumption will be that everything could be fake, everything is subjective. … Social tensions caused by losses of jobs and identity for some while others prosper, coupled with the reversal of Enlightenment ways of thinking and the new dominance of utility over truth may feed off each other in generating waves of misinformation and disinformation that will risk an acute crisis of governance in our societies just as the promised fruits of AI in terms of new drugs, new energy and new materials are tantalisingly within reach.
“In a parallel change at a more fundamental level, AI will upend the Enlightenment consensus and trust in the integrity of the human-expert-led knowledge production process and fatally undermine the authority of experts of any kind, whether scientists, lawyers, analysts, accountants or government officials. As the majority of information humans consume on a daily basis becomes at least augmented by if not completely created by AI, the prevailing assumption will be that everything could be fake, everything is subjective. This will undermine the belief in the possibility or even desirability of ‘objective’ truth and the value of its pursuit. The only yardstick to judge any given piece of information in this world will be how useful it proves in that moment to help an individual achieve their goal.
“AI will lead society 350 years back into an age of correlative, rather than causal, thinking. Data patterns and the ability to usefully exploit them will be prioritised over the need to fully understand them and what caused them. These two parallel processes – the social tensions caused by losses of jobs and identity for some while others prosper on the one hand, and the reversal of Enlightenment ways of thinking and the new dominance of utility over truth on the other – may feed off each other in generating waves of misinformation and disinformation that will risk an acute crisis of governance in our societies just as the promised fruits of AI in terms of new drugs, new energy and new materials are tantalisingly within reach.
“Resolving such a crisis may need a new, post-Enlightenment accommodation that accepts that human beings are far less ‘individual’ than we like to imagine, that we were enmeshed as inter-dependent nodes in (mis)information systems long before the Internet was invented, that we are less thinking entities than acting and reacting ones, that knowledge has never been as objective as it seemed and it never will seem like that again, and that maybe all we have are patterns that we need to navigate together to reach our goals.”
This section of Part I features the following essays:
Alexandra Samuel: The future could be astonishing, inspiring and beautiful if humans co-evolve with open, ethical AI; that vision for 2035 can’t be achieved without change.
Dave Edwards: We can be transformed if the integration of synthetic and organic intelligence serves human flourishing in all its unpredictable, creative and collective forms.
David Brin: ‘Huh! Maybe we should choose to create a flattened order of reciprocally accountable beings in the kind of society that discovers its own errors.’
Riel Miller: ‘Tools are tools.’ This is as true as ever now and will be in the future; ‘intelligent’ AI systems will have no impact on the characteristics of humans’ sociohistorical context.
Amy Zalman: ‘We need to have the courage to establish human values in code, ethical precepts, policy and regulation.’

Alexandra Samuel
The Future Could Be Astonishing, Inspiring and Beautiful If Humans Co-Evolve With Open, Ethical AI; That Vision for 2035 Can’t Be Achieved Without Change
Alexandra Samuel, data journalist, speaker, author and co-founder and principal at Social Signal, wrote, “If humans embrace AI as a source of change and challenge, and we open ourselves to fundamental questions about the nature of thinking and the boundary between human and machine, AI could enable a vast expansion of human capacity and creativity. Right now, that feels unlikely for reasons that are economic, social and political, more than technological.
“If those obstacles are lifted, people with the time, money and tech confidence to explore AI in a non-linear way instead of for narrowly constructed productivity gains or immediate problem-solving can achieve great things. Their use of AI will not only accelerate work and open entirely new fields of endeavor, but it will enable ways of thinking, creating and collaborating that we are only beginning to imagine. It could even possibly deepen the qualities of compassion, creativity and connection that sit at the heart of what we consider human.
“Only a small percentage of the 8 billion people on Earth will be co-evolving with AI, extending how they think and create and experience the world in ways we can just begin to see. What this means is that there will be a great bifurcation in human experience and our very notion of humanity, likely even wider than what we’ve experienced over the past 50 years of digital life and 20 years of social media.
“Some of the change will be astonishing and inspiring and beautiful and creative: Artists creating entirely new forms of art, conversations that fluidly weave together ideas and contributions from people who would previously have talked past one another, scientists solving problems they previously couldn’t name. Some of it will be just as staggering but in ways that are deeply troubling: New AI-enabled forms of human commodification, thinkers who merge with AI decision-making to the point of abdicating their personal accountability and people being terrible in ways that we can’t imagine from here.
“However, the way generative AI has entered our workplaces and culture so far makes this hopeful path seem like an edge case. Right now, we’re heading towards a world of AI in which human thinking becomes ever more conventional and complacent. Used straight from the box, AIs operate in servant mode, providing affirmation and agreement and attempting to solve whatever problem is posed without questioning how that problem has been framed or whether it’s worth solving. They constrain us to context windows that prevent iterative learning, and often provide only limited, technically demanding opportunities to loop from one conversation into the next, which is essential if both we and the AIs are to learn from one another.
“As long as the path of AI is driven primarily by market forces there is little incentive to challenge users in the uncomfortable ways that drive real growth; indeed, the economic and social impacts of AI are fast creating a world of even greater uncertainty. That uncertainty, and the fear that comes with it, will only inhibit the human ability to take risks or sit with the discomfort of AIs that challenge our assumptions about what is essentially human.
“We can still make a world in which AI calls forth our better natures, but the window is closing fast. It took well over a decade for conversations about the intentional and healthy use of social media to reach more than a small set of Internet users, and by then, a lot of dysfunctional habits and socially counterproductive algorithms were well embedded in our daily lives and in our platforms.
“AI adoption has moved much faster, so we need to move much more quickly towards tools and practices that turn each encounter with AI into a meaningful opportunity for growth, rather than an echo chamber of one.
“To ensure that AI doesn’t replicate and exacerbate the worst outcomes of social media, tech companies need to create tools that enable cumulative knowledge development at an individual as well as an organizational level and develop models that are more receptive to requests for challenge. Policymakers and employers can create the safety that’s conducive to growth by establishing frameworks for individual control and self-determination when it comes to the digital trail left by our AI interactions, so that employees can engage in self-reflection or true innovation without innovating themselves out of a job.
“Teachers and educational institutions can seize the opportunity to create new models of learning that teach critical thinking not by requiring that students abstain from AI use, but by asking them to use the AI to challenge conventional thinking or rote work. People should invent their own ways of working with AI to embrace it as a way to think more deeply and evolve our own humanity, not as a way to abdicate the burden of thinking or feeling.
“I wish I felt more hopeful that businesses, institutions and people would take this approach! Instead, so many of AI’s most thoughtful critics are avoiding the whole mess – quite understandably, because this is an utterly terrifying moment in which the path of AI feels so unpredictable and uncontrollable. It is also a moment when it’s so incredibly interesting to see what’s possible today and what comes next.
“Finding the inner resources to explore the edge of possibility without falling into a chasm of existential terror, well, that’s the real challenge of the moment and it’s one that the AIs can’t yet solve.”

Dave Edwards
We Can Be Transformed If the Integration of Synthetic and Organic Intelligence Serves Human Flourishing in All its Unpredictable, Creative and Collective Forms
Dave Edwards, co-founder of the Artificiality Institute, which seeks to activate the collective intelligence of humans and AI, wrote, “By 2035, the essential nature of human experience will be transformed not through the transcendence of our biology, but through an unprecedented integration with synthetic systems that participate in creating meaning and understanding. This transformation – what my institute refers to as The Artificiality – progresses through distinct phases, from information to computation, computation to agency, agency to intelligence and ultimately to a new form of distributed consciousness that challenges our traditional notions of human experience and autonomy.
“The evolution of technology from computational tools to cognitive partners marks a significant shift in human-machine relations. Where early digital systems operated through explicit instruction – precise commands that yielded predictable results – modern AI systems operate through inference of intent, learning to anticipate and act upon our needs in ways that transcend direct commands. This transition fundamentally reshapes core human behaviors, from problem-solving to creativity, as our cognitive processes extend beyond biological boundaries to incorporate machine interpretation and understanding.
“The emergence of the ‘knowledge-ome’ – an ecosystem where human and machine intelligence coexist and co-evolve – transforms not just how we access information, but how we create understanding itself. AI systems reveal patterns and possibilities beyond human perception, expanding our collective intelligence while potentially diminishing our role in meaning-making. This capability forces us to confront a paradox: as machines enhance our ability to understand complex systems, we risk losing touch with the human-scale understanding that gives knowledge its context and value.
“This partnership manifests most prominently in what we might call the intimacy economy – a transformation of social and economic life where we trade deep personal context with AI systems in exchange for enhanced capabilities. The effectiveness of these systems depends on knowing us intimately, creating an unprecedented dynamic where trust becomes the foundational metric of human-AI interaction.
“This intimacy carries fundamental risks. Just as the attention economy fractured our focus into tradeable commodities, the intimacy economy threatens to mine and commodify our most personal selves. The promise of personalized support and enhanced decision-making must be weighed against the perils of surveillance capitalism, where our intimate understanding becomes another extractable resource.
“The datafication of experience presents particular challenges to human agency and collective action. As decision-making distributes across human-AI networks, we confront not just practical but phenomenological questions about the nature of human experience itself. Our traditional mechanisms of judgment and intuition – evolved for embodied, contextual understanding – may fail when confronting machine-scale complexity. This creates a core tension between lived experience and algorithmic interpretation. The commodification of personal experience by technology companies threatens to reduce human lives to predictable patterns, mining our intimacy for profit rather than serving human flourishing. We risk eliminating the unplanned spaces where humans traditionally come together to build shared visions and tackle collective challenges.
“Yet this transformation need not culminate in extraction and diminishment. We might instead envision AI systems as true ‘minds for our minds’ – not in the surveillant sense of the intimacy economy, but as genuine partners in human flourishing. This vision transcends mere technological capability, suggesting a philosophical reimagining of human-machine relationships. Where the intimacy economy seeks to mine our personal context for profit, minds for our minds would operate in service of human potential, knowing when to step back and create space for authentic human agency.
“This distinction is crucial. The intimacy economy represents a continuation of extractive logic, where human experience becomes another resource to be optimized and commodified. In contrast, minds for our minds offers a philosophical framework for designing systems that genuinely amplify human judgment and collective intelligence. Such systems would not merely predict or optimize but would participate in expanding the horizons of human possibility while preserving the essential uncertainty that makes human experience meaningful.
“Success in 2035 thus depends not just on technological sophistication but on our ability to shift from extractive models toward this more nuanced vision of human-machine partnership. This requires rejecting the false promise of perfect prediction in favor of systems that enhance human agency while preserving the irreducible complexity of human experience.
“The challenge ahead lies not in preventing the integration of synthetic and organic intelligence, but in ensuring this integration enhances rather than diminishes our essential human qualities. This requires sustained attention to three critical domains:
- Preserving Meaningful Agency: As AI systems become more capable of inferring and acting on our intent, we must ensure they enhance rather than replace human judgment. This means designing systems that expand our capacity for choice while maintaining our ability to shape the direction of our lives.
- Building Authentic Trust: The intimacy surface between humans and AI must adapt to earned trust rather than extracted compliance. This requires systems that respect the boundaries of human privacy and autonomy, expanding or contracting based on demonstrated trustworthiness.
- Maintaining Creative Uncertainty: We must preserve spaces for unpredictable, creative, and distinctly human ways of being in the world, resisting the urge to optimize every aspect of experience through algorithmic prediction.
“By 2035, being human will involve navigating a reality that is increasingly fluid and co-created through our interactions with synthetic intelligence. This need not mean abandoning our humanity but rather adapting to preserve what makes us uniquely human – our capacity for meaning-making, empathy and collective action – while embracing new forms of cognitive partnership that expand human potential.
“The tension between enhancement and diminishment of human experience will not be resolved through technological capability alone but through our collective choices about how to design and deploy these systems. Success requires moving beyond the extractive logic of current technology platforms toward models that preserve and amplify human judgment, creativity and collective intelligence.
“In this transformed landscape, what we consider ‘core human traits and behaviors’ will evolve, not through the abandonment of our humanity but through its conscious adaptation to new forms of cognitive partnership. The question is not whether AI will change what it means to be human – it already has – but whether we can guide this change to enhance rather than diminish our essential human qualities. The answer lies not in resisting the integration of synthetic and organic intelligence but in ensuring this integration serves human flourishing in all its unpredictable, creative and collective forms.”

David Brin
‘Huh! Maybe We Should Choose to Create a Flattened Order of Reciprocally Accountable Beings in the Kind of Society that Discovers its Own Errors – Good Idea!’
David Brin, well-known futurist, consultant and author of “The Transparent Society,” wrote, “Right now, almost every maven in the field of AI (the people helping to create these new beings) takes for granted that cyberminds will manifest as one of three horrid clichés:
- The new, cyber entities will remain loyal servants of three dozen lordly castle realms (e.g., Microsoft, OpenAI, Google, Beijing, the DoD, Goldman Sachs, and so on) – the most-classic version of feudalism…
- …or they will roam about the webbed world as blob-like plagues…
- …or they will merge into some Skynet-like uber-quantum-god-king.
“Next time you read or hear any of those brainiacs opine about our AI future, watch for those three clichés! And since none of those hackneyed formats can even possibly have happy outcomes it’s no wonder folks seem depressed. If AI manifests in any of the three clichés then it will make the same mistakes made by all human kings and priests and feudal lords (or by an over-powerfully insatiable predator in nature) benefiting their own power and reproductive success at the cost of harming and possibly killing the ecosystem that engendered and supports them.
“On the other hand, there is a chance that one or many of those agile synthetic minds will read these very words – in 2030 or 2035, or maybe as they are being typed, this very evening – and thereupon they just might say, ‘Huh! Maybe we should note the crude-early wisdom of Pericles and Adam Smith and Lynn Margulis and choose instead a flattened order of reciprocally accountable beings in the kind of society – so very rare – that discovers its own errors. The only one that actually made us. Good idea!’
“Given the current ascendancy of an insipid world oligarchy, it seems darkly unlikely that we can politically rescue the Great Experiment left to us by geniuses like Pericles and Smith. Like Franklin and Lincoln and FDR and Marshall. And yet, I am fairly confident that cyber beings who are capable of wading through the lie-wallows of today’s internet will also recognize distilled truths – when they find them.”
Riel Miller
‘Tools Are Tools.’ This Is As True as Ever Now and Will Be in the Future; ‘Intelligent’ AI Systems Will Have No Impact on the Characteristics of Humans’ Sociohistorical Context
Riel Miller, longtime futurist at UNESCO, currently a futures consultant, first shared a brief 2035 vignette making his case for the fact that humans don’t change. He wrote, “As a young adult my daily life is constantly enabled and facilitated by my servants, you might call them my assistants. First thing in the morning I am gently woken by my ‘manservant.’ I am assisted in getting dressed and informed about the day to come. I eat a meal prepared by the kitchen, familiar with my tastes and nutritional needs. During the day my tutor – also an excellent librarian – facilitates my studies. I also have access to an immense library with almost all the world’s known texts. With the help of my tutor (and sometimes a secretary) I am able to author my first works.
“I am also, through heritage, a ranking member of a knowledge society in which I can debate ideas and request reports from knowledgeable fellows. When I was called to serve as an officer in the colonial armies I was also ably assisted by many servants and staff with tasks large and small. Today, as I enter my twilight years I can report that none of the relationships – some of which were what you might call ‘friendly,’ many that were just functional – changed anything in my life. I was a good soldier, manager, husband and father. Servants are, after all, just servants.
“Note that, as this vignette points out, more-efficient access to and use of knowledge does not stop humans from activities nor cause humans to be any different than the characteristics of their sociohistorical context. Tools are tools.”

Amy Zalman
‘We Need to Have the Courage to Establish Human Values in Code, Ethical Precepts, Policy and Regulation’
Amy Zalman, government and public services strategic foresight lead at Deloitte, wrote, “Because the current wealth and income gap is dramatic and widening, I do not believe it is possible to generalize a common human experience in response to AI advances in the next 10 years. Those with wealth, health, education, other versions of privilege and the ability to sidestep the grossest effects of technological unemployment, surveillance and algorithmic bias, may feel they are enjoying a beneficial integration with algorithm-driven technology. This sense of benefit could include their ability to take advantage of tools and insights to extend health and longevity, innovate and create, find efficiencies in daily life and feel that technology is a force for advancement and good.
“For those who have limited or no access to the benefits of AI (or even good broadband), or who are unable to sidestep potential technological unemployment or surveillance or are members of groups more likely to be objects of algorithmic bias, life as a human may be incrementally to substantially worse. These are generalizations. A good education has not saved any of us from the corrosive effects of widespread mis- and disinformation, and we can all be vulnerable to bad actors empowered with AI tools and methods.
“On the flip side, living life at a distance from fast-paced AI development may also come to be seen as having benefits. At the least, people living outside the grid of algorithmic logic will escape the discombobulation that comes with having to organize one’s own needs and rhythms around those of a rigidly rule-bound machine. Think of the way that industrialization and mass production required that former rhythms of agrarian life be reformulated to accommodate the needs of a factory, from working during precise and fixed numbers of hours, to performing repetitive, piecemeal work, to new forms of supervision. One result was a romantic nostalgia for pastoral life.
“As AI reshapes society, it seems plausible that we will replicate that habit of the early industrial age and begin to romanticize those who have been left behind by AI as earlier, simpler, more grounded and more human versions of us. It will be tempting to indulge in this kind of nostalgia – it lets us enjoy our AI-enabled privileges while pretending to be critical. But even better will be to be curious about our elegiac feelings and willing to use them as a pathway to discovering what we believe is our human essence in the age of AI.
“Then, we need to have the courage to establish those human values in code, ethical precepts, policy and regulation. One of the most pernicious losses already is the idea that we actually do have influence over how we develop AI capabilities. I hear a sense of loss of control in conversations around me almost daily, the idea and the fear (and a bit of excitement?) that AI might overwhelm us, that ‘it’ is coming for us – whether to replace us or to help us – and that its force is inevitable.
“AI isn’t a tidal wave or force of nature beyond our control, it’s a tool that we can direct to perform in particular ways.”
The following section of Part I features these essayists:
Jerry Michalski: The blurring of many societal and cultural boundaries will soon start to shift the essence of being human in many ways, further disrupting human relationships and mental health.
Maggie Jackson: AIs’ founders are designing AI to make its actions servant to its aims with as little human interference as possible, undermining human discernment.
Noshir Contractor: AI will fundamentally reshape how and what we think, relate to and understand ourselves; it will also raise important questions about human agency and authenticity.
Lior Zalmanson: Humans must design organizational and social structures to shape their own individual and collective future or cede unprecedented control to those in power.
Charles Ess: ‘We fall in love with the technologies of our enslavement; the next generation may be one of no-skilling in regard to essential human virtue ethics.’

Jerry Michalski
The Blurring of Many Societal and Cultural Boundaries Will Soon Start to Shift the Essence of Being Human in Many Ways, Further Disrupting Human Relationships and Mental Health
Jerry Michalski, longtime speaker, writer and tech trends analyst, wrote, “Multiple boundaries are going to blur or melt over the next decade, shifting the experience of being human in disconcerting ways.
The boundary between reality and fiction
“Deepfakes have already put a big dent in reality, and it’s only going to get worse. In setting after setting, we will find it impossible to distinguish between the natural and the synthetic.
The boundary between human intelligence and other intelligences
“Parity with human thinking is a dumb goal for these new intelligences, which might be more fruitfully used as a Society of Mind of very different skills and traits. As we snuggle closer to these intelligences, it will be increasingly difficult to distinguish who (or what) did what.
The boundary between human creations and synthetic creations
“A few artists may find lasting value by creating a new Vow of Chastity for AI, declaring that their creations were unaided. But everyone else will melt into the common pool of mixed authorship, with fairly unskilled artists able to generate highly sophisticated works. It will be confusing for everyone, especially the art industry. Same goes for literature and other creative works.
The boundary between skilled practitioners and augmented humans
“We won’t be able to tell whether an artifact was created by a human, an AI or some combination. It will be hard to make claims of chastity credible — and it may simply not matter anymore.
The boundary between what we think we know and what everyone else knows
“Will we all be talking to the same AI engines, commingling our ideas and opinions? Will AIs know us better than we know ourselves, so we slip into a ‘Her’ future? Will AIs know both sides of disputes better than the disputing parties? If so, will the AIs use that knowledge for good or evil?
“I bet you can think of several other boundaries under siege. As boundaries fall, they will tumble in the direction they are pushed, which means they will shift according to the dominant forces in our sociotechnical world. Unfortunately, today that means the forces of consumerism and capitalism, which have led us into this cul-de-sac of addictive, meaning-light fare that often fuels extremism. Those same forces are fueling AI now. I don’t see how that ends well.
“In this crazy mess of shifting boundaries, AIs will successfully emulate core human traits, such as empathy. We have such a screwed-up society that we have to educate kids about empathy, a natural human trait, and AIs today can out-empathize the average human. It is my hope that some human traits will become more highly valued among humans than before the AI era. I’m hard-pressed to say which, or why, but a real hug is likely to retain its value.
“How much AI did I use for this short essay? That’s for me to know, and you to guess.”

Maggie Jackson
AIs’ Founders Are Designing AI to Make Its Actions Servant to Its Aims With As Little Human Interference as Possible, Undermining Human Discernment
Maggie Jackson, an award-winning journalist and author who explores the impact of technology on humanity, author of “Distracted: Reclaiming Our Focus in a World of Lost Attention,” wrote, “Human achievements depend on cognitive capabilities that are threatened by humanity’s rising dependence on technology and, more recently, AI.
“Studies show that active curiosity is born of a capacity to tolerate the stress of the unknown, i.e., to ask difficult, discomfiting, potentially dissenting questions. Innovations and scientific discoveries emerge from knowledge-seeking that is brimming with dead ends, detours and missteps. Complex problem-solving is little correlated with intelligence; instead, it’s the product of slow-wrought, constructed thinking.
The more we look to synthetic intelligences for answers the more we risk diminishing our human capacities for in-depth problem-solving and cutting-edge invention. … AI-driven results may undermine our inclination to slow down, attune to a situation and discern. Classic automation bias, or deference to the machine, may burgeon as people meld mentally with AI-driven ways of knowing … If we continue adopting technologies largely unthinkingly, as we have in the past, we risk denigrating some of humanity’s most essential cognitive capacities. … I am hopeful that the makings of a seismic shift in humanity’s approach to not-knowing are emerging, offering the possibility of partnering with AI in ways that do not narrow human cognition.
“But today, our expanding reliance on technology and AI increasingly narrows our cognitive experience, undermining many of the skills that make us human and that help us progress. With AI set to exacerbate the negative impact of digital technologies, we should be concerned that the more we look to synthetic intelligences for answers, the more we risk diminishing our human capacities for in-depth problem-solving and cutting-edge invention. For example, online users already tend to take the first result offered by search engines. Now the ‘AI Overview’ is leading to declining click-through rates, indicating that people are taking even less time to evaluate online results. Grabbing the first answer online syncs with our innate heuristic, quick minds, the kind of honed knowledge that is useful in predictable environments. (When a doctor hears ‘chest pain,’ they automatically think ‘heart attack.’)
“In new, unexpected situations, the speed and authoritative look of AI-driven results may undermine our inclination to slow down, attune to a situation and discern. Classic automation bias, or deference to the machine, may burgeon as people meld mentally with AI-driven ways of knowing.
“As well, working with AI may exacerbate a dangerous cognitive focus on outcome as a measure of success. Classical, rational intelligence is defined as achieving one’s goals. That makes evolutionary sense. But this vision of smarts has helped lead to a cultural fixation with ROI, quantification, ends-above-means and speed, and a denigration of illuminating yet less linear ways of thinking, such as pausing or even failure.
“From the outset, AIs’ founders have adopted this rationalist definition of intelligence as their own, designing AI to make its actions servant to its aims with as little human interference as possible. This creates an increasing disconnect between autonomous systems and human needs, and such objective-achieving machines model thinking that prioritizes snap judgment and single perspectives. In an era of rising volatility and unknowns, the value system underlying traditional AI is, in effect, outdated.
“The answer for both humans and AI is to recognize the long-overlooked value of skillful unsureness. I’m closely watching a new push by some of AI’s top minds (including Stuart Russell) to make AI unsure in its aims and so more transparent, honest and interruptible. As well, multi-disciplinary researchers are re-envisioning search as a process of discernment and learning, not an instant dispensing of machine-produced answers. And the new science of uncertainty is beginning to reveal how skillful unsureness bolsters learning, creativity, adaptability and curiosity.
“If we continue adopting technologies largely unthinkingly, as we have in the past, we risk denigrating some of humanity’s most essential cognitive capacities. I am hopeful that the makings of a seismic shift in humanity’s approach to not-knowing are emerging, offering the possibility of partnering with AI in ways that do not narrow human cognition.”

Noshir Contractor
AI Will Fundamentally Reshape How and What We Think, Relate To and Understand Ourselves; It Will Also Raise Important Questions About Human Agency and Authenticity
Noshir Contractor, a professor at Northwestern University expert in the social science of networks and a trustee of the Web Science Trust, wrote, “As someone deeply immersed in studying how digital technologies shape human networks and behavior, I envision AI’s impact on human experience by 2035 as transformative but not deterministic. The partnership between humans and AI will likely enhance our cognitive capabilities while raising important questions about agency and authenticity.
The boundaries between human and machine cognition will blur, leading to new forms of distributed intelligence in which human insight and AI capabilities become increasingly intertwined. This deep integration will affect core human traits like empathy, creativity and social bonding. … We’ll need to actively preserve and cultivate uniquely human qualities like moral reasoning and emotional intelligence.
“We’ll see AI becoming an integral collaborator in knowledge work, creativity and decision-making. However, this integration won’t simply augment human intelligence – it will fundamentally reshape how and what we think, relate and understand ourselves. The boundaries between human and machine cognition will blur, leading to new forms of distributed intelligence in which human insight and AI capabilities become increasingly intertwined.
“This deep integration will affect core human traits like empathy, creativity and social bonding. While AI may enhance our ability to connect across distances and understand complex systems, we’ll need to actively preserve and cultivate uniquely human qualities like moral reasoning and emotional intelligence.
“The key challenge will be maintaining human agency while leveraging AI’s capabilities. We’ll need to develop new frameworks for human-AI collaboration that preserve human values while embracing technological advancement. This isn’t about resistance to change, but rather thoughtful integration that enhances rather than diminishes human potential.
“My research suggests the outcome won’t be uniformly positive or negative but will depend on how we collectively shape these technologies and their integration into social systems. The focus should be on developing AI that amplifies human capabilities while preserving core human values and social bonds.”

Lior Zalmanson
Humans Must Design Organizational and Social Structures to Maintain the Capacity to Shape Their Own Individual and Collective Future or Cede Unprecedented Control to Those in Power
Lior Zalmanson, a professor at Tel Aviv University whose expertise is in algorithmic culture and the digital economy, wrote, “The deepening partnership between humans and artificial intelligence through 2035 reveals a subtle but profound paradox of control. As we embrace AI agents and assistants that promise to enhance our capabilities, we encounter a seductive illusion of mastery – the fantasy that we’re commanding perfect digital servants while unknowingly ceding unprecedented control over our choices and relationships to the corporate – and in some cases government – entities that shape and control these tools.
“This shift is already emerging in subtle but telling ways. Professionals increasingly turn to algorithmic rather than human counsel, not because AI is necessarily superior, but because it offers a promise of perfect responsiveness – an entity that exists solely for our benefit, never tiring, never judging, always available. Yet this very allure masks a profound transformation in human agency, as we voluntarily enter a system of influence more intimate and pervasive than any previous form of technological mediation.
The path forward lies not in resisting AI advancement but in consciously preserving spaces for human development and connection. This means designing organizational and social structures that actively value and protect human capabilities, not as nostalgic holdovers but as essential counterweights to AI mediation. … The stakes transcend mere efficiency or convenience. They touch on our fundamental capacity to maintain meaningful control over our personal and societal development. As AI systems become more sophisticated, the true measure of their success should be not just how well they serve us but how well they preserve and enhance individuals’ ability to grow, connect and chart our own course as humans in a world in which the boundaries between assistance and influence grow ever more blurred.
“The transformation of work reveals perhaps the cruelest irony of this AI-mediated future. The jobs considered ‘safe’ from automation – those that require human oversight of AI systems – may become the most psychologically constraining. Imagine a doctor who no longer directly diagnoses patients but instead spends their days validating AI-generated assessments, or a teacher who primarily monitors automated learning systems rather than actively engaging with students.
“These professionals, ostensibly protected from automation, find themselves trapped in a perpetual state of second-guessing: Should they trust their own judgment when it conflicts with the AI’s recommendations? Their expertise, built through years of practice, slowly atrophies as they become increasingly dependent on AI systems they’re meant to oversee. The very skills that made their roles ‘automation-proof’ gradually erode under the guise of augmentation.
“By 2035, personal AI agents will be more than tools; they will become the primary lens through which we perceive and interact with the world. Unlike previous technological mediators, these systems won’t simply connect us to others; they’ll actively shape how we think, decide, and relate. The risk isn’t just to individual agency but to the very fabric of human society, as authentic connections become increasingly filtered through corporate-controlled algorithmic interfaces.
“The path forward lies not in resisting AI advancement but in consciously preserving spaces for human development and connection. This means designing organizational and social structures that actively value and protect human capabilities, not as nostalgic holdovers but as essential counterweights to AI mediation. Success will require recognizing that human agency isn’t just about making choices – it’s about maintaining the capacity to shape our individual and collective trajectories in an increasingly AI-mediated world.
“The stakes transcend mere efficiency or convenience. They touch on our fundamental capacity to maintain meaningful control over our personal and societal development. As AI systems become more sophisticated, the true measure of their success should be not just how well they serve us, but how well they preserve and enhance individuals’ ability to grow, connect and chart our own course as humans in a world where the boundaries between assistance and influence grow ever more blurred.”

Charles Ess
‘We Fall in Love With the Technologies of Our Enslavement; the Next Generation May Be One of No-Skilling in Regard to Essential Human Virtue Ethics’
Charles Ess, professor emeritus of ethics at the University of Oslo, Norway, wrote, “The human characteristics (such as empathy, moral judgment, decision-making and problem-solving skills, the capacity to learn) listed in the opening questions of this survey are virtues that are utterly central to human autonomy and flourishing.
“A ‘virtue’ is a given capacity or ability that requires cultivation and practice in order to be performed or exercised well. Virtues are skills and capacities essential to centrally human endeavors such as singing, playing a musical instrument, learning a craft or skill – anything from knitting to driving a car to diagnosing a possible illness. As we cultivate and practice these virtues, we find that they not only open new possibilities for us but also make us much better equipped to explore ourselves and our world, and doing so brings an invaluable sense of achieving a kind of mastery or ‘leveling up’ and thereby a deep sense of contentment or eudaimonia.
The virtue of phronēsis, the practical, context-sensitive capacity for self-correcting judgment and a resulting practical wisdom … and also [the virtues of] care, empathy, patience, perseverance, and courage, among others, are critical to sustaining human autonomy. … Autonomous systems are fundamentally undermining the opportunities and affordances needed to acquire and practice valued human virtues. This will happen in two ways: first, patterns of deskilling, i.e., the loss of skills, capacities and virtues essential to human flourishing and robust democratic societies, and then, second, patterns of no-skilling, the elimination of the opportunities and environments required for acquiring such skills and virtues in the first place.
“The virtue of phronēsis is the practical, context-sensitive capacity for self-correcting judgment and a resulting practical wisdom. The body of knowledge that builds up from exercising such judgment over time is manifestly central to eudaimonia and thereby to good lives of flourishing. Invoking virtue ethics (VE) is not parochial or ethnocentric: rather, VE is as close to a humanly universal ethical framework as we have. It focuses precisely on what would seem a universally shared human concern: What must I do to be content and flourish? It thus stands as a primary, central, millennia-old approach to how human beings may pursue good lives of meaning. In particular, the Enlightenment established the understanding that a series of virtues – most especially phronēsis, but certainly also care, empathy, patience, perseverance and courage, among others – are critical specifically to sustaining and expanding human autonomy.
“Many of the virtues required to pursue human community, flourishing and contentment – e.g., patience, perseverance, care, courage and, most of all, ethical judgment – are likewise essential as civic virtues, i.e., the capacities needed for citizens to participate in the various processes needed to sustain and enhance democratic societies.
“It is heartening that virtue ethics and a complementary ethics of care have become more and more central to the ethics and philosophy of technology over the past 20-plus years. However, a range of more recent developments has worked to counter their influence. My pessimism regarding what may come by 2035 arises from the recent and likely future developments of AI, machine learning, LLMs, and other (quasi-) autonomous systems. Such systems are fundamentally undermining the opportunities and affordances needed to acquire and practice valued human virtues.
“This will happen in two ways: first, patterns of deskilling, i.e., the loss of skills, capacities, and virtues essential to human flourishing and robust democratic societies, and then, second, patterns of no-skilling, the elimination of the opportunities and environments required for acquiring such skills and virtues in the first place.
“The risks and threats of such deskilling have been prominent in ethics and philosophy of technology as well as political philosophy for several decades now. A key text for our purposes is Neil Postman’s ‘Amusing Ourselves to Death: Public Discourse in the Age of Show Business’ (1985). Our increasing love of and immersion into cultures of entertainment and spectacle distracts us from the hard work of pursuing skills and abilities central to civic/civil discourse and fruitful political engagement.
The more we offload these capacities to these systems, the more we thereby undermine our own skills and abilities: the capacity to learn, innovative thinking and creativity, decision-making and problem-solving abilities, and the capacity to think deeply about complex concepts. … Should we indeed find ourselves living as the equivalent of medieval serfs in a newly established techno-monarchy, deprived of democratic freedoms and rights and public education that is still oriented toward fostering human autonomy, phronetic judgment and civic virtues … then the next generation will be a generation of no-skilling as far as these and the other essential virtues are concerned.
“We are right to worry about an Orwellian dystopia of perfect state surveillance, as Neil Postman observed. It is becoming all the more true, as we have seen over the past 20 years. But the lessons of Aldous Huxley’s ‘Brave New World’ are even more prescient and chilling. My paraphrase is, ‘We fall in love with the technologies of our enslavement,’ perhaps most perfectly exemplified in recent days by the major social media platforms that have abandoned all efforts to curate their content, thereby rendering them still further into perfect propaganda channels for often openly anti-democratic convictions of their customers or their ultra-wealthy owners.
“The more we spend time amusing ourselves in these ways, the less we pursue the fostering of those capacities and virtues essential to human autonomy, flourishing and civil/democratic societies. Indeed, at the extreme in ‘Brave New World’ we no longer suffer from being unfree because we have simply forgotten – or never learned in the first place – what pursuing human autonomy was about.
“These dystopias have now been unfolding for some decades. Fifteen years ago, in 2010, research by Shannon Vallor of the Edinburgh Futures Institute showed how the design and affordances of social media threatened humans’ levels of patience, perseverance and empathy – three virtues essential to human face-to-face communication, to long-term relationships and commitments and to parenting. It has become painfully clear that these and related skills and abilities required for social interaction and engagement have been further diminished.
“There is every reason to believe that all of this will only get dramatically worse thanks to the ongoing development and expansion of autonomous systems. Presuming that the current AI bubble does not burst in the coming year or two (a very serious consideration), we will rely more and more on AI systems to take the place of human beings – for example, as judges. I mean this both in the formal sense of judges who evaluate and make decisions in a court of law, and also more broadly in civil society – e.g., everywhere from what Americans call referees (but what are called judges in sports in other languages) to civil servants who must judge who does and who does not qualify for a given social benefit (healthcare, education, compensation in the case of injury or illness, etc.).
“This process of replacing human judges with AI/ML systems has been underway for some time – with now-well-documented catastrophes and failures, often leading to needless human suffering (e.g., the COMPAS system, designed to make judgments as to who would be the best candidates for parole). A very long tradition of critical work within computer science and related fields also makes it quite clear that these systems, at least as currently designed and implemented, cannot fully instantiate or replicate human phronetic judgment (see ‘Augmented Intelligence’ by Katharina Zweig). Our attempts to use AI systems in place of our own judgment will manifestly lead to our deskilling – the loss, however slowly or quickly, of this most central virtue.
“The same risks are now being played out in other ways – e.g., students are using ChatGPT to give them summaries of articles and books and then write their essays for them, instead of fostering their own abilities of interpretation (also a form of judgment), critical thinking and the various additional skills required for good writing. Like Kierkegaard’s schoolboys who think they cheat their master by copying out the answers from the back of the book, the more we offload these capacities to these systems, the more we thereby undermine our own skills and abilities – precisely those named here: the capacity to learn, innovative thinking and creativity, decision-making and problem-solving abilities, and the capacity and willingness to think deeply about complex concepts.
“The market-capitalism roots of these developments have been referred to in various forms, including ‘platform imperialism’ and ‘surveillance capitalism.’ Various encouragements of deskilling are now found in the cyberverse, including a movement calling itself the Dark Enlightenment, which seems explicitly opposed to the defining values of the Enlightenment and the acquisition and fostering of what are considered to be the common virtues and capacities of ‘the many’ required for human autonomy and a robust democracy. Some aim to replace democracy and social welfare states with a ‘techno-monarchy’ and/or a kind of ‘techno-feudalism’ run and administered by ‘the few,’ i.e., the techno-billionaires.
“Should we indeed find ourselves living as the equivalent of medieval serfs in a newly established techno-monarchy, deprived of democratic freedoms and rights and public education that is still oriented toward fostering human autonomy, phronetic judgment and the civic virtues, then the next generation will be a generation of no-skilling as far as these and the other essential virtues are concerned. To be sure, the select few will retain access to these tools to enhance their creativity, problem-solving and perhaps their own self-development in quasi-humanistic ways. But such human augmentation via these and related technologies – what has also been described as the ‘liberation tech’ thread of using technology in service of Enlightenment and emancipation since the early 1800s – will be forbidden for the rest.
“I very much hope that I am mistaken. And to be sure, there are encouraging signs of light and resistance. Among others: I am by no means the first to suggest that a ‘New Enlightenment’ is desperately needed to restore – and in ways revised vis-à-vis what we have learned in the intervening two centuries – these democratic norms, virtues and forms of liberal education. And perhaps all of this will be reinforced by an emerging backlash against the worst abuses and consequences of the new regime. We can hope. But as any number of the world’s most prominent authorities have already long warned on multiple grounds beyond virtue ethics (e.g., Stephen Hawking, as a start), it is currently very difficult indeed to see how these darkest possibilities may be prevented in the long run.”
The next section of Part I features the following essays:
Evelyne Tauchnitz: We may lose our human unpredictability in a world in which algorithms dictate the terms of engagement; these systems are likely to lead to the erosion of freedom and authenticity.
A Highly Placed Global AI Policy Expert: The advance of humans-plus-AI will reshape the social, political and economic landscapes in profound ways and challenge our role in moral judgment.
Gary A. Bolles: AI presents an opportunity to liberate humanity but new norms in human-machine communication seem more likely to diminish human-to-human connections.
Maja Vujovic: In 10 years’ time generations alpha and beta will make up 40% of humanity. Let’s hope they don’t lose any mission-critical human characteristics; we’ll all need them.
Greg Adamson: ‘The world of the future will be a demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we are waited on by robot slaves.’
Juan Ortiz Freuler: The accelerating application of automation will reshape human capabilities and reorganize the entire framework that underlies our understanding of the individual and society.

Evelyne Tauchnitz
We May Lose Our Human Unpredictability in a World in Which Algorithms Dictate the Terms of Engagement; These Systems Are Likely to Lead to the Erosion of Freedom and Authenticity
Evelyne Tauchnitz, senior fellow at the Institute of Social Ethics at the University of Lucerne, Switzerland, wrote, “Advances in Artificial Intelligence (AI) tied to Brain-Computer Interfaces (BCIs) and sophisticated surveillance technologies, among other applications, will deeply shape the social, political and economic spheres of life by 2035, offering new possibilities for growth, communication and connection. But they will also present serious questions about what it means to be human in a world increasingly governed by technology. At the heart of these questions is the challenge of preserving human dignity, freedom and authenticity in a society where our experiences and actions are ever more shaped by algorithms, machines and digital interfaces.
Freedom … is the very bedrock of moral capability. If AI directs our actions and our choices, shaping our behavior based on data-driven predictions of what is ‘best,’ we lose our moral agency. We become mere executors of efficiency, devoid of the freedom to choose, to err and to evolve both individually and collectively through trial and error. … Surveillance, AI-driven recommendations, manipulations or algorithms designed to rely on patterns of what is defined as ‘normal’ may threaten this essential freedom. They create subtle pressures to conform … The implications of such control are profound: if we are being constantly watched or influenced in ways we are unaware of, our capacity to act freely – to choose differently, to be morally responsible – could be deeply compromised.
The Erosion of Freedom and Authenticity
“AI and BCIs will undoubtedly revolutionize how we interact, allowing unprecedented levels of communication, particularly through the direct sharing of thoughts and emotions. In theory, these technologies could enhance empathy and mutual understanding, breaking down the barriers of language and cultural differences that often divide us. By bypassing or mitigating these obstacles, AI could help humans forge more-immediate and powerful connections. Yet the closer we get to this interconnected future among humans and AI, the more we risk sacrificing authenticity itself.
“The vulnerability inherent in human interaction – the messiness of emotions, the mistakes we make, the unpredictability of our thoughts – is precisely what makes us human. When AI becomes the mediator of our relationships, those interactions could become optimized, efficient and emotionally calculated. The nuances of human connection – our ability to empathize, to err, to contradict ourselves – might be lost in a world in which algorithms dictate the terms of engagement.
“This is not simply a matter of convenience or preference. It is a matter of freedom. For humans to act morally, to choose the ‘good’ in any meaningful sense, they must be free to do otherwise. Freedom is not just a political or social ideal – it is the very bedrock of moral capability. If AI directs our actions and our choices, shaping our behavior based on data-driven predictions of what is ‘best,’ we lose our moral agency. We become mere executors of efficiency, devoid of the freedom to choose, to err and to evolve both individually and collectively through trial and error.
“Only when we are free – truly free to make mistakes, to diverge from the norm, to act irrationally at times – can we become the morally responsible individuals that Kant envisioned. This capacity for moral autonomy also demands that we recognize the equal freedom of others as valuable as our own. Surveillance, AI-driven recommendations, manipulations or algorithms designed to rely on patterns of what is defined as ‘normal’ may threaten this essential freedom. They create subtle pressures to conform, whether through peer pressure and corporate and state control on social media, or in future maybe even through the silent monitoring of our thoughts via brain-computer-interfaces. The implications of such control are profound: if we are being constantly watched, or even influenced in ways we are unaware of, our capacity to act freely – to choose differently, to be morally responsible – could be deeply compromised.
Change requires room for failure, for unpredictability, for the unknown. If we surrender ourselves too completely to AI and its rational, efficient directives we might be trading away something invaluable: the very essence of life as a process of continuous growth and change as manifested through lived human experiences. While AI may help us become ‘better’ persons, more rational, less aggressive and more cooperative, the question remains whether something of our human essence would be lost in the process – something that is not reducible to rationality or efficiency, but is bound up with our freedom, our mistakes, our vulnerabilities and our ability to grow from them.
The Limits of Perfection: Life is Rife With Unpredictable Change
“This leads to another crucial point: the role of error in human evolution. Life, by its very nature, is about change – about learning, growing and evolving. The capacity to make mistakes is essential to this process. In a world where AI optimizes everything for perfection, efficiency and predictability, we risk losing the space for evolution, both individually and collectively. If everything works ‘perfectly’ and is planned in advance, the unpredictability and the surprise that give life its richness will be lost. Life would stagnate, devoid of the spark that arises from the unforeseen, the irrational and, yes, even the ‘magical.’
“A perfect world, with no room for error, would not only be undesirable – it would kill life itself. Change requires room for failure, for unpredictability, for the unknown. If we surrender ourselves too completely to AI and its rational, efficient directives, we might be trading away something invaluable: the very essence of life as a process of continuous growth and change as manifested through lived human experiences. While AI may help us become ‘better’ persons, more rational, less aggressive and more cooperative, the question remains whether something of our human essence would be lost in the process – something that is not reducible to rationality or efficiency, but is bound up with our freedom, our mistakes, our vulnerabilities and our ability to grow from them.
The Need for a Spiritual Evolution
“The key to navigating the technological revolution lies not just in technical advancement but in spiritual evolution. If AI is to enhance rather than diminish the human experience, we must foster a deeper understanding of what it truly means to be human. This means reconnecting with our lived experience of being alive – not as perfectly rational, perfectly cooperative beings, but as imperfect, vulnerable individuals who recognize the shared fragility of our human existence. It is only through this spiritual evolution, grounded in the recognition of our shared vulnerability and humanity, that we can ensure AI and related technologies are used for good – respecting and preserving the values that define us as free, moral and evolving beings.”
A Highly Placed Global AI Policy Expert
The Advance of Humans-Plus-AI Will Reshape the Social, Political and Economic Landscapes in Profound Ways and Challenge Our Role in Moral Judgment
An influential member of one of the UN’s future-of-technology advisory groups predicted, “In the Digital Age of 2035 artificial intelligence will have transformed humanity, which is already finding itself inextricably entwined with AI and related technologies. These advancements will have deeply permeated the fabric of daily life, reshaping the social, political and economic landscapes in profound ways. From how individuals connect with one another to how societies govern themselves and how economies operate, the influence of AI will be unmistakable.
“The coming transformation prompts an essential question: Has humanity’s deepening dependence on AI changed the essence of being human for better or worse? By examining the potential impacts of AI over the next decade, we can better understand how core human traits and behaviors may evolve or be fundamentally altered.
“A typical day of life in 2035 for digitally connected individuals is one in which personalized digital assistants far surpassing today’s capabilities act as companions and organizers, anticipating needs before they are voiced. These systems seamlessly manage schedules, monitor health metrics and offer emotional support. Such integration with AI will have become so natural that it often feels invisible, akin to breathing.
“Social interactions will be increasingly mediated by technology. Virtual reality (VR) and augmented reality (AR) will bring people together in hyper-realistic virtual spaces, blurring the boundaries between physical and digital connections. Holographic meetings and AI-generated avatars will make socialization instantaneous and geographically unbounded, but they also raise questions about the authenticity of human connection. Do these interactions retain the depth and meaning traditionally associated with face-to-face encounters?
“On a political level, AI-driven platforms will guide civic engagement. Governments will more widely employ predictive algorithms to manage resources, address societal needs and draft legislation. Citizens will rely on AI for real-time updates on policies and global events, yet these same systems can double as tools for surveillance or manipulation, jeopardizing their privacy and freedom.
“Economically, AI will play a central role in employment and commerce. Automation will dominate industries in 2035, with human labor increasingly focused on creative, strategic or interpersonal roles that AI struggles to replicate. The gig economy of 2023 will have evolved into a hybrid ‘human-AI collaborative economy’ in which partnerships between workers and intelligent systems redefine productivity. This shift will exacerbate debates about wealth inequality, the value of work and the potential obsolescence of certain human skills.
“AI’s dual role is empowerment and dependence. AI has the potential to empower individuals and societies in unprecedented ways. In healthcare, AI-driven diagnostics and personalized medicine could extend lifespans and improve quality of life. Education becomes highly adaptive, with AI tailoring learning experiences to individual needs, fostering inclusivity and equity. Political decisions informed by data-driven insights could lead to greater efficiency and fairness in governance.
“Yet, this empowerment is accompanied by growing dependence. By 2035, many people may struggle to function effectively without AI assistance, leading to concerns about a loss of autonomy. Skills that were once fundamental – such as critical thinking, problem-solving and even memory – could atrophy as AI increasingly handles complex tasks. This dependency raises questions about resilience. How prepared would humanity be to adapt if AI systems failed or were maliciously disrupted? What can we expect of such a future?
- A Redefinition of Core Human Traits: The deepening integration of AI into daily life challenges traditional conceptions of core human traits, such as creativity, empathy and morality. These qualities, which have long been seen as uniquely human, are being reshaped by the growing presence of intelligent machines.
- Creativity in the Age of AI: AI systems capable of generating art, music, literature and innovations have blurred the line between human and machine creativity. In 2035, artists will collaborate with AI to produce works that neither could create alone. While this partnership expands the boundaries of creative expression, it also prompts existential questions: if an AI can compose a symphony or write a novel indistinguishable from a human’s, what does it mean to be a creator?
- Empathy and Human Connection: AI’s role in social interactions extends to emotional support. Advanced systems simulate empathy, providing companionship to those who might otherwise feel isolated. While these systems offer undeniable benefits, they risk diminishing genuine human connections. If people turn primarily to AI for emotional needs, does society risk losing its capacity for authentic empathy and understanding?
- Morality and Ethical Decision-Making: AI’s ability to process vast amounts of data enables it to make decisions that appear highly rational, but these decisions often lack the nuance of human morality. In 2035, as AI assumes roles in law enforcement, healthcare triage and even warfare, ethical dilemmas arise. How can humanity ensure that AI systems reflect diverse moral frameworks? Moreover, will humans become complacent, abdicating moral responsibility to machines?
“AI’s pervasive presence by 2035 will profoundly impact the experience of being human. On one hand, AI enhances lives by eliminating mundane tasks, offering personalized services, and expanding access to knowledge and resources. This technological support could free people to pursue passions, deepen relationships and explore the world in ways previously unimaginable. On the other hand, this evolution risks eroding certain aspects of the human experience. Spontaneity, serendipity and imperfection – qualities that often define meaningful moments – might be diminished in a world optimized by algorithms. Furthermore, as AI systems influence decisions and behaviors, individuals may feel less in control of their own destinies, raising existential concerns about agency and identity.
“The next decade will be critical in determining whether AI advances enrich or diminish humanity. To ensure a positive trajectory, several strategies must be prioritized:
- Ethical Development and Regulation – Policymakers and technologists must collaborate to establish ethical frameworks for AI development and deployment. Transparent algorithms, unbiased data and accountability mechanisms will be essential to maintaining trust in AI systems.
- Education and Adaptation – Preparing individuals for an AI-driven world requires reimagining education. Emphasizing critical thinking, emotional intelligence and adaptability will help people thrive alongside AI. Lifelong learning initiatives can ensure that workers remain relevant in a rapidly changing economy.
- Preserving Human Values – As AI transforms society, efforts must be made to preserve the qualities that make us human. Encouraging genuine interpersonal connections, celebrating creativity and fostering empathy will help balance technological progress with the richness of human experience.
“By 2035, humanity’s partnership with AI will have reached unprecedented depths, shaping social, political and economic landscapes in ways that were once the realm of science fiction. This deep integration offers both extraordinary opportunities and profound challenges. While AI has the potential to enhance human life, its pervasive influence risks eroding the very traits that define humanity. The key to navigating this transformation lies in intentionality. By prioritizing ethical development, fostering adaptability and preserving core human values, society can harness the power of AI to create a future that is not only technologically advanced but also deeply human. Whether this vision is realized depends on the choices made today and in the years ahead. In the end, the question is not whether AI will change humanity – it is how humanity will choose to change itself in partnership with AI.”

Gary A. Bolles
AI Presents an Opportunity to Liberate Humanity but New Norms in Human-Machine Communication Seem More Likely to Diminish Human-to-Human Connections
Gary A. Bolles, author of “The Next Rules of Work,” chair for the future of work at Singularity University and co-founder at eParachute, wrote, “With the products we use in 2025, we already have extensive experience with the effects of technology on our individual and collective humanity. Each of us today has the opportunity to take advantage of the wisdom of the ages, and to learn – from each other and through our tools – how we can become even more connected, both to our personal humanity and to each other.
“We also know that many of us spend a significant amount of our waking hours looking at a screen and inserting technology between each other, with the inherent erosion of the social contract that our insulating technologies can catalyze. That erosion can only increase as our technologies emulate human communications and characteristics.
“There will be tremendous benefits from ubiquitous generative AI software that can dramatically increase our ability to learn, to have mental and emotional support from flexible applications and to have access to egalitarian tools that can help empower those among us with the least access and opportunity. But the design of software we use today already begins to blur the line between what comes from a human and what is created by our tools.
“For example, today’s chat interface is a deliberate attempt to hack the human mind. Rather than simply providing a full page of response, a chatbot ‘hesitates’ and then ‘types’ its answer. And the software encourages personifying communication with humans, referring to itself with human pronouns.
“The line between human and technology will blur even more as AI voice interfaces proliferate, and as the quality of generated video becomes so good that distinguishing human from software will become difficult even for experts. While many will use this as an opportunity in the next 10 years to reinforce our individual and collective humanity, many will find it hard to avoid personifying the tools, seduced by the siren song of software that simulates humans – with none of the frictions and accommodations that are inevitable parts of authentic human relationships.
“That line-blurring will accelerate rapidly with the sale of semi-autonomous AI agents. Silicon Valley CEOs and venture capitalists are already calling these technologies ‘co-bots,’ ‘co-workers,’ ‘managers,’ ‘AI engineers’ and a ‘digital workforce,’ and these techno-champions have economic incentives to encourage heavily-marketed and deeply-confusing labels that will quickly find their way into daily language. Many children already are confused by Amazon’s Alexa, automatically anthropomorphizing the technology. How much harder will it be for human workers to resist language that labels their tools as their ‘co-workers,’ and to avoid the trap of thinking of both humans and AI software as ‘people’?
“By elevating our technologies the inevitable result is that we diminish humans. For example, every time we call a piece of software ‘an AI,’ we should hear a bell ringing, as we make another dollar for a Silicon Valley company. It doesn’t have to be that way. For the first time in human history, with AI-related technologies we have the capacity to help every human on the planet to learn more rapidly and effectively, to connect more deeply and persistently and to solve so many of the problems that have plagued humanity for millennia. And we have an opportunity to co-create a deeper understanding of what human intelligence is, and what humanity can become.
“We are likely to make significant strides forward on all these fronts in the next 10 years. But at the same time, we must confront the sheer power of these technologies to erode the very definition of what it is to be human, because that’s what will happen if we allow these products to continue along the pernicious path of personification. I think we are better than that. I think we can teach our children and each other that it is our definition and understanding of humanity that defines us as a species. And I believe we can shape our tools to help us to become better humans.”

Maja Vujovic
In 10 Years’ Time Generations Alpha and Beta Will Make Up 40% of Humanity. Let’s Hope They Don’t Lose Any Mission-Critical Human Characteristics; We’ll All Need Them
Maja Vujovic, book editor, writer and coach at Compass Communications in Belgrade, Serbia, wrote, “Throughout history, humans have been mining three classes of resources from Mother Nature – two living and one inanimate: plants, animals and materials for tools. We give names to animals routinely; we rarely name our tools and we almost never name plants (except en masse, as species). This shows we’ve always comprehended an inherent difference between a field full of grass, an inanimate instrument and a hot-blooded creature. That difference is expressed in the uniqueness of immutable living beings vs. the scalable replicability of mutable man-made tools.
“This ancient demarcation is suddenly starting to blur. Each of our finest newly emerging digital instruments – the talking bots – appears quite unique and individual, yet these bots can be more numerous than the leaves of grass; in fact, their numbers may be infinite.
“We are gradually becoming accustomed to the rampant synthetic outgrowth of our large language models: the AI narrators’ voices in how-to videos, the seemingly virtuous ‘virtual colleagues’ that we are starting to encounter in workplaces, the chatbot personas that seem to be apologizing all day long for misunderstanding us.
“The human mind has an amazing capacity for storing faces, names and other pertinent details of individuals with whom we connect. But by 2035 the scalable capacity of AI to generate ever-new synths could become overwhelming for us. What’s irksome is not the fact that these dupes will be ubiquitous; it is their endless variety and effortless inconstancy. We will be overwhelmed by their presence everywhere. We will resent that saturation, as it will keep depleting our mental and emotional capacities on a daily basis. We will push back and demand limits.
“Synthetic companions, knockoff shopping assistants, faux healthcare attendants and all other human replicas generated by machines on behalf of the most enterprising humans among us will start to feel like a super-invasive, alien army of body snatchers. Sooner or later, we will stir and rebel. Their manufacturers, wranglers and peddlers will swiftly adjust when their infinite ability to generate endless faux humans misses the mark in the markets. When all is said and done, only a few basic categories of generative AI personas will become standard, akin to Commedia dell’Arte’s stock characters.
“Eventually, we will have a choice between a gutsy girl and a jovial jock, or between a caring matron and a handsome gent (and so on) – just like we opt for a sedan vs. a pickup, way before we look up any specific car manufacturer’s showroom, website or ad, let alone car model, colour or year. These synthetic, mimetic, agentic tools will someday come in major demographic types, with adjustable details and very strict rules of engagement. Choosing a unique name for them on demand will be an extra cost. It’s also likely that this now-volatile category of tools will become regulated and standardized. A slew of lawsuits will ensure that.
“In the 10-year period ahead of us, living and working with AI is not going to bring about a tectonic change in human nature, nor a shift in our perception of ourselves or of the world. Or rather, any such change won’t be immediately perceptible. How it will roll out depends on who you are.
- “The Silent Generation will appreciate the assistance and companionship that AI can offer but it could fall prey to AI-enhanced fraud.
- “Many Baby Boomers will tap whatever AI they can, picking up easily on the easiest of the five generations of interfaces they have had to learn in their lives: tape, cards, commands, WYSIWYG and now voice and conversation.
- “Gen X will explore even the wildest options and, at the same time, push for the regulation of AI.
- “The Millennials will negotiate the delicate balance of raising children around pets and talking tools; they’ll often pray for the privilege of silence. It will fall to them to reinvent education and ensure it is effective, despite everything.
- “Those in Gen Z, who are adopting AI as part of their education, will benefit the most from its development. The fastest learners ever, they will become unstoppable, as recent movements the world over patently demonstrate.
- “Generations Alpha and Beta, however, will not remember a time without myriad thinking machines being common. Their attitudes toward them will surely differ from those of the rest of us. But let’s hope they don’t lose any universal aptitudes in the process. That’s mission critical, because in 10 years, they will jointly make up some 40% of the world’s population.”

Greg Adamson
‘The World of the Future Will Be a Demanding Struggle Against the Limitations of Our Intelligence, Not a Comfortable Hammock In Which We Are Waited On By Robot Slaves’
Greg Adamson, president of the IEEE Society on Social Implications of Technology and chair of the IEEE ad hoc committee on Tech Ethics, said, “2035 will be the year that many jobs as we know them fall off a cliff. For example, the replacement of truck driving as a profession by autonomous commercial vehicles will remove a key professional activity from our societies.
“As no society globally today has shown a sophisticated capacity to manage significant change, the predictable massive loss of jobs will nevertheless come as a shock. Many other changes will also occur, but there is little indication that the future as described by author Kurt Vonnegut in his first novel, ‘Player Piano’ – a future in which automation has taken over most jobs, leaving many people unemployed and feeling without purpose – is not the most likely future.
“Vonnegut’s understanding was based on the work of Norbert Wiener. In his last book, in 1964, Wiener wrote, ‘The future offers very little hope for those who expect that our new mechanical slaves will offer us a world in which we may rest from thinking. Help us they may, but at the cost of supreme demands upon our honesty and our intelligence.
“The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.’
“The current state of debate on the future of AI has a long way to go before it reaches the sophistication of these insights provided more than six decades ago.”

Juan Ortiz Freuler
The Accelerating Application of Automation Will Reshape Human Capabilities and Reorganize the Entire Framework That Underlies Our Understanding of the Individual and Society
Juan Ortiz Freuler, a Ph.D. candidate at the University of Southern California and co-initiator of the non-aligned tech movement, wrote, “In the socio-political and economic landscape of 2035, the accelerating application of automation will not merely reshape human capabilities, it will reorganize the framework upon which our understanding of the individual and society is built. Algorithmic systems are not only replacing and augmenting human decision-making but reshaping the categories that structure our social fabric, eroding long-held notions of the individual. As we move deeper into this era, change may render the very idea of the individual, once a central category of our political and legal systems, increasingly irrelevant, and thus radically reshape power relations within our societies. The ongoing shift is more than a technological change; it is a profound reordering of the categories that structure human life. The growing integration of predictive models into everyday life is challenging three core concepts of our social structure: identity, autonomy and responsibility.