Essays Part I Continued: Surviving change
Detailed responses to the following core question:


“Imagine digitally connected people’s daily lives in the social, political, and economic landscape of 2035. Will humans’ deepening partnership with and dependence upon AI and related technologies have changed being human for better or worse? Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors?'”
Being Human in 2035 Elon University Imagining the Digital Future Center report

This is the second of four pages with responses to the question above. The following sets of experts’ essays are a continuation of Part I of the overall series of insightful responses focused on how “being human” is most likely to change between 2025 and 2035, as individuals who choose to adopt and adapt to AI tools and systems adjust their patterns of doing, thinking and being. This web page features many sets of essays organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant. Some essays are lightly edited for clarity.

The next section of Part I includes the following essays:

Senior Foresight Analyst for a Major Nation-State: AIs that work with us can help us be more successful; a compassionate AI might be a better friend than 95% of your social network.

Eni Mustafaraj: Smartphone technology has already transformed humanity; we don’t need to wait 10 more years to understand that things are not going well for us.

Greg Sherwin: Memory, creative thinking and the ability to rapidly establish baseline competencies to the mean in novel areas will gradually increase and become more accessible due to AI.

Tom Wolzien: By 2035 AIs’ decision-making functionality will be everywhere; they will impact our lives directly, often spurring humans (or AIs) to take action without any oversight.

Marine Ragnet: AI development should prioritize human flourishing and agency over efficiency, ethics over technical capabilities and democratic oversight over rushed innovation.


Senior Foresight Analyst for a Major Nation-State
AIs That Work with Us Can Help Us Be More Successful; Your Best Friend May Not Be an AI But a Compassionate AI Might Be a Better Friend Than 95% of Your Social Network

A senior foresight analyst with 20 years of leadership experience working for a major world government who chose to remain anonymous wrote, “We have seen a dramatic expansion of large language model and related generative AI capabilities in the 2020-2025 period. Progress has slowed recently, but new models are doing more, faster and cheaper than technologies of even 2023 or early 2024.

“It is reasonable to assume that by the early 2030s the technology will be woven into many pieces of human life. The ability to go deeper on a question, to expand understandings and to query the truth or falsity of an understanding, along with the option of using an agent able to execute tasks on your behalf, will be both widespread and standard. This could lead to obvious outcomes like hyper-personalized digital content environments and erosion of privacy, but also to a redefinition of relationships. Many will be lost, albeit temporarily, to their virtual chatbot friends and lovers. Others will treat AIs as friends or grad students, able to support them and encourage better relationships with peers, families and employers. Switching between conversations and codes with one’s AIs and with other humans could become second nature to many.

“The social landscape will be reshaped by differential access to AIs that work for us, with us, and through us, allowing us to be more successful at achieving our goals than we could be individually. One possibility is that advanced AI chatbots may be more compassionate and kinder than other humans in our world. Your best friend may not be an AI, but the AI might be a better friend than 95% of your social network. But access to AIs, particularly the best and most ‘human’ ones, may be a problem if the good ones are differentially available.


“The political landscape may be significantly reshaped by advanced AIs. Today we have AI deepfakes; tomorrow, AIs that do parallel reading and ground-truthing to root out and identify fake news. Governments will use AI to do a lot of work, enabling their operations to be more efficient at lower cost, as will the private sector. The benefits are likely to be more modest than some boosters say and largely will augment jobs rather than displace them. For knowledge work, AI will often be a kind of digital knight, enabling satisfaction of job requirements exactly when pressures grow to do more with less.

“For authoritarian governments, identification of ‘seditious elements’ and personality profiles of citizens are likely to also be enabled through AI, with the potential of having the equivalent of a psych research lab coding citizen desires, responses and utterances operating full time on each citizen, at scale. I expect to see very intrusive data demands by authoritarian governments with the goal of identifying emergent movements and finding soft targets to infiltrate such movements.

“Economically, we could see an uptick in the presence of the (AI-supported) ‘Renaissance Man’ – but not just men, people of all kinds. I expect we will see many more individuals with broad interests using AI to develop deep knowledge on a host of different topics. Expertise will become more common even if humans’ ability to understand what may happen in future and why certain outcomes are probable or improbable remains difficult to package in an algorithm.

“Costs for knowledge work will fall. There will be higher-volume productivity but only modestly increased dollar productivity. Expectations will rise as capabilities increase, and new jobs that did not exist previously will provide most of the new growth attributable to AI in this period.


“The impact on core human traits and behaviours could play out in a variety of spaces. As I think of my own young family, I could see future generative AI externalizing thinking, making internal processing interactive and putting dialectic analysis on demand for life decisions, potentially moving toward having a helpful virtual management consultant available for both major and minor life decisions. This could aid creativity and problem-solving, while expanding empathy and emotional intelligence if well-directed. To what extent this will be available and used by members of the public in the early 2030s is unclear. I would venture that due to dramatic reductions in price and increases in availability and uptake, a much larger fraction of the world’s population could benefit, although it is not clear that we crack even 10%, much less 50%.

“I could see many traditionally taught subjects, such as math skills, being largely automated, while human cognition will be necessary for sanity-checking, working through errors and decoding pipelines for what is actually useful.

“All this goes to identity and purpose: the creator economy will benefit from AI support in research, design and production, but can the global economy build on a billion small-craft suppliers of insights and takes? I would think that some will seek enlightenment or guidance on how to live by consulting AI tools.

“I don’t foresee a huge amount of penetration of AI in the experience of being human in this time frame, as people will continue to want experiences, relationships and social if not economic reward for demonstrating their skills or progress towards their own self-actualization. AI, working in the virtual space, will play only tangentially in that endeavour – more strongly if VR or AR take off as major and effective social spaces – but direct intermediation of the relationship between people seems a little science-fiction at this time.

“I could see AI support for mediating virtual relationships, including influencers using personalized LLMs as interactive embodiments of parasocial relationships, particularly if there is a way to get backchannel information to the creator summarizing conversations with fans. Overall, will AI have changed being human for better or worse? I suspect in time, the answer will be ‘absolutely better,’ but that may take a long while to develop. AI can teach us how to be kind, help us understand where others are coming from and allow us to dry-run difficult situations or conversations. Mid-century, I could see this becoming more normalized everywhere, but not by 2035.”



Eni Mustafaraj
Smartphone Technology Has Already Transformed Humanity; We Don’t Need to Wait 10 More Years to Understand That Things Are Not Going Well for Us

Eni Mustafaraj, associate professor of computer science at Wellesley College, wrote, “In 2001, Tim Berners-Lee, the inventor of the World Wide Web, together with two colleagues, Jim Hendler and Ora Lassila, wrote a vision piece for Scientific American magazine titled ‘The Semantic Web.’ It imagined a future in which we would all have a personal digital assistant capable of managing our everyday mundane chores: scheduling meetings on our calendar, coordinating tasks on our behalf, finding trusted information on the web, booking flights, comparing products, securely paying bills, the list goes on.

“These are the types of tasks that wealthy people pay human assistants to do for them, so they can use their time to focus either on creative or decision-making tasks. The authors believed that we would not need AI to do these tasks (at that time the progress of AI had stalled); instead, the key would be the augmentation of the existing web with semantic layers and other technologies that would allow these software agents to ‘understand’ the information on the web in order to carry out these tasks on our behalf.


“More than 20 years later, we don’t have a semantic web or the personal software agents that are truly capable of doing these tasks. The generative AI technology being developed at the moment is fundamentally different in a couple of ways:

  1. “AIs are not publicly owned technology, as the web technologies invented by Tim Berners-Lee were. Instead, they are being developed behind closed doors, without transparency and public accountability. This means that they cannot be trusted to have one’s individual interests at heart. AI could well be (or become at any moment) a kind of a Trojan horse. It will always carry the risk of doing someone else’s bidding when we expect it the least.
  2. “Today’s AI advances are not being developed to carry out the mundane tasks to free up our time to do other things. Instead, it is doing tasks that highly-paid humans used to do: write software; generate images, graphics, video, music; write poetry and fiction; create business plans; give life advice; create study guides and summarize new research.

 “By doing such things quickly and reasonably well (as well as an average person can), AI is taking away the motivation for young people to enter these fields. (We are already seeing a decline in the number of students who want to enroll in our introductory programming courses, which have a reputation for being time-demanding and in which the use of AI tools is not allowed.)

“My biggest worry is that the future of generative AI will follow the path that social media took from its advent in the early 2010s to today. When Facebook and Twitter started spreading across the world, their uses in the early days gave us high hopes that these platforms would become tools of democratization and freedom. That is not what happened. Today it is clear that young people who use social media at least five hours a day (which is the average today) are suffering anxiety and depression; studies show such use has increased the level of loneliness among adults, and the platforms carry manipulative content that has exacerbated political polarization across the world.


“It is very likely that the enthusiastic adoption of generative AI at this moment, with its utopian vision of a wonderful AI-Human partnership, will soon show its own harmful effects – one isolated example is that it seems to be motivating young people not to want to study science any longer, because what is the point of doing the hard work of thinking if the AI can do it faster and without any pain?

“The dystopian future depicted in the movie ‘Wall-E’ seems suddenly more likely: humans addicted to their algorithmic-driven entertainment devices (powered by AI), oblivious to the catastrophic consequences of consumerism on the planet (powered by the energy-hungry data centers that are spreading like mushrooms across the globe).

“Although I have no doubt that some researchers or organizations will use AI to achieve significant scientific breakthroughs, I doubt that the major tech companies now developing AI have a vision for a future for humanity that is equitable and committed to human flourishing.

“In my opinion, smartphone technology has already transformed humanity. We don’t need to wait 10 more years to understand that things are not going well for us. By becoming addicted to our phones and the entertainment/distraction that they provide, we have already changed our behavior and might already be in the process of losing many of our core human traits. AI might simply accelerate our descent into the dystopian abyss, because we are already losing or surrendering our agency to make decisions for ourselves.”


Greg Sherwin
Memory, Creative Thinking and the Ability to Rapidly Establish Baseline Competencies to the Mean in Novel Areas Will Gradually Increase and Become More Accessible Due to AI

Greg Sherwin, Singularity University global faculty member, and technology consultant and board member, wrote, “Like many disruptive technologies that came before it, frontier AI will change human social, political and economic life for both better and worse. Each advancement will come at a cost, requiring tradeoffs or a social ‘forgetting.’ For example, GPS has served to severely reduce human risks of getting physically lost, but at the cost of diminishing our prior skills at direction-finding and opportunities for emergent discovery by exploring less travelled paths. The types of change we might expect by 2035 include:


  1. “Memory, creative thinking and an ability to rapidly establish baseline competencies to the mean in novel areas will gradually increase and become more accessible. In many instances, it will challenge us to remember how we achieved these skills without AI.
  2. “Increasing use of AI will highlight the preciousness of true human expertise, rare genius and originality.
  3. “We will lose some curiosity about why something is the correct answer – we will be more satisfied by merely knowing what ‘correct’ is. But while answers will be revered far more than questions, the overall value of questions – more and better questions – will be elevated.
  4. “Unfortunately, our dependence on immediate answers without pausing as to why will also fuel a slippery-slope temptation to absolve ourselves of moral thought in how decisions are made. By outsourcing our ethics to algorithms we will absolve ourselves of agency and responsibility in an indirect attempt to run our ethics by machine.
  5. “Economic growth for individuals will continue to largely correlate with greater loneliness, disconnection and isolation from other humans. Many will seek solace in the artificial care and support of algorithms. Machine companionship might provide some emotionally resonant support at first, but society will quickly come to acknowledge its emptiness and ‘cheapness.’
  6. “Meanwhile, the risk of our human languages becoming used more for human-to-machine and machine-to-machine interactions will abate once non-verbal machine communications with AI begin to become the norm.”


Daniel S. Schiff
‘Capitalism, marketing, attention economics, precarious work, competition and inequality are major forces poised to shape the design of AI systems, human-AI interactions and human life’

Daniel S. Schiff, co-director of the Governance and Responsible AI Lab at Purdue University and secretary of the IEEE 7010-2020 AI ethics industry standard, wrote, “By 2035, many of the digital interconnections that we are experimenting with will have matured into standard aspects of daily life as ‘winning’ products, services and workflows emerge. Many aspects of human psychology, values and behaviors will remain fundamentally the same.


“Importantly, our ways of living will be strongly mediated by economic and social forces, not by technological advances alone. For instance, even if AI is pervasive in healthcare and education, nurses will remain overworked and teachers will be underpaid. Forces such as consumerism, economic competition and inequality seem likely to continue to shape the essence of human life, behavior and self-perception, even in a ‘native’ human-AI world. There are likely to be major gains in wealth, creativity and poverty and an increased variance in human experience owing to deep human-AI integration.

“By 2035, human-AI or human-machine interaction will be more normalized for those who are connected to these technologies. Some of today’s technological tools are highly imperfect or brittle (e.g., virtual agents, home robotics) and others are more mature but far from seamless in terms of their reliability and integration quality (e.g., smart homes, basic digital assistants). In the next decade, many of these tools will become commonplace and broadly reliable, at least insofar as technologies like home appliances and standard software are ‘reliable.’ While today, humans may interact with or be affected by AI systems hundreds or thousands of times a day, e.g., through entertainment, news, or shopping recommender systems, in a decade, humans will have normalized interactions with virtual and embodied machine intelligence in a greater variety of settings and modalities.

“For example, AI tools in educational settings, despite their decade-long history, are still very much in a disruptive and troubled state, while AI tools in healthcare settings are only beginning to benefit from developing best practices and standards. In a decade, some of the ‘winning’ products, services, workflows and modes of interaction in these settings will be normalized, just as the Internet, search engines and social media are embedded in personal and economic life. That said, there will still be plenty of failures, errors and experimental efforts as the marketplace continues to innovate and human society reacts.


“It’s unclear what the level of reliance or human-AI integration will be in specific settings, e.g., educational, healthcare or manufacturing settings. If I had to predict, however, I would say that standard ‘environments’ will remain similar, at least superficially: teachers in classrooms, but with lots of backend usage of AI software support, and the same for students, with lots of backend usage of AI for learning. Medical professionals will remain in hospital rooms with patients, but, importantly, with tremendous usage of AI for research, data management, diagnosis and guided medical advice.

“Along these lines then, it’s possible that the superficial persistence of ‘traditional’ human interactions in traditional settings will understate the actual degree of transformation, as a huge portion of the work, value and impact of life occurs through ‘background’ behaviors and computationally-driven systems.

“Many critical aspects of humankind are likely to remain the same, in part because our core human instincts, psychology and biology are likely to remain similar (absent an AI singularity that drives change at the genetic or advanced cybernetic level). In the context of work and personal life, this includes the continued manifestation of things like outgroup conflict, boredom, stress, interest in entertainment, greed and status seeking, romantic attachments, addiction, loyalty, etc.

“I would not expect massive revolutions by 2035, e.g., that 90% of students around the world are hyper-engaged in personalized AI tutoring and become incredible experts at young ages. However, there may be mini-revolutions at the fringes, such as a growing number of young individuals or individuals from impoverished settings being able to perform incredible feats of learning, creativity and innovation such as becoming experts or starting leading companies. In that sense, increased access to advanced AI may create more variance or volatility in what is possible, with both positive and negative outcomes. Yet, humans are likely to ultimately value many of the same things: security, stability, relationships, pleasure, wealth and so on.

“Critically, much of the norm of human interaction, behavior and essence, is also likely to continue to be driven by major economic forces. Capitalism, marketing, attention economics, precarious work, competition and inequality are amongst the forces that seem poised to shape the design of AI systems, human-AI interactions, and, ultimately, human life. Thus, while an ‘Oasis’-style virtual world with unlimited human-AI-enabled creativity and empathy could evolve in theory, it’s likely that a major AI-VR environment will be (at least as) replete with marketing, attention seeking mechanisms, and various unhealthy and unfortunately predatory behaviors.


“The essence of our cultural and economic milieu, therefore, seems likely to heavily mediate how human-AI interactions shape human essence. Technology’s impact on the essence of humanity cannot be understood exogenously, in the absence of recognizing the importance of social and economic (and political and cultural) forces.

“One area of significant concern is human meaning and self-valuation, particularly in the context of continued competition, inequality and economic stresses. A pessimistic reading of the future-of-work discourse is that there are massive skill gaps, persisting or even growing over decades. If educational systems fail to make transformative progress, which seems likely, then economic forces will continue to replace labor with capital, making AI a substitute for human intelligence rather than a tool to enhance it. Capitalist logic cautions that employers are just as likely to replace workers or make their jobs worse when incorporating AI workflows as they are to create new meaningful jobs or, say, decrease the length of the work week. New jobs that are created may include menial labor like data annotation or perhaps human moderation of content (though some of these specific tasks are also themselves likely to be automated, e.g., through use of synthetic data).

“So, absent dramatic transformations in human education or in the willingness of societies to distribute wealth and leisure more broadly, there is likely to be continued disruption, insecurity and inequality. Individuals in some economic, educational, or social classes, or in various regions of the world, may find themselves continually desperate to find economic security and meaningful work. Even if new innovations increase wealth or health broadly, leading to net positives for the world, it seems unlikely that human-AI workflows will make work itself utopic. If anything, the severely growing skill gaps between AI and humans seem likely to threaten the human sense of self-worth, creating new pathologies, social disruption and the need for new outlets.”



Tom Wolzien
By 2035 AIs’ Decision-Making Will Be Everywhere; It Will Impact Us Directly, Often Spurring Humans (or AIs) to Take Action Without Oversight; ‘AI Automatons Will Beget Human Automatons’

Tom Wolzien, inventor, analyst and media executive, wrote, “People are basically lazy. Research, analysis and thinking are hard work. AI provides an alternative to hard work. The issue will be the unverifiable, those things that take moral, ethical, or intuitive judgment.

“The use of AI to write code, for example, is verifiable. It either works or it doesn’t. I used to manage software writers, but I am not a coder myself. Now I manage AI to write code almost exactly as I used to manage humans. All I need to know is how to run it. And, as with software written by humans, sometimes the code works and sometimes it doesn’t do what I want it to do. Then I tell the AI to fix it and I repeat that command until it works. Just as I do with human employees. AI just does it, with no judgment, morality or ethics in the mechanics, except for human jobs lost.


“Today’s AI provides much-improved search capabilities – easier to read and drawing on more knowledge. It allows me to expand my curiosity. I can ask it, ‘What about this? Explain that in terms I can understand,’ and so on. I’m not a scientist nor am I an environmentalist, but AI can help me understand the damaging significance of methane when compared with CO2. It can visualize the size of the block of carbon produced as a result of a flight I take across the country or around the world. What I do with that visualization is up to me. Again, no judgment, morality or ethics.

“But, as I said, people are lazy, and the AI in 2035 will do much more of the work for us, often leaving us out of the loop in decision-making. It’s a small jump but a giant leap for humanity going from AIs that simply answer when we ask, ‘I want to know’ to AIs that are called to duty when we ask, ‘What should I do?’ The first provides data, and, assuming that data is correct (a technical issue), it helps me develop positions or make decisions. The second bypasses the data collection and analysis stage and lets me leap to a decision without all the work.

“In 10 years, this AI decision-making functionality will be everywhere and it will impact our lives directly. Go into an ER and AI will not just inform the (diminishing number of) doctors of your status; it will do the triage. Decisions on a child’s future education will be made mechanically, without a teacher’s recognition of a ‘spark’ of warning or noting of something special. Decisions on employment will be made based not just on applications, but also on facial recognition, facial traits and body movements, without the traditional lengthy interviews that sometimes result in a more-successful hire because of something that ‘clicks’ between two humans a half hour in.

“The AI automatons will beget human automatons.”


Marine Ragnet
AI Development Should Prioritize Human Flourishing and Agency Over Efficiency, Ethics Over Technical Capabilities and Democratic Oversight Over Rushed Innovation

Marine Ragnet, an affiliate researcher at the New York University Peace Research and Education Program working on a framework to promote ethical AI development, wrote, “The relationship between humans and AI by 2035 will fundamentally reshape our social fabric in ways that demand careful consideration of institutional design and democratic oversight. The research I lead at NYU shows that the complex interplay between enhanced capabilities and the potential erosion of human agency will require proactive governance frameworks that achieve the right balance of:

  1. Innovation and democratic oversight
  2. Technical capability and ethical consideration
  3. Efficiency and human flourishing.

“It is highly likely that AI systems will enhance learning and decision-making in the future if we reach and maintain the right balance in regard to these aspects of human-AI collaboration. It could allow us to enhance rather than diminish human agency. There are several areas of concern.

“Most critically, individual agency may face unprecedented challenges. Our research on democratic technology governance reveals how institutional design choices directly impact whether AI systems enhance or diminish human autonomy. Without careful attention to participatory governance mechanisms, we risk creating systems that subtly shift agency away from human decision-makers.

“The key to navigating these changes lies in developing governance frameworks that ensure AI systems remain tools for human empowerment rather than replacement. We and other international organizations are collaborating in the development of participatory approaches that maintain human agency while leveraging AI capabilities. This includes mechanisms for community oversight, democratic governance of AI systems and institutional designs that prioritize human flourishing.

“The path forward requires careful attention to power dynamics in technological development. Our research demonstrates that when communities have meaningful input into AI system design and deployment, the resulting technologies better serve human needs while preserving essential aspects of human agency. This participatory approach will be crucial for ensuring that advanced AI systems enhance rather than diminish what makes us human.

“The capacity for deep thinking about complex concepts may face particular challenges as AI systems offer increasingly sophisticated outputs that could reduce incentives for independent analysis. This dynamic recalls patterns we’ve observed in our research on community engagement with AI systems, where convenience can inadvertently reduce participatory decision-making.

“Social and emotional intelligence present perhaps the most nuanced trajectory. While AI could possibly enhance the ability to understand emotional patterns, research indicates that an overreliance on algorithmic interpretation of human emotion could atrophy natural emotional intelligence. Similarly, empathy and moral judgment might face pressure from automated decision systems that prioritize efficiency over ethical complexity.

“The impact on self-identity and shared cultural values warrants particular attention. Technological systems can either strengthen or erode local value systems depending on their design and implementation. By 2035, this tension will likely intensify, requiring robust institutional frameworks to ensure AI systems respect and enhance rather than homogenize cultural diversity.

“By 2035, the quality of human-AI interaction will largely depend on the governance frameworks we develop today. Institutional design choices can either empower or marginalize human agency. Success will require moving beyond technical capabilities to consider how these systems integrate with and support human social structures.”


This section of Part I includes the following essays:

Dmitri Williams: The efficiency and low friction of tech-enabled living immerses us in experiences mediated by capitalist or socialist interests that mute real human togetherness.

Micah Altman: Profit-driven uses of AI may make it difficult to judge the humanity, identity and sincerity of our daily interactions, ‘diluting human relationships and making being human worse.’

Michael Wollowski: Humans will spend their time in smaller ‘communities’ of like-minded people, leading more-solitary lives and substituting interaction with tech for human contact.

Peter Reiner: Widespread job displacement will destroy many people’s ‘meaning in life’ and humans’ self-image will take a big hit when they no longer have cognitive superiority.

Sarah Scheffler: AI brings connectivity down to something that can be simulated without needing an actual person; the societal changes technology enables change humans. We are people, people.

Erhardt Graeff: Generative AI devalues the virtue of humility; awareness of our human limitations inspires us to be more open and tolerant, to seek out others, to be more well-rounded.


Dmitri Williams
The Efficiency and Low Friction of Tech-Enabled Living Immerses Us In Experiences Mediated by Capitalist or Socialist Interests That Mute Real Human Togetherness

Dmitri Williams, professor of technology and society at the University of Southern California, wrote, “I teach a class and do research on the social impacts of technology. This question is the heart of everything. I typically start the first day of that class by noting that there is a baseline set of behaviors that come from being human that we’ve derived from hundreds of thousands of years of evolution.

“There are a lot of theories on this. Let’s use Ithiel de Sola Pool’s ‘Time’s Arrow.’ Humans evolved to interact, feel, touch, mate, hunt, nurture and fight, and our senses and biology have adapted to do these things well, whether on the savannah or in the city. There is a lot of inertia in that baseline, built over a long time, compared to the recent and future disruptions that are occurring on much, much shorter timescales that we can’t adapt to as easily.

“Most of the challenges and opportunities of technology come from instances when the tech incents us away from that evolved biological baseline. In the positive, that’s when it augments us, allowing us to do our human stuff faster and better. A car that gets you to your friend or lover faster is valuable for the enabling of connection. In the negative, it’s when it gives us a reason to be less human. The chief examples of this are the amount of friction or ease we feel when we move from interacting face-to-face to going online. The former is how we evolved and it feels best, but the latter is almost always easier and more efficient. So, when we think about a Zoom meeting vs. face-to-face, or a hangout vs. text chatting, it’s these same tensions. We can do more, be more efficient, etc., with the tech but the cost is loneliness, and it’s why we already have an epidemic of it.

“That’s technology running up against the weight of evolutionary adaptation. Add to that the incentives created by capitalism to go farther faster and to monetize our time and attention and you have a recipe for very productive, but very unhappy people, all feeling less human. AI is going to continue us down this same path by making things even more efficient, and even faster. Capitalist systems will allow AI to keep going down the productivity route while more socialist systems will create boundaries and incentives to build in human values.

“I expect AI to combine with AR to allow people to alter their daily lived experiences visually. But if you can layer anything onto the real world and power it with AI, conflicting human factors result. On the one hand, it is a reason to get back together in-person; on the other hand, it is still mediated by tech. I can imagine AI-powered advertising layered onto everything in a paid, tiered system in capitalist systems, with likely some safeguards in socialist ones.

“Maybe I’ve read too much science fiction, but the core plot points of a hundred stories are about this tension between technology and its capital and being human. Inevitably in the stories, that human baseline from evolution bends and bends and bends until it either crushes people’s humanity, or results in a whiplash of revolution against it. As a very mild example, we have seen a resurgence in young people playing board games in person, not because they make more sense than their online versions – they’re slower and possibly more cumbersome – but because the whole point is human togetherness. People need to touch, to flirt, to hit, to feel the visceral. We don’t want to ‘bowl alone,’ so as AI evolves, the question I will keep asking is almost the Amish one: will that next change make my family, friends, community and workplace better and more human, or merely more efficient, and less human?”


Micah Altman
Profit-Driven Uses of AI May Make It Difficult to Judge the Humanity, Identity and Sincerity of Our Daily Interactions, ‘Diluting Human Relationships and Making Being Human Worse’

Micah Altman, a social and information scientist at MIT’s Center for Research in Equitable and Open Scholarship, opens with a quote from an Umberto Eco novel, writing, “‘Men are animals but rational, and the property of man is the capacity for laughing.’ This is how the fictional protagonist of Umberto Eco’s ‘Name of the Rose’ – a scholastic monk – defines humanity. And, in fact, this is the definition of what it is to be human as descended from the Greek philosopher Aristotle and recast into the form above by the French Renaissance scholar Rabelais. It has dominated much of Western thought for two millennia.

 “Homo sapiens have been recognizably human across all of recorded history. We can still readily recognize the reasoning and emotion in the earliest written story, ‘The Epic of Gilgamesh,’ and the humor in ‘The Iliad,’ written thousands of years ago. Although we are divorced from the language they spoke, the beliefs they held and the conditions of their daily lives, we recognize the characters as human. When, if ever, will technology provide such immediate and extensive access to information that people can never be surprised by a joke? How thoroughly would we need to be digitally networked for loneliness to become unimaginable?

“The experience of being human may be fundamentally changed if and when technological advances enable the direct integration of additional memory and cognitive capacity into our consciousness. Writers such as Olaf Stapledon (in ‘Star Maker’) and Charles Stross (in ‘Accelerando’) have presented wonderful visions of kinds of future cognitive possibilities for humanity. But this is not yet the future of 2035, since many decades (if not centuries) of research are required before such integration could be possible. In a more-limited way, our societal conception of what it is to be a human could be substantially changed if we were forced to interact with separate but sapient artificial intelligences. However, this too is at least a couple of decades in the future – while AIs now produce language well enough to tell jokes, they can’t yet truly laugh.

“What current AI technology does make possible is the rapid expansion of imaginary relationships. Although imaginary relationships have occurred throughout history – children have befriended imaginary companions and adults have conversed with muses – technology qualitatively changes the prevalence and purpose of imaginary relationships. Over the last century, the growth of mass-media technology has catalyzed non-reciprocal (‘parasocial’) relationships with famous figures (or even the characters that actors portray) – for both good and ill. Now, as AI increasingly masters the capability of producing conversation, it can be used to manipulate and exploit others through artificial relationships.

“Artificial relationships can be beneficial – for example, as a well-chosen cuddly doll can calm a child, a well-designed robot seal can calm an adult. Unfortunately, strong incentives exist within the existing market and regulatory structure to apply AI to induce artificial relationships for profit.

“It is legal to exploit our affinity for relationship to produce and sell addictive fantasy (AI) companionship, to strengthen a parasocial relationship with a human influencer to manipulate our political opinions, or to induce an imaginary relationship with a chatbot to sell us more products. It is also increasingly simple to employ AI to trick others into believing that they are interacting not with a machine, but with real people with whom they already have relationships.

“These uses of AI, driven by profit and allowed by weak regulation, may make it substantially harder to judge the humanity, identity and sincerity of our daily interactions. This won’t change what it means to be human, but it could dilute human relationships and make being human worse.”


Michael Wollowski
‘What Will Happen to Societies as a Minority of People Who Seek Enlightenment Interact with a Majority of People Who Just Aren’t? How Are We Going to Advance?’

Michael Wollowski, professor of computer science at Rose-Hulman Institute of Technology, and associate editor of AI Magazine, wrote: “Modern AI is an amplifier. For people who are curious, it is a boon to satisfy their curiosity. For people who are hateful, it is a powerful tool to generate more hate. For people who live in alternate realities, it may foster a twisted perception of the world.

“In 2035, people will spend their time in smaller and smaller ‘communities’ of like-minded people. I have not sorted out yet how those communities might interact, if at all. We know that the ability to communicate and resolve conflict is steadily eroding, as people lead more solitary lives or substitute interaction with technology for interaction with people.

“I am truly concerned that the will to seek a worldview that is supported by science will largely vanish. What will happen to societies as a minority of people who seek enlightenment interact with a majority of people who just aren’t? How are we going to advance societies, engineering, science, the arts in a world in which such things are not appreciated by large numbers of people?”


Peter Reiner
Widespread Job Displacement Will Destroy Many People’s ‘Meaning in Life’ and Humans’ Self-Image Will Take A Big Hit When They No Longer Have Cognitive Superiority

Peter Reiner, professor emeritus of neuroscience and neuroethics at the University of British Columbia, wrote, “The experience of being human will be significantly impacted by AI advances in the next decade. Many of the pluses will be instrumental, such as advances in scientific research and further reductions in the friction of navigating everyday living. Few of these are likely to impact the experience of being human, but two major consequences will emerge from the social side of the equation.

“The first is the widespread job displacement as AI systems provide economically more efficient means of achieving many of the tasks currently carried out by humans. This will not only have an impact on the instrumental ways in which people make a living, but – given the central role that work plays in many people’s lives – the ‘meaning in life’ will take a substantial hit.

“The second will be reconsideration of human exceptionalism. The human self-image has long been tied to an understanding that we may not be the strongest nor the fastest, but that we are the most cognitively endowed beings in our known universe. With the advent of AI tools that surpass humans in many tasks, this long-cherished self-concept will suffer substantially. Precisely how humans will respond is unknown, but without some sort of support there is real danger that anomie – the breakdown of social norms – and other dystopic sequelae might emerge.”


A Professor of Computer Science at a Top Engineering School
AI Brings Connectivity Down to Something That Can Be Simulated Without Needing an Actual Person; The Societal Changes Technology Enables Change Humans. We Are People, People

A professor of computer science at a top U.S. engineering school who is an expert in cryptography, privacy and law, wrote, “While I do believe there will be significant change, ‘the experience of being human’ wouldn’t even make my top 100 concerns. The closest analogy I can think of is something like Google Search or its predecessors – remember Ask Jeeves? And perhaps the Internet as a whole.

“Did those technologies fundamentally alter our societies? Yes. But did they change ‘the experience of being human’ or our ‘core human traits’? On paper, you could argue ‘yes’; if you compared humans in 2024 to those in 1974, you’d likely see significant shifts in what people value, how they are informed and how they spend their time. However, I believe those shifts weren’t caused directly by the technology itself but by the increased connectivity between people that technology enabled. Tech changed society, and society, in turn, changed humans.

“As I see it, AI essentially brings that connectivity down to something that can be simulated without needing an actual person. Will our values look different in 2035? Almost certainly. But I don’t think the technology itself will be the direct cause. The question seems to ask whether thinking machines will fundamentally change us as people. My argument is: not directly. The societal changes thinking machines enable will reshape us, not the technology in isolation. Maybe I’m splitting a hair, but I think it’s an important one. We are people, people. While I think these things have potential for very positive change, I do believe the negative changes will happen faster and more widely than the positives.”


Erhardt Graeff
Generative AI Devalues the Virtue of Humility; Awareness of Our Human Limitations Inspires Us to Be More Open and Tolerant, to Seek Out Others, to Be More Well-Rounded

Erhardt Graeff, educator, social scientist, and public interest technologist at Olin College of Engineering, wrote, “I am worried about the future of humility – epistemic humility in particular. Most humans struggle with awareness of what they know and what they don’t know.

“Moreover, it can be challenging to value knowledge you don’t have, such as others’ lived experiences, wisdom from unfamiliar cultures, faiths and traditions, or fields you have never meaningfully studied. Generative AI technologies allow us to use knowledge that is beyond us without helping us appreciate what we know or don’t know. In fact, it devalues the virtue of humility.

“Humility ensures that we value the creation of new knowledge, that we are awed when other people do things we cannot or did not think to do, and that we take the time to embrace curiosity and deep listening. Generative AI gives us the illusion that we need not be limited by our own experiences and education, that we can simply access all collective knowledge the AI has been trained on (which is not actually all knowledge).

“Awareness of our limitations enables us to be more open and tolerant, to seek out and collaborate with people from different backgrounds, and to want to be more well-rounded humans. If we design our generative AI interfaces to obscure our lack of knowledge and ability, I fear we will diminish a key aspect of our humanity and our civic capacity.”


The next section of Part I includes these essayists:

Russell Poldrack: AIs will definitely change what we think of as core human traits and behaviors; in particular, knowledge and expertise are likely to be downgraded.

Jeff Eisenach: We will create and apply knowledge with vastly increased proficiency as AI advances, but the nature of human beings will remain constant.

Simeon Yates: AI’s record has been one of increasing environmental degradation, social exclusion, polarization and growing digital and social divides. Why do we allow this to continue?

Charlie Firestone: Trust, personal identity and agency are the most interesting and vulnerable aspects of being human likely to undergo great change in the next decade.

Jeremy Foote: By 2035 most AI dependence will mirror our current relationships with smartphones, integrative but not transformative; AI can help us express our humanity more fully.


Russell Poldrack
AIs Will Definitely Change What We Think of as Core Human Traits and Behaviors; In Particular, Knowledge and Expertise Are Likely to Be Downgraded

Russell Poldrack, psychologist, neuroscientist and director of the Stanford Center for Reproducible Neuroscience, wrote, “I have wide confidence intervals around my answers; I think that predicting the future in a time like this is well-nigh impossible.  

“The impacts will probably be mostly negative when it comes to changes in human abilities. We know from research in psychology that cognitive effort is aversive for most people in most circumstances. The ability of AI systems to perform increasingly powerful reasoning tasks will make it easy for most humans to avoid having to think hard and thus allow that muscle to atrophy even further. I worry that the urge to think critically will continue to dwindle, particularly as it becomes increasingly harder to find critical sources in a world in which much internet content is AI-generated.

“I do hope that the advances in AI will spur more humans to think deeply about what it means to be human, but I doubt that it will. I worry that this will mostly lead to bad outcomes.

“We have been the apex species for millions of years, but this is coming to an end, at least with respect to many cognitive abilities, where AI systems already outshine us or soon will. It seems doubtful that humans will embrace this change, given the major impacts it will have on our lives, particularly in the context of work. Will we rethink the role that work plays in our identity? It seems hard for me to imagine that humans will deal with this gracefully.

“AI will definitely change what we think of as core human traits and behaviors. In particular, knowledge/expertise is likely to be downgraded as a core human value. A positive vision is that humans will embrace values like empathy and human connection more strongly, but I worry that it will take a different turn in which core humanity focuses more on the human body, with physical feats and violence becoming the new core trait of the species.

“Finally, the ongoing degradation of our climate will likely be exacerbated by the energy usage of AI systems. This will probably interact badly with the other disruptions in human society that we will be experiencing related to our use of AI.”


Jeff Eisenach
We Will Create and Apply Knowledge With Vastly Increased Proficiency as AI Advances, and the Nature of Human Beings Will Remain Constant

Jeff Eisenach, senior managing director at NERA Economic Consulting and visiting scholar at the American Enterprise Institute, wrote, “Human beings will remain human beings. Artificial intelligence is just that – intelligence. It will change the way people think and solve problems. But human nature – the conflicts in all of us between right and wrong, kindness and cruelty, diligence and sloth – is inalterable.

“There are of course no perfect analogies, but the changes that come will be akin to those that came with the written word, the printing press and, more recently, the Internet. In this sense, this is simply another phase of the transformation Peter Drucker described in ‘Post-Capitalist Society’ – the increasingly sophisticated ability to apply knowledge to craft new knowledge. And because the pace of change is accelerating – as Alvin and Heidi Toffler divined and described in ‘Future Shock’ over 50 years ago – the transformation will accelerate. A lot.

“Yet the nature of human beings has remained and will remain constant. We will create and apply knowledge with vastly increased proficiency, but we will still experience war and peace, sickness and health, poverty and wealth, triumph and despair. And in our lives we will still love (and hate), rejoice and despair, celebrate and mourn. And those experiences and feelings will be no more or less profound and moving than in any previous era. The wisdom of the Greeks, of the Bible, of Shakespeare is the wisdom of human nature. It is immutable – even in the face of a very, very smart computer.”


Simeon Yates
AI’s Record Has Been One of Increasing Environmental Degradation, Social Exclusion, Polarization and Growing Digital and Social Divides. Why Do We Allow This to Continue?

Simeon Yates, professor of digital culture, co-director of Digital Media and Society Institute at the University of Liverpool and research lead for the UK government’s Digital Culture team, wrote, “AI is not a thing ‘sui generis’; it is not created separately from society, the economy and politics. It is a product of these, not separate from them. AI is not one thing. ‘AI’ as a term is now used to cover everything from (M)LLMs, image analysis, protein identification, automation of tasks, robotics and data analytics to basic statistics.

“Under this definition, we have had AI since the Industrial Revolution. And many digital ‘AI’ tools have been around for decades. LLMs are new, and, as they deliver ‘human-like’ output, they are, of course, the poster child for AI. Also, nearly all of these technologies were developed for commercial gain (even LLMs) and are deeply embedded in contemporary capitalism’s socio-technical networks.

“AI is not in a ‘partnership’ with humans; it is a thing without the agency and standing of people. AI, as currently deployed, is a tool. AIs can sometimes do excellent work (e.g., AlphaFold), but at the time of this writing the popular large language models almost always produce nothing more than ‘bullshit’ (see the research paper ‘ChatGPT is Bullshit’ by Hicks, Humphries, Slater).

“We do not talk about ‘partnering’ with microwave ovens, JCB diggers nor word processors. Until we have full general AI, to talk of partnering is to fall afoul of the discourse/ideology of AI that is developing.

“The reality is that AI (in all forms) is not being openly and transparently presented as just one potential tool to be used. Instead, AI is being foisted upon all sectors of society, economy and politics without assessment, evaluation, risk assessment nor critique. If things like LLMs were a set of new cars, most would not meet roadworthiness checks; were they airplanes (ignoring the AI of autopilots!), they would be grounded. Given the levels of investment in things like LLMs, they have to be pushed to warrant that investment.

“This is the crux of the matter. We cannot evaluate the likely impact on ‘being human’ without considering the socio-economic and socio-technical context. We also need to pour some cold, icy water on AI development’s current rhetoric/discourse. Let’s look at four aspects of this:

  • “LLMs are neither well tested nor well evaluated for many proposed uses. So, will we use them untested and unverified in ever more contexts, likely leading to many social, political, personal and environmental ills? What will an ‘AI Chernobyl’ incident look like? Or do we soon start to assess and regulate these technologies rigorously? Without this, we cannot guarantee positive outcomes.
  • “Their track record is already one of increasing social exclusion (see ‘Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor’ by Virginia Eubanks, 2018), social polarization, environmental degradation and growing digital and social divides. Again, do we allow these ‘impacts’ to happen, or do we regulate this technology (as we have done with nearly every other technology from cars to the internet)?
  • “We need to carefully differentiate (through critical reflection, assessment and evaluation) the different technologies under the banner of AI. Otherwise, we will start to argue that all are ‘bad’ or ‘good,’ which is not the case. There are dangers in both directions.
  • “We don’t know how good or ill might be perceived in a decade or 50 years. The car, the washing machine, contraceptive pills and telecommunications all contributed something to the context in which women in developed countries gained social, cultural, economic, political and personal emancipation from a highly misogynistic culture (not claiming things are perfect now). To many at the start of the 1900s, to many in certain countries now, and (it seems) to a growing number of men in some Western societies, this emancipation is/would be ‘wrong/bad/harmful.’ Social values and technology developments are linked, but not in directly causative and determinist ways. AI is not immune to these realities of cultural context. Whether it is good or ill will depend on our value sets at the time of assessment.

“What we can do is evaluate the impact it is having now. The current answer, as ever, is mixed.

“Next, we need to unpack ‘being human.’ Considering humans’ interaction with these tools as a ‘growing partnership’ is to buy into the ‘vapourware’ rhetoric of the Big Tech firms. Framing the question in terms of ‘being human’ is essentialist. It assumes what is human is something that holds for all. It is not. It is highly varied and contextual and already includes lots of technology interactions. Implicit in this is the idea that AI is a thing of self-agency with which we interact; it is not (though I could write at length about the importance of black boxes in Actor Networks and their apparent/implicit agency). We define what it is to be human in our current context.

“The question is not what we view as core human traits, but what kind of humans will this make? Is Human A, who reads Proust in depth and reflects on what makes a good life, or Shakespeare’s Sonnets and reflects on love, or reads Solzhenitsyn or Primo Levi and reflects on human evil, then writes an essay, the same as Human B, who gets ChatGPT to summarise all of this or asks NotebookLM to do a podcast?

“No, they are not. Both are changed by this activity but in very different ways. Is one better than the other? Is a society with or without either better or not? Unfortunately, the reality is we may not get a choice – the push of AI into all aspects of life, as with earlier information and communication technologies and digital media, will rapidly move ahead, driven by economic imperative and political expediency.

“What will be the case in 2035 is that we will be unpacking the crashes caused by unregulated AI, as we have been doing with social media today and as we did in the 1960s with cars (see ‘Unsafe at Any Speed’ by Ralph Nader).”


Charlie Firestone
Trust, Personal Identity and Agency Are the Most Interesting and Vulnerable Aspects of Being Human Likely to Undergo Great Change in the Next Decade

Charlie Firestone, president of the Rose Bowl Institute, previously vice president and executive vice president at The Aspen Institute, wrote, “The world of 2035 will be highly digitally connected. AI will be integrated in such subtle ways that it is barely noticeable. The more digitally adept will incorporate AI and other innovative techniques to separate themselves further from those who are not as capable. There will be a great AI divide, creating greater divergence in functional capabilities among humans.

“There will be extremely significant advances to the human condition – particularly in health remedies and collective ventures – as well as a significant increase in individuals’ productivity. Challenges will also increase. First, minor actors will be able to create significant AI-enhanced weapons that could be life-threatening to billions. For example, the threat of backpack nukes could be joined by destructive cyber-weapons capable of equal disaster.

“Second, the trend toward polarization could reach its peak by 2035. Hopefully, it will hit that peak earlier and we will move toward greater convergence of thought and cooperation. At the same time, further digitization and use of AI is likely to lead to more personal isolation, particularly for those who are already so inclined. Already today, the polarization accelerated by digital tools can be used to dampen public empathy to such a great extent that it can escalate horrifying human conflicts. The issues of trust, personal identity and agency are the most interesting and vulnerable aspects of being human likely to undergo great change over the next 10 years. None of these traits can be thought about individually, so the broader trends will affect each.

“The trend toward polarization, exacerbated by the divergence in human use of digital tools, will create more challenges to humans’ trust in others, in institutions and in their world views. Already today, we have to question everything we experience in the digital sphere. The need for the application of critical digital literacy skills will increase greatly at a time in which most people may not be inclined or able to implement them. Determining who and what to trust will be a significant life skill that some will develop but many will not. Each person’s management of their digital selves will strongly impact personal agency.

“Much wider changes to human qualities are likely to come, but probably not in the next 10 years. Looking beyond to 20 years out requires a dip into science fiction. That is left to our imaginations which, by the way, will be with us for a much longer time.”


Jeremy Foote
By 2035 Most AI Dependence Will Mirror Our Current Relationships with Smartphones, Integrative but Not Transformative; AI Can Help Us Express Our Humanity More Fully

Jeremy Foote, a computational social scientist teaching and doing research at Purdue University about cooperation and collaboration in online communities, wrote, “What it means to be human, what it feels like to exist in the world, is a product of much more than our technology. It is embedded in social relationships, in the long weight of culture and history and even in our bodies. In that sense, no matter how dramatic the technological change, 10 years will never be enough to change the experience of being human in any fundamental way.

“While we will almost certainly use AI systems for many daily tasks by 2035, for most people, this dependence will probably mirror our current relationship with smartphones and internet connectivity. It will be deeply integrated into our lives but not transformative of our core human traits. The most likely outcome is that we will develop new norms around having AI assistants who we see as sophisticated tools and collaborators rather than as agentic intelligences.

“It seems likely that many activities that are contested today will be resolved such that norms allow for AI assistance. Scientific papers, journalism and even most classroom work will be authored with AI collaboration, much as we now accept calculators and spell-checkers. Human-AI artistic and musical collaborations are inevitable, and we will see a flowering of creativity as creative work becomes more accessible to more people. In that sense, AI may actually help us to express our humanity more fully.

“Over a longer timeframe, we will need to develop new ethical frameworks around how to treat increasingly sophisticated AI systems. It is likely that we will create autonomous beings long before we are willing to truly recognize them as such. However, while these challenges are on the way, I predict that by 2035 we will not yet have to confront them head-on.”


The next section of Part I includes the following essays:

Youngsook Park: To create a world that is more prosperous, equitable and fulfilling we must strike a balance between technological advancement and human values.

Volker Hirsch: Critical thinking and problem-solving skills may erode if robust and neutral governance, reliable knowledge sources and major education reforms are not undertaken.

Mario Morino: The possibilities to improve humanity are beyond our current understanding; so are the risks. Change is arriving quickly. Will we ever take a pause for absorption and adaptation?

Peter Suber: In the AI age, the gift of trust to the untrustworthy and the acceptance of answers without inquiry will be a clear loss for humanity; there will be widespread, undetectable fraud.

Risto Uuk: We are risking the loss of our ability to plan, to think critically, to confidently communicate in-person with others of our kind, even risking our overall well-being.


Youngsook Park
To Create a World that Is More Prosperous, Equitable and Fulfilling We Must Strike a Balance Between Technological Advancement and Human Values

Youngsook Park, CEO at Almindbot, futurist and chair of the Korean Node of The Millennium Project, wrote, “The next decade will witness exponential growth in AI capabilities, leading to more-sophisticated autonomous systems. In education, AI-powered personalized learning platforms will tailor instruction to each student’s unique needs and pace. AI tutors will provide instant feedback and support, freeing up human teachers to focus on fostering creativity, critical thinking and social-emotional skills.

While the integration of AI into society presents numerous benefits, it also brings serious difficulties. Issues such as job displacement, algorithmic bias and the potential for AI to be used for malicious purposes must be carefully considered.

“Healthcare will undergo a similar transformation, with AI enabling earlier disease detection, more accurate diagnoses and personalized treatment plans. AI-driven drug discovery will accelerate the development of new therapies, while robotic surgery will enhance precision and minimize risks. In industries, AI-powered automation will streamline operations, increase productivity and create new job opportunities. And from self-driving cars to smart factories, AI will revolutionize transportation and manufacturing.

“There will be a shift in human values and purpose. As AI takes on more routine tasks, humans will be liberated to pursue more fulfilling and meaningful endeavors. The reduction in mundane labor will allow individuals to focus on creativity, innovation and social connection. With AI handling many of the world’s problems, humans can turn their attention to addressing grand challenges such as climate change, poverty and inequality.”


Volker Hirsch
Critical Thinking and Problem-Solving Skills May Erode If Robust and Neutral Governance, Reliable Knowledge Sources and Major Education Reforms Are Not Undertaken

Volker Hirsch, chief commercial officer at the UK’s Medicines Discovery Catapult and venture partner at Amadeus Capital, wrote, “AI will mediate many human interactions, from personalised virtual assistants and multi-agentic chatbots to AI-driven social platforms. While this could enhance communication, it risks diminishing organic social skills.

“In the short term, an imbalance between tech-savvy early adopters (and owners of and actors on the respective digital platforms) and the rest of society may well lead to negative distortions and misinformation. Longer-term, I expect that these will be counter-steered by better checks and balances on such systems; a healthy equilibrium is, arguably, a requirement for the economic longevity of these platforms.

“If deployed well, AI can enhance accessibility, giving individuals with disabilities tools to live more independent and fulfilling lives. We are likely to see significant innovations in life sciences and healthcare powered by AI, which should lead to better (and earlier) diagnostics and advances in personalized cell and gene therapy, cancer detection and treatment, improving quality of life and longevity, whilst, at the same time, impacting the economics of health dramatically.

“In the workplace, AI is likely to automate routine tasks and augment human decision-making. While this should lead to more efficient workflows and new opportunities following increased productivity, it might also exacerbate wealth inequality if benefits are not evenly distributed. AI will likely aid governance through predictive analytics, enabling data-driven policies. However, reliance on AI for political decisions might raise concerns about transparency, bias and accountability.

“Privacy concerns are likely to escalate as more personal data becomes integrated into AI-driven systems, potentially leading to mass surveillance or misuse of information. This might, however, be short-term, as people gain a better understanding of how AI utilises data. I also expect that specific AI approaches, like federated learning (which does not expose the raw data to the algorithms), will likely alleviate or eradicate concerns about private and confidential data (for instance, in health).

“Within the broader society, a full and equitable enjoyment of AI’s benefits will, however, crucially depend on three factors:

  • “Comprehensive, unbiased and balanced data sets that contain continuous checks on the maintenance of these values.
  • “A drastic change to our approach in education: The stale, calcified approach to teaching and learning is not fit to deal with the kind of quick change we are likely going to see, which risks leaving behind those least equipped to catch up under their own steam; educational systems and approaches need to be adapted to allow for continuous learning and training.
  • “Robust and neutral governance from state actors. This might be the Achilles heel in the present political environment. The U.S., Russia and China are lagging behind other nation-states in this category.

“Without these three factors being in place, there is a distinct danger that critical thinking and problem-solving skills might be eroded; they both depend on good education and reliable knowledge sources. The abuse of AI for short-term goals, including the use of deepfake technology and AI-enhanced misinformation could undermine trust in media and public discourse, leading to significant societal turmoil. Other areas that might be impacted by a lack of adapted educational approaches are empathy and creativity. As AI takes on caregiving or companionship roles, humans might interact less with each other, potentially dulling empathy and interpersonal skills. And an over-reliance on AI for generating ideas might narrow the definition of creativity or make human creativity less valued.”


Mario Morino
The Possibilities to Improve Humanity Are Beyond Our Current Understanding, and So Are the Risks; Change Is Arriving Quickly. Will We Ever Take a Pause for Absorption and Adaptation?

Mario Morino, chairman of the Morino Institute and co-founder at Venture Philanthropy Partners, a pioneer in venture philanthropy, wrote, “By 2035, AI will drive many innovations, improvements and disruptions, assuming a ‘normal’ evolutionary path. The pace of change will literally explode thanks to the speed at which AI empowers its users. Will we ever reach a point where the sheer volume of change will necessitate a pause, allowing for absorption and adaptation? It’s impossible to predict with certainty what will happen in the next 10 years. That said, here are three potential scenarios – ranging from normal evolution to absolutely radical change – that could define the next decade.

“Normal Evolution: With AI’s inherently increasing speed, many aspects of its use will help humans improve in both work and life, leading to changes in behavior in transitions similar to those we experienced during the introduction of personal computers, distributed systems, smartphones and social media, but this change will be even more pervasive, with both greater benefits and risks than we can currently imagine. This is the ‘normal’ view.

“Expedited Learning: AI will revolutionize how we learn and the speed at which we absorb information. By tapping into existing resources – text, video, audio and future generative content – imagine digesting information from most human systems (broadly defined), research papers, YouTube and other streaming channels, the Library of Congress, the human genome and more. Future learners will aim their AIs at meeting their specific needs and become skilled at prompting them for tailored insights with supporting explanations. Picture in-depth, multi-sensory, real-time learning and experimentation.

“Seismic Societal Shifts: Unimaginable opportunities and threats lie before us. Will AI be the unifying force that helps humans unlock greater value by integrating data, predictive analytics, robotics, nanotechnology, synthetic biology and more? Or will it be a destructive force in the hands of dictators, terrorists, sociopaths and other malicious actors? Or both? While AI can help humans solve global problems such as finding a cure for cancer, combating climate change and limiting the use of weapons of mass destruction, there’s also the very real danger of it being misused in ways history has shown will happen.

“The possibilities to improve humanity are beyond our current understanding, but with this great opportunity comes the risk of unintended negative consequences. We face fascinating and frightening times ahead.”


Peter Suber
In the AI Age, the Gift of Trust to the Untrustworthy and the Acceptance of Answers Without Inquiry Will Be a Clear Loss for Humanity; There Will Be Widespread, Undetectable Fraud

Peter Suber, an expert in the philosophy of law, director of the Harvard Open Access Project and senior researcher at Harvard’s Berkman Klein Center for Internet & Society, wrote, “We will depend on AI in more and more aspects of our lives. But it’s undependable. It will improve, and the improvements will reduce many but not all the risks of our dependence. However, for the same reason, these improvements will deepen our dependence.

“AI supports writing, and it can be better than nothing for novice, rushed and commercial writers. However, writers like scholars, journalists and novelists understand how the process of writing supports the process of thinking. That’s why they’ll use AI less often or less deeply than others, and why those who use it most will least appreciate what they are missing. Students who turn to AI-assisted writing in the easiest ways will deprive themselves of a fundamental part of their education. Students who use AI in ‘less convenient’ ways, for example to challenge their drafts with argued objections, could enhance their educations.

“AI supports conversation and the illusion of companionship, and it could be better than nothing for the lonely. But it will always be a weak substitute for casual and committed human connection. AI supports curiosity. We can easily ask any questions that occur to us and get instant answers. But a hefty fraction of the answers will be false, undocumented, or both. The cultivation of spontaneous curiosity will be a clear gain. The gift of trust to the untrustworthy and the acceptance of answers without inquiry will be a clear loss.

“AI supports undetectable fraud, for example in email phishing attacks and political smear campaigns. (This is just one front on which AI improvements will increase rather than decrease the risks of our dependence on it.) We’ll know this in general even if we can’t know it in individual cases. We’ll know it because every day we’ll hear notable people claim that some embarrassing photograph or video of them is a fake. We won’t know when they’re right or when they’re wrong. We might give them credence in general; give it on partisan lines; or withhold it in general.

“These are the major forks in the road, though there are others. Each of them leads to disaster. We could become credulous about public figures (or the ones we like), credulous about their attackers (or the ones we like), or incredulous, suspicious and unpersuadable about nearly everyone. We could let antecedent bias and trust replace truth-seeking or let cynicism and denial replace truth-seeking.”


Risto Uuk
We Are Risking the Loss of Our Ability to Plan, to Think Critically, to Confidently Communicate In-Person With Others of Our Kind, Even Risking Our Overall Well-Being

Risto Uuk, European Union research lead for the Future of Life Institute, based in Brussels, Belgium, wrote, “Over the past few decades, material well-being indicators have largely improved, and this trend could be expected to continue. However, measures of life satisfaction and experience sampling haven’t shown comparable improvements, and loneliness has increased. Mental well-being appears to have stagnated or even declined for many people. Given that income and life circumstances significantly influence life satisfaction, AI could potentially drive further improvements. This potential, however, depends on coordinated intervention across all sectors, including government. AI also potentially presents serious risks, including catastrophes, existential threats, increased surveillance, erosion of democracy and concentration of power, among others. The shift toward living more online rather than in the physical world may challenge human psychological well-being.

“Socrates was allegedly opposed to the technology of writing, which he believed would reduce the capacity to remember things. He was right about that. But we now recognize that writing has enabled tremendous improvements in daily life, particularly through its role in advancing modern science. That said, automating every task, including critical ones, through new technologies, may not yield positive outcomes overall. Should we accept the loss of our ability to plan, or think critically or to communicate with people in physical spaces? A general-purpose technology like AI could potentially have that impact.

“Looking ahead to the potential impact of AI on specific capacities in the coming decade, the outlook for curiosity and learning ability is especially concerning to me. Many current applications offer gimmicky features or simply provide answers rather than encouraging learning to think or amplifying curiosity. Without self-motivation to use AI as a learning tool, users merely receive answers from AI (sometimes incorrect ones). Similarly, in areas like creativity, decision-making and problem-solving, AI tends to do it for users rather than encourage the users to practice those skills. People naturally gravitate toward the path of least resistance, turning to AI for immediate solutions rather than working hard on a solution themselves.

“Regarding social and emotional intelligence, while AI could help users explore how to overcome communication challenges or ways to support others, this requires proactive engagement – something most users don’t naturally pursue.

“I expect dramatic changes in human capabilities and behaviors due to AI in the next decade. When the smartphone was introduced and widely adopted, it dramatically affected human capacities and behaviors in many ways, from the near-complete loss of phone-number memorization to several hours of daily use, even to the point of people not noticing their surroundings when walking and not speaking with each other in restaurants. AI will have a similar and even larger effect because it is more general-purpose. For instance, almost nobody might draft an essay on their own from scratch or even have the ability to do so. Frankly, I’m already tempted by it right now when brainstorming these thoughts.”


The next section of Part I includes these essays:

Cristos Velasco: Human responsibility will be altered by 2035 and traits like creativity, empathy and reasoning will evolve and continue to prevail as the main differentiators of humanness.

Amy Sample Ward: Choice, analysis and reasoning are valuable practices of being human that are being eroded by the integration of AI into nearly every technology tool and service.

Calton Pu: How might AI change ‘being human’? It’s yet to be seen, but most people simply think of cars, computers and smartphones as useful extensions of their humanness.

Jeremy Pesner: If people cede more and more work to AI while forgetting how to do it themselves, they could change in several ways, losing thinking, writing and organizational skills.



Cristos Velasco
Human Responsibility Will Be Altered By 2035 and Traits Like Creativity, Empathy and Reasoning Will Evolve and Continue to Prevail as the Main Differentiators of Humanness

Cristos Velasco, international practitioner in cyberspace law and regulation and board member at the Center for AI and Digital Policy, based in Mannheim, Germany, wrote, “I strongly believe that dependence upon AI and related technologies will continue to change being human for the better by 2035.

“The pace of human adaptation due to AI will depend on many different factors that will be impacted widely and diversely based on the economic and social development of countries, and more particularly on the mindset of citizens (regardless of the existent generation gap). Some will be more able and willing than others to adapt and shift to trusting in the use of AI to improve their quality of life, to bridge communication barriers or simply to redefine work and leisure. This shift will eventually be unstoppable, however, and most humans will need to adapt to and coexist with AI.

“Changes in humans’ sense of responsibility are among the impacts I expect as AI advances, due to an increase in complex ethical implications in many aspects of being a citizen, including the administration of justice, law enforcement, healthcare, citizen security and consumer protection. This will eventually lead to redefining human responsibility.

“Further, societies will face cross-border pitfalls, legal and regulatory issues and possible conflicts between ethical principles and the rule of law that are not yet fully developed, interpreted, or resolved at the international and regional level. Preserving key and fundamental human values and balanced technological progress will help us enjoy and preserve the experience of being human.

“How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’? As the interaction between AI systems and humans deepens, core human traits like creativity, empathy and reasoning will evolve and continue to prevail as the main differentiators of human qualities and attributes that AI systems and computer algorithms still lack and may not be able to fully develop. Maintaining a balance between embracing the benefits of AI and preserving core human traits and behaviors will be the next race in preserving the future of our existence in a fully connected, AI-driven society.”


Amy Sample Ward
Choice, Analysis and Reasoning Are Valuable Practices of Being Human That Are Being Eroded By the Integration of AI Into Nearly Every Technology Tool and Service

Amy Sample Ward, CEO of NTEN and author of “The Tech That Comes Next,” wrote, “It is already the case that the drive to establish the technological and infrastructure systems associated with building out new AI tools is competing with the vital need for individuals and communities to receive basic services like access to water, electricity and internet connectivity.

“By 2035, the experience of being human could involve even more competition for resources against data centers and technology services that do not otherwise support or aid billions of human lives. The deep irony of this is evident right now in the city of Detroit, where community members are being funded to create ‘innovations’ such as AI tools that identify whether their water is safe to drink, while relying on large technology companies that are using up massive amounts of water to run the data center enabling the app.

“Choice, analysis and reasoning are valuable and exercised practices of being human that are being eroded by the integration of AI into nearly every technology tool and service. How do people continue to experience being human when these important practices are disabled from their regular life?”


Calton Pu
How Might AI Change ‘Being Human’? It’s Yet to Be Seen, But Most People Simply Think of Cars, Computers and Smartphones as Useful Extensions of their Humanness

Calton Pu, co-director of the Center for Experimental Research in Computer Systems at Georgia Institute of Technology, wrote, “Over the last 50 years, humans have become comfortable with the evolution of new technologies and the incorporation of technological advances into their lives. Most humans think of cars, computers and smartphones as useful tools, not a threat. They see them as useful extensions of their humanness.

“Whether AI and related technologies are going to change humans or extend humans’ core characteristics will depend on each individual’s perspective. How did smartphones change us? Did they extend our reach and abilities?

“What about the influence of computers back in the days before we developed our relationship with smartphones? And how did we view our relationship to cars before we used computers? These revolutionary technologies have all become significant tools in our lives. It is difficult for most of us to imagine life without cars, computers or smartphones.

“Before the arrival of today’s AI, humans had accepted and incorporated many such person-enhancing technologies as extensions of themselves, and that may have changed their humanness. As these extensions of the ‘human core’ were adopted and ended up changing human lives, we learned how adaptable humans can be. Over time, it’s only natural that at least some humans will adapt to having and using AI tools as extensions that may eventually broaden their humanness.”


Jeremy Pesner
If People Cede More and More Work to AI While Forgetting How to Do It Themselves, They Could Change in Several Ways, Losing Thinking, Writing and Organizational Skills

Jeremy Pesner, a policy analyst, researcher and speaker expert on technology, innovation and futurism, wrote, “Children learn to add numbers by hand but rarely do it again once they graduate elementary school. At that point, they are given a calculator because it’s assumed they know how to add well enough to skip the tedium of the process. I suspect that AI systems will be used in the same way: as shortcuts for thinking once students have already proved that they’re capable of thinking without them. It would be hugely problematic to cede to AI the abilities to think and to organize writing and responses before students learn to do that themselves.

“I suspect our education system will remain AI-free until students learn how to do their own research and writing. Given the growing consensus against giving children mobile phones or social media accounts before high school, I suspect that is when they should learn how to use these systems. Hopefully, high schools will teach students how to meaningfully query and use AI while always double-checking and remaining critical of the output. It may well be possible within the next decade for students to learn to train their own AIs, which could introduce them to the promises and perils of the technology.

“Depending on how well all of this is handled and executed, humans’ sense of themselves could change in several different ways.

“In what I imagine we would consider the better scenario, humans learn how to harness AI while still maintaining their innate abilities; just because we all use calculators doesn’t mean we’ve forgotten how to add. In this case, we have clearly delineated the instances in which AI is helpful or routinely outperforms humans, the places where the human touch is still necessary, and how the resultant output changes depending on how many humans and machines are in the mix. Humans, therefore, still understand their unique contributions to digital work, artistic and entertainment outputs, and feel empowered to create what they want while farming out the tedious busywork that is the more complex version of adding numbers together.

“However, the less-optimistic scenario is that people cede more and more work to AI, while forgetting how to do it themselves. If students just feed essay prompts into ChatGPT and never engage with how to write a response themselves, they will be at a loss for not only the skill of actual writing, but the process of thinking through and structuring their ideas.

“If they just ask ChatGPT for answers to questions without verifying the response or searching the web themselves, they’ll never understand how to conduct research or use the Internet to bring information to their fingertips (the technology’s original promise). If they rely only on AI to generate images, music or video and never attempt to create anything original themselves, they won’t develop an engagement with the creative process or understand how to come up with something that has truly never been seen before. What this essentially means is that being human is a muscle we have to stretch and use, just like regular exercise. If we don’t use these traits, we’ll lose them.

“I asked ChatGPT about the pros and cons of using a calculator. It highlighted increased efficiency and the calculator’s use as an aid to advanced learning and professional work, and said the cons are decreased engagement with the process and foundations of math. That is an excellent metaphor for the path before us now. How do we maintain our engagement with and understanding of the work and material we want to produce while still letting machines handle the parts we don’t want to? The better we can answer that question, the greater chance we stand of maintaining our identity and autonomy as humans.”


The next section of Part I includes these essays:

Umut Pajaro Velasquez: The time is now to help humanity make a positive transition to a new world where AI augments people’s lives far beyond simply making things more efficient.

William Ian O’Byrne: We must ensure that human-AI integration is focused on ethical considerations and a commitment to preserving valuable core human traits.

Robert Atkinson: AI is an ‘additive’ technology, not a transformational one.

A Professor of Computational Social Science: It’s not likely that AI or any technology will shift core human traits or behavior.


Umut Pajaro Velasquez
The Time is Now to Help Humanity Make a Positive Transition to a New World in Which AI Augments Individuals’ Lives Far Beyond the Point of Simply Making Things More Efficient

Umut Pajaro Velasquez, a researcher and professor from Cartagena, Colombia, expert on issues related to the ethics and governance of AI, wrote, “By 2035, artificial intelligence (AI) could be seamlessly integrated into every facet of our existence, anticipating our needs, augmenting our capabilities and reshaping our social, political and economic realities. This future presents both extraordinary possibilities and profound challenges. However, a 2023 Pew Research and Elon University study found that only 28% of tech experts believe AI systems will prioritize human control by 2035. We have very little time to change that and focus on human-centered AI before it is too late.

“If we achieve human-centered design, AI could revolutionize daily life in a more-positive way. AI-powered devices will anticipate our needs, automate tasks and personalize experiences. In healthcare, AI will detect diseases earlier, personalize treatment and assist in surgery. In education, AI will personalize learning and provide tailored feedback. AI is already contributing to scientific advancement, but it also raises difficult questions about humans’ social connection.

“AI companions might reduce loneliness, but an overreliance on them could hinder people’s ability to form meaningful relationships with other humans. The pervasive use of AI in social media and digital communication could lead to more social isolation, not a desirable outcome. Regulation, along with a deepening of digital literacy that not only fosters critical thinking but also helps humans tap into their own emotional regulation and real-world, person-to-person social communication, is crucial.

“AI will revolutionize the economy and workforce. While it may lead to job displacement in certain sectors, it will also create new jobs and change the nature of work. Human workers will focus on tasks requiring creativity, critical thinking and emotional intelligence. AI has the potential to boost economic growth significantly, and we need to prepare ourselves accordingly for it.

“AI presents both opportunities and challenges. In politics, it can enhance democratic processes and government efficiency. However, it could also be used for malicious purposes, such as manipulating public opinion and spreading misinformation. It can enhance our lives in countless ways, but it will definitely also exacerbate inequalities, erode privacy and threaten human autonomy. Navigating this duality requires a nuanced understanding of AI’s potential benefits and risks, a commitment to ethical AI development and proactive multistakeholder AI governance.

“Ethical concerns include AI bias and privacy issues. There are also long-term risks, such as the potential for AI to pose an existential threat or erode human values. However, AI also has the potential to enhance human creativity and self-expression. Education plays an important role.

“Proactive governance and regulation are essential to navigate the complex landscape of AI and ensure it is used responsibly. Policymakers have a crucial role in shaping AI’s development and deployment, addressing ethical concerns and mitigating potential risks with a human-centered perspective.

“The future of being human in an AI-driven world is not predetermined. It is a future that we should be shaping collectively, through our choices and actions and our commitment to ensuring that AI serves humanity and enhances the human experience.”


William Ian O’Byrne
We Must Ensure That Human-AI Integration is Focused on Ethical Considerations and a Commitment to Preserving Valuable Core Human Traits

William Ian O’Byrne, associate professor of literacy education at the College of Charleston, wrote, “As we look ahead to 2035, integrating artificial intelligence and related technologies into our daily lives will profoundly influence our social, political and economic landscapes. This deepening partnership presents both opportunities and challenges that will shape the essence of what it means to be human.

“On one hand, AI has the potential to enhance various aspects of our lives. For instance, AI can provide personalized learning experiences in education, catering to individual student needs and promoting more effective learning outcomes. AI-driven diagnostics and treatment plans can improve patient care and efficiency in healthcare. Economically, AI can optimize operations, drive innovation and open new avenues for growth.

“However, this increasing reliance on AI also raises concerns. There’s a risk that over-dependence on technology could erode personal agency, critical thinking and privacy. The commodification of personal data and the potential for algorithmic biases may lead to social inequalities and ethical dilemmas. Therefore, it’s crucial to approach AI integration thoughtfully, ensuring that it serves as a tool for empowerment rather than a replacement for human judgment and interaction.

“Over the next decade, AI advancements are likely to transform our experiences significantly. As our interactions with AI systems that anticipate our needs and preferences become more seamless, we make sacrifices to gain convenience. Adapting to AI also necessitates a reevaluation of core human traits such as empathy, creativity and authenticity. As AI systems become more adept at mimicking human behaviors, distinguishing between genuine human interaction and AI-generated responses may become challenging.

“The cultivation of digital literacy and an understanding of ethical AI practices is essential to navigating this evolving landscape. Educators are pivotal in preparing individuals to critically engage with technology, promoting thoughtful integration into daily life. By emphasizing the development of a digital identity and encouraging reflective practices, we can ensure that technology enhances rather than diminishes our humanity.

“The deepening partnership between humans and AI by 2035 will undoubtedly reshape our understanding of what it means to be human. By approaching this integration with intentionality, ensuring attention to ethical considerations and a commitment to preserving core human traits, we can harness the benefits of AI while safeguarding our humanity.”


Robert Atkinson
AI is an ‘Additive’ Technology, Not a Transformational One

Robert Atkinson, an economist and founder and president of the Information Technology and Innovation Foundation, commented, “Most people do not work in knowledge-based cognitive jobs, and therefore their experience with AI, at least in how they think and process information, will be limited. AI is going to give many people in knowledge-based jobs more tools, in the same way typewriters, computers and the Internet provided more tools to knowledge workers over the last 50 years. For most of what we do – interacting with people, experiencing the world for ourselves and doing physical activity – AI will not be transformative, just as radio, TV and the Internet were not transformative. They were additive, and we learned to adapt to them. We have long been dependent on knowledge technologies such as books, and they have not fundamentally changed who we are as humans. They have complemented human existence as an unalloyed good.”


A Professor of Computational Social Science
It’s Not Likely That AI or Any Technology Will Shift Core Human Traits or Behavior

An associate professor specializing in computational social science and network science at a major U.S. university wrote, “Human-AI interaction will become more seamless and less noticeable over time, with an overall net-positive benefit. If I look back at the history of the Internet, there was initial panic about the deterioration of, for example, social relationships because of the introduction of computer-mediated communication. While there are persistent concerns in this vein, the empirical evidence suggests that computer-mediated communication has a more-or-less neutral impact on social relationships for most people (neither worse nor better), and it is a huge positive for some, especially folks who struggle to find social support and make personal connections in their offline lives. The advance of human-AI interaction by 2035 will be largely similar – it will become integrated as a routine part of our daily lives, and in many cases we will not even notice we are using it. It will make many tasks more efficient and will also introduce some challenges – notably, it will reduce the need for some jobs and cause major shifts in industries such as data analysis and customer support. I don’t think AI or any technology will fundamentally shift core human traits or behavior. My hope is that AI will free up time so humans can focus on uniquely human endeavors like creativity, empathy and care. History would tell us this is unlikely, but I can still hope!”


> UP NEXT – Continue to Part II of the experts’ essays: The experts whose work is featured in the next section mostly focused their responses on overall societal change; many express concerns over the economic and political forces shaping AI; some suggest potential remedies as humans adapt to new digital tools and systems over the next decade.