Gary Bolles
Gary Bolles is author of “The Next Rules of Work,” chair for the future of work at Singularity University and co-founder at eParachute. This essay is his written response in January 2025 to the question, “How might expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’?” It was published in the 2025 research study “Being Human in 2035.”

“Due to the wide range of products in use in 2025, we already have extensive experience with the effects of technology on our individual and collective humanity. Each of us today has the opportunity to take advantage of the wisdom of the ages, and to learn – from each other and through our tools – how we can become even more connected, both to our personal humanity and to each other.

“We also know that many of us spend a significant share of our waking hours looking at a screen, inserting technology between ourselves and others, with the erosion of the social contract that such insulating technologies can catalyze. That erosion can only increase as our technologies emulate human communications and characteristics.

“There will be tremendous benefits from ubiquitous generative AI software that can dramatically increase our ability to learn, to have mental and emotional support from flexible applications and to have access to egalitarian tools that can help empower those among us with the least access and opportunity. But the design of software we use today already begins to blur the line between what comes from a human and what is created by our tools.

“For example, today’s chat interface is a deliberate attempt to hack the human mind. Rather than simply providing a full page of response, a chatbot ‘hesitates’ and then ‘types’ its answer. And the software encourages personification, addressing humans conversationally and referring to itself with human pronouns.

“The line between human and technology will blur even more as AI voice interfaces proliferate and as the quality of generated video becomes so good that distinguishing human from software will be difficult even for experts. While some will use this as an opportunity over the next 10 years to reinforce our individual and collective humanity, many will find it hard to avoid personifying the tools, seduced by the siren song of software that simulates humans – with none of the frictions and accommodations that are inevitable parts of authentic human relationships.

“That line-blurring will accelerate rapidly with the sale of semi-autonomous AI agents. Silicon Valley CEOs and venture capitalists call these technologies ‘co-bots,’ ‘co-workers,’ ‘managers,’ ‘AI engineers’ and a ‘digital workforce,’ and these techno-champions have economic incentives to encourage heavily marketed and deeply confusing labels that will quickly find their way into daily language. Many children are already confused by Amazon’s Alexa, automatically anthropomorphizing the technology. How much harder will it be for human workers to resist language that labels their tools as their ‘co-workers,’ and to avoid the trap of thinking of both humans and AI software as ‘people’?

“By elevating our technologies, we inevitably diminish humans. For example, every time we call a piece of software ‘an AI,’ we should hear a bell ringing as we make another dollar for a Silicon Valley company. It doesn’t have to be that way. For the first time in human history, AI-related technologies give us the capacity to help every human on the planet to learn more rapidly and effectively, to connect more deeply and persistently and to solve so many of the problems that have plagued humanity for millennia. And we have an opportunity to co-create a deeper understanding of what human intelligence is, and what humanity can become.

“We are likely to make significant strides forward on all these fronts in the next 10 years. But at the same time, we must confront the sheer power of these technologies to erode the very definition of what it is to be human, because that’s what will happen if we allow these products to continue along the pernicious path of personification. I think we are better than that. I think we can teach our children and each other that it is our definition and understanding of humanity that defines us as a species. And I believe we can shape our tools to help us to become better humans.”


This essay was written in January 2025 in reply to the question: Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’? This and nearly 200 additional essay responses are included in the 2025 report Being Human in 2035.