Danil Mikhailov is director of DataDotOrg and trustee at 360Giving. This essay is his written response in January 2025 to the question, “How might expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’?” It was published in the 2025 research study “Being Human in 2035.”

“It seems clear from the vantage point of 2025 that AI will be not just a once-in-a-generation but a once-in-a-hundred-years transformative technology, on a par with the introduction of computers, electricity or steam power in the scale of its impact on human societies. By 2035 I expect it to fully penetrate and transform the vast majority of our industrial sectors, both destroying jobs and creating new jobs on an enormous scale.

“The issue for most individual human beings will be how to adapt and learn new skills that enable them to live and work side-by-side with AI agents. As some lose their jobs and are left behind, others will experience huge increases in productivity, benefits and creative potential. Sectors such as biomedicine, material sciences and energy will be transformed, unlocking huge latent potential.

“The issue for corporations and governments will be how to manage the asymmetry of the transition. During previous industrial revolutions, although more jobs were eventually created than destroyed and economies expanded, the transition took decades, during which a generation of workers fell out of the economy, with ensuing social tensions.

“If you were a Luddite breaking steam-powered looms in early-19th-century England to protest industrialization, being told that there would be more jobs in 20 years’ time for the next generation did not help you feed your family in the here and now. The introduction of AI is likely to cause similar inequities and will increase social tensions if not managed proactively and systemically. This is particularly so because of the likely vast gulf in experience of the effects of AI between the winners and losers of its industrial and societal transformation.

“In a parallel change at a more fundamental level, AI will upend the Enlightenment consensus and trust in the integrity of the human-expert-led knowledge production process and fatally undermine the authority of experts of any kind, whether scientists, lawyers, analysts, accountants or government officials.

“As the majority of information humans consume on a daily basis becomes at least augmented by if not completely created by AI, the prevailing assumption will be that everything could be fake, everything is subjective. This will undermine the belief in the possibility or even desirability of ‘objective’ truth and the value of its pursuit. The only yardstick to judge any given piece of information in this world will be how useful it proves in that moment to help an individual achieve their goal.

“AI will lead society 350 years back into an age of correlative, rather than causal, thinking. Data patterns and the ability to usefully exploit them will be prioritised over the need to fully understand them and what caused them. These two parallel processes, on the one hand the social tensions caused by losses of jobs and identity for some while others prosper, and on the other the reversal of Enlightenment ways of thinking and the new dominance of utility over truth, may feed off each other, generating waves of misinformation and disinformation that will risk an acute crisis of governance in our societies, just as the promised fruits of AI in terms of new drugs, new energy and new materials are tantalisingly within reach.

“Resolving such a crisis may need a new, post-Enlightenment accommodation that accepts that human beings are far less ‘individual’ than we like to imagine, that we were enmeshed as inter-dependent nodes in (mis)information systems long before the Internet was invented, that we are less thinking entities than acting and reacting ones, that knowledge has never been as objective as it seemed and it never will seem like that again, and that maybe all we have are patterns that we need to navigate together to reach our goals.”
This essay was written in January 2025 in reply to the question: “Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’?” This and nearly 200 additional essay responses are included in the 2025 report “Being Human in 2035.”