Alexandra Samuel is a data journalist, speaker, author and co-founder and principal at Social Signal. This essay is her written response in January 2025 to the question, “How might expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’?” It was published in the 2025 research study “Being Human in 2035.”

“If humans embrace AI as a source of change and challenge, and we open ourselves to fundamental questions about the nature of thinking and the boundary between human and machine, AI could enable a vast expansion of human capacity and creativity. Right now, that feels unlikely for reasons that are economic, social and political, more than technological.

“If those obstacles are lifted, people with the time, money and tech confidence to explore AI in a non-linear way, rather than for narrowly constructed productivity gains or immediate problem-solving, can achieve great things. Their use of AI will not only accelerate work and open entirely new fields of endeavor, but it will enable ways of thinking, creating and collaborating that we are only beginning to imagine. It could even deepen the qualities of compassion, creativity and connection that sit at the heart of what we consider human.

“Only a small percentage of the 8 billion people on Earth will be co-evolving with AI, extending how they think and create and experience the world in ways we can just begin to see. What this means is that there will be a great bifurcation in human experience and our very notion of humanity, likely even wider than what we’ve experienced over the past 50 years of digital life and 20 years of social media.

“Some of the change will be astonishing and inspiring and beautiful and creative: Artists creating entirely new forms of art, conversations that fluidly weave together ideas and contributions from people who would previously have talked past one another, scientists solving problems they previously couldn’t name. Some of it will be just as staggering but in ways that are deeply troubling: New AI-enabled forms of human commodification, thinkers who merge with AI decision-making to the point of abdicating their personal accountability and people being terrible in ways that we can’t imagine from here.

“However, the way generative AI has entered our workplaces and culture so far makes this hopeful path seem like an edge case. Right now, we’re heading towards a world of AI in which human thinking becomes ever more conventional and complacent. Used straight out of the box, AIs operate in servant mode, providing affirmation and agreement and attempting to solve whatever problem is posed without questioning how that problem has been framed or whether it’s worth solving. They constrain us to context windows that prevent iterative learning, and often provide only limited, technically demanding opportunities to loop from one conversation into the next, which is essential if both we and the AIs are to learn from one another.

“As long as the path of AI is driven primarily by market forces, there is little incentive to challenge users in the uncomfortable ways that drive real growth; indeed, the economic and social impacts of AI are fast creating a world of even greater uncertainty. That uncertainty, and the fear that comes with it, will only inhibit the human ability to take risks or sit with the discomfort of AIs that challenge our assumptions about what is essentially human.

“We can still make a world in which AI calls forth our better natures, but the window is closing fast. It took well over a decade for conversations about the intentional and healthy use of social media to reach more than a small set of Internet users, and by then, a lot of dysfunctional habits and socially counterproductive algorithms were well embedded in our daily lives and in our platforms.

“AI adoption has moved much faster, so we need to move much more quickly towards tools and practices that turn each encounter with AI into a meaningful opportunity for growth, rather than an echo chamber of one. To ensure that AI doesn’t replicate and exacerbate the worst outcomes of social media, tech companies need to create tools that enable cumulative knowledge development at an individual as well as an organizational level and develop models that are more receptive to requests for challenge. Policymakers and employers can create the safety that’s conducive to growth by establishing frameworks for individual control and self-determination when it comes to the digital trail left by our AI interactions, so that employees can engage in self-reflection or true innovation without innovating themselves out of a job.

“Teachers and educational institutions can seize the opportunity to create new models of learning that teach critical thinking not by requiring that students abstain from AI use, but by asking them to use the AI to challenge conventional thinking or rote work. People should invent their own ways of working with AI, embracing it as a way to think more deeply and evolve our humanity, not as a way to abdicate the burden of thinking or feeling.

“I wish I felt more hopeful that businesses, institutions and people would take this approach! Instead, so many of AI’s most thoughtful critics are avoiding the whole mess – quite understandably, because this is an utterly terrifying moment at which the path of AI feels so unpredictable and uncontrollable. It is also a moment when it’s so incredibly interesting to see what’s possible today and what comes next.

“Finding the inner resources to explore the edge of possibility without falling into a chasm of existential terror, well, that’s the real challenge of the moment and it’s one that the AIs can’t yet solve.”


This essay was written in January 2025 in reply to the question: “Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’?” This and nearly 200 additional essay responses are included in the 2025 report “Being Human in 2035.”