Maggie Jackson
Maggie Jackson is an award-winning journalist and author who explores the impact of technology on humanity. She is the author of “Distracted: Reclaiming Our Focus in a World of Lost Attention.” This essay is her written response in January 2025 to the question, “How might expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’?” It was published in the 2025 research study “Being Human in 2035.”

“Human achievements depend on cognitive capabilities that are threatened by humanity’s rising dependence on technology, and more recently, AI. Studies show that active curiosity is born of a capacity to tolerate the stress of the unknown, i.e., to ask difficult, discomfiting, potentially dissenting questions. Innovations and scientific discoveries emerge from knowledge-seeking that is brimming with dead ends, detours and missteps. Complex problem-solving is little correlated with intelligence; instead, it’s the product of slow-wrought, constructed thinking.

“But today, our expanding reliance on technology and AI increasingly narrows our cognitive experience, undermining many of the skills that make us human and that help us progress. With AI set to exacerbate the negative impact of digital technologies, we should be concerned that the more we look to synthetic intelligences for answers, the more we risk diminishing our human capacities for in-depth problem-solving and cutting-edge invention.

“For example, online users already tend to take the first result offered by search engines. Now the ‘AI Overview’ is leading to declining click-through rates, indicating that people are taking even less time to evaluate online results. Grabbing the first answer online syncs with our innate heuristic, quick minds, the kind of honed knowledge that is useful in predictable environments. (When a doctor hears of chest pains, they automatically think ‘heart attack.’)

“In new, unexpected situations, the speed and authoritative look of AI-driven results may undermine our inclination to slow down, attune to a situation and discern. Classic automation bias, or deference to the machine, may burgeon as people meld mentally with AI-driven ways of knowing.

“As well, working with AI may exacerbate a dangerous cognitive focus on outcome as a measure of success. Classical, rational intelligence is defined as achieving one’s goals. That makes evolutionary sense. But this vision of smarts has helped lead to a cultural fixation with ROI, quantification, ends-above-means and speed and a denigration of illuminating yet less linear ways of thinking, such as pausing or even failure.

“From the outset, AI’s founders have adopted this rationalist definition of intelligence as their own, designing AI to make its actions servant to its aims with as little human interference as possible. Along with creating an increasing disconnect between autonomous systems and human needs, such objective-achieving machines model thinking that prioritizes snap judgments and single perspectives. In an era of rising volatility and unknowns, the value system underlying traditional AI is, in effect, outdated.

“The answer for both humans and AI is to recognize the long-overlooked value of skillful unsureness. I’m closely watching a new push by some of AI’s top minds (including Stuart Russell) to make AI unsure in its aims and so more transparent, honest and interruptible.

“As well, multi-disciplinary researchers are re-envisioning search as a process of discernment and learning, not an instant dispensing of machine-produced answers. And the new science of uncertainty is beginning to reveal how skillful unsureness bolsters learning, creativity, adaptability and curiosity.

“If we continue adopting technologies largely unthinkingly, as we have in the past, we risk denigrating some of humanity’s most essential cognitive capacities. I am hopeful that the makings of a seismic shift in humanity’s approach to not-knowing are emerging, offering the possibility of partnering with AI in ways that do not narrow human cognition.”


This and nearly 200 additional essay responses are included in the 2025 report “Being Human in 2035.”