Henry Brady
Henry Brady is dean of the School of Public Policy at the University of California-Berkeley and past president of the American Political Science Association. This essay is his written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“There could be an increasing division between the set of people who learn to master AI, use it effectively and efficiently, and profit from its deployment, and another – probably larger – group of people who are flummoxed by it and who retreat into longing for the past, into cults and magical thinking, and (perhaps as the best outcome) stronger attachment to organized religion. AI will raise fundamental questions about the nature of human beings. AI has been called a ‘stochastic parrot’ to differentiate it from human beings, but what if human beings come to believe that they are nothing more than stochastic parrots? Am I just guessing the next word that I will write on this page? What shapes my guesses? How am I to understand that shaping? What differentiates me, if anything, from AI?

“To be clear, I think that I am more than what AI does, but it is easy to fall into the trap of thinking that AI defines an essential characteristic of being human. The problem parallels the way many people allow social media to define who they are. Consequently, we need stronger antidotes to the ability of AI to define the nature of personhood.

“Human eras have been defined by various metaphors, such as using Newtonian physics to define the nature of people or Darwinian biology to define the nature of society. AI may be one of those inventions that defines – even more than the digital computer has defined – the nature of human beings. As a result, people will face the task of defining themselves in relation to that metaphor.

“Religion (and cults and magic) could play a major role here. It could provide meaning that would help people comprehend, locate and tame AI, or it could provide an off-ramp that substitutes for logical thinking.

“It will be interesting to see how the major religions deal with AI. Pope Leo XIV has already warned about AI; will he write a defining encyclical about it? In 2023, the Southern Baptist Convention passed a resolution saying that human beings are created in the image of God and that technology should not supplant this, but what, concretely, does that mean? What vision will they provide for their members?

“To take another set of institutions: How will K-12 education and colleges and universities act to provide people with the tools they need to use AI effectively? AI is emerging at a time when the humanities are under siege because they can’t be monetized. Yet this may be a time when truly vibrant humanities courses are of the greatest importance. But will the humanities be up to this task given their backward-looking orientation? Will enough humanists ‘catch up’ with AI so that they can deal with it in their courses?

“My greatest fear is that just as with the Internet and social media, we will allow ‘Big Tech’ to define AI in terms of the profit it can produce. We will not invest in making society ready for AI through our educational system and our governmental structures. The issue here goes far beyond regulating, for example, ‘deepfakes’ or ‘disinformation.’ It goes to the heart of reorienting society to the changes in lives, the redesign and loss of jobs, and perhaps the loss of meaning that will come from AI.

“It is not clear to me that most institutions have the capacity to develop a blueprint for ensuring resiliency. Most governmental institutions (most especially the Congress and the Courts) do not have the capacity to come to grips with AI. Perhaps the best-equipped institutions are colleges and universities that have experts on AI. But I worry that universities will not move fast enough. As I have worked at my own institution (a university) to think about equipping students to wrestle with AI, I have become aware that doing this will be a very big job that will affect all aspects of what we do.

“In summary, AI poses an enormous challenge for which we are not ready. And I worry that many people will not have the support structures to endure that challenge. Those who go to (some) colleges might have such structures that will allow them to rationally, soberly and sensibly deal with AI and to benefit from it. The remainder of the public is likely to have inadequate support to make sense of it all and they could be greatly harmed by AI. Consequently, on top of growing wealth and income inequality that has been caused by technological change, there will be a cognitive and emotional gap that will disadvantage those who have already been relegated to lower incomes.

“So, what makes us human and differentiates us from AI as it is presently constituted? I believe that one major difference is that our minds and bodies are so closely intertwined – an intertwining that gave rise to the millennia-old debates over the relationship between body and soul and the problem of ‘mind-body’ duality that have occupied most of the world’s religious thinkers and philosophers.

“Buddhism argues that there is no fixed soul, just a continuous flow of changing consciousness. Christian religions have favored a mind-body duality – so that St. Augustine renounced the flesh in favor of the soul – and Descartes found personhood in the mind by saying ‘I think, therefore I am.’ Modern brain science is still struggling with these issues. Because our minds and bodies are intertwined, we are more than either one alone. I also believe that the human executive functioning that links our brains and our bodies leads to a sense of personhood that is fundamental to what it means to be human. It is here that thinkers such as Shakespeare, Jane Austen, Dickens, Fyodor Dostoevsky, Virginia Woolf and Ernest Hemingway excel, because they consider the whole human being with its passions and interests.

“But fundamentally, artificial intelligence is disembodied mind (the silicon substrate notwithstanding) without even much in the way of executive function. As we build robots with sensors, executive programs to interact with others, and AI, they will begin to look and feel more like humans.

“Just as there is a large literature on whether animals should be given equal consideration to humans, we will have to consider whether robots with bodies, minds and executive functioning deserve equal consideration. Consequently, AI is just the beginning of questioning that will engage us for the following decades and perhaps centuries as we proceed with our technological engineering feats. Our society should be doing a better job of preparing everyone for that.”

