
“Responses to a larger role played by more-advanced AI in human activity will be shaped by cultures, attention and abilities. In an individualistic society like the one seen today in digitally connected spaces, stratification will increase with AI power. Perceptions of unfairness are disruptive and discourage appropriate adaptation. How people respond often depends on whether they perceive AI systems to be fair and/or on how beneficially the systems fit their needs and beliefs. AI is biased by its model design and training data and, as such, ‘fair treatment’ is in the eye of the beholder and can vary from human to human. Because of this, the choices humans make based on the outputs they receive from AI systems can be sources of conflict. One of our most human attributes is our desire to be treated fairly. We inherently dislike biases, and cognitive dissonance makes us uncomfortable.
“In addition, humans are comforted by being on the bandwagon, finding agreeable groupthink and just plain ‘belonging.’ Those who find a like-minded group are likely to adopt its own set of biases: confirmation bias, anchoring bias and availability bias (the habit of taking mental shortcuts that estimate probabilities based only on how easily they come to mind). Unfortunately, monetized algorithms exacerbate this human tendency.
“Because the general population has so little grasp of information technology, the self-declared progress of the developers of AI systems is shaping the overarching political economy, deepening the interdependencies of government and economic frameworks.
“If we continue on the current trajectory, future generations may accept displacement by AI as their lot in life. Because human attention is limited and constrained, and because people tend to take shortcuts that serve their immediate needs, most of the population will respond with a despondent shrug.
“Over the past two decades, businesses have been using automated online systems that analyze word frequencies to rank job applicants. To think of this software as equivalent to a toddler’s shape-sorter toy would not be far off. Executives and human resources departments adopted it to handle the process more easily. In the past few years we discovered that the software was systemically embedding bias toward linear careers. The systems did not do as promised – they did not optimize hiring on an individual basis. Yes, they cut process costs, but they favored the job applicants who most resembled the hiring managers themselves. No human ever questioned how the software accomplished this work – that would be tantamount to second-guessing the technology: progress, a ‘problem solver.’
“Responses to AI are mostly following a path similar to our adoption of mobile phones: passive acceptance. The betterment of human communications allowed by the cell phone was followed by a more-advanced networked technology, the smartphone, which enabled social networks to proliferate, misinformation to go viral, and the emergence of FOMO (fear of missing out) and an influencer economy. Thanks to smartphones, map-reading skills and more are obsolete. The ensuing splintering of human discourse ushered in the post-fact era, brain rot and AI-generated slop. Most of us have adapted to it – but at an inestimable social cost.
“Whether decisions are made by the AI itself or by governments, employers or social influencers who adopt AI, a majority of people will not be attentive enough to differentiate the factors that should be considered as this next ‘more-advanced’ technology takes its place. In particular, as one example, the AI-advantaged populations will abdicate decisions to AI, rationalizing it as ‘fairness.’ Humans fail to understand that not all technological change is progress.
“As we evolve with these systems, how might the essence and elements of human resilience change? What it means to be human will not be changed by AI; therefore, the ‘essence’ will not evolve. We will remain social animals. Specific cultures will evolve their values in response to their respective AI-adapted political-economic frameworks. Societies with pervasively embedded AI are going to fundamentally change the interdependence of government and business to advantage the controllers of AI. Accordingly, the values that drive cultural norms will evolve.
“If the prevalent societal message is that these AI systems are going to replace you, the work that you do or the creativity you bring, then it signals to human beings – social animals – that you do not matter. Collectivistic societies, which characteristically exhibit concern for the good of their group, will be more resilient and, counterintuitively, protective of a variety of human attributes.
“Buy-in to Adam Smith’s ‘invisible hand’ did not inevitably lead to the current form of U.S. capitalism. Controllers of AI, in their myopic quest for efficiency in the guise of fiduciary responsibility, will finally rupture the intended libertarian social contract.
“Pulling oneself up by one’s bootstraps – by education or grit – will cease to be valued. To me, this has been the definition of individual resilience, the survival instinct, ingenuity, the persistent elements of humanity.
“However, if human adaptation to AI results in aggregations of individuals who think alike, then any outliers who display more acute survival instincts may not be tolerated. In individualistic cultures in which the societal power controls AI, evolved values and social norms may further the hazards of groupthink and going along to get along.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”