
“AI systems will play a much more significant role in shaping our decisions, work and daily lives, but the transformation will be profoundly unequal. This inequality operates within societies as much as between them. How people embrace, resist and struggle with these changes will vary enormously depending on whether they are choosing to deploy AI or having it deployed upon them.

“Yet the binary of ‘embrace versus resistance’ misses what is actually happening. Most communities are doing neither. They are selectively integrating AI through existing social structures, adapting technologies to local purposes and negotiating terms of engagement where they have the power to do so. In my fieldwork across Kenya, Malawi and the Philippines I have witnessed: traditional authorities establishing protocols for voice data collection; women’s health committees determining which community members can access system outputs; and village courts adjudicating disputes about technology use. This isn’t resistance. It’s appropriation on community terms. And appropriation requires having terms to negotiate from. Not everyone does.
“The struggle will be sharpest for those who encounter AI as subjects rather than users; people whose creditworthiness is scored by algorithms they never consented to, whose asylum claims are assessed by systems trained on data from contexts nothing like their own, whose labor (annotating data, moderating content, extracting minerals) powers AI systems they will never benefit from. For them, the question isn’t how to embrace or resist but how to gain any meaningful voice at all.
The capacities we must cultivate
“Cognitively, people need to develop what researchers call ‘metacognitive AI literacy.’ This means more than knowing how to use AI tools; it is the ability to weigh what such use means for one’s own thinking and to judge when AI can be trusted to support, rather than substitute for, one’s own reasoning. As AI takes on more cognitive tasks, the temptation to offload thinking grows. Maintaining the capacity for independent reasoning, for choosing the harder path when it matters, becomes a discipline.
“Emotionally, we have to develop a higher tolerance for uncertainty and ambiguity. Our shared sense of what is real is already shifting. Deepfakes dissolve common ground. Algorithmic curation fragments information environments. Living well with AI means accepting that verification is harder, that manipulation is more sophisticated and that some questions won’t resolve cleanly.
“Socially, the most important capacity may be collective governance. My research suggests resilience comes less from individual digital literacy than from communities exercising agency together through adapted existing structures. The capacity to deliberate, to set boundaries, to hold institutions accountable: these are social muscles, not individual skills.
“Ethically, we need frameworks for thinking about consent under conditions of asymmetric power. In crisis contexts, I’ve observed how ‘meaningful consent’ collapses when people desperately need services. As AI-mediated services become essential infrastructure, this pattern will spread. We need ethical vocabularies for what consent means when opting out isn’t realistic.
Practices and resources for resilience
“We need to develop AI governance frameworks that work within existing social structures rather than importing external models, for example by ensuring multilingual AI resources in diverse communities so that intelligence expressed in Chichewa or Tagalog is as legible to AI systems as intelligence expressed in English. Local universities and community organizations can help build capacity that doesn’t depend on external experts. It is vital to develop labor protections for the data workers who remain invisible in the AI story. And we must ensure that the public is served by a media literacy and fact-checking infrastructure that protects some shared epistemic ground.
“What must happen now? We should be establishing data rights before widespread AI deployment, not after all of the data extraction has occurred. Democratic deliberation should be protected from synthetic media and algorithmic fragmentation. More diverse voices should be involved in the design, building and governance of AI. And the ‘invisible labor’ behind AI should be made visible – the conditions of data annotators, content moderators and mineral extractors are governance questions.
New vulnerabilities and coping strategies
“We can prepare for the future by thinking through what we already know of digital life.
- Expect algorithmic harm without algorithmic benefit: being subject to AI decisions even if you are not an AI user.
- Expect expertise concentration that leaves most communities unable to evaluate the systems affecting them.
- Expect coerced consent to become normalized.
- Expect AI tools to enable surveillance and manipulation by authoritarian actors.
“Coping will require plural strategies: regulatory frameworks in some jurisdictions and community data governance in others; labor organizing among data workers; indigenous data sovereignty movements asserting control over knowledge systems. There is no single model, only the insistence that those affected must have voice in shaping their technological futures. The diversity of approaches is itself a form of resilience against any one model’s failure.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”