Maha Jouini
Maha Jouini is a digital communication officer at the African Union Development Agency and a research fellow at the Global Center on AI Governance. This essay is her written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“We must recognize that resilience is not purely individual – it is collective. Communities, policymakers, technologists and researchers must collaborate to ensure that AI systems are designed with human dignity at their center. As a cancer survivor, I understand resilience in the age of artificial intelligence not as an abstract concept but as a lived reality. AI systems increasingly shape decisions about health, employment, finance and access to care. Yet the experiences of many women – particularly those living with chronic illness – remain poorly represented in the datasets that inform these systems. This absence creates a ‘silent digital condition.’ Invisible data = invisible women. When vulnerability is translated into algorithmic categories, human complexity can be reduced to simplified risk signals. A cancer diagnosis can become a data marker interpreted by systems evaluating employability, insurance eligibility or productivity.

“What was once a deeply personal struggle for survival becomes an automated classification. In this process, AI can unintentionally transform vulnerability into exclusion. Consider this example: A woman who survived cancer applies for a job. An AI screening system flags her employment gap as a productivity risk. She never gets an interview. No human ever reviewed her file.

“For women navigating illness, work and social expectations simultaneously, resilience therefore requires more than adapting to technological change. It requires maintaining dignity within systems that increasingly evaluate human lives through data.

“This challenge is particularly visible in the Global South. Artificial intelligence technologies are largely developed within Western technological ecosystems shaped by values such as efficiency, optimization and market performance. While these frameworks have produced remarkable innovation, they often neglect relational and communal understandings of human well-being that exist in many non-Western societies.

“From a decolonial perspective, the question is not simply, ‘How will people adapt to AI?’ It is, ‘Are the appropriate forms of knowledge and the right ethical frameworks guiding the design of these systems?’ If AI continues to be built primarily on epistemologies rooted in individualism and economic optimization, it risks reproducing historical patterns of exclusion in new digital forms.

“African philosophical traditions offer an alternative ethical orientation. Ubuntu, often summarized by the expression ‘I am because we are,’ frames intelligence and human flourishing as relational rather than purely individual. It emphasizes care, community and mutual responsibility. Within such a worldview, technological systems should strengthen social bonds rather than fragment them.

“Similarly, the Islamic ethical tradition of Hikma – wisdom – reminds us that knowledge and power must be guided by moral reflection. Historically, Hikma integrated reason, ethics and spirituality in the pursuit of justice and human flourishing. In the context of AI governance, this perspective encourages us to ask not only whether a system works efficiently but also whether it serves the dignity of human beings.

“This concern for justice is not new to Arab intellectual tradition. The fourteenth-century Arab philosopher Ibn Khaldun argued that justice is the foundation of collective life and that injustice ultimately leads to disorder and decline. His insight carries striking relevance today. When AI systems encode bias, exclude the vulnerable or concentrate power without accountability, they do not merely produce technical errors; they erode the social fabric. Ibn Khaldun would recognize in algorithmic injustice the same corrosive force that, left unchecked, weakens civilizations from within. To build AI responsibly is, in this sense, an act of civilizational stewardship.

“These philosophical traditions suggest that resilience in an AI-saturated world must include ethical and cultural capacities alongside technical literacy. People will inevitably adapt to AI systems – using them for healthcare advice, learning, work and decision-making. Yet adaptation without ethical reflection risks creating societies in which algorithms silently structure opportunity and exclusion.

“To cultivate meaningful resilience, societies must develop several capacities:

  • “Communities, policymakers, technologists and researchers must collaborate to ensure transparency and accountability in AI systems that shape human flourishing – health, employment, governance and so on.
  • “Individuals must develop critical awareness of how data and algorithms influence decisions affecting their lives.
  • “Educational systems must integrate ethical reflection, philosophy and cultural perspectives into technological learning.

“This imperative becomes especially urgent as we enter the era of agentic AI – systems capable of autonomous reasoning, planning and action across complex environments. In a world increasingly fascinated by the power of machines, I insist on one foundational principle: the human being must remain at the center of both design and decision. Intelligence without wisdom is incomplete. A system may optimize, predict and act – but without moral grounding, without cultural memory and without accountability to those it affects, it remains a powerful tool in search of a conscience.

“As a cancer survivor, I know that vulnerability can reveal both fragility and strength. In the age of artificial intelligence, our resilience will depend on our ability to transform technological power into ethical responsibility. Technology alone cannot guarantee justice. For AI to truly serve humanity, it must be guided by wisdom.”


This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”