Maria Randazzo
Maria S. Randazzo is a research professor in the School of Law at Australia’s Charles Darwin University and author of “AI is Not Intelligent At All: Why Our Dignity is at Risk.” This essay is her written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“As AI systems become embedded in governance, markets, education, healthcare and everyday decision-making, human adaptation will unfold across interconnected dimensions – cognitive, institutional, professional, normative/legal and cultural, among others. Cognitively, individuals will increasingly delegate decision-support tasks to algorithmic systems – from navigation and diagnostics to legal and financial assessment. This will intensify reliance on probabilistic reasoning and heighten expectations for ‘data-backed’ justification. At the same time, new literacies will emerge: the ability to interpret algorithmic outputs, evaluate uncertainty scores and understand bias and model limitations. Knowledge will shift from possessing facts to interrogating systems.

“Institutionally, authority will be reconfigured. As AI influences hiring, policing, credit allocation, welfare distribution and judicial reasoning, institutions must renegotiate accountability, contestability and the meaning of valid justification. Regulatory frameworks for algorithmic accountability, rights to explanation and appeal and hybrid human-machine oversight models will likely expand. The central adaptation here concerns the redistribution and formalisation of authority.

“Professionally, transformation is probable. Doctors, lawyers and teachers may rely on predictive or diagnostic systems, yet retain interpretive, ethical and relational authority. Routine analytic tasks will increasingly be automated. As contextual reasoning, moral discernment and relational intelligence become more central, the professional shift will be from execution to supervision, integration and normative judgment.

“More profoundly, societies will confront normative/legal recalibration. As algorithmic nudging and predictive modelling shape choices, individuals may experience diffusion of responsibility or diminished agency – ‘the system decided.’ Alternatively, demands for stronger human override mechanisms may intensify. Whether AI systems are treated as tools, advisors or quasi-authoritative actors will shape how responsibility and autonomy are understood. Preserving meaningful space for human contestation and refusal will be decisive.

“Adaptation, however, will not be neutral. It will vary across socio-economic contexts. Highly resourced actors will likely adapt more rapidly, while marginalised communities may encounter intensified surveillance and automation without equivalent control. Without deliberate governance, power asymmetries may widen. The central issue, then, is not whether humans will adapt – they always do – but how. Adaptation may take the form of passive accommodation to automated authority, or active shaping of AI within normative/legal frameworks. If human contestability, accountability and institutional responsibility are preserved, AI may augment human capacity without undermining autonomy. If not, adaptation may harden into the normalisation of algorithmic governance.

“Resilience has traditionally meant endurance: the ability of individuals or institutions to withstand disruption and restore balance. In political theory, it evokes civic strength; in psychology, adaptive response; in governance, recovery after crisis. Yet as AI systems become infrastructural – determining access to credit, employment, welfare, healthcare, education and security – these conceptions of resilience must be rethought.

“In algorithmically mediated environments, the challenge is not merely to survive epochal change but to preserve human dignity and agency within the systems that increasingly create the conditions of choice. Resilience shifts from simply enduring to sustaining autonomy under technological mediation.

“Within algorithmic systems, decisions are guided by optimisation rules built into technological infrastructures rather than by principles individuals consciously choose for themselves. Resilience, in this context, implies the capacity to interrogate system outputs and retain deliberative judgment within probabilistic frameworks.

“Floridi’s informational ontology adds a further dimension: In a datafied world, persons exist not only as embodied agents but as informational entities whose digital profiles circulate within institutional decision-making. These predictive doubles may shape opportunities before action occurs. Resilience therefore includes safeguarding informational integrity – ensuring that data representations remain contestable and subordinate to the individuals they purport to represent.

“Taken together, these perspectives suggest that resilience in the age of AI depends mainly on institutional design: transparency, rights of explanation, avenues of contestation and meaningful human oversight. Resilience, then, can be conceptualised as the preservation of human dignity, autonomy and reflexivity under conditions of algorithmic governance.”
