Marc Rotenberg
Marc Rotenberg is director of the Center for AI and Digital Policy. This essay is his written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“Artificial intelligence systems are already embedded in decisions that affect access to employment, credit, housing, public benefits, education and political participation. As these systems become more capable and more widely deployed, the central issue is not whether societies will use AI, but whether they can do so while preserving accountability, human agency and democratic governance.

“Building resilience in the digital future requires more than adaptation. It requires clear limits, effective and enforceable governance frameworks and meaningful avenues for contesting automated decisions.

“Much of the recent public discussion of AI governance has focused on principles and best practices. These efforts are necessary, but insufficient.

“Experience in data protection and consumer protection shows that resilience depends on enforceable rules and institutional capacity, not voluntary commitments. The work of the Center for AI and Digital Policy (CAIDP), including the ‘Universal Guidelines for AI’ and the ‘AI and Democratic Values Index,’ has consistently argued that AI governance must be grounded in law, supervision and remedies. Where these elements are missing, technical advances tend to outpace public safeguards.
“One of the most important and underdeveloped aspects of AI governance is the need for clear red lines. Not all AI applications should be permitted, even with safeguards. Certain uses pose risks that are incompatible with fundamental rights or democratic norms. Systems that enable mass biometric surveillance in public spaces, social scoring by governments or private actors, or fully automated decisions in areas requiring human judgment and due process raise concerns that cannot be addressed through transparency alone.

“Prohibitions are not a sign of technological pessimism; they are a recognition that some harms are systemic and irreversible once entrenched. They are a necessary component of responsible AI governance, particularly where power asymmetries are extreme and affected individuals lack realistic avenues for resistance.

“Without clear limits, societies risk normalizing practices that undermine equality before the law, freedom of expression and personal autonomy. Red lines also serve an important institutional function: they provide clarity to developers, regulators and the public about what is unacceptable, reducing uncertainty and regulatory arbitrage.

“Equally important is the effective implementation and enforcement of AI governance frameworks that already exist. Many governments have adopted national AI strategies, ethical guidelines or risk-based regulatory approaches. However, our comparative research shows that these frameworks often emphasize innovation and economic growth while underinvesting in oversight, enforcement and remedies. Regulatory gaps are particularly evident where supervisory authorities are under-resourced, audit powers are limited and sanctions for noncompliance are weak.
“Resilience depends on closing this implementation gap. Laws and standards must be operationalized through impact assessments, documentation requirements, independent audits and ongoing monitoring. Enforcement authorities need technical expertise and legal authority to intervene before harms become widespread. Without credible enforcement, governance frameworks risk becoming symbolic rather than protective.

“Another critical requirement for resilience is contestability. Much attention has been given to explainability – the idea that AI systems should provide understandable accounts of how decisions are made. While explainability is valuable, it is not sufficient. An explanation that cannot be challenged does little to protect individual rights. Contestability goes further. It requires that individuals have the ability to question, correct and seek redress for automated decisions that affect them.

“Contestability has both procedural and substantive dimensions. Procedurally, individuals must be informed when automated systems are used, have access to relevant information and be able to engage a human decision-maker. Substantively, there must be mechanisms to change outcomes, correct errors and impose responsibility when systems cause harm. Without contestability, AI systems tend to shift power away from individuals and toward institutions that control data and algorithms.

“An emphasis on contestability reflects a broader understanding of resilience as an institutional property, not just an individual skill. Individuals cannot realistically bear the burden of identifying bias, error or misuse in complex systems on their own. Effective contestability requires collective mechanisms: courts, regulators, ombudspersons and professional standards that recognize automated decision-making as a site of potential injustice.
“Looking ahead, many vulnerabilities are likely to intensify if red lines, enforcement and contestability are neglected. Automated systems may become default decision-makers, with human review reduced to a formality. Errors and biases may persist because affected individuals lack practical means to challenge them. Public trust may erode as decisions become less intelligible and less accountable. These outcomes are not inevitable, but they are predictable in the absence of deliberate governance choices.

“Strengthening resilience, therefore, requires action on multiple fronts. Policymakers must be willing to prohibit certain AI applications outright where risks cannot be mitigated. Governments must invest in the institutions responsible for enforcing AI laws and standards. Designers and deployers must be held legally accountable for system impacts, not just technical performance. And individuals must be guaranteed meaningful rights to contest automated decisions, not merely to receive explanations after the fact.

“AI will continue to shape decisions, work and daily life. The challenge is to ensure that these systems operate within boundaries defined by democratic values and human rights. Resilience is built through limits as well as capabilities, through enforcement as well as innovation and through contestability rather than passive transparency. The digital future will be shaped not only by what AI can do, but by what societies decide it should not do and by how seriously they enforce those decisions.”