Michele Visciola
Michele Visciola is president and founding partner of Experientia, a user-experience design and consumer-behavior company based in Turin, Italy. This essay is his written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It is one of 200-plus essay responses published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“A crisis facing human-centered design that I have been exploring in some of my recent work – in which the discipline’s success in removing interaction barriers has paradoxically led to its marginalization – offers a framework for understanding how AI will reshape human decision-making, work and daily life.

“The same dynamics that commodified HCD expertise and embedded it invisibly into automated platforms are now unfolding at unprecedented scale with AI. What we are witnessing is not continuity but acceleration: the prioritization of engagement over agency, the exploitation of cognitive automatisms rather than their correction and the replacement of human capabilities instead of their augmentation.

“As AI systems increasingly shape human experience, a defining question emerges: Will we repeat the trajectory that marginalized HCD, or can we apply its lessons to build genuine resilience? I argue that the five pillars I proposed for sustainable innovation – enhancing agency, addressing cognitive automatisms, correcting automation’s unintended consequences, fostering sustainable change and expanding knowledge and skills – also constitute a roadmap for navigating AI transformation. Together, they aim to protect and develop what my colleagues and I call ‘brain capital’: the cognitive and social capacities that enable individuals and communities to thrive in complex and fragile ecosystems.

Embrace | Resistance | Struggle

“If properly designed, AI adoption might unfold through three intertwined dynamics: embrace, resistance, and struggle. Some individuals and communities will embrace AI as a tool for enhanced agency. We are starting to see this in AI-augmented communities of practice where human expertise remains central, such as healthcare models in which AI supports rather than replaces clinical judgment. Participatory governance initiatives point toward democratic oversight of AI deployment at local and urban levels. Similarly, AI literacy ecosystems – e.g., extending renewable-energy community models – can transform people from passive users into informed stakeholders.

“At the same time, informed resistance might grow. Privacy-conscious communities demand transparency and accountability, echoing earlier movements around food labeling or environmental disclosure. Labor organizations resist AI-driven displacement, not to block innovation but to reorient it toward complementarity. Digital well-being advocates push back against AI-powered addictive and manipulative design, calling for protections of cognitive autonomy in the face of increasingly persuasive systems.

“Between these poles lies struggle: a contested, heterogeneous landscape where unequal access to AI literacy, conflicting incentives and asymmetries of power collide. The traditional designer-user divide becomes an ‘AI developer – affected population’ divide, made more problematic by opaque systems that claim to adapt to human behavior while remaining largely inscrutable. Without deliberate intervention, this struggle risks widening inequalities in brain capital and undermining democratic governance.

Capacities for resilience

“To be meaningful, ‘resilience’ in the AI age cannot be conceived as simply an individual trait; it is a collective achievement, because it depends on cultivating interconnected cognitive, emotional, social and ethical capacities.

“Cognitively, resilience requires moving beyond basic digital literacy toward critical AI consciousness. This includes systems thinking about AI’s ripple effects, metacognitive awareness of when we defer too readily to automated judgments and the ability to recognize bias and manipulation disguised as objectivity. Long-term consequence modeling, grounded in careful experimentation, is essential to counter short-term optimization and to assess impacts on skills, knowledge, social cohesion and sustainability.

“Emotionally, resilience involves tolerating uncertainty in the face of systems that project false certainty; regulating the anxiety and loss associated with AI-driven disruption; and preserving empathy and authenticity in algorithmically mediated environments. This is not about smoothing adoption but about supporting the genuine human experience of transformation.

“Socially, resilience depends on collaborative intelligence and participatory governance. Communities need shared practices for evaluating AI systems, democratic mechanisms for oversight and dialogue across stakeholders who hold unequal power and expertise. Solidarity is crucial, as AI’s costs and benefits are unevenly distributed, and community-specific knowledge must be preserved against homogenization by global models.

“Ethically, resilience requires long-term and systemic thinking. AI systems create path dependencies that affect future generations and impose significant environmental costs. Ethical capacity involves equity awareness, care ethics and respect for value pluralism, resisting AI’s tendency to universalize dominant cultural assumptions.

Practices and resources

“Resilience must be supported through concrete practices at multiple levels.

“At the individual level, intentional AI engagement – questioning recommendations, developing independent sense-making, maintaining manual skills and reflecting on AI’s influence – helps preserve agency. Tools supporting data sovereignty and continuous-learning communities should enable critical engagement rather than passive acceptance.

“At the community level, AI governance communities could mirror renewable-energy communities, combining literacy, evaluation and collective negotiation. Participatory technology assessment, community data trusts, local AI development and solidarity networks for displaced workers all strengthen collective capacity.

“At the institutional level, alternative metrics are needed to evaluate AI not only by efficiency or engagement but by its contribution to brain capital, equity, sustainability and human flourishing. Longer evaluation horizons, independent oversight, participatory design and just-transition frameworks can counter short-term pressures and automation bias.

“At the societal level, regulatory frameworks should emphasize complementarity, transparency and accountability. Public investment in AI literacy, open-source resources, brain-capital infrastructure and international cooperation is essential to prevent the concentration of power and capability.

Taking urgent action

“Action is required now, before AI systems become irreversibly embedded. Success metrics must be redefined to capture long-term human and social value, and participatory AI governance mechanisms should be established immediately in cities, sectors and high-stakes domains.

“Massive investment in brain capital – education, mental health, lifelong learning, and cultural resources – is needed to prevent crisis-driven responses.

“Policies must redirect AI toward augmentation rather than replacement, while transparency, auditing, and contestation rights are made non-negotiable. Finally, broad coalitions linking labor, environmental, digital rights, academia, communities, and responsible businesses are required to sustain this shift.

New vulnerabilities and making the AI transition

“AI introduces new vulnerabilities that amplify earlier HCD failures: cognitive atrophy through over-automation, erosion of agency through persuasive AI, epistemic fragility from opaque decision-making, ecosystem brittleness from narrow optimization, inequality amplification through differential access and crises of meaning as work and identity are displaced. Addressing these vulnerabilities requires intentional skill maintenance, persuasion literacy, collective sense-making, diversity preservation, equity-focused policy and renewed attention to purpose and care.

“In sum, the AI transition will either accelerate the depletion of human agency and brain capital or become an opportunity to regenerate them. The outcome depends less on AI’s technical capabilities than on our collective capacity to govern, design and live with it deliberately.”
