
“In my view, society is moving into a world that lacks checks and balances, in which commerce provides the infrastructure for our private and public lives and in which trust, remedy and human rights are all hugely at risk. Today, AI systems depend 100% on human agency to determine their use. I see three main human drivers for the adoption of AI systems. The first is commercial. As we already see, AI companies are hyping the potential opportunities of AI systems at the same time as they are embedding AI (as a non-optional feature) in the digital services that society already relies on. This ranges from search engines to Excel spreadsheets to social media to professional and bespoke systems used in a host of workplaces. In other words, driven by the search for profit, AI companies promote the benefits (without much independent evidence to support their claims) while making it unavoidable that everyone uses their services.
“The second is institutional. Public and civic institutions are under enormous pressure to deliver ever more, with ever less funding to pay for it. This includes educational, health, transport, governmental and many other institutions. So these organizations accept the promise of AI with insufficient attention to due diligence, conflicts of interest, procurement rules, technical standards, legal compliance or even liability. If businesses are making AI unavoidable for ordinary people, so, too, are our once-trusted public institutions.
“Third, the public is curious and a bit charmed by the cleverness of AI. So they, too, drive adoption.
“Surviving in such a setting requires difficult, broad change in commercial, public and civic institutions and in the public’s understanding of the risks we see deepening in the infrastructure of society.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”