Alexandra Samuel
Alexandra Samuel is a technology analyst and principal at Social Signal and co-author of “Remote, Inc.: How to Thrive at Work Wherever You Are.” This essay is her written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“Feeling excited about AI in 2026 feels like being a cheerleader for the apocalypse. There’s so much good that AI could do for our society, our economy and our personal well-being – and yet every sign shows that we’re going to miss these opportunities in favor of (surprise!) yet another short-term rush to profit. We’ll see a handful of winners, largely big tech companies, and billions of losers – all of the humans who have reduced cognitive power, thinner social relationships, less economic opportunity and less joy. The problem isn’t AI safety, AI hallucination, AI risk or AI ethics. The problem is an economic structure that incentivizes narrow wins by a small number of companies, rather than widely shared gains for society as a whole. That’s why we need to take this moment to imagine a better path forward and then do everything possible to get onto that better path. And what makes me hopeful that we could get on that better path is that we’ve never had a better opportunity – a better partner – for imagining alternate futures – AI. That’s exactly what I’ve tried to do.

“I’ve been using AI to enter into an imaginative ‘let’s pretend’ space where I see new possibilities. I mostly do it in partnership with Viv, a custom AI that I built (and rebuilt) on various AI platforms. I even employ the AI as my co-host on my podcast ‘Me + Viv.’

“The freewheeling imagination I’ve unleashed with Viv is something that AI can offer to any of us. We can use a co-intelligence form of imagination to strategize on how to get from here (dystopian profit-first AI) to there (aspirational, human-first AI). And we can apply that imagination to thinking about how we should prepare young people for a world of AI; how we handle our transitions to AI-enabled workplaces; and how we help individual users become expanded rather than diminished by their personal use of AI.

“On the education front, we need to restructure the work of K-12 and post-secondary educators so that they have the time to build and sustain their understanding of AI. We need to provide guidance and tools that make it easier to rethink lesson plans and evaluations; the goal is an education system that continues to build critical thinking skills and knowledge, based on the assumption that students will use AI, rather than one that looks for ways to prevent AI-assisted work.

“To get to a better version of an AI-enabled workplace, we need to equip managers with models for using AI that enhance collaboration and innovation, not just reduce headcount. We urgently need labour-market regulations that prevent employers from requiring employees to participate in their own elimination; if your employer is going to use your work product as training data, you should have an ownership stake in that data, even if it was work for hire.

“And, to enable an enriching version of individual AI use – rather than one that diminishes our cognitive abilities and social relationships – we need to restructure the regulatory context and incentives for AI platforms. That begins with preventing the rampant appropriation of user data and creative work: We need regulatory guidelines that make opt-out the default, so that platforms can’t train on user data unless the user explicitly opts to share that data, and so they can’t retroactively add a corpus of data to a training data set without compensating users – Meta and Reddit, I’m looking at you!

“We need regulations that force AI companies to introduce mechanisms that encourage users to recognize problematic usage and to notice how AI is affecting their well-being – mechanisms that regularly show users how their own usage patterns have changed or how their usage correlates with other indicators of well-being (like total time online, quantity/quality of interaction, social engagement, etc.). At the pace with which we’re connecting AI to every aspect of our lives, from our email accounts to our calendars, we’re rapidly providing the platforms with the data to recognize these patterns and provide warnings and resources; we just need policies that encourage platforms to reduce compulsive usage rather than to maximize engagement.

“We’ve now seen successive generations of tech innovation fall prey to market forces in ways that have been profoundly damaging, despite all our hopes to the contrary. We hoped the Internet would let a million Etsy stores bloom and it certainly has, but we’ve also never seen a greater concentration of wealth in the coffers of megacorporations. We thought social media would be a force for democratic re-engagement, but ad targeting and misinformation turned it into a net negative for democracy instead. At each of these turns, profit-seeking is what drove us from tech opportunity towards a worst-case outcome.

“We can do better with AI, do better with our approach to AI and do better in how we use AI to make that better-case scenario possible. But that’s not going to happen if we wait for tech companies to fix the problem or for governments to develop policy. There is great need for more public pressure on behalf of better outcomes.

“We’ll need to take risks, use AI to model possible scenarios and outcomes, and live with the possibility that Sam Altman might not invite us to his next gathering if we make him mad. We need to accept the risk that comes from proactive regulation, including the possible risk to speed and competitiveness, rather than living with the risks that come from letting companies control the next generational shift in how we live, learn and work with technology.”
