
“In the next decade, the techno-social system in which AI is emerging will not remain more or less as it stands now. We are in the midst of an enormous turning point, moving from an old system of mass production based on cheap energy and industrial logic to a new system based on cheap intelligence and digital logic. We stand between the two. One of the greatest overall societal impacts will be a massive restructuring of work that will certainly disrupt human employment. Resilience stems from many sources, not all of which relate directly to AI; most important are the choices society’s leaders make. A large and important segment of those choices is whether corporations will be permitted to use AI to enrich themselves at the cost of ordinary people. If they are allowed to do this, trust is likely to break down and there could be significant displacement of human workers. The old world order once provided job security, unemployment insurance, backing for mortgages and government funding of research that consciously broadened prosperity. Now, everything will be renegotiated.
“The risks here are not marginal. When productivity gains from AI accrue primarily to capital rather than labor, we risk repeating – and amplifying – the dislocations of earlier industrial transitions, but at a far faster pace and with far less warning. Mass displacement of workers across both blue-collar and white-collar roles, without adequate social investment in retraining, income support or alternative opportunity, would destabilize the social fabric in ways that dwarf anything we have seen from prior waves of automation.
“Most of the ingredients that comprise the taken-for-granted ways in which companies operate stem from an era when most of the assets on the books were tangible and companies needed structures that accommodated mass-market operations. Today, the bulk of assets are intangible, thus many other forms could be viable – such as the LLC (limited liability company), in which owners are generally not held personally responsible for debts, lawsuits or bankruptcy, which is subject to few requirements by law and which benefits from pass-through taxation.
“The very structure of employment – work so many hours a day for so much pay – is being rethought. What do billable hours mean, for instance, when AI can provide astute analysis and research in a flash at essentially zero cost? The assumption that expertise in knowledge work depends on a human workforce – and the expectation that professionals can charge a premium for it – will be revisited. Value is going to flow to where scarcity still exists, and society is only beginning to figure this out.
“At the same time, AI gives corporations unprecedented tools to identify and exploit individual vulnerabilities – pricing goods and services based on inferred desperation, targeting political messaging based on psychological profiles and allocating credit and opportunity in ways that deepen rather than reduce existing inequalities.
“No amount of individual resilience compensates for a system structurally tilted against ordinary people.
“A lot of what happens will come down to policy and regulatory choices, made largely by governments, regarding how these technologies are allowed to impinge on our lives. The central question is not whether AI will change everything – it will – but whether those changes will be shaped to broadly distribute the gains or to concentrate them. That is ultimately a political choice, not a technological or an individual one.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”