“The AI discourse has been too fixated on a possible impending doomsday brought on by an AI that spirals out of control. Meanwhile, the pressing, tangible challenges sitting just at the threshold of today’s AI technologies are straining legacy systems and institutions to their breaking point, exacerbating negative externalities and potentially nurturing the growth of new kinds of digital warlords.

“This is like worrying about an asteroid collision while your house is in the path of an oncoming wildfire. To be clear: AI could absolutely be an enormous boon for humanity. Yet, like that wildfire, if left unattended it could also consume an awful lot that we would prefer not to see burned down. This isn’t fear-mongering; it is reality.

“Right now, companies are racing to outpace each other in the agentic AI space, prodded by investors seeking astronomical returns. (There is evidence to suggest that the early LLMs were originally intended to be introduced as a component in larger AI ‘agent’ software – AI that is given a goal and then works on accomplishing it on its own.) Indeed, artificial agency may ultimately be even more impactful than traditional artificial intelligence.

“After all, artificial agency allows software scaling and intense competition to be applied to a great game of ‘shaping the physical world.’ The challenge, of course, is that the rest of us still have to live in the physical world while this plays out.

“Traditionally, society has created institutions to protect itself from this kind of thing. But regulation lags behind, always a few steps too slow, always playing catch-up. Imagine AI supercharging this disparity. Even now, problems like climate change and unsustainable resource allocation overwhelm the institutional tools we have to address them. Add exponential AI to this mix and we seem to be setting the stage for an AI-enhanced tragedy of the commons in which digital agents, in their quest for optimization, leave ever-growing negative externalities for the rest of us to clean up.

“The biggest threat now may not be sci-fi’s Skynet terminators or the proverbial paperclip maximizers, but tomorrow’s infinitely scalable con artists, sales bots and social media manipulators, all potentially capable of undermining institutional effectiveness and inflicting collateral damage on social cohesion at a scale we’ve never seen before.

“How can our legacy systems be patched quickly enough to handle this? Financial systems, social media, government agencies – all are ripe for exploitation even by very basic AI agents. Cracked AI agents with convincing real-time voice capabilities could, in effect, turn many of society’s most fundamental bureaucratic systems into an open API.

“If our institutional framework were a literal operating system, this is the sort of situation that could see stack overflow errors and system crashes as the legacy systems simply fail to keep up.

“But it’s not just systemic risk that needs to be considered; the primary concern is that these systems empower the people who want to see traditional institutions fail. There may well be nothing a rogue AI could do that a rogue person somewhere is not likely to try first. Imagine warlords who wield algorithms instead of (or in addition to) armies. The potential for destabilization and conflict is rife, as agentic AI amplifies the scale of every bad actor with an internet connection.

“This isn’t without precedent. The early days of industrialization saw similar upheavals, as new technologies tore through established norms and systems. The solution then, as now, wasn’t to await a new breed of better or more enlightened human adapted to the technological landscape – but to actively design and construct robust new kinds of institutions capable of channeling these powerful forces toward positive externalities and away from negative externalities.

“Organizations themselves are a technology, and they need to be patched to keep up with new challenges and take advantage of new affordances. From this perspective, it’s pretty clear that now is the time to start putting together the pieces of a new institutional framework, an ‘operating system’ for the AI era, that can adapt as fast as the technologies it seeks to govern.

“This isn’t about stifling innovation; it’s about ensuring that the digital economy continues to give humanity as a whole more than it takes, an economy in which each transaction, each interaction, builds rather than extracts value. In this environment, proactive regulation isn’t just a stopgap; it’s an essential tool to bridge the gap between where we are and where we need to be. It is good to see governments starting to take this seriously.

“Over the longer term, if we design these institutional ‘operating systems’ correctly, we have a real chance of illuminating the path to a future of unprecedented progress and human well-being.”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”