Theme 4

A large share of these experts said their first concern isn’t that AI will “go rogue.” They mostly worry that advanced AI is likely to significantly magnify the dangers already evident today in people’s uses and abuses of digital tools. They fear a rise in problems tied to extractive capitalism, menacing and manipulative tactics exercised by bad actors, and autocratic governments’ violations of human rights.


Avi Bar-Zeev
AI is the most persuasive technology ever, and the most dangerous in greedy human hands

Avi Bar-Zeev, founder and president of Reality Prime and the XR Guild, said, “AI is poised to be the most persuasive technology ever invented, which also makes it the most dangerous in greedy human hands. By 2040, we may decide to let AI influence or decide legal cases. We may continue to see ad-tech with personal data run amok. We may even find that AI makes for better people-managers than people, replacing the top of companies with automation, more so than we originally expected low-level workers to be replaced by robots. Robots are expensive. Software is cheap.

“AI has the power to help humans collaborate. While generative AI indeed robs creators of their credit and income, it is also the most powerful tool for human-to-human collaboration we’ve yet invented. It can let people combine their ideas and expressions in a way that we never could. That power still remains largely untapped. AI has the power to help people heal from emotional trauma, but we may also use it as a substitute for people when what we need most is real human love and compassion.

“Will the people most in need turn to proven therapies or use the crutch of AI girlfriends to ease their loneliness? Probably the latter. The most important question about AI is how much control of our lives we grant it. We may trust AI more than individual human bias. But we should know that AI carries all of the same learned biases with, so far, none of the compassion to counteract that.

“All in all, this is one thing I know to be true of AI today as well as what is likely in 2040: The best and worst uses of AI are largely a function of the choices we humans make. If we build tools designed to help people, we can do good and still make mistakes. But if we choose to exploit people for our own gain, we will certainly do harm, while any good is incidental.

“We should be regulating the uses and intentions more than the technologies themselves. And we must be educating everyone on how to make ethical choices for the best outcomes. The risk of AI extinction is roughly equal to the risk of nanotechnology turning the world to grey goo or some stock-trading algorithm tanking the market. But humans failing to build safe systems can injure people.”

Devin Fidler
Worry about the oncoming wildfire more than the distant asteroid

Devin Fidler, foresight strategist and founder of Rethinkery.com, commented, “The AI discourse has been too fixated on a possible impending doomsday due to AI spiraling out of control. The pressing, tangible challenges just at the threshold of the AI technologies we have today are straining legacy systems and institutions to their breaking point, exacerbating negative externalities and potentially nurturing the growth of new kinds of digital warlords.

“This is like worrying about an asteroid collision while your house is in the path of an oncoming wildfire. To be clear: AI could absolutely be an enormous boon for humanity. Yet, like that wildfire, if left unattended it could also consume an awful lot that we would prefer not to see burned down. This isn’t fear-mongering; it is reality.

“Right now, companies are racing to outpace each other in the agentic AI space, prodded by investors seeking astronomical returns. (There is evidence to suggest that the early LLMs were originally intended to be introduced as a component in larger AI ‘agent’ software – AI that is given a goal and then works on accomplishing it on its own.) Indeed, artificial agency may ultimately be even more impactful than traditional artificial intelligence. After all, it allows software scaling and intense competition to be applied to a great game of ‘shaping the physical world.’ The challenge, of course, is that the rest of us still have to live in the physical world while this plays out.

“Traditionally, society has created institutions to protect itself from this kind of thing. But regulation lags behind, always a few steps too slow, always playing catch-up. Imagine AI supercharging this disparity. Even now, problems like climate change and unsustainable resource allocation overwhelm the institutional tools we have to address them. Add exponential AI to this mix and we seem to be setting the stage for an AI-enhanced tragedy of the commons in which digital agents, in their quest for optimization, exponentially leave the negative externalities for the rest of us to clean up.

“The biggest threat now may not be sci-fi’s Skynet terminators or the shibboleth paperclip maximizers, but tomorrow’s now infinitely scalable con artists, sales bots and social media manipulators, all potentially capable of undermining institutional effectiveness and inflicting collateral damage on overall cohesion at a scale we’ve never seen before.

“How can our legacy systems be patched quickly enough to handle this? Financial systems, social media, government agencies – all are ripe for exploitation even by very basic AI agents. Cracked AI agents with convincing real-time voice capabilities could potentially be used to create a new open API to society’s most fundamental bureaucratic systems. If our institutional framework were a literal operating system, this is the sort of situation that could see stack overflow errors and system crashes as the legacy systems simply fail to keep up.

“But it’s not just systemic risk that needs to be considered; the primary concern is that these systems empower the people who want to see traditional institutions fail. There may well be nothing a rogue AI could do that a rogue person somewhere is not likely to try first. Imagine warlords who wield algorithms instead of (or in addition to) armies. The potential for destabilization and conflict is rife, as agentic AI amplifies the scale of every bad actor with an internet connection.

“This isn’t without precedent. The early days of industrialization saw similar upheavals, as new technologies tore through established norms and systems. The solution then, as now, wasn’t to await a new breed of better or more enlightened human adapted to the technological landscape – but to actively design and construct robust new kinds of institutions capable of channeling these powerful forces toward positive externalities and away from negative externalities.

“Organizations themselves are a technology, and they need to be patched to keep up with new challenges and take advantage of new affordances. From this perspective, it’s pretty clear that now is the time to start putting together the pieces of a new institutional framework, an ‘operating system’ for the AI era, that can adapt as fast as the technologies it seeks to govern.

“This isn’t about stifling innovation; it’s about ensuring that the digital economy continues to give humanity as a whole more than it takes – where each transaction, each interaction, builds rather than extracts value. In this environment, proactive regulation isn’t just a stopgap; it’s an essential tool to bridge the space between where we are and where we need to be. It is good to see governments start taking this part seriously. Over the longer term, if we design these institutional ‘operating systems’ correctly, we have a real chance of illuminating the path to a future of unprecedented progress and human well-being.”

Leah A. Lievrouw
The fight to gain first-mover and network effects advantages is everything

Leah A. Lievrouw, professor of information studies at the University of California-Los Angeles, said, “Of course, many people are thinking about the issues around AIs, especially the major industry players, but I’m not confident that values beyond efficiency, novelty and profit will ultimately prevail in this arena. I question whether the claims made for AI will ultimately pan out as they are now being glowingly promised.

“AI research was originally a quest for ‘general intelligence’ for machines, and – despite repeated failed attempts to build such machines over the decades – such human-like capacities still seem some way off.

“The difference today, of course, is the sheer brute-force approach being applied to the creation of machine ‘learning’ using imponderably large datasets – despite questionable practices about the sources, cultural/social significance or meaning, or ownership and use of that data – and assumptions that massive computing power will only continue to expand on some kind of unstoppable log scale – despite the environmental risks and foregone opportunities for investing in something other than computing infrastructure that these entail.

“My impression is that the current batch of AIs (multiple because so far they each really just do certain types of things well) have been rushed to market with little non-tech oversight, so proponents can gain first-mover and network-effects advantages (and property rights).

“Under these conditions, who will eventually get to decide what general machine intelligence is, how it should be deployed and under what circumstances and to what ends?”

Howard Rheingold
Corporations with huge financial and computational resources will be in control

Howard Rheingold, pioneering Internet sociologist and author of “The Virtual Community,” wrote, “The future depends on who is in control, and it seems highly likely that corporations with huge financial and computational resources will continue to be in control, strengthening their monopolies. If that is the case, we can expect income inequality – already at a crisis stage – to get worse.

“The ability of medical researchers to seek cures and prevention for deadly diseases will be multiplied.

“What I fear is that antisocial individuals and groups will gain the power to create weapons of mass destruction that heretofore have been reserved for states: already, the same tools have been used to solve the protein folding problem and to suggest tens of thousands of potentially fatal compounds to be used in biological and chemical warfare. I did not come up with the phrase, but I agree that a good question to ask about any potentially powerful technology is ‘What might 4chan do with it?’

“As a former university lecturer, I’m happy to see student use of ChatGPT blowing up the traditional tools for assigning grades. These institutions and their employees are never likely to radically change destructive processes like traditional grading unless they are faced with an existential threat.

“One significant critical uncertainty is whether AI will evolve as a tool for augmenting human intellect or as a replacement. If the former, unless educational institutions and practices change radically (how many schools offer enough guidance today to students on assessing the accuracy of online information?), there will be a strong divide between those who know how to use these tools to amplify their own capabilities and those who do not have that knowledge/skill.”

Tim Bray
Capitalism limits the focus on AI’s long-term impact on people

Tim Bray, founder/principal at Textuality Services, previously a vice president at Amazon, wrote, “The problem with AI has nothing to do with the technology itself. The problem is the people who are financing and deploying it.

“The imperatives of 21st-century capitalism ensure that their thought processes will not include the impact of those deployments on humans, be they employees or customers. This effect is worsened by the high cost of building and training AI models, ensuring that this capability will mostly be exercised by people whose primary concern is profit, rather than the improvement of the human condition.”

John Battelle
Who will the AIs work for? Who controls the data they work with?

John Battelle, owner of Battelle Media and chairman at Sovrn Holdings, wrote, “We’re at an inflection point as to the ecosystem we build to leverage AI. We have to choose, now, the assumptions we build into agency and rights for individuals interacting with these systems. I’ve written about this on my site. Here are excerpts from a September 2023 post titled ‘On AI: What Should We Regulate?’:

“A platoon of companies is chasing the consumer AI pot of gold known as conversational agents – services like ChatGPT, Google’s Bard, Microsoft’s BingChat, Anthropic’s Claude and so on. Tens of billions have been poured into these upstarts in the past 18 months, and while it has been less than a year since ChatGPT launched, the mania over generative AI’s potential impact has yet to abate.

“The conversation seems to have moved from ‘this is going to change everything’ to ‘how should we regulate it’ in record time. What I’ve found frustrating is how little attention has been paid to the fundamental, if perhaps a bit less exciting, question of what form these generative AI agents might take in our lives. Who will they work for, their corporate owners, or …us? Who controls the data they interact with – the consumer, or, as has been the case over the past 20 years – the corporate entity?…

“Most leading AI executives are begging national and international regulatory bodies to quickly pass frameworks for AI regulation. I don’t think they will be up to the task. Not because I think regulators are evil or stupid or misinformed – but rather because a top-down approach to something as slippery and fast-moving as generative AI (or the internet itself) is brittle and unresponsive to facts on the ground.

“This top-down approach will, of course, focus on the companies involved. But instead of attempting to control AI through reams of impossible-to-interpret pages of regulation directed at particular companies, I humbly suggest we should focus on regulating the core resource all AI companies need to function: Our personal data.

“It’s one thing to try to regulate what platforms like Pi or ChatGPT can do, and quite another to regulate how those platforms interact with our personal data. The former approach stifles innovation, dictates product decisions and leads to regulatory capture by large organizations. The latter sets an even playing field that puts the consumer in charge.”

David Bray
Focus on how to co-exist with super-empowered, transnational organizations and individuals

David A. Bray, principal at LeadDoAdapt Ventures and distinguished fellow with the non-partisan Stimson Center, commented, “The distraction here is focusing on questions that are framed as: What if AI did XYZ to humanity? Instead, we really should be focusing on how we learn to co-exist with both super-empowered, transnational organizations and individuals who are now (via the increasing accessibility, ubiquity, and affordability of technology) able to do things that only large nation-states could do 40-50 years ago.

“The challenge with these questions is that they treat the world as singular. With AI, whether outcomes are positive or negative will probably depend very much on specific societies’, nations’ and communities’ choices around data, AI and people. Outcomes are also linked to other contextual influences. After all, there are already 54 different national AI strategies in the world – see https://www.aistrategies.gmu.edu/report.

“What if the Turing test [the long-standing marker of whether a computer system has intelligence] is the wrong test? It could be distracting us from bigger and more important questions about how powerful organizations and individuals use AI.

“It is important to remember the original Turing test – designed by computer science pioneer Alan Turing himself – involved Computer A and Person B, with B attempting to convince an interrogator, Person C, that they were human and that A was not. Meanwhile, Computer A was trying to convince Person C that they were human. What if this test of a computer ‘fooling us’ is the wrong test for the type of AI that 21st-century societies need, especially if we are to improve extant levels of trust among humans and machines collectively?

“Instead of the Turing test, we should be asking how AI can amplify the strengths associated with where humans individually and collectively are great – while mitigating our weaknesses both individually and collectively in making decisions. Specifically, instead of AI trying to pass as human, we should be using AI to make us better humans together.”

Melissa Sassi
Human-centered AI can succeed only if it includes all humans

Melissa Sassi, venture partner at Machinelab Ventures, wrote, “While AI innovation is moving faster than experts anticipated a year or two ago, the likelihood of human-level AI coming to fruition by 2040 is debatable. However, AI is and will be the biggest transformation in our lifetime and the lifetime of our children. It has already played a major role in transforming industries, jobs, products, services and experts’ predictions for the future of work. It’s part of our everyday lives, even if it is behind a tech curtain most cannot see or grasp. It is already transforming financial services, healthcare, education and so much more.

“The opportunities are endless. It is important, then – since AI relies on data to create its magic and augment our lives – that society create solutions that allow the public to own their own data. This gives rise to conversations around decentralization, Web3, digital assets and blockchain, and it probably requires a future in which the current handful of tech companies and faulty monetary systems no longer determine our future and where our data resides and is sent. I hope a solution takes shape and is adopted that not only protects our children’s data but also allows them to monetize it as they see fit and with informed consent.

“Wherever data resides, it must be protected and kept private, safe and secure. Too many companies are lax with cybersecurity education and lax with the technology they foster. A cultural revolution must take place in which it becomes technically unfeasible for nefarious characters to access our data. Relying on technical assurance instead of operational assurance – privacy by design and zero trust – is hopefully something that will gain more traction across all industries. That said, quality and representative data must be available for AI to do its thing.

“Billions of people do not have access to networked intelligence or the capacity to use it well. It is my hope that AI supports tech innovation that identifies new ways of getting people connected affordably with viable business models, versus creating a world of more have-nots while the haves and the top one percent flourish.

“The creators, makers and doers of the world must take responsibility for trust, transparency and fairness when building AI solutions. Without humans at the center of every aspect of evolving AI solutions, we will find inherent bias each step of the way, and this will exponentially impact our children and our species. It is incredibly important to have a more-diverse, equitable and inclusive AI workforce – one representative of all – to ensure the impact of AI does not favor one small class of people over the rest of the world.

“As healthcare, financial services, agriculture, education, the criminal justice system and so much more intertwine with AI, the majority of the world’s people should not be held back due to faulty algorithms and assumptions. We already have enough divides in the world as it stands.

“To stay ahead, it is incumbent upon today’s generation to help enable the future generation, which requires elders to give them a seat at the table, ensuring they have access to future-ready skills and have the support and experience necessary to thrive.

“Whatever the future holds, AI should augment our intelligence and creativity… not replace it. It should boost our potential, serve as an extension of our innate strengths and be available for all. It should help us solve problems, get stuff done, make the impossible possible, gain insights and so much more.

“While many fear the unknown, it is my hope that AI does our work for us, makes lives more meaningful, gives us more time to do the things that truly matter – for ourselves, our family and friends – while ensuring our planet and what may lie beyond it are healthy and long-lasting.”

Continue reading: A selection of essays tied to Theme 5