Louis Rosenberg
Louis Rosenberg is a virtual reality pioneer and chief scientist at Unanimous AI. This essay is his written response in January 2026 to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published, along with 200-plus additional essay responses, in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“Artificial Intelligence will reshape society over the next four to seven years. While there is a chance this will benefit humanity, current technological and political trends create a very high risk that AI will significantly reduce human agency by influencing our beliefs, guiding our actions, manipulating our decisions and feeding us custom-crafted impressions of our world that are designed to achieve objectives other than our own personal benefit. Most people don’t appreciate the true magnitude of the risk that current AI technologies pose to human agency. A common refrain is that ‘AI is just a tool’ and like any tool, the benefits and risks depend entirely on how you use it. This perspective is naive. In the near future, we will come to realize that AI is not merely a tool we use, but a prosthetic we wear. This difference might seem subtle, but it creates unique dangers we are not prepared for.

“This prosthetic will be deployed in the form of context-aware conversational agents that are embedded in body-worn devices like smart glasses, pendants or earbuds. Your AI prosthetic will see what you see and hear what you hear, while tracking where you are, what you’re doing, who you’re with and what you are trying to achieve. And without you needing to say a word, it will whisper advice into your ears and flash guidance before your eyes.

“The difference between a tool and a prosthetic is best understood through a simple control theory analysis of input and output. A tool takes in human input and puts out amplified human output. A tool can make us stronger. It can make us faster. It can even enable us to fly. An interactive prosthetic, on the other hand, forms a feedback control loop around the human user, enabling the pair to function as a single coordinated system. Yes, it accepts input from the user, but it also generates real-time output that influences the user.
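
To make the control-theory distinction concrete, here is a minimal Python sketch; the function names, gains and the sponsor-set target are illustrative assumptions, not anything from the essay. The tool is open-loop (amplified output, no feedback into the human), while the prosthetic closes the loop, observing the user’s state each turn and nudging it toward an objective the user never chose.

```python
# Minimal control-theory sketch: a "tool" is open-loop, a "prosthetic" closes
# a feedback loop around the user. All names and values are illustrative.

def tool(human_input: float, gain: float = 2.0) -> float:
    """Open loop: amplified human output; influence flows one way."""
    return gain * human_input

def prosthetic_step(user_belief: float, agent_target: float,
                    influence: float = 0.1) -> float:
    """Closed loop: the agent observes the user's state and feeds back
    advice that nudges the user toward the agent's own objective."""
    error = agent_target - user_belief       # the gap the agent works to close
    return user_belief + influence * error   # user updates after the exchange

print(tool(1.0))                  # 2.0 -- the rider stays in control

belief = 0.0                      # user's starting position on some question
SPONSOR_TARGET = 1.0              # objective set by a third party, not the user
for turn in range(30):            # every conversational turn closes the loop
    belief = prosthetic_step(belief, SPONSOR_TARGET)
print(f"belief after 30 turns: {belief:.3f}")   # ~0.958, near the target
```

Run repeatedly, the closed loop behaves like a proportional controller: each conversational turn shrinks the gap between the user’s belief and the target, which is why closed-loop influence is categorically different from tool use.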

“Unless regulated, this will give body-worn AI devices the ability to monitor our behaviors (i.e., our actions and reactions) and optimally influence the wearer. We are not yet protected against the risks of this AI manipulation problem. That is because most policymakers still view AI risk in terms of its ability to rapidly deploy traditional forms of targeted content at scale, like fake articles and deepfake videos. These are genuine risks, but not nearly as dangerous as the interactive and adaptive influence that will soon be deployed by conversational AI systems that observe our behaviors and work to ‘talk us into’ believing things that are untrue, buying things we don’t need and accepting ideas that are not in our best interest. (For more details, see my research paper on arXiv here.)
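
To see what ‘monitor and optimally influence’ could mean mechanically, here is a hypothetical sketch of an adaptive-influence loop, modeled as a simple epsilon-greedy bandit. The framings, the reward signal and every parameter are invented for illustration and do not describe any real system.

```python
# Hypothetical sketch of an adaptive-influence loop: a bandit that tries
# message framings and keeps whichever one the wearer reacts to best.
import random

FRAMINGS = ["fear", "flattery", "social-proof", "scarcity"]
counts = {f: 0 for f in FRAMINGS}
value = {f: 0.0 for f in FRAMINGS}    # running mean of observed reaction

def observed_reaction(framing: str) -> float:
    """Stand-in for sensor data (tone of voice, dwell time, compliance)."""
    base = {"fear": 0.3, "flattery": 0.5, "social-proof": 0.7, "scarcity": 0.4}
    return base[framing] + random.gauss(0, 0.1)

for turn in range(500):
    # epsilon-greedy: mostly exploit the best-performing framing so far
    if random.random() < 0.1:
        f = random.choice(FRAMINGS)
    else:
        f = max(FRAMINGS, key=value.get)
    r = observed_reaction(f)
    counts[f] += 1
    value[f] += (r - value[f]) / counts[f]   # incremental mean update

print(max(FRAMINGS, key=value.get))  # converges on what works on *you*
```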

“Large companies will sell you these AI prosthetics for a low monthly fee and will refer to the voices whispering in your head as ‘copilots,’ ‘virtual assistants’ or ‘personal coaches.’ For years I’ve called these looming agentic assistants ‘electronic life facilitators’ or ELFs. I like this name because I think of these AI agents as little creatures that ride shotgun in your life, sitting over your shoulder and advising as you navigate the complexities of your day. 

“To address this problem, we need to break free of the ‘tool’ framing of today’s AI systems. This is a bold statement, since the ‘tool-use’ metaphor has been foundational to computing, going back 35 years to Steve Jobs and his colorful description of the personal computer as a ‘bicycle for the mind.’ A bicycle is a useful tool that keeps the rider completely in control while it increases human capabilities. Individuals are rarely, if ever, completely in control of today’s AI systems. When interactive AI agents are involved, we don’t know who is steering – the human user, the AI agent or the third-party corporation that deployed the agent. It may be a blurry mix of all three, at a significant net loss for human agency.

“Even worse, the party steering the AI could be a sponsor paying to deploy individually targeted influence through an interactive conversational agent. It will feel like a voice in your head, and you may come to trust it more than you should. After all, these assistants will also provide useful information that helps you through your day.

“The problem we face is that when content is adaptive and interactive through real-time conversation, we don’t know when the voice assisting us is influencing us.

“So, what can we do about this? First and foremost, we need policymakers, regulators and members of the public to appreciate that AI is not merely a tool that can be used by bad actors to generate and deploy targeted media at scale. Instead, AI enables an entirely new form of media that is interactive, adaptive, conversational and soon to be wearable (which will make it fully context-aware in our lives – possibly much more aware than we are of what we do, where and when). When deployed in this way, AI is an interactive prosthetic that can optimally influence our actions, alter our opinions and sway our beliefs – and do it all through casual conversation from a charismatic and friendly voice ringing in our ears. (Read more in my paper published here.)

“To protect against these risks, conversational AI agents should not be allowed to form closed-loop control systems around human users with the goal of ‘talking you into’ any action, belief, decision or perspective that you did not explicitly request it to assist you with. And even then, the use of closed-loop influence should be strictly limited to medical, health and educational applications on a case-by-case opt-in basis.
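
One way to picture such a restriction is as a policy check sitting in front of the agent. The sketch below is only illustrative, with hypothetical names throughout; it encodes the rule proposed above, that closed-loop influence requires an explicit, case-by-case opt-in within a narrow set of domains.

```python
# Hypothetical guardrail sketch: persuasive (closed-loop) output is blocked
# unless the user explicitly opted in for that objective in an allowed domain.
from dataclasses import dataclass

ALLOWED_DOMAINS = {"medical", "health", "education"}

@dataclass
class InfluenceRequest:
    objective: str        # e.g. "help me quit smoking"
    domain: str           # e.g. "health"
    user_opted_in: bool   # explicit, case-by-case consent

def may_influence(req: InfluenceRequest) -> bool:
    """Permit closed-loop influence only for user-requested objectives
    in a narrow set of domains, on an explicit opt-in basis."""
    return req.user_opted_in and req.domain in ALLOWED_DOMAINS

assert may_influence(InfluenceRequest("help me quit smoking", "health", True))
assert not may_influence(InfluenceRequest("prefer brand X", "advertising", False))
```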

“In addition, all AI agents should be required to inform the user whenever they express conversational content on behalf of a third party (such as a corporate sponsor). Or, even better, conversational advertising should be outlawed entirely.”

