
“We are digital beings in a digital world. That’s the main thing. And this world is still very new. We’ve operated in the natural world for as long as we’ve been a species, and we are experts at it. But the digital world is not only new, but sure to be with us for many years, decades, centuries and millennia to come. And we still lack countless graces we take for granted in the natural world, such as privacy and independence from algorithmic manipulation. Making full sense of this new world is very hard, because we understand everything metaphorically, and natural-world metaphors mask what’s really going on in the digital world. So, while we speak of ‘domains’ with ‘locations’ that we ‘build’ and ‘own’ (though most people only rent them) and speak of ‘loading’ and ‘transferring’ ‘packets’ of data ‘up’ and ‘down,’ data are actually collections of ones and zeroes that are by design immaterial non-things, instantaneously both here and elsewhere, even though ‘where’ only makes full sense in the natural world. How will all this change and make whole new kinds of sense after a few more decades of digital existence?
“Progress is the process by which the miraculous becomes mundane. In the digital world that transition is now happening almost instantly and in many domains because AI is endlessly useful.
“Big AI does its best to ingest the totality of human expression in all digital forms, and then to make any and all of it available in the most useful ways it can. At the moment (for me, it’s noon in The Bahamas on February 2nd, 2026), it does this by bringing hunks of that expression back to us, on demand, in constructive conversational forms. Big AI is the world’s largest Magic 8 Ball, within which floats a polyhedron of answers with trillions of facets, each ready to help.
“As with all tech, Big AI has its downsides. (Just check out what Geoffrey Hinton or Gary Marcus have to say about it.) But its usefulness verges on the absolute, so we can’t stop using it, no matter how abysmal some credible prophecies may be. There is one saving upside. It’s the same one that saved us from HAL 9000 in the book and movie ‘2001: A Space Odyssey.’ It’s our humanity and independence. Specifically, in the form of personal AI. We need personal AI for the same reason we need personal homes, shoes and computers. We need it to know our natural and digital selves as fully as possible and to participate with full agency in society, its economies and its governance.
“Think about all the data in our personal lives that are not in our full control. We could use some AI help with our schedules, our past and future work, our property, our finances, our obligations, our writing and correspondence, our photographs, our sound recordings, our videos, our travels, our countless engagements with other persons online and off, our many machines. You name it.
“Truly personal AI – the kind you own and operate, rather than the kind that is just another suction cup on a corporate tentacle – is as hard to imagine in 2026 as personal computing was in 1976. But it is no less necessary and inevitable. When we have it, many of the questions that challenge us will have new and better answers. And new challenges.
“Every form of life, from the microbial to the human, is fraught with challenges. Personal AI is necessary for us to meet and surmount our challenges in the digital world and to answer all the questions posed to us in this very research exercise.
“Amara’s Law says we overestimate in the short term and underestimate in the long. I’ve been doing both all my life, and in all my answers to good questions asked by Elon University and Pew Research over the years.
“Perhaps the most glaring example of short-term overestimation was my response to a request by The Wall Street Journal in 2012 to compress my new book, ‘The Intention Economy,’ to a single cover piece for the paper’s Marketplace section. My editor at the Journal suggested writing about how the intention economy would look 10 years in the future, which is four years ago as I write this. The piece I wrote was titled (by the WSJ) ‘The Customer as a God.’ In retrospect, I was wrong. The economy I described still hasn’t happened. We are not gods in the marketplace. But there are encouraging signs, and I’m still sure my prophecy will prove out. Meanwhile, the first half of Amara’s Law applies.
“I’ve been young for so long that I now have the life expectancy of a puppy. So, I don’t expect to see personal AI or the intention economy prove out in my lifetime. But I am sure both are worth working toward, so that’s what I do. And I advise anyone wishing to make the world better to look for their best work to manifest somewhere beyond their own life’s horizons.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”