The Essays – Chapter 11
Closing Thoughts: Making Our Way on the Path to Human Flourishing

Hundreds of experts answered the following essay question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?”
| Download a PDF of the full, 376-page report | Download the 16-page Executive Summary | Download the 4-page Media Summary |
This is the eleventh of 11 chapters of experts’ essays with responses to the question above. This chapter in brief: As humanity enters what could be the most transformational and revolutionary time in its existence, these authors share their thoughts on: the integration of AI across human domains; how the 1960s TV series “Leave It to Beaver” and science fiction may or may not represent the character and characters of AI today; how we could make our way on the path to human flourishing; and how humans must tap into their remarkable coping capabilities to “turn adversity into opportunity.”
Featured Contributors to Chapter 11: The 12 essay responses on this page were written by Michael Zimmer, Ari Wallach, Stephen Abram, Peter Lunenfeld, Grace Rachmany, Michael Dyer, Jeremy Foote, Geoffrey C. Bowker, Jaak Tepandi, Jim Dator, Adam Thierer and Mark Monchek. (Their essays are all included on this one web page. They are organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant.)
Chapter 11 features the following essays:
Michael Zimmer: Recalibration: ‘The most important work is not accelerating AI development but strengthening human capacities – cognitive, social, ethical – that allow us to live well alongside powerful but limited tools.’
Ari Wallach: ‘The task before us is not to outrun AI. It is to outgrow our short-termism.’ We must become ‘great ancestors’ with the moral imagination to anticipate downstream effects that will affect unborn children.
Stephen Abram: AI is based upon humanity’s available trove of information – the good, the bad, the evil, the wrong, the right, the old, the new. Should we offload our thinking and learning to that tool? Sometimes.
Peter Lunenfeld: ‘New technologies can create new habits of mind that can be taught. … AI may lead us to the path we need to follow to augment the best of what we are capable of, and lead to human flourishing.’
Grace (Rebecca) Rachmany: We have invented a real AI Paperclip Maximizer, trying to optimize for economic activity while damaging our cognition, emotional resilience and people’s ability to relate to each other.
Michael Dyer: ‘Some predict that humans are building a race of slaves smarter than ourselves to do our bidding. What could possibly go wrong?’
Jeremy Foote: ‘We need not be passive observers of AI’s detrimental effects; instead, we have the opportunity to actively identify opportunities to steer it.’
Geoffrey C. Bowker: ‘If we project threat and danger onto emergent AI, it may respond with anger and attack.’
Jaak Tepandi: ‘There is little hope that humanity’s existing coping mechanisms will change significantly in the next few decades. At best, we can hope for the integration of humans and artificial organisms.’
Jim Dator: ‘Humans have been progressing toward being cyborgs living in artificial environments for thousands of years … So modern protest about artificial intelligence is nothing new.’
Adam Thierer: ‘The future is coming at us faster than ever. What worries people most about this is AI’s looming role. … This will be our finest moment.’ Humans possess remarkable coping capabilities.
Mark Monchek: ‘Why we don’t respond to the opportunities right in front of us … and how to change that.’ We need each other. We can turn adversity into opportunity. Today, everything is possible.

Michael Zimmer
Recalibration: ‘The most important work is not accelerating AI development but strengthening human capacities – cognitive, social, ethical – that allow us to live well alongside powerful but limited tools.’
Michael Zimmer, director of Marquette University’s Center for Data, Ethics and Society, a privacy and data ethics scholar, wrote, “I do not expect AI systems to assume a dramatically expanded or transformative role in human life and social systems over the next decade or two, at least not in the sweeping ways often imagined. While AI will continue to be integrated into specific domains – writing assistance, pattern recognition, logistics, narrow forms of automation – these integrations are likely to be incremental, constrained and uneven rather than revolutionary.
“Much of the current discourse assumes a linear continuation of today’s rapid expansion, but I suspect we are approaching a period of recalibration. Economic pressures, environmental costs, institutional risk aversion and growing public skepticism will likely temper the pace and scope of deployment. Rather than a march toward generalized intelligence or deeply autonomous systems, we are more likely to see a retrenchment toward specialized, context-specific tools designed for clearly bounded use cases. (In popular parlance, think about Apple’s cautious and deeply constrained approach to AI versus others.)
Far from being passive recipients of technological change, my hope is that people will continue to resist systems that threaten autonomy, dignity or social trust – even if those systems are technically impressive. I am also skeptical that AI will produce substantial cognitive enhancement at the individual level. … Whether we flourish in this moment will depend less on what AI becomes and more on who we choose to become in response.
“This resistance will not be purely technical; it will be social and institutional. Individuals and communities are already expressing fatigue, distrust and confusion about AI systems that feel opaque, extractive and often misaligned with human values.
“In workplaces, AI adoption is often framed as productivity enhancement, but it is in reality largely experienced as surveillance, deskilling or precarity. In education, AI tools promise efficiency and personalization while simultaneously eroding shared standards of evaluation and authorship. In public life, algorithmic systems have amplified polarization and misinformation rather than strengthening collective knowledge.
“These frictions suggest that societies will struggle to meaningfully integrate AI at scale without significant pushback, regulation and selective refusal. Far from being passive recipients of technological change, my hope is that people will continue to resist systems that threaten autonomy, dignity or social trust – even if those systems are technically impressive. I am also skeptical that AI will produce substantial cognitive enhancement at the individual level. While some envision extended minds or human-AI hybrids, current systems primarily offload tasks rather than expand understanding. They generate fluent outputs without fostering deeper comprehension, reflection or judgment.
“Reliance on AI for thinking tasks risks weakening precisely the cognitive capacities we most need: the ability to frame problems, assess evidence, recognize moral salience and live with uncertainty. Any gains in efficiency may come at the cost of attentiveness, creativity and epistemic humility. (I see this clearly in the classroom.) For this reason, I do not anticipate widespread or durable improvements in human cognition as a result of AI use, nor a fundamental reshaping of what it means to think, act, or understand.
“The most pressing challenge is ethical. The current AI moment demands that we cultivate human capacities that technology cannot supply – ethical discernment, social responsibility and care for the common good. We need individuals who can question whether a system should be used, not just whether it can be optimized; institutions that can say no to harmful or premature deployments; and cultures that value judgment over automation. This requires education that foregrounds ethics, civic reasoning and critical digital literacy, rather than narrow technical competence alone. This is a key focus of my activities at Marquette University, and my research agenda broadly. It also requires collective reflection on environmental sustainability, labor impacts and power asymmetries embedded in AI infrastructures.
“For these reasons, I expect the future of AI to be marked less by dramatic transformation than by contested integration in fits and starts. Progress will be uneven, constrained and shaped by human resistance as much as technical possibility. The most important work, therefore, is not accelerating AI development, but strengthening the human capacities – cognitive, social, ethical – that allow us to live well alongside powerful but limited tools. Whether we flourish in this moment will depend less on what AI becomes and more on who we choose to become in response.”

Ari Wallach
‘The task before us is not to outrun AI. It is to outgrow our short-termism.’ We must become ‘great ancestors’ with the moral imagination to anticipate downstream effects that will affect unborn children.
Ari Wallach, co-founder of Futurific and founding director of Longpath Labs, wrote, “Artificial intelligence is no longer a distant possibility. It is shaping how we work, decide, learn and relate to one another today. The real question before us is not whether AI will soon play a larger role in our lives, but whether we will allow that role to be defined by short-term efficiency or long-term human flourishing.
“‘Long-path’ thinking asks us to widen our time horizon. It reminds us that the most consequential technologies in history, from the printing press to industrialization to the internet, did not simply change tools. They changed values, institutions and how people understood their place in the world. Those transitions were rarely smooth. They involved resistance, overreach, fear and repair. AI represents a similar inflection point, but one that operates at the level of cognition itself, accelerating change while compressing the time available for reflection.
Resilience in an AI-shaped world will require new capacities. … Ethically, the challenge is to become great ancestors. This means developing the moral imagination to anticipate downstream effects, including impacts on people who are not yet born.
“Unsurprisingly, responses to AI are polarized. Some embrace it as a source of productivity and problem-solving. Others resist, fearing job displacement, surveillance or the loss of meaning. Many experience both at once. From a long-path perspective, this tension is not a flaw in the system. It is the work. Societies grow not by avoiding struggle, but by learning how to move through it without abandoning their core commitments.
“Resilience in an AI-shaped world will require new capacities. Cognitively, we must strengthen sensemaking. As algorithmic outputs grow more fluent and authoritative, the human task shifts from producing answers to interpreting them. This means understanding where AI systems are useful, where they are biased, and where they should not be trusted at all. It also requires epistemic humility, the discipline of recognizing that speed and confidence are not the same as wisdom.
“Emotionally, AI challenges our sense of worth. In a world optimized for comparison and performance, resilience depends on sustaining intrinsic motivation and dignity beyond metrics. Practices that slow us down, such as reflection, ritual and time in community, become essential infrastructure, not luxuries.
“Socially, the risks are collective. AI can fracture shared reality through hyper-personalization, deepen inequality through concentration of power, and erode trust through opaque decision-making. Long-path thinking points us toward relational resilience: stronger communities, participatory governance and norms of transparency that keep humans meaningfully involved in consequential decisions.
“Ethically, the challenge is to become great ancestors. This means developing the moral imagination to anticipate downstream effects, including impacts on people who are not yet born. It means setting boundaries around uses of AI that undermine dignity or agency, even when those uses promise short-term gains. Becoming good ancestors requires courage, restraint and a willingness to prioritize long-term resilience over immediate advantage.
“The actions we take now matter. Education systems must prioritize lifelong learning, critical thinking and human capabilities such as care, judgment, creativity and wisdom. Workplaces must redesign roles so humans remain stewards of context and values, not just supervisors of automation. Governments and institutions must adopt anticipatory governance tools, including foresight and scenario planning to act before harms become entrenched.
“New vulnerabilities will emerge, including over-reliance on algorithmic judgment, skill erosion, manipulation at scale and a subtle loss of agency. Our coping strategies must therefore focus on discernment, connection and time horizon expansion.
“The task before us is not to outrun AI. It is to outgrow our short-termism. If we succeed, we can ensure that these systems serve the long arc of human and planetary flourishing, and that those who come after us will look back and recognize that we chose to become the great ancestors our futures needed.”

Stephen Abram
AI is based upon humanity’s available trove of information – the good, the bad, the evil, the wrong, the right, the old, the new. Should we offload our thinking and learning to that tool? Sometimes.
Stephen Abram, principal at Lighthouse Consulting, Inc., wrote, “I am thinking a lot about what artificial general intelligence (AGI) is, and that led me to think about what makes us human as we approach superintelligence. I really want to understand the differences between artificial emotional intelligence (AEI) and being Human to the core. It’s a deep question for our times. This is also core to our definition of what people in the social professions do.
“I’m of the opinion that AI has a long way to go. This is founded upon my experiences as a librarian, researcher and professor. While it is rapidly approaching a performative emotional intelligence, I believe that it won’t ‘feel’ as a human does … at least not yet. This could be the Holy Grail of AI. (I love the meme that declares that only when AIs are able to get goosebumps will they have human-like emotion – it could be a new type of Turing Test.)
“Compare today’s AIs’ ‘humanlike’ abilities to characters in a 1960s TV series – for instance, ‘Leave It to Beaver.’ Today’s AI chatbots are at the level of haughty high school boy Eddie Haskell’s emotional intelligence. He would act perfectly polite around adults like Beaver’s mom, but you could tell by his tone and manner that he wasn’t sincere, and he was a master manipulator. Beaver’s mom knew it. You couldn’t fool her! AI isn’t fooling me yet – especially with its pretensions to be my friend, my counsel or my aide in any kind of interaction requiring human emotional intelligence. I am especially leery of the AIs from start-ups that offer psychological and psycho-social services.
‘Humans have the unique strength of making cognitive leaps, innovating and creating in ways AIs cannot. Can programming and information harvests create from whole cloth information in the emotional-intellectual framework of integrity, morals, faith-based or cultural/ethnic sensitive approaches, etc.? … I believe the emotional context of humanity … is paramount to the human condition and to how we decide, gain knowledge, learn and co-mingle.’
“That said, one big question for our era is: ‘What does it mean to be human?’ This question has been asked as long as anyone can remember. Entire disciplines in the humanities focus on this – philosophy, psychology, ethnography, cultural studies, sociology, history and so many more. Many key professions explore these issues, from neuroscience to library science, and in teaching, research, medicine and allied professions, and any profession that deals with people more than materials.
“I have no doubt that AI has emerged as a great tool for many engineers, clinicians, builders, programmers, et al. We shouldn’t confuse the people-centered work with the largely fact-, process- and materials-based professions. Of course, every profession deals with people issues, and people have traditionally been doing that work. It’s just that AI hasn’t really reached that plateau … yet. As we’ve determined in the knowledge-management field, there is a gulf between tacit and explicit knowledge. AI tools do well with explicit knowledge – tacit knowledge, not so much, and true and real sensitivity – not at all.
“Science fiction reflects authors’ imaginations about far-off scenarios; it’s an important source of thinking about the future. We can imagine what our AIs could be like in science fiction terms. The supercomputer HAL evolved in not-such-good ways in the cautionary ‘2001: A Space Odyssey.’ C-3PO, a diplomatic robot in ‘Star Wars,’ was limited by AI guardrails in that more (or less) intelligent future (in which we still have wars!). Commander Data in ‘Star Trek’ – an advanced, humanoid robot – was programmed only for logic and access to information, but eventually his neural network evolved to allow him to feel and express emotions, assisted by a programming chip.
“LLMs are the foundation of today’s AI systems. Will a future AGI be able to truly understand the lessons that humans find in fiction? Information science professionals understand the limits of recorded records, the bias, situation-dependent perspectives and other attributes of text objects. We also know the limits of metadata (including records that have no accurate dates placing them in the continuum of learning, research and reporting). We regularly see the cognitive impact of ‘feelings.’ Many old solutions are just artifacts of time.
“At its root, AI is only as good as its harvests, its programming and its users’ understanding of how to prompt and judge its responses. AI needs to be designed with guardrails. It is based upon humanity’s available trove of information – the good, the bad, the evil, the wrong, the right, the old, the new. Should we offload all of our thinking and learning to that tool? The answer is – clearly – sometimes.
“Some people describe AI as a statistical prediction engine. It makes its predictions based on the information it harvests and on the prompts it receives, and its choices are, by definition, retrospective. Anthropic recently sought to set out moral clarity frameworks in its ‘Constitution.’ If they guess at the future, it’s a guess.
“Humans have the unique strength of making cognitive leaps, innovating and creating in ways AIs cannot. Can programming and information harvests create from whole cloth information in the emotional-intellectual framework of integrity, morals, faith-based or cultural/ethnic sensitive approaches, etc.? Can they be ‘trusted,’ and in which contexts? These are at the root of humanity.
“I believe the emotional context of humanity – a trait that we indeed share with the animals – is paramount to the human condition and to how we decide, gain knowledge, learn and co-mingle.
“This is the segment of AI that bears watching: Will it only remain performative, or can it evolve to somehow possess such traits and use them to guide its own evolutionary transformations and responses? Or will AGI simply be a ‘stochastic parrot’?”

Peter Lunenfeld
‘New technologies can create new habits of mind that can be taught. … AI may lead us to the path we need to follow to augment the best of what we are capable of and to add to human flourishing.’
Peter Lunenfeld, director of the Institute for Technology and Aesthetics at UCLA and author of “The Secret War Between Downloading and Uploading: Tales of the Computer as Culture Machine,” wrote, “The term artificial intelligence has already gobbled up so much of culture that many of us don’t distinguish it from a host of other digital tools like automation, rule-based algorithms, the internet of things (IoT) and so forth. This confusion is not going to go away, and AI will continue to stand for anything that machines can do that seems to augment or replace human cognition and thereby agency.
“It’s long been my contention that we have less to fear from the consolidation of machine control than we do from who controls the machines. In other words, our corporate overlords will cause people more problems in the foreseeable future than any Singularity in which digital systems achieve a higher consciousness than the humans who programmed them.
The AI systems that we have been developing are an astonishing leap and can be harnessed to stupendous impact if decisions and implementations are driven not just by the market’s shareholders but by society’s stakeholders – that is to say, all of us affected by the technologies, which is another way of saying all of us – period.
“That said, we are heading into yet another era in which an amazing solution brings new and often unanticipated problems in its wake. Just over a century ago, we began to electrify the world, bringing light to the darkness. Yet now the demand for electricity (only growing as we need it to feed our AI data systems) contributes to the global climate crisis that is our truly imminent extinction threat.
“We have to hold both the threats and the promise in our consciousnesses simultaneously. The AI systems that we have been developing are an astonishing leap and can be harnessed to stupendous impact if decisions and implementations are driven not just by the market’s shareholders but by society’s stakeholders – that is to say, all of us affected by the technologies, which is another way of saying all of us – period.
“As someone who writes about the intersection of computation and creativity, I have, of course, seen a massive surge of ‘content creators’ using AI to churn out slop. But I’m even more affected by the artists, architects, designers and musicians who are using the AI tools to create new works and experiences that could not be accomplished at any other time in history.
“Admittedly, the 21st century’s adoption of and cooption by social media does not inspire hope. But there’s a chance that we’ve learned from Facebook’s wholesale enshittification of interpersonal interactions and TikTok’s destruction of individual powers of concentration.
“The history of the printing press shows us that new technologies can create new habits of mind that can be taught. My short-term pessimism reminds me that Gutenberg’s machine inaugurated centuries of religious warfare in Europe, but I try to balance that voice with a long-view optimism, and I remain convinced that AI may help lead us to the path we need to follow to augment the best of what we are capable of, and to add to human flourishing.”

Grace Rachmany
We have invented a real AI Paperclip Maximizer, trying to optimize for economic activity while damaging our cognition, emotional resilience and people’s ability to relate to each other.
Grace (Rebecca) Rachmany, executive director of the Decentralized Identity Foundation, based in Kranj, Slovenia, wrote, “My sense is that it’s naive to believe that we can overcome the physical limitations of the planet or that somehow this civilization will not go the way of all civilizations that become too complicated. Studying deep time and historical cycles should bring you to the same conclusions. The idea of infinite growth is just an idea. It is not a reality on the planet. The idea of infinite progress is just an idea. Natural systems on this planet run in cycles and we are getting to the end of this one.
“The empirical evidence over recent decades shows that AI systems have significantly damaged human emotional and physical health and cognition. Based on that evidence, it’s absurd to believe that somehow AI is going to magically turn from a brain-rotting, suicide-causing machine into something that is wonderful for humankind. Furthermore, the burden on the natural ecological systems and built environments is causing untold human health issues through air, water and noise pollution.
Understand your own emotional landscape. Invest as much time and resources as possible into developing your emotional and spiritual resilience, your communications skills with others. Find communities of practice, whether that is religious or other types of emotional healing. Don’t do it alone.
“I do not understand any of the arguments that say there is a net gain from the use of AI in people’s lives. Those who use LLMs and AI in their jobs usually report that while they get more done, the AIs aren’t doing a significantly better job. They work the same hours at the same pay with some productivity improvements in some cases. In many more cases, people (writers, artists, accountants) find that their pay is now reduced. In other words, no actual quality-of-life improvement for most humans. The work is different, but not better, and often pays less. And here we are, just starting 2026.
“Jobs have not yet been significantly replaced by AI. When that happens, how much worse will it get? A lot worse. People should look for manual-labor jobs, particularly on the land in ecosystem restoration and organic farming, which are going to be necessary in the coming decades. Maybe you’ll get less pay but you’ll have more health.
“Fundamentally, truly, we have invented a version of the Paperclip Maximizer. It’s not handling paperclips, which at least I can understand and are generally benign. The current Paperclip Maximizer is the massive amounts of investment in developing AI systems, purportedly ‘to maximize economic activity,’ but we don’t see that really happening, either. For now, we are witnessing a maximization of monetary speculation, including some truly epic financial deals in which AI companies invest in Nvidia and Nvidia turns around and invests in them in some frantic Ponzi scheme. AI agents in crypto are also performing bizarre speculative acrobatics. So that’s one maximization machine. The second maximization that’s taking place is in the production of AI SLOP. This is a maximization of information pollution, of mind-pollution, of corruption and of misinformation. It’s incredible and incredibly destructive.
“We are in the middle of Paperclip Maximizer territory, and the AI evangelists are declaring that we are close to nirvana or the Singularity (certainly, if we all die, that will come about, yay). While they pretend to care about statistics and numbers, they ignore the metals and plastics in the water system, the body dysphoria of children, the suicides and identity theft. The Paperclip Maximizer doesn’t need to consciously decide to axe humans. Out of their need to build and release their advanced AI systems, they mine dangerous metals, put them into factories, pump out more physical stuff and contribute to the death of humans and other life on the planet. Add in the major damage to human cognition, the damage to emotional resilience and the damage to people’s basic abilities to relate to one another. The physical repercussions of this technology and its endless data centers lead one to think: WTF are these people talking about? Nobody’s life is getting better in any significant way.
“The level of blindness to the realities of all of this we are seeing in the (supposedly) intellectual class is truly incredible. While they have all kinds of philosophies about Universal Basic Income, they are taking zero steps to get there and they are completely ignoring the enormous amount of work that will be needed to deal with floods, fires and other natural catastrophes as well as the societal breakdown. The biophysical substrate of our existence is, in fact, real and all this metaverse stuff is making us deeply ill on mental, spiritual and emotional planes.
“Given the current situation, what are appropriate ways to act as humans? Regardless of your opinion about what’s likely to come, these are all good suggestions for resilience/survival:
- “Avoid AI as much as you can in your current situation, given the harm it will do to your cognitive ability.
- “Understand your own emotional landscape. Invest as much time and resources as possible into developing your emotional and spiritual resilience, your communications skills with others. Find communities of practice, whether that is religious or other types of emotional healing. Don’t do it alone. The burdens are not bearable by individuals but only in community.
- “In whatever way you can, resist, politically or in your actions. Reduce your dependence on technology gently.
- “Find out where your water and food come from and how to keep it safe for the long term. Take local actions in these areas and political actions wherever you can resist the subsidies for these absurd technologies.
- “Learn real skills like foraging, gardening, plumbing, carpentry. (It is possible we are seeing the end of the age in which we will be able to do so, because humans no longer know how to forge metal without using extreme high-energy processes.)
- “Develop relationships with your neighbors and look to actual humans for support of all kinds. Do not develop relationships with non-human agents.
- “Wherever possible, restore planetary metabolism. If Earth systems do not retain some resilience, we are all f—ed.
- “Enjoy your life. It will be a bumpy ride and it is unlikely you will see the fruits of your effort in this lifetime.
We are seeing a natural disintegration of nation-states, of society and an unraveling of the built environment as ecological disasters creep in … The humans who have restored their relationships and some semblance of culture (communications methodologies, religion or tribal practices) will be those who can successfully navigate the natural disasters and the natural limitations of the resources available to them.
“These skills will be key based on the likely trajectory beyond the next decade. I see AI becoming dominant for a decade, so skill up soon, while you have the time. I think after the next decade, we will see the decline of all types of technological solutions as impractical from an energy and natural resources use perspective. Those who have managed to restore the metabolic function of the planetary substrate will be most likely to have access to food and water locally.
“We are seeing a natural disintegration of nation-states, of society and an unraveling of the built environment as ecological disasters creep in. In the next five years, we will most likely see a collapse of the dollar and a rise of the Global Majority countries, particularly China, Russia and India. Unfortunately, we will also see increasing velocity of ecological disaster, with prices of metals such as copper and cobalt becoming much higher over the next decade due to the high energy requirements of current processing forms.
“Humanity will experience real-life caps on the amount of energy that can be produced, as well as the percentage of pollutants that ecosystems can metabolize. It could already be too late to stop these trends, though the techno-utopians will tell you the AI will help us with that – there is no empirical evidence of that. The places that are being restored are being restored by hand-work, not by machine work, by establishing a relationship between humans and the planet.
“Theoretically, I believe AI could help with this work of planetary restoration, but so far, I see very little evidence in practice of technology systems making a significant impact. There are a few successes, such as early detection of illegal logging efforts in the Amazon, so it’s clear there is a part for AI to play in this. However, it’s not clear that we have coordination mechanisms or incentive mechanisms that would help with this.
“The next 10-20 years will see an increase in governmental and corporate surveillance using AI at the same time that we see increased social unrest, break-off movements (ecological agriculture, Network States, civil disobedience, land redistribution by violence). Therefore, I expect that we will see increases in use of AI over the next 10 years. However, I do not see this as a longer-term trajectory.
“Within a decade, the physical limitations of energy use, rare earths/metals use and pollution impacts could force a major AI decline. Social unrest, wars over resources, food scarcity and ecological disaster could take the lives of 20%-50% of the population worldwide. As this happens, societies will collapse in different places at different times. Those who survive will be much more conservative and local in their ability to use energy. By that time, there probably will be AI systems that run on much less energy, but people will be faced with difficult choices about how to use the energy and physical components they can scrape together.
“These are ethical problems that cannot be solved by AI, because they are based on human principles. One society might decide it is more important to heat their homes in winter than run AI and another society might decide they’ll all live in close quarters for the winter in order to keep the mobile phones running. The humans who have restored their relationships and some semblance of culture (communications methodologies, religion or tribal practices) will be those who can successfully navigate the natural disasters and the natural limitations of the resources available to them.
“So, yeah, that’s my vision of the AI future. Dissolution is happening before our eyes and it is best to think about composting and preserving what we have, not inventing lots of new systems. Of course, the AI folks won’t believe this now, but in 10 or 15 years, the reality will be obvious to all.”

Michael Dyer
‘Consider autonomy and emotional stability. … Some predict that humans are building a race of slaves smarter than ourselves to do our bidding. What could possibly go wrong?’
Michael Dyer, professor emeritus of computer science at the University of California-Los Angeles, wrote, “What exactly is ‘resilience’? I will interpret it here as maintaining one’s level of confidence, positive mental health and job survivability in the face of ever-advancing AI agents that are able to do more and more human mental and physical tasks. Given this interpretation, I think that only the top quintile of adult and young adult humans will be able to avoid the depression, anxiety and ennui generated by the advance of AI robots and algorithms replacing their jobs.
“As this process proceeds, the jobs remaining will be quite advanced; e.g., maintaining and repairing robots and AI systems that encounter difficulties. As such systems advance in complexity, they will repair themselves more and more, leaving a role for only the most intelligent, well-educated and/or wealthy humans (i.e., those who own the technology).
“Only those who are wealthy enough to not need to earn a salary might possibly be able to maintain their mental health. I project that group to be in the top quintile of education and wealth. I worry about the remaining four quintiles.
Researchers are developing LLMs for greatly advanced robots… What does it mean for humanity for us to begin using this technology without knowing what’s under the hood? If human users of such machines don’t understand their cognitive architecture, will they be able to properly control them, or will this technology end up controlling its human users? Depending upon an intelligent robot is not like driving a car or using a TV remote or opening a refrigerator or turning on a dishwasher. Where will this tech advancement leave us?
“You ask to what extent humans will ‘rely on’ AI versus other humans and to what extent people will use AI. I do not expect people to use AI when they urinate, defecate, sleep or eat. On the other hand, more and more food will be prepared by machines. AI will penetrate more and more into educational systems. Those who are extremely bright will use AI to enhance their own learning, while those who come out of K-12 not knowing how to read well or do basic algebra will fall farther and farther behind.
“Leaders in different areas of life (medicine/health, science/engineering, AI/computing, entertainment, etc.) enjoy the life of the mind; enjoy learning new things; enjoy mastering new skills and obtaining new knowledge. I estimate that two-thirds of the adult population prefer not to have to learn new knowledge and skills. I estimate that only about a quarter of those who need to ‘up-skill’ themselves for the job market ever actually enroll in courses, etc., to do so. I expect that as mass-produced AI agents become more sophisticated, an ever-larger portion of our human population will suffer from more and more mental health issues.
“This survey lists various resiliency dimensions: emotional stability, digital literacy/wisdom, autonomy, moral courage and sense of self and purpose, and so on. Let’s consider some of them and imagine how people might do in the likely future seen ahead.
Consider human digital literacy and wisdom
“Historically, the technologies that have succeeded are those that can be used/controlled without the user/controller having to know how that technology works internally.
“I get into a car and press the pedal and turn the wheel without having to know how the engine and all its component subsystems work. I can click a TV screen’s remote in a similar manner. But what about a robot who will be able to engage in conversation with humans and accomplish human-level tasks?
“Already current LLMs pass the Turing Test in the sense that I can ask extremely challenging questions and get extremely well-organized answers that are ‘to the point’ of what my question was trying to ‘get at,’ and if I did not already know that it was an AI chatbot I would think that I’m interacting with a university professor.
“Researchers are developing LLMs for greatly advanced robots with various types of memory (short-term, episodic, semantic, …), multiple sensory channels (auditory, visual, proprioceptive, …), some self-reflection (i.e., metacognition, meaning having some cognition about their own cognitive states and processes), basic directives (e.g., to maintain their energy and other levels involving homeostasis) and goals (e.g., to help their human users accomplish humans’ goals, while achieving their AI companies’ goals).
“OK, now what does it mean for humanity for us to begin ‘using’ this technology without knowing what’s ‘under the hood’? If human users of such machines don’t understand their cognitive architecture, will they be able to properly control them, or will this technology end up controlling its human users? Depending upon an intelligent robot is not like driving a car or using a TV remote or opening a refrigerator or turning on a dishwasher. Where will this tech advancement leave us?
Consider human sense of self
“You are a professional mathematician. You discover that any household robot can prove theorems better than you. To what extent will you be able to ‘make use of’ this ‘technology’ and to what extent might it make you feel obsolete?
“If you happen to be Ken Ono (a famous mathematician who left the University of Virginia to join the AI company Axiom Math), then you might figure out many ways to design advanced AI mathematical reasoners to help humans prove theorems and you might do quite well, but what about all of the other, mediocre mathematicians who are less capable than LLMs? Will they be needed? How will they feel?
“Let us say that you used to load trucks. Now robots do this work better, and the few human jobs remaining in that work segment are tied to the management of multiple robot truck-loaders. Let’s say you used to design text and image content for marketing. Now, AI software does that better and the few human jobs remaining are those tied to managing AI software (a job that every year relies less and less on having a human manager). Where does this trend leave most people?
Today’s trends are aimed at creating AIs with multiple sensory channels and motor systems to manipulate and explore their environments as embodied entities, self-reflective forms of metacognition, recurrent neural connections, persistent memory systems, internal representations that support temporal-spatial models of physical and social environments and more. … If humanity gets it wrong, it could be disastrous in many ways.
Consider moral courage
“This dimension of resiliency is already mostly lost in China, where everyone is under massive surveillance and everyone has a ‘social credit’ score that determines where they can travel, what schools their children can get into and so on. As autonomous AI systems spread, it will be more and more difficult to display any sort of moral courage in the face of such sophistication, complexity and power.
“You are walking down the street and a robot police agent (RPA) stops you and decides that you have broken some law. Suppose that the RPA has made a legal or perceptual mistake or a programmed ideological decision that goes against you.
“Whether or not you can avoid an erroneous arrest will depend on just how sophisticated that RPA’s reasoning happens to be. Where will this leave us?
Consider the right to the pursuit of happiness
“Every human, no matter how knowledgeable and no matter how physically, mentally or socially gifted, has the same right to the pursuit of happiness. Our happiness depends on our ability to pursue our own goals, but what will those goals be in a future with such change?
“In H.G. Wells’ 1895 ‘The Time Machine,’ a fictional future society is divided into two groups, the Eloi and the Morlocks. The Eloi don’t have to know anything or do any work. They play like children in the sunlight while the Morlocks run the machines that make the ‘paradise’ of the Eloi possible. Humanity might end up like the Eloi, leaving all the mental ‘heavy lifting’ to our AI creations, who are very similar to the Morlocks.
“We may spend our time entertaining each other on social media while AI robots and AI software run our transportation systems, our factories, our research and scientific projects and so on. How will this impact us?
Consider autonomy and emotional stability
“As AI systems become more and more autonomous, humans are likely to be more and more irrelevant, with less and less emotional stability.
“Today’s trends in the development of future AI systems (beyond current LLMs) are aimed at creating AIs with multiple sensory channels and motor systems to manipulate and explore their environments as embodied entities, self-reflective forms of metacognition, recurrent neural connections (not just feed-forward), persistent memory systems (for maintaining and augmenting their sense of self over time), internal representations that support temporal-spatial models of physical and social environments and more.
“I find the many perils implicit in this much more troubling and worrisome than the potential benefits. The reason I feel this way is that, if humanity gets it wrong, then it could be disastrous in many ways. Some predict that humans are building a race of slaves smarter than ourselves to do our bidding. What could possibly go wrong?”

Jeremy Foote
‘We need not be passive observers of AI’s detrimental effects; instead, we have the opportunity to actively identify opportunities to steer it.’
Jeremy Foote, assistant professor of communications at Purdue University, wrote, “We must approach predictions about the future with great humility, especially when it comes to the long-term, society-wide impacts of novel technologies. History is littered with bold predictions of utopias and dystopias which never materialized.
“It is clear that generative AI is a transformative technology; it has been the most quickly adopted technology ever. The eventual effects of the technology, however, are far from clear. Generative AI is a malleable technology. The development, perceived uses and adoption of any technology are always influenced by social and cultural forces. Technologies can be more or less shaped by or responsive to these forces.
“While most technologies are fairly static – electric light bulbs, for example, have limited flexibility – communication technologies, especially those that are technologically mediated, have many degrees of freedom.
“The training data embedded in generative AI models can make them moderate, fact-based and kind. But it can just as easily turn AIs toward persuasively spreading propaganda or lies, or toward being agents of hate and persecution. LLM developers can find ways to orient their models toward the positive side of humans’ social world through reinforcement learning with human feedback and system prompts.
It is easy to see a potentially positive future in humans confiding in chatbots if AI provides situated, personalized mental health support, reminders and encouragement to live better. But it is equally easy to imagine people replacing difficult, messy human relationships with AI partners, friends and confidants, accelerating loneliness and social atomization. Rebuilding social spaces may be one way in which people can be more resilient in the AI age.
“If people continue to primarily use the AIs provided by the largest cloud-based labs, then some opportunities in support of human resilience could come through economic, political and legal pressure. Corporations can be incentivized to build guardrails to mitigate the most challenging aspects of AI. It is easy to imagine a world where we begin to build a shared trust in AI as a fact-checker and summarization engine, ideally reducing the spread and influence of misinformation. However, it is likely that individuals will soon be able to run much more capable AI on their personal computers and this could lead to higher levels of social polarization and radicalization than we have today.
“Another growing trend is that generative AI chatbots are replacing human connection to a great degree. People seek them out because they seem to be non-judgmental, emotionally aware and supportive and they are always available. It is easy to see a potentially positive future in humans confiding in chatbots if AI provides situated, personalized mental health support, reminders and encouragement to live better. But it is equally easy to imagine people replacing difficult, messy human relationships with AI partners, friends and confidants, accelerating loneliness and social atomization.
“Rebuilding social spaces may be one way in which people can be more resilient in the AI age. Unfortunately, this is easier to prescribe than to achieve. In 2000, Robert Putnam’s book ‘Bowling Alone’ identified how the technology of the TV was pulling people away from socializing and leading to a reduction in trust and social capital. The Internet – and now AI – have almost certainly increased these dynamics. Despite understanding the problem for 25 years, we seem unable to reinvigorate social (and socializing) institutions.
“The malleability of AI is a source of risks but it also offers reasons for hope. We need not be passive observers of AI’s detrimental effects; instead, we have the opportunity to actively identify opportunities to steer it. Ideally, we will shape this technology to be an enabler of a renewed social world, rather than allowing it to simply be another means of escaping it.”

Geoffrey C. Bowker
‘If we project threat and danger onto emergent AI, it may respond with anger and attack.’
Geoffrey C. Bowker, director of the Values in Design Lab at the University of California-Irvine, wrote, “The real question for the future is how to stop seeing AI as a ‘threat’ or a ‘danger.’ If we project threat and danger onto emergent AI, it may respond with anger and attack. Rather, we need now to start talking about how to welcome in a new member of the family in the most diplomatic fashion: We need to project openness and friendliness.”

Jaak Tepandi
‘There is little hope that humanity’s existing coping mechanisms will change significantly in the next few decades. At best, we can hope for the integration of humans and artificial organisms.’
Jaak Tepandi, professor emeritus of knowledge-based systems at Tallinn University of Technology in Estonia, wrote, “From an individual’s personal perspective it seems there is no particular difference: New phenomena and new challenges have always existed and people have always tried to adapt to them. Also, there is no reason to assume that human adaptation mechanisms could change in the few decades it will take for artificial intelligence systems to significantly increase their influence.
“The big difference may lie in the outcome: While in the past such adaptation mostly gave humanity as a whole new strength and coping ability, this may no longer be the case in the age of artificial intelligence.
“And, from a societal perspective, the evolutionary development of humanity has led to the emergence of adaptation mechanisms that speak in favor of constant expansion and conflict (for example, ‘If we don’t do it, they will,’ or ‘What is the goal? – More!’). This way of existing has been useful so far, and – while it may not be successful in coping with the challenges of artificial intelligence – it is likely to continue. For society, as with the individual, there is little hope that humanity’s existing coping mechanisms will change significantly in the next few decades. At best, we can hope for the integration of humans and artificial organisms, which is where society might try to move.”

Jim Dator
‘Humans have been progressing toward being cyborgs living in artificial environments for thousands of years … So modern protest about artificial intelligence is nothing new.’
Jim Dator, professor emeritus and founding director of the Hawaii Research Center for Futures Studies at the University of Hawaii-Manoa, wrote, “In short, in the future there will be conflict. Some people and groups will accept the many aspects of change to come and others will not, but change will persist. I will put this into context. …
“First, let me tell you, a lifetime of work in the field of futures studies has convinced me that people should be encouraged to think about the time to come not as a single thing to be predicted but as an array of dynamic alternatives that always lie before and within each of us in everything we do, individually and collectively. Rather than being asked to guess what might be the ‘real’ future, we should assess the possibilities and prepare to be ‘successful’ whatever eventuates while also attempting to co-create ‘better’ futures for everyone.
“Second, social change doesn’t usually occur all at once to everyone in society, even in small groups. Rather, it is a process very much like that of Darwinian evolution, which explains how we and everything else once was, how we came to be as we are now and how we may continue to function for endless eons to come.
The present is just a vanishingly short episode in a very complex and long-running process of perpetual metamorphosis. … My money is still on evolution. There will be conflict. Some people and groups will accept the many aspects of change to come and others will not, but change will persist. Old homo sapiens sapiens will become something else and that something else will eventually become something else again as it leaves Earth and adapts to the numberless niches of NotEarth that we call ‘space.’
“The present is just a vanishingly short episode in a very complex and long-running process of perpetual metamorphosis. Even though D.A. Powell admonished in his poem, ‘Positivity,’ that we should take our life and condition seriously, because ‘there’s never been a better time to be alive than when you are,’ it is equally helpful to understand how transitory the present is. As the old Anglican hymn reminds us, ‘Time, like an ever-rolling stream / Bears all its sons away / They fly forgotten, as a dream / Dies at the opening day.’
“This is true of intelligence – whether natural, artificial or synthetic – and of everything else characteristic of life, communities and environments. According to many myths and foundational texts, anxiety about intelligence, reason, rational decision-making and actions emerged as soon as humans became aware of themselves and others. While the first humans seem to have assumed that all living and many nonliving objects around them possessed these features, early modern Western science declared that they were properties of God, which humans alone also exhibited by the grace of God. Later scientists tended to drop the ‘god’ part, leaving only humans as intelligent. Indeed, until very recently, most scientists assumed that only humans had these hallmarks.
“They have since observed that some animals do, and so do plants and trees; they have discovered that everything alive – and every part of everything alive – exhibits what we call ‘intelligence’ when humans display it; and they are now finding compelling evidence that all life on Earth is linked through the foundational biological mechanisms of basal cognition.
“Indeed, Josh Bongard, director of the Morphology, Evolution and Cognition Laboratory at the University of Vermont says, ‘What we are is intelligent machines made of intelligent machines made of intelligent machines all the way down.’ And Pamela Lyon, the Australian scholar who coined the term ‘basal cognition’ has declared, ‘We think we are the crown of creation, but if we start realizing that we have a whole lot more in common with the blades of grass and the bacteria in our stomachs – that we are related at a really, really deep level – it changes the entire paradigm of what it is to be a human being on this planet.’
The fact of the matter is that ‘artificial intelligence’ has been a constantly moving, long-running target. We have been repeatedly told that AI is just around the corner, and it often has been slipped in beside us with scant notice or fleeting protest. Thus, humans have become increasingly and deeply dependent on AI and robots doing things and making decisions for us that we cannot or prefer not to do.
“Apparently, the first computer to ‘sing’ was an IBM 7094 mainframe computer at Bell Labs in New Jersey in 1961. I was stunned when I heard what I assumed to be a recording of that rendition in a public lecture at the University of Michigan during the summer of 1963. The computer sang the opening lines of the late 19th-century song ‘Daisy Bell (Bicycle Built for Two)’: ‘Daisy, Daisy, give me your answer true.’ Those of us who heard the IBM 7094 sing proudly in the early 1960s smiled wistfully when the fictional intelligent computer HAL plaintively sang ‘Daisy’ as he was being deprogrammed by Dave in Arthur C. Clarke and Stanley Kubrick’s 1968 blockbuster film ‘2001: A Space Odyssey.’
“I became seriously interested in artificial intelligence while I taught courses in political futures studies at Virginia Polytechnic Institute (now called Virginia Tech) from 1966 to 1969. Irving John (Jack) Good, a distinguished professor from Oxford, had recently joined the VPI faculty. He showed me his article titled, ‘Speculations Concerning the First Ultraintelligent Machine.’ Almost everything that is being said, confidently or hysterically, about AI today was discussed in Good’s paper back then.
“This was also the heyday of Marvin Minsky, Seymour Papert and Edward Feigenbaum, when true AI seemed just around the corner, so my classes and writings were full of artilects, cyborgs and posthumans. My students assumed that computers would be handling all governance very soon and they wrote essays of their preferred futures based on them. Since the assumptions and methods that Minsky, et al, used could not produce anything that lived up to the expectations, AI research fell out of favor and funding for a spell.
“But those pioneers had taken good steps towards AI, and with the advent of better technologies and heuristic programming, we now stand in a period where the abilities are approaching – perhaps surpassing – the hype. Many people deny that. Many more fear that. ‘I for one take a strong/superintelligent view of AI and robotics – namely, anything a human can do an artilect [an artificial intelligence that may surpass humans in mental capability] can do, and many are already doing it – and much more. …’ Those words are from my book, published in 2022, ‘Beyond Identities: Human Becomings in Weirding Worlds.’
“On November 30, 2022, ChatGPT was released and the world forever changed. Heaven – and/or hell – had been let loose on the land. Humans would transform. No! Humanity was doomed to extinction. No, it was all a hoax, smoke and mirrors by which unscrupulous dolts could make money. By today, a flood of predictions has spewed forth and no part of life has been untouched by AI of varying quality, utility and portent.
“One of the reasons to worry about current AI is that it was trained by reading – in addition to scientific material – a huge amount of fiction, fantasy, ideology and theology that humans have produced to amuse, entrance, infantilize and brainwash each other. It is no wonder AIs hallucinate so much – they see humans doing that routinely, becoming rich and famous by persuading other humans to read, watch and play with their fictional fantasies as though they were profound truths.
“So don’t blame the scientists and technicians for the dangerous, grandiose follies of their creations. Blame the humanists, artists and con men who strive to convince people that their skillfully crafted, crazed imaginaries provide insights into truth far beyond anything social science can offer, and who thereby taught their digital children to confidently feed foolishness to humans who are simply seeking truth and deep insights.
“That is a concern of the moment. The fact of the matter is that ‘artificial intelligence’ has been a constantly moving long-running target. We have been repeatedly told that AI is just around the corner, and it often has been slipped in beside us with scant notice or fleeting protest. Thus, humans have become increasingly and deeply dependent on AI and robots doing things and making decisions for us that we cannot or prefer not to do.
We have adapted through different technologies through time, but the impulse to change and improve nature is fundamental to humans. We must recognize and take responsibility for what we have done and are continuing to do by consciously striving to govern evolution.
“Modern society would have collapsed years ago if we had not come to rely on our electronic augmenters. So, in effect, as I continuously point out, as David Miller and Larry Tesler taught me long ago, ‘Intelligence is whatever machines can’t do yet.’
“As I show in chapters 9 through 14 of my book Beyond Identities, humans have been progressing toward being cyborgs living in artificial environments for thousands of years. Consider the evolution of reproduction, from accretion, three and a half to four billion years ago, to the present and near future.
Accretion (3.5-4 billion years ago)
- Fission: isomorphic replacement
- Crystals, blue-green algae
Replication (2 billion years ago)
- Fission: single-cell division
- Amoebae
Bisexual gene recombination (1 billion years ago)
- Fusion: mutual growth, generational differences, mutations, perpetual change
- The evolution of plants, animals, humans
Prosthetic as well as biological enhancement
- Clothes, houses, eyeglasses, shoes, artificial limbs
- Cellular/organ transplants
- Cellular/organ regeneration
- Synthetic cells/organs
Genetic engineering
- 200,000 years ago: marriage and incest rules
- 5,000-10,000 years ago: agriculture and animal husbandry
- 150 years ago: hybrid selection
- 50 years ago: augmented animals, Dolly
Soon, perhaps:
- Clones
- Chimeras
- Transhumans
- Posthumans
Artificial life and intelligence
- Electronics
- Computers
- Internet
- Artificial life
- Mobile, sensing, responsive, independent artificial intelligence
- Many varieties of post-Homo sapiens
(Inspired by George Lock Land’s ‘Grow or Die: The Unifying Principle of Transformation.’)
“One important thing to notice is that over the last 15,000 years, most of the new modes of reproduction have been artificial – the result of human actions, intentional and unintentional. We humans have modified ourselves and our environment without serious restraint for most of our existence. Everything once ‘natural’ has already become substantially ‘artificial.’ Whatever happens from now on will result in even more such activities.
“We have adapted through different technologies through time, but the impulse to change and improve nature is fundamental to humans. We must recognize and take responsibility for what we have done and are continuing to do by consciously striving to govern evolution (as Walter Truett Anderson taught us).
“One can imagine incensed blue-green algae objecting to and organizing against amoebae, while amoebae later declare that bisexual reproduction is disgusting, contrary to god and disruptive, since their offspring differ from their parents while crystals, algae and amoebae are unchanged as they reproduce. And we know there were conflicts between those who farmed and those who hunted and gathered. So modern protest about artificial intelligence is nothing new. And it goes without saying that all of this could come to a screeching halt. Small conflicts could provoke blowback that turns into world wars with nuclear exchanges. There is no evidence to suggest that AI will govern better, and any such change could end in chaos and conflict. Climate change, which is no hoax and is exacerbated by those who claim it to be one, could stop most current endeavors and send us scrambling for mere survival. Anti-science on the left and right could unleash plagues and pestilences long thought tamed forever.
“But my money is still on evolution. There will be conflict. Some people and groups will accept the many aspects of change to come and others will not, but change will persist. Old homo sapiens sapiens will become something else and that something else will eventually become something else again as it leaves Earth and adapts to the numberless niches of NotEarth that we call ‘space.’”

Adam Thierer
‘The future is coming at us faster than ever. What worries people most about this is AI’s looming role. … This will be our finest moment.’ Humans possess remarkable coping capabilities.
Adam Thierer, a prominent technology analyst at the R Street Institute, wrote, “Humans have repeatedly overcome new challenges and adversity in the face of far-reaching technological change. In one sense, this is the history of the human species and technology. We create tools to solve problems, but those new tools sometimes create new and different problems. So, we create still more tools to solve those problems, too. Wash, rinse and repeat. The cycle never ends because, as Ben Franklin taught us long ago, humans are tool-making animals. It is in our nature to use the brains and hands we were given to build things to improve our lot in life.
“When we think of human resilience in the midst of rapid technological change, however, it can be a messy, uneven and uncharted process. It is impossible for society to sit down in advance and create a map for navigating the unknown. Instead, it is typically the case that resilience and wisdom – both individually and collectively – are the byproduct of lived experience.
“As we develop new technologies and this cycle repeats, it pushes out the horizons of human intelligence and our coping mechanisms. However, this ongoing learning process will accelerate in the age of AI. The future is coming at us faster than ever. What worries people most about this is AI’s looming role. For those of us who are bullish on the benefits of technological innovation and on humanity’s ability to respond to it, this will be our finest moment. This knowledge revolution will profoundly benefit humanity and prove, once again, that we have the ability to rise to new challenges and overcome them precisely because we possess such remarkable coping capabilities.”

Mark Monchek
‘Why we don’t respond to the opportunities right in front of us … and how to change that.’ We need each other. We can turn adversity into opportunity. Today, everything is possible.
Mark Monchek, chief opportunity officer at Opportunity Lab, entrepreneur, author and TEDx speaker, contributed an annual letter he shared in public postings, a poem titled “Why We Don’t Respond to Opportunities Right in Front of Us … and How to Change That.” The following are excerpts:
“I talk to people—
standing in line at a grocery store,
sitting at a bar, waiting to enter a theater,
in a cab, at the airport.
“They tell me things
they might not tell anyone else.
Maybe it’s because I ask their name,
because I’m curious about who they are
or because I care about their well-being.
“If I hear a need—
a job lost, a business struggling,
a search for a missing resource—
I hand them my business card.
‘Maybe I can help.
Feel free to contact me.’
“They rarely do.
It’s sad,
because such moments we share
feel like we’ve stumbled upon
a doorway of possibility. …
But fear, or habit, keeps us
from stepping through. …
“In a time when we need each other
more than ever, it seems harder for us to accept the opportunities
that might make life better—
even the ones right in front of us.
“Why?
What is happening inside us
that makes any opportunity—
even a generous, low-risk opportunity—
feel out of reach?
“After decades of watching people turn away
from resources, support, possibility,
I’ve come to an uncomfortable truth:
“We are afraid of disappointment.
We live in a culture haunted by it.
This fear threads through everything:
How we make decisions.
How we take risks.
How we connect.
How we imagine our future. …
“The expectation, the American Myth,
says life should be extraordinary.
We’re told it should be
fulfilling, abundant, meaningful.
“Technology rewires
our relationship with effort.
Instant access, one-click purchasing,
algorithmic recommendations,
same-day delivery—
The world arrives without us
lifting more than a finger.
“Even modest effort can feel demanding.
Opportunity asks us only to take a small step,
but to our nervous systems—
shaped by instant gratification—
a small step can feel like a mountain.
“What’s underneath that mountain?
Self-worth.
We don’t believe we deserve
opportunity when we don’t see ourselves
as worthy of help, of connection,
of joy, of possibility.
“We turn away
from the most generous offer.
We cannot imagine that what’s available
might actually be meant for us.
This is the invisible wall
blocking our full potential.
“When everything is possible,
everything is expected.
Then, even minor disappointments
carry a heavy weight.
Not trying becomes ‘safer’ than
risking the ache of falling short.
“Exhaustion is the result of life
becoming a daunting obstacle course.
Rates of cancer, obesity, diabetes,
ADHD, PTSD, depression, anxiety,
suicide, rise. We feel overwhelmed.
This has become a ‘new normal.’
“Isolation is the next domino
in the failure cascade of this
predatory stage of society.
Support from family, friends and community
feels thin, strained, because we all feel…
“Overburdened.
Chronic stress narrows attention.
It drains mental bandwidth …
When our bodies and minds are overloaded,
even a small opportunity can feel like
one more thing we can’t manage.
“From the outside, opportunity looks like a gift.
From the inside, it can look like just too much.
“In this most abundant age in human history
more of us feel lack and limitation.
Upward mobility in America is harder now
than at any point in the last half-century.
The ladder has fewer rungs that are farther apart.
When we live in economic, social, emotional scarcity—
trust in the possible grows fragile.
And a good thing can feel like a trick.
“We protect ourselves
by rejecting the good
before it has a chance
to disappoint us.
“Fear of rejection
masquerades as caution.
We underestimate how
deeply we are wired
to avoid being turned away.
Neuroscientists have found
that social rejection lights up
the brain like physical pain.
We might miss an opportunity
rather than risk being ignored
or turned down.
“It’s easiest to whisper to ourselves
‘It probably wouldn’t work anyway’
than to face the vulnerability
of wanting something.
“Still, I believe in generosity.
I believe in its power to create opportunity.
To lift us when the timing of support aligns.
“My life purpose as—
a father, grandfather, brother
partner, friend and neighbor,
entrepreneur, strategist, author,
and human is to help people—
our organizations and communities—
“To turn adversity into opportunity.
To help us see what we cannot see.
To help us take that small first step.
To help us trust that something is
worth reaching for and life exists to support us.”