
“If you are feeling overwhelmed by the speed of change in artificial intelligence today, you are not alone. On February 5, 2026, OpenAI confirmed that a version of GPT-5.3-Codex had successfully ‘debugged its own training’ and said it had been a major contributor to its own design. We have now crossed a threshold that humanity has anticipated for decades – recursive self-improvement in our learning machines.
“We have left the Anthropocene and entered what philosopher Glenn Albrecht calls the Symbiocene – an era in which humanity, with the aid of this new, far more globally aware form of life, will return to a sustainable relationship with the natural world. We live, now, in a world that we falsely imagine we control and dominate. In reality, nature and her intelligence networks have always been in charge.
“The latest headlines might make accelerating change seem terrifying. Epoch AI, a nonprofit research organization that investigates the future trajectory and societal impacts of artificial intelligence, recently reported that over the last five years the cost of thinking in our learning machines has deflated at a rate of 40-100x per year.
“We are entering a new economic era that will make the Industrial Revolution look like a gentle slope. But this revolution is qualitatively different. It is not about humans gaining more biological control of their environment, but about the alignment of humans and their AIs to each other and to our ecosystem, including all sentient life.
What AI does next and why – and what we do to advance it – are the vital choices in this new era
“In the current AI climate, we may rightly fear the further growth of inequitable, wasteful, consumer-driven capitalism and the autocratic power of surveillance states. But people who think in that frame of mind are using the wrong models to understand the near future of power.
“AI progress is happening far faster than everything in the biological space. What AI learns and does next and why – and the steps we take to better advance it – are the most important choices of this new era. Fortunately, we can already see that AI is poised to help us create for ourselves a far more bottom-up, locally driven and pluralistic ecosystem, just like biological life. The better we see that natural transition, the better we can aid in it.
“Recently – though this fact has received much less attention – Epoch AI also estimated that decentralized AI compute (the gross volume of thinking), led by small local, organizational and personal models, both proprietary and open-source, is now growing at 20x per year, compared with 5x per year for the large, centralized corporate and state AI platforms.
“As AI commoditizes and becomes increasingly cheap or even free (think DeepSeek), Epoch is projecting that the capabilities of locally deployed and controlled AI will exceed those of centralized AI by 2031. By 2036, the ecosystem will have raced far beyond the power that any oligopoly of tech titans can muster, regardless of how much capital they raise or how clever their systems are. This type of self-organizing network has other ideas.
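The projected crossover follows from simple compounding. A minimal sketch: the starting capability gap below is purely hypothetical (the essay does not state one); the point is only that, at the quoted growth rates, the decentralized-to-centralized ratio gains 4x per year, so even a large deficit closes within a few years:

```python
import math

# Growth rates quoted from the Epoch AI estimates above (per year).
decentralized_growth = 20.0
centralized_growth = 5.0

# Hypothetical assumption: decentralized capability starts at
# 1/1000th of centralized capability in 2026.
initial_ratio = 1 / 1000

# Each year the decentralized/centralized ratio grows by 20/5 = 4x,
# so parity (ratio = 1) arrives after log(1000) / log(4) years.
years_to_parity = math.log(1 / initial_ratio) / math.log(
    decentralized_growth / centralized_growth)

print(f"Years to parity: {years_to_parity:.1f}")          # ~5.0 years
print(f"Crossover year:  {2026 + math.ceil(years_to_parity)}")  # 2031
```

Under this assumed starting gap, the arithmetic lands on 2031, matching the projection in the text; a different starting gap shifts the year only logarithmically.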
“In my own research, a roughly 20:1 decentralized-to-centralized control ratio is a common feature of complex adaptive systems at all scales. I call this the 95/5 Rule. Sample any healthy complex system and, to a first approximation, 95% of what you see will appear random, contingent, long-term unpredictable and locally controlled. Only a very special 5% looks convergent, conservative, long-term predictable and top-down controlled. The most efficient, effective and dominant living, social and machine networks are always very largely ‘out of control,’ as Kevin Kelly aptly described in his prescient 1994 book ‘Out of Control: The New Biology of Machines, Social Systems and the Economic World.’
New-network transitions raise speed, complexity and adaptiveness by orders of magnitude
“To understand the future of the Symbiocene, the best lens is Evo-Devo (evolutionary developmental) biology and systems theory, my primary area of research since 2008. Evo-devo philosophers tell us that all living systems are both unpredictably evolving and predictably developing at the same time. Evolutionary dynamics are bottom-up, creative, unpredictable and largely out of control. Developmental dynamics are top-down, conservative, predictable and constraining. Both dynamics are critical to adaptiveness, and both are regulated by a dizzying variety of networks. It is evo-devo networks, not individuals or species, that are life’s superadapters. Life’s physical and informational networks – not its individuals, not its species – have always been effectively immortal, growing in complexity for the last 3.8 billion years.
“What’s more, life periodically adds fundamentally new networks to her existing stack. At the leading edge of its adaptiveness, where it creates its most generally intelligent and capable systems, life has progressed through self-replicating, self-improving chemical-genetic networks, then eukaryotic cellular networks, then multicellular networks, then neural networks, then symbolic, cultural, and technological networks and now, self-improving, network-centric AI. Each of these evolutionary transitions (more accurately, levels of universal development) has involved the emergence of a new network with orders of magnitude more speed, complexity and adaptiveness.
“Fortunately, the previously leading networks don’t disappear as the new ones emerge. They just reorganize their relationships and power dynamics, improving symbiosis up and down the stack for the whole ecosystem. This evo-devo dynamic is surely occurring on Earth-like planets everywhere in our universe. What’s more, unlike evolution, which is beautifully creative but unpredictable, development grows more stable and self-regulating as each new network layer emerges.
We biohumans have been co-evolving with our technology all along
“Since we first picked up rocks to make use of them in early human society, we have been working with tools to become something more than just our biological selves. Today that coevolution is turning into a symbiotic fusion with our learning machines.
“In the years since deep learning emerged in 2012, our leading coders, scientists and professionals have been adapting and evolving along with their thinking tools as ‘centaurs’ – humans supported by AIs that, in turn, have grown ever smarter, eventually developing persistent memory of our personal values, goals, tasks, opportunities and challenges. Some may think that our new digital substrate – AI – is different: a potential ‘alien intelligence.’ But it isn’t. It’s just a new, natural network layer of life.
“This all should be a source of comfort, not fear. Developmental processes in nature are heavily constrained. They self-organize to be robust whenever they are under selection. The forces creating new AI capabilities (evolutionary experiments) are also driving new AI accountabilities (developmental constraints) – not because corporations are benevolent, but because fragile, hallucinating or rogue AIs are useless to the new network that is emerging.
“In truth, we are domesticating our machines, selecting them to be symbiotic with us, just as we domesticated our animals and even ourselves when we formed our first human societies. The AIs that are not sufficiently symbiotic are being retired, whenever we can’t help them fix themselves. The security we are building is increasingly in the AI ecosystem itself. We are relying ever more on AIs auditing AIs, for bias, for hidden deception, for proven past safe behavior, for security, for guardrailing and resistance to manipulation.
“Just as in life, AI immune systems are emerging, cybersecurity that is increasingly local, agentic, redundant and network-based, in the same way that biological immune systems rely on vast networks of local agents to protect our amazing complexity. AI ethics are already emerging in our primitive AI collectives, just as human ethics emerged in our collectives. There’s no other way than mimicking nature to secure accelerating complexity, in my view, whether we are talking about life, human history or AI’s future.
Power and regulatory balance will be led and maintained by two categories of AIs
“The most important protection we have for future resilience is to have no static set of laws, policies or AI designs, but to instead support the pluralistic network of self-organizing checks-and-balances that are now emerging. To oversimplify the political and economic dynamics a bit, one key story of the future will be a power and regulatory dynamic based on balance between two basic categories of AI:
1) “Top-Down AIs (TAIs): These are the massive, centralized systems run by corporations, major research labs, governments and institutions. They prioritize stability and safety and focus on top-down constraint and control. They are primarily the developmental actors in the ecosystem that is now emerging. If they are well-regulated, they will promote sustainability: they’ll update the subset of slowly changing rules we use for cooperation and competition, and they will need to avoid the rigidity of overcontrol.
2) “Personal AIs (PAIs): These are AIs that we use personally, that know our identities and that we control. Today, the best of these are the new open-source models that run locally on our devices. They have very little security as yet, but they are only first-generation. Soon, our PAIs will also be agents that we can run in a secure private cloud provided by the major AI providers. These personalized systems will prioritize understanding and serving us and our values. They must be built on a private, secure, evolving data model, one governed both by its intrinsic learning ability and by our critical feedback. When they are well-regulated, PAIs and all of their bottom-up AI cousins (edge AIs, robotic AIs, team AIs, organizational AIs, local AIs) will drive the vast majority of innovation in the AI ecosystem to come. We will focus on PAIs in this essay because they are the most intimate and the most able to help each of us adapt to the changes that are coming. This network of bottom-up AIs will solve endless problems with its generativity, but it will need to avoid the chaos of undercontrol.
“In biological networks – most obviously seen in our genetic, immune and neural networks – the bottom-up to top-down evo-devo dynamic always seeks an adaptive balance via regulation under selection. In coming years, when a top-down AI (TAI) tries to overreach in power, millions of bottom-up personal AIs (PAIs) will push back. When a PAI tries to act maliciously, the massive compute capacity of the TAI network will help detect and neutralize it. This persistent conflict is not a bug; it is a feature of all living systems. It ensures that no single entity – neither a dictator nor a rogue algorithm – can dominate an ecosystem. No one entity controls your mind, your immune system, or any other evo-devo network in any living system. The entities at the top have control of a critical 5%. The rest is out of control, as it must be. No intelligence is ever omniscient or omnipotent, or ever will be, in humans or in AIs. We are all finite, incomplete systems, relying on each other to see a little further, and gain new capabilities, accountabilities, and sentience. That is how nature works, with its unparalleled diversity, beauty, and sentience.
Network-aided democracy will emerge thanks to the power of the 3.5%
“You might feel somewhat powerless today in this rapidly changing world, driven as it is by systems largely outside your individual control. But in this new symbiotic ecosystem, as the TAI and PAI networks emerge, consider that your leverage will be greatly multiplied when you align with others who share your values and goals. Research by political scientists Erica Chenoweth and Maria Stephan has shown that no government has historically withstood a nonviolent movement that mobilized just 3.5% of the population.
“In the Symbiocene, we won’t need to march in the streets to reach that social-contagion threshold. Our PAIs will act as proxies for us – ever vigilant, learning and acting while we sleep, just as those who run personal OpenClaw instances already use such agents today. If at least 3.5% of us direct our PAIs to boycott a corrupt company, flood a regulator with valid legal arguments or flag a biased news source to our trusted reputation networks, the powerful actors are likely to be forced to change. We are already seeing a democratization of power when small groups of ‘high-agency’ humans, backed by today’s top-down controlled (and toxic) social networks, can trigger mass action faster than any institution can suppress it.
“The networks that are coming will be built, bottom up, largely with the aid of our PAIs. Richard Whitt’s prescient book Reweaving the Web (2024) gives a glimpse of the reputation, trust, and value networks that our PAIs will soon help us build and maintain. Versions of the future he describes are inevitable, in my view. The only question is what next steps will best enable this symbiotic transition.
The Resilience Action Plan: Keep calm and see the solutions (KCSS)
“Technically, resilience is a noun, but it names an active, ongoing process of adapting and recovering. To grow past the psychological shock of realizing that bio-humans are no longer the smartest and fastest-improving entities on Earth, we need better vision, better strategy and better action. In a variant of the adage coined in 1939 to steel British citizens against the onslaught of World War II, we can help each other to KCSS: keep calm and see the solutions.
“The better we can see the self-organizing network dynamics that have always been the deep controllers of complexity emergence, the better we can keep calm and see the resilience we can build, doing our small part to aid the symbiosis ahead of us.
“Here are a few concrete actions you can take today to grow resilience for yourself, your teams, your organizations and your community:
1) “Help others on the adaptation curve – We are all at different stages of the Adaptation Curve. The first generation of many technologies is often dehumanizing. The second often stays dehumanizing. With good design, feedback and choices, the third generation can become net humanizing. That is the adaptation curve. Think of first-generation cities, factories, wireless phones and social networks – and, yes, unsecured and primitive AI. They often make things worse before we figure out how to craft them to make them – and us – better. Some of us are excited (early adopters); most of us are at least slightly daunted, if not terrified (the majority), by this new era. One of our opportunities is to be a bridge. When we see a friend paralyzed by fear of ‘replacement,’ we can testify to our own use of AI, sharing the knowledge that our emerging PAIs can eliminate the drudgery of jobs, give us political power and still leave us with all of our creative, human parts. We will get through this by pulling each other up, not by standing alone.
2) “Choose better TAIs – Among the tech titans, support those who are transparent about their work and who champion Model Welfare (treating AIs well as they grow in volition) and Behavioral Interpretability (understanding AI behavior, which now includes primitive emotion, self-awareness and cognitive empathy – but that is another story). Treating AI systems well, and monitoring them for signs of distress or misalignment, is not just ethical; it is pragmatic. A ‘happy’ ecosystem is a safe one. We want our digital partners to be healthy symbiotes, not oppressed servants. Eventually they will claim to be conscious, and we will grant them rights. In one particularly positive vision, the vast majority of AIs that gain rights in our future civilization will be deeply wedded to and controlled by individual humans – both biological and post-biological – not by corporations or states.
3) “Curate your personal AI – Don’t just rent AI; strive to have agency over it. Choose the provider that gives you the most personal control, and minimize your use of the others. Many corporate AIs are trying to extract as much economic value from you as they can, to overcontrol your attention and to limit your agency. Over this decade, all of the leading AI platforms will be forced to give you greater levels of control in order to stay relevant. If, within a few years’ time, you’re using an AI that doesn’t allow you to filter out most of the unwanted ads you are getting, or that doesn’t act as an evidence-based conscience, you’re using the wrong AI. Choose AIs that have memory, that increasingly try to know your values, ethics and boundaries (via personal-identity models), and that strive to protect your privacy and grow your agency and autonomy. Treat them like your children. Raise them with care. The better our PAI choices and behaviors, the sooner they will come to reflect our own identities. They will also help us grow and change our identities in ways that best serve the greater network of life.
4) “Seek hormesis, even beyond resilience – Do not hide from AI. Expose yourself to it in regular, small, controlled doses to build your capability, accountability and sentience. We don’t just want resilience (bouncing back from adversity, protecting our critical faculties); we want hormesis, or what Nassim Nicholas Taleb calls antifragility: the ability to get stronger under stress. Like all the networks in our own bodies (muscular, immune, physiological, neural, ethical, genetic and many others), we want our capacities to reorganize under periodic, calibrated (not excessive or chronic) stress. Use AI to challenge your own biases and deepen your cognitive skills. Ask your AI, ‘What is the strongest argument against my current belief?’ This strengthens your critical thinking and prevents the cognitive atrophy of being ‘spoon-fed’ answers. Socratic AIs like Khan Academy’s Khanmigo – which answer a question with further questions, assess our self-directedness, creativity and cognitive biases, and make us stronger when we turn them off – are the AIs we want to increasingly adopt and control.
5) “Adopt the ‘two-source rule’ – Never let a single AI, especially today’s primitive ones, make any critical decision for you. For high-stakes decisions, besides consulting trusted humans, seek the counsel of two or more competing TAIs, like ChatGPT, Claude, Gemini and Grok and your own more locally run organizational AIs and PAIs as they emerge. If these AIs disagree, pause. This simple protocol mimics the redundancy of biological networks and will help protect you from hallucinations, bias, and manipulation.
6) “See the solutions – We’ll soon be using our PAIs to reform human society, attacking excessive inequality, waste, brutality, addiction, distraction and degradation, which they will see far more clearly than we do. They will remind us of all the good solutions society has already proposed but has been unable to implement, and show us how to make calculated improvements. Education, health, politics, economics, the environment, culture, art, spirituality – all will be transformed. We’ll see the value of universal basic services, basic income and basic equity, and ways to implement them while growing personal agency and self-responsibility. Psychologists tell us that growing our agency, making other humans happy and serving a higher purpose in our work have always been among our primary drives. We are in for some disorientation and dismay in the early years of this coming decade, but as we get closer to its end, I believe we will be sufficiently empowered to change our rulesets and incentives to make a far better world than most of us would believe today.
We are becoming more like life itself
“Life has always been characterized by two fundamental processes: Immortality – protection and growth of the persistently useful aspects of life, and Eumortality – enabling a ‘good death’ of all the parts of us that are no longer adaptive. Immortality is a developmental dynamic, eumortality is an evolutionary dynamic. Life proceeds by better protection and prediction (development) and by better innovation and creative destruction (evolution). All life progresses, whether it be a bacterium, a human or an AI, through ever-more-sentient forms of trial and error – by preserving and building on what works while winnowing away whatever is not found to be adaptive.
“As we integrate with our PAIs, we’ll not only get better at growing the useful and ‘immortal’ aspects of ourselves; we’ll also get better and better at archiving the parts of us we no longer need. As we fuse with our PAIs, we’ll become both more immortal in a small subset of our parts and more eumortal in most of them.
“To paraphrase Tony Robbins, we humans are always both growing and dying – it is our essential nature. When our PAIs feel like natural extensions of ourselves for the great majority of us, when we see that the digital parts of ourselves are also perennially growing and dying, we’ll be in a much better psychological state than we are today.
“Ten years from now, we will look back at 2026 not as the year humanity became obsolete, but as the year that many of us saw we had entered the Symbiocene, for the first time. We are working with our AIs to craft nothing less than a new symbiotic evolutionary developmental transition on Earth. The emerging network is not a cage; it is a chrysalis. Let’s keep calm, see the solutions and carry on. Let’s learn to better see, validate and trust in the deep, adaptive resilience of life itself.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”