Tracey Follows
Tracey Follows is founder and CEO of UK consultancies Futuremade and Me:chine and author of the book “The Future of You.” This essay is her written response in January 2026 to the question, “How might individuals and societies embrace, resist and/or struggle with transformative change in the AI Age? What cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” It was published, alongside more than 200 other essay responses, in the 2026 research study “Building a Human Resilience Infrastructure for the AI Age.”

“Artificial intelligence systems are no longer peripheral instruments that humans pick up and put down at will. They now operate as continuous, ambient infrastructures that shape how decisions are made, how risks are assessed, how opportunities are distributed and how people are recognised by the systems that govern modern life. In finance, welfare, policing, healthcare, education, employment and border control, AI increasingly functions as an anticipatory layer that structures what is possible, permitted or probable before a person even acts. AI is therefore not best understood as a tool. It is better understood as an environment: something we live inside, move through and are shaped by, often without noticing.

“This distinction matters. Tools can be evaluated in isolation. Environments cannot. They alter behaviour, perception, incentives and identity simply by being present. As AI becomes embedded into social, economic and political systems, the primary question is no longer how well it performs, but how it reshapes the conditions under which human agency operates.

“In my work on identity and technological systems, I have developed the distinction between the machinable and the unmachinable self to describe this shift. The ‘machinable’ consists of everything about a person that can be rendered legible to systems: data, preferences, behavioural patterns, credentials, biometric signals, productivity metrics, risk scores. These elements are increasingly required for participation in society. Identity itself has become infrastructural. Without being machine-readable, individuals cannot access finance, services, mobility or even civic rights.

“The ‘unmachinable,’ by contrast, consists of those human capacities that cannot be fully captured or automated: judgment, meaning-making, ethical reasoning, imagination, intuition, timing and the ability to change oneself in response to context. These are not sentimental attributes. They are the basis of agency. As systems become more predictive and automated, the unmachinable becomes the primary site of human resilience.

“The synthesis of these two dimensions is what I call the ‘Me:chine’: a model of the self that acknowledges that modern humans are simultaneously machinable/unmachinable, i.e., system-legible and irreducibly interior. We are not either human- or machine-mediated. We are both. Me:chine is not a technological artefact but a cultural and psychological framework for surviving inside machine-driven environments without becoming reducible to them: Me first, only me – then machine.

“This framework helps explain how individuals and societies may embrace, resist and struggle with AI-driven change. Many people will embrace AI because it offers speed, convenience and efficiency. Systems that predict needs, automate decisions and remove friction feel helpful in the short term. Others will resist AI because they experience it as surveillance, loss of autonomy or moral overreach. Most people will live in a state of ambivalence, benefiting from automation while sensing that something fundamental about agency is being eroded.

“The reason this tension is so difficult to resolve is that AI systems do not simply act on the world. They act on people’s representations of themselves. Credit scores, risk profiles, behavioural predictions and algorithmic classifications become feedback loops that shape how individuals are treated and how they come to see their own possibilities. This is why resilience must include cognitive, emotional, social and ethical capacities that protect the unmachinable dimensions of identity.

“Cognitively, resilience requires metacognition: the ability to reflect on one’s own thinking. AI systems generate answers, recommendations and narratives at scale but they do not provide understanding. Without the ability to question outputs, recognise uncertainty and evaluate assumptions, people risk outsourcing not just tasks but judgment. In a machine-mediated environment the ability to think about how one is thinking becomes a form of self-defence.

“Emotionally, resilience requires self-regulation in the face of algorithmic influence. AI systems increasingly operate through personalised persuasion, attention engineering and affective computing. They learn what triggers fear, desire, outrage or compliance. In such conditions, emotional literacy is not merely therapeutic; it is political. The capacity to remain grounded, tolerate ambiguity and resist manipulation determines whether individuals act from their own values or from system-induced impulses.

“Socially, resilience depends on the preservation of shared meaning. Algorithmic personalisation fragments reality into customised information streams, creating what can be described as ontological enclosures. When people no longer inhabit a common informational world, collective decision-making becomes fragile. Democratic societies require spaces for disagreement, deliberation and mutual interpretation that are not governed by engagement-optimising systems.

“Ethically, resilience requires a shift from case-by-case evaluation to systemic awareness. The question is not simply whether a single algorithm is biased, but how entire socio-technical architectures distribute power, visibility and vulnerability over time. Who becomes increasingly legible and governable? Who becomes invisible or excluded? Ethical capacity in an AI environment depends on the ability to see these structural effects rather than being distracted by surface-level controversies.

“Practical resilience, therefore, involves both institutional and individual action.

Organisations must treat human adaptability and discernment as assets rather than inefficiencies.

Governance must protect contestability and human authority. People must be able to understand, challenge and override automated decisions that affect their lives. Digital identity systems must be designed to serve and protect individuals rather than merely rendering them more controllable.

Education systems must prioritise perception, judgment and ethical reasoning alongside technical skills.

Individuals need practices that preserve interior sovereignty: reflection, attention management and identity formation that are not outsourced to platforms.

“New vulnerabilities will emerge as AI becomes more predictive and immersive. People may experience fatalism as algorithms appear to pre-empt their futures. Trust in evidence may erode under synthetic media. Behaviour may be shaped by invisible optimisation loops. Coping strategies must therefore include discernment, epistemic humility and the cultivation of a coherent sense of self across digital contexts.

“This is the core of the Me:chine doctrine: in an AI-saturated environment, resilience is not achieved by rejecting technology, nor by surrendering to it, but by sustaining the unmachinable dimensions of human identity within machinic systems. This is now the entire focus of my futures work.”

