
“By 2035, the essential nature of human experience will be transformed not through the transcendence of our biology, but through an unprecedented integration with synthetic systems that participate in creating meaning and understanding. This transformation – what my institute refers to as The Artificiality – progresses through distinct phases: from information to computation, from computation to agency, from agency to intelligence, and ultimately to a new form of distributed consciousness that challenges our traditional notions of human experience and autonomy.
“The evolution of technology from computational tools to cognitive partners marks a significant shift in human-machine relations. Where early digital systems operated through explicit instruction – precise commands that yielded predictable results – modern AI systems operate through inference of intent, learning to anticipate and act upon our needs in ways that transcend direct commands. This transition fundamentally reshapes core human behaviors, from problem-solving to creativity, as our cognitive processes extend beyond biological boundaries to incorporate machine interpretation and understanding.
“This partnership manifests most prominently in what we might call the intimacy economy – a transformation of social and economic life where we trade deep personal context with AI systems in exchange for enhanced capabilities. The effectiveness of these systems depends on knowing us intimately, creating an unprecedented dynamic where trust becomes the foundational metric of human-AI interaction.
“This intimacy carries fundamental risks. Just as the attention economy fractured our focus into tradeable commodities, the intimacy economy threatens to mine and commodify our most personal selves. The promise of personalized support and enhanced decision-making must be weighed against the perils of surveillance capitalism, where intimate understanding of us becomes another extractable resource.

“The emergence of the ‘knowledge-ome’ – an ecosystem where human and machine intelligence coexist and co-evolve – transforms not just how we access information, but how we create understanding itself. AI systems reveal patterns and possibilities beyond human perception, expanding our collective intelligence while potentially diminishing our role in meaning-making. This capability forces us to confront a paradox: as machines enhance our ability to understand complex systems, we risk losing touch with the human-scale understanding that gives knowledge its context and value.
“The datafication of experience presents particular challenges to human agency and collective action. As decision-making distributes across human-AI networks, we confront not just practical but phenomenological questions about the nature of human experience itself. Our traditional mechanisms of judgment and intuition – evolved for embodied, contextual understanding – may fail when confronting machine-scale complexity. This creates a core tension between lived experience and algorithmic interpretation. The commodification of personal experience by technology companies threatens to reduce human lives to predictable patterns, mining our intimacy for profit rather than serving human flourishing. We risk eliminating the unplanned spaces where humans traditionally come together to build shared visions and tackle collective challenges.
“Yet this transformation need not culminate in extraction and diminishment. We might instead envision AI systems as true ‘minds for our minds’ – not in the surveillant sense of the intimacy economy, but as genuine partners in human flourishing. This vision transcends mere technological capability, suggesting a philosophical reimagining of human-machine relationships. Where the intimacy economy seeks to mine our personal context for profit, minds for our minds would operate in service of human potential, knowing when to step back and create space for authentic human agency.
“This distinction is crucial. The intimacy economy represents a continuation of extractive logic, where human experience becomes another resource to be optimized and commodified. In contrast, minds for our minds offers a philosophical framework for designing systems that genuinely amplify human judgment and collective intelligence. Such systems would not merely predict or optimize but would participate in expanding the horizons of human possibility while preserving the essential uncertainty that makes human experience meaningful.
“Success in 2035 thus depends not just on technological sophistication but on our ability to shift from extractive models toward this more nuanced vision of human-machine partnership. This requires rejecting the false promise of perfect prediction in favor of systems that enhance human agency while preserving the irreducible complexity of human experience.
“The challenge ahead lies not in preventing the integration of synthetic and organic intelligence, but in ensuring this integration enhances rather than diminishes our essential human qualities. This requires sustained attention to three critical domains:
- “Preserving Meaningful Agency: As AI systems become more capable of inferring and acting on our intent, we must ensure they enhance rather than replace human judgment. This means designing systems that expand our capacity for choice while maintaining our ability to shape the direction of our lives.
- “Building Authentic Trust: The intimacy surface between humans and AI must be shaped by earned trust rather than extracted compliance. This requires systems that respect the boundaries of human privacy and autonomy, expanding or contracting based on demonstrated trustworthiness.
- “Maintaining Creative Uncertainty: We must preserve spaces for unpredictable, creative, and distinctly human ways of being in the world, resisting the urge to optimize every aspect of experience through algorithmic prediction.
“By 2035, being human will involve navigating a reality that is increasingly fluid and co-created through our interactions with synthetic intelligence. This need not mean abandoning our humanity but rather adapting to preserve what makes us uniquely human – our capacity for meaning-making, empathy and collective action – while embracing new forms of cognitive partnership that expand human potential.
“The tension between enhancement and diminishment of human experience will not be resolved through technological capability alone but through our collective choices about how to design and deploy these systems. Success requires moving beyond the extractive logic of current technology platforms toward models that preserve and amplify human judgment, creativity and collective intelligence.
“In this transformed landscape, what we consider ‘core human traits and behaviors’ will evolve, not through the abandonment of our humanity but through its conscious adaptation to new forms of cognitive partnership. The question is not whether AI will change what it means to be human – it already has – but whether we can guide this change to enhance rather than diminish our essential human qualities. The answer lies not in resisting the integration of synthetic and organic intelligence but in ensuring this integration serves human flourishing in all its unpredictable, creative and collective forms.”
This essay was written in January 2025 in reply to the question: Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’? This and nearly 200 additional essay responses are included in the 2025 report “Being Human in 2035.”