Charles Ess
Charles Ess is professor emeritus of ethics at the University of Oslo, Norway. This essay is his written response, composed in January 2025, to the question, “How might expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’?” It was published in the 2025 research study “Being Human in 2035.”

“Human characteristics such as empathy, moral judgment, decision-making and problem-solving skills and the capacity to learn are virtues that are utterly central to human autonomy and flourishing. A ‘virtue’ is a given capacity or ability that requires cultivation and practice in order to be performed or exercised well. Virtues are skills and capacities essential to centrally human endeavors such as singing, playing a musical instrument, learning a craft or skill – anything from knitting to driving a car to diagnosing a possible illness.

“As we cultivate and practice virtues, we find that they not only open new possibilities for us but also make us much better equipped to explore ourselves and our world. Doing so also brings an invaluable sense of achieving a kind of mastery or ‘leveling up’ and thereby a deep sense of contentment, or eudaimonia.

“The virtue of phronēsis is the practical, context-sensitive capacity for self-correcting judgment and a resulting practical wisdom. The body of knowledge that builds up from exercising such judgment over time is manifestly central to eudaimonia and thereby to good lives of flourishing. Invoking virtue ethics (VE) is not parochial or ethnocentric: rather, VE is as close to a humanly universal ethical framework as we have. It focuses precisely on what would seem a universally shared human concern: What must I do to be content and flourish? It thus stands as a primary, central, millennia-old approach to how human beings may pursue good lives of meaning. In particular, the Enlightenment established the understanding that a series of virtues – most especially phronēsis, but certainly also care, empathy, patience, perseverance and courage, among others – are critical specifically to sustaining and expanding human autonomy.

“Many of the virtues required to pursue human community, flourishing and contentment – e.g., patience, perseverance, care, courage and, most of all, ethical judgment – are likewise essential as civic virtues, i.e., the capacities citizens need in order to participate in the processes that sustain and enhance democratic societies.

“It is heartening that virtue ethics and a complementary ethics of care have become more and more central to the ethics and philosophy of technology over the past 20-plus years. However, a range of more recent developments has worked to counter their influence. My pessimism regarding what may come by 2035 arises from the recent and likely future developments of AI, machine learning, LLMs, and other (quasi-) autonomous systems. Such systems are fundamentally undermining the opportunities and affordances needed to acquire and practice valued human virtues.

“This will happen in two ways: first, patterns of deskilling, i.e., the loss of skills, capacities, and virtues essential to human flourishing and robust democratic societies, and then, second, patterns of no-skilling, the elimination of the opportunities and environments required for acquiring such skills and virtues in the first place.

“The risks and threats of such deskilling have been prominent in the ethics and philosophy of technology, as well as in political philosophy, for several decades now. A key text for our purposes is Neil Postman’s ‘Amusing Ourselves to Death: Public Discourse in the Age of Show Business’ (1985). As Postman argued, our increasing love of and immersion in cultures of entertainment and spectacle distracts us from the hard work of pursuing skills and abilities central to civic/civil discourse and fruitful political engagement.

“As Neil Postman observed, we are right to worry about an Orwellian dystopia of perfect state surveillance – a worry that has only become more warranted over the past 20 years. But the lessons of Aldous Huxley’s ‘Brave New World’ are even more prescient and chilling. My paraphrase is, ‘We fall in love with the technologies of our enslavement,’ perhaps most perfectly exemplified in recent days by the major social media platforms that have abandoned all efforts to curate their content, thereby turning them still further into perfect propaganda channels for the often openly anti-democratic convictions of their customers or their ultra-wealthy owners.

“The more we spend time amusing ourselves in these ways, the less we pursue the fostering of those capacities and virtues essential to human autonomy, flourishing and civil/democratic societies. Indeed, at the extreme in ‘Brave New World’ we no longer suffer from being unfree because we have simply forgotten – or never learned in the first place – what pursuing human autonomy was about.

“These dystopias have now been unfolding for some decades. Fifteen years ago, in 2010, research by Shannon Vallor, now of the Edinburgh Futures Institute, showed how the design and affordances of social media threatened humans’ levels of patience, perseverance and empathy – three virtues essential to human face-to-face communication, to long-term relationships and commitments and to parenting. It has become painfully clear that these and related skills and abilities required for social interaction and engagement have been further diminished.

“There is every reason to believe that all of this will only get dramatically worse thanks to the ongoing development and expansion of autonomous systems. Presuming that the current AI bubble does not burst in the coming year or two (a very serious consideration), we will rely more and more on AI systems to take the place of human beings – as a first example, as judges. I mean this both in the formal sense of judges who evaluate and make decisions in a court of law and, more broadly, in civil society – e.g., everywhere from sports officials (what Americans call referees are termed judges in other languages) to civil servants who must judge who does and who does not qualify for a given social benefit (healthcare, education, compensation in the case of injury or illness, etc.).

“This process of replacing human judges with AI/ML systems has been underway for some time – with now-well-documented catastrophes and failures, often leading to needless human suffering (e.g., the COMPAS system, designed to make judgments as to who would be the best candidates for parole). A very long tradition of critical work within computer science and related fields also makes it quite clear that these systems, at least as currently designed and implemented, cannot fully instantiate or replicate human phronetic judgment (see ‘Awkward Intelligence’ by Katharina Zweig). Our attempts to use AI systems in place of our own judgment will manifestly lead to our deskilling – the loss, however slowly or quickly, of this most central virtue.

“The same risks are now being played out in other ways – e.g., students are using ChatGPT to give them summaries of articles and books and then write their essays for them, instead of fostering their own abilities of interpretation (also a form of judgment), critical thinking and the various additional skills required for good writing. Like Kierkegaard’s schoolboys who think they cheat their master by copying out the answers from the back of the book, the more we offload these capacities to these systems, the more we thereby undermine our own skills and abilities – precisely those named here: the capacity to learn, innovative thinking and creativity, decision-making and problem-solving abilities, and the capacity and willingness to think deeply about complex concepts.

“The roots of these developments in market capitalism have been described in various terms, including ‘platform imperialism’ and ‘surveillance capitalism.’ Various encouragements of deskilling are now found in the cyberverse, including a movement titled the ‘Dark Enlightenment,’ which seems explicitly opposed to the defining values of the Enlightenment and to the acquisition and fostering of the common virtues and capacities of ‘the many’ required for human autonomy and a robust democracy. Some aim to replace democracy and social welfare states with a ‘techno-monarchy’ and/or a kind of ‘techno-feudalism’ run and administered by ‘the few,’ i.e., the techno-billionaires.

“Should we indeed find ourselves living as the equivalent of medieval serfs in a newly established techno-monarchy – deprived of democratic freedoms and rights and of a public education still oriented toward fostering human autonomy, phronetic judgment and the civic virtues – then the next generation will be a generation of no-skilling as far as these and the other essential virtues are concerned. To be sure, the select few will retain access to these tools to enhance their creativity, problem-solving and perhaps their own self-development in quasi-humanistic ways. But such human augmentation via these and related technologies – what has also been described as the ‘liberation tech’ thread of using technology in service of Enlightenment and emancipation since the early 1800s – will be forbidden for the rest.

“I very much hope that I am mistaken. And to be sure, there are encouraging signs of light and resistance. Among others: I am by no means the first to suggest that a ‘New Enlightenment’ is desperately needed to restore – and revise, in light of what we have learned in the intervening two centuries – these democratic norms, virtues and forms of liberal education. And perhaps all of this will be reinforced by an emerging backlash against the worst abuses and consequences of the new regime. We can hope. But as some of the world’s most prominent authorities have long warned, on multiple grounds beyond virtue ethics (e.g., Stephen Hawking, as a start), it is currently very difficult to see how these darkest possibilities may be prevented in the long run.”

