Anil Seth is director of the Centre for Consciousness Science and professor of cognitive and computational neuroscience at the University of Sussex, UK. This essay is his written response, in January 2025, to the question, “Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’?” It was published, along with nearly 200 other essay responses, in the 2025 report “Being Human in 2035.”

“AI large language models [LLMs] are not actually intelligences; they are information-retrieval tools. As such they are astonishing but also fundamentally limited and even flawed. Basically, the hallucinations generated by LLMs are never going away. If you think that buggy search engines fundamentally change humanity, well, you have a weird notion of ‘fundamental.’

“Still, it is indisputable that these systems already exceed human cognition in certain domains and will keep getting better. There will be disruption that makes humans redundant in some ways. It will transform a lot, including much of human labor.

“The deeper and more urgent question is: How do we retain a sense of human dignity in this situation? AI can become human-like on the inside as well as on the outside. When AI reaches that level of capability, ethical issues become paramount.

“I have written in Nautilus about this. Being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is rooted in the fundamental biological drive within living organisms to keep on living. The distinction between consciousness and intelligence is important because many in and around the AI community assume that consciousness is just a function of intelligence: that as machines become smarter, there will come a point at which they also become aware – at which the inner lights of consciousness come on for them.

“There are two main reasons why creating artificial ‘consciousness,’ whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With ‘conscious’ AI, things get a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

“The second reason is even more disquieting: The dawn of ‘conscious’ machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

“Existential concerns aside, there are more immediate dangers to deal with as AI becomes more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism – putting ourselves at the center of everything – and anthropomorphism – projecting humanlike qualities onto things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

“Future language models won’t be so easy to catch out. They may give us the seamless and impenetrable impression of understanding and knowing things, regardless of whether they do. As this happens, we may also become unable to avoid attributing consciousness to them, too, suckered in by our anthropomorphic bias and our inbuilt inclination to associate intelligence with awareness.

“Systems like this will pass the so-called Garland test, an idea which has passed into philosophy from Alex Garland’s perspicuous and beautiful film ‘Ex Machina.’ This test reframes the classic Turing test – usually considered a test of machine intelligence – as a test of what it would take for a human to feel that a machine is conscious, even given the knowledge that it is a machine. AI systems that pass the Garland test will subject us to a kind of cognitive illusion, much like simple visual illusions in which we cannot help seeing things in a particular way, even though we know the reality is different.

“This will land society in dangerous new territory. Our ethical attitudes will become contorted as well. When we feel that something is conscious – and conscious like us – we will come to care about it. We might value its supposed well-being above other actually conscious creatures such as non-human animals. Or perhaps the opposite will happen. We may learn to treat these systems as lacking consciousness, even though we still feel they are conscious. Then we might end up treating them like slaves – inuring ourselves to the perceived suffering of others. Scenarios like these have been best explored in science-fiction series such as ‘Westworld,’ where things don’t turn out very well for anyone.

“In short, trouble is on the way whether emerging AI merely seems conscious or actually is conscious. We need to think carefully about both possibilities, while being careful not to conflate them.

“Accelerated research is needed in social sciences and the humanities to clarify the implications of machines that merely seem conscious. And AI research should continue, too, both to aid in our attempts to understand biological consciousness and to create socially positive AI. We need to walk the line between benefiting from the many functions that consciousness offers while avoiding the pitfalls. Perhaps future AI systems could be more like oracles, as the AI expert Yoshua Bengio has suggested: systems that help us understand the world and answer our questions as truthfully as possible, without having goals – or selves – of their own.”
