The Line We Are Compelled to Confront
September 17, 2025
By Lee Rainie
Duke University Law School Professor James Boyle argues that “human beings have been philosophizing about what makes us human since philosophizing began” millennia ago. He believes we are at one of those hinge points where debates about the essence of personhood are rising as artificial intelligence systems become ever more advanced.
He began his keynote with the searing observation that we have been fighting about the qualities that make us us since Aristotle began to consider the subject:
“As a species, human beings have a pretty bad history of dealing with personhood issues. We have denied personhood to members of our own species—on the basis of sex, on the basis of race. … We look back on those moments with shame. [Still,] we haven’t stopped fighting about personhood. We still fight about when life begins and when life ends—and those fights are intense and heartfelt.”

In his speech, Boyle fleshed out the arguments of his 2024 MIT Press book, The Line: AI and the Future of Personhood. He believes that the rise of intelligent machines will redefine the moral boundaries we have drawn between subject and object, persons and things. And he thinks it is possible that eventually the line will be drawn to include AI on the side of the line where digital entities have legal rights.
It’s no longer true that ‘sentences imply sentience’
Thanks, again, to Aristotle, many have thought that the signature thing that separates us from other animals is language. But Boyle wonders what happens now when machines speak relatively fluently. “We’ve always believed that sentences imply sentience,” he said. “Now that view has changed. … Here we are. We have machines with language.”
So, then, does consciousness separate us from other species? Maybe not, Boyle argues, because AI systems do a decent job of mimicking us – that is, of meeting the “Turing Test.” The great computer scientist Alan Turing thought that the moment machines were thinking came when people could no longer tell the difference between a machine’s response to a question and a human response. “If you can’t tell, you have to say [the machine] is capable of thinking. Otherwise, the machine could reply: ‘If I’m not thinking, what are you doing—and how are you in a better position to judge me than I am to judge you?’”
Corporate souls and digital citizens
If extending rights to machines sounds absurd, Boyle reminded listeners that we already did it with corporations, which are artificial persons. “We give them rights, even constitutional rights, not because we empathize with them, but because it’s convenient.”
Yet, he cautioned about a future where, if we create digital entities that can sue and be sued, we may end up with immortal, amoral super-actors—“Citizens United on steroids.” Still, he noted a paradox to an audience full of academic officials and faculty: The same legal frameworks that protect universities and nonprofits might one day legitimize AIs as participants in commerce or research partnerships.
A mirror of the moment evolution was revealed to the world
Boyle compared the coming reckoning over AI rights to the shock Victorian society felt when Darwin revealed humankind’s evolutionary origins. “It was,” Boyle noted, “a crisis of conscience, and for many, a challenge to faith.”
The coming debate about the line between humans and other intelligent entities could either diminish us or deepen us, according to Boyle. He described three possible responses to the arguments over whether AI should be given rights. The first is denial: “Rights are for humans.” Case closed.
The second is pessimism: “Maybe we’re just like ChatGPT”—maybe we simply run algorithms, regurgitate what has been pumped into us, and instinctively respond to the stimuli on which we have been trained. “Maybe this has been sent to humble us,” he said of this option.
Boyle said he takes a third and hybrid view, “in which all human beings are inside our line as of right—regardless of coma or lack of brain activity—and, in addition, entities that plausibly have the qualities that we believe entitle us to moral status also have rights.” He concluded his remarks this way:
“Alongside those worries and concerns, my hope is that we capture a faint sense of wonder—that finally there are entities that plausibly could lead us to think that one day we might greet another intelligent entity, another entity that worries about where to draw the line on this planet. And in that moment, there is surely a sense of wonder.”
Coda – Boyle’s new tests for possible AI consciousness
Asked during the Q&A session about his own tests for whether AI has crossed the line to consciousness, Boyle said:
Three Indicators for Crossing the Line
[Philosopher] John Searle tried to kill the Turing test philosophically, as I try to explain in the book, and I hope you would agree that some of his arguments are unconvincing, though he’s right about ChatGPT. He’s not right about all possible AI. In the book, I offer three possibilities.
One possibility is that we would have an entity which plausibly could say, “No, I have learned in the same way a human child learns.” When I understand “chair,” it’s not because I have a definition of chair that’s in a dictionary, and I know how to manipulate those words so that the syntactical patterns appear to make sense to you. I’ve actually fallen over chairs. I’ve lifted chairs. I’ve been told to take a chair. I’ve made a chair with my robotic manipulators. The more we have that kind of embodied intelligence and interaction with the world, and not just the word, the harder it will be for us to say you [the AI systems] are qualitatively different.
The second thing that I think might lead us to give entities rights and to recognize them is if we had innovation that is genuinely genre-transcending – that is, truly above and beyond anything derived from prior human knowledge. It’s one thing to have ChatGPT dig into the scientific literature and come out with an insight as an emergent property. It’s another thing to revolutionize the entire field and to make us realize we’re looking at everything wrong. If AI started to do that, we would no longer think of them as mere chatbots, mere autocomplete mechanisms, because we would have to say they’re doing something that goes beyond that.
And the third thing, and this will strike the AI doomers as absolutely horrific, is the possibility of AI forming communities, of AI showing moral concern for its own kind and not just for us. Aristotle said that language is important not because of what it lets us do instrumentally but because of what it lets us be morally as a community. If we saw AI that seemed to have fellow feeling for other AI, I think that might make us change our minds.