“My biggest concern at the moment is that we are trying to rein in AI before clearly defining its boundaries.

“In the spring of 2023 the White House put out an RFC on AI Accountability, and today mass-tort lawyers are suing social media companies over how their algorithms are harming the mental health of young people. But wait: what exactly is AI?

“For instance, do rule-based recommendation systems, AI-informed design features or other system artifacts constitute AI? How are conventional systems different from ones based on AI? While these questions are answerable, we have not yet reached a consensus. And we cannot begin to regulate something we have yet to clearly define.

“Another concern is more interpersonal – we have reached the level of the ultimate Turing test, where generative AI, deepfakes and virtual companions are blurring the lines between fantasy and social reality. When people are opting to partner with AI rather than with other humans, and we are asking our children to use conversational agents to improve their mental health, I have to wonder if we are dangerously blurring the line around what it means to be human and to desire human (or human-like) connection.

“It would be preferable for AI to take over mundane and menial daily tasks, or to automate clear-cut processes that benefit from efficiency over intuition. Instead, AI is being integrated into all aspects of our daily lives in a rather seamless and invisible manner.

“Yet another concern of mine: as a qualitative researcher in a computer science department, I try to explain that struggle is an essential part of the human thought process and of learning. I tell my students that qualitative data coding is hard because YOU have to be the algorithm. You have to think for yourself and, often by brute force, come up with an answer.

“My concern is that when we embrace the application of AI agents in learning processes that make such work easier, we are taking away important scaffolding in the process of critical thought.

“More and more I see people blindly responding based on rule-based policies even when they make no damn sense. We need to allow room for human discretion and struggle, as it is an important part of being human.”