These experts expected in 2025 that by 2035 there would be…
44% – More negative change than positive change
29% – More positive change than negative change
16% – Fairly equal positive and negative change
8% – Little to no change

The views expressed here echo findings from the Imagining the Digital Future Center’s past reports on the “Future of Human Agency” and “Artificial Intelligence and the Future of Humans.” A plurality of these experts believes AI tools create a paradox of control, convincing individuals that they are enhancing their lives while shaping their decisions to suit others’ needs behind the scenes. Most of these experts expect this will weaken humans’ cognitive and strategic abilities, leading to less self-initiated problem-solving and diminished moral judgment. They also note that as AI systems are further embedded in key systems of business, law and government, those systems are likely to remove humans from critical decision processes altogether. Following is a selection of related quotes extracted from these experts’ longer essays:

“The deepening partnership between humans and artificial intelligence through 2035 reveals a subtle but profound paradox of control. As we embrace AI agents and assistants that promise to enhance our capabilities, we encounter a seductive illusion of mastery – the fantasy that we’re commanding perfect digital servants while unknowingly ceding unprecedented control over our choices and relationships to the corporate – and in some cases government – entities that shape and control these tools. … By 2035, they will become the primary lens through which we perceive and interact with the world. Unlike previous technological mediators, these systems won’t simply connect us to others; they’ll actively shape how we think, decide and relate. The risk isn’t just to individual agency but to the very fabric of human society, as authentic connections become increasingly filtered through corporate-controlled interfaces. … The stakes transcend mere efficiency or convenience. They touch on our fundamental capacity to maintain meaningful control over our personal and societal development.” – Lior Zalmanson, professor at Tel Aviv University – expertise in algorithmic culture and the digital economy

“Outsourcing any human analytical process will, over time, lead to an attrition of any particular skill set. This is worrying if humans’ well-being is still tied to their ability to make independently derived, informed decisions. This is one level at which ubiquitous AI as everyday mundane helpers or ‘micro agents’ will influence humans by 2035. Humans’ ability to process information in an unaided way will suffer because they will no longer be constantly practicing that skill. As the use of AI becomes more routine this will have deeper impact.” – Annette Markham, chair and professor of media literacy and public engagement at Utrecht University, the Netherlands

“In thinking about the consequences of the advent of true AI, the television series ‘Star Trek’ is worth reconsidering. ‘Star Trek’ described an enemy alien race known as the Borg that extended its power by forcibly transforming individual beings into drones by surgically augmenting them with cybernetic components. The Borg’s rallying cry was ‘resistance is futile, you will be assimilated.’ Despite warnings by computer scientists going at least as far back as Joseph Weizenbaum in ‘Computer Power and Human Reason’ in 1976 that computers could be used to extend but should never replace humans, there has not been enough consideration given to our relationship to the machines we are creating.” – John Markoff, author of “Machines of Loving Grace: The Quest for Common Ground Between Humans and Machines”

“The perilous implications of the datafication of every aspect of our lives, our interactions, our innermost thoughts and biometrics as well as the world around us (through ubiquitous sensors) will be irrefutable by the end of the decade. The continuous stream of intimate human data AI corporations collect – from our biometrics and behavior to our social connections and cognitive patterns – has created a dangerous feedback loop that makes it seem impossible to exert control and autonomy. As their AI systems become more sophisticated at predicting and influencing human behavior, people become more dependent on their services, generating even more valuable training data and value for the AI agents, tools, applications and products that will pervade every aspect of our daily lives by 2035. … In the best future, privacy and cognitive liberty are protected as fundamental rights, AI corporations are subject to rigorous oversight and their systems are directed toward solving humanity’s greatest challenges (in collaboration with the communities experiencing those challenges) rather than taking over core human capacities.” – Courtney C. Radsch, director of the Center for Journalism & Liberty at the Open Markets Institute

“Maintaining humanity while extending consciousness requires ownership of that which simulates the individual’s being in the world. The world’s largest tech companies are fixated on AI as a commercial product. In focusing their attention on AI’s essence as a consumer artifact, their development of agency in AI risks making agency serve corporate ends and therefore become parasitic and dehumanizing. … When we can act collaboratively with a trusted AI simulation of our self, we will be experiencing extended cognition with joint responsibility for collective action. Agency without responsibility is malignant. We prompt and inform our AI and our AI prompts and informs us. Having the individual, not corporations, in control of action is the key to remaining human as extended consciousness reframes our realities.” – Garth Graham, a global telecommunications expert and consultant based in Canada

“In my opinion, smartphone technology has already transformed humanity. We don’t need to wait 10 more years to understand that things are not going well for us. By becoming addicted to our phones and the entertainment/distraction that they provide, we have already changed our behavior and might already be in the process of losing many of our core human traits. AI might simply accelerate our descent into the dystopian abyss, because we are already losing or surrendering our agency to make decisions for ourselves.” – Eni Mustafaraj, associate professor of computer science at Wellesley College
