“The best consequence of AI – which has existed in the scientific disciplines for many years but has now migrated to the humanities and the general consumer space – is that it should inspire a deeper discussion of what it means to be human.
“The great works of philosophy, sociology, ethnography, psychology and related fields need to be brought to the forefront of the AI discussion. If we continue to label AGI as so-called near-human-level intelligence, we will have failed, and we will deserve an existential threat that knocks us off the pedestal of narrow critical thinking. If we decide that AGI is human while neglecting the spheres of emotion, cognitive leaps in creativity, belief and more, we will have failed on a universal level.
“By 2040, the world should have engaged in a rigorous discussion and developed a framework for AI guardrails and principles.
- What does ‘first, do no harm’ mean in the new context?
- What is a soul?
- What is cognition?
- What is identity?
- What are perspective and point of view?
- Can we be truly inclusive and avoid othering, or will past content reinforce ills of the past and limit human advancement?
- How do we avoid global homogenization of thought? Will English-language and Western or hemispheric biases dilute access to knowledge?
- What is emotion? How does emotional intelligence play out in AI’s evolution?
- What is hurtful? Can empathy be advanced beyond the performative?
- What is the human contribution to insight, creativity, innovation, invention, filtering, etc.?
- And many more.
“Should we institute global guardrails, or are professional-sector principles enough? What are the legal forces and sanctions that could work here? (Think of how poorly we’ve handled spam, viruses and disinformation, and how that failure could serve as a metaphor for evil AGI.) Do we risk regulating too early, when the innovation is just approaching its toddler phase, and how will we handle its adolescent phase?
“The real rubric – by 2040 – is whether AGI will move beyond transformations informed by past training and evolve to produce results shaped by humanlike emotional intelligence, and whether it will adopt advances such as future-informed predictive learning to develop insights, make transformative cognitive leaps in decision-making and creativity, or guide social constructs that serve the social good.
“Rubrics and tests will need to be developed and informed by social and humanities fields that have previously not been widely consulted or well understood by leaders in the scientific and digital programming space. It could be that the finish line will be artificial general intelligence and anything beyond that is a performative chimera that fools some of the people some of the time.”
This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”