“While I hesitate to claim that all of the ideas I share will be fully realized by 2040, I think at the very least we’ll see significant progress on these fronts. I expect AI tools to take over most menial tech and tech-adjacent tasks by 2040. This will widen the divide between unskilled and skilled/creative labor, as well as their respective labor markets (especially unskilled outsourced labor markets). Next, as AI becomes increasingly competent at what we may now view as ‘human-only’ tasks (creative, high-skilled, etc.), a significant portion of jobs will evolve from what we know today into human-in-the-loop AI monitoring and, later, into human-on-the-loop monitoring.
“This transition will create a labor market contraction in some areas, while opening up a host of new careers based on the usage, creation, training and monitoring of AI tools. With this in mind, I remain optimistic that the medium-term growth of the tech industry labor market will continue into 2040 at a rate similar to what we’re accustomed to presently, but with many laborers forced to retrain and/or incorporate AI into their workflows in order to maintain relevance.
“We’re in the Wild West days of AI. As things advance there will be much more significant regulation and scrutiny of consumer-facing AI models and their training data, from both government and private platform owners. We already see AI work products banned, and AI-usage disclosure policies are beginning to be required on platforms like Steam and YouTube. Standardization of AI usage rights and licensing is likely to be driven by these platform owners, with models required to disclose their training-data sources and the usage rights affected. These policies will pave the way for government regulation, though it is likely to lag behind by five to 10 years.
“Most publicly available models are likely to include flags that can be used by analysts to identify any work product that is AI-created in order to combat the spread of AI plagiarism, false information and so on. This may start as a voluntary practice by model owners in response to public backlash and eventually become a requirement for use. These types of restrictions, as well as existing prompt content restrictions, will further fuel the growth of unregulated open-source AI models, with individuals able to generate content on their home computers – as we already see happening with the explosive growth of the community around Stable Diffusion.
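As a rough, purely illustrative sketch of how such a provenance flag might travel with a work product – the field names and schema below are hypothetical, not an existing standard:

```python
import json

# Hypothetical provenance metadata a model or platform might attach to a work
# product; the field names are illustrative, not an established specification.
metadata = {
    "title": "concept-art-042",
    "provenance": {
        "ai_generated": True,
        "model": "example-image-model-v3",  # assumed model identifier
    },
}

def is_ai_generated(meta: dict) -> bool:
    """Return True if the attached provenance flag marks the work as AI-created."""
    return bool(meta.get("provenance", {}).get("ai_generated", False))

if __name__ == "__main__":
    payload = json.dumps(metadata)              # as it might accompany an upload
    print(is_ai_generated(json.loads(payload)))  # -> True
```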
“By 2040 we can also expect to see more-significant application of AI in military technology. The spending and intent for incorporation of AI into military systems are already present today. The products of this will be realized over the next two decades, primarily in command, control and communication systems and on autonomous reconnaissance and weapons platforms. AI is being used for data synthesis, analysis and predictive monitoring as the pool of data and the array of sensors feeding it grow in complexity and number.
“The high impact of cheap drone platforms on the battlefield in Ukraine and the equally high impact of electronic warfare to break communication between drones and their operators creates a clear use case for autonomy. AI fighter wingmen with a human-in-the-loop have been the north star of the U.S. next-generation fighter project for some time now and will be further realized over the next few decades.
“Frighteningly, as the speed of warfare increases, militaries will be forced to incorporate human-on-the-loop or completely autonomous systems in order to compete – and anyone who does not do so will be at a decided disadvantage.
“In regard to development of the metaverse, we can expect AI to have great impact in the areas of generative content, avatars and user expression, human/computer interaction and XR. I view ‘the metaverse’ as the destination platform at the end of our undeniable current path of physical and digital convergence, as technology plays an ever-larger role in our daily lives to connect and empower human interaction.
“The true ideal of a metaverse will finally be realized when we see interoperability between many varied platforms, using a shared standard of data communication and user data persistence. Real-time rendering engines will drive this content and serve as the toolset for building and publishing content. While I don’t believe that the experiences/platforms we see on the market today are really indicative of true metaverse products, they do offer a glimpse of the likely future.
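A minimal sketch of what a shared, platform-neutral user-data record of this kind might look like – the schema, field names and PortableAvatar type are hypothetical illustrations, not an existing standard:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical interchange record that two metaverse platforms could both read,
# so a user's identity and avatar persist across the platform boundary.
@dataclass
class PortableAvatar:
    user_id: str
    display_name: str
    mesh_url: str      # link to a 3D asset in a commonly supported format
    height_m: float

def export_avatar(avatar: PortableAvatar) -> str:
    """Serialize the avatar so another platform can import it unchanged."""
    return json.dumps(asdict(avatar))

def import_avatar(payload: str) -> PortableAvatar:
    """Rebuild the avatar from the shared interchange format."""
    return PortableAvatar(**json.loads(payload))

if __name__ == "__main__":
    original = PortableAvatar("user-123", "Ada", "https://example.com/ada.glb", 1.7)
    roundtrip = import_avatar(export_avatar(original))
    print(roundtrip == original)  # -> True: the record survives the platform hop
```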
“Advances in higher-level XR technology will be the main driver of metaverse adoption. Generative AI will be extremely influential for interactive content creation, driving one of the most impactful and immediately apparent use cases for metaverse experiences by 2040. Creating a persistent 3D world and enough hand-created content that users can consistently return to and engage with the platform for hundreds of hours is an extremely expensive and time-consuming process – analogous to developing and supporting massively multiplayer online games like World of Warcraft, which was developed over five-plus years for $60 million-plus in 2004 dollars. Development time and cost are among the biggest challenges troubling developers of recent metaverse-style experiences that haven’t gained much traction.
“Generative AI used as a tool to augment human creativity will help democratize the content-creation process – not just for development teams, but also for individual users expressing themselves through user-generated content. This will impact all types of content creation, including 3D assets and animation, digital humans/non-player characters, narrative, programming, game mechanics, etc.
“On the XR front, AI will help enable automated digital-twin creations of real-world spaces through computer vision and 3D reconstruction that can be used as a basis for augmented-reality interaction. AI will be implemented to enable users to express themselves in virtual spaces in an increasingly accessible way, including avatar creation, human/computer interaction and social features.
“AI processing of data for human/computer interaction will extend to more than just avatar puppeteering, allowing for more-accessible and intuitive ways to engage with digital content. AI speech reconstruction opens up avenues for natural real-time translation and accessibility features. I am skeptical that most users will embrace creation of AI-driven versions of themselves at a widespread scale in the near future, although the idea will certainly be explored extensively.
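For illustration only, the sketch below shows the structural shape such a real-time translation chain might take; the three stage functions are placeholder stubs standing in for actual speech-recognition, translation and voice-synthesis models:

```python
# Structural sketch of a recognize -> translate -> resynthesize loop; each stage
# would be backed by a real model, represented here by placeholder stubs.
def transcribe(audio_frame: bytes, source_lang: str) -> str:
    """Speech-to-text stage (stub standing in for a recognition model)."""
    return "hello, can you hear me?"

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Machine-translation stage (stub standing in for a translation model)."""
    return "hola, ¿me escuchas?"

def synthesize(text: str, target_lang: str) -> bytes:
    """Text-to-speech stage (stub standing in for a voice-synthesis model)."""
    return text.encode("utf-8")

def translate_frame(audio_frame: bytes, source: str = "en", target: str = "es") -> bytes:
    """Run one audio frame through the full translation chain."""
    return synthesize(translate(transcribe(audio_frame, source), source, target), target)

if __name__ == "__main__":
    print(translate_frame(b"\x00" * 320))
```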
“Improvements in AI will also unlock more-powerful potential for augmented-reality content in metaverse experiences. Real-time reconstruction of 3D spaces and computer vision object recognition are essential for creating useful features in XR. While these tools exist today, developers often struggle to achieve consistent results, putting a hard limit on which features are feasible.
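As one concrete, hedged example of the kind of building block involved in 3D reconstruction, the sketch below back-projects a depth image into a 3D point cloud using a pinhole camera model; the camera intrinsics and the synthetic depth map are illustrative assumptions:

```python
import numpy as np

# Back-project a depth map into camera-space 3D points with a pinhole model;
# one elementary step in reconstructing a real-world space as a digital twin.
def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Convert an (H, W) depth map in metres to an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

if __name__ == "__main__":
    depth = np.full((480, 640), 2.0)  # synthetic flat wall two metres away
    cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)                # -> (307200, 3)
```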
“As the hardware and AI-driven software behind these technologies improve, they will unlock more-powerful XR capabilities that bridge the gap between real-world interaction and digital content and eliminate current feature limitations. This technology will reach a high level of maturity by 2040, facilitating the type of intuitive tech-driven interactions between humans and digital content in an XR environment that many people today think of when they hear the term ‘metaverse.’”
This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”