“I view this question as depending on what happens to current AI, meaning, in practice, to current generative AI. For purposes of this exercise, let’s consider two possible outcomes for the evolution of current generative AI from now to 2040.
- “Scenario 1: Even with larger models, and better tuning and prompting procedures, generative AI technology remains seductive but maddeningly unreliable. It continues to be disconnected from reality outside its training set, unable to reliably perform symbolic reasoning or connect seamlessly and continually to external systems that can, and unable to reliably cite its sources or indicate its certainty in its pronouncements. It can only interact with a single interlocutor at a time.
- “Scenario 2: These problems are resolved. Generative AI systems can be configured to learn rules (by inferring them or being taught them), or how to interact with systems that can. They can support their pronouncements with sources that are correct and verifiable. They can handle inputs of essentially unbounded size and learn to interact with several interlocutors.
“Bridging the gap from Scenario 1 to Scenario 2 would significantly increase the trustworthiness and applicability of GenAI systems. I would not be surprised if this brought us to systems that could perform a wide range of tasks at the level of humans, with sufficient transparency and reliability that they could be certified to perform risky tasks.
“It is not inconceivable that such systems could be taught to avoid many ethical pitfalls that plague most current GenAI systems. But moving from 1 to 2 requires changing the architecture of the systems. I don’t believe it will ever be solved with more data. It is a problem many smart people have been working on for years, but I know of no major developments (and I don’t include chain-of-thought prompting as one) that have become part of the state-of-the-art.
“I have to conclude that the problem is very hard and that a solution, if it exists, may require not tinkering but a total redesign of current systems. Humans are an existence proof that such advanced systems are possible, but I have no idea whether the problem is solvable or by when.
“Back to the question at hand. Both outcomes are scary.
“Outcome to Scenario 1: This puts us in the position where nothing GenAI systems do can be trusted, where everything of importance they do for you needs to be verified before being used, and everything you receive from someone else that could have been generated by such a system may look reasonable but still cannot be trusted. Some applications could be useful even under these circumstances. Ethan Mollick makes a strong case for the use of GenAI systems in brainstorming, e.g., ideas for new businesses, where they provide stimuli to humans who must then verify and assess.
“Special-purpose systems trained on annotated data will continue to be useful, e.g., to read x-rays. Perhaps we develop a certification mechanism for generative AI systems that will support human-in-the-loop systems by annotating system decisions with something like ‘Generated by ChatGPT on October 27, 2023, and verified by John Smith,’ along the lines of the certificates we use to verify computer communications. Then all communication without the certification becomes suspect.
“With certification, many tasks can be performed at least in part by generative AI systems – programming, low- and mid-level tasks requiring interaction with computer systems, customer service, some health care tasks. I am not an expert in just what tasks would be accessible, and what the impact on the job market would be, but there are many studies looking into this.
“I tend to be an optimist as to the ability of the market to create new job types arising from the existence of new technology, though I am much less optimistic that those new jobs can be filled by the people the technology displaces. That is a task for the state, and we are not in a good political position to have the state take major steps to help the displaced.
“Outcome to Scenario 2: If we draw closer to artificial general intelligence (AGI) I can see such systems becoming certifiable to perform jobs requiring high-skill levels, like law, medicine and banking. Jobs requiring significant embedding in the physical world would need these systems to be integrated with robots and high-performance perception systems, but in much of robotics the hardware is limited by the software.
“Given the potential capability of these systems, how to prevent them from turning into the sorcerer’s apprentice becomes of primary importance. The first means of control would be in the rules that these systems would be built to obey. Although rules could now be taught to them and modified, there would undoubtedly be circumstances in which they conflict, as ethical rules often do when humans encounter complex situations.
“Whether we could give them enough common sense to deal with conflicting rules remains to be seen, but one way would be for the systems to recognize the conflict and turn to humans for resolution. The second means would be in establishing unbreakable relations between GenAI systems and humans that gave humans responsibility over the systems, as they now have over existing complex systems like aircraft, factories and banks.”
This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional responses are included in the report “The Impact of Artificial Intelligence by 2040.”