Ben Shneiderman shared a set of insights originally written for readers of his “Notes on Human-Centered AI” column:
“The U.S. White House published President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence Oct. 30, 2023, a 20,000-word document that produced a torrent of analyses important to the future of humanity and AI. I was pleased to see strong human-centered statements focused on developing a positive future, including: ‘the critical next steps in AI development should be built on the views of workers, labor unions, educators and employers to support responsible uses of AI that improve workers’ lives, positively augment human work and help all people safely enjoy the gains and opportunities from technological innovation.’
“This executive order shifts the discussion from long-term worries and vague threats to short-term efforts to fix problems, prevent harms and promote positive outcomes. Critics may complain that it should have made more demands on tech companies, but the actions of federal agencies, if followed through, will have a profound effect on big tech and big companies that use AI technologies.
“The nearly 100 requested actions in the White House order include tasks such as ‘Establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems … Establish appropriate guidelines … to conduct AI red-teaming tests to enable deployment of safe, secure and trustworthy systems. … Streamline processing times of visa petitions and applications, including by ensuring timely availability of visa appointments for noncitizens who seek to travel to the United States to work on, study or conduct research in AI or other critical and emerging technologies. … Support the goal of strengthening our nation’s resilience against climate change impacts and building an equitable clean energy economy for the future.’
“Those of us who believe in human-centered approaches have much work to do to encourage design of artificial intelligence user experiences, audit trails, independent oversight, open reporting of incidents and other governance strategies. Our commitment to amplify, augment, empower and enhance human performance can result in applications that inspire human self-efficacy, creativity, responsibility, social connectedness and collaboration tools.
“The contrast between this White House order and the much-heralded Bletchley Declaration by Countries Attending the AI Safety Summit, issued Nov. 1, 2023, at the UK- and U.S.-led summit, is striking. The Bletchley Declaration makes familiar calls for positive steps: ‘We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. … The protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed.’ This is fine, but the declaration only restates well-worn terms like ‘must,’ ‘should,’ ‘we resolve,’ and ‘we encourage’ without indicating who does what by when.
“The Bletchley Declaration repeats virtuous phrases with no immediate action. Biden’s executive order contains 90-plus tasks to be carried out by U.S. federal departments and agencies, with deadlines mostly in the 60- to 180-day range. On the positive side, the Bletchley Summit brought together representatives of 28 nations, including China, to consider ‘wider international cooperation on AI.’ South Korea and France have agreed to host future meetings. Maybe both approaches are needed: specific short-term actions by specifically tasked government agencies and wider international cooperation. While the Bletchley Declaration avoids AI ‘extinction’ rhetoric, it invokes a new phrase – ‘frontier AI’ – which is described as ‘highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks … that match or exceed the capabilities present in today’s most advanced models.’
“The UK plans to launch an AI Safety Institute (AISI), supported by a vague agreement from companies to submit new models for rigorous testing. The AISI could become a positive force for evaluations and research. Of interest during the AI Safety Summit was a side-event conversation in which British Prime Minister Rishi Sunak interviewed technology titan Elon Musk, who has often expressed concerns about potential dangers of AI.
“Musk told Sunak that ‘AI can create a future of abundance’ and added that there is an 80% likelihood of AI being a definite net positive to society, but only if humanity is cognizant of and careful about the remaining 20% chance of a negative outcome. ‘AI will be a force for good, most likely,’ he said. ‘But the probability of it going bad is not zero percent.’
“The Biden Administration’s U.S. executive order is an astonishing document with the potential to produce substantial changes in U.S. government activities that could significantly influence the future of AI, what businesses and universities do, and what other countries will do. Naturally, as some commentators have pointed out, the question is how well all these tasks can be carried out.”
This submission from Ben Shneiderman was written in November 2023; he agreed to submit it as his reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essays are included in the report “The Impact of Artificial Intelligence by 2040.”