“Let’s confront some big questions about AI in a Q&A-style interview format:

Question: Where does AI begin and where does it end?
The answer is that AI will probably have neither beginning nor end, but will be seamlessly integrated into our daily lives, which could mean that in the future we will no longer speak of ‘artificial’ intelligence at all, but only of ‘smart’ or ‘dumb.’ We and everything around us – our houses, our cars, our cities, etc. – will be considered to be smart or dumb.

Q: When is AI obligatory and when is it voluntary?
A: Obligation and freedom are terms that refer to individual human beings and their position in society. According to modern Western beliefs, one has duties towards society and, towards oneself, one is free and independent. AI, in this frame of thinking, is seen as something in society that is a threat to freedom for the individual. But as for all social conditions of human existence, i.e., as for all technologies, one must ask whether one can be truly independent and autonomous. After all, when is using electricity, driving a car, making a phone call, using a refrigerator, etc., voluntary or mandatory? If technology is society, and an individual outside of society and completely independent of all technology does not exist, then the whole discussion about freedom is of little use. Am I unfree if the self-driving car decides whether I turn right or left? Am I free if I can decide whether I want to stay dumb instead of becoming smart?

Q: How can the status quo be maintained during permanent development?
A: This question is answered everywhere with the term ‘sustainability.’ When it is said that a business, a technology, a school, or a policy should be ‘sustainable,’ the aim is to maintain a balance under changing conditions. But it is doubtful whether real development can take place within the program of sustainability. Whatever I define as ‘sustainable’ at the moment – e.g., the stock of certain trees in a forest – can be destructive and harmful under other conditions – e.g., climate change. Sustainability prioritizes stability and opposes change. To value stability in an uncertain, complex and rapidly changing world is misguided and doomed to failure. We will have to replace sustainability as a value with a different value. The best candidate could be something like flexibility, i.e., because if we cannot or do not want to keep given conditions stable we will have to make everything optimally changeable.

Q: Who is mainly responsible for AI development in a household?
A: In complex socio-technical systems, all stakeholders bear responsibility simultaneously and equally. Within any grouping, from a household to a nation, it is the stakeholders, both humans and machines, who contribute to the operations of the network and consequently share responsibility for the network. This question is ethically interesting, since in traditional ethics one must always find a ‘culprit’ when something goes wrong. Since ethics, morals and the law are called upon the scene and only intervene when someone does something voluntarily and knowingly that is immoral or illegal, there must be a perpetrator. Without a perpetrator to pin down, no one can be held ethically or legally accountable. In complex socio-technical systems – e.g., an automated traffic system with many different actors – there is no perpetrator. For this reason, everyone must take responsibility. Of course, there can and must be role distinctions and specializations, but the principle is that the network is the actor and not any actors in the network. Actors, both human and non-human, can only ‘do’ things within the network and as a network.

Q: Who is primarily responsible for AI use in a community or city? Who is primarily responsible for AI use in a country? Can there be a global regulation on AI?
A: All of these questions reflect our traditional hierarchies and levels of regulation, from household to nation or even the world. What is interesting about socio-technical networks is that they do not follow this hierarchy. They are simultaneously local and global. An AI in a household, for example Alexa, is globally connected and operates because of this global connectivity. If we are going to live in a global network society in the future, then new forms of regulation must be developed. These new forms of regulation must be able to operate as governance that is bottom-up and distributed rather than hierarchical government. To develop and implement these new forms of governance is a political task, but it is not only political. It is also a task of ethics. For, as long as we are guided by values in our laws and rules, politics ultimately rests upon what people in a society value. The new values that guide the regulation of a global network society need to be discovered and brought to bear on all the above questions. This is a fitting task for digital ethics.

Q: Who would develop these regulations?
A: Here again, only all stakeholders in a network can be responsible for setting up regulatory mechanisms and only they should be responsible for control. One could imagine that a governance framework is developed bottom up. In addition to internal controlling, there is an external audit to monitor compliance with the rules. This could be the function of politics in the global network society. There will be no global government, but there will indeed be global governance. The role of government would be to audit the self-organizing governance frameworks of the networks of which society consists.

Q: Should there be an AI ‘driver’s license’ in the future?
A: The idea of a driver’s license for AI users, like the one required for driving a car, assumes that we control the AIs. But what if it is the AIs that are driving us? Would the AIs perhaps have to have a kind of driver’s license certifying their competence for steering humans?

Q: What would the conditions be for that?
A: Whether AIs get a human or social driver’s license that certifies them as socially competent would have to be based on a competence profile of AIs as actors in certain networks. The network constructs the actors and, at the same time, is constructed by the actors who integrate into the network. Each network would need to develop the AIs it needs but also be open to being conditioned as a network by those AIs. This ongoing process is to be understood and realized as governance in the sense described above.”

This essay was written in November 2023 in reply to the question: Considering likely changes due to the proliferation of AI in individuals’ lives and in social, economic and political systems, how will life have changed by 2040? This and more than 150 additional essay responses are included in the report “The Impact of Artificial Intelligence by 2040.”