
“Artificial intelligence systems will play a much more significant role in shaping our decisions, work and everyday lives in the coming years. This shift will not happen abruptly, as a single dramatic technological turning point, but rather gradually and almost imperceptibly – through an increasing number of micro-decisions that rely on recommendations, risk assessments, automated processes and personalised information. This invisibility of artificial intelligence may become its strongest societal effect. It will not always feel like a technology we actively use, but like an infrastructure without which functioning becomes difficult. The greatest opportunity will lie in human adaptability: building new knowledge, new professional roles and new forms of social resilience. In this sense, the future will not be a world of artificial intelligence, but a world in which people must learn to live with algorithmic systems, use them intelligently and limit them where they cross the boundaries of what is acceptable.
“The first major trend, already visible today, is the normalisation of artificial intelligence to the level of a common utility or software application. Today, most people do not think about internet protocols when sending a message or about compression algorithms when watching a video. Artificial intelligence is increasingly being embedded into services that are experienced as standard. It already filters email and suggests replies, optimises traffic routes, manages energy consumption, generates meeting summaries, recognises spending patterns and supports administrative tasks. For a large part of the population, this is not perceived as using artificial intelligence, but simply as using an application. As a result, many people are already unaware of how often they rely on algorithmic assessments and how strongly those assessments guide them.
“In such an environment, a division in awareness and understanding is to be expected. A small segment of users, perhaps around one fifth or fewer, will be informed enough to recognise where algorithms intervene, what their capabilities and limitations are and what consequences they may have for decision-making autonomy. These knowledgeable people will actively choose privacy settings, seek explanations, verify sources and deliberately combine human judgment with system recommendations. The majority of the public, on the other hand, will use artificial intelligence implicitly and pragmatically, without deeper reflection. This is not necessarily a sign of irresponsibility, but rather a result of the pace of life, information overload, perhaps a lack of digital literacy and the fact that technological systems are designed to work by themselves.
“At the same time, public demand for ethical use of artificial intelligence may grow as these tools and systems expand. Although most people may not follow in detail how algorithms operate, they may still expect these tools to follow basic standards of safety, fairness and protection from harm. We can expect that those responsible for major failures will be held accountable in future: the mass spread of false content, discriminatory outcomes in sensitive domains such as hiring, credit or insurance decisions, or harm caused by systems that promote risky behaviours. As the technological ecosystem matures and regulation and industry practice stabilise, such failures may become less frequent in mainstream products. This will not be because the technology becomes perfect, but because organisations introduce more checks, standards, auditing and accountability, at least where legal and reputational risk is high.
“A particularly sensitive issue is generated content, including AI-generated video material. In an early phase, societies may go through a period of shock and boundary-testing: what can be fabricated, how convincingly and how it can be misused. Over time, countermeasures will emerge: better tools for authenticity verification, provenance labels, stronger media literacy and the gradual maturation of social norms.
“Artificial intelligence may become part of the solution in these cases, as it can be used for detecting manipulations. Still, it is reasonable to assume that the race between generation and detection will remain permanent, meaning that a culture of verification cannot be fully delegated to technology.
“Another crucial layer of artificial intelligence influence relates to the everyday functioning of cities and systems. Smart cities are not merely a marketing concept but a logical continuation of the digitalisation of infrastructure: traffic regulation, public transport, energy management, utility services, security and healthcare. Artificial intelligence naturally fits this context because it enables real-time optimisation and event prediction, such as congestion, equipment failures or consumption peaks. In the best scenario, the outcome is a more efficient and comfortable urban environment. In the worst scenario, the same mechanisms can turn into a regime of continuous monitoring and citizen scoring.
“This leads to the political context. In authoritarian or dictatorial systems, artificial intelligence can be used as an instrument of surveillance and control. Examples include facial recognition, movement tracking, behavioural risk scoring, content filtering and subtle manipulation of the information space.
“Even in democratic systems, forms of surveillance exist latently through commercial platforms, security policies or service optimisation, but they generally operate under some formal constraints and are often the subject of public debate. Nevertheless, the key risk of such AI surveillance and data systems is not only direct repression, but the possibility of normalisation. Surveillance can come to be passively accepted when citizens stop noticing what is being collected, how behaviours are profiled and how digital traces are converted into economic and political capital, and no longer hold anyone responsible.
“In such a social landscape, the growth of conspiracy theories is also likely. The reason is not only distrust in institutions, but a broader epistemic crisis: if everything can be generated, edited, distorted or algorithmically distributed, the boundary between fact and impression becomes fragile. When people lack tools to verify sources and context, they often turn to explanations that provide psychological certainty, even if they are false. Artificial intelligence becomes a catalyst here: it increases the speed of information flow, but also the speed at which misinformation spreads. That is why trust in sources, journalistic standards, institutional transparency and public education become strategic responses rather than secondary issues.
“For this reason, algorithmic literacy and resilience will increasingly enter school and university curricula. This will not mean programming for everyone, but a civic competence: understanding how recommendations are created, why certain content is pushed to users, what model bias means, how data is protected, where reliability ends and what responsible reliance on automated systems entails. This is comparable to financial literacy: not everyone needs to be an economist, but society benefits from citizens who understand basic mechanisms of risk and manipulation. Algorithmic resilience, in this sense, means the ability to maintain autonomy of judgment in an environment where suggestions are constant, personalised and often psychologically rewarding.
“We will see the emergence of new digital classes and something of a divide between those who develop human-AI co-intelligence capabilities, create content and control tools and those who primarily consume content and follow automated streams. This division will not be rigid, but it will be visible. Those who understand how systems work, know how to ask good questions, verify outputs and combine creativity with tools will gain an advantage in career development and social influence. Those who remain passive users are more exposed to manipulation and platform dependence. This does not mean the future will be reduced to technological determinism: intelligent and adaptive individuals will find ways to succeed in a world where artificial intelligence becomes a baseline. The history of technology largely shows that societies change, but people simultaneously develop new skills, new professions and new forms of value.
“At the level of language and conceptual framing, it may also be useful to rethink the labels we use. Machine learning, in practice, often refers to systems that support decision-making through statistical generalisation from data. In that sense, the term algorithm-supported decision-making may better describe the social function: these are tools that suggest, rank, assess and optimise, but do not carry full moral and contextual responsibility. Similarly, generative artificial intelligence largely functions as algorithm-supported content generation – systems that recombine existing patterns and information into new text, images or sound. Such terminology can be valuable because it reduces mystification and brings attention back to the responsibility of users and institutions. Technology may be powerful, but it is not a neutral subject that decides on its own.
“As artificial intelligence becomes more deeply embedded in our decisions, work and everyday life, it will become an invisible infrastructure that demands the development of stronger ethical, educational and regulatory frameworks. The greatest challenge will not be the presence of artificial intelligence itself, but the preservation of autonomy, transparency and trust in a society where recommendations are constant, content is increasingly difficult to verify and surveillance becomes technically trivial.”
This essay was written in January 2026 in reply to the question: “AI systems are likely to begin to play a much more significant role in shaping our decisions, work and daily lives. How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?” This and 200-plus additional essay responses are included in the 2026 report “Building a Human Resilience Infrastructure for the AI Age.”