15 April 2026

Regulating Uncertainty: The Importance of Trust and Affect in the Regulation of AI in The Netherlands

Esther Versluis, Aneta Spendzharova, Odile Feltkamp
First published: 22 August 2025 · https://doi.org/10.1111/rego.70070

ABSTRACT
The rapid adoption of artificial intelligence (AI) introduces significant uncertainty regarding its future applications and potential risks. What is the preferred regulatory approach when confronted with such uncertainty? To cope with uncertainty, people often screen information in a biased way, consistent with their own prior beliefs and predispositions. Heuristics such as trust and affect are likely to influence how new (scientific) information is judged, in turn influencing the preferred regulatory approach. This article explores the complex interplay between trust, affect, and regulatory preferences in the context of AI governance. Drawing on an exploratory survey and interviews with AI regulators and professionals in the Netherlands, the study finds relatively low trust in AI providers and users, alongside a preference for flexible, adaptive regulation that is strictly enforced by the public authorities. By shedding light on the role of trust and affect in the emerging regulation of AI, this article contributes to understanding how such heuristics influence the preferred regulatory approach.

1 Introduction
The full scope of the adverse effects of using artificial intelligence (AI) is highly uncertain. The Dutch toeslagenaffaire—where the Dutch tax authority “ruined thousands of lives” (Heikkilä 2022) by using AI to create risk profiles as a tool to spot fraud with childcare benefits—testifies to this. AI, like any other emerging technology, is embedded in uncertainty about its future use. These uncertainties “lie in not knowing where the technology will be incorporated, who will be using the technology, or how the technology will ultimately be used” (Nelson and Gorichanaz 2019, 7). But how do we regulate uncertainty? How do regulators and professionals in the AI sector decide about the adoption or use of this emerging technology when confronted with incomplete or even contradictory information about the potential (future) risks and side effects?

To cope with uncertainty, people resort to heuristics, or cognitive shortcuts, that help them “reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations” (Tversky and Kahneman 1974, 1124). For example, our initial perceptions about an emerging technology structure the way new evidence about it is interpreted: new evidence is more likely to be perceived as reliable and legitimate when it aligns with one’s initial beliefs (Slovic et al. 1984). Information that contradicts existing behavior and beliefs creates a state of “cognitive dissonance”—a conflict between prior beliefs and new information (Bradshaw and Borchers 2000). This leads to “motivated reasoning,” where information is selectively interpreted to fit one’s prior beliefs and predispositions (Kahan et al. 2008). Therefore, our pre-existing opinions about a particular emerging technology will shape how we search for and process future information about it. This applies not only to citizens deciding whether to use or buy emerging technology, but also to regulators tasked with determining how to regulate or potentially restrict such technologies (Tversky and Kahneman 1974).

We know from existing research that trust and affect are crucial heuristics influencing how people decide (e.g., Siegrist et al. 2007; Midden and Huijts 2009; Merk and Pönitzsch 2017). We thus anticipate that such heuristics shape how regulators judge new (scientific) information, which, in turn, is likely to influence their regulatory stance. This theoretical expectation, however, has not yet been explored in practice; this article therefore intends to unravel the complex interplay between trust, affect, and preferred regulatory approaches in the field of AI. On the one hand, AI technology holds many promises, such as enhanced healthcare, better transport, more efficient manufacturing, more consistent decision-making, and cheaper, more sustainable energy, mostly because of its potential to improve prediction, optimize operations, and allocate resources. On the other hand, the unregulated use of AI, particularly due to the relative autonomy, adaptability, and interactivity of AI systems (Mökander et al. 2022), creates perils such as the rapid spread of disinformation, the perpetuation of human bias in AI-assisted decision-making, or enhanced censorship. There are particular concerns about risks to fundamental rights, such as fears that the use of AI will lead to algorithmic discrimination (European Commission 2021). Furthermore, AI is a field where "Tech giants" such as OpenAI, Alphabet, Amazon, and Meta are seen as holding significant informational asymmetry advantages vis-à-vis both citizens and (public) regulators. How do regulators and AI professionals navigate such a complex environment when deciding about AI applications?

This article explores the complex relationship between regulators' heuristics and regulatory preferences in the context of the European Union's (EU) regulation of AI. The EU is an early mover in regulating AI (Middleton et al. 2022), adopting the AI Act (Regulation 2024/1689) on 13 June 2024, after two years of intense negotiations. The EU operates with a system of indirect implementation (Craig and De Búrca 2020), meaning most legislation is implemented at the member state level, leading to significant variation in how this is done across different countries (Treib et al. 2022). To understand how heuristics such as trust and affect impact preferred regulatory approaches, we thus need to examine them at the domestic level. This article explores the early phase of regulating AI in the Netherlands. Using a survey and interviews with AI regulators, professionals working with AI, and experts, this article conducts exploratory research aimed at better understanding the potential relationship between trust and affect towards AI and the preferred regulatory approach. The empirical analysis indicates that in this Dutch case there are low levels of trust in AI providers and users, and a clear preference for public rulemaking using adaptive, flexible rules that are visibly enforced by public authorities. This suggests that the relationship between regulators' heuristics and their preferred approach to regulating uncertain emerging technologies is a promising avenue for further research.

Read more via onlinelibrary.wiley.com

Want to learn more from prof. dr. Esther Versluis (one of the authors of this article) and about the impact of increasing cooperation on the authority and effectiveness of national regulators? Then come to the 9th edition of the annual HCB Seminar 'Toezicht in Transitie', taking place on 8 April at Luden, Den Haag, and organized by the Haags Congres Bureau. Topics include the international, European, and national playing field, forms of cooperation, and the opportunities and risks that cooperation offers. With contributions from Prof. dr. Esther Versluis (Universiteit Maastricht), Amma Asante (CvdM), Kees van Nieuwamerongen (Nederlandse Autoriteit Uitleenmarkt), Huub Janssen (RDI), Robert van Rheenen (Omgevingsdienst de Vallei), and Paul van Dijk (independent consultant).
