
Privacy watchdog urges Dutch government to regulate AI quickly

The new Cabinet must quickly clarify the rules for artificial intelligence (AI), said the Dutch Data Protection Authority (AP), the government’s privacy watchdog. The AP warns that Dutch regulators can currently do little against unsafe and discriminatory algorithms, and it wants oversight strengthened.

Aleid Wolfsen, chair of the AP, points out that algorithms have in the past led to citizens being wrongly suspected of childcare allowance fraud. “Five years after the benefits scandal, the lessons are clear, but the follow-up is lagging. This is mainly due to the lack of strict rules for algorithms and AI, and their enforcement,” said Wolfsen.

The authority has developed a nine-component “barometer” to monitor the potential consequences of AI. Six months ago, it raised two red flags: the registration of algorithms and AI systems was inadequate, and there was no clear overview of incidents.

Six months later, that is still the case, and now two other red flags have also been raised. The “frameworks and powers” for oversight are not yet properly regulated, and there are no clear standards for AI systems.

The watchdog also warns that young people are not adequately protected. They use AI not only for their homework, but also as a kind of friend to chat with. They can become addicted and are unable to properly assess the risks, the AP warned.

The authority identified the greatest dangers as “the uncontrollable increase in deepfakes, AI-driven fraud, and psychological damage caused by chatbots.” The AP also warned that “AI security measures are increasingly lagging behind technological developments.”

The authority said that some organizations appear to be trying to avoid their responsibilities. They are required to register their AI systems, but instead present them as ordinary algorithms, for which the rules are less strict.

“High-risk AI systems are used in healthcare and crime detection, among other things. Starting next year, they will face a whole list of requirements regarding technical documentation, risk management, and bias. Perhaps it’s unintentional; perhaps they prefer to let sleeping dogs lie,” said Joost van der Burgt, who heads AI oversight at the AP.