Tuesday, 02 January 2024 12:17 GMT

Dutch AP warns voters not to rely on AI chatbots for election advice


(MENAFN) The Netherlands’ data protection authority (AP) has urged citizens not to depend on artificial intelligence chatbots for election advice, warning that such tools often provide misleading or biased responses and may unduly influence voter choices.

According to the regulator, AI-generated recommendations favored two dominant political blocs — the far-right Party for Freedom (PVV) and the left-leaning GroenLinks-PvdA alliance — which together appeared in 56% of the chatbot responses analyzed. This imbalance, the AP said, does not reflect the diversity of the country’s 15-party parliament; polling projects the two blocs to win just over a third of the vote in the upcoming October 29 election.

The report further noted that some smaller parties, such as the center-right CDA, “are almost never mentioned, even when the user’s input exactly matches the positions of one of these parties.”

“Chatbots may seem like clever tools, but as a voting aid, they consistently fail,” said the watchdog’s vice-chair, Monique Verdier, who described their functioning as “unclear and difficult to verify.”

She cautioned that the technology could lead voters toward political choices that do not align with their actual beliefs, adding, “We therefore warn against using AI chatbots for voting advice.”

The AP tested four leading AI chatbots—whose names were not disclosed—and found that in several cases, they recommended one of the two largest parties even when provided with the manifesto of a smaller group.

The early election was called following the collapse of the governing right-wing coalition, triggered by the withdrawal of the PVV under its leader, Geert Wilders. The upcoming vote is viewed as a decisive moment that could determine whether the Netherlands moves toward an all-conservative administration or a broader centrist alliance.

Separately, an international study led by the European Broadcasting Union and a news agency found that major AI platforms, including ChatGPT and Google’s Gemini, misrepresented or distorted news information in nearly half of their examined answers. The analysis, which covered more than 3,000 AI-generated responses across 14 languages, discovered that 45% contained “at least one significant issue” when responding to news-related questions.

Both OpenAI and Microsoft have acknowledged that AI “hallucinations”—instances where the systems generate inaccurate or fabricated information—remain a known problem that they continue to work on resolving.




