Elections and AI
Two reports on AI and politics sparked debate in the Netherlands. Beyond the headlines lies a deeper unease: how do we live with technologies that not only reflect language but act within it?
Recent reports in the Netherlands have questioned whether AI systems distort political information.
One study suggested that Google’s AI answers favoured certain parties. Another warned voters not to rely on chatbots for advice.
The debate touches something larger: how societies learn to live with new technologies that mediate our understanding of the world.
Takeaways
- AI now retrieves, filters, and composes information in real time.
- It is not a medium for opinions but a tool for exploration.
- Democracies depend on citizens who take responsibility for how they use information.
- What we are witnessing is not entirely new: the web, social media, and voting tools changed us before.
The Netherlands as example
The Dutch discussion around AI and elections shows how quickly public trust becomes unsettled.
Researchers found that some political parties appeared more often in Google’s AI-generated answers.
A government watchdog warned that chatbots are “biased” and should not guide voters.
These reactions are understandable; new technologies often evoke anxiety. Yet they overlook how search, ranking, and recommendation have always shaped what we see.
Today’s AI systems extend that pattern.
They don’t simply mirror language; they act within it.
They select, summarise, and frame information dynamically, blending reasoning, retrieval, and presentation.
That does not make them agents with intent; it makes them instruments with reach, and that calls for literacy rather than fear.
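To make that concrete, here is a minimal, hypothetical sketch (in Python, not any real product's code) of the pipeline such answer systems roughly follow: a ranker selects sources, a summariser condenses them, and the presentation fuses both into a single reply. Every stage involves an editorial choice, which is exactly why literacy matters more than alarm.

```python
# A minimal sketch of an AI answer pipeline: selection, framing, presentation.
# All names and scores here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str
    relevance: float  # score assigned by some ranking model

def retrieve(query: str, index: list[Document]) -> list[Document]:
    # Selection: a ranker decides which sources are even considered.
    return sorted(index, key=lambda d: d.relevance, reverse=True)[:3]

def summarise(docs: list[Document]) -> str:
    # Framing: the system condenses and orders what it retrieved;
    # a real system would use a language model here.
    return " ".join(d.text for d in docs)

def answer(query: str, index: list[Document]) -> str:
    # Presentation: retrieval and summarisation are fused into one reply,
    # so the editorial choices become invisible to the reader.
    picked = retrieve(query, index)
    cited = ", ".join(d.source for d in picked)
    return f"{summarise(picked)} (sources: {cited})"

if __name__ == "__main__":
    index = [
        Document("party-a.example", "Party A proposes X.", 0.9),
        Document("party-b.example", "Party B proposes Y.", 0.4),
        Document("blog.example", "Commentary on X and Y.", 0.7),
    ]
    print(answer("What do the parties propose?", index))
```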
Learning to work with the tool
In preparing for the elections, I used ChatGPT as one of several instruments.
I went through a few voting questionnaires and had longer conversations: with the model, with friends, and with family.
These exchanges helped clarify what I actually value. I compared parties from two perspectives: a realistic, partly self-interested one, and a more principled view of the world I’d like to live in.
I don’t follow the daily news closely, and the personalities or behaviour of party leaders rarely determine my vote. They can be a reason to hesitate, but not the reason to act.
For me, democracy means accepting that outcomes will always involve compromise and engaging in that process with both realism and principle.
This reflection is not meant as a polemic.
I may be wrong in my impressions. I understand AI technically, but I am not an expert in society, democracy, or the media.
I write simply from experience: as someone learning, like many others, how to think and decide amid systems that reshape how knowledge is formed.
Simpler, smarter AI
Recent developments point toward a shift: smaller, modular, tool-using systems that think before they speak. By the next elections, we may well work with these new, leaner forms of AI.
Medium or tool?
AI becomes dangerous when treated as a medium, a space where opinions circulate and authority is implied.
Used as a tool, it sharpens research, prompts curiosity, and widens access to information. Democracy has always required effort: to compare, to verify, to reason.
What changes is that the instruments of reasoning are now partly algorithmic.
It will take time for individuals, politicians, educators, and media to adjust. But this is not an alien revolution; it is the continuation of a long experiment in how technology and thought intertwine.
Closing thoughts
Modern AI is not our conscience and not our enemy. It is a tool, and like all tools, it can deceive if used passively.
Don’t use it as a medium that tells you what to think.
Use it as an instrument that helps you think more clearly about yourself, your society, and your vote.