Our children are going to be dumb. Not in the sense that they won’t be good at math, but dumb in the sense of: “Mom, how do you know if a source is reliable or not?” or “Dad, how do you write a persuasive text without using ChatGPT?”.
Which brings us to the question that everyone is asking right now: given that artificial intelligence (AI) is clearly going to permanently change our education system, how will this change impact our democracy? Knowledge is power, and if all that power sits in America or China, we have to find a way to get it back. Power over the next generation, power over our data, power over our democracy.
So why is it that artificial intelligence tools like OpenAI’s ChatGPT or Google’s Gemini are sure to change our education? Well, there are numerous reasons, and one of them is exams. There is no point grading a student’s text if they used AI to write it. And there’s a very good chance they did, given that 75% of students have already used tools of this sort. Another reason is memorization tasks of all kinds. Why learn dates and facts by heart if you are never going to need to recall them unaided, since knowledge will be accessible anytime, anywhere?
Fortunately, there are solutions to these problems. You can hold oral exams and create new types of exercises, for example, in which students process data for themselves.
Who controls AI?
However, not everything is rosy, and the biggest drawback is the question of who controls the AI. There is a concept known as “information democracy”, which describes one part of democracy as a whole. In an information democracy, everyone has access to information, and that information comes from independent sources. Transparency is another key element of an information democracy.
But we lose all that with AI, especially when it comes to transparency and misinformation. Most AI tools are owned by specific companies or located in surveillance states. And that means, in turn, that no one knows what data was used to train the AI.
AI can also affect the way we think. On many issues there is no clear right or wrong, only a range of opinions. But AI tends to reproduce one specific perspective, which can influence the user’s opinion on various issues.
Videos and images
What’s more, as AI-generated videos and images get more realistic by the day, it is becoming increasingly difficult to know what is real and what isn’t.
So how can we save our information democracy when AI is so dominant? There are many approaches, but I will propose the following: We need to change our education system.
Instead of memorizing millions of pieces of information, students should learn how to find and access verified, vetted information. We also need to teach them to be careful with AI-generated videos and images, as these become harder and harder to distinguish from reality.
So, are our children doomed? Not necessarily! If we change the way our schools work, and instead of banning students from using AI we encourage them to use it properly, I think our information democracy should be safe. But we have to act, and act now: delay too long, and developments may spin out of our control.
This article was originally published in the insert “The European BHMA” published with “TO BHMA on Sunday” on 11 May 2025.