In this dialogue on AI and education, two of the field’s most prominent voices come together: Andreas Schleicher, Director for Education and Skills at the OECD and architect of PISA, the global benchmark for school systems, and Wayne Holmes, Professor of Critical Studies of AI and Education at UCL and UNESCO Chair on the Ethics of AI and Education at the International Research Centre on Artificial Intelligence (IRCAI).
The discussion takes on added significance as the OECD prepares the PISA 2029 Media & Artificial Intelligence Literacy (MAIL) assessment, which will shed light on whether young students have had opportunities to learn—and to engage proactively and critically—in a world where production, participation, and social networking are increasingly mediated by digital and AI tools.
Andreas Schleicher: Thinking about AI in education, it really depends on what you want education to achieve. If this is about knowledge transmission, then school is probably not a particularly effective institution in the AI world. But if you take education as a socio-relational enterprise, school will perhaps be more important in the future than it is now, because that’s where you’re going to meet the people who look different from you, who think and work differently, encounters that are becoming more difficult in the digital world. Depending on what outcome you seek, school could lose its relevance, or it could be far more important than it is now. I’m pretty confident we will have very good reasons to maintain social institutions like schools. The quality of student–teacher relationships is something very precious for humanity.
Wayne Holmes: I couldn’t agree more. But my worry is that that’s not what’s happening, and the things that are beginning to happen instead are quite worrying. In my talks, I divide AI and education into learning with AI (the use of AI tools in classrooms, often known as AIED) and AI literacy (learning about AI). Policymakers buy into both, but they see them as two sides of the same thing, and that’s strange. The problem is that a lot of the work around AI literacy is very instrumental: getting young people to know how AI “works” (the ‘technological dimension of AI’) so that they can use AI tools “responsibly” (the ‘practical dimension of AI’). I’m not saying those are not important, but they miss the human dimension – the impact of AI on all of us, on the relationships between teachers and learners, on human rights and child rights, on the environment, and so on. In almost all approaches to AI literacy, there’s no in-depth engagement with the human dimension of AI. For me, understanding the impacts of AI is fundamentally important.
A.S.: In a way, it’s the same as a thousand years ago: you could talk about a pen and about literature, two totally different narratives. I’m quite confident that AI has huge potential to transform learning—make it more granular, more adaptive. We see many good examples and many bad examples where young people outsource their thinking. The bigger question is whether AI is going to infantilize us all—make us slaves of algorithms—or super-empower us to become more ethical in our decisions, more collaborative in our ways of working, and less biased in our thinking.
AI is not a magic power; it’s an incredible amplifier, an accelerator. It will amplify good educational practice in the same way it amplifies poor practice. It can super-empower teachers to understand how different students learn—or make them slaves of prefabricated lesson plans. It can make learning far more equitable—what’s now possible for students with special needs is amazing—and at the same time it can amplify almost any form of inequality. AI can help reduce human bias through better data, but it can also amplify and entrench bias. AI can connect people across geographic, linguistic or cultural boundaries, but it can also sort us into echo-chambers that amplify our own views and insulate us from divergent thinking. How this plays out depends on human decisions. If you get it wrong now, you get it really wrong.
W.H.: I agree, but I’m not as confident about the so-called opportunities because, in more than ten years of research, I’ve not seen them. There are some useful AI-enabled applications for people with special needs, but in education more broadly, I’ve not seen good independent evidence at scale for the effectiveness, safety, or positive impact of these tools in classrooms. Yes, there are thousands of studies, but they’re typically conducted by the researcher who developed the tool or by the company that owns it. Many of the claims are speculative; the weasel word “potential” is used so much. But most things have potential – so saying something has potential means very little.
We also hear that the increasing use of AI in education is exacerbating the digital divide – the divide between those who have access to technology and those who don’t. That is true, but something more worrying is happening: the divide is flipping. While young people from higher socioeconomic groups continue to have access to human teachers, those from lower socioeconomic groups, including many in rural settings, in the global South or developing countries, are increasingly having to ‘make do’ with computers. That’s inadequate. The child’s right to education has long been extended to a right to a ‘quality’ education. And while AI tools might ensure that some children are engaging in “educative activities,” that’s a long way from providing them with a quality education.
Education is about relationships—collaborating, discussing, arguing, supporting each other. I’ve been in so many classrooms with 30 laptops and 30 children ignoring the children to their left and right and ignoring the teacher. We need a more nuanced approach. I’m all ears for non-speculative evidence, but I’ve not seen any and I’m still not convinced.
A.S.: Let me give two examples that made me see AI’s potential. In Shanghai, they used AI for classroom observation. The first thought is “horror,” Big Brother in the room. But the teachers who designed and developed it told me: in the past, when the school principal came in, they had to put on a show. Now, after every lesson, they can be natural and get analyses of their interactions with students and of how time was spent. They feel they get better feedback on where they can improve. Another example: detecting early signals of depression by analyzing library data—what kind of books students read—allowing specialists to intervene earlier than any teacher or psychologist could.
But I agree with Wayne, humans have always been better at inventing new tools than at using them wisely. We have evidence that handing students tools like ChatGPT to write essays leads to worse writing skills afterwards. Asked for the three main arguments of their essay, 80% couldn’t remember them. The critical part is agency. Are teachers users or designers of these technologies? Do they use them to extend their pedagogical tools, or do they become clients of instruments? In the medical sector, they’d be scandalized by what we’re doing in education: we put tools out first and then say, “Let’s see what happens.” The real question is not efficiency; it’s what we should teach and learn in this age of AI.
The point is that education should help us become more than the sum of isolated, automatable tasks. The rise of artificial intelligence should sharpen education’s focus on human capabilities that cannot be reduced to code – our capacity to navigate complexity, to exercise ethical judgment in uncertainty, to create something genuinely new. These are not just pedagogical concepts or beautiful words, they are what education is all about, and they belong to the pillars on which we build our societies. And if education doesn’t protect such human capabilities with determination, AI could wash away the very foundations of our societies.
W.H.: The use of technology does lead us to ask such questions, and those questions are fundamental. I also agree that agency is key for teachers and for young people. But agency doesn’t work if people are working in a knowledge vacuum. They need support from national systems to understand the technological and practical dimensions of AI but, most importantly, the human dimension of AI. There are already widespread misunderstandings among teachers, young people, policymakers, and ministries. The very common anthropomorphization of AI, using words like ‘learning’ and ‘hallucination’, suggesting that AI is more capable than it really is, makes that even more challenging. And the anthropomorphization is often deliberate. For example, if you point out to ChatGPT that it was wrong, it replies, “I’m sorry, you’re absolutely right.”
I often use a metaphor from the 1950s: smoking. If you were stressed, your doctor might have said, “Take up smoking; it’ll help you with your stress.” I think we’re in a parallel situation today. Teachers may find interesting things to do with AI, but they don’t understand the human dimension of AI: the background and history of where these technologies come from, or the impact of AI on all of us. They don’t understand how, for example, ChatGPT’s training relied on Global South workers who were exploited, with massive health consequences; they don’t understand the environmental impact—massive energy and water use, and the generation of greenhouse gases. We need to support teachers, help them understand this human dimension of AI, so they can make properly informed and not simply instrumental choices. An AI tool might appear useful in a classroom, but that doesn’t mean it’s transferable, sustainable, or even ethical.
A.S.: As humans, we tend to trade autonomy for convenience. Industrial robots took the work of our hands, making life easier. With AI, we risk outsourcing human capabilities. The real risk is not that AI becomes more human; it’s that we give up human qualities to become compatible with how AI works. When Google Maps came into play, we lost our sense of orientation. Building agency is one of the most important roles of education, because schooling in the past was largely about compliance. Teachers need to be not just good instructors but great coaches and mentors, creative designers of innovative learning environments.
We have data on the take-up of AI by teachers across countries. In some high-performing systems, teachers are cautious—Japan is at the extreme. In the United Arab Emirates, teachers engage readily: “We didn’t need to become great teachers; we can now just use AI and do our work.” Many teachers use AI in uncritical ways because it creates nice lesson plans and marks work. Convenience is powerful. One of the most striking findings from PISA 2022 was the steeply negative relationship between student smartphone use for leisure and cognitive, social, and emotional outcomes. There’s no country that escaped it.
W.H.: Back to the smoking metaphor: the world transformed and today far fewer people smoke because we became aware of the damage smoking causes. I wouldn’t suggest that information alone is sufficient, but it is fundamental. For example, most teachers don’t realise that AI standardizes everything. From a creativity perspective, teacher-plus-ChatGPT might be more creative than teacher alone; so, I understand why an individual teacher gets excited. But step back and look at how these technologies work. Across the world, teachers are using ChatGPT and producing lesson plans that are so similar, so standardized.
A.S.: AI encourages convergent thinking and discourages divergent thinking—something that dehumanizes us. We are here to connect the dots, to ask where the next idea comes from, not to repeat what correlation suggests. In PISA we found very little relationship between class size, spending per student, or even learning hours and the quality of outcomes. The single biggest predictor is what students perceive to be the relationship with their teacher. My favorite question: ask 15-year-olds, “If you come back to your school three years from now, do you think your teacher will be excited to see you?” When students say yes, they’re optimistic, looking forward, interested in future learning. When they say no, they just do what the system requires.
If AI does the marking, you no longer know what your student does or doesn’t know. Teachers could become operators of a system rather than designers of the learning context. Yet capable teachers can leverage these tools to enhance what they do. A lot depends on involving teachers in the design of the future school and its technologies. The industrialization of schooling has created many problems, and it is that routinized work that AI is now taking over. We have made this field very convenient for AI. We need to bring back the core of what it means to be human. It matters a lot now.
W.H.: I’ve really enjoyed the conversation because there’s a lot more agreement than I anticipated. But I still think we have a long way to go. Teachers are floundering. Universities are floundering. My own university changed assessment so that 50% has to be “AI-resilient,” without anyone knowing what that actually means. So, we need more conversations like this, focusing on less obvious aspects—the impacts on human rights, resilience, and the other issues we’ve discussed—that are not really understood. If this doesn’t happen, the consequences will be profoundly negative. There’s already growing evidence that young people are becoming over-reliant on the tools and that the use of GenAI is undermining learning.
We also need teachers to be aware of the talk in the financial markets that the GenAI bubble might soon burst, like the dot-com bubble did some years ago. And if that happens, although we don’t know how things would turn out, it’s likely that a lot of the tools that teachers and students are becoming increasingly dependent on will disappear. AI and education is increasingly complicated, so we need to keep talking.
Andreas Schleicher is Director for Education and Skills at the Organisation for Economic Co-operation and Development (OECD).
Wayne Holmes is Professor of Critical Studies of Artificial Intelligence and Education at University College London. He holds a UNESCO Chair in the Ethics of Artificial Intelligence and Education (International Research Centre on Artificial Intelligence, under the auspices of UNESCO, Jožef Stefan Institute, Slovenia).
The discussion was moderated and edited by Angelos Alexopoulos.