An Unscientific Look at European-level AI Discussions

Introduction

To say that AI in education is a hot topic is an understatement. In my work with the European Digital Education Hub, I spend a lot of time exploring it together with practitioners from all over Europe, from all sectors of education. Interesting and difficult questions are raised constantly, and whenever a topic of "AI and…" pops up, our community shows a disproportionate level of curiosity and engagement. Everyone seems to want to be part of the conversation.

But what exactly are the topics that engage? Well, that is hard to say, and I do not have any scientific report to refer to. What I do have is several years of webinars, working groups, workshops and more, and, based on this, a feeling of what interests digital education practitioners who like to connect with their European peers. So, while being fully aware of the limitations of anecdotal evidence (a friend of mine relied on it once and it was a mess), I would like to present here an unscientific picture of the feeling "out there" in the Europe-wide discussion spaces where AI in education is being examined and debated in 2025.

Balance between optimism and caution

First of all, I would say that our European community lands neither on the "AI is great" nor on the "AI is bad" side. Astonishment at the opportunities is mixed with great concern about a wide range of possible problems – several of which are mentioned below. Very few people seem willing to rush forward at the speed of light, and equally few seem to want to ban all AI from all classrooms. The fact that there is a mix of optimism and caution is, I think, pretty healthy.

The environmental impact is often mentioned as a point of concern that needs to be dealt with. "There is a need to make more visible the carbon footprint of AI and relationship with learning for sustainability – both positive and negative", wrote one participant in our consultations for the AI Literacy Framework (which will contribute to the PISA 2029 assessment), and this is just one of many concerned voices. At the same time, there is a readiness to experiment, to learn from what works, and to remain critical where needed. Many in our community advocate a "responsible adoption" approach that embraces innovation while not neglecting ethical and pedagogical considerations. This fairly balanced stance is a pretty good starting point for further discussions.

Impact on learners

There seems to be a fear of what the use of AI does to our learners. The fact that lazy adults like myself (and perhaps like you, reading this article?) use AI for our work is not that big of a problem; we hopefully learned how to use our brains before November 2022. One teacher in higher education said: "In fact, it shocked me to see damage of knowledge in semester three that goes back to earlier semesters where they [the students] avoided learning by bypassing the pain with AI".

Whether this holds true for the general student population is an open question – this study from last year suggests that generative AI can indeed be harmful to learners, but the question is far from settled and is surely an interesting one. The answer "Oh, but Socrates already complained about the lazy youth" is not a satisfactory one – in fact, it is just as sloppy as a ChatGPT-generated answer to a high school test. Nor is it wild speculation to suggest that learners who have not yet learned to study the hard way will learn less when they (almost) always have an easy way to complete their tasks. The discussion on how to deal with this deserves to be taken seriously.

Uncertainty about legal framework

The AI Act was formally adopted last year, and some of its specific rules have gradually started to apply during 2025.

A representative of a national-level school leaders' association pointed out that their members feel overwhelmed and a bit lost. As she put it, the message her members have received is: "Here are a bunch of new rules, on a topic that did not even exist a few years ago. It is extremely important that you follow them to the letter, you are completely responsible for not messing anything up, and if you do, you will be fined millions of EUR". How are they supposed to deal with that? The feeling was echoed by a higher education practitioner from a different country, who at an event remarked: "It is now three years since the new ChatGPT model was launched, and we are still not sure what we are allowed and not allowed to do". The frustration and confusion are understandable, I think. The legal framework is tricky, and many institutions have been slow to set clear rules regarding the use of AI.

The examples above refer both to the very broad level of EU legislation and to country- or institution-specific rules, and it may be hard to say something super smart that applies to both. But I do think that both convey a feeling shared by many: that the rules regarding what is allowed and what is not are sometimes difficult to grasp. Regarding the AI Act, it must be said that information is available online, both from official EU sources and from elsewhere. Just google it and you will find plenty. But I am also sympathetic to the poor school leaders who feel that an unreasonably heavy burden has been placed on them.

Scepticism towards American products

This might have more to do with the general situation in the US and less with AI in particular, but quite often I have heard critical remarks about relying on American AI products. In order to approach this discussion – and I am sure this will be widely discussed at different OEB sessions – we need to first acknowledge just how much better American tech products are. There is a reason why I mentioned Google in the last paragraph, and not… eh, well, whatever the European equivalent would be. As long as this is the case, and I suspect it will be so for quite some time, a satisfactory solution will be hard to find.

What about the European alternatives? I am out on thin ice here, but my feeling is that initial enthusiasm for European tools such as Mistral and Apertus quickly faded away once it became clear that they are – and I am very sorry to say this again – not as good as ChatGPT or other American products.

Application and funding

Finally, outside of my work on the Hub, I have spent the last few years evaluating education project proposals. This will not come as a surprise to anyone who has been even remotely involved in application-writing, but the impact AI has already had on the process is huge. Application-writing is a very time-consuming task, and when the cost of producing an ok-but-not-great text on any given topic, of any given length, drops to zero, it is understandable that AI tools will be used to reduce the work. However, many applicants are clearly unaware of how important it is that the AI-generated text makes sense in the overall application, and this unawareness has led to a high number of low-quality applications. This is a bigger topic, but if you have the misfortune of being involved in the business of writing applications, my advice is simple: go ahead and use AI to fine-tune and proofread your text, but do not rely on it to generate answers to difficult questions in an application form.

As mentioned in the introduction, the topics above seem to be some of the most widely discussed ones in the Europe-wide discussion spaces that we are running. There are surely more of them, and I am looking forward to discussing them with other OEB participants in Berlin.

Written for OEB 2025 by Rasmus Benke-Aberg.

Join Rasmus for his presentation at OEB25, titled: European Digital Education Hub.
