It is often thought that the aim of digital assessment is efficiency: large numbers of students can be tested at the same time, correcting is less time-consuming and there is much less paper involved. While this is true, the potential that digital assessment offers for improving education is equally important. In the image below you see four perspectives on digital assessment. In my presentation I want to focus on the need to move towards the perspectives of ‘improving quality’ and ‘enhancing learning’.
Adaptive assessment or, in full, Computer-based Adaptive Testing (CAT) is the most efficient way possible to determine someone’s skills. In contrast to primary and secondary education, CAT is still barely used in tertiary education in the Netherlands. The exception is the entry exam for language and maths for the PABO (basic teacher training), where it has been used successfully for 10 years already. It has also recently been adopted for the interuniversity progress test in medicine. Using CAT you can produce a result with the same precision using fewer than half of the original 200 questions. This means that you can better estimate the student’s level, with less effort from the student and the lecturer. Developing CAT, however, demands item banks with large quantities of questions and large numbers of students taking the test, in order to create high-quality, well-calibrated item banks. This makes CAT expensive.
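The core idea of CAT can be illustrated with a toy sketch: the system keeps a running estimate of the student’s ability, picks the unused question whose difficulty is closest to that estimate, and nudges the estimate up or down depending on the answer. This is a deliberately simplified illustration, not how production CAT systems work; real systems use Item Response Theory with statistically calibrated item banks, and the item difficulties and update rule below are invented for the example.

```python
def select_item(difficulties, ability, used):
    """Pick the unused item whose difficulty is closest to the current ability estimate."""
    candidates = [i for i in range(len(difficulties)) if i not in used]
    return min(candidates, key=lambda i: abs(difficulties[i] - ability))

def run_cat(difficulties, answer_fn, n_questions=8):
    """Administer n_questions adaptively; return the final ability estimate."""
    ability = 0.0   # start at an average ability level
    step = 1.0      # shrink the step each round so the estimate converges
    used = set()
    for _ in range(n_questions):
        i = select_item(difficulties, ability, used)
        used.add(i)
        correct = answer_fn(difficulties[i])
        ability += step if correct else -step
        step *= 0.7
    return ability

# Example: a simulated student who answers correctly whenever difficulty <= 1.5
difficulties = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
estimate = run_cat(difficulties, lambda d: d <= 1.5)
```

Even this crude version shows why adaptive testing needs fewer questions: after a handful of items the estimate hovers around the student’s true level, so very easy and very hard questions are never wasted on them.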
Therefore, in this presentation I would like to share some other examples of how digital assessment can adapt to the student’s level and really improve the way students learn. The key to improving the way students learn is to provide timely and meaningful feedback, and digital assessment is ideal for this. In the Netherlands, for example, an extensive item bank for 3-D radiology has been created. Not only does this item bank provide authentic test methods – the test greatly resembles the day-to-day work of radiologists – it can also be used in an interactive game (X-game) where students have the opportunity to practice their radiology skills. The game provides immediate feedback and points to background information to help students better understand why an answer is right or wrong.
Another fine example of digital assessment for learning is the four-step model developed by Sharon Klinkenberg at the University of Amsterdam. Students are provided with digital instruction and on-campus working groups, and they perform digital exercises regularly. The reports generated by these tests inform students and teachers about their performance, and are always accompanied by personal feedback to the student. You can see straight away where the specific gaps in a student’s knowledge are, and teacher and student can then address them immediately.
These and more examples can be found in my presentation, and also in the thematic issue referenced below. Key factors for digital assessment to improve learning are:
- Make use of digital possibilities to create authentic learning
- Build in timely and immediate feedback
- Give students and teachers the opportunity to evaluate the outcomes and to remediate.
In this thematic issue, professionals from a variety of disciplines illuminate innovations in digital assessment from different angles. We also take a look at practice, drawing on SURF projects and practical examples. All of this with the aim of informing and, above all, inspiring you to create customised education. The combined experience of SURF and the institutions for higher education over the last few years has shown that collaboration is one of the most important driving forces that make innovation and real progress possible. https://www.surf.nl/files/2019-04/thematic-issue-innovations-in-digital-assessment_web.pdf