Embrace the Change – Preparing Lecturers and Students for Digital Remote Assessments

After digitalization transformed teaching and learning in the 2000s, its fast-moving effects have increasingly reached the field of university assessments. In German higher education, more and more assessments are being moved to digital formats, as these provide major benefits such as automated scoring, scalability, and automated quality control. Yet, digital assessments were mostly seen as a supplement to prevailing assessment formats such as conventional paper-and-pencil exams.

When the pandemic hit German universities in early 2020, digital assessments suddenly came into the spotlight, as they were expected to allow lecturers to keep assessing and students to keep advancing their degrees even under new and difficult circumstances.

Universities quickly set up ad-hoc solutions for digital remote assessments, often using the built-in functions of their learning management systems to provide basic assessment features for the unexpected remote situation. Yet, the past year has shown that digital remote assessments are far more complex than merely providing remote access to conventional exams, and that they differ distinctly from on-site digital assessments.

When the E-Examinations unit (EEE) at Freie Universität Berlin (FUB) was tasked with setting up a concept for digital remote assessments, we were able to build upon our experience with on-site digital assessments. Freie Universität Berlin has been working in the field of digital assessments since 2005, and a central examination platform has been available to lecturers and students since 2011. Over the course of ten years, more than 100,000 digital assessments have been conducted, and two digital assessment centres are in operation.

Yet, when switching to digital remote assessments, we had to rethink many aspects of our on-site strategy. Supporting lecturers and students became central to a successful transformation.

Adapting Assessment Literacy for the Remote Setting

On-site digital assessments are, for the most part, adaptations of conventional assessments, often consisting of selected-response tasks such as multiple-choice questions. Lecturers benefit from reduced grading times due to automated scoring. The on-site format has hitherto rarely been used for digital-specific tasks involving coding tools or videotaped case studies. However, the supervised on-site setting allows lecturers to validly assess, on a large scale, the entry-level taxonomic competencies mainly required in introductory courses.

With the switch to digital remote assessments, measuring the required competencies becomes more complicated. Due to the strict data privacy rules (GDPR) within the European Union, the use of proctoring tools is off the table in most German federal states. In an unproctored setting, conventional closed-book scenarios designed for supervised on-site assessment of entry-level taxonomic competencies are vulnerable: their answers can often be found with a quick internet search. Moreover, such scenarios invite collusion amongst students, as they complete the assessments without oversight. This would hinder a valid assessment of students’ competencies and thus render the assessment practice futile.

Therefore, we worked closely with lecturers to readjust to the new setting by adapting their didactic approaches and making use of technical solutions to provide valid and legally sound assessments. By drawing on learning taxonomies such as Bloom’s taxonomy and its revision (Anderson & Krathwohl, 2001) and fostering a practice-orientated use of knowledge, lecturers were able to construct tasks that measured students’ competencies by requiring an application and transformation of content rather than mere recognition or reproduction. For example, instead of asking students to define a concept, a task can present an unfamiliar case and ask them to apply the concept to it. This can reduce the searchability of assessment tasks significantly (Steger et al., 2020, p. 179), as notes or a quick Google search alone are not sufficient to complete the assessment. When an assessment is fully converted to assess higher-level taxonomic competencies, lecturers can fully embrace the new format: by explicitly allowing notes and other resources, they can encourage an application of familiar content to new contexts. Generating these complex tasks is time-consuming. Yet, they allow for an assessment of the required competencies even in remote settings and embrace the actual goal: a practice-orientated application of knowledge.
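
One way to operationalize such a taxonomy in an item bank is to tag each task with its taxonomic level and filter on those tags when composing an exam. The following Python sketch is a minimal illustration under that assumption; the tags, data layout, and the choice of which levels count as robust for remote settings are our own, not taken from any particular assessment platform.

```python
# Tasks tagged with a level from the revised Bloom taxonomy
# (remember, understand, apply, analyze, evaluate, create).
TASKS = [
    {"id": 1, "level": "remember", "question": "Define concept A."},
    {"id": 2, "level": "apply", "question": "Use concept A to solve case X."},
    {"id": 3, "level": "analyze", "question": "Compare approaches A and B for case Y."},
]

# Levels assumed robust for unproctored remote settings: tasks that demand
# application or transformation of content rather than recall.
REMOTE_SUITABLE = {"apply", "analyze", "evaluate", "create"}

remote_tasks = [t for t in TASKS if t["level"] in REMOTE_SUITABLE]
print([t["id"] for t in remote_tasks])  # -> [2, 3]
```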

To reduce collusion amongst students, lecturers were encouraged to utilize the randomization functions provided by the assessment software. By randomizing the order of tasks and answer options, it already becomes less attractive for students to spend time searching for the same tasks as their fellow students. Even more effectively, lecturers can use randomized task selection and altered exam versions to raise the bar for collusion even higher. When lecturers create a sufficiently large item pool, each student receives a highly individualized version of the exam, rendering attempts at collusion largely futile in time-limited assessments. Again, this preparation process is time-consuming, as lecturers have to provide different variants of the same assessment while making sure that the same learning outcomes are measured. Yet, specifically designing modular tasks and using interchangeable building blocks can significantly reduce the time needed to construct multiple exam versions.
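
To illustrate the principle, here is a minimal Python sketch of how an individualized exam version could be assembled from an item pool. The pool structure, the function name, and the per-student seeding are illustrative assumptions for this sketch, not the interface of the assessment software used at Freie Universität Berlin.

```python
import random

# Illustrative item pool: for each learning outcome, a list of interchangeable
# task variants that measure the same outcome (all content is placeholder).
ITEM_POOL = {
    "outcome_1": [
        {"question": "Apply concept A to case X ...", "options": ["...", "...", "..."]},
        {"question": "Apply concept A to case Y ...", "options": ["...", "...", "..."]},
    ],
    "outcome_2": [
        {"question": "Interpret data set P ...", "options": ["...", "...", "..."]},
        {"question": "Interpret data set Q ...", "options": ["...", "...", "..."]},
    ],
}

def build_exam(pool, seed):
    """Assemble one individualized exam version: pick a random variant per
    learning outcome, then shuffle the task order and the answer options."""
    rng = random.Random(seed)  # per-student seed -> reproducible version
    tasks = [dict(rng.choice(variants)) for variants in pool.values()]
    rng.shuffle(tasks)  # randomize the order of tasks
    for task in tasks:
        # rng.sample returns a new shuffled list, leaving the pool untouched
        task["options"] = rng.sample(task["options"], len(task["options"]))
    return tasks

# Each student identifier yields a distinct but reproducible exam version.
exam_for_student = build_exam(ITEM_POOL, seed="student-42")
```

Seeding the generator with a student identifier keeps every exam version reproducible, which can matter when grading or technical incidents have to be reviewed later.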

Thus, it becomes evident that constructing assessments for digital remote settings requires a different approach to didactics and technology. The character of a remote assessment differs noticeably from its on-site counterpart. Making use of more complex features of assessment technology becomes more important for lecturers and should be seriously considered in the assessment design and preparation process.

Adapting Assessment Logistics to Support Students

Apart from working with lecturers to adapt assessment literacy for the new situation, the move to a large-scale remote environment requires more extensive support for students as well.

During an exam, students rely on a controlled and well-planned procedure. Maintaining students’ trust and acceptance is especially important for digital formats due to their novel and unfamiliar character. Any additional stress, such as delays or malfunctioning computers, can lead to a loss of trust in the procedure.

With digital on-site assessments at Freie Universität Berlin, the assessment process was designed to provide a carefree experience for students. The digital exam was checked extensively by both the lecturer and the E-Examinations unit to ensure its correct implementation and functionality, computers and the assessment software were prepared in advance, and every student received their own workstation, including specialized stations for students with disabilities. Students then simply sat down at a computer and completed their exams.

With the switch to a remote setting, students suddenly became an active part of assessment logistics. As infrastructure shifted to students’ homes and computers, students themselves now had to:

  • ensure that the software needed for the exam would run well on their computers (a hypothetical self-check is sketched after this list),
  • master the use of the assessment software itself, and
  • help themselves in case of a device failure or malfunctioning software during the assessment.
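
The first of these steps can be partially automated. The following Python sketch shows what a minimal pre-flight self-check could look like; the checked requirements, browser names, and exam-server host are invented for illustration and do not describe the actual setup at Freie Universität Berlin.

```python
import platform
import shutil
import socket

def check_system(exam_host="exam.example-university.de", port=443):
    """Run a few basic readiness checks before an exam (illustrative only)."""
    report = {"operating_system": platform.platform()}
    # Is one of the supported browsers installed? (invented requirement)
    report["browser_found"] = any(
        shutil.which(browser) for browser in ("firefox", "chrome", "chromium")
    )
    # Can the (hypothetical) exam server be reached over HTTPS?
    try:
        with socket.create_connection((exam_host, port), timeout=5):
            report["server_reachable"] = True
    except OSError:
        report["server_reachable"] = False
    return report

if __name__ == "__main__":
    for check, result in check_system().items():
        print(f"{check}: {result}")
```

At FUB, this role was filled by the mock exam and the guides in the central hub described below; a script like this merely illustrates the kind of checks involved: surfacing problems before the real exam starts.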

This active role puts further stress on students who are already under pressure due to the assessment situation. Students’ competencies regarding the use of technology are heterogeneous, and when technically inexperienced students have to ensure a correct set-up by themselves, trust in the procedure can be at risk.

Thus, the remote setting demanded a robust onboarding and support system that guides students through each step of the process. At Freie Universität Berlin, we set up a central hub for students, providing them with video tutorials, FAQs, the required software, and troubleshooting guides structured along each step of their exam preparation [1]. To get familiar with the software and assessment environment, students were provided with a mock exam using the real assessment software, simulating the assessment situation. At the start of the real exam, students met in a web conference with their lecturer and tech support to walk through the exam procedure and receive final instructions for solving technical problems on their own. In addition, students could chat with the lecturer and tech support during the assessment.

Yet, it was expected that the shift from university infrastructure to a bring-your-own-device (BYOD) scenario would lead to complications due to the high number of different devices and operating systems. Students would experience anomalies and technical difficulties caused by underpowered hardware and individual software settings. Thus, a service was set up with central e-mail and telephone support as well as regular consultation hours, providing students with a point of contact throughout the entire assessment preparation.

The last twelve months have shown that these support structures for students were critical. With an average of 1,000 page visits per day and more than 1,200 e-mail requests throughout the year, students used the support services extensively. Did the concept fulfil its purpose? With a drop-out rate of 1 to 2% during assessments, students seemed well prepared for the remote setting, and with over 35,000 completed exams over the course of two semesters, the support logistics provided a scalable and robust solution to the new challenge.

Digital Remote Assessment – Beyond the Pandemic

The pandemic turned out to be a huge catalyst for digital assessments in Germany. Their high scalability makes digital remote assessments an attractive option for assessing large groups of students even beyond the pandemic. Yet, as demonstrated, digital remote assessments constitute a unique format that requires a specific setup. Moreover, legal questions such as equality of opportunity and data protection remain to be fully answered. Thus, for digital remote assessments to prevail, the conditions for their success under post-pandemic circumstances have yet to be established: their characteristics will have to be embraced, and conventional workflows will need to be adapted carefully. If successful, digital remote assessments may provide a viable option for conducting large-scale assessments as part of the ongoing digital transformation in higher education.

Written by Nils Hernes and Alexander Schulz for OEB21. We look forward to learning more about their work in their talk on “Transforming Digital Examinations for Remote Settings”.

  • Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Boston: Addison-Wesley.

  • Steger, D., Schroeders, U., & Gnambs, T. (2020). A meta-analysis of test scores in proctored and unproctored ability assessments. European Journal of Psychological Assessment, 36(1), 174–184. https://doi.org/10.1027/1015-5759/a000494

  • [1] The support hub can be found at https://wikis.fu-berlin.de/x/-QbiPg
