The possibilities of using data to inform educational policy and practice have generated enormous enthusiasm in recent years. Behind the hype, however, data use in education is already raising significant concerns and controversies. Education technology developers, policymakers and researchers need to work out how to deal with the consequences of data, algorithms and artificial intelligence.
The year 2018 has revealed the extent to which data collection, analysis and use have penetrated people’s everyday lives, with troubling consequences. The big story is the involvement of Cambridge Analytica in voter micro-targeting on Facebook during key political events. Other controversies include an Uber automated vehicle running down a pedestrian, and the expansion of China’s ‘social credit’ system to track citizens’ behaviour online.
Fortunately, there is growing awareness of the deep social, political, cultural and economic consequences of developments in data analytics, automation, algorithmic decision-making and artificial intelligence. A host of new institutions has emerged to take on these challenges. Many of them have an overt commitment to dealing with issues of ethics, fairness, surveillance, bias, discrimination and safety.
For example, the AI Now Institute is an interdisciplinary research centre dedicated to understanding the impact of artificial intelligence on society. The Data & Society research institute focuses on the social and cultural issues arising from data-centric technological development. The new Leverhulme Centre for the Future of Intelligence explores topics from algorithmic transparency to the implications of AI for democracy. These and other institutes are building the knowledge base needed to understand the impact of data, AI and algorithms, in order to pre-empt the potential dangers they pose to society.
The field of education needs to recognize and address the risks raised by the race to make teaching, learning and policy more data-centred and digital.
Educational data penetrate every level of the system. Government departments depend on data from assessment scores to formulate policies. Schools are held accountable based on student test data. New learning analytics systems are spreading across higher education. Schools are using systems to collect behavioural data. Adaptive technologies are said to be ‘personalizing learning’ based on analysis of individual-level data.
For some, educational data are an opportunity to overhaul and reform an outdated system. Data can be used to evaluate teachers and monitor students in real time, and inform immediate intervention where necessary. Performance data analytics and business intelligence software for forecasting and planning are reshaping higher education too.
Others see educational data as a source of profit. Huge amounts of funding have been invested in education technology in recent years, particularly for personalized learning technologies. Edtech has become a highly lucrative industry, supported by some of the world’s wealthiest philanthropists.
All this data activity, however, comes with risks attached. One risk is security. School cybersecurity is becoming a major problem, with growing numbers of hacking incidents targeting school systems. Successful ed-tech platforms have proven vulnerable to data breaches too.
Another risk is reliability. Recently, over 7,000 international students studying in the UK had their visas revoked after voice analysis software from a global assessment company wrongly concluded they had cheated on an English proficiency test. In the US, teacher evaluation algorithms have been involved in decisions to terminate employment contracts, even though deep flaws have been found in the underlying ratings formula.
Questions also need addressing about the conceptions of learning embedded in data-driven education technologies. The evidence of the educational benefits of personalized learning programs, for instance, remains limited, yet the market for these products is growing. Similarly, many education technologies are designed to measure qualities of learning such as ‘growth mindset,’ though psychologists have begun to argue the evidence does not support such practices.
The growing enthusiasm to embed AI in courses and classrooms also raises a range of potential implications. If AI can outperform humans on knowledge retention tasks, what should the curriculum look like? While few AI supporters aim to automate teaching entirely, it seems likely that teachers are going to have to figure out new co-working arrangements with robots. New school-evaluating algorithms are already being tested to help in the process of inspection.
The ethical stakes are high when automated systems are involved in deciding what counts as a ‘good’ student or teacher, a ‘successful’ university or school.
The range of implications of data-centred education should prompt us to think harder about the risks, ethics and potential consequences of introducing new technologies into educational policy and practice. At the very least, educators, education researchers, policymakers and developers should be looking to institutes such as AI Now and the Leverhulme Centre for the Future of Intelligence to consider the potential risks and implications of data-driven education. Or maybe education needs its own interdisciplinary, cross-sector centres to deal with emerging controversies and risks before they impact on students, teachers, schools and universities at large scale.
Written by Ben Williamson
Hear from Keynote Speaker Ben Williamson at OEB’s morning Plenary on Friday, December 7, 2018.