Why does it feel like so many discussions of ethics take it for granted that we all agree on core values? Discussions of, say, the ethics of learning analytics so often skip over the part where we air our different perspectives and conflicting values, and jump straight into a list of proscribed behaviours.
What has happened to ethics? In today’s legalistic and formalized world, ethics seems to have been reduced to a list of things you can’t do. Don’t be biased. Don’t violate privacy. Don’t share copyright content. Where is the joy in ethics? Where is the idea that we can do some good in this world, and that ethics is the guiding light that shows us how?
Ethics is not just law. Nor is it just a list of rules or commandments. Ethics speaks to what we aspire to when we make our way in the world: about why we build technology, about why we think learning is important, about the sort of people we would like our staff and students to become. If we have not taken the time to think of these things, then we have not become ethical at all.
In learning technology, the ethical questions have risen to a crescendo in recent years. How do we manage student data and track learning activities? What role should artificial intelligence play in the design and recommendation of learning resources? Does the use of technology depersonalize learning? What are the wider impacts on society of an education conducted entirely online?
These questions, and others like them, take us beyond the traditional bounds of ethics. There was a time when we could approach ethical questions in a purely rational manner, appealing to our sense of ethics as a form of reason. Once the consequences of our actions were made clear, we needed simply to appeal to our understanding of our rights and responsibilities as reasonable people, and the ethical answers would emerge.
Today it’s not so clear. We understand that many other factors govern our sense of right and wrong. We are not merely creatures of reason, but of passion, empathy, care and consideration. Our loved ones matter to us in a way that cannot be measured with an austere calculation. And our global society forces us to see these things not only from our own personal or social perspective, but from one far removed from the customs and traditions in which we were raised.
And it’s not theoretical, it’s real. In learning technology, we may be building systems that teach pilots how to fly drones that will be used in an ongoing war. We may be employing surveillance systems to identify plagiarism and cheating in exams. We may be designing user interfaces for a company’s anti-union campaign. We may be authoring a learning recommendation system that discriminates against identifiable minorities. We may be creating learning resources that will be used by a private company to undermine public education.
I have my own views on all of these, though recent events have shaken some of them. I wrestle with the fact that there’s no right answer, and yet my position demands that there be an answer. Over the last few years, I have been exploring these questions under the heading of ‘ethics, analytics and the duty of care’. The investigative part of this work broke down into three major components:
- What are we using learning technology to accomplish, and what ethical issues and risks have been identified as a result? This was not a speculative exercise; it involved scouring the literature to be as comprehensive as possible.
- How do analytical learning technologies function, and what decisions are being made by researchers and technicians as they develop new algorithms, models and systems?
- What are the ethical perspectives from which these questions are typically considered? I identify four major approaches: ethics based on moral virtues, good and bad consequences, duties and obligations, and social contracts.
When all is said and done, where questions remain that cannot be answered, I ask: what does our own moral sense tell us the answer should be? I don’t attempt to define what that moral sense is – after all, it’s probably different for every person – but I consider how it develops, the role we as educators play in its formation, and how it informs our individual senses of right and wrong.
My own answer is not what many want to hear, I think. It is that if we want people in the world to be more ethical, then we need to be better role models and be more ethical ourselves. This is because no reasons, no arguments, and no rules will shape a person’s moral sense; only what they have felt and experienced in their own lives, from the people they believe to be exemplars of moral virtue, will do so.
I’ve compiled the work I’ve done to date in an open online course, and I’m in the process of bringing these thoughts together into a final report on my inquiry. I hope you’ll join me through the final stages of this journey.
Written for OEB Global 2022 by Stephen Downes
Stephen is speaking at this year’s OEB conference. Find out more about his presentation here: https://oeb.global/programme/agenda/oeb-22/sessions/23723