Learning Analytics and AI: Questions of Ethics and Trust

For most of my career in US state government I was housed in, under, or adjacent to the Human Resources (HR) department. I found this to be a bit of both blessing and curse. The curse: Much of my work had to do with creating training on dry-as-toast compliance and bureaucratic procedure topics, with colleagues less interested in “learning” than in providing content, often in the form of reading text-based slides aloud to a captive audience. The blessing: This helped me develop a reputation as someone who could find novel or interesting things to do with dry topics, and it gave me an understanding of why—boring or not—they were important.

Another positive takeaway: I saw how frequently the HR side of things was overlooked. I love new technologies and admit to rolling my eyes at some of the attempts to control humans (“We should have lawyers vet every comment before posting!”). But many times I have watched a new technology praised to the heavens by L&D folks with little regard for the legal and paperwork realities involved.

This has been especially noticeable with one of the hottest recent topics: learning analytics and AI. The affordances and promises are dazzling, from increased efficiency through better choices of materials and approaches, to personalized learning paths, to predictions of future performance. But the topic also raises a number of questions, often rooted in the philosophical or policy arenas:

  • Do learners really understand what is being tracked, and when, and who has access to that data?
  • How long are items like test scores kept on file? Do workers have the right to have such material removed?
  • Do people know when they are being “personalized to”? Does such personalization qualify them, or not, for certain career tracks or other opportunities?


Concerns fall into several areas:


Informed Consent

Many of us agree to “tracking” without much understanding of what that actually means. In an event likely familiar to readers, I recall one day buying a new tower fan from Amazon. I’d be using it on the porch and thought it would be nice to have a cover for it rather than remembering to bring it inside. Within minutes Facebook showed me an ad for…tower fan covers. I had only thought about a cover, but Facebook knew I had ordered the fan online and anticipated the next thing. Employees may not understand exactly what tracking they have given permission for or how it might be used. As Prinsloo and Slade (2017) note, the option to opt in or out is just too simplistic.


Ownership & Privacy

The surge in remote work brought with it new conversations around who “owns” a worker; the age of learning analytics and AI asks us to consider who owns data. Does everything belong to the organization to do with as it pleases? Does the learner have any ownership or agency regarding the data tracked, collected, and stored about them? Beyond that are issues of security: Consider how many data breaches we see every year. What are organizations doing to keep worker data safe? And where do we draw the lines? For instance, EdWeek reports that many tech companies profit from student data:

Instead of selling things like names and birthdays, the companies charge third parties for user profiles—essentially information they have gleaned from tracking what a child or their family clicks on and looks at, and for how long. The data can include both activity within the app and sometimes across the broader internet, Kelly explained.

That allows the companies to get a “very specific, individualized list of preferences and behaviors” for its users, even if there’s no name, phone number, or IP address attached, Kelly said.

That’s how it’s possible to follow a user from app to app or website to website and continue to encounter “targeted ads or to pop up notifications or persuade them to purchase other things” that might not even be related to the content of the app that sold their data. That means students see targeted ads even when using apps or platforms chosen by their school or district.


Transparency

Apart from the what and who of considerations about learning analytics and AI are the why and how: Why is the employee personality profile so important that it is stored and kept for years? How is that information being used: to deny a training opportunity, or to place someone on a specific career path? Why is an employee being nudged toward or away from a particular interest or behavior? And apart from questions of the difference between “personalization” and “manipulation”: Is the worker aware of these things? What are they told, and when?


Predicting and Profiling: Bias & Fairness

Dealing with data brings with it the risk of unfairly stereotyping, overgeneralizing, or unintentionally excluding members of the population. Historical data gathered completely out of context may be used to determine that someone is a low performer, does not have management potential, or is a target for placement on a leadership track. Some data sets may unintentionally exclude, for instance, those with physical disabilities or those who work entirely remotely.


And more…

On top of the array of issues regarding ethics and AI is the need for frank conversation about data. There is danger in putting blind trust in data. Across the literature there is tension between dataphiles, who believe everything can be quantified, and data skeptics, who question whether every element of human emotion and learning can be captured as discrete bits of data. Somewhere in between are interesting conversations about, for instance, whether reliable inferences about humans can be made from activities like tracking eye gaze while completing an online program. And as I have written elsewhere: we need to be careful about letting our measures become targets: quiz scores and the like are only proxies for the learning we hope to support.


Join me at OEB!

As you can see, discussion of ethics related to AI and learning analytics brings with it more questions than answers. As L&D practitioners we need to be part of this discussion across our organizations as we help them move into this new world. Join me at OEB at my session, “Learning Analytics and AI: Questions of Ethics and Trust,” for a lively conversation about some of these questions, and some possible answers.



Written for OEB Global 2024 by Jane Bozarth.
