The robo-readers are coming



They cannot ‘know’ students, think outside the box, or reward creativity and innovation. Their feedback is pragmatic, standardised and – some would say – soulless.

 

Despite this, recent studies suggest that students would rather submit early drafts of their essays to robo-readers than to human teachers.

 

By Steven Blum

 

While a teacher’s feedback on an essay is often viewed as ‘punitive’, discouraging further revision, professors say that students are willing to revise their essays multiple times when the work is being reviewed by a computer.

 

“They end up writing nearly three times as many words in the course of revising as students who are not offered [automated essay scoring software]”, according to an article in The New Scientist. As a result, the quality of their writing improves.

 

The studies suggest that the impersonal nature of essay-grading software is also its greatest strength: the less personal the feedback, the freer students feel to try new things and submit draft after draft.

 

 

But can robo-readers give more than perfunctory feedback? Can they assess meaning and voice, tone and subjectivity?

 

It depends on whom you ask.

 

Can Software Learn to Read?

 

Today’s robo-readers use artificial intelligence to ‘learn’ how to grade essay questions. First, thousands of student essays are scored by hand and loaded into the system; then the software learns what components make an outstanding essay.
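To make that training step concrete, the sketch below shows the general idea in miniature: a handful of hand-scored essays, a word-frequency representation of each text, and a regression model that learns to map those features onto the human scores. This is a hypothetical illustration only; the scikit-learn pipeline, the example essays and the scores are assumptions made for the sketch, not the workings of edX or any commercial grading engine.

```python
# A minimal, hypothetical sketch of the training process described above,
# NOT the actual edX or commercial grading software. The library choice
# (scikit-learn), the example essays and the scores are all assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Hand-scored essays (text, human score out of 6) act as the training data.
scored_essays = [
    ("The author supports the central claim with clear, relevant evidence.", 6),
    ("The essay states an opinion but offers little supporting detail.", 3),
    ("Words appear in no particular order and make no argument.", 1),
]
texts, scores = zip(*scored_essays)

# Turn each essay into word-frequency features, then fit a regression
# that maps those features onto the scores the human graders assigned.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(texts)
model = Ridge().fit(features, scores)

# A new draft can now be scored instantly, and re-scored on every resubmission.
draft = ["The evidence presented clearly supports the author's claim."]
print(round(float(model.predict(vectorizer.transform(draft))[0]), 1))
```

A real system would train on thousands of essays and far richer features, but the mechanics are the same: human scores go in, and a statistical model of what a good essay looks like comes out.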

 

This is helpful, especially for those who teach Massive Open Online Courses (MOOCs) and can’t possibly hope to grade hundreds — even thousands — of essays in mere days.

 

“The idea is to create a tireless, automated version of the professor that can give feedback on a much broader amount of work,” explains Piotr Mitros, chief scientist at edX, an extensive MOOC platform with backing from Harvard, MIT and UC Berkeley.

 

The problem is that the grading software on the market varies widely in quality. While some programs can be helpful, others punish creativity by rewarding prose that hews closely to a formula of favoured jargon. Meaningless jumbles of words triumph, so long as they contain a few key words.

 

Les Perelman, former director of undergraduate writing at the Massachusetts Institute of Technology, is one of the most outspoken critics of automated assessment software. Since last year, his petition against robo-readers — Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment — has received over 4,000 signatures, including Noam Chomsky’s.

 

Along with a team of students from MIT and Harvard, Perelman has developed a software program called ‘Basic Automated B.S. Essay Language Generator’ (or Babel for short) that generates essays from scratch using keywords.

 

The software demonstrates that simply arranging a string of keywords into otherwise meaningless sentences can produce what a computer would call an A+ essay.
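As a rough illustration of why that works, the toy generator below strings prompt keywords into grammatically plausible but meaningless sentences. It is a hypothetical sketch in the spirit of Perelman’s critique, not Babel’s actual code; the keywords and filler phrases are invented for the example.

```python
# A toy "B.S. generator": a hypothetical sketch, not Perelman's actual Babel code.
# It simply wraps prompt keywords in stock academic filler to produce
# fluent-sounding nonsense that keyword-driven graders tend to reward.
import random

keywords = ["privateness", "technology", "society", "the individual"]  # invented examples
fillers = [
    "by its very nature",
    "to an unprecedented degree",
    "now more than ever",
]

def nonsense_sentence() -> str:
    """Arrange two random keywords around a stock filler phrase."""
    first, second = random.sample(keywords, 2)
    return f"The {first} of {second}, {random.choice(fillers)}, remains profoundly essential."

essay = " ".join(nonsense_sentence() for _ in range(5))
print(essay)
```

Feed output like this to a purely keyword-driven scorer and it can come back with a top mark, despite saying nothing at all.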

 

Perelman has shown that even Abraham Lincoln would have received a poor grade for the Gettysburg Address.

 

Satisfying an Algorithm

 

From refrigerators that order food to cars that drive themselves, our world is becoming more automated every day. It seems a natural next step to ask computers to take on the painstaking task of grading essays.

 

But, critics say, before we turn over this process to an algorithm, there are both practical and existential concerns to consider.

 

Detractors fear that robo-readers will make writing a soulless enterprise.  “How will it change the way students write when they’re no longer writing to communicate with a human but to satisfy a computer algorithm?” reads one comment below an article on computer assessments.

 

Still others believe that the software will diminish the role of the teacher.

 

“Surrendering one’s professional responsibilities will [also] be good practice for the day when professors will be entirely replaced by computers,” writes Kathleen Anderson in the Chronicle of Higher Education.

 

Ann Marcus Quinn, an upcoming speaker at ONLINE EDUCA BERLIN 2014, studies the use of online assessments at the University of Limerick. She believes online assessments can be helpful for giving “constructive feedback to students in a much more efficient manner”, but doesn’t believe their usefulness extends to analysing essays. “I don’t think, in my experience, that an essay can be comprehensively assessed using automated assessment,” she says.

 

Finding the Balance

 

It’s easy to argue for an ideal educational environment where every student essay is carefully read, graded and commented upon by a highly trained, sensitive and compassionate instructor. And at many liberal arts colleges, this is exactly what students receive. But what about those who cannot afford such lavish attention?

 

Advocates of the grading software say it gives students immediate feedback on their essays and turns the revision process into a game. “Students naturally gravitate toward resubmitting the work until they get it right,” according to Daphne Koller, a computer scientist and co-founder of Coursera.

 

The software is sorely needed in a MOOC environment, where essay grading is costly and nearly impossible to coordinate. It seems to reduce inhibitions, and could even help students who are shy or afraid of criticism to grow as writers.

 

Just don’t let the robots give the final grades.

 

You can hear more about the latest online assessment tools at ONLINE EDUCA BERLIN 2014.

 

Image attribution: Quinn Dombrowski

 
