Making Learning Evaluation Work: Two New Innovations Worth Your Time

Learning evaluation is one of the most critical competencies in the workplace learning field. Done well, it enables us to monitor our success and maximize the benefits of learning. Sadly, we too often do learning measurement poorly, leaving ourselves and our organizations in the dark. In his workshop at the upcoming OEB conference, Dr. Will Thalheimer, one of our industry’s learning-evaluation visionaries, will describe two innovations that may radically alter how learning evaluation gets done.


I didn’t start out focusing on evaluation. I started as an instructional designer, project manager, and simulation designer. Later I became a trainer and ran a leadership-development product line. Still later, I started a consulting practice focused on helping organizations utilize scientific research on learning to accelerate their learning-design improvements.


One day, as I was immersed in the learning research, out of the blue popped an idea—an insight rolling around my working memory—that the learning evaluation we were doing was seriously biased and was likely leading us to overrate our successes. We tended to measure learning when everything people had learned was top-of-mind. We measured in the learning context, not later in the on-the-job performance context. We measured people on low-level knowledge checks, asking our learners only to regurgitate trivial information.


I knew that as learning professionals we should begin with research-based recommendations and be creative in applying those principles in practical ways. But then I realized this wasn’t enough. We needed better evaluation to get feedback on a regular basis—to enable us to improve our techniques, to do our own due diligence, our own research. Creating feedback loops is fundamental to other professions, from lean manufacturing to software engineering, sports, tech startups, investing, medicine, and more. I started to look around for learning-evaluation leverage points.


The most obvious target was our learner-feedback surveys. Throughout the world, these are the default method for evaluating our learning interventions. The research was scary, though. Our learner-feedback questions did not seem related to learning outcomes. And who among us hasn’t quickly circled fives all the way down the page to get home after an arduous day in the training room—illustrating that the data might not be so meaningful?


Learning evaluation is difficult, of course. Human learning is diabolically complex, and evaluation makes it even more so. There are no perfect solutions, and I certainly haven’t discovered any secret formula. Still, my approach, detailed in the book Performance-Focused Smile Sheets, is showing promise in organizations of all types, from huge corporations to the military to non-profit and non-governmental organizations. In the workshop, I’ll describe the research that inspired my quest and explain the new method in depth—enabling you to begin designing learner surveys for your organization.


In the workshop, I’ll also share a new learning-evaluation model, LTEM (the Learning-Transfer Evaluation Model), which is designed to help organizations do better learning evaluations. I’ll describe the rationale behind its design and outline two ways it can be used. The Kirkpatrick-Katzell Four-Level Model of Evaluation was born before the cognitive revolution in psychology and before this century’s striking worldwide consensus on the most potent learning factors. The new model is designed to fill in the gaps with usable but still sophisticated research-inspired guideposts. I’m delighted to have an opportunity to share it at OEB this year.


Written by Will Thalheimer


Will Thalheimer, PhD, will be leading a workshop on Getting Radically Improved Data from Learner Feedback at OEB Berlin on Wednesday the 5th of December 2018.
