Measuring learning effectiveness is considered difficult, time-consuming, and hard to implement. At best, most organizations conduct assessments and create a pass-fail report. How then, in these times of cost cutting, does a CLO show the business that training is an important contributor to performance improvement? TIS’ white paper, Mapping ID to Performance Needs, highlights some of the models that can be deployed to evaluate learning effectiveness and measure ROI.
Further, tremendous work is being done around the globe by the Kirkpatricks and independent learning designers, using an updated Kirkpatrick Model, elements of the Jack Phillips model, and a host of other methods. The contemporary learning manager is also conscious of customer satisfaction as they gear up to run the learning department as a business in its own right. Traditional evaluation approaches tend to over-survey learners and, sometimes, their managers too. To avoid such overload, the preferred approach is to make evaluation part of the activities of the training program itself rather than designing it as independent surveys.
Other methods include Brinkerhoff’s Success Case Method (SCM), which shifts the focus from evaluating “training” to evaluating how effectively the organization uses training. The overarching purpose of the SCM is to uncover and build understanding of the many factors that keep training from being more successful. It then serves as a vehicle to teach the key stakeholders in the organization what needs to be done to increase training success rates and consistently improve the returns on training investments.
Then there is Roger Kaufman’s five-level model, modeled after Kirkpatrick’s four-level evaluation method. It evaluates a program from the trainee’s perspective across five levels, and also assesses the possible impact of a training program on the client and on society.
A Kirkpatrick Example: Building a Chain of Evidence
Much of the frustration and cynicism around the Kirkpatrick model stems from the perceived difficulty of evaluating at the higher levels. While evaluating Level 1 (Reaction) and Level 2 (Learning) is fairly simple, Level 3 (Behavior) and Level 4 (Results) pose several challenges, chief among them the ability to connect training performance with actual business results. As most learning professionals know, whether learning transfers to the job, and then to business results, is not determined only by the quality of the learning. Organizational drivers can also affect performance, positively or negatively, and external influences can drive results.
While both of these are valid concerns, Don Kirkpatrick proposes linking all the levels of evaluation to accommodate the variables mentioned above. He distinguishes ‘evidence’ from ‘proof’ of learning effectiveness. His recommendation is to build a chain of evidence from Level 1 to Level 4, showing that, from learner satisfaction through to business results, the impact has been consistent, whether positive or negative. The hypothesis: while results at any one level could be skewed by factors other than training, the entire chain cannot be influenced by such factors.
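For teams that track evaluation data in a reporting system, the chain-of-evidence idea can be sketched as a simple consistency check across the four levels. This is a minimal illustrative sketch, not part of the Kirkpatrick materials; the data structure, function name, and scores below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LevelResult:
    level: int      # 1=Reaction, 2=Learning, 3=Behavior, 4=Results
    name: str
    delta: float    # change versus baseline; positive means improvement

def chain_of_evidence(results: list[LevelResult]) -> bool:
    """The chain holds when every level shows impact in the same direction."""
    deltas = [r.delta for r in results]
    return all(d > 0 for d in deltas) or all(d < 0 for d in deltas)

# Hypothetical program data: all four levels improved, so the chain holds.
chain = [
    LevelResult(1, "Reaction", +0.8),   # e.g. satisfaction score change
    LevelResult(2, "Learning", +12.0),  # e.g. assessment score change
    LevelResult(3, "Behavior", +0.3),   # e.g. observed on-the-job change
    LevelResult(4, "Results", +5.0),    # e.g. business metric change
]
print(chain_of_evidence(chain))  # True
```

A mixed chain (say, positive reaction scores but flat or negative business results) would fail the check, signaling that something other than training quality, such as an organizational driver, may be interfering.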
From Learning to Performance
It’s a dynamic world in which training, like everything else, must mirror the demands of the time. The result is a call for a shift in focus from learning to performance. In fact, learning design has been evolving continuously to meet the changing needs of the business and its workers. In conclusion, a combination of appropriate ID approaches and an evaluation framework for measuring effectiveness can help establish the value L&D provides to the business.
This framework offers process flows, tools, templates, and scorecards to implement learning effectiveness evaluation for all four levels of Kirkpatrick and the fifth level, ROI, proposed by Jack Phillips.
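The fifth-level calculation itself is straightforward: Phillips defines ROI as net program benefits divided by program costs, expressed as a percentage. A minimal sketch, with hypothetical dollar figures:

```python
def phillips_roi(program_benefits: float, program_costs: float) -> float:
    """Phillips Level 5 ROI: net benefits as a percentage of program costs."""
    net_benefits = program_benefits - program_costs
    return (net_benefits / program_costs) * 100

# Hypothetical example: a program costing $50,000 that yields
# $80,000 in monetized benefits.
roi = phillips_roi(80_000, 50_000)
print(f"ROI: {roi:.0f}%")  # ROI: 60%
```

The hard part in practice is not the arithmetic but isolating and monetizing the benefits attributable to training, which is where the chain of evidence and methods like SCM come in.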