I would be very interested to hear from LSG members on this topic.
We are in the process of reviewing and challenging the metrics we use as part of our Corporate University. Traditionally the only consistent metrics have been in the area of post-learning reviews (happy sheets) that measure the event but not the effectiveness of the learning or how it is applied. I know it has always been a difficult challenge to demonstrate ROI on training, or indeed what impact it has had on business performance, but I would be very interested in any ideas / experiences people have.
Many thanks in advance
Hi Andy, makes good sense and thanks for your reply.
This is a nice model and a good example of measuring retention post-training, with some ideas that could be useful additions to our course metrics. In Kirkpatrick terms this helps us move towards level 2 with regard to measuring the learning. I would also be interested in any examples you may have of then measuring the business results of the learning being applied post-training.
Many thanks for your response Andy
One approach I've used with success is to talk to the managers of the learners and ask what impact the training has had. As you rightly point out, calculating ROI is not easy and is fraught with deeper issues.
In my (simple) book, if the manager and senior executives are pleased with the results then that's all that really matters.
Hope this helps.
Thanks for your response. I agree totally that manager / senior executive perspective is a strong indicator of impact and change in behaviour. Our big challenge in that respect is to get managers to invest time beforehand with the learner to talk about the objectives and benefits of the training they are about to receive. This is a real focus of improvement for us.
The other opportunity we are investigating is how we articulate the organisational benefits of the learning programme as well as the individual ones. This may well provide a language we can use when communicating the business results. It's still early in our thinking, though, so I'm very keen to hear if others are having any success with demonstrating impact in these terms.
I agree with Jonathan in large part - if the business is happy with the change in behaviour after training, then that's half the battle. But this anecdotal feedback doesn't qualify as 'metrics'.
For leadership / management development programmes, I use 360-degree online appraisals to benchmark skills and behaviours at the beginning of the programme. Learning goals for the programme (programme = a series of class-based sessions together with self-learning) are agreed using the 360 feedback for guidance. At the end of the programme the 360 appraisal is repeated and achievement against learning goals assessed. This gives an 'objective'-feeling measurement.
On the 'macro' scale, I measure 'home-grown' talent as a way to show success. The percentage of the top team who have progressed to their current position through our internal management & leadership programmes becomes a solid metric of the success of the learning.
We've recently done away with happy sheets completely. For our short seminars/masterclasses/workshops/online modules we've moved to a short online survey after the development activity. This measures how well the learner's personal objectives for doing the development were met (rather than the traditional 'course objectives') and also asks them to give examples of how they've used what they learnt within their role. We also produce a basic Development Metrics report, which tracks the 'stats' - number of attendees, events, internal consultancy pieces of work, and online modules accessed, by department and also by demographic group. Whilst this doesn't give us impact, it does highlight the areas that are supporting their people through development, and those doing less.
For longer programmes like our Leadership and Management development programmes we use a range of evaluation methods, including 360 appraisals pre- and post-programme, critical incident methodology in 1-2-1 interviews to determine impact in the role and causality, reflective essays/professional discussions, and group presentations with Q&A.
That said, I'm also looking at more rigorous KPIs and measures of success for a range of strategic OD projects we're working on - this includes the design, assessment and associated development delivery of Leadership Competencies. Due to the nature of our institution, any measures are expected to be academically credible with causality proven. This is somewhat of a challenge!
I am convinced that L&D has one overriding value-add for the organisation: to enable people to work smarter. And the place that working smarter really shows up is in workforce planning - working smarter is the extent to which the capacity plan and the workforce plan are no longer in a 1:1 relationship.
I have one fundamental problem with most approaches to metrics discussions: they are based on the assumption that there is a beginning and an end to any learning intervention when, in reality, it's a continuous process. So metrics need to be continuous! The value of training is either risk aversion (e.g. health & safety or compliance) or performance, and each of these two needs to be treated differently from a metrics viewpoint.
One last thought - it's the function of Performance Management to provide analytics and metrics and, whilst L&D does have a part to play on PM, it shouldn't cover up PM's deficiencies by taking ownership of metrics provision.