Any ideas on using learning metrics to measure impact on business performance?

I would be very interested to hear from LSG members on this topic.

 

We are in the process of reviewing and challenging the metrics we use as part of our Corporate University. Traditionally the only consistent metrics have been post-learning reviews (happy sheets) that measure the event but not the effectiveness of the learning or how it is applied. I know it has always been a difficult challenge to demonstrate ROI on training, or indeed what impact it has had on business performance, but I would be very interested in any ideas / experiences people have.

 

Many thanks in advance

 

Nick



Replies to This Discussion

Hi Nick,

One approach we're looking at is test, teach, distract, test. What this means is a decent test at the start, which we would expect the user not to do well on. Then they go through the learning materials. Then they do something else for the same amount of time. It's important that it IS something different. Finally, we give them the same test again.

A few points here: if it's a long piece of learning, say a day or more, this is challenging because the tests get huge. Suggestions on this welcome, but we don't do long pieces of learning...

Also, if you really want to prove anything, you need to run the same tests against a control group who have, say, text or traditional resources available. This proves a couple of things: first, that the material actually works; second, that it's more effective than the traditional way. Hopefully you'll see improved outcomes, but don't make the tests too easy, so you can see a progression.

Ideally you'll run these tests on a pilot group before release, so you can fix any problems that come up. It doesn't demonstrate ROI as such, but it does demonstrate effectiveness, and it shouldn't be too hard to drop some numbers in there once you know the training is helping.
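For what it's worth, here's a minimal sketch (in Python, with invented scores out of 100) of how you might drop those numbers in once you have pre/post results for a pilot group and a control group:

# Minimal sketch: compare pre/post test gains for a pilot group
# (test, teach, distract, test) against a control group given the
# same test around traditional materials. All scores are invented
# for illustration.

from statistics import mean

# (pre, post) score pairs, one per learner -- hypothetical data
pilot   = [(42, 78), (55, 81), (38, 70), (60, 88), (47, 75)]
control = [(45, 58), (52, 63), (40, 51), (58, 66), (49, 57)]

def mean_gain(pairs):
    """Average post-minus-pre improvement for a group."""
    return mean(post - pre for pre, post in pairs)

pilot_gain = mean_gain(pilot)
control_gain = mean_gain(control)

print(f"Pilot mean gain:   {pilot_gain:.1f} points")
print(f"Control mean gain: {control_gain:.1f} points")
print(f"Advantage over traditional materials: "
      f"{pilot_gain - control_gain:.1f} points")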

Make sense?
Have you looked at spaced learning as an approach? http://www.retenda.com/site/spaced-learning/
Thanks Jon, I have just registered for the free trial to understand a little more about spaced learning.
Hi Jon,

Yes, I have seen this work, together with other research in this area. It gives good metrics on what gets remembered and for how long. Other research in the area of cognition and cognitive encoding is also applicable.

The key issue (for me) is not what gets remembered but rather what people do with it, i.e. performance and behaviour change. It's also critical to make sure that people remember the right stuff.

Jonathan.

Hi Andy, makes good sense and thanks for your reply.

 

This is a nice model and a good example of measuring retention post-training, with some ideas that could be useful additions to our course metrics. In Kirkpatrick terms this helps us move towards level 2 with regard to measuring the learning. I would also be interested in any examples you may have of then measuring the business results of the learning being applied post-training.

 

Many thanks for your response Andy

Hi Andy,

You may want to think about separating your metrics into process and performance. Process metrics will deal with the food, facilities, joining instructions, timeliness of the course and so on. Performance metrics will deal with whether the business gets better as a result of the intervention.

Jonathan.

Hi Nick,

 

One approach I've used with success is to talk to the managers of the learners and ask what impact the training has had.  As you rightly point out, calculating ROI is not easy and is fraught with deeper issues.

 

In my (simple) book, if the manager and senior executives are pleased with the results then that's all that really matters.

 

Hope this helps.

 

Jonathan.


Hi Jonathan,

 

Thanks for your response. I agree totally that the manager / senior executive perspective is a strong indicator of impact and change in behaviour. Our big challenge in that respect is getting managers to invest time beforehand with the learner to talk about the objectives and benefits of the training they are about to receive. This is a real focus of improvement for us.

 

The other opportunity we are investigating is how we articulate the organisational benefits of the learning programme as well as the individual ones. This may well provide a language we can use when communicating the business results. It's still an early thought process though, so I'm very keen to see if others are having any success with demonstrating impact in these terms.

Hi Nick,

 

I agree with Jonathan for the most part - if the business is happy with the change in behaviour after training, then that's half the battle. But this anecdotal feedback doesn't qualify as 'metrics'.

For leadership / management development programmes, I use 360 degree online appraisals to benchmark skills and behaviours at the beginning of the programme. Learning goals for the programme (programme = a series of class-based sessions together with self-learning) are agreed using the 360 feedback for guidance. At the end of the programme the 360 appraisal is repeated and achievement against the learning goals assessed. This gives a measurement that at least feels objective.
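To illustrate the benchmarking step, here's a rough sketch of how the pre/post 360 shift might be tallied per competency - the competency names, raters and 1-5 scale are hypothetical, not a description of any particular 360 tool:

# Sketch: average rater scores per competency before and after the
# programme, then report the shift. Competencies, raters and the
# 1-5 scale are hypothetical placeholders.

from statistics import mean

# competency -> list of rater scores (1-5), start of programme
pre_360 = {
    "gives feedback":      [2, 3, 2, 3],
    "delegates":           [3, 3, 2, 2],
    "communicates vision": [2, 2, 3, 2],
}

# same raters, end of programme
post_360 = {
    "gives feedback":      [4, 4, 3, 4],
    "delegates":           [3, 4, 3, 3],
    "communicates vision": [3, 3, 4, 3],
}

for competency in pre_360:
    before = mean(pre_360[competency])
    after = mean(post_360[competency])
    print(f"{competency:20s} {before:.2f} -> {after:.2f} "
          f"(shift {after - before:+.2f})")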

On the 'macro' scale, I measure 'home-grown' talent as a way to show success. The percentage of the top team who have progressed to their current position through our internal management & leadership programmes becomes a solid metric of the success of the learning.

 

Cheers,

 

Sarah 

Hi Sarah,

My original post was made to generate discussion, which I'm glad you have added to so well. My reason for talking about the views of managers and senior executives is that all too often measures and metrics are used in the wrong way to demonstrate the effectiveness, value and impact of learning.

I agree that 360 degree measures are a good way of producing empirical evidence, but in my experience, unless the managers feel they are getting value, all the data in the world doesn't help.

A large client of mine has recently introduced a major leadership programme. It's measured in the ways you discuss (as well as many others), but the real leverage is achieved because managers and executives BELIEVE that the programme is having an impact. By way of comparison, I've also worked on other business performance initiatives which have been measured within an inch of their lives, only for senior executives to state that they don't feel the programme is having the required impact!

Taking senior executives with you is a key success factor. Metrics then become supporting evidence rather than being the sole measure of success and value.

Jonathan.

Hi Nick

 

We've recently done away with happy sheets completely.  For our short seminars/masterclasses/workshops/online modules we've moved to a short online survey after the development activity.  This measures how well the learner's personal objectives for doing the development were met (rather than the traditional 'course objectives') and also asks them to give examples of how they've used what they learnt within their role.  We also produce a basic Development Metrics report, which tracks the 'stats' - number of attendees, events, internal consultancy pieces of work, online modules accessed - by department and also by demographic group.  Whilst this doesn't give us impact, it does highlight the areas that are supporting their people through development, and those doing less.
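By way of illustration only, a report along those lines could be pulled together something like this - the column names and records are invented, not a description of our actual system:

# Illustrative only: tallying development activity by department
# and demographic group. Column names and records are invented.

import pandas as pd

activity = pd.DataFrame([
    # department, demographic group, activity type
    ("Finance", "Grade A", "workshop"),
    ("Finance", "Grade B", "online module"),
    ("HR",      "Grade A", "masterclass"),
    ("HR",      "Grade A", "online module"),
    ("IT",      "Grade B", "workshop"),
], columns=["department", "group", "activity"])

# Attendance counts by department and demographic group
print(activity.groupby(["department", "group"]).size())

# Which activity types are used most overall
print(activity["activity"].value_counts())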

 

For longer programmes like our Leadership and Management development programmes we use a range of evaluation methods, including 360s pre- and post-programme, critical incident methodology in 1-2-1 interviews to determine impact in the role and causality, reflective essays/professional discussions, and group presentations with Q&A.

 

That said, I'm also looking at more rigorous KPIs and measures of success for a range of strategic OD projects we're working on - this includes the design, assessment and associated development delivery of Leadership Competencies.  Due to the nature of our institution, any measures are expected to be academically credible and causality proven.  This is somewhat of a challenge!

 

Steph

I am convinced that L&D has one overriding value-add for the organisation: to enable people to work smarter. And the place where working smarter really shows up is in workforce planning - working smarter is the extent to which the relationship between the capacity plan and the workforce plan is no longer 1:1.

 

I have one fundamental problem with most of the approaches to metrics discussed here: they are based on the assumption that there is a beginning and an end to any learning intervention when, in reality, it's a continuous process. So metrics need to be continuous! The value of training is either risk aversion (e.g. health & safety or compliance) or performance, and each of these two needs to be treated differently from a metrics viewpoint.

 

One last thought - it's the function of Performance Management to provide analytics and metrics and, whilst L&D does have a part to play in PM, it shouldn't cover up PM's deficiencies by taking ownership of metrics provision.
