We currently have an evaluation process in place that incorporates a pre-training evaluation, an immediate post-training evaluation, and another 30-60 days after training. The data we get is OK, but it doesn't give us a great indicator of improvements in knowledge, skills or behaviours. Working in a contact centre environment, time is tight, so I was wondering if anyone has any evaluation methods or things they do in particular to get some good-quality analysis without a huge time commitment from those who need to be evaluated?


Replies to This Discussion

I've come to the conclusion recently that a lot of the discussions we have about evaluation are born out of trainer vanity. Having struggled (and failed) to find a practical way of implementing Kirkpatrick, I think we need to simplify and go back to what a business really wants from its L&D teams: to improve the performance of its staff to meet current and future business objectives.

The business doesn't care that 98% of staff liked our presentation style, it doesn't care if you can prove that staff remember what they learned a week or a month ago. It only cares about whether or not the investment it made in training its staff produced a change in their performance that contributed towards meeting the business goals.

I think this means much of the talk about evaluation is redundant, because what we're really talking about is Performance Management. If we can create or influence a PM system that encourages managers to link development activity to specific business objectives and individual performance targets, then we can create learning interventions that specifically address these needs, and progress against them can be monitored by the line manager at regular one-to-one meetings throughout the year.

Of course, to make this work and extract the relevant information you probably need a decent electronic PM system and a process that is hard to deviate from. But that doesn't sound any harder than Kirkpatrick to me, and it will probably be of much more value to the business.

Sam
Sam,

I absolutely, 100% agree. You said it in a much nicer way than I would, but you're absolutely right. Well - apart from needing the electronic PM system, but that's just the system.

I think the thing we need to get into L&D's heads (that is, ours) right now is that the business doesn't care about our evaluations or measurements. It cares about its own (probably: performance, capability, engagement and reputation). If we can work to improve these for the business, then we're valuable.

Great post!
I absolutely agree - my question is: how do you prove that the learning intervention contributed to the change in business performance without using an evaluation method/tool (or L&D internal measures)?

If we take Kirkpatrick:
L1 - is the business interested? Probably not...
L2 - is the business interested? Maybe, but not really...
L3 - is the business interested? Yes... but L&D need to prove it.
L4/5 - is the business interested? Absolutely, but you'd need to provide evidence from L1 to prove how you got there!

Any thoughts?

Tracy
I find evaluation a very tough one, and the best I can come up with is a mix of imperfect methods:

Happy Sheets.
Absolutely. Why not? It's cheap, easily done and extremely useful - as long as you don't expect it to measure something as complicated as performance improvement. We obviously need to measure performance. But surely you also need to know about the learning event, and what can be improved there? Assume it was classroom training: if something was wrong, obviously missing or hard to understand, this is the easiest way for a participant to let you know. Of course the business won't care much - but how about you improving your training? I think there are plenty of good reasons to evaluate the learning event itself!

Surveys after the event
Seems like a good idea. It could be a simple test, repeated after 1, 2 and 4 months - I'm sure there are guidelines for it. It's a good way to inform yourself about retention. I'm not really doing it or planning to do it, because I think it will just prove how low the retention is - which we know already. Hmmm, come to think of it, I'd better do it sometime and check my assumptions!
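
If you do end up running repeated tests, here is a minimal, purely illustrative sketch of how you might summarise the scores to see how retention falls over time. The numbers and round names are made up for the example, not from any real system.

```python
from statistics import mean

# Percentage scores for the same group of participants at each test round
# (illustrative numbers only - replace with your own export).
scores = {
    "immediate": [82, 75, 90, 68, 77],
    "1 month":   [74, 70, 85, 60, 71],
    "2 months":  [70, 66, 80, 58, 69],
    "4 months":  [65, 64, 78, 55, 66],
}

baseline = mean(scores["immediate"])
for round_name, round_scores in scores.items():
    avg = mean(round_scores)
    retained = avg / baseline * 100  # % of the immediate post-training average still retained
    print(f"{round_name:>9}: average {avg:.1f}%, retained {retained:.0f}% of the immediate score")
```

Even something this simple, run over a few cohorts, gives you a retention curve to talk about - which is more than a one-off happy sheet will ever tell you.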

Interviews with participants, their managers, and stakeholders
I know it's very soft... but in reality, how do managers know who is performing well and who isn't? There are metrics, and there's an awful lot of observation, interaction and gut feeling. It's scary because it isn't objective - but it is part of how we run our businesses. The fact that it's soft data doesn't mean it's not useful - you can get very useful feedback from just asking people what they think.

Measuring Performance
Now this is the holy grail, of course. I'm lucky to work in an organisation that is very keen on metrics, and we have lots of systems tracking incidents, resolution times, etc. If your organisation doesn't - if managers don't measure the performance of their staff - then how on earth are you as a learning function going to do it? Measuring performance is a business issue that should involve much more than just learning. And don't forget performance is a result of Knowledge, Skills, Motivation and Environment (thank you, Nigel Harrison :-) ).
Even if you do have lots of metrics in the organisation (like we do), they still capture only part of the performance: some staff perform less well on the system but spend a lot of time updating documents, educating others, or providing in-depth insights into root causes or solutions - this kind of indispensable performance typically doesn't show up in performance measurements.
One more thing on Performance Measurement: so you've found that since the training, 3 months ago, performance has improved. Or the new system that has been introduced is working well, and everyone knows what to do. Now how do you relate this to your training plan? Of course we all need to manage our careers, so we may just take all the credit. As a learning professional interested in what really works, though, how would you use this 'ultimate' measurement of improved performance to relate back to the training? Maybe they didn't learn a thing in your training, but they figured it out together. Or one colleague was really getting it and helped their peers. Or just by doing it the wrong way, they found out how to do it. Or maybe the environment was great and made it easy. And so on...


The best I can come up with is to mix it up:
Do happy sheets (do them online so it's easy to gather the data... I wish I hadn't used paper again the last time I trained).
Do interviews with staff, their management and stakeholders. It's soft data but it can be very insightful.
Do repeated tests
Do try to get some hard performance data

The most important thing, I find, is to understand the limitations of each of the above - I mean, who really expects happy sheets to measure performance improvement? That's just silly...

Bas
I think this discussion raises the question of 'who is the evaluation for?'. Whatever your answer, there should be some element of the learner using the evaluation for their personal reflection and for deciding on 'next steps' in their training pathway. For online/blended learning, I think this is best done in a forum at the end of a course, where learners are able to post their comments and suggestions without following the trainer's pre-set agenda. This can, of course, be an option alongside all the others.
I do a lot of action learning that helps people to reflect on incidents and write up their reflections, learning and future planning. The stories and the follow-up to the planning are a good way of evaluating, particularly if you do before-and-after action learning stories.
Lisa, a couple of great tools if you are interested: www.metricsthatmatter.com and www.mindmill.co.uk. Applying them for effectiveness is another story.
