Our learning objectives may be solid and our content may look beautiful, but what about our assessments? In these difficult times we must measure the effectiveness of our training and prove that all-important ROI. How can our assessment strategies help?

Comments

  • Venkat, I believe your statement "bottom line knowledge they will accept from any one claiming certain level of expertise" hits the nail on the head; this should most definitely determine the desired learning outcome. The question is, when assessing competence, how can we emulate some of the complicated processes that a human assessor would go through when physically assessing the competence of a learner? Maybe a bridge too far for safety-critical situations or tasks?
  • I completely agree Venkat! It's crucial that the questions in any assessment are developed with the input of the SME. I also think that it's often useful to create the assessment, or at least a draft of it, before beginning to design the actual training (this seems to match with your advice to identify the basic knowledge required early on). This ensures that what you are testing aligns with the agreed learning outcomes and objectives, and gives you a solid basis on which to design the training units.
  • I have been following the conversation initiated by Stephanie and the comments from Ken Jones. From your comments I can see words of wisdom moderated by scarred and healed wounds!

    I face this dilemma of who is best placed to develop the e-assessment question items. I see this situation as very similar to the days of developing expert systems in the late seventies and early eighties. The knowledge engineer (ID) sits with the domain expert (SME), and through several iterations they develop the system. This means much stronger engagement with the SMEs, identifying early on the bottom-line knowledge they will accept from anyone claiming a certain level of expertise in the chosen field. Then map the items that guarantee that level of expertise while minimising the risk of 'lucky guesses' (a coverage-check sketch follows these comments). I am not too hung up about MCQs per se - it is the quality of the items and the nature of the feedback that defines success or failure.
  • I agree Stephanie, in fact I would say that the assessment should be considered an integral part of the learning process, as opposed to the ubiquitous test/quiz/exam style currently employed.
  • I like your suggestions for how to approach an assessment to avoid short-term memory or lucky guessing, Ken, but as you say there's additional work involved there, both in ID and development terms. I've written assessments in the past that follow the format of standard multiple-choice questions but take a scenario- or behaviour-focused approach, because I think this strikes a good balance. It produces an assessment that genuinely tests the learner on what is relevant (that is, the choices they make and the behaviour they display, rather than definitions of technical terms or the dates of particular laws), and one written in such a way that the right answer is neither immediately obvious nor near impossible to identify, yet it doesn't add to the development time. Obviously it adds to the ID time, but it's my view that the time for writing assessments needs to be factored in from the start and given just as much importance as writing the training - if it results in a more effective assessment, it's time well spent!
  • Whilst I like the concept of separating the instruction and assessment by a stipulated time period, in reality it would be difficult to manage in our industry for two reasons: the transaction cost of downtime would be difficult to quantify (that is assuming that you would do something about filling the gaps in the learners' knowledge if they fail to answer correctly); and a lot of the eLearning in our sector is based around compliance, which means that if the worker has not passed the course (had the assessment) they cannot go offshore - I'm not sure the individuals or unions would be too happy about that.
    Potentially you could ask the learners to volunteer to retake the test at a later date without the repercussion of being bumped from the Platform should they fail.

    However, on your second point Stephanie, what about having tiered questions? For example: spot the hazard in the scene (picture), then choose (MC) what type of hazard it is, and then how you would remedy the situation/hazard (MC). By employing this tactic you would be investigating their understanding at a deeper level, be less prone to the "multiple guessing" paradigm, and hopefully encourage longer retention of the information due to the thought that has to go into answering each question (a sketch of this tiered structure follows these comments). The downside is that it takes a lot longer to create the questions from an ID and developer perspective.
  • I've just skimmed previous comments to get an overview of how this discussion has developed, so apologies if I repeat anything that's been said (or dismissed!) before...

    This is a topic that interests me, as I'm finding that many of the people I work with are realising that perhaps an end-of-training multiple-choice assessment isn't the most effective approach - but they aren't yet comfortable with making the leap to anything else. Which is, I guess, where I come in - how can I work with them to ensure they get the measures they need, measures that actually mean something?

    What do people think about asking users to work through the training on one occasion, perhaps setting a period in which this must be done, and then having a second assessment period at a later date (a scheduling sketch follows these comments)? Whatever the format, this would go some way to avoiding the problem of simply testing short-term memory. In terms of format, I think multiple-choice assessments can be made more valuable by creating questions which put the user in a situation similar to those they'll experience in their day-to-day work - so we're asking them to select behaviours and actions rather than simple facts. This perhaps moves towards testing the right thing? Or perhaps the questions could all relate to a case study - again, we're then testing them in relation to actual performance.
  • I work as a contractor to the defense agencies and support the development and funding of advanced distributed learning (ADL) science and technology (S&T) projects. In this vein, as a small UK company, I have developed an ADL S&T Roadmap of the research projects developed and future projects. I can be contacted at jhodges@onrglobal.navy.mil or jhodges@qapltd.com.
  • Personally I never (well, seldom) conduct tests of immediate recall except as a means to build the confidence of learners. I doubt that they are much use as indicators of the likelihood of applying learning to some future task. I like Bob Mager and Peter Pipe's take on this - the simple question, "How will I know it when I see it?" (Thanks for the biochemistry lesson, Charles - my greatest scientific achievement to date was a grade 9 O Level in Physics!)
  • Short term, dare I say it! What is the value of a check on a compliance sheet? Not ideal, but all too often a fact. Would you consider that in the long term the measure of competence is more relevant? This is often done by assessing a portfolio of evidence or seeing somebody do something in real (or virtual) life; either way, the assessment has to be more cleverly thought out.
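
To make the item-mapping idea above concrete, here is a minimal sketch of a blueprint coverage check, assuming a hypothetical item bank in which each question is tagged with the outcome it evidences. The outcome names and minimum counts are invented for illustration, not taken from any particular framework:

```python
# Minimal sketch: check that a draft item bank covers the bottom-line
# knowledge agreed with the SME. All identifiers here are hypothetical.

# Outcomes agreed with the SME, with the minimum number of items each
# must be tested by to reduce the risk of a 'lucky guess' pass.
required_coverage = {
    "identify_hazards": 2,
    "classify_hazards": 2,
    "apply_controls": 3,
}

# Draft item bank: each item is tagged with the outcome it evidences.
item_bank = [
    {"id": "Q1", "outcome": "identify_hazards"},
    {"id": "Q2", "outcome": "identify_hazards"},
    {"id": "Q3", "outcome": "classify_hazards"},
    {"id": "Q4", "outcome": "apply_controls"},
    {"id": "Q5", "outcome": "apply_controls"},
]

def coverage_gaps(required, items):
    """Return outcomes tested by fewer items than required."""
    counts = {}
    for item in items:
        counts[item["outcome"]] = counts.get(item["outcome"], 0) + 1
    return {
        outcome: (counts.get(outcome, 0), minimum)
        for outcome, minimum in required.items()
        if counts.get(outcome, 0) < minimum
    }

for outcome, (have, need) in coverage_gaps(required_coverage, item_bank).items():
    print(f"{outcome}: {have} item(s), need {need}")
# classify_hazards: 1 item(s), need 2
# apply_controls: 2 item(s), need 3
```

A check like this is something the ID and SME can run after each iteration, mirroring the knowledge-engineering loop described above.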
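The tiered-question tactic also does measurable work against "multiple guessing": if a learner must get every tier right to score, the guessing odds multiply together. A small sketch, assuming hypothetical four-option tiers:

```python
# Minimal sketch of a tiered hazard question: spot it, classify it,
# remedy it. The prompts and option counts are invented for illustration.

tiered_question = [
    {"prompt": "Spot the hazard in the scene", "options": 4},
    {"prompt": "What type of hazard is it?",   "options": 4},
    {"prompt": "How would you remedy it?",     "options": 4},
]

def guess_probability(tiers):
    """Chance of getting every tier right by pure guessing."""
    p = 1.0
    for tier in tiers:
        p *= 1.0 / tier["options"]
    return p

print(f"Single 4-option MCQ: {1/4:.1%} by guessing")
print(f"Three-tier question: {guess_probability(tiered_question):.2%} by guessing")
# Single 4-option MCQ: 25.0% by guessing
# Three-tier question: 1.56% by guessing
```

Three four-option tiers cut the chance of a purely lucky pass from 25% to under 2%, which is the intuition behind the deeper-understanding claim, albeit at the cost of the extra ID and development time noted above.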
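And for the separate, later assessment period discussed above, the scheduling itself is simple arithmetic; here is a minimal sketch, assuming an invented 30-day gap and 14-day window rather than any recommended values:

```python
# Minimal sketch: completing the training opens a follow-up assessment
# window some weeks later, so the test measures retention rather than
# short-term recall. Gap and window lengths are illustrative only.
from datetime import date, timedelta

def reassessment_window(completed_on, gap_days=30, window_days=14):
    """Return the (open, close) dates of the follow-up assessment window."""
    opens = completed_on + timedelta(days=gap_days)
    closes = opens + timedelta(days=window_days)
    return opens, closes

opens, closes = reassessment_window(date(2010, 11, 1))
print(f"Assessment window: {opens} to {closes}")
# Assessment window: 2010-12-01 to 2010-12-15
```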

accreditation/certification framework

Has anyone worked on developing an accreditation or certification framework? We are exploring ways of setting something up and need a good framework as a structure for the organisation, which hasn't done this before. We do have assessments built into our training courses but nothing formal. Any advice would be appreciated.

2 Replies

IT Screening tools

I am looking for a tool that will allow us to test the IT competence of both existing staff and those that are applying to join my organisation. It would therefore need to be available via the web. Ideally the tool would have simulations to test competence in Microsoft apps such as Word, Excel etc but it would also be beneficial if there was an authoring element that would allow us to create tests for our own bespoke applications. It would also be advantageous if it could link to an LMS we are…

0 Replies

e-competence assessment

We're embarking on a project to develop shared competence assessment tools across our region.   So far, we have developed a region-wide Statutory and Mandatory Training Framework which defines learning outcomes and refresher intervals in nine essential skills areas.  All our hospital Trusts in the South Central area have signed up to the Framework.  We are now progressing the next step, which is to create online assessments for each of the nine skills areas, for staff to undertake prior to…

1 Reply

e-Assessment in Practice, 10-11 November 2010

First Call for Papers: Please consider actively participating in the conference, either by presenting a paper or poster or by demonstrating your recent research on e-Assessment. Please feel free to circulate this among your colleagues as you see fit. Look forward to meeting you in November. I am more than happy to contact any individuals if you want me to pursue them for a presentation. Please find further information at http://www.cranfield.ac.uk/cds/Symposia/EA10.html. Confirmed speakers include ... Dr. Kerry…

1 Reply