Information

Assessment

Our learning objectives may be solid and our content may look beautiful, but what about our assessments? In these difficult times we must measure the effectiveness of our training and prove that all-important ROI. How can our assessment strategies help?

Members: 120
Latest Activity: May 9

Discussion Forum

accreditation/certification framework

Started by Hootan Zahraei. Last reply by Brian Fox May 4, 2011. 2 Replies

IT Screening tools

Started by Karen Smout Oct 15, 2010. 0 Replies

e-competence assessment

Started by Alison Wright. Last reply by Barry Sampson Sep 24, 2010. 1 Reply

Comment Wall


Comment by Venkat on June 25, 2009 at 14:10
I have been following the conversation initiated by Stephanie and comments from Ken Jones. From your comments I can see the words of wisdom moderated by scarred and healed wounds!

I face this dilemma of who is best placed to develop the e-assessment question items. I see this situation as very similar to the days of developing expert systems in the late seventies and early eighties. The knowledge engineer (ID) sits with the domain expert (SME) and, through several iterations, develops the system. This means much stronger engagement with the SMEs: identify early on the bottom-line knowledge they will accept from anyone claiming a certain level of expertise in the chosen field, then map the items that guarantee that level of expertise and minimise the risk of 'lucky guesses'. I am not too hung up about MCQs per se - it is the quality of the items and the nature of the feedback that define success or failure.
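The 'lucky guesses' risk Venkat mentions can be put in rough numbers. Below is a minimal Python sketch, not anything from the discussion itself: the item count, options per item and pass mark are purely illustrative assumptions, and it simply sums the binomial probability of reaching the pass mark by guessing alone.

```python
from math import comb

def pass_by_guessing(num_items: int, options_per_item: int, pass_mark: int) -> float:
    """Probability of reaching the pass mark on an MCQ test by pure guessing.

    Models each item as an independent guess with probability 1/options_per_item
    of being correct, then sums the binomial tail at or above the pass mark.
    """
    p = 1 / options_per_item
    return sum(
        comb(num_items, k) * p**k * (1 - p) ** (num_items - k)
        for k in range(pass_mark, num_items + 1)
    )

# Illustrative figures only: 10 four-option items, pass mark 8/10.
print(f"{pass_by_guessing(10, 4, 8):.4%}")   # ~0.0416%
# The same test rewritten as 10 true/false items is far easier to 'lucky guess':
print(f"{pass_by_guessing(10, 2, 8):.4%}")   # ~5.47%
```

The point of the toy calculation is simply that item format and pass mark, not just item wording, control how much guessing the assessment tolerates.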
Comment by Ken Jones on June 25, 2009 at 13:20
I agree Stephanie, in fact I would say that the assessment should be considered an integral part of the learning process, as opposed to the ubiquitous test/quiz/exam style currently employed.
Comment by Stephanie Dedhar on June 25, 2009 at 8:43
I like your suggestions for how to approach an assessment to avoid short-term memory or lucky guessing, Ken, but like you say there's additional work involved there, both in ID and development terms. I've written assessments in the past that follow the format of standard multiple choice questions but take a scenario- or behaviour-focused approach, because I think this strikes a good balance: it produces an assessment that genuinely tests the learner on what is relevant (that is, the choices they make and the behaviour they display, rather than definitions of technical terms or the dates of particular laws), it is written in such a way that the right answer is neither immediately obvious nor near impossible to identify, and it doesn't add to the development time. Obviously it adds to the ID time, but it's my view that the time for writing assessments needs to be factored in from the start and given just as much importance as writing the training - if it results in a more effective assessment, it's time well spent!
Comment by Ken Jones on June 24, 2009 at 17:36
Whilst I think the concept of separating the instruction and assessment by a stipulated time period has merit, in reality it would be difficult to manage in our industry for two reasons: the transaction cost of downtime would be difficult to quantify (that is, assuming you would do something about filling the gaps in the learners' knowledge if they fail to answer correctly); and a lot of the eLearning in our sector is based around compliance, which means that if the worker has not passed the course (taken the assessment) they cannot go offshore - not sure the individuals or unions would be too happy about that.
Potentially you could ask the learners to volunteer to retake the test at a later date without the repercussion of being bumped from the Platform should they fail.

However, on your second point Stephanie, what about having tiered questions? For example: spot the hazard in the scene (picture), then choose (MC) what type of hazard it is, and then how you would remedy the situation/hazard (MC). By employing this tactic you would be investigating their understanding at a deeper level, be less prone to the "multiple guessing" paradigm, and hopefully encourage longer retention of the information due to the thought that has to go into answering each question. The downside is that it takes a lot longer to create the questions from an ID and developer perspective.
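Ken's tiered hazard question could be represented quite simply in an authoring tool. The following is a hypothetical Python sketch only - the scenario text, options and scoring rule are invented for illustration, not taken from any real system. Credit is awarded tier by tier and stops at the first wrong answer, so the later tiers cannot be picked up by guessing in isolation.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    prompt: str
    options: list[str]
    correct: int  # index of the correct option

@dataclass
class TieredItem:
    """A scenario item made of dependent tiers: spot it, classify it, remedy it."""
    scenario: str
    tiers: list[Tier]

    def score(self, answers: list[int]) -> int:
        """Award credit tier by tier, stopping at the first wrong answer."""
        score = 0
        for tier, answer in zip(self.tiers, answers):
            if answer != tier.correct:
                break
            score += 1
        return score

# Illustrative content only.
item = TieredItem(
    scenario="Deck photo: an unsecured gas cylinder beside a walkway.",
    tiers=[
        Tier("Where is the hazard?", ["Walkway surface", "Gas cylinder", "Handrail"], 1),
        Tier("What type of hazard is it?", ["Dropped object", "Stored energy", "Slip/trip"], 1),
        Tier("Best immediate remedy?", ["Report it next shift", "Secure and restrain it", "Ignore it"], 1),
    ],
)
print(item.score([1, 1, 1]))  # 3 - full understanding demonstrated
print(item.score([1, 0, 1]))  # 1 - classification wrong, so the remedy tier earns no credit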
Comment by Stephanie Dedhar on June 24, 2009 at 9:26
I've just skimmed previous comments to get an overview of how this discussion has developed, so apologies if I repeat anything that's been said (or dismissed!) before...

This is a topic that interests me as I'm finding that many of the people I work with are realising that perhaps an end-of-training multiple choice assessment isn't the most effective approach - but they aren't yet comfortable with making the leap to anything else. Which is, I guess, where I come in - how can I work with them to ensure they get the measures they need, measures that actually mean something?

What do people think about asking users to work through the training on one occasion, perhaps setting a period in which this must be done, and then having a second assessment period at a later date? Whatever the format, this would go some way towards avoiding the problem of simply testing short-term memory. In terms of format, I think multiple choice assessments can be made more valuable by creating questions which put the user in a situation similar to those they'll experience in their day-to-day work - so we're asking them to select behaviours and actions rather than simple facts. This perhaps moves towards testing the right thing? Or perhaps the questions all relate to a case study - again, we're then testing them in relation to actual performance.
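Stephanie's separate, later assessment period amounts to a simple scheduling rule. The Python sketch below is only an illustration of one way it could work; the 14-day delay and 7-day window are invented figures, not recommendations from the discussion.

```python
from datetime import date, timedelta

# Assumed policy figures for illustration: training is completed first,
# then the assessment window opens after a delay and stays open for a set period.
DELAY_DAYS = 14   # gap intended to get past short-term recall
WINDOW_DAYS = 7   # how long the assessment stays open

def assessment_window(training_completed: date) -> tuple[date, date]:
    """Return the (open, close) dates of the delayed assessment window."""
    opens = training_completed + timedelta(days=DELAY_DAYS)
    return opens, opens + timedelta(days=WINDOW_DAYS)

def can_sit_assessment(training_completed: date, today: date) -> bool:
    """True only while the delayed window is open."""
    opens, closes = assessment_window(training_completed)
    return opens <= today <= closes

opens, closes = assessment_window(date(2009, 6, 24))
print(opens, closes)                                             # 2009-07-08 2009-07-15
print(can_sit_assessment(date(2009, 6, 24), date(2009, 7, 10)))  # True
```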
Comment by Jacob Hodges on June 8, 2009 at 8:51
I work as a contractor to the defense agencies and support the development and funding of advanced distributed learning (ADL) science and technology (S&T) projects. In this vein, as a small UK company, I have developed an ADL S&T Roadmap of the research projects developed and future projects. I can be contacted at jhodges@onrglobal.navy.mil or jhodges@qapltd.com.
Comment by Phil Green on May 29, 2009 at 16:46
Personally I never (well, seldom) conduct tests of immediate recall except as a means to build the confidence of learners. I doubt that they are much use as indicators of the likelihood of applying learning to some future task. I like Bob Mager and Peter Pipe's take on this - the simple question, "How will I know it when I see it?" (Thanks for the biochemistry lesson Charles - my greatest scientific achievement to date was a grade 9 O Level in Physics!)
Comment by Ken Jones on May 29, 2009 at 16:32
Short term, dare I say it! What is the value of a check on a compliance sheet? Not ideal, but all too often the fact. Would you consider that, in the long term, maybe the measure of competence is more relevant - often done by assessing a portfolio of evidence or seeing somebody do something in real (or virtual) life? Either way the assessment has to be more cleverly thought out.
Comment by Charles Jennings on May 29, 2009 at 16:09
I think you've made good points regarding the difference in the assessment of 'knowing' and of 'doing', Phil.

I'd step a little further down the line (or onto the track in front of the train, as the case may be) and suggest that most assessment of 'knowing' that I've seen doesn't actually assess real knowledge acquisition at all.

In 99% of situations where some form of assessment of the 'learning' is applied, the assessment is carried out during or immediately following some type of learning/training event, or immediately following the opportunity for 'learners' to revise what they've been through during the training.

This simply tests short-term memory/retention. The pre-test/post-test model is particularly susceptible to misinterpretation and, I think, is potentially damaging as it can lead people to think that real learning has occurred when it hasn't, and that the training was 'successful' when it wasn't.

All it's really assessing is the ability of the 'learner' to retain information in short-term memory. And short-term memory ipso facto isn't a lot of use once a bit of time has passed. (where did I leave my glasses?)

There's a swathe of research into the difference between short and long-term memory. Learning professionals should know at least a bit about that stuff, and also appreciate that our job is to help people transfer learning into long-term memory which will result in changed behaviour. After all, real learning is nothing more than changed behaviours.

To dive a little deeper (just because I spent a few years studying biochemistry in my youth)......

Different proteins have been identified in the two processes of short-term memory retention and long-term memory retention. In the long-term memory process it's now known that cAMP (cyclic AMP) produced in response to serotonin, when in high concentrations in the synapses between neurones, activates a protein/enzyme called PKA (cAMP-dependent Protein Kinase), which then targets CREB (the cAMP Response Element Binding protein)..... CREB is the 'key' to long-term memory, forming more and different types of synaptic connections that persist. Short-term memory retention is a different process and doesn't do this.

So, when we talk about assessment, are we talking about assessing short-term memory or long-term memory? If the former, what's the value?
Comment by Phil Green on May 28, 2009 at 23:21
Well I am heartened to see how this discussion has moved on. Being a bear of very little brain I like to simplify, so - in the matter of assessment I believe the first question is: are we trying to learn about ..... or are we learning how to .....? In the first instance we need a measure of knowledge and in the second we need a test of mastery. It is contriving things too far to separate learning outcomes from performance outcomes, even if the performance is some demonstration of having acquired knowledge (e.g. an exam or a quiz). My question always is, "Where do you think learning objectives come from?" Now I won't complicate matters by comparing different motivations to learn, but once again I'll simplify by saying they come directly from performance objectives, and both have the same criteria for assessment. It is only the conditions and standards of the objective that differ.

For example, "inflate a punctured tyre sufficiently to allow the completion of a journey" may suggest an obvious measure of accomplishment at first glance. However we may then ask for clarification - a tyre on a car? on a bike? on an HGV? on an aircraft? with a full payload on board? across what distance? under favourable conditions? in the dark? when it's wet? whilst under fire? Some elements of the performance may be held in common; for example, using a manual pump might require the same physical force in a warm, well-lit garage as it does by the side of the road. However reductionism in instruction has become much maligned and may be an issue too hot for me to argue all in one posting.
 
