We put together an assessment test for one of the modules of our clinical system training using Captivate, to be completed at the end of the training session before logons for the live system are issued. When it was developed, we decided that a 15-minute maximum test would be adequate; how long it actually takes depends on whether the questions are mixed types or include interaction. My feeling is that if it is to be delivered at the end of training, a simple test that covers the course objectives should be adequate, as otherwise people will not answer the questions correctly because they will have had enough.
We develop programs using Lectora, Captivate and Presenter. We complete assessments in two ways: assessments at the end of the program, and assessment questions located throughout the program. There are pros and cons to both (primarily around design), but overall both are acceptable and work for us.
We use assessments to close out a compliance requirement (I work in the biotech industry), so we don't issue certificates. Completion is recorded on the LMS. The idea of these courses/assessments is that content is reinforced during hands-on task completion in the Manufacturing Suites etc.
We don't place a time limit on assessments, but we sometimes place a limit on how many times a candidate may repeat an assessment if they fail. The number of questions is based on key content, but we tend to have a minimum of five questions. The maximum we have used, to my knowledge, is 15.
Hope this helps.
For each course we typically use a 10-question multiple-choice post-assessment in a range of question styles and templates. The passing score is 80%, i.e. 8/10. Users have two attempts to achieve a pass of the course. We do not set a time limit.
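The pass/attempt rules described above (10 questions, 80% pass mark, two attempts, no time limit) can be sketched in code. This is a minimal illustration only; the function and variable names are assumptions, not part of any particular LMS or authoring-tool API.

```python
# Illustrative sketch of the scoring rules described above:
# 10 questions, 80% pass mark (8/10), maximum of two attempts.
# All names here are hypothetical.

PASS_MARK = 0.8
MAX_ATTEMPTS = 2
NUM_QUESTIONS = 10

def grade_attempt(correct_answers: int) -> bool:
    """Return True if a single attempt meets the 80% pass mark."""
    return correct_answers / NUM_QUESTIONS >= PASS_MARK

def course_result(attempt_scores: list[int]) -> str:
    """Evaluate up to MAX_ATTEMPTS attempts; pass on the first qualifying score."""
    for score in attempt_scores[:MAX_ATTEMPTS]:
        if grade_attempt(score):
            return "pass"
    return "fail"

print(course_result([7, 9]))  # second attempt reaches 9/10 -> pass
print(course_result([7, 6]))  # both attempts below 8/10 -> fail
```

In practice the LMS enforces these rules, but writing them out makes explicit that only the first two attempts count and that 8/10 is the boundary case.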
Questions are intended to recap the key points from the course, and increasingly we try to write these in a scenario form where possible rather than just testing memory of learning content, i.e. can the user take knowledge A + knowledge B to deal with situation C?
Other than for formal compliance/safety topics, in the majority of cases we do not take the scores too seriously; rather, they are intended for the user to gauge their own level of understanding and identify any remedial learning that may be needed. Of course, if a user appears to be consistently struggling, there may be a case for careful intervention by their line manager/HR to find out why.
Questions are worded as clearly as possible, not least to help those working in English as a second language. We aim to keep an eye on pass/fail rates for individual questions: if many users get a particular question wrong, that can flag that the question may be ambiguous and needs improved wording.
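The per-question monitoring described above amounts to simple item analysis: flag any question whose failure rate exceeds a chosen threshold as a candidate for rewording. The sketch below assumes a hypothetical data shape (question id mapped to per-user correct/incorrect results) and a 50% threshold; both are illustrative choices, not anything prescribed by the original posts.

```python
# Hypothetical item-analysis sketch: flag questions that many users fail,
# since a high failure rate can indicate ambiguous wording rather than
# a lack of knowledge. Data shapes and names are assumptions.

def flag_ambiguous_questions(results: dict[str, list[bool]],
                             fail_threshold: float = 0.5) -> list[str]:
    """results maps question id -> list of per-user True/False outcomes.
    Returns ids of questions whose failure rate exceeds fail_threshold."""
    flagged = []
    for qid, answers in results.items():
        if not answers:
            continue  # no data yet for this question
        fail_rate = answers.count(False) / len(answers)
        if fail_rate > fail_threshold:
            flagged.append(qid)
    return flagged

results = {
    "q1": [True, True, False, True],    # 25% fail rate: acceptable
    "q7": [False, False, True, False],  # 75% fail rate: review the wording
}
print(flag_ambiguous_questions(results))  # ['q7']
```

Most LMS platforms expose per-question response data in some form, so a periodic report like this can feed directly into the review cycle mentioned elsewhere in the thread.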
As well as the scored post-assessment we also use a variety of non-scored quiz questions dotted through the course to maximise interaction. Instant right-wrong feedback is provided based on the user's response.
Hope this helps,
Nigel makes two good points that I should have mentioned previously:
We also review assessments for improvement/clarification. This can take place immediately if errors or omissions are detected, or during our review of materials (typically when authors come back to review/update them). Assessment pass rates vary from 80% to 100% depending on the course.
If a candidate appears to be struggling, we have counselling with Line Managers in the first instance and additional meetings with HR if required.
I would say it depends on what you are testing. What will successful completion of the test achieve? Is it in response to some e-learning? Also, is a test essential, or would some evidence of practical application be better?
I would say, do any testing in as few questions or steps as necessary to support your outcome. That's my thought, for what it's worth. :)
I'm also firmly of the opinion that the straightforward end-of-test questions approach shouldn't be the standard means of assessment. Informal learning tools such as blogs, wikis and forums all provide trackable evidence of learners actually applying the course material in the real world, which is what you should really be looking for to prove that your learning intervention met its objectives.