FSA compliance testing (groan...)

About a year ago I heard a rumour that the FSA was going to say something about knowledge testing that might move it away from the mind-numbing banks of compliance multiple choice questions which companies inflict on their staff in its name. But I never heard any more. Can anyone enlighten me?


Replies

  • Hi Norman

    Oddly, I was sitting at my computer kitting up for an Item Writing Course which I will be delivering tomorrow when your response was posted, so I'm going to take the opportunity to take a quick break from my prep and respond.

    Point one: Yes, I was building the internal assessment regime for the FSA itself as part of the Supervisory Enhancement Programme announced by Hector Sants following the internal audit report into the Northern Rock debacle. For obvious reasons I can't go into too much detail but, to be fair to the FSA, it was, without doubt, the most robust and well integrated programme (in terms of training and assessment) I have ever been involved in.

    One of the major stumbling blocks I find working with firms is that they fail to articulate clearly exactly what the competence is that they need to assess. This often results in, for instance, firms assessing technical knowledge of support staff, mainly through the use of detailed knowledge based MCQs which require the candidates to memorise all sorts of obscure facts, when, in fact, the competence they should be measuring is the ability to use the firm's systems to access the information they need to know.

    One of the other incredibly important elements of assessment is working out the cognitive level required for the training and assessment. When I am working with firms I use a form of shorthand revolving around the use of antibiotics. "Do you need to know at a parent, nurse or doctor level?"

    Parents simply need to know the name of the medicine, how much to give and when.
    Nurses need to understand the medicine so that they are aware of possible contra-indications etc.
    Doctors need to apply their knowledge and understanding of antibiotics to a specific set of symptoms being suffered by a specific individual and prescribe accordingly.

    So the question we need to ask is "Are you training and assessing parents to the doctor level or, worse, training and assessing doctors at the parent level?"

    Actually, more common is that firms train to the doctor level but then assess at the parent level, so there is no assessment of the candidates' understanding of the subject or of their ability to take the knowledge and apply it in specific instances.

    The other element that so often gets missed is the difference between whether candidates CAN do something and whether they WILL. Cognitive and skills-based assessments can answer the first question but not the second - that needs a much more behavioural approach.

    One of the things we did at the FSA was to ensure that both technical and behavioural competencies were assessed alongside each other so that their internal T&C scheme could be shown to be achieving, to the best of its ability, robust outcomes.

    My other concern, borne out of experience, is that too much testing presents a rather negative facade to external bodies (be they your own compliance or internal audit departments or the FSA). There is a very important role to be played by assessments in measuring competency, but they are no substitute for real-life data, so a firm with robust KPIs measuring competence "on-the-job" really shouldn't be doing blanket testing as well; doing so, at the very least, implies no great faith in the real-time data.

    By all means use them as a route back to competence for those identified through business measures as falling below a certain threshold but why subject colleagues who have not demonstrated any issues through their ongoing competence measures to the cost and potential emotional turmoil of ongoing offline assessments?

    The other thing I would mention is that, while e-Learning and other computer based assessments are incredibly cost efficient, there is a real limit on the cognitive level that this type of approach can assess. Online testing might well be able to assess how well I know the structure of an effective performance conversation but it can never tell you whether I can actually conduct one (or indeed, if I can, whether I will).

    It's a fascinating subject.
  • Let me declare an interest (or two) up front in the hope of reviving this somewhat moribund (but nevertheless interesting) discussion.

    Interest one: I make at least some of my living from advising firms on how to implement effective assessment regimes, a substantial part of which revolves around using MCQs.

    Interest two: I recently spent the best part of a year working at the FSA (post Northern Rock) implementing, amongst other things, an internal assessment regime which utilises a great many MCQs but which must also withstand the rigour of considerable external inspection (Treasury Select Committee and others).

    I have always thought that, provided the assessments were well thought out and adequately implemented, MCQs have a useful (although not unique) role to play in ensuring competence.

    My own modus operandi is to ensure that there exists a clear separation between item writers and learning content designers, but that both are working to clearly articulated learning outcomes. This ensures that both the training and the assessments are mapped to the same clear set of objectives without the training being designed simply to pass the assessment, which I would always see as undesirable.

    I fully understand that depriving trainers of the opportunity to see the assessments can be difficult to manage and can add additional strain to resourcing levels, but I also know that there is a commensurate improvement in the learning when the trainer is obliged to deliver the whole syllabus and not just concentrate on those specific areas he or she knows to be included in the assessment. I can't have been the only person to have been involved (on both sides) in one of those training sessions where delegates were told that they "might like to make a particular note of this" or given some other clue to the inclusion of a particular detail in the end-of-course assessment.
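
    As a purely illustrative sketch of that mapping (the outcome codes, module names and item IDs below are invented for the example, not anything from the FSA work), a simple cross-check can flag outcomes that are trained but never assessed, or assessed but never trained:

    ```python
    # Hypothetical illustration: training modules and assessment items are both
    # mapped to the same set of clearly articulated learning outcomes, and any
    # outcome covered by only one side of the mapping is flagged.

    training_map = {            # learning outcome -> training modules teaching it
        "LO1": ["Module A"],
        "LO2": ["Module A", "Module B"],
        "LO3": ["Module C"],
    }

    item_map = {                # learning outcome -> assessment items testing it
        "LO1": ["ITEM-014", "ITEM-027"],
        "LO2": ["ITEM-031"],
        # LO3 not yet covered by any item
    }

    for lo in sorted(set(training_map) | set(item_map)):
        if not training_map.get(lo):
            print(f"{lo}: assessed but not covered by any training module")
        if not item_map.get(lo):
            print(f"{lo}: trained but not covered by any assessment item")
    ```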

    I believe item writers should always be trained and accredited prior to authoring summative-assessment-quality items as, otherwise, they will simply write a succession of "memory recall" items which will likely have no correlation to actual ability. I have found the various books and research papers written by Thomas Haladyna and Steven Downing to be invaluable tools in this respect; "Developing and Validating Multiple Choice Test Items", "Writing Test Items to Evaluate Higher Order Thinking" and the "Handbook of Test Development" are my three main reference works. My experience is that item writers will only write questions testing understanding and application of knowledge when they have been trained to do so and, left to their own devices, will always produce knowledge-recall items.

    The other thing which I would do as a matter of course would be to insist on post-testing analysis of the results to include, as a minimum, facility, discrimination and spread of choices. Utilising this information and mapping it to other forms of assessment (whether they be real-time measures such as KPIs or other forms of assessment instrument such as case-studies) is essential to ensure the continuing effectiveness of any item bank. I would also never lose sight of the fact that, just because a person CAN apply the knowledge they have learnt to specific cases or individuals, it doesn't mean that they WILL which is where behavioural assessments can prove invaluable.
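
    For anyone curious what that post-test analysis might look like in practice, here is a minimal sketch. The response data are invented, and the point-biserial correlation used here is just one common way of expressing discrimination, not necessarily the statistic any particular firm or tool uses:

    ```python
    from statistics import mean, pstdev

    # Invented response data: each row is one candidate's chosen option per item;
    # keys[i] is the correct option ("key") for item i.
    responses = [
        ["A", "C", "B", "B"],
        ["A", "C", "D", "B"],
        ["B", "C", "B", "A"],
        ["A", "D", "B", "B"],
        ["A", "C", "C", "B"],
    ]
    keys = ["A", "C", "B", "B"]

    scores = [[1 if r == k else 0 for r, k in zip(row, keys)] for row in responses]
    totals = [sum(row) for row in scores]

    for i, key in enumerate(keys):
        item_scores = [row[i] for row in scores]
        facility = mean(item_scores)  # proportion answering the item correctly

        # Discrimination as a point-biserial correlation between item score (0/1)
        # and total test score (a rough version: the item is left in the total).
        sd_total = pstdev(totals)
        if sd_total == 0 or facility in (0, 1):
            discrimination = 0.0
        else:
            mean_if_correct = mean(t for t, s in zip(totals, item_scores) if s == 1)
            discrimination = ((mean_if_correct - mean(totals)) / sd_total) * (
                facility / (1 - facility)
            ) ** 0.5

        # Spread of choices: how many candidates picked each option.
        spread = {}
        for row in responses:
            spread[row[i]] = spread.get(row[i], 0) + 1

        print(f"Item {i + 1} (key {key}): facility={facility:.2f}, "
              f"discrimination={discrimination:.2f}, spread={spread}")
    ```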

    One of the more interesting pieces of current research into the use of MCQs is the proposal to go to three option items (one key and two further options). See, for example, Haladyna, T.M. & Downing, S.M. (1993). How many options is enough for a multiple choice test item? Educational & Psychological Measurement.

    Haladyna and Downing argue that we should start talking not about distractors but about "functional distractors" (i.e. those that actually do the job they are there to do). Having performed a meta-analysis of results going back almost 80 years, they have established that the average number of functional distractors in multiple choice tests is around 2.6, which means that, to all intents and purposes, we are already using three-option assessments anyway. As an aside, they also favour ditching the word "distractor", as it has too many negative connotations, and just using "options" and "key" instead.

    That would certainly agree with the results I have encountered during many years of auditing and analysing item banks, whereby I consistently find that in excess of 90% of the items in the banks I have studied have at least one redundant (or non-functional) distractor. I can't help thinking that the job of the item writer would be made much easier, and research already shows that, psychologically, candidates are more accepting of three-option assessments. Given that we already do this de facto, it seems but a small step to include it as an accepted element of the assessment creation process.
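
    By way of illustration of that kind of item-bank audit (a sketch only - the 5% cut-off and the response counts are my own assumptions rather than anything prescribed), a distractor can be flagged as non-functional when too few candidates choose it:

    ```python
    # Hypothetical item-bank audit: a wrong option ("distractor") is treated as
    # non-functional when fewer than 5% of candidates chose it. The threshold and
    # the counts below are invented for the sake of the example.

    THRESHOLD = 0.05

    item_bank = {
        "ITEM-001": {"key": "B", "choices": {"A": 2, "B": 160, "C": 35, "D": 3}},
        "ITEM-002": {"key": "A", "choices": {"A": 120, "B": 40, "C": 38, "D": 2}},
        "ITEM-003": {"key": "C", "choices": {"A": 55, "B": 50, "C": 90, "D": 5}},
    }

    items_with_dead_distractors = 0
    for name, item in item_bank.items():
        total = sum(item["choices"].values())
        dead = [
            option
            for option, count in item["choices"].items()
            if option != item["key"] and count / total < THRESHOLD
        ]
        if dead:
            items_with_dead_distractors += 1
            print(f"{name}: non-functional distractor(s): {', '.join(dead)}")

    share = items_with_dead_distractors / len(item_bank)
    print(f"{share:.0%} of items have at least one non-functional distractor")
    ```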

    Any thoughts on this?
    • Very interesting. Do you mean you were building the internal assessment regime for the FSA itself, or for financial services companies on behalf of FSA?

      Looking at your main points:
      - the separation of assessment from training to avoid 'training for the exam'
      - post-testing analysis of results
      - link to other forms of assessment
      - functional distractors

      ... it's refreshing to get a picture of how an organisation would approach assessment when it was more than a box-ticking exercise. I'm not saying it's purely that in our organisation, but since computer-based assessment became available internally in our LMS (business units used to have to use an external e-assessment company) testing has become a 'because we can' thing. I've no idea of the number of different business units and depts who create tests, but it's huge and it's not all for compliance - much of it is simply to keep product knowledge refreshed. The result, though, is something like 80,000 tests per month. I'm not sure how much influence we in the central learning team can have on the 'regime', but you certainly point to some good aims. I think it's just that we have a lot of local training teams who add a test as an afterthought 'because we can'. Having the LMS should enable us to at least do some of that quality analysis of questions.

      Thanks for a useful post.
  • Well, we are now a year on from this and I haven't heard anything yet - we are still inflicting!
    • Yep - and they can't get enough of it! (the managers, not the people)
  • Unfortunately this has not filtered through to us yet.......

    We are still testing our employees using multiple choice questions and have now got e-learning modules on:

    FSA / FOS
    DPA
    Anti-Money Laundering
    Fraud

    I believe the reasons for this, though, are more for audit purposes than offering a great learning experience.

    And the latest FSA initiative???

    'Treating Customers Fairly' - and how was the learning assessed? Yes, you've guessed it: multiple choice questions!!!

    Oh the joys.....
    • Hello, I am currently the Training Resources Manager for a large Independent Financial Services network & have read your comments with interest. We are still very reliant on multi-guess questions.

      But in terms of TCF, I don't think multiple choice questions are going to be completely suitable to assess learning & understanding, as TCF is a principles-based approach rather than hard facts, so it will be open to more interpretation. I therefore think the use of case studies with short answers may well be more suitable.

      For example, one we have used on our TCF workshops is a case study assessing the performance of a team & developing the relevant action plan.

      That said, the marking & measuring of learning & knowledge will now be more time consuming for my team.

      Nevermind!
  • Great point... To my mind the whole area of assessment has changed; here's the sequence as I see it:-
    1. Business Intelligence is all the rage as an application - it provides the metrics for
    2. Performance Management - which allows us to assess each individual's competence by assessing their performance; and that provides the basis for
    3. Talent Management - and the basis of advancement within the organisation
    So PM should mean the end of multiple choice. It should also mean the end of testing what people know, with the focus shifting to how they perform.
  • I work in the Insurance sector, where the FSA has never dictated the approach to knowledge testing. It seems to me that our industry chose to take the route of multiple choice questions, probably because it is relatively easy to set up and measure. Whilst this may allow us to tick a box saying 'job done', it is unlikely to be that effective. The real worry is that so many people seem to make the mistake of thinking that these tests prove competence. Oh dear!
    • True, the FSA are notorious for stating that we should prove compliance but have never stated what constitutes satisfactory training and testing.
      Without this clarity we constantly ask what should be included in training and testing. Last year I developed two compliance e-learning courses. The courses contained a series of case studies within the learning to ensure quality and application of learning. Separate multiple choice tests were used to prove a level of knowledge was held by individuals, plus a self-certification 'page' to confirm training had been undertaken. I think that just about covered all bases!