AUSTRALIAN MEDICAL ASSOCIATION (WA)
‘P’s get degrees.’ It’s a motto observed by many university students, and one that many medical students succumb to as they delve deeper into their degrees.
Our friends and family are often incredulous that medical students could become doctors by achieving, say, 50 per cent in tests, or even 60 or 70 per cent for that matter. Why isn’t the expectation 100 per cent?
So, how do we set this passing mark? Until recently, the UWA School of Medicine used Cohen’s method for written examinations and Rothman’s method for OSCEs. There are various such ‘standard-setting’ methods, and all exist so that the passing mark is not picked arbitrarily but can be empirically justified.
Cohen’s method, for example, sets the passing mark as a fixed proportion of the mark achieved by the top-performing students in a test: the pass mark might be 70 per cent of the mark achieved by the student(s) at the 90th percentile. It was an unpopular method among students, because the passing mark was uncertain and was dictated by the top students’ performance rather than by what needed to be known.
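The calculation described above can be sketched in a few lines. This is only an illustration of the idea, assuming raw scores out of 100 and the 70 per cent/90th percentile figures from the example; the function name and cohort data are invented, not any official implementation of Cohen’s method.

```python
# Illustrative sketch of Cohen's method as described in the example:
# pass mark = 70 per cent of the score achieved at the 90th percentile.
# (Parameters and cohort data are hypothetical, for illustration only.)
import statistics

def cohen_pass_mark(scores, percentile=90, proportion=0.70):
    """Return `proportion` of the score at the given percentile of `scores`."""
    # statistics.quantiles with n=100 yields the 1st..99th percentile cut points
    cut_points = statistics.quantiles(scores, n=100)
    reference_score = cut_points[percentile - 1]
    return proportion * reference_score

# A hypothetical cohort of raw test scores (out of 100)
scores = [55, 62, 48, 71, 80, 66, 59, 74, 90, 68, 77, 52]
print(round(cohen_pass_mark(scores), 1))
```

Note that the pass mark moves with the cohort: a stronger top group raises the reference score, and with it the mark everyone else must reach, which is precisely the source of the uncertainty students disliked.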
It did have its advantages, but processes such as this further increase stress levels and competitiveness within medicine, breed unhappiness and stifle teamwork.
In an ideal world, we would expect nothing less than 100 per cent from our medical students, and there would be no need for any statistical or non-statistical method to set a passing mark.
Importantly, the purpose of testing would be solely to assess the achievement of outcomes, not to rank our students. Because of this, each question or assessment could be marked in a dichotomous fashion – outcome achieved or outcome not achieved.
This could be applied as easily to traditional assessment forms such as MCQs as to contemporary forms such as OSCEs and various simulation-based assessments. The percentage of MCQs answered correctly would have little meaning; rather, the incorrectly answered MCQs would merely indicate which outcomes were yet to be achieved.
Similarly, in OSCEs, the stations in which students had not demonstrated achievement of the assessed outcome would indicate which outcomes needed to be taught again before further testing.
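The dichotomous, outcome-based marking described above amounts to a simple mapping from questions (or stations) to outcomes. A minimal sketch, in which the outcome names and the question-to-outcome mapping are entirely invented for illustration:

```python
# Hypothetical sketch of dichotomous, outcome-based marking: a wrong answer
# does not lower a percentage score, it flags an outcome as not yet achieved.
# (Outcome names and question mapping are invented for illustration.)

def outcomes_not_yet_achieved(question_outcomes, answers_correct):
    """Return the sorted outcomes linked to any incorrectly answered question."""
    missed = {
        outcome
        for question, outcome in question_outcomes.items()
        if not answers_correct[question]
    }
    return sorted(missed)

question_outcomes = {
    "Q1": "interpret an ECG",
    "Q2": "interpret an ECG",
    "Q3": "prescribe safely",
    "Q4": "take a history",
}
answers_correct = {"Q1": True, "Q2": False, "Q3": True, "Q4": True}
print(outcomes_not_yet_achieved(question_outcomes, answers_correct))
```

The output of such a marking scheme is not a rank or a percentage but a list of outcomes to re-teach before further testing, which is the point the article is making.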
In this ideal world, the focus would be on teaching built around transparent learning outcomes and frequent formative assessment to give feedback on progress. This would be accompanied by strong and early remediation for those flagged by formative assessment as not yet having achieved the expected outcomes.
Summative assessment at the end of a course or period should not be a stressful ordeal, yet it often is. It should merely be a formality confirming what should already be apparent: that the student has achieved all the outcomes necessary to perform as a safe doctor.