Towards the end of the high school year in Australia, news programs cover the Year 12 examinations that help decide a student’s career path, such as entry to university. Students are filmed sitting at individual desks, writing with pens on paper, just as I did at age 18 decades ago and just as generations of pupils did before that. The content may differ, but the process of high-stakes assessment seems to have stagnated. Classrooms have changed in the last century, but little has altered in the way we deliver or undertake written tests.
At university there are still examination halls and, in Australia, what is known as ‘stuvac’ (study vacation: the week before the official examination period at the end of each semester) for revision (or cramming in facts).
There is more in-course assessment nowadays and, for health professional students, end-of-placement grading. In addition, contemporary education practice aims to test the application of knowledge rather than simply how well it is regurgitated. But these desk-based hand-written examinations persist and just seem so old-fashioned.
The pandemic has led to trials of online assessments, with varying degrees of success. There have been problems with providing sufficient computers in one place for large numbers of students. Some online exams have been taken at home, raising access and equity issues for many students, as well as technological challenges and invigilation headaches.
Open book exams are controversial. This method enables students to refer to books, notes, and sometimes electronic devices, to help answer questions. This process assesses the ability to look up, analyse and apply information – important skills for professionals – and it mirrors real-life work. Yes, there is knowledge that a health professional needs immediately, and yes, the internet may fail in an emergency, but in many situations we should surely be emphasising that it is better to look things up than to trust to memory, particularly when tired.
Whatever the format, high-stakes assessments are stressful.
I had a panic attack in my 2nd year physiology exam at medical school. Even now, when stressed, I still have dreams that it is the day before a test and I haven’t done any revision. Interestingly, the dream is usually about an end-of-school assessment in physics or maths rather than a medical school or postgraduate exam. This type of dream can be a feature of imposter syndrome (but that’s another topic).
During one practical end-of-year examination I invigilated at a medical school, students had to take each other’s blood pressure at one OSCE (objective structured clinical examination) station. Some of the readings in these young adults were higher than 160 mmHg systolic – a physical sign of anxiety (comparable to the increase of up to 50% in blood pressure that has been measured in Formula 1 drivers during a Grand Prix).
Health professional educators may say that if a student can’t cope with the stress of an exam, clinical practice will be difficult and challenging. But why do we insist on these massive, expensive assessments at the end of university programs and during postgraduate training? Students frequently spend more time studying than seeing patients; trainees’ work-life balance is disrupted as days and nights become work at work and work at home revising. In addition, data show that many examinations unfairly affect examinees’ attainment through bias relating to race, culture, country of origin and gender.
The ubiquitous OSCE, moreover, has evolved from focused tasks to complex activities, such as breaking bad news, discussing vaccine hesitancy and supporting antenatal screening decisions, that examinees are expected to undertake in 5-10 minutes (what message does this give?).
In medical education, van der Vleuten and Schuwirth have championed the use of programmatic assessment. Here, the program of assessment is designed to promote and optimise learning, to enable decision-making about learner progression, and to serve a curriculum quality-assurance function.
For such a program to succeed, there needs to be commitment to regular observation of learners and engagement in feedback dialogue. Workplace-based assessment is an important component, but it can be challenging in busy clinical spaces, particularly in this era of increasing workforce pressures and consequently less time for observation and learning.
Entrustable professional activities (EPAs) are increasingly being used as part of programmatic assessment; they were introduced in the Netherlands in 2005 by ten Cate. A simple way of thinking about an EPA is: ‘Having observed and worked with this health professional, would I trust them to carry out this activity in future, and if so, with what level of supervision?’ In other words, at a basic level: would I be happy for this health professional to treat my family?
Again, EPAs require regular observation of each learner/student/trainee. They focus on individual performance in that it is an individual health professional who is ‘entrusted.’ In this era of team-based health care, and as Lingard has noted, there needs to be further consideration of the concept of collective competence and, indeed, how individual EPAs are affected by teamwork or a lack thereof.
Assessment processes need to catch up with the way society, education, healthcare and technology are changing. We need to factor in artificial intelligence (AI) and tools such as ChatGPT as well.
Lingard L. Rethinking competence in the context of teamwork. In: Hodges BD, Lingard L (eds). The Question of Competence: Reconsidering Medical Education in the Twenty-First Century. Ithaca, NY: Cornell University Press; 2012. pp. 42-69.