“On the same page”? The effect of GP examiner feedback on differences in rating severity in clinical assessments: A pre/post intervention study

Sturman, Nancy, Ostini, Remo, Wong, Wai Yee ORCID: https://orcid.org/0000-0002-3053-0729, Zhang, Jianzhen and David, Michael (2017) “On the same page”? The effect of GP examiner feedback on differences in rating severity in clinical assessments: A pre/post intervention study. BMC Medical Education, 17. ISSN 1472-6920

Full text not available from this repository.

Abstract

Background: Robust and defensible clinical assessments attempt to minimise differences in student grades that are due to differences in examiner severity (stringency and leniency). Unfortunately, there is little evidence to date that examiner training and feedback interventions are effective; “physician raters” have indeed been deemed “impervious to feedback”. Our aim was to investigate the effectiveness of a general practitioner examiner feedback intervention, and to explore examiner attitudes to it.

Methods: Sixteen examiners were provided with a written summary of all examiner ratings in medical student clinical case examinations over the preceding 18 months, enabling them to identify their own rating data and compare it with that of other examiners. Examiner ratings and examiner severity self-estimates were analysed pre- and post-intervention using non-parametric bootstrapping, multivariable linear regression, intra-class correlation and Spearman’s correlation analyses. Examiners completed a survey exploring their perceptions of the usefulness and acceptability of the intervention, including what (if anything) they planned to do differently as a result of the feedback.

Results: Examiner severity self-estimates were relatively poorly correlated with measured severity on the two clinical case examination types pre-intervention (0.29 and 0.67), and were less accurate post-intervention. No significant effect of the intervention was identified when differences in case difficulty were controlled for, although there were fewer outlier examiners post-intervention. Drift in examiner severity over time prior to the intervention was observed. Participants rated the intervention as interesting and useful, and survey comments indicated that fairness, reassurance, and understanding examiner colleagues are important to examiners.

Conclusions: Despite our participants being receptive to our feedback and wanting to be “on the same page”, we did not demonstrate effective use of the feedback to change their rating behaviours. Calibration of severity appears to be difficult for examiners, and further research into better ways of providing more effective feedback is indicated.
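As a rough illustration of the kind of analysis named in the Methods (Spearman’s correlation with non-parametric bootstrapping), the short Python sketch below uses NumPy and SciPy with entirely hypothetical examiner values; it is not the study’s data or code, only an example of how the correlation between severity self-estimates and measured severity could be estimated.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-examiner values (one entry per examiner):
# a severity self-estimate and a measured severity index
# (e.g. mean deviation of that examiner's ratings from the cohort mean).
self_estimates = np.array([3, 4, 2, 5, 3, 4, 1, 3, 2, 4, 5, 3, 2, 4, 3, 1])
measured_severity = np.array([0.2, 0.5, -0.3, 0.9, 0.1, 0.4, -0.8, 0.0,
                              -0.2, 0.6, 0.7, 0.3, -0.5, 0.5, 0.1, -0.6])

# Spearman rank correlation between self-estimated and measured severity.
rho, p_value = spearmanr(self_estimates, measured_severity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# Non-parametric bootstrap: resample examiners with replacement and
# recompute the correlation to obtain a 95% percentile interval.
n = len(self_estimates)
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    r, _ = spearmanr(self_estimates[idx], measured_severity[idx])
    boot.append(r)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for rho: [{ci_low:.2f}, {ci_high:.2f}]")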

Item Type: Article
Faculty \ School: Faculty of Medicine and Health Sciences > Norwich Medical School
Depositing User: LivePure Connector
Date Deposited: 06 Feb 2023 13:30
Last Modified: 25 Sep 2024 17:08
URI: https://ueaeprints.uea.ac.uk/id/eprint/91035
DOI: 10.1186/s12909-017-0929-9
