Can Non-Clinician Raters Be Trained to Assess Clinical Reasoning in Post-Encounter Patient Notes?

2019 
PURPOSE: Clinical reasoning is often assessed through patient notes (PN) written after standardized patient (SP) encounters. While non-clinicians can score PNs using analytic tools such as checklists, such tools do not fully capture the holistic judgments of clinician faculty. To better model faculty judgments, the authors developed checklists with faculty-specified scoring formulas embedded in a spreadsheet, and studied the resulting inter-rater reliability (IRR) of non-clinician raters (SPs and medics), as well as student pass/fail status.

METHOD: In Study 1 (pilot phase), non-clinician and faculty raters rescored the PNs of 55 third-year medical students across 5 cases of the 2017 Graduation Competency Examination (GCE) to determine IRR. In Study 2, non-clinician raters scored all notes of the 5-case 2018 GCE (178 students). Faculty rescored all notes of failing students and could modify formula-derived scores where they deemed it appropriate. Faculty also rescored and corrected additional notes, for a total of 90 notes across 3 cases (including the failing students' notes).

RESULTS: Mean overall percent exact agreement between non-clinician and faculty ratings was 87% (weighted kappa .86) in Study 1 and 83% (weighted kappa .88) in Study 2. SP and medic IRRs did not differ significantly. Four students failed the note section in 2018; three passed after faculty corrections. Few corrections were made to non-failing students' notes.

CONCLUSIONS: Non-clinician PN raters using checklists and scoring rules may provide a feasible alternative to faculty raters for low-stakes assessments and for the bulk of well-performing students. Faculty effort can then be targeted strategically at rescoring the notes of low-performing students and providing more detailed feedback.
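As a worked illustration of the measures above, the Python sketch below shows how a faculty-specified scoring formula and the two reported agreement statistics might be computed. This is not the authors' spreadsheet: the item weights, checklist values, and paired ratings are hypothetical, and the abstract does not state whether linear or quadratic kappa weights were used (linear weights are assumed here).

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical faculty-specified scoring formula: a note's score is a
# weighted sum of checklist items (weights are illustrative only).
item_weights = np.array([2.0, 1.0, 1.0, 3.0])

def note_score(items):
    """Map a note's checklist item values to an overall note score."""
    return float(items @ item_weights)

print(note_score(np.array([1, 0, 1, 1])))  # -> 6.0

# Hypothetical paired ratings of the same notes on an ordinal scale.
nonclinician = np.array([3, 2, 2, 0, 1, 3, 3, 2, 1, 0])
faculty      = np.array([3, 2, 1, 0, 1, 3, 2, 2, 1, 0])

# Percent exact agreement: fraction of notes scored identically.
exact = np.mean(nonclinician == faculty)

# Weighted kappa: chance-corrected agreement that penalizes larger
# ordinal disagreements more heavily than adjacent ones.
kappa = cohen_kappa_score(nonclinician, faculty, weights="linear")

print(f"Percent exact agreement: {exact:.0%}")  # cf. 87% / 83% reported
print(f"Weighted kappa: {kappa:.2f}")           # cf. .86 / .88 reported

The reported figures are described as means, so in practice such statistics would be computed from the paired non-clinician and faculty scores and averaged across the five GCE cases; sklearn's cohen_kappa_score is used here for brevity, but any standard statistics package provides the same computation.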