| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 6152677 | Patient Education and Counseling | 2012 | 7 | |
**Objective:** To evaluate how the utility (reliability, validity, acceptability, feasibility, cost and educational impact) of a communication OSCE was influenced by whether or not station-specific (StSp) checklists were used together with a generic instrument, and whether or not narrative feedback was provided to students.

**Methods:** At ten stations, faculty members rated standardized patient–student interactions using the Common Ground (CG) instrument (at all stations) and StSp checklists. Both raters and standardized patients provided written feedback. The impact of changing the design on the various utility parameters was assessed: reliability by means of a generalizability study, cost using the Reznick model, and the other utility parameters by means of a survey.

**Results:** Use of the generic instrument (CG) proved more reliable (G coefficient = 0.67) than use of the StSp checklists (G = 0.47) or both combined (G = 0.65), while the two scale scores were highly correlated (Pearson's r = 0.86). Costs were 6.5% higher when StSp checklists were used and 5% higher when narrative feedback was provided.

**Conclusion:** The utility of a communication OSCE can be enhanced by omitting StSp checklists and by providing narrative feedback to students.

**Practice implications:** The same generic assessment scale can be used at all stations of a communication OSCE. Providing feedback to students is promising, but it increases costs.
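The G coefficients reported above come from generalizability theory. As a hedged illustration only (the study's actual variance model, involving stations, raters and checklists, is not specified in the abstract), the generalizability coefficient for a simple persons-by-stations design takes the standard form:

$$
E\rho^2 \;=\; \frac{\sigma^2_{p}}{\sigma^2_{p} + \dfrac{\sigma^2_{ps,e}}{n_s}}
$$

where \(\sigma^2_{p}\) is the variance attributable to students, \(\sigma^2_{ps,e}\) is the residual (student-by-station plus error) variance, and \(n_s\) is the number of stations. Increasing the number of stations or reducing residual variance moves the coefficient toward 1, which is why a ten-station design can reach values such as 0.67 with a single generic instrument.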
