Article ID: 3144622
Journal: Journal of Dental Sciences
Published Year: 2013
Pages: 5
File Type: PDF
Abstract

Background/purpose: Well-constructed and validated evaluation tools have been used for clinical performance assessment (including the objective structured clinical examination) in many countries for years. The aim of performance assessment in dentistry is to evaluate whether dental graduates are clinically competent in essential skills, and the results can be used to modify teaching and training programs. Because inter-rater reliability weighs heavily on the overall reliability of such evaluation tools, the aim of this study was to investigate the relationship between rater training and rater reliability.

Materials and methods: Two sixth-year dental students who had already completed a half-year internship each performed an 8-minute subgingival root planing procedure, and their performance was captured on videotape. Nine faculty members from the School of Dentistry who had participated in developing this case were invited to observe the recorded video and rate the two students using a checklist. One month later, after receiving further assessment training (a workshop including role-play, rating practice, discussion, etc.), the same nine raters observed the same video again and re-rated the students using the same checklist.

Results: Inter-rater reliability for the two students in the initial rating was W = 0.770 and 0.763. On re-rating (1 month later), the results were W = 0.891 and 0.827. All results were statistically significant (P < 0.001).

Conclusion: Rater training by means of role-play, rating practice, and discussion does improve the inter-rater reliability of performance assessment in dentistry.
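The W statistic reported in the Results is presumably Kendall's coefficient of concordance, the standard agreement measure for multiple raters ranking the same items. Below is a minimal sketch of how such a value is computed; the ratings matrix is hypothetical (the paper's raw checklist scores are not given here), and no tie correction is applied.

```python
# Kendall's coefficient of concordance (W) for m raters scoring n items.
# W ranges from 0 (no agreement among raters) to 1 (perfect agreement).
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings: np.ndarray) -> float:
    """ratings: (m raters) x (n items) matrix of scores."""
    m, n = ratings.shape
    # Rank each rater's scores across the n items (ties get average ranks).
    ranks = np.apply_along_axis(rankdata, 1, ratings)
    rank_sums = ranks.sum(axis=0)                     # R_i per item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()   # deviation of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical example: 9 raters scoring 5 checklist items on a 1-5 scale.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(9, 5)).astype(float)
print(f"W = {kendalls_w(scores):.3f}")
```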

Related Topics
Health Sciences; Medicine and Dentistry; Dentistry, Oral Surgery and Medicine