• The Debriefing for Meaningful Learning Evaluation Scale was developed to assess how well a debriefer uses Debriefing for Meaningful Learning.
• Assessment of debriefing practice is critical to ensuring learning outcomes.
• Pilot testing of the Debriefing for Meaningful Learning Evaluation Scale demonstrates internal consistency and validity.
Background: Debriefing for Meaningful Learning (DML), an evidence-based debriefing method, promotes thinking like a nurse through reflective learning. Despite widespread adoption of DML, little is known about how well it is implemented. To assess the effectiveness of DML implementation, an evaluative rubric was developed and tested.

Sample: Three debriefers, each trained to use DML at least 1 year previously, submitted five recorded debriefings for evaluation.

Methods: Three raters who were experts in DML scored each of the 15 recorded debriefing sessions using the DML Evaluation Scale (DMLES). Observable behaviors were scored with binary options. These raters also assessed the DMLES items for content validity.

Results: Cronbach's alpha, intraclass correlation coefficients, and Content Validity Index scores were calculated to determine reliability and validity.

Conclusion: Use of the DMLES could support quality improvement, teacher preparation, and faculty development. Future testing is warranted to investigate the relationship between DML implementation and clinical reasoning.
Journal: Clinical Simulation in Nursing - Volume 12, Issue 7, July 2016, Pages 277–280