Practical significance is an important concept that moves beyond statistical significance and p values. While effect sizes are not synonymous with practical significance, they provide a basis for evidence of substantive significance. Investigators should calculate and report effect sizes whenever possible. To build evidence for practical significance in pharmacy education, three methods are discussed. First, effect sizes can be compared to general interpretation guidelines for practical significance. Second, investigators can benchmark by comparing their effect sizes to external information from other studies; however, this information is not always available. Where prior data are limited, a third method after determining the effect size is for investigators to calculate an instrument's minimally important difference in their own cohort; the effect size can then be compared to this minimally important difference rather than to a general interpretation guideline. A method to calculate the minimally important difference is described, along with applications. Regardless of the method chosen, effect sizes must be determined and should be reported in articles; the comparator used as evidence for practical significance may vary, so interpretation is key. Reporting effect sizes also enables benchmarking by others in the future and facilitates summaries through meta-analysis. It is clear that reporting evidence of practical significance with effect sizes is needed; simply reporting statistical significance is not enough. After reading this article, readers should be able to explain practical significance, recognize evidence of practical significance in other reports, and carry out their own analysis of practical significance using one or more of the methods described herein.
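The three comparison methods the abstract outlines can be illustrated with a short sketch. This is not the article's own procedure: it assumes Cohen's d as the effect size, Cohen's conventional cutoffs (0.2/0.5/0.8) as the general interpretation guideline, and a common distribution-based convention (half the baseline standard deviation) as one way to estimate a minimally important difference. The pre/post scores are invented for illustration.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

def interpret(d):
    """Method 1: Cohen's general guideline (0.2 small, 0.5 medium, 0.8 large)."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

def mid_half_sd(baseline):
    """Method 3 (one convention, not necessarily the article's): a
    distribution-based minimally important difference of 0.5 * baseline SD."""
    n = len(baseline)
    m = sum(baseline) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in baseline) / (n - 1))
    return 0.5 * sd

# Hypothetical pre/post instrument scores for one cohort.
pre = [62, 70, 68, 75, 66, 71]
post = [78, 85, 80, 88, 77, 84]

d = cohens_d(post, pre)
mid = mid_half_sd(pre)
mean_change = sum(post) / len(post) - sum(pre) / len(pre)

print(f"effect size d = {d:.2f} ({interpret(d)})")
print(f"MID (0.5 SD)  = {mid:.2f}; observed change = {mean_change:.2f}")
```

Method 2 (benchmarking) would simply compare `d` against effect sizes reported in comparable published studies, which is why routine reporting of effect sizes matters.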
Journal: Currents in Pharmacy Teaching and Learning - Volume 8, Issue 1, January–February 2016, Pages 83–89