Article code: 469159
Journal code: 698293
Publication year: 2016
English article: 9 pages (PDF)
Full-text version: Free download
English title (ISI article)
Evaluating topic model interpretability from a primary care physician perspective
Persian translation of the title
ارزیابی تفسیرپذیری مدل موضوع از دیدگاه پزشکان مراقبت های اولیه
Keywords
Topic modeling; Primary care; Clinical reports
Related subjects
Engineering and Basic Sciences; Computer Engineering; Computer Science (General)
English abstract


• A topic model with three different parameter settings is fit to a large collection of clinical reports.
• The interpretability of discovered topics is evaluated by clinicians and laypersons.
• Clinicians are significantly more capable of interpreting topics than laypersons.
• Topics hold potential for applications in automatic summarization.

Background and objective
Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated in a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician's point of view.

Methods
Three latent Dirichlet allocation models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon signed-rank tests for paired samples were used to evaluate differences between different topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann–Whitney U tests for each of the tasks.

Results
While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better relative granularity of discovered semantic themes for the data set used in this study. Models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks.

Conclusion
This work establishes a baseline of interpretability for topic models trained with clinical reports, and provides insights on the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports relative to students, warranting further research into their use for automatic summarization.
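
The methods above describe fitting three LDA models (50, 100, and 150 topics) to clinical reports, comparing them by log likelihood, and testing rater differences with Wilcoxon signed-rank and Mann–Whitney U tests. The paper does not include code, and its clinical corpus is not available here; the Python sketch below only illustrates that general workflow using gensim and scipy, with a toy corpus, reduced topic counts, and invented rater scores standing in for the study's data.

# Illustrative sketch only: gensim/scipy are assumed tooling, not the
# authors' implementation; corpus, topic counts, and scores are made up.
from gensim import corpora, models
from scipy.stats import wilcoxon, mannwhitneyu

# Toy stand-in for the tokenized clinical reports.
docs = [
    ["chest", "pain", "ecg", "troponin"],
    ["diabetes", "insulin", "glucose", "a1c"],
    ["cough", "fever", "xray", "pneumonia"],
    ["hypertension", "lisinopril", "blood", "pressure"],
] * 25

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Fit one LDA model per topic count and compare per-word likelihood bounds
# (the study used 50, 100, and 150 topics on a large clinical corpus).
for k in (2, 3, 4):
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=k,
                          passes=10, random_state=0)
    print(k, "topics, per-word bound:", lda.log_perplexity(corpus))
    print("  sample topic:", lda.show_topic(0, topn=4))

# Hypothetical paired interpretability scores from the same raters on two
# models (e.g., word-intrusion accuracy): Wilcoxon signed-rank test, as in
# the abstract's model-to-model comparison.
scores_100 = [0.90, 0.80, 0.85, 0.70, 0.95, 0.80]
scores_150 = [0.70, 0.75, 0.80, 0.65, 0.85, 0.70]
print(wilcoxon(scores_100, scores_150))

# Hypothetical independent scores for physicians vs. students on one task:
# Mann-Whitney U test, as in the abstract's group comparison.
pcp_scores = [0.90, 0.85, 0.95, 0.80]
student_scores = [0.70, 0.65, 0.75, 0.60]
print(mannwhitneyu(pcp_scores, student_scores))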

Publisher
Database: Elsevier - ScienceDirect
Journal: Computer Methods and Programs in Biomedicine - Volume 124, February 2016, Pages 67–75