Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
348163 | Computers & Education | 2016 | 15 | |
- This mixed-methods study explored automatic methods for building expert models.
- The automatic expert modeling used problem explanations written by experts.
- Relevant metrics obtained from different methods were compared.
- Graph-based metrics performed better in constructing expert models.
- The proposed methods can be applied to various adaptive learning technologies.
This mixed-methods study explores automatic methods for expert model construction using multiple textual explanations of a problem situation. In particular, this study focuses on the key concepts of an expert model. While an expert understanding of a complex problem situation provides critical reference points for evidence-based formative assessment and feedback, extracting those reference points has proven challenging. Building upon semantic analysis, this study uses deep natural language processing techniques to automatically extract key concepts from textual explanations written by experts. The study addresses the following questions: (a) whether experts in a domain share a common understanding of a problem situation through shared key concepts, (b) which metrics extract key concepts from textual data most accurately, and (c) whether automatic methods enable expert model construction from a corpus of textual explanations rather than from a single pre-defined, ideal explanation created using the Delphi method. The OntoCmap tool was used to extract concepts from multiple textual explanations and to compute a set of metrics for each concept. The findings indicate that (a) experts understand a problem situation in varying ways, (b) graph-based filtering metrics (i.e., betweenness and reachability) performed better in building a set of key concepts, and (c) a single, pre-defined explanation led to a more accurate set of key concepts than a corpus of explanations from various experts.
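To make the graph-based filtering step concrete, the sketch below ranks concepts in a small concept graph by the two metrics the abstract names, betweenness and reachability. It is not the study's implementation (the study used the OntoCmap tool); the use of the networkx library, the toy relations, and the concept names are illustrative assumptions only.

```python
# Minimal sketch, assuming a concept graph extracted from expert explanations.
# Nodes are concepts, edges are semantic relations between them; both the
# relations below and the use of networkx are hypothetical, for illustration.
import networkx as nx

relations = [
    ("erosion", "sediment"), ("sediment", "river delta"),
    ("rainfall", "erosion"), ("erosion", "soil loss"),
    ("soil loss", "crop yield"), ("river delta", "habitat"),
]
graph = nx.DiGraph(relations)

# Betweenness: how often a concept lies on shortest paths between other concepts.
betweenness = nx.betweenness_centrality(graph)

# Reachability: how many other concepts can be reached from each concept.
reachability = {n: len(nx.descendants(graph, n)) for n in graph.nodes}

# Concepts ranked highest by either metric would be candidate key concepts
# for the expert model.
for name, scores in (("betweenness", betweenness), ("reachability", reachability)):
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(name, ranked[:3])
```

In this kind of setup, concepts scoring highly on such graph metrics are retained as the key concepts of the expert model, while low-scoring, peripheral concepts are filtered out.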