Article ID: 377748
Journal: Artificial Intelligence in Medicine
Published Year: 2012
Pages: 12
File Type: PDF
Abstract

Objective
To use the detection of clinically relevant inconsistencies to support the reasoning capabilities of intelligent agents acting as physicians and tutors in the realm of clinical medicine.

Methods
We are developing a cognitive architecture, OntoAgent, that supports the creation and deployment of intelligent agents capable of simulating human-like abilities. The agents, which have a simulated mind and, if applicable, a simulated body, are intended to operate as members of multi-agent teams featuring both artificial and human agents. The agent architecture and its underlying knowledge resources and processors are being developed in a sufficiently generic way to support a variety of applications.

Results
We show how several types of inconsistency can be detected and leveraged by intelligent agents in the setting of clinical medicine. The types of inconsistency discussed include: test results that do not support the doctor's hypothesis; treatment-trial results that do not support a clinical diagnosis; and information reported by the patient that is inconsistent with observations. We show the opportunities afforded by detecting each inconsistency, such as rethinking a hypothesis, reevaluating evidence, and motivating or teaching a patient.

Conclusions
Inconsistency is not merely the failure to achieve the goal of consistency; rather, it can be a valuable trigger for further exploration in the realm of clinical medicine. The OntoAgent cognitive architecture, along with its extensive suite of knowledge resources and processors, is sufficient to support sophisticated agent functioning such as detecting clinically relevant inconsistencies and using them to benefit patient-centered medical training and practice.
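As a rough illustration of the three kinds of inconsistency the abstract enumerates and the exploratory responses they trigger, the following Python sketch models the detection logic at a toy level. All class, field, and rule names here are hypothetical and are not drawn from the OntoAgent implementation described in the paper.

```python
# Illustrative sketch only: models the three inconsistency types named in the
# abstract and the exploratory response each one triggers. Names and logic are
# hypothetical, not taken from OntoAgent.
from dataclasses import dataclass
from enum import Enum, auto


class InconsistencyType(Enum):
    TEST_VS_HYPOTHESIS = auto()     # test results do not support the doctor's hypothesis
    TRIAL_VS_DIAGNOSIS = auto()     # treatment-trial results do not support the diagnosis
    REPORT_VS_OBSERVATION = auto()  # patient's report conflicts with observations


@dataclass
class Finding:
    source: str                 # e.g. "lab_test", "treatment_trial", "patient_report", "observation"
    supports_hypothesis: bool   # whether this finding is consistent with the current hypothesis


def detect_inconsistencies(findings: list[Finding]) -> list[InconsistencyType]:
    """Flag clinically relevant inconsistencies among the findings (toy logic)."""
    by_source = {f.source: f for f in findings}
    flags = []
    if "lab_test" in by_source and not by_source["lab_test"].supports_hypothesis:
        flags.append(InconsistencyType.TEST_VS_HYPOTHESIS)
    if "treatment_trial" in by_source and not by_source["treatment_trial"].supports_hypothesis:
        flags.append(InconsistencyType.TRIAL_VS_DIAGNOSIS)
    if ("patient_report" in by_source and "observation" in by_source
            and by_source["patient_report"].supports_hypothesis
            != by_source["observation"].supports_hypothesis):
        flags.append(InconsistencyType.REPORT_VS_OBSERVATION)
    return flags


# Each detected inconsistency is treated as a trigger for further exploration,
# mirroring the opportunities listed in the Results section.
RESPONSES = {
    InconsistencyType.TEST_VS_HYPOTHESIS: "rethink the diagnostic hypothesis",
    InconsistencyType.TRIAL_VS_DIAGNOSIS: "reevaluate the evidence for the diagnosis",
    InconsistencyType.REPORT_VS_OBSERVATION: "motivate or teach the patient",
}

if __name__ == "__main__":
    findings = [
        Finding("lab_test", supports_hypothesis=False),
        Finding("patient_report", supports_hypothesis=True),
        Finding("observation", supports_hypothesis=False),
    ]
    for inc in detect_inconsistencies(findings):
        print(inc.name, "->", RESPONSES[inc])
```

The point of the sketch is only that each inconsistency type maps to a constructive follow-up action rather than being treated as a failure state.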

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence