Article ID | Journal ID | Year | English Paper | Full Text |
---|---|---|---|---|
490399 | 707462 | 2013 | 9-page PDF | Free download |

Explaining the behavior of intelligent systems has been, and remains, an important issue in the Artificial Intelligence discipline, especially in complex systems such as Multi-Agent environments. During execution, an agent's reasoning is not directly reproducible for the user. The complex nature of such systems calls for methods and tools that make them intelligible. In this context, we propose to provide users with traceability and greater execution transparency, giving them the means to become familiar with such dynamic and complex systems and to understand how solutions are produced, how the resolution has proceeded, and how and when interactions have been performed. For this purpose, we develop an intelligent approach based on three modules: an observation module, a modeling module, and an interpretation module. The first generates the explanatory knowledge; the second represents this knowledge in an extended causal-map formalism; the third analyzes and interprets the resulting causal maps using first-order logic to produce reasoning explanations.
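The three-module pipeline above centers on a causal map linking agent events to their effects, which the interpretation step then traverses to produce explanations. A minimal Python sketch of that idea follows; the `CausalMap` class, its link labels, and the backward-tracing `explain` method are illustrative assumptions, not the paper's actual formalism or implementation:

```python
from dataclasses import dataclass, field

@dataclass
class CausalMap:
    """Hypothetical sketch: a directed graph of causal links between
    observed agent events, standing in for the paper's extended
    causal-map formalism."""
    # cause -> list of (effect, relation label)
    edges: dict = field(default_factory=dict)

    def add_link(self, cause: str, effect: str, label: str = "causes") -> None:
        """Record that `cause` leads to `effect` (observation/modeling step)."""
        self.edges.setdefault(cause, []).append((effect, label))

    def explain(self, effect: str) -> list:
        """Toy interpretation step: trace each chain of causes back
        from `effect` to its root causes and render it as a string."""
        chains = []

        def walk(node, path):
            parents = [c for c, outs in self.edges.items()
                       for (e, _lbl) in outs if e == node]
            if not parents:
                chains.append(list(reversed(path + [node])))
            for cause in parents:
                walk(cause, path + [node])

        walk(effect, [])
        return [" -> ".join(chain) for chain in chains]


# Example with made-up agent events:
cm = CausalMap()
cm.add_link("perceive_request", "adopt_goal")
cm.add_link("adopt_goal", "send_proposal")
print(cm.explain("send_proposal"))
# -> ['perceive_request -> adopt_goal -> send_proposal']
```

A real interpretation module would, per the abstract, apply first-order logic rules over the map rather than a plain graph walk; this sketch only shows how stored causal links can be turned into human-readable explanation chains.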
Journal: Procedia Computer Science - Volume 22, 2013, Pages 241-249