Article ID: 6855215
Journal: Expert Systems with Applications
Published Year: 2018
Pages: 12 Pages
File Type: PDF
Abstract
This paper proposes a new method for measuring the emotional state among interacting agents in a given environment. We present an adaptive emotional framework that takes into account agent emotion, interaction, and the learning process. To solve the problem, we employ a non-cooperative game-theoretic approach for representing the interaction between agents and a Reinforcement Learning (RL) process for introducing stimuli into the environment. We restrict the problem to a class of finite, homogeneous Markov games. The emotional problem is ergodic: each emotion is represented by a state in a Markov chain, each state having a probability of being reached. Each emotional strategy of the Markov model is represented as a probability distribution. To measure the emotional state among agents, we then employ the Kullback-Leibler distance between the resulting emotional strategies of the interacting agents. Because this is a distribution-wise asymmetric measure, the feelings of one agent toward another are relative (i.e., they may differ in each direction). We propose an algorithm for the RL process and a two-step approach for solving the game. We present an application example, related to selecting a candidate for a specific position using assessment centers, to show the effectiveness of the proposed method by a) measuring the emotional distance among the interacting agents and b) measuring the "emotional closeness degree" of the interacting agents to an ideal proposed candidate agent.
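The core measurement step described above, taking the Kullback-Leibler distance between two agents' emotional strategies (each a probability distribution over emotional states), can be sketched as follows. This is a minimal illustration: the four-state emotional space and the example distributions are hypothetical, not taken from the paper.

```python
from math import log

def kl_distance(p, q):
    """Kullback-Leibler distance D(p || q) between two discrete
    probability distributions over the same emotional states.
    Terms with p_i = 0 contribute nothing by convention."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical emotional strategies over four emotional states
# (e.g. joy, anger, fear, neutral) for two interacting agents.
agent_a = [0.50, 0.20, 0.20, 0.10]
agent_b = [0.25, 0.25, 0.25, 0.25]

d_ab = kl_distance(agent_a, agent_b)  # a's "feeling" relative to b
d_ba = kl_distance(agent_b, agent_a)  # b's "feeling" relative to a
```

Because the KL distance is asymmetric, `d_ab` and `d_ba` generally differ, which is exactly the property the paper exploits: the emotional distance from one agent to another need not equal the distance in the reverse direction.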
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence