Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
381051 | Engineering Applications of Artificial Intelligence | 2013 | 13 Pages | |
Organisational abstractions have been presented in recent years as common solutions for regulating open multiagent systems. In particular, norms are defined at design time to ensure the correct behaviour of agents in such systems. However, in many cases the performance of a system depends not only on agents behaving correctly according to the imposed norms, but also on other efficiency measures. To tackle this issue, this paper puts forward a novel mechanism that attempts to persuade agents to act according to the system's preferences. The mechanism relies on incentive policies that aim to induce (not enforce) agents to perform the actions that are most appropriate from the system's point of view. In particular, two policies are presented. On the one hand, a policy that promotes the single action deemed most appropriate for the global utility of the system, by assigning a positive incentive to it. On the other hand, a policy that assigns incentives to all actions an agent can choose in a given state, with the aim of persuading the agent to choose a “good” action. In addition, incentives are adapted for each individual agent and contextualised by taking the state of the system into account. This adaptation is carried out through a learning process based on Q-learning. Finally, a peer-to-peer (P2P) file sharing scenario is used to validate the approach.
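The abstract gives no implementation details, but the learning process it mentions is standard Q-learning applied to incentive selection. A minimal sketch, with hypothetical class and method names (the paper's actual design may differ), could look like this: the system maintains per-(state, action) values and uses the tabular Q-learning update to learn which action to incentivise for a given agent in a given system state.

```python
import random
from collections import defaultdict


class IncentiveLearner:
    """Hypothetical sketch: learn, per agent, which action is worth
    incentivising in each system state, via tabular Q-learning."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> learned value
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration rate

    def action_to_incentivise(self, state, actions):
        # Epsilon-greedy: usually promote the action with the highest
        # learned value (e.g. by attaching a positive incentive to it).
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, next_actions):
        # Standard Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(
            (self.q[(next_state, a)] for a in next_actions), default=0.0
        )
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


# Toy usage in the spirit of the P2P file sharing scenario: the reward is
# the system utility observed after an agent shares rather than free-rides.
learner = IncentiveLearner(alpha=0.5, gamma=0.9, epsilon=0.0)
learner.update("s0", "share", 1.0, "s1", ["share", "free_ride"])
```

Under the first policy described in the abstract, only `action_to_incentivise` would receive a positive incentive; under the second, every action in the state would be assigned an incentive derived from its learned value.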