| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 723289 | IFAC Proceedings Volumes | 2007 | 7 Pages | |
As technical processes become increasingly complex and the need to support human operators in supervision or control tasks becomes ever more crucial, the nature of the interaction between human operators and Decision Support Systems (DSS) tends towards Human-Machine cooperation, in which the DSS works in partnership with the human operators.

In this paper, we first recall a definition of cooperation from the field of psychology: two agents are cooperating if 1) each one strives towards goals and can interfere with the other (e.g., in terms of goals, resources, or procedures), and 2) each agent tries to detect and process such interferences in order to make the other's activities easier. This definition, originally intended to describe human-human cooperation, can be extended to Human-Machine cooperation if adjustments are made to compensate for the limitations of the machine's capabilities.

We then present the four basic elements needed to design a cooperative Decision Support System:

- sufficient know-how for solving problems autonomously,
- a know-how-to-cooperate ability,
- an adequate organizational structure that integrates human and machine, and
- a need-to-cooperate.

Key studies in engineering have focused on know-how; others have concentrated on know-how-to-cooperate; still others, on the organizational structure of the Human-Machine partnership and on the conditions that motivate human and artificial agents to cooperate to accomplish a task. Using a multi-disciplinary approach that combines research in cognitive psychology and human engineering, we bring these four basic elements together in a methodology for designing Human-Machine cooperative systems. This methodology is the result of more than twelve years of experiments at LAMIH in several fields of application, mainly Air Traffic Control and Telecommunication networks.
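To make the two-part definition of cooperation more concrete, here is a minimal sketch in Python under a toy agent model: each agent pursues its own goals (its know-how) and detects and processes interferences with its partner (its know-how-to-cooperate). The class names, goal strings, and the simple "yield the contested goal" resolution rule are illustrative assumptions, not constructs taken from the paper.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Interference:
    """A conflict between two agents' activities, e.g. over goals,
    resources, or procedures (hypothetical categories for this sketch)."""
    kind: str         # "goal", "resource", or "procedure"
    description: str


class CooperativeAgent:
    """Toy agent in the sense of the definition above: it pursues its own
    goals (know-how) and tries to detect and process interferences with
    its partner to ease the partner's activity (know-how-to-cooperate)."""

    def __init__(self, name: str, goals: List[str]):
        self.name = name
        self.goals = goals

    # Know-how: autonomous problem solving (placeholder behaviour).
    def solve(self, task: str) -> str:
        return f"{self.name} handles '{task}' on its own"

    # Know-how-to-cooperate, part 1: detect interferences with the partner
    # (here, simply goals that both agents are pursuing at the same time).
    def detect_interferences(self, partner: "CooperativeAgent") -> List[Interference]:
        shared = set(self.goals) & set(partner.goals)
        return [Interference("goal", f"both agents pursue '{g}'") for g in shared]

    # Know-how-to-cooperate, part 2: process an interference so that the
    # partner's activity becomes easier (here: yield the contested goal).
    def process_interference(self, interference: Interference,
                             partner: "CooperativeAgent") -> str:
        return (f"{self.name} resolves the {interference.kind} interference "
                f"({interference.description}) in favour of {partner.name}")


if __name__ == "__main__":
    operator = CooperativeAgent("human operator",
                                ["restore traffic flow", "keep safety margins"])
    dss = CooperativeAgent("DSS", ["restore traffic flow"])

    print(operator.solve("routine re-routing"))
    for itf in dss.detect_interferences(operator):
        print(dss.process_interference(itf, operator))
```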