Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
4974681 | Journal of the Franklin Institute | 2015 | 29 | |
Abstract
Learning via iterative or repeated implementation is an intelligent approach that takes full advantage of experience data from previous iterations or repetitions when computing control signals, thereby improving the current system performance. In this paper, we incorporate the idea of iterative learning to deal with bipartite coordination problems for multiple mobile agents in networked environments described by signed directed graphs. We target high-precision bipartite coordination tasks for networked mobile agents subject to a time-varying reference whose information is available only to a portion of the agents. To achieve this objective, we construct iterative learning algorithms for the agents using the nearest neighbor rule and address the associated asymptotic stability and monotonic convergence issues. We establish convergence conditions and guarantee their feasibility. In particular, we develop a class of linear matrix inequality conditions and provide formulas for the design of the gain matrices. Simulations illustrate the effectiveness of the proposed algorithms in enabling mobile agents to achieve high-precision bipartite coordination over networks associated with signed directed graphs.
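To make the idea concrete, the sketch below simulates a simple distributed iterative learning update for bipartite tracking over a signed graph. It is an illustration only, not the algorithm developed in the paper: the four single-integrator agents, the signed adjacency matrix `A`, the pinning vector `d`, the learning gain `gamma`, and the D-type-style update law are all assumptions chosen for the example. Only agent 0 receives the time-varying reference, and the two structurally balanced groups converge to `r(t)` and `-r(t)`, respectively.

```python
import numpy as np

# Illustrative sketch only (not the authors' exact algorithm): a distributed
# iterative learning update for bipartite tracking over a signed graph, with
# single-integrator agents and a time-varying reference seen only by agent 0.
# The graph, gain, and agent dynamics below are assumptions for illustration.

T, K, gamma = 50, 200, 0.4           # trial length, learning iterations, gain
t = np.arange(T + 1)
r = np.sin(0.2 * t)                  # time-varying reference, r(0) = 0

# Signed adjacency matrix: positive weights are cooperative links, negative
# weights antagonistic ones; groups {0, 1} and {2, 3} are structurally balanced.
A = np.array([[ 0.,  1., -1.,  0.],
              [ 1.,  0.,  0., -1.],
              [-1.,  0.,  0.,  1.],
              [ 0., -1.,  1.,  0.]])
d = np.array([1., 0., 0., 0.])       # only agent 0 measures the reference
n = A.shape[0]

u = np.zeros((n, T))                 # input profile refined across iterations
for k in range(K):
    # Run one trial from the same initial condition.
    x = np.zeros((n, T + 1))
    for s in range(T):
        x[:, s + 1] = x[:, s] + u[:, s]          # single-integrator dynamics

    # Signed nearest-neighbor (bipartite) error:
    # xi_i(s) = sum_j |a_ij| (sgn(a_ij) x_j(s) - x_i(s)) + d_i (r(s) - x_i(s))
    xi = np.zeros((n, T + 1))
    for s in range(T + 1):
        for i in range(n):
            xi[i, s] = sum(abs(A[i, j]) * (np.sign(A[i, j]) * x[j, s] - x[i, s])
                           for j in range(n)) + d[i] * (r[s] - x[i, s])

    # Iterative learning update: reuse last trial's input plus a correction
    # driven by the change of the distributed error (a D-type-style law).
    u = u + gamma * (xi[:, 1:] - xi[:, :-1])

# Agents 0, 1 converge to r(t); agents 2, 3 converge to -r(t).
print("final states:", np.round(x[:, -1], 4), " r(T) =", round(r[-1], 4))
```

Under these assumptions, the graph is structurally balanced and one agent is pinned to the reference, so the signed nearest-neighbor error is proportional to the bipartite tracking error and the learning update contracts it from one iteration to the next; the learning gain must be small enough relative to the pinned signed Laplacian for this contraction to hold.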
Related Topics
Physical Sciences and Engineering
Computer Science
Signal Processing
Authors
Deyuan Meng, Yingmin Jia, Junping Du