
Weak synchronization in heterogeneous multi-agent systems

Anton A. Stoorvogel, Ali Saberi, and Zhenwei Liu

Anton A. Stoorvogel is with the Department of Electrical Engineering, Mathematics and Computer Science, University of Twente, P.O. Box 217, Enschede, The Netherlands (e-mail: A.A.Stoorvogel@utwente.nl). Ali Saberi is with the School of Electrical Engineering and Computer Science, Washington State University, Pullman, WA 99164, USA (e-mail: saberi@wsu.edu). Zhenwei Liu is with the College of Information Science and Engineering, Northeastern University, Shenyang 110819, China (e-mail: liuzhenwei@ise.neu.edu.cn).
Abstract

In this paper, we propose a new framework for synchronization of heterogeneous multi-agent systems which we refer to as weak synchronization. This framework is based on achieving network stability in the absence of any information about the communication network, including its connectivity. By network stability we mean that, in the basic setup of a multi-agent system, the signals exchanged over the network converge to zero. We design protocols which achieve weak synchronization for any network without making any assumptions on the communication network. If the network happens to have a directed spanning tree, then we obtain classical synchronization. If this is not the case, then we describe in detail what kind of synchronization properties are preserved and how the outputs of the different agents can behave.

1 Introduction

Multi-agent systems have been extensively studied over the past 20 years. Initiated by early work such as [7, 5], although its roots can be found in much earlier work [15], it has become an active research area. The realization that control systems often consist of many components with limited or restricted communication between them was, however, already present in the area of decentralized control, see e.g. [9, 1]. Applications include, for instance, systems with many generators connected through a grid or traffic applications such as platoons of cars. The weakness of early decentralized control is that it often created a specific agent with a kind of supervisory role, while the other agents ensure communication to and from this supervisory agent. This approach turned out to be highly sensitive to failures in the network. Multi-agent systems created a different type of structure in these networks, where all agents basically play a similar role in achieving synchronization in the network. However, early work still heavily relied on knowledge of the network.

Later it was established that the protocols designed for a multi-agent system would work for any network structure satisfying some underlying assumptions, such as lower or upper bounds on the spectrum of the Laplacian matrix associated with the graph describing the network structure. This suggested some form of robustness against changes in the network. However, this idea still has two major flaws:

  • Firstly, if the network is unknown, then we can never check whether these assumptions are actually satisfied and hence we do not know whether we will achieve synchronization.

  • Secondly, changes in the network can have a significant effect on the bounds that have been used. It is easily seen that a few failing links might yield a network that loses its connectivity properties. Moreover, [12] showed that these lower bounds on the eigenvalues of the Laplacian almost always converge to zero when the network gets large, which makes these assumptions impossible to guarantee.

In recent years scale-free protocols have been studied, see for instance [3] and references therein. These protocols get rid of all assumptions on the network such as bounds on the eigenvalues of the Laplacian. However, they still require that the network is strongly connected or has a directed spanning tree. This still inherently suffers from some of the difficulties mentioned before. How can we check whether this connectivity is present in the network? And what happens in case of a fault in the network that makes the network fail this assumption?

In the basic setup of a multi-agent system, the signals exchanged over the network converge to zero whenever the network synchronizes. So the fact that the network communication dies out over time is a weaker condition than output synchronization. We will refer to this weaker condition as weak synchronization in this paper. We will consider heterogeneous agents in this paper but the concept equally applies to homogeneous networks.

It turns out that if we have a linear scale-free protocol then synchronization implies weak synchronization. But, more importantly, if the network has a directed spanning tree then the converse implication is true: weak synchronization implies classical synchronization.

We can therefore design protocols which achieve weak synchronization for any network without making any kind of assumptions. If the network happens to have a directed spanning tree then we obtain classical synchronization. However, if this is not the case then we describe in detail in this paper what kind of synchronization properties are preserved in the system. For applications this kind of weak synchronization is exactly what one would hope for. If the cars in a platoon lose connectivity between two subgroups because their distance has become too large, the protocols will still achieve synchronization within each of these groups. If in a power system the connectivity between two subgroups is lost, each of these groups will internally achieve synchronization but, obviously, no global synchronization will be achieved.

2 Communication network and graph

To describe the information flow among the agents we associate a weighted graph \mathcal{G} to the communication network. The weighted graph \mathcal{G} is defined by a triple (\mathcal{V},\mathcal{E},\mathcal{A}) where \mathcal{V}=\{1,\ldots,N\} is a node set, \mathcal{E} is a set of pairs of nodes indicating connections among nodes, and \mathcal{A}=[a_{ij}]\in\mathbb{R}^{N\times N} is the weighted adjacency matrix with nonnegative elements a_{ij}. Each pair in \mathcal{E} is called an edge, where a_{ij}>0 denotes an edge (j,i)\in\mathcal{E} from node j to node i with weight a_{ij}. Moreover, a_{ij}=0 if there is no edge from node j to node i. We assume there are no self-loops, i.e. we have a_{ii}=0. A path from node i_{1} to i_{k} is a sequence of nodes \{i_{1},\ldots,i_{k}\} such that (i_{j},i_{j+1})\in\mathcal{E} for j=1,\ldots,k-1. A directed tree is a subgraph (subset of nodes and edges) in which every node has exactly one parent node except for one node, called the root, which has no parent node. A directed spanning tree is a subgraph which is a directed tree containing all the nodes of the original graph. If a directed spanning tree exists, the root of this spanning tree has a directed path to every other node in the network [2].

For a weighted graph \mathcal{G}, the matrix L=[\ell_{ij}] with

\ell_{ij}=\left\{\;\begin{array}{cl}\sum_{k=1}^{N}a_{ik},&i=j,\\ -a_{ij},&i\neq j,\end{array}\right.

is called the Laplacian matrix associated with the graph \mathcal{G}. The Laplacian matrix L has all its eigenvalues in the closed right half plane and at least one eigenvalue at zero associated with the right eigenvector \textbf{1} [2]. The zero eigenvalue of the Laplacian matrix is always semi-simple, i.e. its algebraic and geometric multiplicities coincide. Moreover, if the graph contains a directed spanning tree, the Laplacian matrix L has a single eigenvalue at the origin and all other eigenvalues are located in the open right-half complex plane [8].
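To make the construction above concrete, here is a small NumPy sketch (our own illustration, not taken from the paper) that builds the Laplacian from a weighted adjacency matrix following the definition of \ell_{ij}; the 5-node example graph is hypothetical.

```python
# Minimal sketch: build the Laplacian L = [l_ij] from a weighted adjacency
# matrix A = [a_ij], where A[i, j] = a_ij is the weight of the edge j -> i.
import numpy as np

def laplacian(A):
    A = np.asarray(A, dtype=float)
    np.fill_diagonal(A, 0.0)            # no self-loops: a_ii = 0
    return np.diag(A.sum(axis=1)) - A   # l_ii = sum_k a_ik, l_ij = -a_ij (i != j)

# Hypothetical 5-node example: 1 <-> 2, 3 <-> 4, node 5 listens to 2 and 4.
A = np.zeros((5, 5))
A[0, 1] = A[1, 0] = 1.0     # strongly connected pair {1, 2}
A[2, 3] = A[3, 2] = 1.0     # strongly connected pair {3, 4}
A[4, 1] = A[4, 3] = 1.0     # node 5 only receives information
L = laplacian(A)
print(np.round(np.linalg.eigvals(L), 6))   # two zero eigenvalues, rest in the RHP
```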

A directed communication network is said to be strongly connected if it contains a directed path from every node to every other node in the graph. For a given graph \mathcal{G} every maximal (by inclusion) strongly connected subgraph is called a bicomponent of the graph. A bicomponent without any incoming edges is called a basic bicomponent. Every graph has at least one basic bicomponent. A network has exactly one basic bicomponent if and only if the network contains a directed spanning tree. In general, every node in a network can be reached from at least one basic bicomponent, see [10, page 7]. In Fig. 1 a directed communication network with its bicomponents is shown. The network in this figure contains 6 bicomponents: 3 basic bicomponents (the blue ones) and 3 non-basic bicomponents (the yellow ones). In Fig. 2 a directed communication network with its bicomponents is shown. The network in this figure contains 4 bicomponents but only one basic bicomponent (the blue one).
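The bicomponent structure can be computed directly from the graph. The sketch below (our own, assuming the external networkx package) identifies the bicomponents as strongly connected components and marks as basic those without incoming edges in the condensation; the graph is the same hypothetical 5-node example used above.

```python
# Hypothetical sketch: find bicomponents (strongly connected components) and
# basic bicomponents (no incoming edges) using networkx's condensation.
import networkx as nx

G = nx.DiGraph()
# An edge (j, i) means that node i receives information from node j.
G.add_edges_from([(1, 2), (2, 1), (3, 4), (4, 3), (2, 5), (4, 5)])

C = nx.condensation(G)                   # DAG whose nodes are the bicomponents
for c in C.nodes:
    members = sorted(C.nodes[c]["members"])
    kind = "basic" if C.in_degree(c) == 0 else "non-basic"
    print(members, kind)

# A directed spanning tree exists iff there is exactly one basic bicomponent.
print("spanning tree:", sum(1 for c in C.nodes if C.in_degree(c) == 0) == 1)
```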

Refer to caption
Figure 1: A directed communication network and its bicomponents.
Refer to caption
Figure 2: A directed communication network with a spanning tree and its bicomponents.

In the absence of a directed spanning tree, the Laplacian matrix of the graph has an eigenvalue at the origin with a multiplicity k larger than 1. This implies that L is a k-reducible matrix and the graph has k basic bicomponents. The book [14, Definition 2.19] shows that, after a suitable permutation of the nodes, a Laplacian matrix with k basic bicomponents can be written in the following form:

L=\begin{pmatrix}L_{0}&L_{01}&\cdots&\cdots&L_{0k}\\ 0&L_{1}&0&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ \vdots&&\ddots&L_{k-1}&0\\ 0&\cdots&\cdots&0&L_{k}\end{pmatrix} (1)

where L_{1},\ldots,L_{k} are the Laplacian matrices associated with the k basic bicomponents in our network. These matrices have a simple eigenvalue at 0 because they are associated with a strongly connected component. On the other hand, L_{0} contains all non-basic bicomponents and is a grounded Laplacian with all eigenvalues in the open right-half plane. After all, if L_{0} were singular then the network would have an additional basic bicomponent.

3 Weak synchronization of MAS

In this section, we introduce the concept of weak synchronization for heterogeneous MAS. Consider N heterogeneous agents

\begin{array}{cl}x_{i}^{+}&=A_{i}x_{i}+B_{i}u_{i},\\ y_{i}&=C_{i}x_{i},\end{array} (2)

where x_{i}\in\mathbb{R}^{n_{i}}, u_{i}\in\mathbb{R}^{m_{i}} and y_{i}\in\mathbb{R}^{p} are the state, input and output of agent i for i=1,\ldots,N. In this unified representation, for continuous-time systems, x_{i}^{+}(t)=\dot{x}_{i}(t) with t\in\mathbb{R}, while for discrete-time systems, x_{i}^{+}(t)=x_{i}(t+1) with t\in\mathbb{Z}.

The communication network provides agent i with the following information, which is a linear combination of its own output relative to that of the other agents:

\zeta_{i}=\sum_{j=1}^{N}a_{ij}(y_{i}-y_{j}), (3)

where a_{ij}\geqslant 0 and a_{ii}=0. The communication topology of the network can be described by a weighted and directed graph \mathcal{G} with nodes corresponding to the agents in the network and the weight of the edges given by the coefficients a_{ij}. In terms of the coefficients of the associated Laplacian matrix L, \zeta_{i} can be rewritten as

\zeta_{i}=\sum_{j=1}^{N}\ell_{ij}y_{j}. (4)
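In stacked form, (4) reads \zeta=(L\otimes I_{p})y with y the stacked outputs. A minimal sketch of this computation (our own illustration) is:

```python
# Sketch: compute zeta_i = sum_j l_ij y_j for all agents at once as (L kron I_p) y.
import numpy as np

def network_signal(L, y_list):
    p = y_list[0].shape[0]                    # common output dimension
    y = np.concatenate(y_list)                # stacked output vector
    zeta = np.kron(L, np.eye(p)) @ y          # (L kron I_p) y
    return np.split(zeta, len(y_list))

# Tiny example: two agents with scalar outputs exchanging information.
L = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(network_signal(L, [np.array([3.0]), np.array([5.0])]))
# [array([-2.]), array([2.])] -- zero exactly when y_1 = y_2
```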

We denote by \mathbb{G}^{N} the set of all graphs with N nodes. We also introduce a possible additional localized information exchange among agents and their neighbors, i.e. each agent i\in\{1,\ldots,N\} has access to the localized information, denoted by \hat{\zeta}_{i}, of the form

\hat{\zeta}_{i}=\sum_{j=1}^{N}a_{ij}(\eta_{i}-\eta_{j}), (5)

where \eta_{i} is a variable produced internally by the protocol of agent i and to be defined later. Finally, we might have introspective agents, which implies that

y_{i,m}=C_{i,m}x_{i} (6)

is available to the protocol.

Our protocols are of the form:

\begin{array}{ccl}\xi_{i}^{+}&=&K_{i}\xi_{i}+L_{i}\zeta_{i}+L_{i,e}\hat{\zeta}_{i}+L_{i,m}y_{i,m}\\ u_{i}&=&M_{i}\xi_{i}\\ \eta_{i}&=&N_{i}\xi_{i}\end{array} (7)

For an agent i which is introspective we might have L_{i,m}\neq 0, while for non-introspective agents we have L_{i,m}=0. Similarly, for an agent i where extra communication of the form (5) is available we might have L_{i,e}\neq 0, while for agents without extra communication we have L_{i,e}=0.

In the following, we introduce the concepts of network stability and weak synchronization, which are vastly different from output synchronization.

Definition 1 (Network stability)

Consider a multi-agent network described by (2), (3), (5), (6), and (7). If this network satisfies

\zeta_{i}(t)=\sum_{j=1}^{N}a_{ij}(y_{i}-y_{j})\to 0

as t\to\infty, for any i\in\{1,\ldots,N\} and for all possible initial conditions, then the network is called stable.

Definition 2

Consider an MAS described by (2), (3), (5), (6), and protocols (7). We have:

  • The multi-agent network achieves output synchronization if the outputs of the respective agents satisfy

    y_{i}(t)-y_{j}(t)\rightarrow 0 (8)

    as t\rightarrow\infty for any i,j\in\{1,\ldots,N\} and for all possible initial conditions.

  • The multi-agent network achieves weak synchronization if the network is stable, i.e.,

    \zeta_{i}(t)\to 0

    as t\to\infty, for any i\in\{1,\ldots,N\} and for all possible initial conditions.

Next, we present two lemmas to explain the difference between these two kinds of synchronization.

Lemma 1

Consider an MAS described by (2), (3), (5), (6), and protocols (7). In that case output synchronization implies weak synchronization.

Proof: If y_{i}(t)-y_{j}(t)\rightarrow 0 as t\rightarrow\infty then we have:

\sum_{j=1}^{N}a_{ij}(y_{i}(t)-y_{j}(t))=\zeta_{i}(t)\rightarrow 0

as t\rightarrow\infty for all i=1,\ldots,N, which immediately implies weak synchronization.

Lemma 2

Consider an MAS described by (2), (3), (5), and (6). Assume the protocols (7) achieve weak synchronization. In that case:

  • If the network contains a directed spanning tree then we always achieve output synchronization.

  • If the network does not contain a directed spanning tree then we only achieve output synchronization for all initial conditions in the trivial case where y_{i}(t)\rightarrow 0 for all i=1,\ldots,N.

Proof: If the graph has a directed spanning tree then the associated Laplacian matrix has the property that \operatorname{ker}L=\operatorname{span}\{\textbf{1}_{N}\}. This implies there exists a matrix V\in\mathbb{R}^{(N-1)\times N} such that

VL=\begin{pmatrix}I&-\textbf{1}_{N-1}\end{pmatrix}

since \operatorname{ker}L=\operatorname{ker}\begin{pmatrix}I&-\textbf{1}_{N-1}\end{pmatrix}.

If we achieve weak synchronization we have that

\zeta(t)=(L\otimes I)y(t)\rightarrow 0

as t\rightarrow\infty where:

\zeta=\begin{pmatrix}\zeta_{1}\\ \vdots\\ \zeta_{N}\end{pmatrix},\qquad y=\begin{pmatrix}y_{1}\\ \vdots\\ y_{N}\end{pmatrix}.

This implies

(V\otimes I)\zeta(t)=\begin{pmatrix}y_{1}(t)-y_{N}(t)\\ \vdots\\ y_{N-1}(t)-y_{N}(t)\end{pmatrix}\rightarrow 0

as t\rightarrow\infty, which clearly implies that output synchronization is achieved.

Next we consider the case where the network does not contain a directed spanning tree. In that case, we have at least two basic bicomponents. By construction, the behavior within a basic bicomponent is not influenced by the behavior of the rest of the network. Moreover, within this basic bicomponent we have a strongly connected graph. Hence within a basic bicomponent weak synchronization implies output synchronization. Assume that within a basic bicomponent consisting of nodes k_{1},\ldots,k_{i} we have, for certain initial conditions, output synchronization such that

y_{k_{a}}(t)-y_{\text{ss}}(t)\rightarrow 0,\qquad y_{\text{ss}}(t)\nrightarrow 0

as t\rightarrow\infty for all a\in\{1,\ldots,i\}. Take a node j within another basic bicomponent. If we have output synchronization then this would imply:

y_{k_{a}}(t)-y_{j}(t)\rightarrow 0

as t\rightarrow\infty, which is equivalent to

y_{j}(t)-y_{\text{ss}}(t)\rightarrow 0 (9)

as t\rightarrow\infty. But if we multiply all initial conditions in the first basic bicomponent by a factor 2 we get:

y_{k_{a}}(t)-2y_{\text{ss}}(t)\rightarrow 0 (10)

as t\rightarrow\infty. On the other hand, if we keep the initial conditions for the second basic bicomponent the same then we would still have (9). Clearly, (9) and (10) contradict output synchronization given that y_{\text{ss}}(t)\nrightarrow 0 as t\rightarrow\infty. Therefore we obtain our result by contradiction.
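Lemma 2 can be illustrated numerically. The sketch below (our own toy example, not the protocols of this paper) uses single-integrator agents with the static protocol u_{i}=-\zeta_{i} on a hypothetical 5-node graph with two basic bicomponents: the network signal \zeta(t) converges to zero (weak synchronization), while the outputs only synchronize within each basic bicomponent.

```python
# Toy illustration of Lemma 2: y' = -(L kron I) y with scalar outputs.
import numpy as np
from scipy.linalg import expm

L = np.array([
    [ 1, -1,  0,  0,  0],    # nodes 1, 2: basic bicomponent 1
    [-1,  1,  0,  0,  0],
    [ 0,  0,  1, -1,  0],    # nodes 3, 4: basic bicomponent 2
    [ 0,  0, -1,  1,  0],
    [ 0, -1,  0, -1,  2],    # node 5: receives from nodes 2 and 4
], dtype=float)

y0 = np.array([4.0, 0.0, -2.0, 0.0, 7.0])   # arbitrary initial outputs
yT = expm(-L * 50.0) @ y0                   # solution of y' = -L y at t = 50
print("zeta(T):", np.round(L @ yT, 6))      # ~ 0: weak synchronization
print("y(T):   ", np.round(yT, 3))          # ~ [2, 2, -1, -1, 0.5]: no consensus
```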

4 Dynamic behavior of the output of the agents in a MAS given weak synchronization

In this section, we focus on the behavior of the output of the agents of a MAS for given protocols which achieve weak synchronization. We have the following theorem.

Theorem 1

Consider an MAS with agent dynamics (2) with protocols of the form (7) which achieve weak synchronization as defined by Definition 2.

Assume the network does not have a directed spanning tree, which implies that the graph has k>1 basic bicomponents. Then, weak synchronization implies:

  • Within basic bicomponent \mathcal{B}_{i}, for i\in\{1,\ldots,k\}, the outputs of the agents synchronize and converge to a trajectory y^{i}_{s}.

  • An agent j which is not part of any of the basic bicomponents synchronizes to a trajectory y_{j,s},

    y_{j,s}=\sum_{i=1}^{k}\,\beta_{j,i}y^{i}_{s} (11)

    where the coefficients \beta_{j,i} are nonnegative, satisfy

    1=\sum_{i=1}^{k}\,\beta_{j,i} (12)

    and only depend on the parameters of the network and do not depend on any of the initial conditions.

Proof: Assume the Laplacian of the network is of the form (1). Denote by M_{1},\ldots,M_{k} the number of agents in the k basic bicomponents, respectively, while M_{0} is the number of agents not contained in a basic bicomponent.

Since the matrices L_{1},\ldots,L_{k} have a one-dimensional nullspace while L_{0} is nonsingular, there exist \beta_{1},\ldots,\beta_{k}\in\mathbb{R}^{M_{0}} such that the nullspace of L is given by the image of the following matrix:

\begin{pmatrix}\beta_{1}&\beta_{2}&\cdots&\beta_{k}\\ \textbf{1}_{M_{1}}&0&\cdots&0\\ 0&\textbf{1}_{M_{2}}&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&\cdots&0&\textbf{1}_{M_{k}}\end{pmatrix} (13)

We define scalars \beta_{1,i},\ldots,\beta_{M_{0},i} such that:

\beta_{i}=\begin{pmatrix}\beta_{1,i}\\ \vdots\\ \beta_{M_{0},i}\end{pmatrix}.

Since we know \textbf{1}_{N} is an element of the nullspace of L we find that:

\sum_{i=1}^{k}\beta_{i}=\textbf{1}_{M_{0}}

which yields (12). Using that

\beta_{i}=-L_{0}^{-1}L_{0i}\textbf{1}_{M_{i}},

we find that all coefficients of \beta_{i} are nonnegative. This follows since the structure of the Laplacian guarantees that all coefficients of L_{0i} are nonpositive while all coefficients of L_{0}^{-1} are nonnegative. The latter follows from [6, Theorem 4.25].
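As a small numerical check (our own example with M_{0}=1 and k=2, not taken from the paper), the coefficients \beta_{i} can be computed directly from a Laplacian already permuted into the form (1):

```python
# Sketch: beta_i = -L0^{-1} L_{0i} 1_{M_i}; the entries are nonnegative and the
# beta_i sum to 1 for every agent outside the basic bicomponents, cf. (12).
import numpy as np

L0  = np.array([[2.0]])            # grounded Laplacian block (nonsingular)
L01 = np.array([[0.0, -1.0]])      # coupling to basic bicomponent 1 (2 agents)
L02 = np.array([[0.0, -1.0]])      # coupling to basic bicomponent 2 (2 agents)

beta = [-np.linalg.solve(L0, L0i @ np.ones(2)) for L0i in (L01, L02)]
print(beta)          # [array([0.5]), array([0.5])]: nonnegative weights
print(sum(beta))     # [1.]: the weights form a convex combination
```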

Assume \tau^{i}_{1},\ldots,\tau^{i}_{M_{i}} are the agents contained in basic bicomponent i. Then

(L\otimes I)\begin{pmatrix}y_{1}(t)\\ \vdots\\ y_{N}(t)\end{pmatrix}\rightarrow 0

(which follows from weak synchronization) implies:

(L_{i}\otimes I)\begin{pmatrix}y_{\tau^{i}_{1}}(t)\\ \vdots\\ y_{\tau^{i}_{M_{i}}}(t)\end{pmatrix}\rightarrow 0

and since the graph associated with L_{i} is strongly connected we find:

y_{\tau^{i}_{a}}(t)-y_{\tau^{i}_{b}}(t)\rightarrow 0

as t\rightarrow\infty, which implies the first bullet point of the theorem if we set y^{i}_{s}=y_{\tau^{i}_{1}}.

Next we consider an agent not part of any of the basic bicomponents. Define the following matrix:

B=\begin{pmatrix}\beta_{11}M_{1}^{-1}\textbf{1}_{M_{1}}^{\mbox{\tiny T}}&\quad\cdots\quad&\beta_{1k}M_{k}^{-1}\textbf{1}_{M_{k}}^{\mbox{\tiny T}}\\ \vdots&&\vdots\\ \beta_{M_{0}1}M_{1}^{-1}\textbf{1}_{M_{1}}^{\mbox{\tiny T}}&\quad\cdots\quad&\beta_{M_{0}k}M_{k}^{-1}\textbf{1}_{M_{k}}^{\mbox{\tiny T}}\end{pmatrix} (14)

Then it is easy to verify from the structure of the kernel of L presented in (13) that

\operatorname{ker}L\subset\operatorname{ker}\begin{pmatrix}-I&B\end{pmatrix},

which implies that there exists a matrix V such that

VL=\begin{pmatrix}-I&B\end{pmatrix}. (15)

But then weak synchronization implies that

(VL\otimes I)y(t)\rightarrow 0

as t\rightarrow\infty. Using (14) and (15) this implies:

y_{j}(t)-\beta_{j1}M_{1}^{-1}\sum_{v=1}^{M_{1}}y_{\tau^{1}_{v}}(t)-\beta_{j2}M_{2}^{-1}\sum_{v=1}^{M_{2}}y_{\tau^{2}_{v}}(t)-\cdots-\beta_{jk}M_{k}^{-1}\sum_{v=1}^{M_{k}}y_{\tau^{k}_{v}}(t)\rightarrow 0

as t\rightarrow\infty for j=1,\ldots,M_{0}. Using that y_{\tau^{i}_{v}}(t)-y^{i}_{s}(t)\rightarrow 0, established earlier, this yields:

y_{j}(t)-\beta_{j1}y^{1}_{s}(t)-\beta_{j2}y^{2}_{s}(t)-\cdots-\beta_{jk}y^{k}_{s}(t)\rightarrow 0

as t\rightarrow\infty for j=1,\ldots,M_{0}, which yields the second bullet point of the theorem.

5 The scale-free protocol for output synchronization and weak synchronization of heterogeneous MAS

Since it is known that many multi-agent systems suffer from scale fragility, it is desirable for our protocol (7) for the agents (2) to be scale-free. This implies that the protocol (7) for agent i must be designed based only on the model of agent i. This is formally defined below:

Definition 3 (Scale-free output synchronization)

The family of protocols (7) is said to achieve scale-free output synchronization for the family of agents (2) if the following property holds.

For any selection of agents \{\tau_{1},\ldots,\tau_{M}\} and for any associated graph \mathcal{G}\in\mathbb{G}^{M} with M nodes which has a directed spanning tree (with associated Laplacian \tilde{L}), we have that

\begin{array}{ccl}{x}_{s,e}^{+}&=&[\tilde{A}_{s}+\tilde{B}_{s}(\tilde{L}\otimes\tilde{H}_{s})]x_{s,e}\\ y_{s}&=&\tilde{C}_{s}x_{s,e}\end{array}

achieves output synchronization, i.e.

y_{i}(t)-y_{j}(t)\rightarrow 0

as t\rightarrow\infty for any i,j\in\{\tau_{1},\ldots,\tau_{M}\} and all possible initial conditions for x_{s,e}, where we define

\tilde{A}_{s}=\operatorname{diag}\{\tilde{A}_{\tau_{1}},\ldots,\tilde{A}_{\tau_{M}}\},\qquad\tilde{B}_{s}=\operatorname{diag}\{\tilde{B}_{\tau_{1}},\ldots,\tilde{B}_{\tau_{M}}\},
\tilde{C}_{s}=\operatorname{diag}\{\tilde{C}_{\tau_{1}},\ldots,\tilde{C}_{\tau_{M}}\},\qquad\tilde{H}_{s}=\operatorname{diag}\{\tilde{H}_{\tau_{1}},\ldots,\tilde{H}_{\tau_{M}}\}

and

x_{s,e}=\begin{pmatrix}x_{e,\tau_{1}}\\ \vdots\\ x_{e,\tau_{M}}\end{pmatrix}\qquad y_{s}=\begin{pmatrix}y_{\tau_{1}}\\ \vdots\\ y_{\tau_{M}}\end{pmatrix}.

Remark 1

For heterogeneous MAS, when all agents are introspective, scale-free collaborative protocols have been designed in [4]. When all agents are non-introspective and passive, static scale-free non-collaborative protocols have been designed for strongly connected graphs in [11].

Scale-free designs are used in a context where the network is not known. But this makes it hard to verify the assumption that the network has a directed spanning tree, since verifying this intrinsically requires knowledge of the network. In that sense, the concept of scale-free weak synchronization is more appropriate:

Definition 4 (Scale-free weak synchronization)

The family of protocols (7) is said to achieve scale-free weak synchronization for the family of agents (2) if the following property holds.

For any selection of agents \{\tau_{1},\ldots,\tau_{M}\} and for any associated graph \mathcal{G}\in\mathbb{G}^{M} (with associated Laplacian \tilde{L}), we have that

\begin{array}{ccl}{x}_{s,e}^{+}&=&[\tilde{A}_{s}+\tilde{B}_{s}(\tilde{L}\otimes\tilde{H}_{s})]x_{s,e}\\ y_{s}&=&\tilde{C}_{s}x_{s,e}\end{array}

(using the same notation as in Definition 3) achieves weak synchronization, i.e.

\zeta_{i}(t)\rightarrow 0

as t\rightarrow\infty for any i\in\{\tau_{1},\ldots,\tau_{M}\} and all possible initial conditions for x_{s,e}.

The next objective of this paper is to show that protocols which achieve scale-free output synchronization as defined in Definition 3 also achieve weak synchronization when the network, for instance due to a fault, lacks a directed spanning tree.

Theorem 2

Consider a continuous-time heterogeneous MAS with agent dynamics (2) with protocols of the form (7).

In that case, we achieve scale-free output synchronization (as defined in Definition 3) if and only if we achieve scale-free weak synchronization (as defined in Definition 4).

Remark 2

Note that scale-free weak synchronization implies scale-free output synchronization even if we allow for nonlinear protocols. However, our proof below explicitly depends on linearity for the reverse implication. We can for instance easily obtain extensions of the above theorems if we have protocols containing time-delays because that preserves the linearity.

Proof: From Lemma 2 we obtain that weak synchronization implies output synchronization if the network contains a directed spanning tree. This immediately implies that scale-free weak synchronization implies scale-free output synchronization.

It remains to establish that we obtain scale-free weak synchronization if we know that we have achieved scale-free output synchronization. If we look at the interconnection of (2) and (7), we can write it in the form:

\begin{array}{ccl}x_{e,i}^{+}&=&\tilde{A}_{i}x_{e,i}+\tilde{B}_{i}\tilde{\zeta}_{i}\\ y_{i}&=&\tilde{C}_{i}x_{e,i}\\ z_{i}&=&\tilde{H}_{i}x_{e,i}\end{array} (16)

with

\tilde{\zeta}_{i}=\sum_{j=1}^{N}\ell_{i,j}z_{j} (17)

\tilde{A}_{i}=\begin{pmatrix}A_{i}&B_{i}M_{i}\\ L_{i,m}C_{i,m}&K_{i}\end{pmatrix},\qquad\tilde{B}_{i}=\begin{pmatrix}0&0\\ L_{i}&L_{i,e}\end{pmatrix},
\tilde{C}_{i}=\begin{pmatrix}C_{i}&0\end{pmatrix},\qquad\tilde{H}_{i}=\begin{pmatrix}C_{i}&0\\ 0&N_{i}\end{pmatrix}

If we define

\tilde{A}=\operatorname{diag}\{\tilde{A}_{1},\ldots,\tilde{A}_{N}\},\qquad\tilde{B}=\operatorname{diag}\{\tilde{B}_{1},\ldots,\tilde{B}_{N}\},
\tilde{C}=\operatorname{diag}\{\tilde{C}_{1},\ldots,\tilde{C}_{N}\},\qquad\tilde{H}=\operatorname{diag}\{\tilde{H}_{1},\ldots,\tilde{H}_{N}\}

and

x_{e}=\begin{pmatrix}x_{e,1}\\ \vdots\\ x_{e,N}\end{pmatrix}\qquad y=\begin{pmatrix}y_{1}\\ \vdots\\ y_{N}\end{pmatrix},

then we can write the complete system as:

\begin{array}{ccl}{x}_{e}^{+}&=&[\tilde{A}+\tilde{B}(L\otimes\tilde{H})]x_{e}\\ y&=&\tilde{C}x_{e}\end{array} (18)
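To fix ideas, the following sketch (our own, with hypothetical dimensions and random placeholder matrices, no design intent) assembles the per-agent blocks of (16) and the closed-loop matrix of (18); for heterogeneous agents the coupling \tilde{B}(L\otimes\tilde{H}) is realized blockwise as \tilde{B}(L\otimes I)\tilde{H} with \tilde{H} block diagonal.

```python
# Sketch: assemble (16)-(18) from agent matrices (A, B, C, Cm) and protocol
# matrices (K, L_i, L_{i,e}, L_{i,m}, M, N). All numbers below are placeholders.
import numpy as np
from scipy.linalg import block_diag

def agent_blocks(A, B, C, Cm, K, Lg, Le, Lm, M, N):
    At = np.block([[A, B @ M], [Lm @ Cm, K]])
    Bt = np.block([[np.zeros((A.shape[0], Lg.shape[1] + Le.shape[1]))],
                   [np.hstack((Lg, Le))]])
    Ct = np.hstack((C, np.zeros((C.shape[0], K.shape[0]))))
    Ht = block_diag(C, N)                      # z_i stacks y_i and eta_i
    return At, Bt, Ct, Ht

def closed_loop(blocks, L):
    At = block_diag(*(b[0] for b in blocks))
    Bt = block_diag(*(b[1] for b in blocks))
    Ct = block_diag(*(b[2] for b in blocks))
    Ht = block_diag(*(b[3] for b in blocks))
    q = Ht.shape[0] // L.shape[0]              # common dimension of z_i
    return At + Bt @ np.kron(L, np.eye(q)) @ Ht, Ct

# Shape check with two hypothetical agents (random placeholder data).
rng = np.random.default_rng(0)
blocks = []
for n in (3, 2):                               # state dims; p = 1, dim(eta) = 1
    A, B  = rng.standard_normal((n, n)), rng.standard_normal((n, 1))
    C, Cm = rng.standard_normal((1, n)), rng.standard_normal((1, n))
    K     = rng.standard_normal((4, 4))
    Lg, Le, Lm = (rng.standard_normal((4, 1)) for _ in range(3))
    M, N  = rng.standard_normal((1, 4)), rng.standard_normal((1, 4))
    blocks.append(agent_blocks(A, B, C, Cm, K, Lg, Le, Lm, M, N))
Acl, Ct = closed_loop(blocks, np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(Acl.shape, Ct.shape)                     # (13, 13) (2, 13)
```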

We next consider the following k copies of the closed-loop dynamics:

(xei)+=[A~+B~(LH~)]xeiyi=C~xei\begin{array}[]{ccl}(x^{i}_{e})^{+}&=&[\tilde{A}+\tilde{B}(L\otimes\tilde{H})]x^{i}_{e}\\ y^{i}&=&\tilde{C}x_{e}^{i}\end{array} (19)

for i=1,,ki=1,\ldots,k where

xei=(xe,1ixe,Ni)yi=(y1iyNi).x^{i}_{e}=\begin{pmatrix}x^{i}_{e,1}\\ \vdots\\ x^{i}_{e,N}\end{pmatrix}\qquad y^{i}=\begin{pmatrix}y_{1}^{i}\\ \vdots\\ y^{i}_{N}\end{pmatrix}.

Each of these systems is associated with one of the basic bicomponents through its initial conditions. In particular, assume agent j is part of basic bicomponent i; then we choose

{xe,jv(0)=xe,j(0),v=i,xe,jv(0)=0,vi,\left\{\;\begin{array}[]{ll}x^{v}_{e,j}(0)=x_{e,j}(0),&\qquad v=i,\\ x^{v}_{e,j}(0)=0,&\qquad v\neq i,\end{array}\right.

and hence

xe,j1(0)++xe,jk(0)=xe,j(0).x^{1}_{e,j}(0)+\cdots+x^{k}_{e,j}(0)=x_{e,j}(0).

On the other hand, if agent j is not part of any basic bicomponent, then there is at least one basic bicomponent i from which agent j can be reached.

Assume that i is the basic bicomponent with the smallest index with this property (which implies \beta_{j,i}\neq 0). Then we choose

{xe,jv(0)=xe,j(0),v=i,xe,jv(0)=0,vi,\left\{\;\begin{array}[]{ll}x^{v}_{e,j}(0)=x_{e,j}(0),&\qquad v=i,\\ x^{v}_{e,j}(0)=0,&\qquad v\neq i,\end{array}\right.

and hence again

xe,j1(0)++xe,jk(0)=xe,j(0).x^{1}_{e,j}(0)+\cdots+x^{k}_{e,j}(0)=x_{e,j}(0).

In other words, we have

(xe,11(0)++xe,1k(0)xe,21(0)++xe,2k(0)xe,N1(0)++xe,Nk(0))=(xe,1(0)xe,2(0)xe,N(0))\begin{pmatrix}x^{1}_{e,1}(0)+\cdots+x^{k}_{e,1}(0)\\ x^{1}_{e,2}(0)+\cdots+x^{k}_{e,2}(0)\\ \vdots\\ x^{1}_{e,N}(0)+\cdots+x^{k}_{e,N}(0)\end{pmatrix}=\begin{pmatrix}x_{e,1}(0)\\ x_{e,2}(0)\\ \vdots\\ x_{e,N}(0)\end{pmatrix}

It means that we have

xe1(0)++xek(0)=xe(0)x^{1}_{e}(0)+\cdots+x^{k}_{e}(0)=x_{e}(0)

with

xe(0)=(xe,1(0)xe,2(0)xe,N(0))x_{e}(0)=\begin{pmatrix}x_{e,1}(0)\\ x_{e,2}(0)\\ \vdots\\ x_{e,N}(0)\end{pmatrix}

Using equation (19) we obtain

\begin{array}{l}(x^{1}_{e}+\cdots+{x}^{k}_{e})^{+}=[\tilde{A}+\tilde{B}(L\otimes\tilde{H})](x^{1}_{e}+\cdots+x^{k}_{e})\\ (y^{1}+\cdots+y^{k})=\tilde{C}(x^{1}_{e}+\cdots+x^{k}_{e})\end{array} (20)

But from equations (18) and (20) we see that x^{1}_{e}+\cdots+x^{k}_{e} and x_{e} satisfy the same dynamics and have the same initial condition. This implies that

x_{e}=x^{1}_{e}+\cdots+x^{k}_{e},\qquad y=y^{1}+\cdots+y^{k} (21)

Next consider x^{i}_{e}. Define:

\begin{pmatrix}\gamma^{i}_{1}\\ \vdots\\ \gamma^{i}_{N}\end{pmatrix}

as the i'th column of the matrix (13). This implies

\sum_{v=1}^{N}\ell_{j,v}\gamma^{i}_{v}=0 (22)

Consider an agent j for which \gamma^{i}_{j}=0. Then we note that all terms of the summation are nonpositive since the \gamma^{i}_{v} are nonnegative and the \ell_{j,v} for v\neq j are nonpositive. This implies:

\ell_{j,v}\gamma^{i}_{v}=0\quad\text{ for }v=1,\ldots,N. (23)

This yields that an agent j for which \gamma^{i}_{j}=0 only depends on other agents v (i.e. \ell_{j,v}\neq 0) for which \gamma^{i}_{v}=0. But since every agent v with \gamma^{i}_{v}=0 satisfies, by construction, x^{i}_{e,v}(0)=0, this yields that x^{i}_{e,v}(t)=0 for all t>0 for all such agents.

We consider all agents τ1i,,τNii\tau^{i}_{1},\ldots,\tau^{i}_{N_{i}} for which the corresponding γv\gamma_{v} is nonzero, i.e.

γτji0 for j=1,,Ni\gamma_{\tau^{i}_{j}}\neq 0\quad\text{ for }j=1,\ldots,N_{i} (24)

and let LiL_{i} denote the matrix obtained from LL by deleting both columns and rows whose index is not contained in the set τ1i,,τNii\tau^{i}_{1},\ldots,\tau^{i}_{N_{i}}. It can be shown that this matrix has rank Ni1N_{i}-1. After all, we already know

(γτ1iγτNii)\begin{pmatrix}\gamma_{\tau^{i}_{1}}\\ \vdots\\ \gamma_{\tau^{i}_{N_{i}}}\end{pmatrix} (25)

is contained in the null space of LiL_{i}. If, additionally,

(ητ1iητNii)\begin{pmatrix}\eta_{\tau^{i}_{1}}\\ \vdots\\ \eta_{\tau^{i}_{N_{i}}}\end{pmatrix}

is in the null space of LiL_{i} and linearly independent of (25) then it is easily verified that

\eta=\begin{pmatrix}\eta_{1}\\ \vdots\\ \eta_{N}\end{pmatrix}

is in the null space of L, where we have chosen \eta_{v}=0 when v\not\in\{\tau^{i}_{1},\ldots,\tau^{i}_{N_{i}}\}. Here we use that (23) implies that

\ell_{j,v}\eta_{v}=0\quad\text{ for }v=1,\ldots,N

for any j for which \gamma_{j}=0 (recall that \ell_{j,v}\neq 0 implies \gamma_{v}=0). But given the structure of the null space of L given by (13), L\eta=0 yields a contradiction.

Note that L_{i} has the structure of a Laplacian matrix except that its row sums need not be zero. However, the matrix

\Gamma=\operatorname{diag}\{\gamma_{\tau^{i}_{1}},\ldots,\gamma_{\tau^{i}_{N_{i}}}\}

is invertible and

\tilde{L}_{i}=\Gamma^{-1}L_{i}\Gamma

is a classical Laplacian matrix with zero row sums. Moreover, the associated graph contains a directed spanning tree since the rank of \tilde{L}_{i} equals N_{i}-1.
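A quick numerical check of this scaling argument (our own hypothetical example, reusing the 5-node graph from the earlier sketches, with basic bicomponent \{1,2\} and non-basic node 5):

```python
# Sketch: gamma is the column of (13) for basic bicomponent {1,2} (node 5 gets
# weight beta_{5,1} = 0.5); L_i is L restricted to the nodes where gamma != 0.
import numpy as np

Li = np.array([[ 1.0, -1.0, 0.0],     # rows/columns ordered as nodes 1, 2, 5
               [-1.0,  1.0, 0.0],
               [ 0.0, -1.0, 2.0]])
gamma = np.array([1.0, 1.0, 0.5])

Gamma = np.diag(gamma)
Li_tilde = np.linalg.inv(Gamma) @ Li @ Gamma
print(np.round(Li_tilde.sum(axis=1), 12))   # zero row sums: a genuine Laplacian
print(np.linalg.matrix_rank(Li_tilde))      # N_i - 1 = 2: has a spanning tree
```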

We consider agents \tau^{i}_{1},\ldots,\tau^{i}_{N_{i}}. Using the above, we obtain that

\begin{array}{ccl}x_{s,e}^{+}&=&[\tilde{A}_{s}+\tilde{B}_{s}(L_{i}\otimes\tilde{H}_{s})]x_{s,e}\\ y_{s}&=&\tilde{C}_{s}x_{s,e}\end{array} (26)

where

\tilde{A}_{s}=\operatorname{diag}\{\tilde{A}_{\tau^{i}_{1}},\ldots,\tilde{A}_{\tau^{i}_{N_{i}}}\},\qquad\tilde{B}_{s}=\operatorname{diag}\{\tilde{B}_{\tau^{i}_{1}},\ldots,\tilde{B}_{\tau^{i}_{N_{i}}}\},
\tilde{C}_{s}=\operatorname{diag}\{\tilde{C}_{\tau^{i}_{1}},\ldots,\tilde{C}_{\tau^{i}_{N_{i}}}\},\qquad\tilde{H}_{s}=\operatorname{diag}\{\tilde{H}_{\tau^{i}_{1}},\ldots,\tilde{H}_{\tau^{i}_{N_{i}}}\}

and

x_{s,e}=\begin{pmatrix}x^{i}_{e,\tau^{i}_{1}}\\ \vdots\\ x^{i}_{e,\tau^{i}_{N_{i}}}\end{pmatrix}\qquad y_{s}=\begin{pmatrix}y^{i}_{\tau^{i}_{1}}\\ \vdots\\ y^{i}_{\tau^{i}_{N_{i}}}\end{pmatrix}.

But then we obtain:

\begin{array}{ccl}\tilde{x}_{s,e}^{+}&=&[\tilde{A}_{s}+\tilde{B}_{s}(\tilde{L}_{i}\otimes\tilde{H}_{s})]\tilde{x}_{s,e}\\ \tilde{y}_{s}&=&\tilde{C}_{s}\tilde{x}_{s,e}\end{array} (27)

where \tilde{x}_{s,e}=(\Gamma^{-1}\otimes I)x_{s,e} and \tilde{y}_{s}=(\Gamma^{-1}\otimes I)y_{s}. Since our protocol achieves scale-free output synchronization and the network associated with \tilde{L}_{i} contains a directed spanning tree, we obtain output synchronization using Definition 3 and therefore also weak synchronization by Lemma 1, i.e.

(\tilde{L}_{i}\otimes I)\tilde{y}_{s}\rightarrow 0

as t\rightarrow\infty which implies

(L_{i}\otimes I)y_{s}\rightarrow 0

as t\rightarrow\infty. Given the way we constructed y_{s}, this implies:

(L\otimes I)y^{i}\rightarrow 0

as t\rightarrow\infty. Since this is true for i=1,\ldots,k we can use (21) to establish:

(L\otimes I)y\rightarrow 0

In other words, we achieve weak synchronization since the above derivation is valid for all possible initial conditions.  
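The superposition step (21) that underlies this proof is easy to verify numerically. The sketch below (our own toy example with single-integrator agents, not the protocols of this paper) splits the initial condition of the hypothetical 5-node network over its k=2 basic bicomponents, with node 5 assigned to the first copy, and checks that the summed trajectories reproduce the original one.

```python
# Toy check of (21): by linearity, simulating the k split initial conditions
# and summing the results equals simulating the original initial condition.
import numpy as np
from scipy.linalg import expm

L = np.array([[ 1, -1,  0,  0,  0],
              [-1,  1,  0,  0,  0],
              [ 0,  0,  1, -1,  0],
              [ 0,  0, -1,  1,  0],
              [ 0, -1,  0, -1,  2]], dtype=float)

y0 = np.array([4.0, 0.0, -2.0, 0.0, 7.0])
y0_copy1 = y0 * np.array([1, 1, 0, 0, 1])   # basic bicomponent {1,2} plus node 5
y0_copy2 = y0 * np.array([0, 0, 1, 1, 0])   # basic bicomponent {3,4}

Phi = expm(-L * 3.0)                        # flow of y' = -L y over t = 3
print(np.allclose(Phi @ y0, Phi @ y0_copy1 + Phi @ y0_copy2))   # True
```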

6 Numerical examples

In this section, we consider a special case of protocol (7): all agent models are introspective and the protocols are collaborative. We use the existing examples, in both continuous and discrete time, presented in [4] and [13].

6.1 Continuous-time case

We consider agent models of the form (2) with the following three groups of parameters. For Model 1 we have:

A_{i}=\begin{pmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 0&0&0&0\end{pmatrix},\quad B_{i}=\begin{pmatrix}0&1\\ 0&0\\ 1&0\\ 0&1\end{pmatrix},\quad C_{i}^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix},\quad(C_{i}^{m})^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 1\\ 0\\ 0\end{pmatrix}

while for Model 2 we have:

A_{i}=\begin{pmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{pmatrix},\quad B_{i}=\begin{pmatrix}0\\ 0\\ 1\end{pmatrix},\quad C_{i}^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 0\\ 0\end{pmatrix},\quad(C_{i}^{m})^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 1\\ 0\end{pmatrix}.

Finally, for Model 3 we have:

A_{i}=\begin{pmatrix}-1&0&0&-1&0\\ 0&0&1&1&0\\ 0&1&-1&1&0\\ 0&0&0&1&1\\ -1&1&0&1&1\end{pmatrix},\quad B_{i}=\begin{pmatrix}0&0\\ 0&0\\ 0&1\\ 0&0\\ 1&0\end{pmatrix},
C_{i}=\begin{pmatrix}0&0&0&1&0\end{pmatrix},\quad C_{i}^{m}=\begin{pmatrix}1&1&0&0&0\end{pmatrix}

For the protocols we use the design methodology of [4], we first choose a target model:

A=\begin{pmatrix}0&1&0\\ 0&0&1\\ 0&-1&0\end{pmatrix},\quad B=\begin{pmatrix}0\\ 0\\ 1\end{pmatrix},\quad C=\begin{pmatrix}1&0&0\end{pmatrix}.

Then we assign precompensators (for each different model) such that the behavior of the system combining model with precompensator approximately behaves as the target model. Finally we combine this precompensator (which is different for each model) with a homogeneous protocol designed for the target model. For details we refer to [4].

Refer to caption
Figure 3: The 60-node communication network with a directed spanning tree.

We consider the scale-free output synchronization results for the 60-node heterogeneous network shown in Figure 3, which contains a directed spanning tree. For this example, each agent is randomly assigned one of the above models.

When some links have faults, the communication network might lose its directed spanning tree. For example, if two specific links are broken in the original 60-node network given by Figure 3, then we obtain the network given in Figure 4.

Refer to caption
Figure 4: The communication network without spanning tree. The links are broken due to faults.

It is obvious that there is no spanning tree in Figure 4. We obtain three basic bicomponents (indicated in blue): one containing 30 nodes, one containing 8 nodes and one containing 4 nodes. Meanwhile, there are three non-basic bicomponents: one containing 10 nodes, one containing 6 nodes and one containing 10 nodes, which are indicated in yellow.

By using the scale-free protocol, we obtain \zeta_{i}\to 0 as t\to\infty, which means weak synchronization is achieved despite the lack of a directed spanning tree, see Fig. 5. This implies that the information available to each agent over the network goes to zero and the communication network becomes inactive.

Refer to caption
Figure 5: The trajectory of ζi\zeta_{i} for continuous-time MAS.

We have seen that for the 60-node network given in Figure 3 this protocol indeed achieves output synchronization. If we apply the same protocol to the network described by Figure 4, which does not contain a directed spanning tree, we again consider the six bicomponents constituting the network. We see that, consistent with the theory, we get output synchronization within the three basic bicomponents, as illustrated in Figures 6, 7 and 8 respectively. Clearly, the disagreement dynamics among the agents (the errors between the outputs of the agents) go to zero within each basic bicomponent. According to Theorem 1, any agent outside of the basic bicomponents converges to a convex combination of the synchronized trajectories of the basic bicomponents, i.e., agents in any of the non-basic bicomponents converge to a convex combination of the synchronized trajectories of basic bicomponents 1, 2 and 3. Note that we do not necessarily have that all agents within a specific non-basic bicomponent converge to the same asymptotic behaviour.

Refer to caption
Figure 6: Basic bicomponent 1 for continuous-time MAS: disagreement dynamic among the agents and synchronized output trajectories.
Refer to caption
Figure 7: Basic bicomponent 2 for continuous-time MAS: disagreement dynamic among the agents and synchronized output trajectories.
Refer to caption
Figure 8: Basic bicomponent 3 for continuous-time MAS: disagreement dynamic among the agents and synchronized output trajectories.

6.2 Discrete-time case

Consider discrete-time agent models of the form (2) with four different sets of parameters

A_{i}=\begin{pmatrix}0&1&0&0\\ 0&0&1&0\\ -1&0&0&-1\\ 0&-1&0&0\end{pmatrix},\quad B_{i}=\begin{pmatrix}0&0\\ 0&0\\ 0&1\\ 1&0\end{pmatrix},\quad C_{i}^{\mbox{\tiny T}}=\begin{pmatrix}0\\ 0\\ 0\\ 1\end{pmatrix},\quad{C_{i}^{m}}^{\mbox{\tiny T}}=\begin{pmatrix}0\\ -1\\ 0\\ 1\end{pmatrix}

for model 1, and

A_{i}=\begin{pmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{pmatrix},\quad B_{i}=\begin{pmatrix}0\\ 0\\ 1\end{pmatrix},\quad C_{i}^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 0\\ 0\end{pmatrix},\quad{C_{i}^{m}}^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 1\\ 0\end{pmatrix}

for model 2,

A_{i}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix},\quad B_{i}=\begin{pmatrix}0\\ 1\end{pmatrix},\quad C_{i}^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 0\end{pmatrix},\quad{C_{i}^{m}}^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 1\end{pmatrix}

for model 3 and, finally,

A_{i}=\begin{pmatrix}0&1\\ -2&-2\end{pmatrix},\quad B_{i}=\begin{pmatrix}0\\ 1\end{pmatrix},\quad C_{i}^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 0\end{pmatrix},\quad{C_{i}^{m}}^{\mbox{\tiny T}}=\begin{pmatrix}1\\ 1\end{pmatrix}

for model 4.

For the protocols we use the design methodology of [13], which is similar to the technique we used in the continuous-time case. We first choose a target model:

A=\begin{pmatrix}0&1&0\\ 0&0&1\\ 1&-2&2\end{pmatrix},\quad B=\begin{pmatrix}0\\ 0\\ 1\end{pmatrix},\quad C=\begin{pmatrix}1&0&0\end{pmatrix}.

We assign precompensators (for each different model) such that the behavior of the system combining model with precompensator approximately behaves as the target model. Finally, we combine this precompensator (which is different for each model) with a homogeneous protocol designed for the target model. For details, we refer to [13].

We consider the scale-free weak synchronization results for the 60-node discrete-time heterogeneous network shown in Figure 4. Again, each agent is randomly assigned one of these four models.

Refer to caption
Figure 9: The trajectory of ζi\zeta_{i} for discrete-time MAS.
Refer to caption
Figure 10: Basic bicomponent 1 for discrete-time MAS: disagreement dynamic among the agents and synchronized output trajectories.
Refer to caption
Figure 11: Basic bicomponent 2 for discrete-time MAS: disagreement dynamic among the agents and synchronized output trajectories.
Refer to caption
Figure 12: Basic bicomponent 3 for discrete-time MAS: disagreement dynamic among the agents and synchronized output trajectories.

By using the scale-free protocol, we obtain \zeta_{i}\to 0 as t\to\infty, see Figure 9, which means weak synchronization is achieved despite the lack of a directed spanning tree. This implies that the information available to each agent over the network goes to zero and the communication network becomes inactive.

We see that, consistent with the theory, we get output synchronization within the three basic bicomponents, as illustrated in Figures 10, 11 and 12. Clearly, the disagreement dynamics among the agents (the errors between the outputs of the agents) go to zero within each basic bicomponent. Similarly, any agent outside of the basic bicomponents converges to a convex combination of the synchronized trajectories of basic bicomponents 1, 2 and 3, based on the second part of Theorem 1.

7 Conclusion

In this paper we have introduced the concept of weak synchronization for MAS. We have shown that this is the right concept when no information about the network is available. If the network has a directed spanning tree, weak synchronization coincides with the classical concept of output synchronization. However, when, due to a fault, the network no longer contains a directed spanning tree, we still achieve the best synchronization properties possible for the given network. We have seen that the protocols guarantee a stable response to these faults: within basic bicomponents we still achieve synchronization, and the outputs of the agents not contained in a basic bicomponent converge to a convex combination of the asymptotic behavior achieved in the basic bicomponents. This behavior is completely independent of the specific scale-free protocols being used.

For the heterogeneous MAS considered in this paper, the main focus is on improving the available protocol design methodologies, since for heterogeneous networks the current designs are still limited in scope; the concept of weak synchronization introduced in this paper is the correct concept for this protocol design.

References

  • [1] J.P. Corfmat and A.S. Morse. Decentralized control of linear multivariable systems. Automatica, 12(5):479–495, 1976.
  • [2] C. Godsil and G. Royle. Algebraic graph theory, volume 207 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2001.
  • [3] Z. Liu, D. Nojavanzedah, and A. Saberi. Cooperative control of multi-agent systems: A scale-free protocol design. Springer, Cham, 2022.
  • [4] D. Nojavanzadeh, Z. Liu, A. Saberi, and A.A. Stoorvogel. Output and regulated output synchronization of heterogeneous multi-agent systems: A scale-free protocol design using no information about communication network and the number of agents. In American Control Conference, pages 865–870, Denver, CO, 2020.
  • [5] R. Olfati-Saber and R.M. Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Aut. Contr., 49(9):1520–1533, 2004.
  • [6] Z. Qu. Cooperative control of dynamical systems: applications to autonomous vehicles. Springer-Verlag, London, U.K., 2009.
  • [7] W. Ren and E. Atkins. Distributed multi-vehicle coordination control via local information exchange. Int. J. Robust & Nonlinear Control, 17(10-11):1002–1033, 2007.
  • [8] W. Ren and Y.C. Cao. Distributed coordination of multi-agent networks. Communications and Control Engineering. Springer-Verlag, London, 2011.
  • [9] D.D. Siljak. Decentralized control of complex systems. Academic Press, London, 1991.
  • [10] A. Stanoev and D. Smilkov. Consensus theory in networked systems. In L. Kocarev, editor, Consensus and synchronization in complex networks, pages 1–22. Springer-Verlag, Berlin, 2013.
  • [11] A. A. Stoorvogel, A. Saberi, Z. Liu, and D. Nojavanzadeh. H2{H}_{2} and H{H}_{\infty} almost output synchronization of heterogeneous continuous-time multi-agent systems with passive agents and partial-state coupling via static protocol. Int. J. Robust & Nonlinear Control, 29(17):6244–6255, 2019.
  • [12] E. Tegling. Fundamental limitations of distributed feedback control in large-scale networks. PhD thesis, KTH Royal Institute of Technology, 2018.
  • [13] X. Wang, A. Saberi, and T. Yang. Synchronization in heterogeneous networks of discrete-time introspective right-invertible agents. Int. J. Robust & Nonlinear Control, 24(18):3255–3281, 2013.
  • [14] C.W. Wu. Synchronization in complex networks of nonlinear dynamical systems. World Scientific Publishing Company, Singapore, 2007.
  • [15] C.W. Wu and L.O. Chua. Application of Kronecker products to the analysis of systems with uniform linear coupling. IEEE Trans. Circ. & Syst.-I Fundamental theory and applications, 42(10):775–778, 1995.