
Tracking Control of Nonlinear Networked and Quantized Control Systems with Communication Delays

This work was supported by the National Natural Science Foundation of China under Grant 61773357.

Wei Ren, and Junlin Xiong W. Ren is with Division of Decision and Control Systems, EECS, KTH Royal Institute of Technology, SE-10044, Stockholm, Sweden. J. Xiong is with Department of Automation, University of Science and Technology of China, Hefei, 230026, China. Email: weire@kth.se, junlin.xiong@gmail.com.
Abstract

This paper studies the tracking control problem of nonlinear networked and quantized control systems (NQCSs) with communication delays. The desired trajectory is generated by a reference system, and a communication network handles the information transmission among the plant, the reference system and the controller. The network also introduces undesired effects such as time-varying transmission intervals, time-varying transmission delays, packet dropouts, scheduling and quantization, which lead to non-vanishing network-induced errors and degrade the tracking performance. To address these effects, we develop a general hybrid system model for NQCSs that captures all the aforementioned issues. Based on the Lyapunov approach, sufficient conditions are established to guarantee the stability of the tracking error with respect to the non-vanishing network-induced errors. The obtained conditions lead to a tradeoff between the maximally allowable transmission interval and the maximally allowable delay. Furthermore, the existence of Lyapunov functions satisfying the obtained conditions is studied. For specific time-scheduling protocols (e.g., the Round-Robin protocol and the Try-Once-Discard protocol) and quantizers (e.g., the zoom quantizer and the box quantizer), Lyapunov functions are constructed explicitly. Finally, a numerical example is presented to demonstrate the developed theory.

Index Terms:
Lyapunov functions, networked control systems, quantized control, tracking control, time-scheduling protocols.

I Introduction

Because of fast development and widespread application of digital network technologies, networked control systems (NCSs) have attracted attention in the control community over the past decades; see [1, 2, 3, 4, 5]. The presence of the network offers considerable advantages over the traditional feedback control systems in terms of simplicity and flexibility in installation and maintenance, low cost and convenient resource sharing. On the other hand, the introduction of the limited-capacity network also induces many issues. The network-induced issues can be grouped into five types as follows [6, 7, 8, 9]: time-varying transmission intervals; time-varying transmission delays; quantization errors; packet dropouts (caused by the unreliability of the network); and communication constraints (caused by the sharing of the network by multiple nodes and the fact that only one node is allowed to transmit its packet per transmission). Therefore, system modelling, stability analysis and controller design are fundamental problems for NCSs. Based on different modeling approaches and analysis methods, many results have been obtained in the literature. For instance, both system modelling and stability analysis have been studied in [10, 9, 11, 8, 12], and stabilizing controllers have been developed in [6, 13, 14, 15].

However, as another fundamental problem in control theory, tracking control has seldom been studied for NCSs; see [16, 14, 17, 18]. The main objective of tracking control is to design an appropriate controller such that the considered system tracks a given reference trajectory as closely as possible; see [19, 20, 21]. In tracking control, the controller consists of two parts [22, 16]: the feedforward part, which induces the reference trajectory in the whole system, and the feedback part, which ensures the stabilization of the considered system and the convergence to the reference trajectory. Compared with stability analysis, the tracking control problem is well recognized to be more general and more difficult [23, 14]. In addition, due to the presence of the network, the aforementioned network-induced issues have a great impact on the tracking performance. For instance, both quantization and time-varying transmission delays result in feedforward errors, and the communication constraints and limited capacity of the network deteriorate the tracking performance. Therefore, only approximate tracking can be achieved [16]. For instance, the approximate tracking control problem has been studied in [16] for sampled-data systems, and in [18] for NCSs with time-varying transmission intervals and delays via the emulation-like approach as in [9, 11]. However, in all these previous works only some of the aforementioned issues are addressed, which motivates us to study this topic further.

In this paper, we study the tracking control problem for nonlinear networked and quantized control systems (NQCSs) with communication delays, which are nonlinear NCSs subject to all the aforementioned issues. To this end, a unified hybrid model in the formalism of [24, 25] is developed for the tracking control of NQCSs with communication delays based on the emulation-like approach as in [10, 8, 9], which is our first contribution. We aim at proposing a high-fidelity model that is amenable to controller design and tracking performance analysis. To achieve this, all the aforementioned network-induced issues are studied, including time-varying transmission intervals, time-varying transmission delays, quantization errors, packet dropouts and communication constraints. In addition, a general quantizer is proposed that recovers most types of quantizers in previous works [26, 27, 28, 8]. As a result, the proposed hybrid model extends those in previous works [14, 18, 16, 8, 9] on stability analysis and tracking control of NCSs.

Our second contribution is to establish sufficient conditions that guarantee the convergence of the tracking error with respect to the network-induced errors using the Lyapunov-based approach. To this end, some reasonable assumptions are provided, which are different from those in [9] for stability analysis of NCSs. With these assumptions, we derive the tradeoff between the maximally allowable transmission interval (MATI) and the maximally allowable delay (MAD) that guarantees that the tracking error converges to the origin up to some offset due to the aforementioned network-induced errors. These network-induced errors not only lead to the main difference from the scenario of stabilizing an equilibrium point, but also result in additional technical difficulties in the tracking performance analysis. Note that the obtained results also apply to the scenario of stabilizing an equilibrium point. In addition, the tradeoff depends on the applied communication protocol, and thus allows for the comparison of different protocols.

Since the assumptions we adopt are different from those for stability analysis, it is necessary to verify the existence of Lyapunov functions satisfying these assumptions, which is the third contribution of this paper. The construction of Lyapunov functions is presented explicitly based on the quantization-free and delay-free case in [18], the quantization-free case in [9] and the delay-free cases in [8, 10]. In addition, for different time-scheduling protocols and quantizers, specific Lyapunov functions are established. In the construction of Lyapunov functions, we also show how to reduce the effects of the network-induced errors on the tracking performance through the implementation of the controller and the design of the time-scheduling protocol.

A preliminary version of this work has been presented in the conference paper [29], where the zoom quantizer is considered and the reference trajectory is required to be convergent. The current paper extends the approach to general NQCSs with a more general quantizer and no constraints on the reference trajectory. In addition, Lyapunov functions are constructed explicitly in this paper. Therefore, the result of [29] is recovered as a particular case.

This paper is organized as follows. Preliminaries are presented in Section II. The tracking problem is formulated in Section III, and a unified system model is developed in Section IV. The Lyapunov-based conditions are obtained in Section V to guarantee the convergence of the tracking error and the tradeoff between the MATI and the MAD. The existence of Lyapunov functions is studied in Section VI. In Section VII, the developed results are illustrated by a numerical example. Conclusions and future research directions are stated in Section VIII.

II Preliminaries

Basic definitions and notation are presented in this section. $\mathbb{R}:=(-\infty,+\infty)$; $\mathbb{R}_{\geq 0}:=[0,+\infty)$; $\mathbb{R}_{>0}:=(0,+\infty)$; $\mathbb{N}:=\{0,1,2,\ldots\}$; $\mathbb{N}_{>0}:=\{1,2,\ldots\}$. Given two sets $\mathcal{A}$ and $\mathcal{B}$, $\mathcal{B}\backslash\mathcal{A}:=\{x|x\in\mathcal{B},x\notin\mathcal{A}\}$. Given a constant $a\in\mathbb{R}$ and a set $\mathcal{A}$, $a\mathcal{A}:=\{ax|x\in\mathcal{A}\}$. A set $\mathcal{A}\subseteq\mathbb{R}^{n}$ is symmetric if $-x\in\mathcal{A}$ for all $x\in\mathcal{A}$. $|\cdot|$ stands for the Euclidean norm; $\|\cdot\|_{\mathfrak{J}}$ denotes the supremum norm of a function on an interval $\mathfrak{J}$, and $\|\cdot\|$ denotes the supremum norm in the case of $\mathfrak{J}=[t_{0},\infty)$, where $t_{0}\in\mathbb{R}_{\geq 0}$ is the given initial time. For vectors $x,y\in\mathbb{R}^{n}$, $(x,y):=(x^{\mathsf{T}},y^{\mathsf{T}})^{\mathsf{T}}$ for simplicity of notation, and $\langle x,y\rangle$ denotes the usual inner product. $I_{n}$ represents the identity matrix of dimension $n$, and $\operatorname{diag}\{A,B\}$ denotes the block diagonal matrix made of the square matrices $A$ and $B$. The symbols $\wedge$ and $\vee$ denote 'and' and 'or' in logic, respectively. $\bm{B}(a,b)$ denotes the hypercubic box centered at $a\in\mathbb{R}^{n}$ with edges of length $2b$. $f(t^{+}):=\limsup_{s\rightarrow 0^{+}}f(t+s)$ for a given function $f:\mathbb{R}_{\geq t_{0}}\rightarrow\mathbb{R}^{n}$. A function $\alpha:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$ is of class $\mathcal{K}$ if it is continuous, $\alpha(0)=0$, and strictly increasing; it is of class $\mathcal{K}_{\infty}$ if it is of class $\mathcal{K}$ and unbounded. A function $\beta:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$ is of class $\mathcal{KL}$ if $\beta(s,t)$ is of class $\mathcal{K}$ for each fixed $t\geq 0$ and $\beta(s,t)$ decreases to zero as $t\rightarrow\infty$ for each fixed $s\geq 0$. A function $\beta:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$ is of class $\mathcal{KLL}$ if $\beta(r,s,t)$ is of class $\mathcal{KL}$ for each fixed $s\geq 0$ and of class $\mathcal{KL}$ for each fixed $t\geq 0$.

The basic concepts of hybrid systems are introduced as follows; see [24] for the details. Consider hybrid systems of the form:

$$\begin{cases}\dot{x}=F(x,w),\quad(x,w)\in C;\\ x^{+}=G(x,w),\quad(x,w)\in D,\end{cases} \qquad (1)$$

where $x\in\mathbb{R}^{n}$ is the system state, $w\in\mathbb{R}^{m}$ is the external input, $F:C\rightarrow\mathbb{R}^{n}$ is the flow map, $G:D\rightarrow\mathbb{R}^{n}$ is the jump map, $C$ is the flow set and $D$ is the jump set. For the hybrid system (1), the following basic assumptions are given [24]: the sets $C,D\subset\mathbb{R}^{n}\times\mathbb{R}^{m}$ are closed; $F$ is continuous on $C$; and $G$ is continuous on $D$.

A subset $E\subset\mathbb{R}_{\geq 0}\times\mathbb{N}$ is a compact hybrid time domain if $E=\bigcup_{0\leq j\leq J}([t_{j},t_{j+1}],j)$ for some finite sequence of times $0=t_{0}\leq t_{1}\leq\ldots\leq t_{J+1}$. $E$ is a hybrid time domain if, for all $(T,J)\in E$, $E\cap([0,T]\times\{0,\ldots,J\})$ is a compact hybrid time domain. For all $(t_{1},j_{1}),(t_{2},j_{2})\in\mathbb{R}_{\geq 0}\times\mathbb{N}$, we write $(t_{1},j_{1})\preceq(t_{2},j_{2})$ (or $(t_{1},j_{1})\prec(t_{2},j_{2})$) if $t_{1}+j_{1}\leq t_{2}+j_{2}$ (or $t_{1}+j_{1}<t_{2}+j_{2}$). A function $w:\operatorname{dom}w\rightarrow\mathbb{R}^{m}$ is a hybrid input if $w(\cdot,j)$ is Lebesgue measurable and locally essentially bounded for each $j$. A function $x:\operatorname{dom}x\rightarrow\mathbb{R}^{n}$ is a hybrid arc if $x(\cdot,j)$ is locally absolutely continuous for each $j$. The hybrid arc $x:\operatorname{dom}x\rightarrow\mathbb{R}^{n}$ and the hybrid input $w:\operatorname{dom}w\rightarrow\mathbb{R}^{m}$ are a solution pair to (1) if: i) $\operatorname{dom}x=\operatorname{dom}w$ and $(x(t_{0},j_{0}),w(t_{0},j_{0}))\in C\cup D$; ii) for all $j\in\mathbb{N}$ and almost all $t$ such that $(t,j)\in\operatorname{dom}x$, $(x(t,j),w(t,j))\in C$ and $\dot{x}(t,j)=F(x(t,j),w(t,j))$; iii) for all $(t,j)\in\operatorname{dom}x$ such that $(t,j+1)\in\operatorname{dom}x$, $(x(t,j),w(t,j))\in D$ and $x(t,j+1)=G(x(t,j),w(t,j))$. A solution pair $(x,w)$ to (1) is maximal if it cannot be extended, and it is complete if $\operatorname{dom}x$ is unbounded. Let $w$ be a hybrid input with $(0,0)$ as initial hybrid time, and define
$$\|w\|_{(t,j)}:=\max\Big\{\operatorname*{ess.\,sup}_{(t^{\prime},j^{\prime})\in\operatorname{dom}w\setminus\Gamma(w),\,(0,0)\preceq(t^{\prime},j^{\prime})\preceq(t,j)}|w(t^{\prime},j^{\prime})|,\ \sup_{(t^{\prime},j^{\prime})\in\Gamma(w),\,(0,0)\preceq(t^{\prime},j^{\prime})\preceq(t,j)}|w(t^{\prime},j^{\prime})|\Big\},$$
where $\Gamma(w)$ denotes the set of all $(t,j)\in\operatorname{dom}w$ such that $(t,j+1)\in\operatorname{dom}w$. Denote by $\mathfrak{S}_{w}(x_{0})$ the set of all maximal solution pairs $(x,w)$ to the system (1) with $x_{0}\in C\cup D$ and finite $\|w\|:=\sup_{(t,j)\in\operatorname{dom}w}\|w\|_{(t,j)}$.

Definition 1 ([24])

The hybrid system (1) is input-to-state stable (ISS) from $w$ to $x$ if there exist $\beta\in\mathcal{KLL}$ and $\gamma\in\mathcal{K}_{\infty}$ such that for all $(t,j)\in\operatorname{dom}x$ and all $(x,w)\in\mathfrak{S}_{w}(x(0,0))$,

$$|x(t,j)|\leq\beta(|x(0,0)|,t,j)+\gamma(\|w\|_{(t,j)}).$$

If, in addition, $\beta(v,t,j)=Kve^{-(t+j)}$ with $K>0$, then the system (1) is exponentially input-to-state stable (EISS) from $w$ to $x$.
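To make the hybrid formalism above concrete, the following minimal Python sketch simulates a scalar instance of (1) by forward-Euler integration on the flow set and discrete resets on the jump set. The flow map, the jump map and the sets $C$ and $D$ used here are illustrative choices for demonstration only and are not taken from the paper.

```python
# Minimal sketch: simulate a scalar hybrid system of the form (1).
# The flow map F, jump map G and the sets C, D below are illustrative
# choices (a decaying flow with a timer-triggered reset), not from the paper.
def F(x, w):            # flow map: continuous dynamics on C
    return -x + w

def G(x, w):            # jump map: discrete update on D
    return 0.5 * x

def in_C(x, tau):       # flow set: timer below threshold
    return tau <= 1.0

def in_D(x, tau):       # jump set: timer reaches threshold
    return tau >= 1.0

def simulate(x0, w=0.1, T=10.0, dt=1e-3):
    x, tau, t, j = x0, 0.0, 0.0, 0          # (t, j): hybrid time
    traj = [(t, j, x)]
    while t < T:
        if in_D(x, tau):                    # jump: reset state and timer
            x, tau, j = G(x, w), 0.0, j + 1
        elif in_C(x, tau):                  # flow: Euler step
            x += dt * F(x, w)
            tau += dt
            t += dt
        traj.append((t, j, x))
    return traj

trajectory = simulate(x0=2.0)
print(trajectory[-1])                        # final hybrid time and state
```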

III Problem Formulation

In this section, the tracking control problem for NQCSs with communication delays is formulated using the emulation approach as proposed in [11, 10].

III-A Tracking Problem of NQCSs

Consider the nonlinear system of the form

$$\dot{x}_{\operatorname{p}}=f_{\operatorname{p}}(x_{\operatorname{p}},u),\quad y_{\operatorname{p}}=g_{\operatorname{p}}(x_{\operatorname{p}}), \qquad (2)$$

where $x_{\operatorname{p}}\in\mathbb{R}^{n_{\operatorname{p}}}$ is the system state, $u\in\mathbb{R}^{n_{u}}$ is the control input, and $y_{\operatorname{p}}\in\mathbb{R}^{n_{y_{\operatorname{p}}}}$ is the system output. The reference system tracked by the system (2) is of the form

$$\dot{x}_{\operatorname{r}}=f_{\operatorname{p}}(x_{\operatorname{r}},u_{\operatorname{f}}),\quad y_{\operatorname{r}}=g_{\operatorname{p}}(x_{\operatorname{r}}), \qquad (3)$$

where $x_{\operatorname{r}}\in\mathbb{R}^{n_{\operatorname{r}}}$ is the reference state ($n_{\operatorname{r}}=n_{\operatorname{p}}$), $u_{\operatorname{f}}\in\mathbb{R}^{n_{u}}$ is the feedforward control input, and $y_{\operatorname{r}}\in\mathbb{R}^{n_{y_{\operatorname{r}}}}$ is the reference output ($n_{y_{\operatorname{r}}}=n_{y_{\operatorname{p}}}=n_{y}$). Assume that the reference system (3) has a unique solution for any initial condition and any input.

To track the reference system, the controller, which is designed for (2) in the absence of the network, is given by

$$u=u_{\operatorname{c}}+u_{\operatorname{f}}, \qquad (4)$$

where $u_{\operatorname{f}}\in\mathbb{R}^{n_{u}}$ is the feedforward term and $u_{\operatorname{c}}\in\mathbb{R}^{n_{u}}$ is the feedback term. The feedback term $u_{\operatorname{c}}$ is generated by the nonlinear feedback controller

$$\dot{x}_{\operatorname{c}}=f_{\operatorname{c}}(x_{\operatorname{c}},y_{\operatorname{p}}-y_{\operatorname{r}}),\quad u_{\operatorname{c}}=g_{\operatorname{c}}(x_{\operatorname{c}}), \qquad (5)$$

where $x_{\operatorname{c}}\in\mathbb{R}^{n_{\operatorname{c}}}$ is the feedback controller state and $u_{\operatorname{c}}\in\mathbb{R}^{n_{u}}$ is the feedback controller output. Observe from (5) that the feedback controller depends on the difference between the outputs $y_{\operatorname{p}}$ and $y_{\operatorname{r}}$, which is different from the cases studied in previous works [10, 8, 11, 9, 18], where the feedback controller depends on the outputs $y_{\operatorname{p}}$ and $y_{\operatorname{r}}$ separately. Feedback controllers of the form (5) can be found in the literature [20, 16]. In addition, denote $y_{\operatorname{d}}:=y_{\operatorname{p}}-y_{\operatorname{r}}\in\mathbb{R}^{n_{y}}$ for the sake of convenience.

Assume that $f_{\operatorname{p}}$ and $f_{\operatorname{c}}$ are continuous, and that $g_{\operatorname{p}}$ and $g_{\operatorname{c}}$ are continuously differentiable. The objective of this paper is to implement the designed controller over the network, as illustrated in Fig. 1, and to demonstrate that, under reasonable assumptions, the tracking performance of the system (2)-(5) is preserved for the NQCS.

Figure 1: General framework of tracking control of networked and quantized control systems (NQCSs).

III-B Information Transmission over Quantizer and Network

At the transmission times $t_{s_{i}}$, $i\in\mathbb{N}$, (part of) the outputs of the plant, the reference system and the controller are sampled, quantized and transmitted through the communication network. The communication network is used to guarantee the information transmission among the sensors, the controller and the actuators. Based on the band-limited network and the spatial location of the sensors and actuators [30], we group the sensors and actuators into $l\in\mathbb{N}_{>0}$ nodes connecting the network. Correspondingly, the transmitted information is partitioned into $l$ parts. At each $t_{s_{i}}$, one and only one node is allowed to access the network, which is determined by time-scheduling protocols; see also [7, 8, 11] and Subsection VI-A. The transmission times are strictly increasing, and the transmission intervals are defined as $h_{i}:=t_{s_{i+1}}-t_{s_{i}}$, $i\in\mathbb{N}$. Due to computation and data coding, the information cannot be transmitted instantaneously; see [31, 32]. As a result, there exist transmission delays $\tau_{i}\geq 0$, $i\in\mathbb{N}$, such that the controller and actuators receive the transmitted information at the arrival times $r_{i}=t_{s_{i}}+\tau_{i}$, $i\in\mathbb{N}$. For both $h_{i}$ and $\tau_{i}$, $i\in\mathbb{N}$, the following assumption is adopted; see also [9, Assumption II.1].

Assumption 1

There exist constants $h_{\operatorname{mati}}\geq h_{\operatorname{mad}}\geq 0$ and $\varepsilon\in(0,h_{\operatorname{mati}})$ such that $\varepsilon\leq h_{i}\leq h_{\operatorname{mati}}$ and $0\leq\tau_{i}\leq\min\{h_{\operatorname{mad}},h_{i}\}$ for all $i\in\mathbb{N}$.

In Assumption 1, $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ are called the maximally allowable transmission interval (MATI) and the maximally allowable delay (MAD), respectively. The condition $\varepsilon>0$ implies that there are no Zeno solutions for the closed-loop system; see [8, Assumption 1] and [32]. Assumption 1 guarantees that each transmitted packet arrives before the next sampling, which is called the small-delay case; see [33, 9].

Remark 1

In Assumption 1, $\varepsilon>0$ always exists in real networks and is called the minimum inter-transmission interval; see [32]. If packet dropouts are considered, then $h_{\operatorname{mati}}$ is adjusted to $\bar{h}_{\operatorname{mati}}=h_{\operatorname{mati}}/(\aleph+1)$, where $\aleph\in\mathbb{N}$ is the maximal number of successive packet dropouts; see also [9]. $\square$
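As a small illustration of Assumption 1 and Remark 1, the following Python sketch checks a given sequence of transmission times and delays against the bounds $\varepsilon$, $h_{\operatorname{mati}}$, $h_{\operatorname{mad}}$, and applies the packet-dropout adjustment; all numerical values are hypothetical.

```python
# Sketch: verify Assumption 1 for a sequence of transmission times and delays,
# and apply the packet-dropout adjustment of Remark 1 (hypothetical values).
eps, h_mati, h_mad = 0.001, 0.01, 0.004      # epsilon, MATI, MAD (seconds)
t_s = [0.0, 0.008, 0.017, 0.025, 0.034]      # transmission times t_{s_i}
tau = [0.002, 0.003, 0.001, 0.004]           # delays tau_i

h = [t_s[i + 1] - t_s[i] for i in range(len(t_s) - 1)]   # intervals h_i
ok_intervals = all(eps <= hi <= h_mati for hi in h)
ok_delays = all(0 <= tau[i] <= min(h_mad, h[i]) for i in range(len(tau)))
print("Assumption 1 satisfied:", ok_intervals and ok_delays)

# Remark 1: with at most N successive dropouts, shrink the MATI accordingly.
N_dropouts = 1
h_mati_bar = h_mati / (N_dropouts + 1)
print("Adjusted MATI:", h_mati_bar)
```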

As mentioned above, to match the limited transmission capacity of the network, $y_{\operatorname{d}}$, $u_{\operatorname{c}}$ and $u_{\operatorname{f}}$ are quantized before they are transmitted via the network. Each node has a quantizer, which is a piecewise continuous function $q_{j}:\mathbb{R}_{>0}\times\mathbb{R}^{n_{j}}\rightarrow\mathcal{Q}_{j}\subset\mathbb{R}^{n_{j}}$, where $\mathcal{Q}_{j}$ is a finite or countable set and $j\in\{1,\ldots,l\}$. The quantizer has a quantization parameter $\mu_{j}>0$ and satisfies the following assumption, which is a generalization of those in [34, 26, 28].

Assumption 2

For each $j\in\{1,\ldots,l\}$, there exist nonempty symmetric sets $\mathds{C}_{0j},\mathds{C}_{j},\mathds{D}_{j}\subset\mathbb{R}^{n_{j}}$ and a constant $d_{j}\in[0,1)$ such that $\mathds{C}_{0j}\subseteq\mathds{C}_{j}$, the origin is contained in $\mathds{C}_{0j},\mathds{C}_{j},\mathds{D}_{j}$, and for all $z_{j}\in\mathbb{R}^{n_{j}}$,

$$z_{j}\in\mathds{C}_{j}\ \Rightarrow\ q_{j}(\mu_{j},z_{j})-z_{j}\in\mathds{D}_{j}, \qquad (6)$$
$$z_{j}\notin\mathds{C}_{j}\ \Rightarrow\ q_{j}(\mu_{j},z_{j})\notin(1-d_{j})\mathds{C}_{j}, \qquad (7)$$
$$z_{j}\in\mathds{C}_{0j}\ \Rightarrow\ q_{j}(\mu_{j},z_{j})\equiv 0. \qquad (8)$$

In Assumption 2, $\epsilon_{j}:=q_{j}(\mu_{j},z_{j})-z_{j}$ is defined as the quantization error. For each $j\in\{1,\ldots,l\}$, $\mathds{C}_{j}$ is the union of all the quantization regions; $\mathds{D}_{j}$ is the set of all the (possible) quantization errors; $\mathds{C}_{0j}$ is the deadzone of the quantizer. Condition (6) bounds the quantization error as long as the quantizer does not saturate. Condition (7) provides an approach to detect possible saturation. Note that the constant $d_{j}$ is related to the sizes of the sets $\mathds{C}_{j}$ and $\mathds{D}_{j}$. If the signal is sufficiently small, condition (8) states that it is reasonable to quantize it as zero. Two special quantizers will be studied in Subsections VI-C and VI-D. On the other hand, the quantizer is assumed to be applied synchronously on both sides of the network; otherwise, see [35] for more details.
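As an illustration of Assumption 2, the following Python sketch implements a scalar zoom-type quantizer (cf. Remark 3 and Subsection VI-C) whose sets $\mathds{C}_{j}$, $\mathds{D}_{j}$, $\mathds{C}_{0j}$ and constant $d_{j}=\Delta/M$ satisfy conditions (6)-(8); the parameter values are illustrative only.

```python
# Sketch of a scalar "zoom"-type quantizer consistent with Assumption 2:
# C_j = {|z| <= M*mu}, D_j = {|eps| <= Delta*mu}, C_0j = {|z| <= Delta0},
# d_j = Delta/M.  Parameter values are illustrative.
import math

M, Delta, Delta0 = 10.0, 1.0, 0.05           # M_j > Delta_j > 0, small deadzone

def q(mu, z):
    if abs(z) <= Delta0:                     # condition (8): deadzone maps to 0
        return 0.0
    if abs(z) > M * mu:                      # condition (7): saturation detectable
        return math.copysign(M * mu, z)
    step = 2.0 * Delta * mu                  # condition (6): |q - z| <= Delta*mu
    return step * round(z / step)

mu = 0.5
for z in (0.02, 3.3, -7.8, 20.0):
    print(z, "->", q(mu, z), " error:", q(mu, z) - z)
```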

Remark 2

The applied quantizer is a dynamic quantizer due to the quantization parameter. Because of the applied dynamic quantizer, the feedback controller takes the form (5) so that $y_{\operatorname{d}}$ is driven to converge. If the reference trajectory is convergent, then the convergence point of the reference trajectory can be set as the origin in the applied quantizer. In this case, we can relax the feedback controller (5) to the following form:

$$\dot{x}_{\operatorname{c}}=f_{\operatorname{c}}(x_{\operatorname{c}},y_{\operatorname{p}},y_{\operatorname{r}}),\quad u_{\operatorname{c}}=g_{\operatorname{c}}(x_{\operatorname{c}}). \qquad (9)$$

That is, the controller depends on the outputs $y_{\operatorname{p}}$ and $y_{\operatorname{r}}$, which is similar to the cases in previous works [10, 8, 11, 9, 18]. In this case, the system modelling and tracking performance analysis in this paper proceed in a similar fashion with slight modifications; see also [29, 18]. $\square$

Remark 3

The quantizer satisfying Assumption 2 is an extension of the zoom quantizer in [28] (see also Subsection VI-C) and includes many types of quantizers in the existing works [27, 36, 26, 33]. For instance, for the uniform quantizer in [27, 28], $\mu_{j}$ is constant, $\mathds{C}_{j}=\{z_{j}\in\mathbb{R}^{n_{j}}:|z_{j}|\leq M_{j}\}$, $\mathds{D}_{j}=\{\epsilon_{j}\in\mathbb{R}^{n_{j}}:|\epsilon_{j}|\leq\Delta_{j}\}$ and $d_{j}=\Delta_{j}/M_{j}$, where $M_{j}>\Delta_{j}>0$. For the zoom quantizer [31, 28], $\mu_{j}$ is time-varying, $\mathds{C}_{j}=\{z_{j}\in\mathbb{R}^{n_{j}}:|z_{j}|\leq M_{j}\mu_{j}\}$, $\mathds{D}_{j}=\{\epsilon_{j}\in\mathbb{R}^{n_{j}}:|\epsilon_{j}|\leq\Delta_{j}\mu_{j}\}$ and $d_{j}=\Delta_{j}/M_{j}$, where $M_{j}>\Delta_{j}>0$. For the box quantizer in [8, 26], $\mathds{C}_{j}=\{z_{j}\in\mathbb{R}^{n_{j}}:z_{j}\in\bm{B}(\hat{z}_{j},\mu_{j})\}$, $\mathds{D}_{j}=\{\epsilon_{j}\in\mathbb{R}^{n_{j}}:|\epsilon_{j}|\leq\sqrt{n_{j}}\mu_{j}/N_{j}\}$ and $d_{j}=0$, where $\hat{z}_{j}$ is the estimate of $z_{j}$ and $N_{j}$ is a given constant. Note that $\mathds{C}_{0j}=\{z_{j}\in\mathbb{R}^{n_{j}}:|z_{j}|\leq\Delta_{0j}\}$ for all the preceding quantizers, where $\Delta_{0j}\geq 0$ is a small constant. $\square$

All the quantization parameters of the $l$ nodes are combined as $\mu:=(\mu_{1},\ldots,\mu_{l})\in\mathbb{R}^{l}_{>0}$, and the overall quantizer is defined as

$$q(\mu,z):=(q_{1}(\mu_{1},z_{1}),\ldots,q_{l}(\mu_{l},z_{l})).$$

The quantization parameter $\mu\in\mathbb{R}^{l}_{>0}$ evolves according to a hybrid dynamics, which will be given later. The quantized measurements are defined as $\bar{y}_{\operatorname{d}}:=q(\mu,y_{\operatorname{d}})$, $\bar{u}_{\operatorname{c}}:=q(\mu,u_{\operatorname{c}})$ and $\bar{u}_{\operatorname{f}}:=q(\mu,u_{\operatorname{f}})$. Correspondingly, the quantization errors are $\epsilon_{\operatorname{d}}:=\bar{y}_{\operatorname{d}}-y_{\operatorname{d}}$, $\epsilon_{\operatorname{c}}:=\bar{u}_{\operatorname{c}}-u_{\operatorname{c}}$ and $\epsilon_{\operatorname{f}}:=\bar{u}_{\operatorname{f}}-u_{\operatorname{f}}$. Denote $\bm{\epsilon}:=(\epsilon_{\operatorname{d}},\epsilon_{\operatorname{c}},\epsilon_{\operatorname{f}})\in\mathbb{R}^{n_{y}+2n_{u}}$.

The quantized measurements are transmitted via the network and received at the arrival times $r_{i}\in\mathbb{R}_{\geq 0}$, $i\in\mathbb{N}$. In the following, the variables $\hat{u}_{\operatorname{c}},\hat{u}_{\operatorname{f}}\in\mathbb{R}^{n_{u}}$ and $\hat{y}_{\operatorname{d}}\in\mathbb{R}^{n_{y}}$ denote the networked versions of $u_{\operatorname{c}}$, $u_{\operatorname{f}}$ and $y_{\operatorname{d}}$, respectively. Therefore, the plant (2) receives $\hat{u}:=\hat{u}_{\operatorname{c}}+\hat{u}_{\operatorname{f}}$, which is the networked version of the control input $u$; the reference system (3) receives $\hat{u}_{\operatorname{f}}\in\mathbb{R}^{n_{u}}$; and the feedback controller (5) receives $\hat{y}_{\operatorname{d}}\in\mathbb{R}^{n_{y}}$. In addition, the errors induced by the quantizer and the network are defined as $e_{\operatorname{d}}:=\hat{y}_{\operatorname{d}}-y_{\operatorname{d}}$, $e_{\operatorname{c}}:=\hat{u}_{\operatorname{c}}-u_{\operatorname{c}}$ and $e_{\operatorname{f}}:=\hat{u}_{\operatorname{f}}-u_{\operatorname{f}}$. Upon reception of the quantized measurements at the arrival times, the received measurements are updated with the latest quantized measurements as follows:

$$\hat{y}_{\operatorname{d}}(r^{+}_{i})=\bar{y}_{\operatorname{d}}(t_{s_{i}})+\bm{h}_{\operatorname{d}}(i,e_{\operatorname{d}}(t_{s_{i}}),e_{\operatorname{c}}(t_{s_{i}}),e_{\operatorname{f}}(t_{s_{i}})),$$
$$\hat{u}_{\operatorname{c}}(r^{+}_{i})=\bar{u}_{\operatorname{c}}(t_{s_{i}})+\bm{h}_{\operatorname{c}}(i,e_{\operatorname{d}}(t_{s_{i}}),e_{\operatorname{c}}(t_{s_{i}}),e_{\operatorname{f}}(t_{s_{i}})),$$
$$\hat{u}_{\operatorname{f}}(r^{+}_{i})=\bar{u}_{\operatorname{f}}(t_{s_{i}})+\bm{h}_{\operatorname{f}}(i,e_{\operatorname{d}}(t_{s_{i}}),e_{\operatorname{c}}(t_{s_{i}}),e_{\operatorname{f}}(t_{s_{i}})),$$

where $\bm{h}_{\operatorname{d}}$, $\bm{h}_{\operatorname{c}}$ and $\bm{h}_{\operatorname{f}}$ are the update functions, which depend on the time-scheduling protocol that determines which node is granted access to the network; see also Subsection VI-A.

Between arrival times, the received measurements are assumed to be held in a zero-order hold (ZOH) fashion, i.e., for all $t\in(r_{i},r_{i+1})$, $i\in\mathbb{N}$,

$$\dot{\hat{y}}_{\operatorname{d}}=0,\quad\dot{\hat{u}}_{\operatorname{c}}=0,\quad\dot{\hat{u}}_{\operatorname{f}}=0. \qquad (10)$$

Therefore, at the arrival times $r_{i}$, $i\in\mathbb{N}$, the error $e_{\operatorname{d}}$ is updated as follows:

$$\begin{aligned}
e_{\operatorname{d}}(r^{+}_{i})&=\hat{y}_{\operatorname{d}}(r^{+}_{i})-y_{\operatorname{d}}(r^{+}_{i})\\
&=\bar{y}_{\operatorname{d}}(t_{s_{i}})+\bm{h}_{\operatorname{d}}(i,\bm{\vartheta}(t_{s_{i}}))-y_{\operatorname{d}}(r_{i})\\
&=e_{\operatorname{d}}(r_{i})-e_{\operatorname{d}}(t_{s_{i}})+\epsilon_{\operatorname{d}}(t_{s_{i}})+\bm{h}_{\operatorname{d}}(i,\bm{\vartheta}(t_{s_{i}}))\\
&=:e_{\operatorname{d}}(r_{i})-e_{\operatorname{d}}(t_{s_{i}})+h_{\operatorname{d}}(i,y_{\operatorname{d}}(t_{s_{i}}),\bm{\vartheta}(t_{s_{i}}),\mu(t_{s_{i}})),
\end{aligned} \qquad (11)$$

where $h_{\operatorname{d}}(i,y_{\operatorname{d}},\bm{\vartheta},\mu):=\epsilon_{\operatorname{d}}+\bm{h}_{\operatorname{d}}(i,\bm{\vartheta})$, $\bm{\vartheta}:=(e_{\operatorname{d}},e_{\operatorname{c}},e_{\operatorname{f}})\in\mathbb{R}^{n_{\bm{\vartheta}}}$ and $n_{\bm{\vartheta}}=n_{y}+n_{\operatorname{c}}+n_{\operatorname{f}}$. In (11), the third equality holds due to the ZOH device and $\hat{y}_{\operatorname{d}}(t_{s_{i}})=\hat{y}_{\operatorname{d}}(r_{i})$. Similarly,

$$e_{\operatorname{c}}(r^{+}_{i})=e_{\operatorname{c}}(r_{i})-e_{\operatorname{c}}(t_{s_{i}})+h_{\operatorname{c}}(i,x_{\operatorname{c}}(t_{s_{i}}),\bm{\vartheta}(t_{s_{i}}),\mu(t_{s_{i}})),$$
$$e_{\operatorname{f}}(r^{+}_{i})=e_{\operatorname{f}}(r_{i})-e_{\operatorname{f}}(t_{s_{i}})+h_{\operatorname{f}}(i,x_{\operatorname{f}}(t_{s_{i}}),\bm{\vartheta}(t_{s_{i}}),\mu(t_{s_{i}})),$$

where $x_{\operatorname{f}}$ is an auxiliary variable to ensure that the update of $e_{\operatorname{f}}$ is of the form (11). The variable $x_{\operatorname{f}}$ is related to $u_{\operatorname{f}}$ because $h_{\operatorname{f}}(i,x_{\operatorname{f}},\bm{\vartheta},\mu)=q(\mu,u_{\operatorname{f}})-u_{\operatorname{f}}+\bm{h}_{\operatorname{f}}(i,\bm{\vartheta})$.

Remark 4

In this paper, the ZOH technique is required in (10). The reason lies in the existence of time delays. For the time-delay case (see also [16, 37, 9]), the ZOH technique guarantees that $\hat{y}_{\operatorname{d}}(t_{s_{i}})\equiv\hat{y}_{\operatorname{d}}(r_{i})$, which is applied in (11). For the delay-free case (see also [11, 8, 18]), such a technique is not required, and the generation of $(\hat{y}_{\operatorname{d}},\hat{u}_{\operatorname{c}},\hat{u}_{\operatorname{f}})$ in (10) can be more flexible. $\square$
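The following Python fragment sketches the reception logic of (10)-(11) for a single node and a single transmission: the networked value is held constant between arrival times and overwritten at $r_{i}$ with the value quantized at $t_{s_{i}}$. The protocol update $\bm{h}_{\operatorname{d}}$ is set to zero here (i.e., the node is assumed to be granted access), and all numerical values are illustrative.

```python
# Sketch of the ZOH reception logic (10)-(11) for one node and one
# transmission.  h_d is a protocol-dependent update, set to zero here.
def h_d(i, e_d, e_c, e_f):
    return 0.0                               # illustrative protocol update

def on_arrival(y_hat, y_d_now, y_d_bar_at_ts, i, e_at_ts):
    """Update at r_i^+: y_hat is overwritten by the sample quantized at t_{s_i}."""
    e_d_before = y_hat - y_d_now             # e_d(r_i)
    y_hat_new = y_d_bar_at_ts + h_d(i, *e_at_ts)
    e_d_after = y_hat_new - y_d_now          # e_d(r_i^+), consistent with (11)
    return y_hat_new, e_d_before, e_d_after

# Between r_i and r_{i+1} the held value does not change (ZOH):
# y_hat(t) = y_hat(r_i^+) for all t in (r_i, r_{i+1}).
y_hat_new, e_before, e_after = on_arrival(
    y_hat=1.0, y_d_now=1.2, y_d_bar_at_ts=1.15, i=0, e_at_ts=(0.1, 0.0, 0.0))
print(e_before, e_after)
```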

Similar to the evolution of the received measurements, the quantized measurements are held in a ZOH fashion on $(r_{i},r_{i+1})$ and updated at $r_{i}$, $i\in\mathbb{N}$, together with the update of $\mu\in\mathbb{R}^{l}_{>0}$. The evolution of $\mu$ is given as follows:

$$\dot{\mu}(t)=\bar{g}_{\mu}(y_{\operatorname{d}},x_{\operatorname{r}},x_{\operatorname{c}},\bm{\vartheta},\mu),\quad t\in[t_{s_{i}},t_{s_{i+1}}]\setminus\{r_{i}\}, \qquad (12)$$
$$\mu(r^{+}_{i})=\mu(r_{i})-\mu(t_{s_{i}})+h_{\mu}(i,\mu(t_{s_{i}}),\bm{\epsilon}(t_{s_{i}})), \qquad (13)$$

where $\bar{g}_{\mu}$ is an evolution function and $h_{\mu}$ is an update function that depends on the time-scheduling protocol.

Remark 5

The evolution of the quantization parameter $\mu$ is analogous to those in [38, 8, 37]. That is, $\mu$ is time-varying on the continuous intervals and updated at discrete-time instants. If $\mu$ is time-invariant on $[t_{s_{i}},t_{s_{i+1}})$ and only updated at the arrival times, then $\dot{\mu}\equiv 0$ on $(r_{i},r_{i+1})$ and $\mu(r^{+}_{i})=h_{\mu}(i,\mu(t_{s_{i}}),\bm{\epsilon}(t_{s_{i}}))$. This scenario is similar to the case in [37]. However, $\mu$ is updated at the arrival times in this paper instead of at the transmission times as in the previous works [38, 8]. A similar case can be found in [37], where two zoom parameters (one on the coder side and the other on the decoder side) are updated at the arrival times. $\square$

Since there are some errors induced by the quantizer and the network, the following assumption is used to guarantee that the quantizer does not saturate; see [37, 8].

Assumption 3

The bound on the initial state $(x_{\operatorname{p}}(t_{0}),x_{\operatorname{r}}(t_{0}),x_{\operatorname{c}}(t_{0}))$ is assumed to be known a priori. The quantization parameter $\mu\in\mathbb{R}^{l}_{>0}$ is such that the quantization error $\bm{\epsilon}$ is bounded.

Assumption 3 ensures that the system state lies in the quantization regions, which further implies that the time-scheduling protocol is Lyapunov uniformly globally exponentially stable [9, 10, 11]. This assumption is easily enforced for linear systems [8, 26], and is reasonable due to the extensive study on quantized control in the literature. For instance, the bound on the initial state can be obtained by an initial zooming-out stage, in which the quantization parameter increases until the state is captured by the quantization regions; see [31, 34]. The bound on the quantization error can be obtained for linear systems as in [31] and for nonlinear systems as in [39].

Remark 6

In the previous work [34], where Assumption 3 is not imposed, a quantization mechanism with a zooming-out stage is implemented. The goal of the zooming-out stage is to bound the system state in finite time by increasing the quantization parameter. However, since the quantization errors increase during the zooming-out stage, the time-scheduling protocol is not Lyapunov uniformly globally asymptotically stable (UGAS); see [9, 10, 11]. As a result, the stability analysis in this paper cannot be applied directly in this case. $\square$

IV Development of System Model

Based on the analysis of the information transmission in Subsection III-B, we construct an impulsive model and, further, a unified hybrid model for the tracking control problem of NQCSs in this section. To this end, the objective of this paper is first recast as establishing the convergence of $x_{\operatorname{p}}$ towards $x_{\operatorname{r}}$ in the presence of the quantizer and the network. To measure the convergence of $x_{\operatorname{p}}$ towards $x_{\operatorname{r}}$, define the tracking error $\eta:=x_{\operatorname{p}}-x_{\operatorname{r}}\in\mathbb{R}^{n_{\operatorname{p}}}$ and the error $e_{1}:=(e_{\operatorname{d}},e_{\operatorname{c}})\in\mathbb{R}^{n_{1}}$, where $n_{1}=n_{y}+n_{\operatorname{c}}$. Combining all the variables and the analysis in Subsection III-B, the resulting system model, denoted by $\mathcal{S}_{1}$, is the following impulsive system:

$$\left.\begin{aligned} \dot{\eta}&=F_{\eta}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}})\\ \dot{x}_{\operatorname{c}}&=F_{\operatorname{c}}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}})\\ \dot{x}_{\operatorname{r}}&=F_{\operatorname{r}}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}})\\ \dot{\mu}&=G_{\mu}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}},\mu)\\ \dot{e}_{1}&=G_{1}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}})\\ \dot{e}_{\operatorname{f}}&=G_{\operatorname{f}}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}})\end{aligned}\right\}\ t\in[t_{s_{i}},t_{s_{i+1}}]\setminus\{r_{i}\},$$

$$\mu(r^{+}_{i})=\mu(r_{i})-\mu(t_{s_{i}})+H_{\mu}(i,\eta(t_{s_{i}}),x_{\operatorname{c}}(t_{s_{i}}),x_{\operatorname{r}}(t_{s_{i}}),e_{\operatorname{d}}(t_{s_{i}}),e_{\operatorname{f}}(t_{s_{i}}),\mu(t_{s_{i}})),$$
$$e_{1}(r^{+}_{i})=e_{1}(r_{i})-e_{1}(t_{s_{i}})+H_{1}(i,\eta(t_{s_{i}}),x_{\operatorname{c}}(t_{s_{i}}),x_{\operatorname{r}}(t_{s_{i}}),e_{1}(t_{s_{i}}),e_{\operatorname{f}}(t_{s_{i}}),\mu(t_{s_{i}})),$$
$$e_{\operatorname{f}}(r^{+}_{i})=e_{\operatorname{f}}(r_{i})-e_{\operatorname{f}}(t_{s_{i}})+H_{\operatorname{f}}(i,\eta(t_{s_{i}}),x_{\operatorname{c}}(t_{s_{i}}),x_{\operatorname{r}}(t_{s_{i}}),e_{\operatorname{d}}(t_{s_{i}}),e_{\operatorname{f}}(t_{s_{i}}),\mu(t_{s_{i}})).$$

All the functions in $\mathcal{S}_{1}$ are derived by direct calculation and are assumed to be continuous; see the Appendix for the detailed expressions of all the functions that appear in $\mathcal{S}_{1}$.

Remark 7

The impulsive model $\mathcal{S}_{1}$ is general enough to include the one in [18]. If the feedback controller (9) is applied, then a similar impulsive model can be obtained in the same fashion. In addition, if the reference system (3) is removed, then $\mathcal{S}_{1}$ reduces to a general model for NQCSs with communication delays. In this case, $\mathcal{S}_{1}$ extends those developed in [8, 9] by combining all the network-induced issues and casts new light on stability analysis for NQCSs with communication delays. $\square$

Now, our objective is to establish sufficient conditions that guarantee ISS of the system $\mathcal{S}_{1}$ from $e_{\operatorname{f}}$ to $(\eta,e_{1},\mu)$. Here, $e_{\operatorname{f}}$ is a network-induced error and does not necessarily vanish over time; see [16, 18]. For specific quantizers and time-scheduling protocols, the effects of $e_{\operatorname{f}}$ can be attenuated, which will be discussed in Section VI.

IV-A Reformulation of System Model

To facilitate the stability analysis, the system model $\mathcal{S}_{1}$ is further transformed into a hybrid system by a mechanism similar to that proposed in [25] and employed in [9, 37, 10, 18]. To this end, some auxiliary variables are introduced. For the sake of convenience, define the augmented state $x:=(\eta,x_{\operatorname{c}},x_{\operatorname{r}})\in\mathbb{R}^{n_{x}}$ and the augmented error $e:=(e_{1},e_{\operatorname{f}})\in\mathbb{R}^{n_{e}}$, where $n_{x}=n_{\operatorname{p}}+n_{\operatorname{c}}+n_{\operatorname{r}}$ and $n_{e}=n_{1}+n_{\operatorname{f}}$. For the update at the arrival times $r_{i}$, $i\in\mathbb{N}$, the variables $m_{1}\in\mathbb{R}^{n_{e}}$ and $m_{2}\in\mathbb{R}^{l}$ are used to store the information $H_{e}(i,x(t_{s_{i}}),e(t_{s_{i}}),\mu(t_{s_{i}}))-e(t_{s_{i}})$ and $H_{\mu}(i,x(t_{s_{i}}),e(t_{s_{i}}),\mu(t_{s_{i}}))-\mu(t_{s_{i}})$, respectively. In addition, denote $m_{1}:=(m^{1}_{1},m^{\operatorname{f}}_{1})$ and $m^{1}_{1}:=(m^{\operatorname{d}}_{1},m^{\operatorname{c}}_{1})$. The variable $\tau$ is a timer used to keep track of both the transmission intervals and the transmission delays, and $c$ is a counter of the transmission events. The variable $b$ is a logical variable indicating whether the next event is a transmission event or an update event; that is, $b$ ensures that the update event occurs before the next sampling.

Based on the above auxiliary variables, the resulting hybrid system, denoted by $\mathcal{S}_{2}$, is presented below. The hybrid system $\mathcal{S}_{2}$ has two parts. The first part is the flow equation, which is given by

$$\left.\begin{aligned} \dot{x}&=f(x,e)\\ \dot{e}&=g_{e}(x,e)\\ \dot{\mu}&=g_{\mu}(x,e,\mu)\\ \dot{m}_{1}&=0,\quad\dot{m}_{2}=0\\ \dot{\tau}&=1,\quad\dot{c}=0,\quad\dot{b}=0\end{aligned}\right\}\quad (b=0\wedge\tau\in[0,h_{\operatorname{mati}}])\vee(b=1\wedge\tau\in[0,h_{\operatorname{mad}}]). \qquad (14)$$

To simplify the notation, denote $\xi:=(x,e,\mu,m_{1},m_{2})\in\mathbb{R}^{n_{\xi}}$ with $n_{\xi}=n_{x}+2n_{e}+2l$. The second part is the jump equation $(\xi^{+},\tau^{+},c^{+},b^{+})=R(\xi,\tau,c,b)$, which itself consists of two parts for the different time instants: the transmission jump equation for the transmission instants $t_{s_{i}}$, $i\in\mathbb{N}$, is given by

$$R(\xi,\tau,c,0)=(x,e,\mu,H_{e}(c,x,e,\mu)-e,H_{\mu}(c,x,e,\mu)-\mu,0,c+1,1); \qquad (15)$$

and the update jump equation for the arrival instants $r_{i}$, $i\in\mathbb{N}$, is given by

$$R(\xi,\tau,c,1)=(x,e+m_{1},\mu+m_{2},-e-m_{1},-\mu-m_{2},\tau,c,0). \qquad (16)$$

Note that at the jumps (15)-(16), the condition $(b=0\wedge\tau\in[\varepsilon,h_{\operatorname{mati}}])\vee(b=1\wedge\tau\in[0,h_{\operatorname{mad}}])$ holds, where the constant $\varepsilon>0$ is given in Assumption 1.
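The switching structure encoded by (14)-(16) can be sketched as follows: the timer $\tau$ triggers a transmission jump ($b:0\rightarrow 1$, $\tau$ reset) and then an update jump ($b:1\rightarrow 0$) before the delay bound $h_{\operatorname{mad}}$ expires. In the Python sketch below, the error dynamics and the maps $H_{e}$, $H_{\mu}$ are placeholders; only the event logic is illustrated, and all parameter values are hypothetical.

```python
# Sketch of the event logic of the hybrid model S2 in (14)-(16).
import random

h_mati, h_mad, eps, dt = 0.01, 0.004, 0.001, 1e-4

def simulate(steps=5000):
    e, mu, m1, m2 = 1.0, 1.0, 0.0, 0.0
    tau, c, b = 0.0, 0, 0
    next_event = random.uniform(eps, h_mati)              # next transmission time
    for _ in range(steps):
        if b == 0 and tau >= next_event:                  # transmission jump (15)
            m1, m2 = -0.5 * e, 0.0                        # placeholder for H_e - e, H_mu - mu
            tau, c, b = 0.0, c + 1, 1
            next_event = random.uniform(0.0, h_mad)       # this packet's delay
        elif b == 1 and tau >= next_event:                # update jump (16)
            e, mu, m1, m2, b = e + m1, mu + m2, -e - m1, -mu - m2, 0
            next_event = random.uniform(max(eps, tau), h_mati)
        else:                                             # flow (14)
            e, tau = e + dt * (-5.0 * e), tau + dt        # placeholder error dynamics
    return c, e

transmissions, e_final = simulate()
print(transmissions, e_final)
```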

IV-B Hybrid Model of NQCSs

In view of Subsection IV-A, we further write the system $\mathcal{S}_{2}$ in the formalism of [24, 25]. Denote $\mathfrak{X}:=(\xi,\tau,c,b)\in\mathscr{R}:=\mathbb{R}^{n_{\xi}}\times\mathbb{R}_{\geq 0}\times\mathbb{N}\times\{0,1\}$. The hybrid system $\mathcal{S}_{2}$ can be rewritten as

$$\mathcal{H}:\left\{\begin{aligned} \dot{\mathfrak{X}}&=F(\mathfrak{X}),\quad\mathfrak{X}\in C;\\ \mathfrak{X}^{+}&=G(\mathfrak{X}),\quad\mathfrak{X}\in D,\end{aligned}\right. \qquad (17)$$

where $C:=\{\mathfrak{X}\in\mathscr{R}|(b=0\wedge\tau\in[0,h_{\operatorname{mati}}])\vee(b=1\wedge\tau\in[0,h_{\operatorname{mad}}])\}$ and $D:=\{\mathfrak{X}\in\mathscr{R}|(b=0\wedge\tau\in[\varepsilon,h_{\operatorname{mati}}])\vee(b=1\wedge\tau\in[0,h_{\operatorname{mad}}])\}$. The flow map $F$ in (17) is defined as

$$F(\mathfrak{X}):=(f(x,e),g_{e}(x,e),g_{\mu}(x,e,\mu),0,0,1,0,0); \qquad (18)$$

and the jump map $G$ in (17) is defined as

$$G(\mathfrak{X}):=\left\{\begin{aligned} G_{1}(\mathfrak{X}),\quad\mathfrak{X}\in D_{1};\\ G_{2}(\mathfrak{X}),\quad\mathfrak{X}\in D_{2},\end{aligned}\right. \qquad (19)$$

where $G_{1}(\mathfrak{X}):=(x,e,\mu,H_{e}(c,x,e,\mu)-e,H_{\mu}(c,x,e,\mu)-\mu,0,c+1,1)$ with $D_{1}:=\{\mathfrak{X}\in\mathscr{R}|b=0\wedge\tau\in[\varepsilon,h_{\operatorname{mati}}]\}$ corresponds to a transmission jump (15); and $G_{2}(\mathfrak{X}):=(x,e+m_{1},\mu+m_{2},-e-m_{1},-\mu-m_{2},\tau,c,0)$ with $D_{2}:=\{\mathfrak{X}\in\mathscr{R}|b=1\wedge\tau\in[0,h_{\operatorname{mad}}]\}$ corresponds to an update jump (16).

For the hybrid model $\mathcal{H}$, the sets $C$ and $D$ in (17) are closed, and the flow map $F$ in (18) is continuous due to the continuity assumptions on $f$, $g_{e}$ and $g_{\mu}$. The jump map $G$ in (19) is continuous and locally bounded by the continuity of $G_{1}$ and $G_{2}$. As a result, the hybrid model $\mathcal{H}$ satisfies the basic assumptions given in Section II; see also [32, 24, 25].

V Stability Analysis

In this section, the main results are presented. Sufficient conditions are derived to guarantee ISS from $e_{\operatorname{f}}$ to $(x,e,\mu)$. Before we state the main results, some assumptions are required. For the $(e,\mu)$-subsystem in the flow equation (18) and the jump equation (19), the following two assumptions are imposed.

Assumption 4

There exist a function $W:\mathbb{R}^{n_{e}}\times\mathbb{R}^{l}\times\mathbb{R}^{n_{e}}\times\mathbb{R}^{l}\times\mathbb{N}\times\{0,1\}\rightarrow\mathbb{R}_{>0}$, which is locally Lipschitz in $(e,\mu,m_{1},m_{2})$ for all $(c,b)\in\mathbb{N}\times\{0,1\}$, functions $\alpha_{1W},\alpha_{2W},\alpha_{3W},\alpha_{4W}\in\mathcal{K}_{\infty}$ and a constant $\lambda\in[0,1)$ such that for all $(e,\mu,m_{1},m_{2},c,b)\in\mathbb{R}^{n_{e}}\times\mathbb{R}^{l}\times\mathbb{R}^{n_{e}}\times\mathbb{R}^{l}\times\mathbb{N}\times\{0,1\}$,

$$W(e,\mu,m_{1},m_{2},c,b)\geq\alpha_{1W}(|(e,\mu,m_{1},m_{2})|), \qquad (20)$$
$$W(e,\mu,m_{1},m_{2},c,b)\leq\alpha_{2W}(|(e,\mu,m_{1},m_{2})|), \qquad (21)$$
$$W(e,\mu,h_{e}(c,x,e,\mu)-e,h_{\mu}(c,x,e,\mu)-\mu,c+1,1)\leq\lambda W(e,\mu,m_{1},m_{2},c,0)+\alpha_{3W}(|e_{\operatorname{f}}|), \qquad (22)$$
$$W(e+m_{1},\mu+m_{2},-e-m_{1},-\mu-m_{2},c,0)\leq W(e,\mu,m_{1},m_{2},c,1)+\alpha_{4W}(|e_{\operatorname{f}}|). \qquad (23)$$
Assumption 5

For each $b\in\{0,1\}$, there exist a continuous function $H_{b}:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}_{>0}$, $\sigma_{bW}\in\mathcal{K}_{\infty}$ and $L_{b}\in[0,\infty)$ such that for all $(x,m_{1},m_{2},c,b)\in\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{e}}\times\mathbb{R}^{l}\times\mathbb{N}\times\{0,1\}$ and almost all $e\in\mathbb{R}^{n_{e}}$, $\mu\in\mathbb{R}^{l}$,

$$\mathcal{I}(b):=\left\langle\frac{\partial W(e,\mu,m_{1},m_{2},c,b)}{\partial e},g_{e}(x,e)\right\rangle+\left\langle\frac{\partial W(e,\mu,m_{1},m_{2},c,b)}{\partial\mu},g_{\mu}(x,e,\mu)\right\rangle\leq L_{b}W(e,\mu,m_{1},m_{2},c,b)+H_{b}(x)+\sigma_{bW}(|e_{\operatorname{f}}|). \qquad (24)$$

In Assumptions 4-5, the function $W$ is used to analyze the stability of the $(e,\mu)$-subsystem. Assumption 4 estimates the jumps of $W$ at the discrete-time instants, i.e., the transmission times and the arrival times. Assumption 5 estimates the derivative of $W$ on the continuous-time intervals. Since Assumptions 4-5 are applied to the $(e,\mu)$-subsystem, the bounds (22)-(24) involve the additional term $e_{\operatorname{f}}$, which is part of $e$. As a result, Assumptions 4-5 are different from the classic assumptions like Condition IV.1 in [9] and Assumption 1 in [10], where external disturbances are involved. Moreover, $\alpha_{3W}$ and $\alpha_{4W}$ in Assumption 4 are allowed to be the same: for instance, (22)-(23) hold with $\bar{\alpha}_{3W}(v):=\bar{\alpha}_{4W}(v):=\max\{\alpha_{3W}(v),\alpha_{4W}(v)\}$. On the other hand, conditions similar to Assumptions 4-5 have been considered in previous works [38, 18]. However, external disturbances are studied in [38], whereas $e_{\operatorname{f}}$ is part of the augmented error $e$ in Assumptions 4-5. Quantization and transmission delays are not considered in [18], but are studied in this paper. The construction of $W$ satisfying Assumptions 4-5 will be presented in Section VI.

Assumption 6

There exist a locally Lipschitz function $V:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}_{\geq 0}$, functions $\alpha_{1V},\alpha_{2V},\sigma_{bV}\in\mathcal{K}_{\infty}$, and constants $\rho_{b},\theta_{b},\gamma_{b}>0$ for $b\in\{0,1\}$, such that for all $x\in\mathbb{R}^{n_{x}}$,

$$\alpha_{1V}(|x|)\leq V(x)\leq\alpha_{2V}(|x|), \qquad (25)$$

and for all $(e,\mu,m_{1},m_{2},c,b)\in\mathbb{R}^{n_{e}}\times\mathbb{R}^{l}\times\mathbb{R}^{n_{e}}\times\mathbb{R}^{l}\times\mathbb{N}\times\{0,1\}$ and almost all $x\in\mathbb{R}^{n_{x}}$,

$$\langle\nabla V(x),f(x,e)\rangle\leq-\rho_{b}V(x)-H^{2}_{b}(x)+(\gamma^{2}_{b}-\theta_{b})W^{2}(e,\mu,m_{1},m_{2},c,b)+\sigma_{bV}(|e_{\operatorname{f}}|), \qquad (26)$$

where $H_{b}:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}_{>0}$ is defined in Assumption 5.

Assumption 6 is a robust stability property of the closed-loop system (2)-(5), and the function $V$ is used to analyze the stability of the $x$-subsystem; see [18]. Observe from (26) that $e_{\operatorname{f}}$ is treated as an external input of the $x$-subsystem, which relaxes the standard assumptions in [37, 9, 8]. In addition, Assumption 6 implies that the designed controller renders the $\eta$-subsystem (i.e., the tracking error dynamics) ISS-like from $(W,e_{\operatorname{f}})$ to $x$. Moreover, Assumption 6 also indicates that the $x$-subsystem is $\mathcal{L}_{2}$-stable from $(W,e_{\operatorname{f}})$ to $H_{b}$; see [11, Theorem 4] and [9, Remark V.2]. In Section VII, a numerical example demonstrates how to verify Assumptions 4-6.

Although the transmission intervals and the transmission delays are bounded in Assumption 1, the tradeoff between $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ needs to be studied further. Consider the following differential equations

$$\dot{\bar{\phi}}_{b}=-2L_{b}\bar{\phi}_{b}-\gamma_{b}[(1+\varrho_{b})\bar{\phi}^{2}_{b}+1], \qquad (27)$$

where $b\in\{0,1\}$, and $L_{b}\geq 0$, $\gamma_{b}>0$ are given in Assumptions 5-6. In addition, $\bar{\phi}_{b}(0):=\bar{\phi}_{b0}\in(1,\lambda^{-1})$ and $\varrho_{b}\in(0,\lambda^{-2}\bar{\phi}^{-2}_{b0}-1)$, where $\lambda$ is given in Assumption 4. Based on Claim 1 in [10] and Claim 1 in [18], the solutions to (27) are strictly decreasing as long as $\bar{\phi}_{b}(\tau)\geq 0$, $b\in\{0,1\}$.

Now we are in a position to state the main result of this section. Its proof is based on the comparison principle for hybrid systems [24, Lemma C.1] and the proof of Theorem IV.2 in [9]. However, some essential modifications are required to deal with the effects of the network-induced errors.

Theorem 1

Consider the system $\mathcal{S}_{2}$ and let Assumptions 4-6 hold. If the MATI $h_{\operatorname{mati}}$ and the MAD $h_{\operatorname{mad}}$ satisfy

$$\gamma_{0}\bar{\phi}_{0}(\tau)\geq(1+\varrho_{1})\lambda^{2}\gamma_{1}\bar{\phi}_{1}(0),\quad\tau\in[0,h_{\operatorname{mati}}], \qquad (28a)$$
$$\gamma_{1}\bar{\phi}_{1}(\tau)\geq(1+\varrho_{0})\gamma_{0}\bar{\phi}_{0}(\tau),\quad\tau\in[0,h_{\operatorname{mad}}], \qquad (28b)$$

with $\bar{\phi}_{0}(0)>0$, $\bar{\phi}_{1}(0)>0$ and $\bar{\phi}_{0}(h_{\operatorname{mati}})>0$, then the system $\mathcal{S}_{2}$ is ISS from $e_{\operatorname{f}}$ to $(\eta,e_{1},\mu)$. That is, there exist $\beta\in\mathcal{KLL}$ and $\varphi_{1}\in\mathcal{K}_{\infty}$ such that for every solution of $\mathcal{S}_{2}$ and all $(t,j)\in\mathbb{R}_{\geq 0}\times\mathbb{N}$ in the admissible domain of the solution,

$$|(x(t,j),e(t,j),\mu(t,j))|\leq\beta(|(x(t_{0},j_{0}),e(t_{0},j_{0}),\mu(t_{0},j_{0}))|,t-t_{0},j-j_{0})+\varphi_{1}(\|e_{\operatorname{f}}\|_{(t,j)}). \qquad (29)$$
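Before turning to the proof, the following Python sketch indicates how the tradeoff of Theorem 1 can be evaluated numerically: it integrates (27) by forward Euler and searches for the largest $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ for which (28a)-(28b) hold. The values of $\lambda$, $L_{b}$, $\gamma_{b}$, $\bar{\phi}_{b}(0)$ and $\varrho_{b}$ below are hypothetical and are not those of the example in Section VII.

```python
# Numerical sketch: integrate (27) for b in {0, 1} and estimate the largest
# h_mati and h_mad satisfying (28a)-(28b).  All parameter values are hypothetical.
import numpy as np

lam = 0.5                                    # lambda from Assumption 4
L = {0: 1.0, 1: 1.0}                         # L_b from Assumption 5
gam = {0: 2.0, 1: 2.5}                       # gamma_b from Assumption 6
phi0 = {b: 0.9 / lam for b in (0, 1)}        # phi_b(0) in (1, 1/lambda)
rho = {b: 0.5 * (lam**-2 * phi0[b]**-2 - 1) for b in (0, 1)}

dt, T = 1e-5, 1.0
n = int(T / dt)
phi = {b: np.empty(n) for b in (0, 1)}
for b in (0, 1):
    phi[b][0] = phi0[b]
    for k in range(n - 1):                   # Euler step of (27)
        p = phi[b][k]
        phi[b][k + 1] = p + dt * (-2 * L[b] * p - gam[b] * ((1 + rho[b]) * p**2 + 1))

# (28a): gamma_0*phi_0(tau) >= (1+rho_1)*lambda^2*gamma_1*phi_1(0) on [0, h_mati]
bound_a = (1 + rho[1]) * lam**2 * gam[1] * phi0[1]
ok_a = (gam[0] * phi[0] >= bound_a) & (phi[0] > 0)
h_mati = dt * (np.argmin(ok_a) if not ok_a.all() else n - 1)

# (28b): gamma_1*phi_1(tau) >= (1+rho_0)*gamma_0*phi_0(tau) on [0, h_mad]
ok_b = gam[1] * phi[1] >= (1 + rho[0]) * gam[0] * phi[0]
h_mad = min(dt * (np.argmin(ok_b) if not ok_b.all() else n - 1), h_mati)

print("MATI estimate:", h_mati, " MAD estimate:", h_mad)
```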
Proof:

Consider the hybrid system $\mathcal{H}$ in (17). For $\mathfrak{X}\in C\cup D\cup G(D)$, define the Lyapunov function

$$U(\mathfrak{X}):=V(x)+\gamma_{b}\bar{\phi}_{b}(\tau)W^{2}(e,\mu,m_{1},m_{2},c,b). \qquad (30)$$

Using (20), (21) and (25), it follows that

$$U(\mathfrak{X})\geq\alpha_{1V}(|x|)+\gamma_{0}\bar{\phi}_{0}(\tau)\alpha^{2}_{1W}(|(e,\mu,m_{1},m_{2})|),$$
$$U(\mathfrak{X})\leq\alpha_{2V}(|x|)+\gamma_{1}\bar{\phi}_{1}(\tau)\alpha^{2}_{2W}(|(e,\mu,m_{1},m_{2})|).$$

According to (28), there exist $\alpha_{1},\alpha_{2}\in\mathcal{K}_{\infty}$ such that

$$\alpha_{1}(|\xi|)\leq U(\xi)\leq\alpha_{2}(|\xi|), \qquad (31)$$

where

$$\alpha_{1}(v):=\min\{\alpha_{1V}(v/2),(1+\varrho_{1})\lambda^{2}\gamma_{1}\bar{\phi}_{1}(0)\alpha^{2}_{1W}(v/2)\},$$
$$\alpha_{2}(v):=\max\{\alpha_{2V}(v),\gamma_{1}\bar{\phi}_{1}(0)\alpha^{2}_{2W}(v)\}.$$

Next, consider the evolution of UU in the continuous-time intervals and at the discrete-time instants, respectively. For the flow equation FF in (18), we have that111U(𝔛),F(𝔛)\langle\nabla U(\mathfrak{X}),F(\mathfrak{X})\rangle is used here with a slight abuse of terminology since UU is not differential almost everywhere. However, this is justified by the fact that c˙=0\dot{c}=0 and b˙=0\dot{b}=0 in (14).

\langle\nabla U(\mathfrak{X}),F(\mathfrak{X})\rangle
=\langle\nabla V(x),f(x,e)\rangle+\gamma_{b}\dot{\bar{\phi}}_{b}(\tau)W^{2}(e,\mu,m_{1},m_{2},c,b)
\quad+2\gamma_{b}\bar{\phi}_{b}(\tau)W(e,\mu,m_{1},m_{2},c,b)\left(\left\langle\frac{\partial W(e,\mu,m_{1},m_{2},c,b)}{\partial e},g_{e}(x,e)\right\rangle+\left\langle\frac{\partial W(e,\mu,m_{1},m_{2},c,b)}{\partial\mu},g_{\mu}(x,e,\mu)\right\rangle\right)
\leq-\rho_{b}V(x)-\theta_{b}W^{2}(e,\mu,m_{1},m_{2},c,b)-H^{2}_{b}(x)+\gamma^{2}_{b}W^{2}(e,\mu,m_{1},m_{2},c,b)+\sigma_{bV}(|e_{\operatorname{f}}|)
\quad+2\gamma_{b}\bar{\phi}_{b}(\tau)W(e,\mu,m_{1},m_{2},c,b)[L_{b}W(e,\mu,m_{1},m_{2},c,b)+H_{b}(x)+\sigma_{bW}(|e_{\operatorname{f}}|)]
\quad+\gamma_{b}W^{2}(e,\mu,m_{1},m_{2},c,b)[-2L_{b}\bar{\phi}_{b}(\tau)-\gamma_{b}((1+\varrho_{b})\bar{\phi}^{2}_{b}(\tau)+1)]
=-\rho_{b}V(x)-\theta_{b}W^{2}(e,\mu,m_{1},m_{2},c,b)-H^{2}_{b}(x)+\sigma_{bV}(|e_{\operatorname{f}}|)
\quad+2\gamma_{b}\bar{\phi}_{b}(\tau)W(e,\mu,m_{1},m_{2},c,b)[H_{b}(x)+\sigma_{bW}(|e_{\operatorname{f}}|)]-(1+\varrho_{b})\gamma^{2}_{b}\bar{\phi}^{2}_{b}(\tau)W^{2}(e,\mu,m_{1},m_{2},c,b)
\leq-\rho_{b}V(x)-\theta_{b}W^{2}(e,\mu,m_{1},m_{2},c,b)-[H_{b}(x)-\gamma_{b}\bar{\phi}_{b}(\tau)W(e,\mu,m_{1},m_{2},c,b)]^{2}+\bar{\sigma}_{b}(|e_{\operatorname{f}}|)
\leq-\rho_{b}V(x)-\theta_{b}W^{2}(e,\mu,m_{1},m_{2},c,b)+\bar{\sigma}_{b}(|e_{\operatorname{f}}|), \tag{32}

where the first "=" holds due to the definition of the flow equation; the first "$\leq$" holds because of the functions in (27) and Assumptions 5-6; the second "$\leq$" holds due to the fact that $2xy\leq\ell x^{2}+y^{2}/\ell$ for all $x,y\geq 0$ and $\ell>0$. In addition, $\bar{\sigma}_{b}(v):=\sigma^{2}_{bW}(v)/\varrho_{b}+\sigma_{bV}(v)$ with $b\in\{0,1\}$.

For the jump equation $G$ in (19), there are two cases based on the value of the variable $b$. For the case $b=0$,

U(G_{1}(\mathfrak{X}))
=V(x)+\gamma_{1}\bar{\phi}_{1}(0)W^{2}(e,\mu,h_{e}(c,x,e,\mu)-e,h_{\mu}(c,x,e,\mu)-\mu,c+1,1)
\leq V(x)+\gamma_{1}\bar{\phi}_{1}(0)[\lambda W(e,\mu,m_{1},m_{2},c,0)+\alpha_{3W}(|e_{\operatorname{f}}|)]^{2}
\leq V(x)+\gamma_{1}\bar{\phi}_{1}(0)\Big[(1+\varrho_{1})\lambda^{2}W^{2}(e,\mu,m_{1},m_{2},c,0)+\Big(1+\frac{1}{\varrho_{1}}\Big)\alpha^{2}_{3W}(|e_{\operatorname{f}}|)\Big]
\leq V(x)+\gamma_{0}\bar{\phi}_{0}(\tau)W^{2}(e,\mu,m_{1},m_{2},c,0)+\alpha_{31}(|e_{\operatorname{f}}|)
=U(\mathfrak{X})+\alpha_{31}(|e_{\operatorname{f}}|), \tag{33}

where the first "=" holds due to the definition of the jump equation; the first "$\leq$" holds due to Assumption 4; the second "$\leq$" holds due to the fact that $2xy\leq\ell x^{2}+y^{2}/\ell$ for all $x,y\geq 0$ and $\ell>0$; the third "$\leq$" holds due to the inequality (28a). In addition, $\alpha_{31}(v):=(\varrho_{1}\lambda^{2})^{-1}\gamma_{0}\bar{\phi}_{0}(\tau)\alpha^{2}_{3W}(v)$. Similarly, for the case $b=1$,

U(G_{2}(\mathfrak{X}))
=V(x)+\gamma_{0}\bar{\phi}_{0}(\tau)W^{2}(e+m_{1},\mu+m_{2},-e-m_{1},-\mu-m_{2},c,0)
\leq V(x)+\gamma_{0}\bar{\phi}_{0}(\tau)[W(e,\mu,m_{1},m_{2},c,1)+\alpha_{4W}(|e_{\operatorname{f}}|)]^{2}
\leq V(x)+\gamma_{0}\bar{\phi}_{0}(\tau)\Big[(1+\varrho_{0})W^{2}(e,\mu,m_{1},m_{2},c,1)+\Big(1+\frac{1}{\varrho_{0}}\Big)\alpha^{2}_{4W}(|e_{\operatorname{f}}|)\Big]
\leq V(x)+(1+\varrho_{0})\gamma_{0}\bar{\phi}_{0}(\tau)W^{2}(e,\mu,m_{1},m_{2},c,1)+\alpha_{32}(|e_{\operatorname{f}}|)
=U(\mathfrak{X})+\alpha_{32}(|e_{\operatorname{f}}|), \tag{34}

where $\alpha_{32}(v):=\varrho_{0}^{-1}\gamma_{0}\bar{\phi}_{0}(\tau)\alpha^{2}_{4W}(v)$.

Combining (32)-(34), we obtain that

\langle\nabla U(\mathfrak{X}),F(\mathfrak{X})\rangle \leq-\rho_{b}V(x)-\theta_{b}W^{2}(e,\mu,m_{1},m_{2},c,b)+\bar{\sigma}_{b}(|e_{\operatorname{f}}|),
U(G(\mathfrak{X})) \leq U(\mathfrak{X})+\alpha_{3}(|e_{\operatorname{f}}|).

Define $\bar{\varepsilon}:=\min\{\rho_{b},\theta_{b}\}$ and $\tilde{\varepsilon}\in(0,\bar{\varepsilon}\min\{1,\gamma^{-1}_{b}\bar{\phi}_{b}(0)\})$. We obtain by calculation that

\langle\nabla U(\mathfrak{X}),F(\mathfrak{X})\rangle \leq-\tilde{\varepsilon}U(\mathfrak{X})+\sigma_{1}(|e_{\operatorname{f}}|), \tag{35}
U(\mathfrak{X}(t_{i},j+1)) \leq U(\mathfrak{X}(t_{i},j))+\alpha_{3}(|e_{\operatorname{f}}|), \tag{36}

where $\sigma_{1}(v):=\max\{\bar{\sigma}_{0}(v),\bar{\sigma}_{1}(v)\}$ and $\alpha_{3}(v):=\max\{\alpha_{31}(v),\alpha_{32}(v)\}$.

Integrating and iterating (35)-(36) from $(t_{0},j_{0})$ to $(t,j)$ in the hybrid time domain, one has

U(\mathfrak{X}(t,j)) \leq e^{-\tilde{\varepsilon}(t-t_{0})}U(\mathfrak{X}(t_{0},j_{0}))+\frac{1}{1-e^{-\varepsilon\tilde{\varepsilon}}}\big[\tilde{\varepsilon}^{-1}\sigma_{1}(\|e_{\operatorname{f}}\|_{(t,j)})+\alpha_{3}(\|e_{\operatorname{f}}\|_{(t,j)})\big], \tag{37}

where $\varepsilon>0$ is given in Assumption 1. From (31) and (37), we have

|(\eta(t,j),e_{1}(t,j),\mu(t,j))|
\leq\alpha^{-1}_{1}(2e^{-\tilde{\varepsilon}(t-t_{0})}\alpha_{2}(|\mathfrak{X}(t_{0},j_{0})|))
\quad+\alpha^{-1}_{1}\left(\frac{4}{1-e^{-\varepsilon\tilde{\varepsilon}}}[\tilde{\varepsilon}^{-1}\sigma_{1}(\|e_{\operatorname{f}}\|_{(t,j)})+\alpha_{3}(\|e_{\operatorname{f}}\|_{(t,j)})]\right).

Thus, the system $\mathcal{S}_{2}$ is ISS from $(e_{\operatorname{f}},w)$ to $(\eta,e_{1},\mu)$ with

\beta(v,t,j) :=\alpha^{-1}_{1}(2e^{-\tilde{\varepsilon}(0.5t+0.5\varepsilon j)}\alpha_{2}(v)),
\varphi_{1}(v) :=\alpha^{-1}_{1}\left(\frac{4(\tilde{\varepsilon}^{-1}\sigma_{1}(v)+\alpha_{3}(v))}{1-e^{-\varepsilon\tilde{\varepsilon}}}\right),

where the definition of $\beta$ comes from the fact that $t\geq\varepsilon j$ (see also [10, Section V-A]), and $\varepsilon>0$ is given in Assumption 1. As a result, the proof is completed. ∎

Remark 8

In Theorem 1, conditions (28a)-(28b) on the MATI $h_{\operatorname{mati}}$ and the MAD $h_{\operatorname{mad}}$ are different from those in [10, 18, 9]. For the delay-free and quantization-free case, the formula of the MATI was given explicitly in [18, Assumption 4]. In fact, the MATI in [18] is the same as the one in [10], and is the solution to the equation $\dot{\bar{\phi}}=-2L\bar{\phi}-\gamma(\bar{\phi}^{2}+1)$, where $L,\gamma>0$. This equation has a similar form to (27). As a result, the delay-free and quantization-free case in [18] is recovered as a particular case of this paper. $\square$
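To make the role of this differential equation concrete, the following minimal Python sketch integrates $\dot{\bar{\phi}}=-2L\bar{\phi}-\gamma(\bar{\phi}^{2}+1)$ numerically; the boundary conditions (a decay of $\bar{\phi}$ from $\lambda^{-1}$ to $\lambda$) follow the delay-free construction in [10], and the numerical values of $L$, $\gamma$ and $\lambda$ below are placeholders, not parameters of this paper.

```python
# Minimal sketch: estimate the delay-free MATI by integrating
# phi' = -2*L*phi - gamma*(phi**2 + 1) with a forward Euler scheme.
# Boundary conditions (phi decays from 1/lam to lam) follow [10];
# L, gamma and lam below are placeholder values.

def mati_estimate(L, gamma, lam, dt=1e-5, t_max=10.0):
    """Time needed for phi to decay from 1/lam to lam under the ODE above."""
    phi, t = 1.0 / lam, 0.0
    while phi > lam and t < t_max:
        phi += dt * (-2.0 * L * phi - gamma * (phi ** 2 + 1.0))
        t += dt
    return t

if __name__ == "__main__":
    print(mati_estimate(L=10.0, gamma=8.0, lam=0.8))  # placeholder parameters
```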

Note that if $W$ and $V$ are lower bounded by functions of $(e,\mu)$ and $x$, respectively, then the system $\mathcal{S}_{2}$ is ISS from $e_{\operatorname{f}}$ to $(x,e,\mu)$ along the same lines as in the proof of Theorem 1. In addition, the following corollary is a direct consequence of Theorem 1, and hence its proof is omitted here.

Corollary 1

If all the assumptions in Theorem 1 are satisfied with $\alpha_{1V}(v)=a_{1V}v^{2}$, $\alpha_{2V}(v)=a_{2V}v^{2}$, $\alpha_{1W}(v)=a_{1W}v^{2}$ and $\alpha_{2W}(v)=a_{2W}v^{2}$, where $a_{1V},a_{2V},a_{1W},a_{2W}>0$, then the system $\mathcal{S}_{2}$ is EISS from $e_{\operatorname{f}}$ to $(\eta,e_{1},\mu)$.

In Theorem 1, the tracking error is shown to converge to a region around the origin. The radius of this region is determined by the ISS nonlinear gain $\varphi_{1}$, which is a function of the norm of $e_{\operatorname{f}}$. In the following, we further study the ISS gains obtained in Theorem 1 and develop the relation among the ISS nonlinear gain $\varphi_{1}(v)$ in Theorem 1, the MATI $h_{\operatorname{mati}}$ and the MAD $h_{\operatorname{mad}}$.

Proposition 1

If all the assumptions in Theorem 1 are satisfied, then $\varphi_{1}(v)$ in Theorem 1 can be written in the form $(1+\bar{\varphi}(h_{\operatorname{mati}},h_{\operatorname{mad}}))\hat{\varphi}(\varepsilon)\tilde{\varphi}(v)$, where $\bar{\varphi}:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$ is a positive definite function, $\hat{\varphi}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$ is a strictly decreasing function and $\tilde{\varphi}\in\mathcal{K}_{\infty}$.

Proof:

Following the same lines as in the proof of Theorem 1, $\varphi_{1}(v)$ has the form

\varphi_{1}(v) =\alpha^{-1}_{1}\left(\frac{4(\tilde{\varepsilon}^{-1}\sigma_{1}(v)+\alpha_{3}(v))}{1-e^{-\varepsilon\tilde{\varepsilon}}}\right).

Since $\alpha_{1}(v)=\min\{\alpha_{1V}(v/2),(1+\varrho_{1})\bar{\phi}^{-1}_{b0}\alpha^{2}_{1W}(v/2)\}$ in (31), we obtain that $\alpha_{1}(v)\geq\tilde{\alpha}_{1}(v):=\min\{\alpha_{1V}(v/2),(1+\varrho_{1})\lambda\alpha^{2}_{1W}(v/2)\}$. For $b\in\{0,1\}$, define the constant $\varrho^{-1}_{b}:=\psi_{b}(h_{\operatorname{mati}},h_{\operatorname{mad}})$ with certain positive definite functions $\psi_{b}:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$. It holds from the definitions of the functions $\sigma_{1}$ and $\alpha_{3}$ in (35)-(36) that

\varphi_{1}(v)
\leq\tilde{\alpha}^{-1}_{1}\left(\frac{4}{1-e^{-\varepsilon\tilde{\varepsilon}}}[\tilde{\varepsilon}^{-1}\sigma_{1}(v)+\alpha_{3}(v)]\right)
\leq\tilde{\alpha}^{-1}_{1}\left(\frac{4}{1-e^{-\varepsilon\tilde{\varepsilon}}}\left[\tilde{\varepsilon}^{-1}\max_{b\in\{0,1\}}\left\{\frac{\sigma^{2}_{bW}(v)}{\varrho_{b}}+\sigma_{bV}(v)\right\}+\max\left\{\frac{\gamma_{0}\bar{\phi}_{0}(\tau)}{\varrho_{1}\lambda^{2}}\alpha^{2}_{3W}(v),\frac{\gamma_{1}\bar{\phi}_{1}(\tau)}{\varrho_{0}}\alpha^{2}_{4W}(v)\right\}\right]\right)
\leq\tilde{\alpha}^{-1}_{1}\left(\frac{4}{1-e^{-\varepsilon\bar{\varepsilon}\min\{1,\lambda/\gamma_{b}\}}}\left[\frac{1}{\bar{\varepsilon}\min\{1,\gamma^{-1}_{b}\lambda\}}\max_{b\in\{0,1\}}\left\{\psi_{b}(h_{\operatorname{mati}},h_{\operatorname{mad}})\sigma^{2}_{bW}(v)+\sigma_{bV}(v)\right\}+\max_{b\in\{0,1\}}\left\{\gamma_{b}\bar{\phi}_{b}(0)\psi_{b}(h_{\operatorname{mati}},h_{\operatorname{mad}})\right\}(\lambda^{-2}\alpha^{2}_{3W}(v)+\alpha^{2}_{4W}(v))\right]\right)
\leq\hat{\alpha}_{1}\left(\frac{4}{1-e^{-\varepsilon\bar{\varepsilon}\min\{1,\lambda/\gamma_{b}\}}}\right)\left[\hat{\alpha}_{2}\left(\frac{2}{\bar{\varepsilon}\min\{1,\gamma^{-1}_{b}\lambda\}}\max_{b\in\{0,1\}}\{\psi_{b}(h_{\operatorname{mati}},h_{\operatorname{mad}})\}\sigma^{2}_{bW}(v)\right)+\hat{\alpha}_{2}\left(\frac{4\sigma_{bV}(v)}{\bar{\varepsilon}\min\{1,\gamma^{-1}_{b}\lambda\}}\right)+\hat{\alpha}_{2}\left(\max_{b\in\{0,1\}}\{4\gamma_{b}\bar{\phi}_{b}(0)\psi_{b}(h_{\operatorname{mati}},h_{\operatorname{mad}})\}(\lambda^{-2}\alpha^{2}_{3W}(v)+\alpha^{2}_{4W}(v))\right)\right], \tag{38}

where $\hat{\alpha}_{1},\hat{\alpha}_{2}\in\mathcal{K}_{\infty}$ and the fourth "$\leq$" holds because of the following two facts [40]: for all $\alpha\in\mathcal{K}_{\infty}$ and $x,y\geq 0$, (i) $\alpha(x+y)\leq\alpha(2x)+\alpha(2y)$; (ii) there exist $\alpha_{1},\alpha_{2}\in\mathcal{K}_{\infty}$ such that $\alpha(xy)\leq\alpha_{1}(x)\alpha_{2}(y)$. Using these two facts several times, there exist a strictly decreasing function $\hat{\varphi}_{1}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$, a positive definite function $\bar{\varphi}_{1}:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}$ and $\tilde{\varphi}_{1}\in\mathcal{K}_{\infty}$ such that $\varphi_{1}(v)\leq\hat{\varphi}_{1}(\varepsilon)(1+\bar{\varphi}_{1}(h_{\operatorname{mati}},h_{\operatorname{mad}}))\tilde{\varphi}_{1}(v)$. Therefore, the proof is completed. ∎

Remark 9

From the proofs of Theorem 1 and Proposition 1, the norm $\|e_{\operatorname{f}}\|_{(t,j)}$ and the function $\varphi_{1}$ are related to the MATI $h_{\operatorname{mati}}$ and the MAD $h_{\operatorname{mad}}$. Since ZOH devices are applied, the bound on $\|e_{\operatorname{f}}\|_{(t,j)}$ can be established via a step-by-step sampling approach; see [16, Section 4.1]. Thus, the upper bound on $\tilde{\varphi}_{1}$ can also be obtained. In addition, the function $\hat{\varphi}_{1}$ is related to the minimum inter-transmission interval $\varepsilon$. From (38), $\hat{\varphi}_{1}$ increases to $\infty$ as $\varepsilon\rightarrow 0$; that is, the larger $\varepsilon$ is, the smaller $\hat{\varphi}_{1}$ is. Since $\varepsilon$ exists in all practical networks, its effect on $\hat{\varphi}_{1}$ cannot be avoided, but it can be limited by choosing an appropriate network, thereby leading to the minimum of $\hat{\varphi}_{1}$. $\square$

Observe from Theorem 1 and Proposition 1 that the conservatism of the obtained results comes from the ISS gain $\varphi_{1}$. We would like to obtain ISS gains that are as small as possible, which in turn implies better tracking performance. If the feedforward input $u_{\operatorname{f}}$ is transmitted to the reference system directly, then $e_{\operatorname{f}}\equiv 0$ and $\varphi_{1}\equiv 0$. In addition, the Lyapunov function satisfying Assumptions 4-5 is constructed in Section VI with $\alpha_{3W}=\alpha_{4W}\equiv 0$, which thus reduces the conservatism of the obtained results. On the other hand, if the reference trajectory is bounded, we would like to guarantee the boundedness of $(x,e,\mu)$. For this case, the following corollary is established; it is an extension of Proposition 1 in [18] and thus the proof is omitted here.

Corollary 2

If all the assumptions in Theorem 1 are satisfied, and there exist $\gamma_{\operatorname{c}}\in\mathcal{K}_{\infty}$, $\beta_{\operatorname{r}}:\mathbb{R}^{n_{\operatorname{r}}}\times\mathbb{R}^{n_{\operatorname{f}}}\rightarrow\mathbb{R}_{\geq 0}$ and $\beta_{\operatorname{c}}:\mathbb{R}^{n_{\operatorname{c}}}\rightarrow\mathbb{R}_{\geq 0}$ such that for all $(t,j)\in\mathbb{R}_{\geq 0}\times\mathbb{N}$ in the admissible domain of the solution,

|(x_{\operatorname{r}}(t,j),e_{\operatorname{f}}(t,j))|\leq\beta_{\operatorname{r}}(x_{\operatorname{r}}(t_{0},j_{0}),e_{\operatorname{f}}(t_{0},j_{0})), \tag{39}
|x_{\operatorname{c}}(t,j)|\leq\beta_{\operatorname{c}}(x_{\operatorname{c}}(t_{0},j_{0}))+\gamma_{\operatorname{c}}(\|(\eta,x_{\operatorname{r}},e_{1})\|_{(t,j)}), \tag{40}

then there exists a function $\bar{\beta}:\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{e}}\times\mathbb{R}^{l}\rightarrow\mathbb{R}_{\geq 0}$ such that

|(x(t,j),e(t,j),\mu(t,j))|\leq\bar{\beta}((x(t_{0},j_{0}),e(t_{0},j_{0}),\mu(t_{0},j_{0}))).

In Corollary 2, the conditions (39)-(40) hold due to the boundedness of the reference trajectory. In (39), the boundedness of the reference state follows directly from the bounded reference trajectory; if the feedforward input $u_{\operatorname{f}}$ is bounded, then $e_{\operatorname{f}}$ is also bounded. The boundedness of $x_{\operatorname{c}}$ comes from Assumption 6, which implies that the designed controller leads to an ISS-like property from $(W,e_{\operatorname{f}})$ to $\eta$; see also [11, Theorem 4] and [9, Remark V.2]. Interested readers are referred to [18] for more details.

VI On the Existence of Lyapunov Functions

In this section, the existence of the Lyapunov function $W$ satisfying Assumptions 4-5 is discussed. Since the network-induced errors lead to different assumptions on the Lyapunov functions, it is necessary to verify the existence of such functions to ensure that the applied assumptions can be satisfied. Therefore, based on the construction of Lyapunov functions in [8, 37, 9], Assumptions 4-5 are verified under some appropriate conditions. In addition, explicit Lyapunov functions for different time-scheduling protocols and quantizers are presented.

VI-A Time-Scheduling Protocols

In Section III, the update of $\bm{\vartheta}=(e_{\operatorname{d}},e_{\operatorname{c}},e_{\operatorname{f}})$ is given by

\bm{\vartheta}(r^{+}_{i})=\bm{\vartheta}(r_{i})-\bm{\vartheta}(t_{s_{i}})+H_{\bm{\vartheta}}(i,x(t_{s_{i}}),\bm{\vartheta}(t_{s_{i}}),\mu(t_{s_{i}})), \tag{41}

where $H_{\bm{\vartheta}}:=(h_{\operatorname{d}},h_{\operatorname{c}},h_{\operatorname{f}})$ is the update function. Similar to the analysis and the terminology in [9, 37], the function $H_{\bm{\vartheta}}(i,x(t_{s_{i}}),\bm{\vartheta}(t_{s_{i}}),\mu(t_{s_{i}}))$ is referred to as the protocol. Based on the $l$ nodes of the network, $\bm{\vartheta}$ is partitioned into $\bm{\vartheta}=(\bm{\vartheta}_{1},\ldots,\bm{\vartheta}_{l})$. If the $j$-th node is granted access to the network according to a certain time-scheduling protocol, then the corresponding component $\bm{\vartheta}_{j}$ is updated and the other components are kept. In the literature, there are several time-scheduling protocols that can be modeled as $H_{\bm{\vartheta}}$; see [9, 37, 8]. In the following, two classes of commonly-used protocols are presented.

The first protocol is the Round-Robin (RR) protocol [11], which is a periodic protocol [7]. The period of the RR protocol is $l$, and each node has one and only one chance to access the network in a period. The function $H_{\bm{\vartheta}}$ is given by

H_{\bm{\vartheta}}(i,x,\bm{\vartheta},\mu)=(I-\Psi(s_{i}))\bm{\vartheta}(t_{s_{i}})+\Psi(s_{i})\bm{\epsilon}(t_{s_{i}}), \tag{42}

where $\Psi(s_{i})=\operatorname{diag}\{\Psi_{1}(s_{i}),\ldots,\Psi_{l}(s_{i})\}$ and $\Psi_{j}(s_{i})\in\mathbb{R}^{n_{j}\times n_{j}}$ with $\sum^{l}_{j=1}n_{j}=n_{\bm{\vartheta}}$. If $s_{i}=j$, then $\Psi_{j}(s_{i})=I_{n_{j}}$; otherwise, $\Psi_{j}(s_{i})=0$.
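As a concrete illustration of (42), the following minimal Python sketch performs one RR update of the partitioned error vector; the node ordering $s_{i}=i\bmod l$ and the node dimensions are assumptions made here for illustration only.

```python
import numpy as np

# Minimal sketch of the RR update map H_theta in (42): at transmission i,
# node s_i (assumed here to be i mod l) refreshes its error block with the
# quantization error epsilon, while the blocks of the other nodes are kept.
# Node dimensions `dims` and the vectors below are placeholders.

def rr_update(i, theta, eps, dims):
    """One Round-Robin protocol update of the partitioned error vector."""
    l = len(dims)
    s_i = i % l                              # node granted access at transmission i
    out = theta.copy()
    start = sum(dims[:s_i])
    out[start:start + dims[s_i]] = eps[start:start + dims[s_i]]
    return out

theta = np.array([1.0, -2.0, 0.5])           # placeholder error vector, 3 scalar nodes
eps = np.array([0.1, 0.05, -0.02])           # placeholder quantization errors
print(rr_update(i=4, theta=theta, eps=eps, dims=[1, 1, 1]))  # node 1 is updated
```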

Another protocol is the Try-Once-Discard (TOD) protocol, which is a quadratic protocol [7, 11]. For the TOD protocol, the node with the minimum index among those whose local network-induced error has the largest norm is allowed to access the network. The function $H_{\bm{\vartheta}}$ is given by

H_{\bm{\vartheta}}(i,x,\bm{\vartheta},\mu)=(I-\Psi(\bm{\vartheta}))\bm{\vartheta}(t_{s_{i}})+\Psi(\bm{\vartheta})\bm{\epsilon}(t_{s_{i}}), \tag{43}

where $\Psi(\bm{\vartheta})=\operatorname{diag}\{\Psi_{1}(\bm{\vartheta}),\ldots,\Psi_{l}(\bm{\vartheta})\}$, and $\Psi_{j}(\bm{\vartheta})=I_{n_{j}}$ if $\min\{\arg\max_{1\leq k\leq l}|\bm{\vartheta}_{k}|\}=j$; otherwise, $\Psi_{j}(\bm{\vartheta})=0$.
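Similarly, a minimal Python sketch of one TOD update (43): the block whose local error has the largest norm (the smallest index in case of a tie) is refreshed with the quantization error; the vectors and node dimensions below are placeholders.

```python
import numpy as np

# Minimal sketch of the TOD update map H_theta in (43): the node with the
# smallest index among those whose local error has the largest norm refreshes
# its block with the quantization error; the other blocks are kept.

def tod_update(theta, eps, dims):
    """One Try-Once-Discard protocol update of the partitioned error vector."""
    starts = np.cumsum([0] + dims[:-1])
    norms = [np.linalg.norm(theta[s:s + d]) for s, d in zip(starts, dims)]
    j = int(np.argmax(norms))        # argmax returns the smallest maximizing index
    out = theta.copy()
    out[starts[j]:starts[j] + dims[j]] = eps[starts[j]:starts[j] + dims[j]]
    return out

theta = np.array([1.0, -2.0, 0.5])   # placeholder error vector, 3 scalar nodes
eps = np.array([0.1, 0.05, -0.02])   # placeholder quantization errors
print(tod_update(theta, eps, dims=[1, 1, 1]))  # node with the largest |error| is updated
```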

Before verifying Assumptions 4-5, the Lyapunov uniformly globally exponentially stable (UGES) protocol is presented as follows. The Lyapunov UGES protocol was proposed first in [11, Definition 7] and extended in [37, 9, 8].

Definition 2

The protocol given by $(H_{\bm{\vartheta}},H_{\mu})$ is Lyapunov UGES if there exist a function $\bar{W}:\mathbb{R}^{n_{\bm{\vartheta}}}\times\mathbb{R}^{l}\times\mathbb{N}\rightarrow\mathbb{R}_{\geq 0}$ with $\bar{W}(\cdot,\cdot,c)$ locally Lipschitz for all $c\in\mathbb{N}$, constants $\alpha_{1\bar{W}},\alpha_{2\bar{W}}>0$ and $\lambda_{1}\in(0,1)$ such that for all $\bm{\vartheta}\in\mathbb{R}^{n_{\bm{\vartheta}}}$, $\mu\in\mathbb{R}^{l}$, $c\in\mathbb{N}$ and $x\in\mathbb{R}^{n_{x}}$,

\alpha_{1\bar{W}}|(\bm{\vartheta},\mu)|\leq\bar{W}(\bm{\vartheta},\mu,c)\leq\alpha_{2\bar{W}}|(\bm{\vartheta},\mu)|, \tag{44}
\bar{W}(H_{\bm{\vartheta}}(c,x,\bm{\vartheta},\mu),H_{\mu}(c,x,\bm{\vartheta},\mu),c+1)\leq\lambda_{1}\bar{W}(\bm{\vartheta},\mu,c). \tag{45}

Definition 2 is an extension of Condition 1 in [37] and the results in [8, Subsections IV-C and IV-D]. Condition 1 in [37] guarantees the Lyapunov UGES property for the combined protocol of zoom quantizer and network scheduling. In addition, the Lyapunov UGES property is studied for NQCSs in [8], where only zoom quantizer and box quantizer are considered. Therefore, Definition 2 provides the Lyapunov UGES property for the combined protocol of the general quantizer and network scheduling.

VI-B Construction of Lyapunov Functions

Based on the aforementioned Lyapunov UGES protocol, the Lyapunov function satisfying Assumptions 4-5 is constructed. To begin with, the following assumption is required.

Assumption 7

There exist constants $\lambda_{2}\geq 1$ and $M_{1}>0$ such that for all $\bm{\vartheta}\in\mathbb{R}^{n_{\bm{\vartheta}}}$, $\mu\in\mathbb{R}^{l}$ and $c\in\mathbb{N}$,

\bar{W}(\bm{\vartheta},\mu,c+1)\leq\lambda_{2}\bar{W}(\bm{\vartheta},\mu,c), \tag{46}

and for almost all $\bm{\vartheta}\in\mathbb{R}^{n_{\bm{\vartheta}}}$, $\mu\in\mathbb{R}^{l}$ and all $c\in\mathbb{N}$,

\max\left\{\left|\frac{\partial\bar{W}(\bm{\vartheta},\mu,c)}{\partial\bm{\vartheta}}\right|,\left|\frac{\partial\bar{W}(\bm{\vartheta},\mu,c)}{\partial\mu}\right|\right\}\leq M_{1}. \tag{47}

In Assumption 7, it follows from (46) that $\bar{W}$ is bounded at the transmission jump. In addition, (47) is equivalent to the requirement that $\bar{W}$ is globally Lipschitz in $(e,\mu)$ uniformly in $c$, which is valid for the protocols applied in the existing works and in this paper. Furthermore, Assumption 7 generalizes the assumptions for the quantization-free case in [9] and for the case of zoom quantization in [37]. Besides Assumption 7, assume further that there exist a function $\bar{m}:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}_{\geq 0}$ and constants $M_{e},M_{\operatorname{f}}\geq 0$ such that

|g_{e}(x,e)|+|g_{\mu}(x,e,\mu)|\leq\bar{m}(x)+M_{e}|(e,\mu)|+M_{\operatorname{f}}|e_{\operatorname{f}}|, \tag{48}

which is called the growth condition; see also [11, 9].

Based on Assumption 7 and (48), the main result of this section is presented as follows.

Theorem 2

Consider the system $\mathcal{S}_{2}$. If the following conditions hold:

  i) the protocol $(H_{\bm{\vartheta}},H_{\mu})$ is Lyapunov UGES with a continuous function $\bar{W}$, which is locally Lipschitz in $(\bm{\vartheta},\mu)$;

  ii) Assumption 7 and (48) hold for a function $\bar{m}:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}_{\geq 0}$ and constants $\lambda_{2}\geq 1$, $M_{1}>0$, $M_{e},M_{\operatorname{f}}\geq 0$;

  iii) $|H_{\bm{\vartheta},j}(i,x,\bm{\vartheta},\mu)|\leq|\bm{\vartheta}_{j}|$ holds for all $\bm{\vartheta}\in\mathbb{R}^{n_{\bm{\vartheta}}}$, $\mu\in\mathbb{R}^{l}$, $c\in\mathbb{N}$ and each $j\in\{1,\ldots,n_{\bm{\vartheta}}\}$;

then the function $W$ defined by

W(e,\mu,m_{1},m_{2},c,b) :=(1-b)\max\{\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c),\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu+m_{2},c)\}
\quad+b\max\left\{\frac{\lambda_{1}}{\lambda_{2}}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c),\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu+m_{2},c)\right\}

satisfies Assumptions 4-5. In addition, $\lambda=\lambda_{1}$, $\alpha_{1W}(v)=\alpha_{1\bar{W}}v$, $\alpha_{2W}(v)=\alpha_{2\bar{W}}v$, $\alpha_{3W}(v)=(1+\lambda)M_{1}v$, $\alpha_{4W}(v)=0$, $L_{0}=\alpha^{-1}_{1\bar{W}}M_{1}M_{e}$, $L_{1}=(\lambda_{1}\alpha_{1\bar{W}})^{-1}\lambda_{2}M_{1}M_{e}$, $H_{0}(v)=H_{1}(v)=M_{1}\bar{m}(v)$, and $\sigma_{1W}(v)=\sigma_{2W}(v)=(M_{e}M_{1}/\alpha_{1\bar{W}}+M_{\operatorname{f}})M_{1}v$.

Proof:

According to the local Lipschitz property of $\bar{W}$ and Lebourg's mean value theorem for locally Lipschitz functions [41, Theorem 2.3.7], it follows that

|(e,\mu)| \leq|(\bm{\vartheta},\mu)|
\overset{(44)}{\leq}[\bar{W}(\bm{\vartheta},\mu,c)-\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)]/\alpha_{1\bar{W}}+\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)/\alpha_{1\bar{W}}
\overset{(47)}{\leq}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)/\alpha_{1\bar{W}}+M_{1}|e_{\operatorname{f}}|/\alpha_{1\bar{W}}. \tag{49}

Define $\mathcal{M}_{1}:=(M_{e}M_{1}/\alpha_{1\bar{W}}+M_{\operatorname{f}})M_{1}$. Based on $W$ defined in Theorem 2, there are the following four cases for Assumptions 4-5.

Case 1: $b=1$ and $\lambda_{1}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)\geq\lambda_{2}\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu,c)$. In this case, at the discrete-time instants, we have

W(e,\mu,H_{e}(c,x,e,\mu)-e,H_{\mu}(c,x,e,\mu)-\mu,c+1,0)
=\frac{\lambda_{1}}{\lambda_{2}}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c+1)
=\frac{\lambda_{1}}{\lambda_{2}}\left[\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c+1)-\bar{W}(\bm{\vartheta},\mu,c+1)+\bar{W}(\bm{\vartheta},\mu,c+1)\right]-\lambda_{1}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)+\lambda_{1}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)
\overset{(46),(47)}{\leq}\lambda_{1}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)+\lambda_{1}(1+\lambda^{-1}_{2})|e_{\operatorname{f}}|.

For the continuous-time intervals, we have that

\mathcal{I}(1) =\frac{\lambda_{1}}{\lambda_{2}}\left\langle\frac{\partial\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)}{\partial e_{1}},g_{1}(x,e)\right\rangle+\frac{\lambda_{1}}{\lambda_{2}}\left\langle\frac{\partial\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)}{\partial\mu},g_{\mu}(x,e,\mu)\right\rangle
\overset{(47),(48)}{\leq}\frac{\lambda_{1}}{\lambda_{2}}M_{1}[\bar{m}(x)+M_{e}|(e,\mu)|+M_{\operatorname{f}}|e_{\operatorname{f}}|]
\overset{(49)}{\leq}\frac{M_{1}M_{e}}{\alpha_{1\bar{W}}}W(e,\mu,m_{1},m_{2},c,1)+\frac{\lambda_{1}}{\lambda_{2}}M_{1}\bar{m}(x)+\frac{\lambda_{1}}{\lambda_{2}}\mathcal{M}_{1}|e_{\operatorname{f}}|.

Case 2: $b=1$ and $\lambda_{1}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)\leq\lambda_{2}\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu,c)$. In this case, we obtain that

W(e,\mu,H_{e}(c,x,e,\mu)-e,H_{\mu}(c,x,e,\mu)-\mu,c+1,0)
=\bar{W}(H_{\operatorname{d}}(c,x,e,\mu),H_{\operatorname{c}}(c,x,e,\mu),0,H_{\mu}(c,x,\bm{\vartheta},\mu),c+1)-\bar{W}(H_{\bm{\vartheta}}(c,x,\bm{\vartheta},\mu),H_{\mu}(c,x,\bm{\vartheta},\mu),c+1)+\bar{W}(H_{\bm{\vartheta}}(c,x,\bm{\vartheta},\mu),H_{\mu}(c,x,\bm{\vartheta},\mu),c+1)
\overset{(47)}{\leq}M_{1}|H_{\operatorname{f}}(c,x,\bm{\vartheta},\mu)|+\bar{W}(H_{\bm{\vartheta}}(c,x,\bm{\vartheta},\mu),H_{\mu}(c,x,\bm{\vartheta},\mu),c+1)-\lambda_{1}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)+\lambda_{1}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)
\overset{(45),(47)}{\leq}\lambda_{1}\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)+(1+\lambda_{1})M_{1}|e_{\operatorname{f}}|,

and for the continuous-time intervals, it follows that

\mathcal{I}(1) =\left\langle\frac{\partial\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu,c)}{\partial e_{1}},g_{1}(x,e)\right\rangle+\left\langle\frac{\partial\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu,c)}{\partial\mu},g_{\mu}(x,e,\mu)\right\rangle
\overset{(47),(48)}{\leq}M_{1}[\bar{m}(x)+M_{e}|(e,\mu)|+M_{\operatorname{f}}|e_{\operatorname{f}}|]
\overset{(49)}{\leq}\frac{\lambda_{2}M_{1}M_{e}}{\lambda_{1}\alpha_{1\bar{W}}}W(e,\mu,m_{1},m_{2},c,1)+M_{1}\bar{m}(x)+\mathcal{M}_{1}|e_{\operatorname{f}}|.

Case 3: $b=0$ and $\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)\geq\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu+m_{2},c)$. At the discrete-time instants, one has that $W(e+m_{1},\mu+m_{2},-e-m_{1},-\mu-m_{2},c,1)=\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu+m_{2},c)$. In the continuous-time intervals,

\mathcal{I}(0) =\left\langle\frac{\partial\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)}{\partial e_{1}},g_{1}(x,e)\right\rangle+\left\langle\frac{\partial\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)}{\partial\mu},g_{\mu}(x,e,\mu)\right\rangle
\overset{(47),(48)}{\leq}M_{1}[\bar{m}(x)+M_{e}|(e,\mu)|+M_{\operatorname{f}}|e_{\operatorname{f}}|]
\overset{(49)}{\leq}\frac{M_{1}M_{e}}{\alpha_{1\bar{W}}}W(e,\mu,m_{1},m_{2},c,1)+M_{1}\bar{m}(x)+\mathcal{M}_{1}|e_{\operatorname{f}}|.

Case 4: $b=0$ and $\bar{W}(e_{\operatorname{d}},e_{\operatorname{c}},0,\mu,c)\leq\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu+m_{2},c)$. We get that, at the discrete-time instants, $W(e+m_{1},\mu+m_{2},-e-m_{1},-\mu-m_{2},c,1)=\bar{W}(0,0,0,0,0,c)=0$. In the continuous-time intervals,

\mathcal{I}(0) =\left\langle\frac{\partial\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu,c)}{\partial e_{1}},g_{1}(x,e)\right\rangle+\left\langle\frac{\partial\bar{W}(e_{\operatorname{d}}+m^{\operatorname{d}}_{1},e_{\operatorname{c}}+m^{\operatorname{c}}_{1},0,\mu,c)}{\partial\mu},g_{\mu}(x,e,\mu)\right\rangle
\overset{(43),(46),(47)}{\leq}M_{1}[\bar{m}(x)+M_{e}|(e,\mu)|+M_{\operatorname{f}}|e_{\operatorname{f}}|]
\overset{(49)}{\leq}\frac{M_{1}M_{e}}{\alpha_{1\bar{W}}}W(e,\mu,m_{1},m_{2},c,1)+M_{1}\bar{m}(x)+\mathcal{M}_{1}|e_{\operatorname{f}}|.

Therefore, for $b\in\{0,1\}$, Assumptions 4-5 are verified with the parameters given in Theorem 2. In addition, since (44) holds and $W$ is given in Theorem 2, the bounds on $W$ as in (20)-(21) are easily obtained. This completes the proof. ∎

Remark 10

In Theorem 2, only the function $W$ can be constructed for the system $\mathcal{S}_{2}$, while the function $V$ cannot; this is different from the existing works [10, 9, 37], where both $V$ and $W$ were constructed. The reason is that the objective of this paper is the tracking performance and that $V$ involves the information of the tracking error, the reference system and the feedback controller. In the previous works [10, 9, 37], the objective is the stability analysis of NCSs and $V$ is only related to the state of the plant. Since stability analysis is a basic topic for NCSs and numerous preliminary results have been obtained, $V$ has been constructed in those works, whereas $V$ in this paper cannot be developed via techniques similar to those applied in [10, 9, Section V]. $\square$

In Theorem 2, the Lyapunov function $\bar{W}$ satisfying items i) and ii) exists; see [11, 37, 9, 10]. Item iii) guarantees that the local errors do not increase at each transmission time for all the relevant protocols. Such a constraint is valid for all protocols discussed in [8, 11]. We can also double the bound of the local errors due to the quantization error. In the following, two types of quantizers are introduced as special cases of the quantizer satisfying Assumption 2, and specific Lyapunov functions are constructed for different combinations of time-scheduling protocols and quantizers.

VI-C Zoom Quantizer

A zoom quantizer is defined as

q_{j}(\mu_{j},z_{j})=\mu_{j}q_{j}\left(\frac{z_{j}}{\mu_{j}}\right), \tag{50}

where $\mu_{j}>0$ and $q_{j}$ on the right-hand side of (50) is a uniform quantizer; see [31, 28] and Remark 3 in Section III. According to Assumption 2, we have that for the zoom quantizer, $\mathds{C}=\{z=(z_{1},\ldots,z_{l})\in\mathbb{R}^{n_{z}}:|z_{j}|\leq M_{j}\mu_{j}\}$, $\mathds{D}=\{\epsilon=(\epsilon_{1},\ldots,\epsilon_{l})\in\mathbb{R}^{n_{z}}:|\epsilon_{j}|\leq\Delta_{j}\mu_{j}\}$ and $\mathds{C}_{0}=\{0\}$. In addition, the quantization parameter is time-invariant in the arrival intervals and is updated at the arrival times according to $\mu_{j}(r^{+}_{i})=\Omega_{j}\mu_{j}(r_{i})$ with $\Omega_{j}\in(0,1)$.
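For illustration, the following minimal Python sketch implements a scalar version of (50) together with the zoom-in update $\mu_{j}(r^{+}_{i})=\Omega_{j}\mu_{j}(r_{i})$, assuming a uniform quantizer with step $2\Delta$ and ignoring the saturation level $M_{j}\mu_{j}$; all numbers below are placeholders.

```python
import numpy as np

# Minimal sketch of the zoom quantizer (50) for a scalar signal: a uniform
# quantizer with step 2*Delta acting on z/mu, rescaled by mu, and the
# zoom-in update mu <- Omega*mu applied at each arrival time.
# Delta, M, Omega and the measurements below are placeholder values.

def zoom_quantize(z, mu, Delta=0.8, M=10.0):
    """Uniform quantization of z/mu with step 2*Delta, scaled back by mu."""
    zn = np.clip(z / mu, -M, M)                  # stay inside the quantizer range
    return mu * (2.0 * Delta * np.round(zn / (2.0 * Delta)))

mu, Omega = 1.0, 0.6
for z in [0.9, -0.3, 0.7]:                        # placeholder measurements
    q = zoom_quantize(z, mu)
    print(f"z={z: .2f}, q={q: .2f}, |error|={abs(z - q):.3f} <= Delta*mu={0.8*mu:.3f}")
    mu *= Omega                                   # zoom in: mu(r_i^+) = Omega*mu(r_i)
```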

If the zoom quantizer is implemented, then appropriate Lyapunov functions satisfying Assumption 4 are constructed in the following propositions for different time-scheduling protocols.

Proposition 2

Let Assumption 3 hold. If the quantizer satisfying Assumption 2 is the zoom quantizer and the protocol in (41) is the RR protocol, then Assumption 4 is verified with $W(e,\mu,m_{1},m_{2},c,b)=|\mu|+\varpi\sqrt{\sum^{\infty}_{i=c}|\phi_{1}(i,c,e_{1})|^{2}}$, where $\phi_{1}(i,c,e_{1})$ is the solution to $e^{+}_{1}=(h_{\operatorname{d}}(i,x,e_{\operatorname{d}},\mu),h_{\operatorname{c}}(i,x,e_{\operatorname{c}},\mu))$ at time $i$ starting at time $c$ with the initial condition $e_{1}$, and $\varpi\in(0,(1-\max_{j}\Omega_{j})/(\sqrt{l}\max_{j}\Delta_{j}))$. Moreover, $\lambda=\max\{\sqrt{(l-1)/l},\varpi\sqrt{l}\max_{j}\Delta_{j}+\max_{j}\Omega_{j}\}$, $\alpha_{1W}(v)=\min\{1,\varpi\}v$, $\alpha_{2W}(v)=(1+\varpi\sqrt{l})v$, $\alpha_{3W}(v)=\alpha_{4W}(v)\equiv 0$, $\lambda_{2}=\sqrt{l}$ and $M_{1}=\varpi\sqrt{l}$.

Proof:

For the case of the RR protocol and the zoom quantizer, the Lyapunov function is constructed in several steps. First, the Lyapunov functions for the $e$-subsystem and the $\mu$-subsystem are constructed, respectively. Then, these two Lyapunov functions are combined into a Lyapunov function for the whole system.

For the RR protocol, the functions $h_{\operatorname{d}},h_{\operatorname{c}},h_{\operatorname{f}}$ are of the form (42). First, we consider the following system

e^{+}_{1} =\begin{bmatrix}h_{\operatorname{d}}(c,x,e_{\operatorname{d}},\mu)\\ h_{\operatorname{c}}(c,x,e_{\operatorname{c}},\mu)\end{bmatrix}=\begin{bmatrix}(I-\Psi_{\operatorname{d}}(c))e_{\operatorname{d}}+\Psi_{\operatorname{d}}(c)\epsilon_{\operatorname{d}}\\ (I-\Psi_{\operatorname{c}}(c))e_{\operatorname{c}}+\Psi_{\operatorname{c}}(c)\epsilon_{\operatorname{c}}\end{bmatrix}=:H_{1}(c,x,e_{1},\mu). \tag{51}

Denote by $\phi_{1}(i,c,e_{1})$ the solution to the system (51) at time $i$ starting at time $c$ with the initial condition $e_{1}$ and define $W_{1}(e_{1},c):=\sqrt{\sum^{\infty}_{i=c}|\phi_{1}(i,c,e_{1})|^{2}}$. Based on Proposition 4 in [11] and similar to the proof of Proposition 3 in [18], we have $|e_{1}|\leq W_{1}(e_{1},c)\leq\sqrt{l}|e_{1}|$. Define $W_{2}(\mu,c):=|\mu|$ and $W(e,\mu,m_{1},m_{2},c,b):=\varpi W_{1}(e_{1},c)+W_{2}(\mu,c)$, where $\varpi$ is given in Proposition 2. Thus, (20)-(21) hold with $\alpha_{1W}(v):=\min\{1,\varpi\}v$ and $\alpha_{2W}(v):=(1+\varpi\sqrt{l})v$, respectively. In addition, $|\bm{\epsilon}|\leq\max_{j}\Delta_{j}|\mu|$ and $|\mu^{+}|\leq\max_{j}\Omega_{j}|\mu|$.

Observe that $\Omega_{j}\in(0,1)$ and the system (51) is dead-beat stable in $l$ steps (see Proposition 4 in [11]); hence the condition in Assumption 4 involving $\alpha_{4W}$ holds with $\alpha_{4W}(v)=0$. In the following, we only need to prove the condition in Assumption 4 involving $\lambda$ and $\alpha_{3W}$. Using the solution $\phi_{1}$ of the system (51) and following the proof of Proposition 7 in [8], we have

W_{1}(H_{1}(c,x,e_{1},\mu),c+1) \leq\sqrt{\frac{l-1}{l}}W_{1}(e_{1},c)+\sqrt{l}\max_{j}\Delta_{j}|\mu|. \tag{52}

Combining (52) and the fact that $W_{2}(H_{\mu}(c,x,e,\mu))\leq\max_{j}\Omega_{j}|\mu|$ yields that

\varpi W_{1}(H_{1}(c,x,e_{1},\mu),c+1)+W_{2}(H_{\mu}(c,x,e,\mu),c+1)
\leq\varpi\sqrt{\frac{l-1}{l}}W_{1}(e_{1},c)+\varpi\sqrt{l}\max_{j}\Delta_{j}W_{2}(\mu)+\max_{j}\Omega_{j}W_{2}(\mu)
\leq\lambda W(e_{1},\mu,c),

where $\lambda$ is given in Proposition 2. Thus, the proof is completed. ∎
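Since (51) is dead-beat in $l$ steps, the infinite sum defining $W_{1}$ reduces to a finite one. The following minimal Python sketch computes $W_{1}(e_{1},c)$ for the quantization-free part (setting $\epsilon=0$), with an assumed node ordering $s_{i}=i\bmod l$ and placeholder node dimensions; it also allows the bound $|e_{1}|\leq W_{1}(e_{1},c)\leq\sqrt{l}|e_{1}|$ to be checked numerically.

```python
import numpy as np

# Minimal sketch of W_1(e_1, c) = sqrt(sum_{i>=c} |phi_1(i, c, e_1)|^2) for the
# RR protocol, ignoring the quantization-error input (epsilon = 0) so that the
# dynamics is dead-beat in l steps and the sum is finite.  Node dimensions and
# the initial error below are placeholders.

def w1_rr(e1, c, dims):
    """Sum of squared norms of the RR solution started at time c from e1."""
    l = len(dims)
    starts = np.cumsum([0] + dims[:-1])
    e = np.array(e1, dtype=float)
    total = float(np.dot(e, e))               # term at i = c (initial condition)
    for i in range(c, c + l):                 # after l updates the state is zero
        s = i % l                             # assumed node ordering s_i = i mod l
        e[starts[s]:starts[s] + dims[s]] = 0.0
        total += float(np.dot(e, e))
    return np.sqrt(total)

e1 = [1.0, -2.0, 0.5]                         # placeholder error, 3 scalar nodes
print(w1_rr(e1, c=0, dims=[1, 1, 1]))         # lies between |e1| and sqrt(l)*|e1|
```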

The following proposition presents the Lyapunov function satisfying Assumption 4 under the TOD protocol and zoom quantizer case. Its proof is a combination of the proofs of Proposition 2 and Proposition 8 in [8], and hence omitted here.

Proposition 3

Let Assumption 3 hold. If the quantizer satisfying Assumption 2 is the zoom quantizer and the protocol in (41) is the TOD protocol, then Assumption 4 is verified with $W(e,\mu,m_{1},m_{2},c,b)=\varpi|e_{1}|+|\mu|$, where $\varpi\in(0,(1-\max_{j}\Omega_{j})/(\max_{j}\Delta_{j}))$. Moreover, $\lambda=\max\{\sqrt{(l-1)/l},\varpi\max_{j}\Delta_{j}+\max_{j}\Omega_{j}\}$, $\alpha_{1W}(v)=\min\{1,\varpi\}v$, $\alpha_{2W}(v)=(1+\varpi)v$, $\alpha_{3W}(v)=\alpha_{4W}(v)\equiv 0$, $\lambda_{2}=1$ and $M_{1}=\varpi$.

In terms of Theorem 2, $\alpha_{3W}=0$ is not given a priori. However, $\alpha_{3W}\equiv 0$ in Propositions 2-3, which implies that the effects of $e_{\operatorname{f}}$ on $\eta$ can be ignored. In this case, the feedforward input can be transmitted to the plant and the reference system directly. To reduce the effects on $\eta$, some additional conditions are needed or the time-scheduling protocol needs to be changed. In the following, we introduce the TOD-tracking protocol [18], which is a refined TOD protocol with $\Psi_{j}(\bm{\vartheta})$ defined as

\Psi_{j}(\bm{\vartheta})=\begin{cases}I_{n_{j}},&\text{if }j=\min\{\arg\max_{j}|(e_{\operatorname{d}},e_{\operatorname{c}}-e_{\operatorname{f}})_{j}|\},\\ 0,&\text{otherwise}.\end{cases} \tag{53}

That is, the node granted access to the network depends on $(e_{\operatorname{d}},e_{\operatorname{c}}-e_{\operatorname{f}})$ instead of $\bm{\vartheta}$. Thus, $W(e,\mu,m_{1},m_{2},c,b)=\varpi|(e_{\operatorname{d}},e_{\operatorname{c}}-e_{\operatorname{f}})|+|\mu|$ and (28) holds with $(\eta,e_{\operatorname{d}},e_{\operatorname{c}}-e_{\operatorname{f}},\mu)$. The TOD-tracking protocol depends on the network setup. For instance, if $u_{\operatorname{f}}$ is generated by the controller [16], then the node granted access to the network is associated with $(e_{\operatorname{d}},e_{\operatorname{c}}+e_{\operatorname{f}})$. If $y_{\operatorname{p}}$ and $y_{\operatorname{r}}$ are transmitted via the same nodes, then the node granted access to the network is related to $(e_{1},e_{\operatorname{f}})$. For the TOD-tracking protocol, the following proposition is presented.

Proposition 4

Let Assumption 3 hold. If the quantizer satisfying Assumption 2 is the zoom quantizer and the protocol in (41) is the TOD-tracking protocol, then Assumption 4 is verified with $W(e,\mu,m_{1},m_{2},c,b)=\varpi|(e_{\operatorname{d}},e_{\operatorname{c}}-e_{\operatorname{f}})|+|\mu|$, where $\varpi\in(0,(1-\max_{j}\Omega_{j})/(\max_{j}\Delta_{j}))$. Moreover, $\lambda=\max\{\sqrt{(l-1)/l},\varpi\max_{j}\Delta_{j}+\max_{j}\Omega_{j}\}$, $\alpha_{1W}(v)=\min\{1,\varpi\}v$, $\alpha_{2W}(v)=(1+\varpi)v$, $\alpha_{3W}(v)=\alpha_{4W}(v)\equiv 0$, $\lambda_{2}=1$ and $M_{1}=\varpi$.

VI-D Box Quantizer

The box quantizer, whose quantization regions are rectangular boxes, is another type of dynamic quantizer; see [26, 8]. A box quantizer is defined by three parameters [8]: an integer $N_{j}>1$ defining the number of quantization levels, an estimate $\hat{z}_{j}\in\mathbb{R}^{n_{j}}$ of the variable to be quantized $z_{j}\in\mathbb{R}^{n_{j}}$, and a real number $\mu_{j}\geq 0$ defining the size of the quantization regions, where $j\in\{1,\ldots,l\}$. Because $N_{j}$ is a given constant and $\hat{z}_{j}$ depends on $\mu_{j}$, $\mu_{j}$ is called the quantization parameter. Consider the box $\bm{B}(\hat{z}_{j},\mu_{j})$ and divide it into $N^{n_{j}}_{j}$ equal sub-boxes numbered from 1 to $N^{n_{j}}_{j}$ in some way. Thus, $q(\mu_{j},z_{j})$ is the number of the sub-box containing $z_{j}$, and $\hat{z}_{j}$ is updated to be the center of this sub-box.\footnote{If $z_{j}$ lies in the intersection of several sub-boxes, then $q(\mu_{j},z_{j})$ can be the number of any of these sub-boxes.} In addition, we allow $N_{j}$ to differ among the nodes, which generalizes the cases in [8, 26]. The evolution of $\mu$ is presented as follows; see [8] for the details.

\dot{\mu}(t) =g_{\mu}(x,e,\mu),\quad t\in(r_{i},r_{i+1}), \tag{54}
\mu(r^{+}_{i}) =\left(\frac{\mu_{1}(r_{i})}{N_{1}},\ldots,\frac{\mu_{l}(r_{i})}{N_{l}}\right). \tag{55}
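For illustration, the following minimal Python sketch performs one scalar box-quantization step and the parameter update (55): the box of half-length $\mu$ centered at $\hat{z}$ is split into $N$ equal cells, the index of the cell containing $z$ is reported, $\hat{z}$ moves to that cell's center, and $\mu$ is divided by $N$. The value of $N$, the initial box and the measurements are placeholders, and saturation outside the box is not handled.

```python
import numpy as np

# Minimal scalar sketch of the box quantizer: the box [zhat - mu, zhat + mu] is
# split into N equal cells, q is the index of the cell containing z, zhat is
# updated to the cell center, and mu is divided by N as in (55).
# N, the initial box and the measurements below are placeholder values.

def box_quantize(z, zhat, mu, N):
    """Return (cell index, new center, new box size) for one quantization step."""
    cell = int(np.clip((z - (zhat - mu)) // (2.0 * mu / N), 0, N - 1))
    new_zhat = (zhat - mu) + (cell + 0.5) * (2.0 * mu / N)
    return cell, new_zhat, mu / N

zhat, mu, N = 0.0, 4.0, 4
for z in [1.3, 1.1, 1.05]:                       # placeholder measurements
    cell, zhat, mu = box_quantize(z, zhat, mu, N)
    print(f"cell={cell}, zhat={zhat: .3f}, mu={mu:.3f}, |z - zhat|={abs(z - zhat):.3f}")
```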

For the RR protocol, $H_{\bm{\vartheta}}$ and $H_{\mu}$ are given by

H_{\bm{\vartheta}}(i,x,\bm{\vartheta},\mu)=(I-\Psi_{\bm{\vartheta}}(s_{i}))\bm{\vartheta}(t_{s_{i}})+\Psi_{\bm{\vartheta}}(s_{i})J_{\bm{\vartheta}}(s_{i},x,\bm{\vartheta},\mu),
H_{\mu}(i,x,\bm{\vartheta},\mu)=(I-\bar{\Psi}(s_{i}))\mu+N^{-1}\bar{\Psi}(s_{i})\mu,

where $\Psi_{\bm{\vartheta}}$ and $\bar{\Psi}$ are diagonal matrices of appropriate dimensions, $N=\operatorname{diag}\{N_{1},\ldots,N_{l}\}$ and $J_{\bm{\vartheta}}$ depends on the quantization procedure; see [8]. For the case of the box quantizer and the RR protocol, the following proposition establishes the existence of a Lyapunov function satisfying Assumption 4.

Proposition 5

Let Assumption 3 hold. If the RR protocol and the box quantizer are applied and there exists a constant $d>0$ such that $|J_{e}(s_{i},x,e,\mu)|\leq d|\mu|$,\footnote{The existence of $d$ depends on the expression of $J_{e}(s_{i},x,e,\mu)$, which has been given in [8, Section III-C] (where quantization is considered but transmission delays are not studied).} then Assumption 4 is verified with $W(e,\mu,m_{1},m_{2},c,b)=\varpi\sqrt{\sum^{\infty}_{i=c}|\phi_{1}(i,c,e_{1})|^{2}}+\sqrt{\sum^{\infty}_{i=c}|\phi_{\mu}(i,c,\mu)|^{2}}$, where $\phi_{1}(i,c,e_{1})$ and $\phi_{\mu}(i,c,\mu)$ are the solutions to $e^{+}_{1}=(h_{\operatorname{d}}(i,x,e_{\operatorname{d}},\mu),h_{\operatorname{c}}(i,x,e_{\operatorname{c}},\mu))$ and $\mu^{+}=h_{\mu}(i,x,e,\mu)$ at time $i$ starting at time $c$ with the initial conditions $e_{1}$ and $\mu$, respectively. Moreover, $\lambda=\max\{\sqrt{(l-1)/l},\varpi d\sqrt{l}+\bar{\rho}\}$, $\alpha_{1W}(v)=\min\{1,\varpi\}v$,

\alpha_{2W}(v)=\left(\varpi\sqrt{l}+\sqrt{\frac{\bar{N}^{2}l}{\bar{N}^{2}-1}}\right)v,\qquad \bar{\rho}=\sqrt{\frac{\bar{N}^{2}l-\bar{N}^{2}+1}{\bar{N}^{2}l}},

$\alpha_{3W}(v)=\alpha_{4W}(v)=0$, $\lambda_{2}=\sqrt{l}$, $M_{1}=\varpi\sqrt{l}$, $\bar{N}=\min_{j}N_{j}$ and $\varpi\in(0,(1-\bar{\rho})/(d\sqrt{l}))$.

Proof:

First, we consider the $\mu$-subsystem. Define $W_{2}(\mu,c):=\sqrt{\sum^{\infty}_{i=c}|\phi_{\mu}(i,c,\mu)|^{2}}$, where $\phi_{\mu}(i,c,\mu)$ is the solution to $\mu^{+}=h_{\mu}(i,x,e,\mu)$ at time $i$ starting at time $c$ with the initial condition $\mu$. Let $\bar{N}=\min_{j}N_{j}$; following similar lines as the proof of Proposition 5 in [8], we have

|\mu|\leq W_{2}(\mu,c) \leq\sqrt{\frac{\bar{N}^{2}l}{\bar{N}^{2}-1}}|\mu|, \tag{56}
W_{2}(\mu^{+},c+1) \leq\sqrt{\frac{\bar{N}^{2}l-\bar{N}^{2}+1}{\bar{N}^{2}l}}W_{2}(\mu,c). \tag{57}

Next, for the $e_{1}$-subsystem, consider the following system

e^{+}_{1} =\begin{bmatrix}h_{\operatorname{d}}(c,x,e_{\operatorname{d}},\mu)\\ h_{\operatorname{c}}(c,x,e_{\operatorname{c}},\mu)\end{bmatrix}=\begin{bmatrix}(I-\Psi_{\operatorname{d}}(c))e_{\operatorname{d}}+\Psi_{\operatorname{d}}(c)J_{\operatorname{d}}(s_{i},x,e_{1},\mu)\\ (I-\Psi_{\operatorname{c}}(c))e_{\operatorname{c}}+\Psi_{\operatorname{c}}(c)J_{\operatorname{c}}(s_{i},x,e_{\operatorname{c}},\mu)\end{bmatrix}=H_{1}(c,x,e_{1},\mu). \tag{58}

Define $W_{1}(e_{1},c):=\sqrt{\sum^{\infty}_{i=c}|\phi_{1}(i,c,e_{1})|^{2}}$, where $\phi_{1}(i,c,e_{1})$ is the solution to (58) at time $i$ starting at time $c$ with the initial condition $e_{1}$. Similar to the proof of Proposition 2, we have $|e_{1}|\leq W_{1}(e_{1},c)\leq\sqrt{l}|e_{1}|$. Based on the proof of Proposition 5 in [8] and similar to the proof of Proposition 2, we obtain that

W_{1}(H_{1}(c,x,e_{1},\mu),c+1) \leq\sqrt{\frac{l-1}{l}}W_{1}(e_{1},c)+d\sqrt{l}W_{2}(\mu,c). \tag{59}

Finally, define $W(e,\mu,m_{1},m_{2},c,b):=\varpi W_{1}(e_{1},c)+W_{2}(\mu,c)$; then (20)-(21) hold with $\alpha_{1W}(v)=\min\{1,\varpi\}v$ and $\alpha_{2W}(v)=(\varpi\sqrt{l}+\sqrt{\bar{N}^{2}l/(\bar{N}^{2}-1)})v$. Similar to the proof of Proposition 2, the condition in Assumption 4 involving $\alpha_{4W}$ holds. Combining (57) and (59) yields that

\varpi W_{1}(H_{1}(c,x,e_{1},\mu),c+1)+W_{2}(H_{\mu}(c,x,e,\mu),c+1)
\leq\varpi\sqrt{\frac{l-1}{l}}W_{1}(e_{1},c)+\varpi d\sqrt{l}W_{2}(\mu,c)+\sqrt{\frac{\bar{N}^{2}l-\bar{N}^{2}+1}{\bar{N}^{2}l}}W_{2}(\mu,c)
\leq\lambda W(e_{1},\mu,c),

where $\lambda$ is given in Proposition 5. Thus, Assumption 4 is verified and the proof is completed. ∎

For the TOD protocol and the TOD-tracking protocol, the time-scheduling protocol and the update of $\mu$ are given by

H_{\bm{\vartheta}}(i,x,\bm{\vartheta},\mu) =(I-\Psi(\bm{\vartheta}))\bm{\vartheta}(t_{s_{i}})+\Psi(\bm{\vartheta})J_{\bm{\vartheta}}(s_{i},x,\bm{\vartheta},\mu),
H_{\mu}(i,x,\bm{\vartheta},\mu) =(I-\bar{\Psi}(\bm{\vartheta}))\mu+N^{-1}\bar{\Psi}(\bm{\vartheta})\mu.

For these cases, the following propositions are presented. The proofs proceed along the same strategy as the proof of Proposition 5, and hence are omitted here.

Proposition 6

Let Assumption 3 hold. If the TOD protocol and the box quantizer are applied and there exists a constant $d>0$ such that $|J_{e}(s_{i},x,e,\mu)|\leq d|\mu|$, then Assumption 4 is verified with $W(e,\mu,m_{1},m_{2},c,b)=\varpi|e_{1}|+|\mu|$. Moreover, $\lambda=\max\{\sqrt{(l-1)/l},\varpi d+\tilde{\rho}\}$, $\alpha_{1W}(v)=\min\{1,\varpi\}v$, $\alpha_{2W}(v)=(1+\varpi\sqrt{l})v$, $\alpha_{3W}(v)=\alpha_{4W}(v)=0$ and $M_{1}=\varpi$, where $\varpi\in(0,(1-\bar{\rho})/d)$, $\alpha\in(0,1)$, $\bar{N}=\min_{j}N_{j}$ and

\tilde{\rho}=\max\left\{\sqrt{\frac{l-1}{l}},\sqrt{\frac{\bar{N}^{2}l-\alpha^{2}\bar{N}^{2}+\alpha}{\bar{N}^{2}l}}\right\}. \tag{60}
Proposition 7

Let Assumption 3 hold. If the TOD-tracking protocol and the box quantizer are applied and there exists a constant $d>0$ such that $|J_{e}(s_{i},x,e,\mu)|\leq d|\mu|$, then Assumption 4 is verified with $W(e,\mu,m_{1},m_{2},c,b)=\varpi|(e_{\operatorname{d}},e_{\operatorname{c}}-e_{\operatorname{f}})|+|\mu|$. Moreover, $\lambda=\max\{\sqrt{(l-1)/l},\varpi d+\tilde{\rho}\}$, $\alpha_{1W}(v)=\min\{1,\varpi\}v$, $\alpha_{2W}(v)=(1+\varpi\sqrt{l})v$, $\alpha_{3W}(v)=\alpha_{4W}(v)=0$ and $M_{1}=\varpi$, where $\varpi\in(0,(1-\bar{\rho})/d)$, $\alpha\in(0,1)$, $\bar{N}=\min_{j}N_{j}$ and $\tilde{\rho}$ is given in (60).

VII Illustrative Example

In this section, the developed results are demonstrated with a numerical example. We illustrate how Assumptions 4-6 can be verified and how the verification of these conditions leads to a quantitative tradeoff between $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$.

Consider a single-link revolute manipulator modeled by the Lagrange dynamics [42, 43]: $M(\bm{q})\ddot{\bm{q}}+C(\bm{q},\dot{\bm{q}})\dot{\bm{q}}+g(\bm{q})=\bm{\tau}$. Picking $M(\bm{q})=1$, $C(\bm{q},\dot{\bm{q}})=0$, $g(\bm{q})=m\cos\bm{q}$, $\bm{\tau}=au$ and defining $\bm{q}_{1}=\bm{q}$, $\bm{q}_{2}=\dot{\bm{q}}$, we have that

\dot{\bm{q}}_{1}=\bm{q}_{2},\quad\dot{\bm{q}}_{2}=-m\cos\bm{q}_{1}+au, \tag{61}

where $\bm{q}_{1}$ is the generalized configuration coordinate, $\bm{q}_{2}$ is the generalized velocity, $\bm{\tau}$ is the generalized force acting on the system and $m,a>0$ are certain constants. Both $\bm{q}_{1}$ and $\bm{q}_{2}$ are measurable. The reference system is given by

\dot{\bm{q}}_{\operatorname{r}1}=\bm{q}_{\operatorname{r}2},\quad\dot{\bm{q}}_{\operatorname{r}2}=-m\cos\bm{q}_{\operatorname{r}1}+au_{\operatorname{f}}, \tag{62}

where $\bm{q}_{\operatorname{r}1},\bm{q}_{\operatorname{r}2}$ are measurable and $u_{\operatorname{f}}=2\cos(5t)$.

Assume that, in the absence of the communication network, the tracking error is asymptotically stable under the designed controller $u=u_{\operatorname{f}}+u_{\operatorname{c}}$ with $u_{\operatorname{c}}=-a^{-1}[m\sin(\bm{q}_{\operatorname{d}1}/2)+\bm{q}_{\operatorname{d}1}+\bm{q}_{\operatorname{d}2}]$, where $\bm{q}_{\operatorname{d}1}:=\bm{q}_{1}-\bm{q}_{\operatorname{r}1}$ and $\bm{q}_{\operatorname{d}2}:=\bm{q}_{2}-\bm{q}_{\operatorname{r}2}$. In this paper, we consider the case where the communication between the controller and the plant is via a quantizer and a communication network. The applied quantizer is the zoom quantizer (50) with $\max_{j}\Delta_{j}=0.8$ and $\max_{j}\Omega_{j}=0.6$. The controller is applied using ZOH devices and the network is assumed to have $l=3$ nodes for $\bm{q}_{\operatorname{d}1}$, $\bm{q}_{\operatorname{d}2}$ and $u$, respectively. In this case, the applied feedback controller is emulated and given by $u_{\operatorname{c}}=-a^{-1}[m\sin(\hat{\bm{q}}_{\operatorname{d}1}/2)+\hat{\bm{q}}_{\operatorname{d}1}+\hat{\bm{q}}_{\operatorname{d}2}]$. In addition, $u_{\operatorname{f}}$ is assumed to be transmitted to the reference system directly for the TOD-tracking protocol, and $\hat{\bm{q}}_{\operatorname{d}1},\hat{\bm{q}}_{\operatorname{d}2}$ are implemented in ZOH fashion in the update intervals, which implies that the feedback controller knows $\bm{q}_{\operatorname{r}1},\bm{q}_{\operatorname{r}2}$ but does not depend on them directly. To simplify the following simulation, the transmission intervals and the transmission delays are constant, that is, $h_{i}\equiv h_{\operatorname{mati}}$ and $\tau_{i}\equiv h_{\operatorname{mad}}$ for all $i\in\mathbb{N}$.
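For illustration, the following minimal Python sketch simulates the tracking loop (61)-(62) with the emulated controller and a zoom-quantized, zero-order-held error channel at a constant transmission interval. The transmission delays, the node scheduling and the full hybrid model are omitted, and the initial conditions, the simulation horizon and the saturation of the quantizer are placeholders or simplifications; the sketch only shows how the pieces of the example fit together and is not meant to reproduce Figures 4-5.

```python
import numpy as np

# Minimal simulation sketch of the example: plant (61), reference (62),
# u = u_f + u_c with the emulated controller acting on zoom-quantized,
# zero-order-held tracking errors.  Delays and scheduling are omitted;
# initial conditions and the horizon are placeholders.

m, a = 4.905, 2.0
Delta, Omega = 0.8, 0.6
h = 0.0242                      # constant transmission interval (RR value of Figure 4)
dt = 1e-4

def zoom_q(z, mu):              # scalar zoom quantizer, uniform with step 2*Delta
    return mu * (2.0 * Delta * np.round(z / mu / (2.0 * Delta)))

q = np.array([1.0, 0.0])        # plant state (q1, q2), placeholder initial condition
qr = np.array([0.0, 0.0])       # reference state
mu = 1.0
qd_hat = np.array([0.0, 0.0])   # ZOH copies of the quantized tracking errors
t, next_tx = 0.0, 0.0

while t < 5.0:
    if t >= next_tx:            # transmission: quantize the current errors, zoom in
        qd_hat = np.array([zoom_q(q[0] - qr[0], mu), zoom_q(q[1] - qr[1], mu)])
        mu *= Omega
        next_tx += h
    uf = 2.0 * np.cos(5.0 * t)
    uc = -(m * np.sin(qd_hat[0] / 2.0) + qd_hat[0] + qd_hat[1]) / a
    u = uf + uc
    q += dt * np.array([q[1], -m * np.cos(q[0]) + a * u])
    qr += dt * np.array([qr[1], -m * np.cos(qr[0]) + a * uf])
    t += dt

print("final tracking error:", q - qr)
```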

Figure 2: The functions $\bar{\phi}_{b}$, $b\in\{0,1\}$, with $\bar{\phi}_{0}(0)=\bar{\phi}_{1}(0)=\sqrt{3}$ for the RR protocol.
Figure 3: The functions $\bar{\phi}_{b}$, $b\in\{0,1\}$, with $\bar{\phi}_{0}(0)=\sqrt{3}$ and $\bar{\phi}_{1}(0)=\sqrt{3}+1$ for the TOD and TOD-tracking protocols.

In the following, we first verify Assumptions 4-6. Based on (61)-(62), we obtain that $F_{\eta}=(\eta_{2},-m[\cos(\eta_{1}+e_{\operatorname{d}1}+\bm{q}_{\operatorname{r}1})-\cos(\bm{q}_{\operatorname{r}1})+\sin((\eta_{1}+e_{\operatorname{d}1})/2)]-(\eta_{1}+e_{\operatorname{d}1})-(\eta_{2}+e_{\operatorname{d}2})+au_{\operatorname{f}}+ae_{\operatorname{f}})$, $F_{\operatorname{r}}=(\bm{q}_{\operatorname{r}2},-m\cos\bm{q}_{\operatorname{r}1}+au_{\operatorname{f}}+ae_{\operatorname{f}})$, $G_{1}=(-F_{\eta},0)$, $G_{\operatorname{r}}=-F_{\operatorname{r}}$ and $G_{\operatorname{f}}=-\dot{u}_{\operatorname{f}}$. Choose the Lyapunov function $W$ in Proposition 2 for the RR protocol, $W(e,\mu,m_{1},m_{2},c,b):=\varpi|e|+|\mu|$ for the TOD protocol and $W(e,\mu,m_{1},m_{2},c,b):=\varpi|(e_{\operatorname{d}},e_{\operatorname{c}}-e_{\operatorname{f}})|+|\mu|$ for the TOD-tracking protocol. Thus, Assumption 4 holds easily based on Propositions 2-4.

On the other hand, it holds that $|g_{1}|\leq|\eta_{2}|+|\eta_{1}+\eta_{2}|+E_{1}|e_{1}|+a|e_{\operatorname{f}}|$, where $E_{1}=m+\sqrt{3}\max\{1,a\}$. It is known from Propositions 2-4 that $\alpha_{1W}(v)=\min\{1,\varpi\}v$ for all $v\geq 0$ and $\max\{|\partial\bar{W}(\bm{\vartheta},\mu,c)/\partial\bm{\vartheta}|,|\partial\bar{W}(\bm{\vartheta},\mu,c)/\partial\mu|\}\leq M_{1}$ for almost all $e,m_{1}\in\mathbb{R}^{n_{e}}$, $\mu,m_{2}\in\mathbb{R}^{l}$, $c\in\mathbb{N}$ and $b\in\{0,1\}$, where $M_{1}=\sqrt{l}$ for the RR protocol and $M_{1}=1$ for the TOD and TOD-tracking protocols. Thus, $|\langle\partial W(e,\mu,m_{1},m_{2},c,b)/\partial e,g_{e}(x,e)\rangle|\leq\varpi M_{1}[|\eta_{1}|+|\eta_{1}+\eta_{2}|+E_{1}|e_{1}|+a|e_{\operatorname{f}}|]$. As a result, Assumption 5 holds with $L_{0}=\varpi M_{1}E_{1}/\alpha_{1W}$, $L_{1}=\varpi M_{1}E_{1}\lambda_{2}/(\lambda_{1}\alpha_{1W})$, $H_{0}(x)=H_{1}(x)=\varpi M_{1}(|\eta_{1}|+|\eta_{1}+\eta_{2}|)$ and $\sigma_{bW}(v)=\varpi aM_{1}|v|$.

Figure 4: Tracking errors for $h_{\operatorname{mati}}=0.0242$, $h_{\operatorname{mad}}=0.00390$ and the RR protocol.
Figure 5: Tracking errors for $h_{\operatorname{mati}}=0.0256$, $h_{\operatorname{mad}}=0.00385$ and the TOD protocol.

To verify Assumption 6, define $V(\eta):=\phi_{1}\eta^{2}_{1}+\phi_{2}\eta_{1}\eta_{2}+\phi_{3}\eta^{2}_{2}$, where $\phi_{1},\phi_{2},\phi_{3}$ are chosen such that $V$ satisfies (25). Assume that there exists a time-varying parameter $\hat{m}\in[-m,m]$ such that $m[\cos(\eta_{1}+e_{\operatorname{d}1}+\bm{q}_{\operatorname{r}1})-\cos(\bm{q}_{\operatorname{r}1})+\sin((\eta_{1}+e_{\operatorname{d}1})/2)]=\hat{m}e_{\operatorname{d}1}$. Thus, applying twice the fact that $2xy\leq cx^{2}+y^{2}/c$ for all $x,y\geq 0$ and $c>0$, we obtain $\langle\nabla V(\eta),F_{\eta}(x,e)\rangle\leq-\phi_{2}\eta^{2}_{1}-(2\phi_{3}-\phi_{2})\eta^{2}_{2}+(2\phi_{1}-2\phi_{3}-\phi_{2})\eta_{1}\eta_{2}+(\varrho^{-1}_{0}+\varrho^{-1}_{1})(\phi_{2}\eta_{1}+2\phi_{3}\eta_{2})^{2}+\varrho_{0}E^{2}_{2}|e_{1}|^{2}+\varrho_{1}a^{2}|e_{\operatorname{f}}|^{2}$, where $\varrho_{0},\varrho_{1}>0$ are defined in (27) and $E_{2}=\sqrt{3}\max\{1+m,a\}$. Therefore, if $\phi_{1},\phi_{2},\phi_{3}$ are chosen such that (25) holds and

\[
-\rho_{b}(|\eta|)-H^{2}_{b}(x)\geq-\phi_{2}\eta^{2}_{1}+(2\phi_{1}-2\phi_{3}-\phi_{2})\eta_{1}\eta_{2}-(2\phi_{3}-\phi_{2})\eta^{2}_{2}+(\varrho^{-1}_{0}+\varrho^{-1}_{1})(\phi_{2}\eta_{1}+2\phi_{3}\eta_{2})^{2},\tag{63}
\]

then Assumption 6 is verified with $\theta_{b}(v)=\pi v^{2}$, $\gamma_{0}=\sqrt{\pi+\varrho_{0}E^{2}_{2}}$, $\gamma_{1}=\sqrt{\pi+\varrho_{1}\lambda^{2}_{2}E^{2}_{2}/\lambda^{2}_{1}}$ and $\sigma_{bV}(v)=\varrho_{1}a^{2}|v|^{2}$, where $\pi>0$ is arbitrarily small.

Observe that $L_{0}$ and $L_{1}$ depend on the magnitudes of $m$ and $a$, while $\gamma_{0}$ and $\gamma_{1}$ depend on the choice of $\varrho_{0},\varrho_{1}$ and $\varepsilon$. Thus, the tradeoff between $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ is related to $m,a,\varrho_{0}$ and $\varrho_{1}$. Pick $m=4.905$, $a=2$, $\varpi=\pi=0.005$, $\varrho_{0}=1$ and $\varrho_{1}=\varrho_{0}\lambda_{2}/\lambda_{1}$. The parameters $\phi_{1},\phi_{2}$ and $\phi_{3}$ are chosen to satisfy (25) and (63). As a result, $L_{0}=17.7150$, $L_{1}=37.5792$, $\gamma_{0}=7.2325$, $\gamma_{1}=22.3450$ for the RR protocol, and $L_{0}=10.2278$, $L_{1}=21.6964$, $\gamma_{0}=7.2325$, $\gamma_{1}=22.3450$ for the TOD and TOD-tracking protocols.

Figure 6: Tracking errors for $h_{\operatorname{mati}}=0.0256$, $h_{\operatorname{mad}}=0.00385$ and the TOD-tracking protocol.

Set the initial conditions $\bar{\phi}_{0}(0)=\bar{\phi}_{1}(0)=\sqrt{3}$ for the RR protocol, and $\bar{\phi}_{0}(0)=\sqrt{3}$, $\bar{\phi}_{1}(0)=\sqrt{3}+1$ for the TOD and TOD-tracking protocols; then $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ are obtained via (28a)-(28b); see Figs. 2-3. It follows from Figs. 2-3 that $h_{\operatorname{mati}}=0.0242$ and $h_{\operatorname{mad}}=0.00390$ for the RR protocol, and $h_{\operatorname{mati}}=0.0256$ and $h_{\operatorname{mad}}=0.00385$ for the TOD and TOD-tracking protocols. In addition, the magnitudes of $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ differ for different initial conditions. For instance, $h_{\operatorname{mati}}=0.0255$ and $h_{\operatorname{mad}}=0.00425$ for the RR protocol with $\bar{\phi}_{0}(0)=\bar{\phi}_{1}(0)=\sqrt{2}$, and $h_{\operatorname{mati}}=0.0242$ and $h_{\operatorname{mad}}=0.0069$ for the TOD protocol with $\bar{\phi}_{0}(0)=\sqrt{2}$ and $\bar{\phi}_{1}(0)=\sqrt{2}+1$; $h_{\operatorname{mati}}=0.02375$ and $h_{\operatorname{mad}}=0.00365$ for the RR protocol with $\bar{\phi}_{0}(0)=\bar{\phi}_{1}(0)=2$, and $h_{\operatorname{mati}}=0.02615$ and $h_{\operatorname{mad}}=0.0024$ for the TOD protocol with $\bar{\phi}_{0}(0)=2$ and $\bar{\phi}_{1}(0)=3$. Therefore, different tradeoffs between $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ are obtained for different initial conditions. Furthermore, from (28), $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ depend on $\varrho_{0}$ and $\varrho_{1}$; for instance, changing the value of $\varrho_{0}$ leads to different magnitudes of $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$. The maximal admissible value of $\varrho_{0}$ is 2.090: if $\varrho_{0}>2.090$, then there is no intersection between $\gamma_{1}\bar{\phi}_{1}$ and $(1+\varrho_{0})\gamma_{0}\bar{\phi}_{0}$.
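The curves in Figs. 2-3 can be reproduced numerically along the following lines. The sketch below is hedged: the exact form and boundary conditions of (28a)-(28b) are not restated in this section, so it merely assumes that $\bar{\phi}_{b}$ obeys a Riccati-type differential equation of the kind commonly used in MATI/MAD analyses, $\dot{\bar{\phi}}_{b}=-2L_{b}\bar{\phi}_{b}-\gamma_{b}(\bar{\phi}^{2}_{b}+1)$, and it only locates the first intersection of $\gamma_{1}\bar{\phi}_{1}$ and $(1+\varrho_{0})\gamma_{0}\bar{\phi}_{0}$ mentioned above; the terminal condition that determines $h_{\operatorname{mati}}$ is left out.

```python
import numpy as np
from scipy.integrate import solve_ivp

# constants reported above for the RR protocol
L0, L1 = 17.7150, 37.5792
g0, g1 = 7.2325, 22.3450
rho0 = 1.0

def phi_ode(tau, phi, L, g):
    # assumed Riccati-type form of (28a)-(28b); not restated in this section
    return -2.0 * L * phi - g * (phi ** 2 + 1.0)

tau = np.linspace(0.0, 0.03, 3001)
phi0 = solve_ivp(phi_ode, (0.0, 0.03), [np.sqrt(3)], t_eval=tau, args=(L0, g0)).y[0]
phi1 = solve_ivp(phi_ode, (0.0, 0.03), [np.sqrt(3)], t_eval=tau, args=(L1, g1)).y[0]

# first intersection of gamma_1 * phi_1 and (1 + rho_0) * gamma_0 * phi_0
diff = g1 * phi1 - (1.0 + rho0) * g0 * phi0
idx = np.where(np.diff(np.sign(diff)) != 0)[0]
if idx.size:
    print("first intersection near tau = %.5f" % tau[idx[0]])
else:
    print("no intersection on the considered horizon (cf. the bound on rho_0)")
```

Under this assumed form, the intersection lands in the vicinity of the reported $h_{\operatorname{mad}}$, which gives a rough consistency check against Figs. 2-3; the exact values of $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ of course follow from (28a)-(28b).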

Based on the values of $h_{\operatorname{mati}}$ and $h_{\operatorname{mad}}$ obtained from Figs. 2-3, the tracking errors are illustrated in Figs. 4-6 for the different cases. Figs. 4-5 correspond to the RR and TOD protocols, respectively. In Figs. 4-5, the tracking errors converge to a bounded region because of the non-vanishing network-induced error $e_{\operatorname{f}}$. Since $u_{\operatorname{f}}$ is transmitted to the reference system and the plant directly under the TOD-tracking protocol, $e_{\operatorname{f}}=0$ and the tracking error converges to zero in Fig. 6.
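For completeness, the TOD-tracking case of Fig. 6 can be sketched by two modifications of the RR sketch given earlier; both are hedged, since neither the TOD-tracking protocol nor the quantizer is restated in this section. First, the scheduler grants the channel to the node with the largest current network-induced error (a maximum-error-first rule); second, $u_{\operatorname{f}}$ is applied to the plant and the reference directly, so that $e_{\operatorname{f}}=0$ and only the feedback part $u_{\operatorname{c}}$ travels through the network.

```python
import numpy as np

def tod_node(q, qr, qd_hat, uc_hat, uc_now):
    """Maximum-error-first scheduling: transmit the node whose held value deviates
    most from its current value; a simplified stand-in for the TOD-tracking protocol."""
    errs = np.array([abs((q[0] - qr[0]) - qd_hat[0]),
                     abs((q[1] - qr[1]) - qd_hat[1]),
                     abs(uc_now - uc_hat)])
    return int(np.argmax(errs))

# In the plant update of the earlier sketch, apply u_f directly (hence e_f = 0):
#   q = q + dt * np.array([q[1], -m * np.cos(q[0]) + a * (u_ff(t) + uc_hat)])
```

With $e_{\operatorname{f}}=0$, the only residual perturbations are the scheduling and quantization errors handled by Assumptions 4-6, which is consistent with the tracking error in Fig. 6 decaying to zero rather than to a bounded region.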

VIII Conclusions

In this paper, the tracking control problem of nonlinear networked and quantized control systems was analyzed based on a Lyapunov approach. To deal with the network-induced issues, a new hybrid model was developed. Sufficient conditions were established to guarantee the convergence of the tracking error. In addition, Lyapunov functions satisfying the obtained conditions were constructed. For specific time-scheduling protocols and quantizers, we discussed how to reduce the effects of the network-induced errors on the convergence of the tracking error. Finally, the developed results were demonstrated by a numerical example.

In the future, our work will focus on the tracking control of networked and quantized control systems with external disturbances. Since disturbances may drive the system state out of the quantization regions, the more general quantization mechanisms in [28, 26] need to be considered.

Appendix

The detailed expressions of the functions in $\mathcal{S}_{1}$ are presented as follows. Following the definitions in Section III, we have $x_{\operatorname{p}}=\eta+x_{\operatorname{r}}$. Moreover, based on (2)-(5) and (10), we have that for $t\in[t_{s_{i}},t_{s_{i+1}}]\setminus\{r_{i}\}$, $i\in\mathbb{N}$,

\begin{align*}
\dot{\eta} &= f_{\operatorname{p}}(x_{\operatorname{p}},\hat{u}_{\operatorname{c}}+\hat{u}_{\operatorname{f}})-f_{\operatorname{p}}(t,x_{\operatorname{r}},\hat{u}_{\operatorname{f}})\\
&= f_{\operatorname{p}}(x_{\operatorname{p}},u_{\operatorname{c}}+e_{\operatorname{c}}+u_{\operatorname{f}}+e_{\operatorname{f}})-f_{\operatorname{p}}(t,x_{\operatorname{r}},u_{\operatorname{f}}+e_{\operatorname{f}})\\
&= f_{\operatorname{p}}(\eta+x_{\operatorname{r}},g_{\operatorname{c}}(x_{\operatorname{c}})+e_{\operatorname{c}}+u_{\operatorname{f}}+e_{\operatorname{f}})-f_{\operatorname{p}}(t,x_{\operatorname{r}},u_{\operatorname{f}}+e_{\operatorname{f}})\\
&=: F_{\eta}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}}),\\
\dot{x}_{\operatorname{c}} &= f_{\operatorname{c}}(x_{\operatorname{c}},\hat{y}_{\operatorname{d}})=f_{\operatorname{c}}(x_{\operatorname{c}},y_{\operatorname{d}}+e_{\operatorname{d}})=f_{\operatorname{c}}(x_{\operatorname{c}},g_{\operatorname{p}}(\eta+x_{\operatorname{r}})-g_{\operatorname{r}}(x_{\operatorname{r}})+e_{\operatorname{d}})\\
&=: F_{\operatorname{c}}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}}),\\
\dot{x}_{\operatorname{r}} &= f_{\operatorname{p}}(x_{\operatorname{r}},\hat{u}_{\operatorname{f}})=f_{\operatorname{p}}(x_{\operatorname{r}},u_{\operatorname{f}}+e_{\operatorname{f}})=:F_{\operatorname{r}}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}}),\\
\dot{\mu} &= g_{\mu}(x_{\operatorname{p}},x_{\operatorname{c}},x_{\operatorname{r}},\bm{\vartheta},\mu)=g_{\mu}(\eta+x_{\operatorname{r}},x_{\operatorname{c}},x_{\operatorname{r}},e_{\operatorname{d}},e_{\operatorname{c}},e_{\operatorname{f}},\mu)=:G_{\mu}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}},\mu),\\
\dot{e}_{1} &= \begin{bmatrix}\dot{e}_{\operatorname{d}}\\ \dot{e}_{\operatorname{c}}\end{bmatrix}=\begin{bmatrix}\dot{e}_{\operatorname{p}}-\dot{e}_{\operatorname{r}}\\ \dot{e}_{\operatorname{c}}\end{bmatrix}=\begin{bmatrix}\langle\nabla g_{\operatorname{p}}(x_{\operatorname{r}}),f_{\operatorname{p}}(x_{\operatorname{r}},u_{\operatorname{f}}+e_{\operatorname{f}})\rangle-\langle\nabla g_{\operatorname{p}}(x_{\operatorname{p}}),f_{\operatorname{p}}(\eta+x_{\operatorname{r}},g_{\operatorname{c}}(x_{\operatorname{c}})+e_{\operatorname{c}}+u_{\operatorname{f}}+e_{\operatorname{f}})\rangle\\ \langle\nabla g_{\operatorname{c}}(x_{\operatorname{c}}),f_{\operatorname{c}}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{\eta})\rangle\end{bmatrix}\\
&=: G_{1}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}}),\\
\dot{e}_{\operatorname{r}} &= -\langle\nabla g_{\operatorname{r}}(x_{\operatorname{r}}),f_{\operatorname{r}}(x_{\operatorname{r}},u_{\operatorname{f}}+e_{\operatorname{f}})\rangle=:G_{\operatorname{r}}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}}),\\
\dot{e}_{\operatorname{f}} &= -\dot{u}_{\operatorname{f}}=-\dfrac{\partial u_{\operatorname{f}}}{\partial x_{\operatorname{p}}}(F_{\eta}+F_{\operatorname{r}})-\dfrac{\partial u_{\operatorname{f}}}{\partial x_{\operatorname{r}}}F_{\operatorname{r}}-\dfrac{\partial u_{\operatorname{f}}}{\partial x_{\operatorname{c}}}F_{\operatorname{c}}=:G_{\operatorname{f}}(\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}}).
\end{align*}

At the arrival time $r_{i}$, $i\in\mathbb{N}$,

\begin{align*}
h_{\mu}(i,\mu,\bm{\epsilon}) &= h_{\mu}(i,\mu,\epsilon_{\operatorname{d}},\epsilon_{\operatorname{c}},\epsilon_{\operatorname{f}})\\
&= h_{\mu}\big(i,\mu,q(\mu,g_{\operatorname{p}}(x_{\operatorname{p}})-g_{\operatorname{r}}(x_{\operatorname{r}}))-g_{\operatorname{p}}(x_{\operatorname{p}})+g_{\operatorname{r}}(x_{\operatorname{r}}),\,q(\mu,g_{\operatorname{c}}(x_{\operatorname{c}}))-g_{\operatorname{c}}(x_{\operatorname{c}}),\,q(\mu,u_{\operatorname{f}})-u_{\operatorname{f}}\big)\\
&=: H_{\mu}(i,\eta,x_{\operatorname{c}},x_{\operatorname{r}},e_{1},e_{\operatorname{f}},\mu),\\
h_{\operatorname{d}}(i,y_{\operatorname{d}},\bm{\vartheta},\mu) &= \epsilon_{\operatorname{d}}+\bm{h}_{\operatorname{d}}(i,\bm{\vartheta})=q(\mu,g_{\operatorname{p}}(x_{\operatorname{p}})-g_{\operatorname{r}}(x_{\operatorname{r}}))-g_{\operatorname{p}}(x_{\operatorname{p}})+g_{\operatorname{r}}(x_{\operatorname{r}})+\bm{h}_{\operatorname{d}}(i,e_{\operatorname{d}},e_{\operatorname{c}},e_{\operatorname{f}})\\
&=: H_{\operatorname{d}}(i,\eta,x_{\operatorname{r}},x_{\operatorname{c}},e_{1},e_{\operatorname{f}},\mu),\\
h_{\operatorname{c}}(i,x_{\operatorname{c}},\bm{\vartheta},\mu) &= \epsilon_{\operatorname{c}}+\bm{h}_{\operatorname{c}}(i,\bm{\vartheta})=q(\mu,g_{\operatorname{c}}(x_{\operatorname{c}}))-g_{\operatorname{c}}(x_{\operatorname{c}})+\bm{h}_{\operatorname{c}}(i,e_{\operatorname{d}},e_{\operatorname{c}},e_{\operatorname{f}})\\
&=: H_{\operatorname{c}}(i,\eta,x_{\operatorname{r}},x_{\operatorname{c}},e_{1},e_{\operatorname{f}},\mu),\\
h_{\operatorname{f}}(i,x_{\operatorname{f}},\bm{\vartheta},\mu) &= \epsilon_{\operatorname{f}}+\bm{h}_{\operatorname{f}}(i,\bm{\vartheta})=q(\mu,u_{\operatorname{f}})-u_{\operatorname{f}}+\bm{h}_{\operatorname{f}}(i,e_{\operatorname{d}},e_{\operatorname{c}},e_{\operatorname{f}})\\
&=: H_{\operatorname{f}}(i,\eta,x_{\operatorname{r}},x_{\operatorname{c}},e_{1},e_{\operatorname{f}},\mu).
\end{align*}

References

  • [1] J. Baillieul and P. J. Antsaklis, “Control and communication challenges in networked real-time systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 9–28, 2007.
  • [2] J. Lunze, Control Theory of Digitally Networked Dynamic Systems. Springer, 2014.
  • [3] R. A. Gupta and M.-Y. Chow, “Networked control system: Overview and research trends,” IEEE Transactions on Industrial Electronics, vol. 57, no. 7, pp. 2527–2535, 2010.
  • [4] N. W. Bauer, “Networked Control Systems: From Theory to Experiments,” Ph.D. dissertation, Technische Universiteit Eindhoven, 2013.
  • [5] P. Antsaklis and J. Baillieul, “Special issue on technology of networked control systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 5–8, 2007.
  • [6] I. Karafyllis and M. Krstic, “Nonlinear stabilization under sampled and delayed measurements, and with inputs subject to delay and zero-order hold,” IEEE Trans. Autom. Control, vol. 57, no. 5, pp. 1141–1154, 2012.
  • [7] M. Donkers, W. Heemels, N. Van De Wouw, and L. Hetel, “Stability analysis of networked control systems using a switched linear systems approach,” IEEE Trans. Autom. Control, vol. 56, no. 9, pp. 2101–2115, 2011.
  • [8] D. Nešić and D. Liberzon, “A unified framework for design and analysis of networked and quantized control systems,” IEEE Trans. Autom. Control, vol. 54, no. 4, pp. 732–747, 2009.
  • [9] W. M. H. Heemels, A. R. Teel, N. van de Wouw, and D. Nešić, “Networked control systems with communication constraints: Tradeoffs between transmission intervals, delays and performance,” IEEE Trans. Autom. Control, vol. 55, no. 8, pp. 1781–1796, 2010.
  • [10] D. Carnevale, A. R. Teel, and D. Nešić, “A Lyapunov proof of an improved maximum allowable transfer interval for networked control systems,” IEEE Trans. Autom. Control, vol. 52, no. 5, p. 892, 2007.
  • [11] D. Nešić and A. R. Teel, “Input-output stability properties of networked control systems,” IEEE Trans. Autom. Control, vol. 49, no. 10, pp. 1650–1667, 2004.
  • [12] D. Antunes, J. Hespanha, and C. Silvestre, “Stability of networked control systems with asynchronous renewal links: An impulsive systems approach,” Automatica, vol. 49, no. 2, pp. 402–413, 2013.
  • [13] M. B. Cloosterman, L. Hetel, N. Van De Wouw, W. Heemels, J. Daafouz, and H. Nijmeijer, “Controller synthesis for networked control systems,” Automatica, vol. 46, no. 10, pp. 1584–1594, 2010.
  • [14] H. Gao and T. Chen, “Network-based $\mathcal{H}_{\infty}$ output tracking control,” IEEE Trans. Autom. Control, vol. 53, no. 3, pp. 655–667, 2008.
  • [15] S. Hirche, T. Matiakis, and M. Buss, “A distributed controller approach for delay-independent stability of networked control systems,” Automatica, vol. 45, no. 8, pp. 1828–1836, 2009.
  • [16] N. van de Wouw, P. Naghshtabrizi, M. Cloosterman, and J. P. Hespanha, “Tracking control for sampled-data systems with uncertain time-varying sampling intervals and delays,” International Journal of Robust and Nonlinear Control, vol. 20, no. 4, pp. 387–411, 2010.
  • [17] H. Zhang, Y. Shi, and M. Liu, “$\mathcal{H}_{\infty}$ step tracking control for networked discrete-time nonlinear systems with integral and predictive actions,” IEEE Trans. Ind. Informat., vol. 9, no. 1, pp. 337–345, 2013.
  • [18] R. Postoyan, N. Van de Wouw, D. Nešić, and W. M. H. Heemels, “Tracking control for nonlinear networked control systems,” IEEE Trans. Autom. Control, vol. 59, no. 6, pp. 1539–1554, 2014.
  • [19] J. B. Biemond, N. van de Wouw, W. H. Heemels, and H. Nijmeijer, “Tracking control for hybrid systems with state-triggered jumps,” IEEE Trans. Autom. Control, vol. 58, no. 4, pp. 876–890, 2013.
  • [20] J. Lian and Y. Ge, “Robust $\mathcal{H}_{\infty}$ output tracking control for switched systems under asynchronous switching,” Nonlinear Analysis: Hybrid Systems, vol. 8, pp. 57–68, 2013.
  • [21] M. J. Grimble, “Nonlinear generalized minimum variance feedback, feedforward and tracking control,” Automatica, vol. 41, no. 6, pp. 957–969, 2005.
  • [22] N. Van De Wouw, P. Naghshtabrizi, M. Cloosterman, and J. P. Hespanha, “Tracking control for networked control systems,” in IEEE Conference on Decision and Control. IEEE, 2007, pp. 4441–4446.
  • [23] E. Garcia, G. Vitaioli, and P. J. Antsaklis, “Model-based tracking control over networks,” in 2011 IEEE International Conference on Control Applications (CCA). IEEE, 2011, pp. 1226–1231.
  • [24] C. Cai and A. R. Teel, “Characterizations of input-to-state stability for hybrid systems,” Syst. Control Lett., vol. 58, no. 1, pp. 47–53, 2009.
  • [25] R. Goebel and A. R. Teel, “Solutions to hybrid inclusions via set and graphical convergence with stability theory applications,” Automatica, vol. 42, no. 4, pp. 573–587, 2006.
  • [26] D. Liberzon, “On stabilization of linear systems with limited information,” IEEE Trans. Autom. Control, vol. 48, no. 2, pp. 304–307, 2003.
  • [27] R. W. Brockett and D. Liberzon, “Quantized feedback stabilization of linear systems,” IEEE Trans. Autom. Control, vol. 45, no. 7, pp. 1279–1289, 2000.
  • [28] D. Liberzon, “Hybrid feedback stabilization of systems with quantized signals,” Automatica, vol. 39, no. 9, pp. 1543–1554, 2003.
  • [29] W. Ren and J. Xiong, “Tracking control of networked and quantized control systems,” in IEEE Conference on Decision and Control. IEEE, 2018, pp. 5844–5849.
  • [30] M. Tabbara, D. Nešić, and A. R. Teel, “Stability of wireless and wireline networked control systems,” IEEE Trans. Autom. Control, vol. 52, no. 9, pp. 1615–1630, 2007.
  • [31] K. Liu, E. Fridman, and K. H. Johansson, “Dynamic quantization of uncertain linear networked control systems,” Automatica, vol. 59, pp. 248–255, 2015.
  • [32] W. Wang, D. Nešić, and R. Postoyan, “Emulation-based stabilization of networked control systems implemented on FlexRay,” Automatica, vol. 59, pp. 73–83, 2015.
  • [33] S. van Loon, M. Donkers, N. van de Wouw, and W. Heemels, “Stability analysis of networked and quantized linear control systems,” Nonlinear Analysis: Hybrid Systems, vol. 10, pp. 111–125, 2013.
  • [34] D. Liberzon and D. Nešić, “Input-to-state stabilization of linear systems with quantized state measurements,” IEEE Trans. Autom. Control, vol. 52, no. 5, pp. 767–781, 2007.
  • [35] T. Kameneva and D. Nešić, “Robustness of quantized control systems with mismatch between coder/decoder initializations,” Automatica, vol. 45, no. 3, pp. 817–822, 2009.
  • [36] M. Fu and L. Xie, “The sector bound approach to quantized feedback control,” IEEE Trans. Autom. Control, vol. 50, no. 11, pp. 1698–1711, 2005.
  • [37] W. Heemels, D. Nešić, A. R. Teel, and N. van de Wouw, “Networked and quantized control systems with communication delays,” in Proceedings of the 48th IEEE Conference on Decision and Control. IEEE, 2009, pp. 7929–7935.
  • [38] M. Tabbara and D. Nešić, “Input-output stability with input-to-state stable protocols for quantized and networked control systems,” in Proceedings of IEEE Conference on Decision and Control. IEEE, 2008, pp. 2680–2685.
  • [39] A. Franci and A. Chaillet, “Quantised control of nonlinear systems: analysis of robustness to parameter uncertainty, measurement errors, and exogenous disturbances,” International Journal of Control, vol. 83, no. 12, pp. 2453–2462, 2010.
  • [40] C. M. Kellett, “A compendium of comparison function results,” Mathematics of Control, Signals, and Systems, vol. 26, no. 3, pp. 339–374, 2014.
  • [41] F. H. Clarke, Optimization and nonsmooth analysis. Philadelphia, PA: SIAM, 1990.
  • [42] F. Lewis, S. Jagannathan, and A. Yesildirak, Neural Network Control of Robot Manipulators and Nonlinear Systems. CRC Press, 1998.
  • [43] L. Sciavicco and B. Siciliano, Modelling and Control of Robot Manipulators. Springer Science & Business Media, 2012.
Wei Ren received his B.Sc. degree from Hubei University, China, and his Ph.D. degree from the University of Science and Technology of China, in 2011 and 2018, respectively. He was a joint Ph.D. student under the supervision of Professor Dragan Nešić at the University of Melbourne, Victoria, Australia. Currently, he is a postdoctoral researcher at KTH Royal Institute of Technology, Stockholm, Sweden. His research interests include networked control systems, nonlinear systems, symbolic abstraction, multi-agent systems and hybrid systems.
Junlin Xiong received his B.Eng. and M.Sc. degrees from Northeastern University, China, and his Ph.D. degree from the University of Hong Kong, China, in 2000, 2003 and 2007, respectively. From November 2007 to February 2010, he was a research associate at the University of New South Wales at the Australian Defence Force Academy, Australia. In March 2010, he joined the University of Science and Technology of China, where he is currently a Professor in the Department of Automation. His current research interests are in the fields of Markovian jump systems, networked control systems and negative imaginary systems.