
State Feedback Stabilization of Generic Logic Systems via Ledley Antecedence Solution

Yingzhe Jia, Student Member, IEEE, Daizhan Cheng, Fellow, IEEE, and Jun-e Feng. This work is supported partly by the National Natural Science Foundation of China (NSFC) under Grants 61773371, 61733018 and 61877036. Yingzhe Jia and Jun-e Feng are with the School of Mathematics, Shandong University, Jinan 250100, P. R. China (e-mail for Feng: fengjune@sdu.edu.cn, e-mail for Jia: yingzhe.jia@postgrad.manchester.ac.uk). Daizhan Cheng is with the Key Laboratory of Systems and Control, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, Beijing 100190, P. R. China (e-mail: dcheng@iss.ac.cn). Corresponding author: Jun-e Feng. Tel.: +86 531 88364652.
Abstract

In this paper, the application of Ledley antecedence solutions to the design of state feedback stabilizers for generic logic systems is proposed. To make the method feasible, two modifications are made to the original Ledley antecedence solution theory: (i) the preassigned logical functions are extended from a set of equations to an admissible set; (ii) the domain of arguments is extended from the whole state space to a restricted subset. In the proposed method, state feedback controls are regarded as a set of extended Ledley antecedence solutions of iteratively designed admissible sets over their corresponding restricted subsets. Based on this, an algorithm is proposed to verify solvability and, when the problem is solvable, to simultaneously provide all possible state feedback stabilizers. All stabilizers are time-optimal: they stabilize the logic system from any initial state to the destination state/state set in the shortest time. The method is first demonstrated on Boolean control networks for point stabilization. Then, with some minor modifications, the proposed method is shown to be applicable to set stabilization problems as well. Finally, it is shown that the proposed method remains effective for k-valued and mix-valued logical systems.

{IEEEkeywords}

Boolean control network, Stabilization, Ledley antecedence/consequence solution, State feedback stabilizer, Semi-tensor product of matrices.

1 Introduction

\IEEEPARstart

The problem of stability and stabilization is one of the most important issues in studying dynamic (control) systems. It is also a fundamental topic for Boolean (control) networks as well as for k-valued or mix-valued (control) networks. For ease of statement, in the following text a logical network may be a Boolean, k-valued, or mix-valued network. The systematic investigation of the stability of logical networks may be traced back to F. Robert [27], where it was called the convergence of discrete iterations. The vector metric was introduced there and used to provide some sufficient conditions for the convergence of discrete iterations.

The Boolean network (BN) was first introduced by Kauffman [14, 15] to formulate cellular networks. To manipulate BNs, the Boolean control network (BCN) emerged naturally [13]. Since then, stability and stabilization have become an interesting and challenging topic. Later on, Cheng and his colleagues proposed a new matrix product called the semi-tensor product (STP) of matrices, and used STP to convert a logical network into an algebraic form called the algebraic state space representation (ASSR) of logical networks [2, 5]. Motivated by ASSR, the investigation of Boolean (control) networks has developed quickly. Many fundamental control problems about logical networks have been studied, for instance, controllability and observability [1, 8], state space decomposition and disturbance decoupling [4, 11], optimization and optimal control [31, 9], with some applications to finite automata [28, 29] and coding [33], just to mention a few. A general outline of recent developments on logical networks via STP/ASSR and its applications can be found in several survey papers [10, 22, 24, 21, 30].

The stability of logical networks has also been widely studied via ASSR. Three basic approaches have been developed: (i) incidence-matrix-based stability analysis [3]; (ii) transition-matrix-based stability analysis [2, 7]; (iii) Lyapunov-function-based analysis [19]. Of course, stability is closely related to the stabilization of logical control networks, because stabilization is to find a suitable controller, commonly called a stabilizer, which makes the controlled system stable.

Various stabilization problems have also been investigated via STP and ASSR. For instance, set stability and stabilization were first proposed and investigated by [12]; stability and stabilization of BCNs with delay have been discussed by several authors, say [23]; state feedback stabilization with a design technique was proposed by [20]; partial stability and stabilization were investigated by [6]; robust stability and stabilization of BCNs with disturbances were considered in [32].

This paper investigates the state feedback stabilization of generic logic systems. The technique proposed in this paper is based on the Ledley antecedence solution. The antecedence/consequence solution of a set of logical equations was first proposed by R.S. Ledley [17, 18]. It has been applied to several logical problems such as error diagnosis of disease, digital computational methods in symbolic logic, digital circuit design, etc. We also refer to [16] for a systematic description. It was first applied to the design of BCNs by [26]. Using STP, some useful formulas were developed in [26] to calculate antecedence/consequence solutions. The method of Ledley antecedence/consequence solutions has also been used to design controls for singular Boolean networks [25].

It is interesting that the Ledley antecedence solution may provide a suitable state feedback stabilizer in a very natural and convenient way. To this end, two generalizations of the Ledley antecedence solution have been made: (i) the Ledley antecedence solution was originally defined with respect to a set of equalities, in which the preassigned functions are required to equal prescribed constants; in this paper, the constants are replaced by an admissible set. (ii) Originally, the domain of arguments of a Ledley antecedence solution is the whole state space; in this paper it is restricted to a subset.

The main advantages of this new approach are: (i) The algorithm is necessary and sufficient: it verifies whether the problem is solvable, and simultaneously produces a state feedback stabilizer whenever the problem is solvable. (ii) The stabilizer provides an optimal solution, which means each point reaches its destination point/set in the shortest time. (iii) All feasible state feedback stabilizers can be found with one search. (iv) With a mild revision, the method is applicable to set stabilization. In addition, the computations involved are easy and straightforward.

Before ending this section, we give a brief list of notations:

  1. {\mathcal{M}}_{m\times n}: the set of m\times n real matrices.

  2. \operatorname{Col}(M) (\operatorname{Row}(M)): the set of columns (rows) of M. \operatorname{Col}_{i}(M) (\operatorname{Row}_{i}(M)): the i-th column (row) of M.

  3. {\mathcal{D}}:=\{0,1\}.

  4. {\mathcal{D}}_{k}:=\left\{0,\frac{1}{k-1},\cdots,\frac{k-2}{k-1},1\right\}, k\geq 2.

  5. \delta_{n}^{i}: the i-th column of the identity matrix I_{n}.

  6. \Delta_{n}:=\left\{\delta_{n}^{i}\,|\,i=1,\cdots,n\right\}; \Delta:=\Delta_{2}.

  7. {\bf 0}_{\ell}:=(\underbrace{0,0,\cdots,0}_{\ell})^{\mathrm{T}}.

  8. A matrix L\in{\mathcal{M}}_{m\times n} is called a logical matrix if \operatorname{Col}(L)\subset\Delta_{m}. Denote by {\mathcal{L}}_{m\times n} the set of m\times n logical matrices.

  9. If L\in{\mathcal{L}}_{n\times r}, by definition it can be expressed as L=[\delta_{n}^{i_{1}},\delta_{n}^{i_{2}},\cdots,\delta_{n}^{i_{r}}]. For the sake of compactness, it is briefly denoted as L=\delta_{n}[i_{1},i_{2},\cdots,i_{r}].

  10. \delta_{n}\{a_{1},a_{2},\cdots,a_{r}\}:=\{\delta_{n}^{a_{1}},\delta_{n}^{a_{2}},\cdots,\delta_{n}^{a_{r}}\}.

  11. \ltimes: semi-tensor product of matrices.

  12. Let A\in{\mathcal{M}}_{p\times n} and B\in{\mathcal{M}}_{q\times n}. Then A*B\in{\mathcal{M}}_{pq\times n} is the Khatri-Rao product of matrices, satisfying [2]

      \operatorname{Col}_{j}(A*B)=\operatorname{Col}_{j}(A)\ltimes\operatorname{Col}_{j}(B),\quad j=1,\cdots,n.

  13. Denote by {\mathcal{B}}_{m\times n} the set of m\times n Boolean matrices.

  14. For A,B\in{\mathcal{M}}_{m\times n}, A\geq B means that A_{i,j}\geq B_{i,j}, \forall i,j.

  15. PR_{k}: the power-reducing matrix, defined as

      PR_{k}=\operatorname{diag}(\delta_{k}^{1},\delta_{k}^{2},\cdots,\delta_{k}^{k})\in{\mathcal{M}}_{k^{2}\times k}.

The rest of this paper is organized as follows: Section II provides some necessary preliminaries, consisting of two parts: (i) the STP of matrices and the matrix expression of logical functions; (ii) the Ledley antecedence/consequence solution of a set of logical functions. Section III generalizes the Ledley antecedence/consequence solution in two ways: (i) the set of equalities is generalized to a set of inclusions; (ii) the antecedence/consequence solution is generalized to a subset antecedence/consequence solution. Section IV uses the generalized antecedence solution to construct state feedback stabilizers for the stabilization of a BCN to a pre-assigned point. Section V discusses the stabilization to a pre-assigned subset; the algorithm for point stabilization is extended to a similar algorithm for set stabilization. Section VI provides an example to show that the technique developed is also applicable to k-valued or mix-valued logical networks. In Section VII, some concluding remarks are given.

2 Preliminaries

2.1 Matrix Expression of Logical Functions

We first recall STP, which is the fundamental tool for algebraic expression of logical functions.

Definition 2.1

[2] Let A\in{\mathcal{M}}_{m\times n}, B\in{\mathcal{M}}_{p\times q}, and the least common multiple \operatorname{lcm}(n,p)=t. Then the STP of A and B is defined by

A\ltimes B:=\left(A\otimes I_{t/n}\right)\left(B\otimes I_{t/p}\right), (1)

where \otimes is the Kronecker product.

Note that the STP is a generalization of the classical matrix product. That is, when n=p, the STP degenerates to the classical matrix product. Throughout this paper the default matrix product is the STP, and in most cases the symbol \ltimes is omitted.
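Definition 2.1 translates directly into a few lines of code. The following is a minimal illustrative sketch (ours, not part of the paper) of the STP in (1) using NumPy; the function name stp is our own choice.

```python
import numpy as np

def stp(A, B):
    """Semi-tensor product of A and B, following Definition 2.1."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n, p = A.shape[1], B.shape[0]
    t = np.lcm(n, p)                       # least common multiple of n and p
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When n = p, the STP reduces to the ordinary matrix product:
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
assert np.allclose(stp(A, B), A @ B)
```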

X is called a logical variable if X\in{\mathcal{D}}. The vector expression of X, denoted by x=\vec{X}, is

x=\vec{X}:=\begin{bmatrix}X\\ 1-X\end{bmatrix}\in\Delta.

Note that as a convention, we always use capital letters, such as XiX_{i}, UjU_{j}, for logical variables, and use their lower case, such as xix_{i}, uju_{j}, for their vector forms.

Definition 2.2

A mapping \phi:{\mathcal{D}}\rightarrow{\mathcal{D}} is called a unary operator. A mapping \phi:{\mathcal{D}}^{2}\rightarrow{\mathcal{D}} is called a binary operator. A mapping \phi:{\mathcal{D}}^{n}\rightarrow{\mathcal{D}} is called a (classical) logical function of n variables. A classical logical function is also called a Boolean function.

So the unary operators are one-variable Boolean functions, and the binary operators are two-variable Boolean functions. In vector form, for each operator there is a logical matrix such that the logical expression can be converted into an algebraic expression, called the ASSR expression.

Consider the unary operator negation (\neg):

\neg x=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}x=\delta_{2}[2,1]x:=M_{n}x, (2)

where M_{n}\in{\mathcal{L}}_{2\times 2} is called the structure matrix of negation.

Similarly, for a binary operator \sigma, we can also find its structure matrix M_{\sigma}\in{\mathcal{L}}_{2\times 4}, such that

x\sigma y=M_{\sigma}xy.

For instance (a worked evaluation follows the list below),

  • conjunction (\wedge):

    M_{c}=\delta_{2}[1,2,2,2]. (3)

  • disjunction (\vee):

    M_{d}=\delta_{2}[1,1,1,2]. (4)

  • conditional (\rightarrow):

    M_{i}=\delta_{2}[1,2,1,1]. (5)

  • biconditional (\leftrightarrow):

    M_{e}=\delta_{2}[1,2,2,1]. (6)

  • exclusive or (\bar{\vee}):

    M_{p}=\delta_{2}[2,1,1,2]. (7)
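As a quick worked check of the formula x\sigma y=M_{\sigma}xy (using the identification 1\sim\delta_{2}^{1}, 0\sim\delta_{2}^{2} from the vector expression above), take X=1, Y=0, i.e., x=\delta_{2}^{1}, y=\delta_{2}^{2}. Then x\ltimes y=\delta_{4}^{2}, so

M_{c}xy=\operatorname{Col}_{2}(M_{c})=\delta_{2}^{2}~(X\wedge Y=0),\quad M_{d}xy=\operatorname{Col}_{2}(M_{d})=\delta_{2}^{1}~(X\vee Y=1),\quad M_{i}xy=\operatorname{Col}_{2}(M_{i})=\delta_{2}^{2}~(X\rightarrow Y=0).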
Proposition 2.3

[2] Let F:{\mathcal{D}}^{n}\rightarrow{\mathcal{D}} be a Boolean function. Then there exists a unique matrix M_{F}\in{\mathcal{L}}_{2\times 2^{n}}, called the structure matrix of F, such that in vector form we have

f(x_{1},\cdots,x_{n})=M_{F}\ltimes_{i=1}^{n}x_{i}, (8)

where f is the vector form of F.

Proposition 2.4

Let x=\delta_{k}^{s}\in\Delta_{k}. Then

x^{2}=PR_{k}x. (9)
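A small illustrative sketch (ours, not the paper's) constructing PR_{k} and checking (9) numerically; for column vectors the STP coincides with the Kronecker product, which the code exploits.

```python
import numpy as np

def delta(k, i):
    """delta_k^i: the i-th column of I_k, as a k x 1 vector (1-indexed)."""
    v = np.zeros((k, 1))
    v[i - 1, 0] = 1.0
    return v

def power_reducing(k):
    """PR_k = diag(delta_k^1, ..., delta_k^k), a k^2 x k logical matrix."""
    M = np.zeros((k * k, k))
    for i in range(k):
        M[i * k + i, i] = 1.0
    return M

k = 4
for i in range(1, k + 1):
    x = delta(k, i)
    # x^2 = x |x| x = kron(x, x) for column vectors, which should equal PR_k x
    assert np.allclose(np.kron(x, x), power_reducing(k) @ x)
```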

2.2 Ledley Antecedence/Consequence Solution

The antecedence and consequence solutions of a given logical function were proposed and investigated firstly by Ledley [17, 18]. A systematic description with applications can be found in [16]. The following definition comes from [26], which is a generalization of the original one given in [16].

Definition 2.5

Given a set of Boolean equations

F_{i}(X_{1},\cdots,X_{n},U_{1},\cdots,U_{m})=C_{i},\quad i=1,\cdots,s, (10)

where C_{i}\in{\mathcal{D}} are constants. A set of logical functions

U_{j}=G_{j}(X_{1},X_{2},\cdots,X_{n}),\quad j=1,2,\cdots,m (11)
  • (i)

    is called an antecedence solution of (10), if (11) implies (10);

  • (ii)

    is called a consequence solution of (10), if (10) implies (11).

Note that, in general, for a given set of Boolean equations with Z=\{Z_{1},Z_{2},\cdots,Z_{q}\} as their independent variables, an arbitrary partition

Z=Z^{1}\bigcup Z^{2}

may be considered. The antecedence/consequence solution may then have the form

Z_{s}=G_{s}(Z^{1}),\quad Z_{s}\in Z^{2}.

For our purpose, (11) is enough for control design.

Using the vector form expressions x_{i}=\vec{X}_{i}, i=1,2,\cdots,n, u_{j}=\vec{U}_{j}, j=1,2,\cdots,m, and denoting x=\ltimes_{i=1}^{n}x_{i}, u=\ltimes_{j=1}^{m}u_{j}, the algebraic forms of (10) and (11) are (12) and (13), respectively:

M_{F}ux=c, (12)
u=M_{G}x. (13)

Based on the algebraic form (12), [26] proposed a matrix, called the truth matrix, to describe the corresponding relation between x and u. Roughly speaking, the truth matrix shows when x and u are consistent. We use a simple example to describe this.

Example 2.6

Given

\begin{array}{l}F_{1}(X_{1},X_{2},U)=(X_{1}\vee U)\rightarrow X_{2}=\delta_{2}^{1},\\ F_{2}(X_{1},X_{2},U)=(X_{1}\wedge U)=\delta_{2}^{2},\end{array} (16)

it is easy to calculate that

M_{F}=\delta_{4}[1,3,2,4,2,4,2,2], (17)

and c=\delta_{4}^{2}. Next, we construct a matrix whose columns represent the different values of x and whose rows represent the different values of u. Each entry indicates whether equation (12) holds for the corresponding pair (u,x): if it is "true", the entry is 1; if it is "false", the entry is 0. This gives Table 1 (a numerical reconstruction of this matrix is sketched after the table). We also say that the truth matrix is

T=\begin{bmatrix}0&0&1&0\\ 1&0&1&1\end{bmatrix}. (18)
Table 1: Truth Matrix
u\backslash x  \delta_{4}^{1}  \delta_{4}^{2}  \delta_{4}^{3}  \delta_{4}^{4}
\delta_{2}^{1}  0  0  1  0
\delta_{2}^{2}  1  0  1  1
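The construction of Table 1 is easy to mechanize. Below is an illustrative sketch (ours, with helper names of our own choosing) that rebuilds the truth matrix (18) from M_F in (17) and c=\delta_{4}^{2}.

```python
import numpy as np

# M_F = delta_4[1,3,2,4,2,4,2,2]; columns are ordered by u first, then x, as in (12)
MF_cols = [1, 3, 2, 4, 2, 4, 2, 2]
c = 2                                    # c = delta_4^2

T = np.zeros((2, 4), dtype=int)
for j in range(2):                       # u = delta_2^{j+1}
    for i in range(4):                   # x = delta_4^{i+1}
        # M_F u x picks column 4*j + i (0-indexed) of M_F
        T[j, i] = int(MF_cols[4 * j + i] == c)

print(T)    # [[0 0 1 0]
            #  [1 0 1 1]]   -- the truth matrix (18)
```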

The following result is essential for verifying antecedence/consequence solutions; a numerical check based on it is sketched after the theorem.

Theorem 2.7

[26] Assume the truth matrix of equations (10) is T, and the set of equations (11) has its algebraic form (13). Then

  • (i)

    (11) is an antecedence solution of (10), if and only if,

    M_{G}\leq T, (19)

  • (ii)

    (11) is a consequence solution of (10), if and only if,

    T\leq M_{G}. (20)
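Conditions (19) and (20) are entrywise comparisons, so they are straightforward to test numerically. A small illustrative sketch (ours) using the truth matrix (18):

```python
import numpy as np

T = np.array([[0, 0, 1, 0],
              [1, 0, 1, 1]])            # truth matrix (18) of Example 2.6

def is_antecedence(MG, T):
    """Condition (19): M_G <= T entrywise."""
    return bool(np.all(MG <= T))

def is_consequence(MG, T):
    """Condition (20): T <= M_G entrywise."""
    return bool(np.all(T <= MG))

# An antecedence solution over the whole state space requires every column of T
# to contain a 1, since every column of a logical matrix M_G has exactly one 1.
# Column 2 of (18) is zero, so no such solution exists for Example 2.6.
print(bool(np.all(T.sum(axis=0) > 0)))   # False
```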

3 Generalized Antecedence/Consequence Solutions

By definition, the antecedence solution generates a set of state feedback controls

U_{j}=G_{j}(X_{1},X_{2},\cdots,X_{n}),\quad j=1,2,\cdots,m.

To make this set of state feedback controls meet our requirements, we need to generalize the previous concepts about antecedence solutions in order to design stabilizers.

First, we replace the single constant vector C=(C_{1},C_{2},\cdots,C_{s}) by a set. Assume there exists a set \Omega\subset{\mathcal{D}}^{s}, called the admissible set. Replacing the constants, (10) is said to hold as long as C=(C_{1},C_{2},\cdots,C_{s})\in\Omega. Then we can construct a truth matrix T_{\Omega}. Replacing T by T_{\Omega} in Theorem 2.7, it is easy to verify that Theorem 2.7 remains valid.

Second, we may consider a subset of states, W\subset{\mathcal{D}}^{n}, called the restricted set. The concept of antecedence/consequence solutions can be generalized as follows:

Definition 3.1

Given a set of Boolean equations (10), where C=(C_{1},C_{2},\cdots,C_{s})\in\Omega\subset{\mathcal{D}}^{s}. Consider a set of functions

U_{j}=G_{j}(X_{1},X_{2},\cdots,X_{n}),\quad j=1,2,\cdots,m. (21)

  • (i)

    (21) is called a subset antecedence solution of (10) with respect to W, if, as long as (X_{1},\cdots,X_{n})\in W, (21) implies (10);

  • (ii)

    (21) is called a subset consequence solution of (10) with respect to W, if, as long as (X_{1},\cdots,X_{n})\in W, (10) implies (21).

Note that in T_{\Omega} each column corresponds to a distinct state X. The restriction of T_{\Omega} to W, denoted by \left.T_{\Omega}\right|_{W}, is obtained by deleting the columns corresponding to X\not\in W.

Definition 3.2
  • (i)

    Assume f is a subset antecedence/consequence solution with respect to W, and g is a subset antecedence/consequence solution with respect to W^{\prime}. f is said to be superior to g, denoted by f\succ g (or g\prec f), if W^{\prime}\subset W and

    f|_{W^{\prime}}=g|_{W^{\prime}}.

  • (ii)

    Assume f is a W-antecedence/consequence solution. f is said to be a maximum antecedence/consequence solution if, whenever g is a W^{\prime}-antecedence/consequence solution and g\succ f, then W^{\prime}=W and g|_{W}=f|_{W}.

Note that if a W-subset antecedence/consequence solution f is a maximum antecedence/consequence solution, then W is completely determined by T (precisely speaking, by the non-zero columns of T), and is independent of f. Hence, W is called a maximum (restricted) subset.

Using the same argument as in the proof of Theorem 2.7, it is straightforward to prove the following result.

Theorem 3.3

Assume the truth matrix of equations (10) with respect to \Omega\subset{\mathcal{D}}^{s} is T_{\Omega}, and a state subset W\subset{\mathcal{D}}^{n} is given. Then

  • (i)

    (11) is a subset antecedence solution of (10) with respect to the subset W, if and only if,

    \left.M_{G}\right|_{W}\leq\left.T_{\Omega}\right|_{W}. (22)

  • (ii)

    (11) is a subset consequence solution of (10) with respect to the subset W, if and only if,

    \left.T_{\Omega}\right|_{W}\leq\left.M_{G}\right|_{W}. (23)

A subset antecedence/consequence solution with respect to subset WW is briefly called WW-antecedence/consequence solution.

Example 3.4

Recall Example 2.6; the claims below can also be checked numerically (see the sketch after this example).

  • (i)

    Since the truth matrix of (16) is (18), whose second column is zero, it follows from Theorem 2.7 that (16) has no antecedence solution.

  • (ii)

    Let W=\{\delta_{4}^{1},\delta_{4}^{3},\delta_{4}^{4}\}, which is a maximum subset. Set u=\delta_{2}[2,1,2,2]x. According to Theorem 3.3, this u is a W-antecedence solution, which is a maximum subset antecedence solution.

  • (iii)

    Let W^{\prime}=\{\delta_{4}^{1},\delta_{4}^{3}\}. Then u=\delta_{2}[2,1,2,1]x is a W^{\prime}-antecedence solution. It is also clear that this u is not a maximum solution.
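A quick numerical check of item (ii) above, in the same illustrative style as before (condition (22) of Theorem 3.3):

```python
import numpy as np

T = np.array([[0, 0, 1, 0],
              [1, 0, 1, 1]])                 # truth matrix (18)

MG = np.zeros((2, 4), dtype=int)
for col, row in enumerate([2, 1, 2, 2]):     # M_G = delta_2[2,1,2,2]
    MG[row - 1, col] = 1

W = [0, 2, 3]                                # columns for W = {delta_4^1, delta_4^3, delta_4^4}
print(bool(np.all(MG[:, W] <= T[:, W])))     # True: (22) holds, so u is a W-antecedence solution
```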

4 Design of State Feedback Stabilizer

Definition 4.1

Consider a BCN

\begin{cases}X_{1}(t+1)=F_{1}(X_{1}(t),\cdots,X_{n}(t),U_{1}(t),\cdots,U_{m}(t)),\\ X_{2}(t+1)=F_{2}(X_{1}(t),\cdots,X_{n}(t),U_{1}(t),\cdots,U_{m}(t)),\\ \vdots\\ X_{n}(t+1)=F_{n}(X_{1}(t),\cdots,X_{n}(t),U_{1}(t),\cdots,U_{m}(t)),\end{cases} (24)

where X_{i}(t)\in{\mathcal{D}}, i=1,\cdots,n are state variables, U_{j}(t)\in{\mathcal{D}}, j=1,\cdots,m are controls, and F_{i}:{\mathcal{D}}^{m+n}\rightarrow{\mathcal{D}}, i=1,\cdots,n are Boolean functions.

Let X^{d}=(X^{d}_{1},X^{d}_{2},\cdots,X^{d}_{n})\in{\mathcal{D}}^{n} be a given state. The BCN is said to be state feedback stabilizable to X^{d} if there exists a state feedback control

\begin{cases}U_{1}(t)=G_{1}(X_{1}(t),\cdots,X_{n}(t)),\\ U_{2}(t)=G_{2}(X_{1}(t),\cdots,X_{n}(t)),\\ \vdots\\ U_{m}(t)=G_{m}(X_{1}(t),\cdots,X_{n}(t)),\end{cases} (25)

such that for the closed-loop network there exists a T\in\mathbb{Z}_{\geq 0} such that X(t)=X^{d} for all t\geq T.

Definition 4.2

Consider BCN (24). A state X^{d}\in{\mathcal{D}}^{n} is called a control fixed point if there exists a control U=(U_{1},U_{2},\cdots,U_{m}) such that

F_{i}(X^{d},U)=X^{d}_{i},\quad i=1,2,\cdots,n. (26)
Remark 4.3

It is obvious that (26) is equivalent to: there exists a state feedback control U(X) such that

F_{i}(X^{d},U(X^{d}))=X^{d}_{i},\quad i=1,2,\cdots,n. (27)

In fact, for a BCN, a free control sequence is equivalent to a state feedback control. Obviously, a state feedback control can be realized by a free control sequence. Conversely, since a BCN has only finitely many states, whatever a free control sequence achieves can also be achieved by selecting a control U at each state X\in{\mathcal{D}}^{n}; the resulting assignment can then be described as a state feedback control U(X).

The following necessary condition is obvious.

Proposition 4.4

Assume BCN (24) is stabilizable to X^{d}; then X^{d} is a control fixed point.

Next, we provide an algorithm, which gives a constructive procedure for the stabilizer; an illustrative implementation is sketched after the algorithm.

Algorithm 4.5
  • Step 0 (Initial Step): Set the initial object \Omega(0)=W_{0}:=\{(C_{1},C_{2},\cdots,C_{n})\}, where X^{d}=(C_{1},C_{2},\cdots,C_{n}) is the destination point to which the BCN is to be stabilized. Construct T_{\Omega(0)}, use it to find the maximum set W_{1} with respect to \Omega(0), and check whether X^{d}\in W_{1}. If X^{d}\not\in W_{1}, then X^{d} is not a control fixed point, the corresponding state feedback stabilization is not solvable, and the algorithm stops.

  • Step k (Repeated Step, k\geq 1): Set \Omega(k)=W_{k}\backslash\bigcup\limits_{i=0}^{k-1}W_{i}.

    • (i)

      If \Omega(k)=\emptyset, then the problem is not solvable. Stop.

    • (ii)

      Else (i.e., \Omega(k)\neq\emptyset), check whether

      \bigcup_{i=0}^{k}\Omega(i)=\Delta_{2^{n}}. (28)

      If (28) holds, the stabilization problem is solvable; set the last k as k^{*} and go to the Final Step. Else (i.e., (28) does not hold), construct T_{\Omega(k)}, use it to find the maximum set W_{k+1} with respect to \Omega(k), and go to Step k+1.

  • Final Step (Stabilizer Constructing Step): Construct u(t)=M_{G}x(t) from the following inequalities:

    \left.M_{G}\right|_{\Omega(i)}\leq\begin{cases}\left.T_{\Omega(0)}\right|_{\Omega(0)},&i=0,\\ \left.T_{\Omega(i-1)}\right|_{\Omega(i)},&i=1,\cdots,k^{*}.\end{cases} (29)
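The following is an illustrative Python sketch of Algorithm 4.5 (our own implementation, not the paper's code). It assumes M_F is given in condensed form as a list of delta-indices, with column 2^n(j-1)+i corresponding to u=\delta_{2^m}^{j}, x=\delta_{2^n}^{i}, as in (12). When the problem is solvable it returns one time-optimal feedback; where multiple choices exist it simply takes the first feasible control, so the result may differ from a particular stabilizer such as (86) in Example 8 below while still satisfying (29).

```python
def state_feedback_stabilizer(MF, N, M, xd):
    """MF: list of length N*M; MF[N*j + i] (0-indexed) is the 1-indexed index of
    the next state for u = delta_M^{j+1}, x = delta_N^{i+1}.
    xd: 1-indexed index of the destination state.
    Returns a list MG with u = delta_M[MG] x, or None if not stabilizable."""
    def truth(omega):                         # truth matrix T_Omega as nested lists
        return [[1 if MF[N * j + i] in omega else 0 for i in range(N)]
                for j in range(M)]

    def max_set(T):                           # non-zero columns = maximum restricted subset
        return {i + 1 for i in range(N) if any(T[j][i] for j in range(M))}

    omegas = [{xd}]                           # Omega(0) = {x^d}
    Ts = [truth(omegas[0])]                   # T_{Omega(0)}
    if xd not in max_set(Ts[0]):              # x^d is not a control fixed point
        return None
    reached = {xd}
    while len(reached) < N:                   # until (28) holds
        Wk = max_set(Ts[-1])                  # maximum set w.r.t. the last Omega
        new = Wk - reached                    # Omega(k)
        if not new:
            return None                       # not stabilizable
        omegas.append(new)
        reached |= new
        Ts.append(truth(new))                 # T_{Omega(k)}, used in the next sweep

    MG = [0] * N                              # build one feedback satisfying (29)
    for k, omega in enumerate(omegas):
        Tprev = Ts[0] if k == 0 else Ts[k - 1]
        for i in omega:
            MG[i - 1] = next(j + 1 for j in range(M) if Tprev[j][i - 1])
    return MG
```

For instance, calling state_feedback_stabilizer(MF, 16, 4, 3) with M_F from (37) below terminates after three sweeps and returns one of the 1024 time-optimal stabilizers discussed in Example 8.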
Theorem 4.6

BCN (24) is stabilizable to X^{d}, if and only if, Algorithm 4.5 goes through and provides a state feedback stabilizer.

Proof 4.1.

Starting from the first step, one easily sees that X\in\Omega(0) means there exists a W_{0}-antecedence solution, serving as a state feedback control, which assures F_{i}(X,U(X))=C_{i}, i=1,2,\cdots,n. That is, the state feedback control drives X\in\Omega(0) to C. A similar argument shows that at step k there is a state feedback control which, when restricted to \Omega(k), drives X\in\Omega(k) to \Omega(k-1). Eventually, if there exists a k^{*} such that (28) holds, then every point can be driven to C within k^{*} steps. So the success of the algorithm means the state feedback stabilization problem is solvable.

To see that the problem is not solvable when the algorithm fails, let k^{*} be the last step with \Omega(k^{*})\neq\emptyset and assume

\bigcup_{i=0}^{k^{*}}\Omega(i)\neq\Delta_{2^{n}}.

Then there exists an X\in\Delta_{2^{n}}\backslash\left\{\bigcup_{i=0}^{k^{*}}\Omega(i)\right\}, and it is clear that no state feedback can drive it to C.

Remark 7.

From the proof of Theorem 4.6 one easily sees that if the stabilization problem is solvable, then the state feedback stabilizers obtained from Algorithm 4.5 are "optimal" in the sense that each point X\in{\mathcal{D}}^{n} reaches the destination in the shortest time. If we do not require time-optimality, we may have more state feedback stabilizers. If in Algorithm 4.5 the \Omega(i) are replaced by W_{i}, then in addition to the time-optimal stabilizers, some non-time-optimal stabilizers are obtained; unfortunately, some state feedback controls which cannot stabilize the network are obtained as well.

We use an example to depict this algorithm.

Example 8

Consider the following BCN:

\begin{cases}X_{1}(t+1)=X_{2}(t)\vee U_{1}(t),\\ X_{2}(t+1)=X_{4}(t)\vee\left(U_{2}(t)\wedge X_{1}(t)\right),\\ X_{3}(t+1)=\left(X_{1}(t)\wedge X_{4}(t)\right)\bar{\vee}\left(\neg X_{3}(t)\right),\\ X_{4}(t+1)=\left(\neg X_{1}(t)\right)\leftrightarrow U_{2}(t).\end{cases} (30)

Is it possible to stabilize it to X^{d}=(1,1,0,1)?

Consider the following equation:

\begin{cases}F_{1}(X,U)=X_{2}\vee U_{1},\\ F_{2}(X,U)=X_{4}\vee\left(U_{2}\wedge X_{1}\right),\\ F_{3}(X,U)=\left(X_{1}\wedge X_{4}\right)\bar{\vee}\left(\neg X_{3}\right),\\ F_{4}(X,U)=\left(\neg X_{1}\right)\leftrightarrow U_{2}.\end{cases} (31)

Let the structure matrix of F_{i} be M_{i}, i=1,2,3,4. Then F:=F_{1}\ltimes F_{2}\ltimes F_{3}\ltimes F_{4} has its structure matrix

M_{F}=M_{1}*M_{2}*M_{3}*M_{4}=\delta_{16}[2,4,4,2,2,4,4,2,3,7,1,5,3,7,1,5,
1,7,3,5,1,7,3,5,4,8,2,6,4,8,2,6,
2,4,4,2,10,12,12,10,3,7,1,5,11,15,9,13,
1,7,3,5,9,15,11,13,4,8,2,6,12,16,10,14]. (37)
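The condensed matrix (37) can also be reproduced by direct enumeration. The sketch below (illustrative only, assuming the identification 1\sim\delta_{2}^{1}, 0\sim\delta_{2}^{2} and the column ordering of (12)) evaluates (31) for every pair (u,x):

```python
def bits(idx, length):
    """delta-index (1-based) -> tuple of Boolean values, most significant first."""
    b = idx - 1
    return tuple(1 - ((b >> (length - 1 - k)) & 1) for k in range(length))

def index(vals):
    """Tuple of Boolean values -> delta-index (1-based)."""
    out = 0
    for v in vals:
        out = 2 * out + (1 - v)
    return out + 1

MF = []
for j in range(1, 5):                    # u = delta_4^j
    U1, U2 = bits(j, 2)
    for i in range(1, 17):               # x = delta_16^i
        X1, X2, X3, X4 = bits(i, 4)
        F1 = X2 | U1
        F2 = X4 | (U2 & X1)
        F3 = (X1 & X4) ^ (1 - X3)        # exclusive or with the negation of X3
        F4 = int((1 - X1) == U2)         # biconditional
        MF.append(index((F1, F2, F3, F4)))

print(MF[:16])    # [2, 4, 4, 2, 2, 4, 4, 2, 3, 7, 1, 5, 3, 7, 1, 5] -- first block of (37)
```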

Then

\Omega(0)=W_{0}=\delta^{3}_{16}\sim\{C=(1,1,0,1)\}.

The truth matrix of \Omega(0) can then be obtained as

T_{\Omega(0)}=\left[\begin{array}{cccccccccccccccc}0&0&0&0&0&0&0&0&1&0&0&0&1&0&0&0\\ 0&0&1&0&0&0&1&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0\end{array}\right]. (42)

From (42) it is clear that

W_{1}=\delta_{16}\{3,7,9,13\}. (43)

Since (1,1,0,1)\sim\delta_{16}^{3}\in W_{1}, the objective state (1,1,0,1) is a control fixed point.

Next, set

\Omega(1)=W_{1}\backslash W_{0}=\delta_{16}\{7,9,13\}. (44)

We consider

(F_{1}(X,U),F_{2}(X,U),F_{3}(X,U),F_{4}(X,U))\in\Omega(1). (45)

The corresponding truth matrix can be obtained as

T_{\Omega(1)}=\left[\begin{array}{cccccccccccccccc}0&0&0&0&0&0&0&0&0&1&0&0&0&1&0&0\\ 0&1&0&0&0&1&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&1&0&0&0&0&1&1\\ 0&1&0&0&1&0&0&1&0&0&0&0&0&0&0&0\end{array}\right]. (50)

Then we have

W_{2}=\delta_{16}\{2,5,6,8,10,14,15,16\}, (51)
\Omega(2)=W_{2}\backslash\{W_{1}\cup W_{0}\}=\delta_{16}\{2,5,6,8,10,14,15,16\}. (54)

In the third step, we consider

(F_{1}(X,U),F_{2}(X,U),F_{3}(X,U),F_{4}(X,U))\in\Omega(2), (55)
T_{\Omega(2)}=\left[\begin{array}{cccccccccccccccc}1&0&0&1&1&0&0&1&0&0&0&1&0&0&0&1\\ 0&0&0&1&0&0&0&1&0&1&1&1&0&1&1&1\\ 1&0&0&1&1&0&0&1&0&0&0&1&0&1&0&0\\ 0&0&0&1&0&1&0&0&0&1&1&1&0&1&1&1\end{array}\right]. (60)

Then we have

W_{3}=\delta_{16}\{1,4,5,6,8,10,11,12,14,15,16\}. (61)

Hence,

\Omega(3):=W_{3}\backslash\{W_{2}\cup W_{1}\cup W_{0}\}=\delta_{16}\{1,4,11,12\}. (62)

Now, since

\Omega(0)\cup\Omega(1)\cup\Omega(2)\cup\Omega(3)=\Delta_{16},

the BCN (30) is state feedback stabilizable to (1,1,0,1)(1,1,0,1).

Finally, we construct the state feedback control. It should satisfy the following conditions:

\begin{array}{lr}U|_{\Omega(1)\cup\Omega(0)}\leq T_{\Omega(0)}|_{\Omega(1)\cup\Omega(0)}&~{}~{}~{}~{}~{}(a)\\ U|_{\Omega(2)}\leq T_{\Omega(1)}|_{\Omega(2)}&~{}~{}~{}~{}~{}(b)\\ U|_{\Omega(3)}\leq T_{\Omega(2)}|_{\Omega(3)}&~{}~{}~{}~{}~{}(c)\end{array} (66)

For (66)(a), one feasible choice is:

\begin{array}{l}u(\delta_{16}^{3})=\delta_{4}^{4}\in\delta_{4}\{2,4\},\\ u(\delta_{16}^{7})=\delta_{4}^{2},\\ u(\delta_{16}^{9})=\delta_{4}^{3}\in\delta_{4}\{1,3\},\\ u(\delta_{16}^{13})=\delta_{4}^{1}.\end{array} (71)

Note that \in\delta_{4}\{*,*\} indicates that multiple choices are available.

For (66)(b), one feasible choice is:

\begin{array}{l}u(\delta_{16}^{2})=\delta_{4}^{2}\in\delta_{4}\{2,4\},\\ u(\delta_{16}^{5})=\delta_{4}^{4},\\ u(\delta_{16}^{6})=\delta_{4}^{2},\\ u(\delta_{16}^{8})=\delta_{4}^{4},\\ u(\delta_{16}^{10})=\delta_{4}^{3}\in\delta_{4}\{1,3\},\\ u(\delta_{16}^{14})=\delta_{4}^{1},\\ u(\delta_{16}^{15})=\delta_{4}^{3},\\ u(\delta_{16}^{16})=\delta_{4}^{3}.\end{array} (80)

For (66)(c), one feasible choice is:

\begin{array}{l}u(\delta_{16}^{1})=\delta_{4}^{1}\in\delta_{4}\{1,3\},\\ u(\delta_{16}^{4})=\delta_{4}^{2}\in\delta_{4}\{1,2,3,4\},\\ u(\delta_{16}^{11})=\delta_{4}^{2}\in\delta_{4}\{2,4\},\\ u(\delta_{16}^{12})=\delta_{4}^{4}\in\delta_{4}\{1,2,3,4\}.\end{array} (85)

Putting (71)-(85) together, we have a state feedback stabilizer

u(t)=M_{G}x(t), (86)

where

M_{G}=\delta_{4}[1,2,4,2,4,2,2,4,3,3,2,4,1,1,3,3].

Then

M_{G}^{1}=(I_{2}\otimes{\bf 1}^{\mathrm{T}}_{2})M_{G}=\delta_{2}[1,1,2,1,2,1,1,2,2,2,1,2,1,1,2,2],
M_{G}^{2}=({\bf 1}^{\mathrm{T}}_{2}\otimes I_{2})M_{G}=\delta_{2}[1,2,2,2,2,2,2,2,1,1,2,2,1,1,1,1].

Then it is easy to figure out the state feedback stabilizer as

\begin{array}{ccl}U_{1}(t)&=&\left(X_{1}(t)\wedge X_{2}(t)\wedge X_{3}(t)\right)\vee\left(X_{1}(t)\wedge X_{2}(t)\wedge\neg X_{3}(t)\wedge\neg X_{4}(t)\right)\\ &&\vee\left(X_{1}(t)\wedge\neg X_{2}(t)\wedge X_{3}(t)\wedge\neg X_{4}(t)\right)\vee\left(X_{1}(t)\wedge\neg X_{2}(t)\wedge\neg X_{3}(t)\wedge X_{4}(t)\right)\\ &&\vee\left(\neg X_{1}(t)\wedge X_{2}(t)\wedge\neg X_{3}(t)\wedge X_{4}(t)\right)\vee\left(\neg X_{1}(t)\wedge\neg X_{2}(t)\wedge X_{3}(t)\right),\\ U_{2}(t)&=&\left(X_{1}(t)\wedge X_{2}(t)\wedge X_{3}(t)\wedge X_{4}(t)\right)\vee\left(\neg X_{1}(t)\wedge X_{2}(t)\wedge X_{3}(t)\right)\vee\left(\neg X_{1}(t)\wedge\neg X_{2}(t)\right).\end{array} (96)

To verify that this stabilizer works, we calculate the closed-loop BCN, which is

x(t+1)=M_{F}u(t)x(t)=M_{F}M_{G}x(t)x(t)=M_{F}M_{G}PR_{16}x(t):=M_{c}x(t), (99)

where

M_{c}=M_{F}M_{G}PR_{16}=\delta_{16}[2,7,3,5,9,7,3,13,3,7,2,6,3,7,9,13].

It is easy to verify that

M_{c}^{3}=\delta_{16}[3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3].

That is, after three steps all the trajectories of the closed-loop BCN converge to \delta_{16}^{3}\sim(1,1,0,1).
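Since M_F, M_G and PR_{16} are all logical matrices, the closed-loop check can be carried out with index arithmetic only. A short illustrative sketch (ours):

```python
MF = [2, 4, 4, 2, 2, 4, 4, 2, 3, 7, 1, 5, 3, 7, 1, 5,
      1, 7, 3, 5, 1, 7, 3, 5, 4, 8, 2, 6, 4, 8, 2, 6,
      2, 4, 4, 2, 10, 12, 12, 10, 3, 7, 1, 5, 11, 15, 9, 13,
      1, 7, 3, 5, 9, 15, 11, 13, 4, 8, 2, 6, 12, 16, 10, 14]   # (37)
MG = [1, 2, 4, 2, 4, 2, 2, 4, 3, 3, 2, 4, 1, 1, 3, 3]          # (86)

# Column i of M_c = M_F M_G PR_16 is Col_{16(u_i - 1) + i}(M_F), where u_i = Col_i(M_G)
Mc = [MF[16 * (MG[i] - 1) + i] for i in range(16)]
print(Mc)          # [2, 7, 3, 5, 9, 7, 3, 13, 3, 7, 2, 6, 3, 7, 9, 13]

states = list(range(1, 17))
for _ in range(3):
    states = [Mc[s - 1] for s in states]
print(states)      # sixteen 3's: M_c^3 = delta_16[3,...,3]
```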

The state transition graph of (99) is shown in Fig. 1, where the only attractor is the point \delta_{16}^{3}\sim(1,1,0,1).

Figure 1: State Transition Graph of (99)

Finally, it is worth noting that from (71)-(85) one easily sees that there are

2\times 2\times 2\times 2\times 2\times 4\times 2\times 4=1024

time-optimal feasible state feedback stabilizers.

5 Set Stabilization

Definition 1

Consider BCN (24). Let \mathcal{M}\subset{\mathcal{D}}^{n} be a given set of states. The BCN is said to be state feedback stabilizable to \mathcal{M} if there exists a state feedback control (25) such that for any initial state X_{0} of the closed-loop network, there exists a T\in\mathbb{Z}_{\geq 0} such that

X(t;X_{0},U(X))\in\mathcal{M},\quad\forall t\geq T. (100)
Definition 2

Consider BCN (24), and assume \mathcal{M}\subset{\mathcal{D}}^{n}. \mathcal{M} is said to be a control invariant set if for each state X_{0}\in\mathcal{M} there exists a control U=(U_{1},U_{2},\cdots,U_{m}) such that F(X_{0},U)\in\mathcal{M}.

To verify whether a state set \mathcal{M} is a control invariant set, we have the following result.

Proposition 3

Consider BCN (24) and assume a set of states \mathcal{M}\subset\mathcal{D}^{n} is given to be stabilized. Construct the corresponding truth matrix T_{\mathcal{M}}, and let the maximum set for T_{\mathcal{M}} be \Theta. Then \mathcal{M} is a control invariant set if

\mathcal{M}\cap\Theta=\mathcal{M}, (101)

in other words, if \mathcal{M}\subset\Theta.

Proof 5.1.

By definition, there exists at least one \Theta-antecedence solution U(X) such that

F(X_{0},U)\in\mathcal{M},\quad\forall X_{0}\in\Theta. (102)

Since \mathcal{M}\subset\Theta, it is then obvious that F(X_{0},U)\in\mathcal{M} for all X_{0}\in\mathcal{M}.

On the other hand, in the more general scenario where \mathcal{M}\not\subset\Theta yet \mathcal{M}\cap\Theta\neq\emptyset, we have the following remark.

Remark 4.

In the aforementioned scenario where \mathcal{M}\not\subset\Theta yet \mathcal{M}\cap\Theta\neq\emptyset, the control invariant set W_{0} within the state set \mathcal{M} can be determined by

W_{0}=\mathcal{M}\cap\Theta. (103)

The proof is the same as above and is omitted for brevity. In [12], the concept of the largest control invariant set is given, through which optimal state feedback controls can be designed. Based on the properties of the Ledley antecedence solution, it is clear that the control invariant set obtained from the maximum set of the truth matrix via (103) is the largest control invariant set. Similarly to the point stabilization case, we have the following algorithm; an illustrative sketch of its initial step is given after the algorithm.

Algorithm 5.
  • Step 0 (determining the largest control invariant set): Set the destination set \mathcal{M}, to which the BCN is to be stabilized. Construct T_{\mathcal{M}}, and use it to find the maximum set \Theta with respect to \mathcal{M}. Check whether \mathcal{M}\cap\Theta=\emptyset. If \mathcal{M}\cap\Theta=\emptyset, the corresponding state feedback stabilization is not solvable; stop the algorithm. Else, set W_{0}=\Omega(0)=\mathcal{M}\cap\Theta, calculate the corresponding truth matrix T_{W_{0}} and the maximum set W_{1}, set \Omega(1)=W_{1}\backslash W_{0}, and go to Step k=1.

  • Step kk (Repeated Step, k1k\geq 1) : Same as Step kk of Algorithm 4.5.

  • Final Step (Stabilizer Constructing Step): Same as Final Step of Algorithm 4.5.
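A small illustrative sketch of Step 0 (ours, reusing the condensed representation of M_F introduced earlier): it builds the truth matrix of the target set, extracts the maximum set \Theta, and intersects to obtain the largest control invariant subset W_{0}.

```python
def largest_invariant_subset(MF, N, M_ctrl, target):
    """Step 0: truth matrix of the target set (a set of 1-indexed states),
    its maximum set Theta, and W_0 = target intersected with Theta."""
    T = [[1 if MF[N * j + i] in target else 0 for i in range(N)]
         for j in range(M_ctrl)]
    theta = {i + 1 for i in range(N) if any(T[j][i] for j in range(M_ctrl))}
    return T, theta, target & theta

# With M_F from (37) and target = {6, 7, 12} (Example 7 below), this yields
# Theta = {2, 6, 7, 10, 12, 13, 14, 16} and W_0 = target, so the target set is
# control invariant and the repeated step of Algorithm 4.5 can start from W_0.
```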

The following result can be proved using a similar argument as for Theorem 4.6.

Theorem 6.

BCN (24) is stabilizable to W_{0}, if and only if, Algorithm 5 goes through and provides a state feedback stabilizer.

It is also worth noting that this algorithm provides the time-optimal stabilizers. In the following, an example is used to describe the design process.

Example 7

Recall Example 8, consider BCN (30), and set \mathcal{M}=\{(1,0,1,0)\sim\delta_{16}^{6},(1,0,0,1)\sim\delta_{16}^{7},(0,1,0,0)\sim\delta_{16}^{12}\}. This example investigates whether BCN (30) can be stabilized to \mathcal{M}.

First, construct the truth matrix T_{\mathcal{M}} corresponding to \mathcal{M}:

T_{\mathcal{M}}=\left[\begin{array}{cccccccccccccccc}0&0&0&0&0&0&0&0&0&1&0&0&0&1&0&0\\ 0&1&0&0&0&1&0&0&0&0&0&1&0&0&0&1\\ 0&0&0&0&0&1&1&0&0&1&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0&0&0&0&1&1&0&0&0\end{array}\right].

The maximum set is

\Theta=\delta_{16}\left\{2,6,7,10,12,13,14,16\right\}.

Since \mathcal{M}\cap\Theta=\mathcal{M}, the set \mathcal{M} is a control invariant set. For easier understanding, denote W_{0}=\Omega(0)=\mathcal{M} and W_{1}=\Theta, and set \Omega(1):=W_{1}\backslash W_{0}. Then it is easy to calculate that

T_{\Omega(1)}=\left[\begin{array}{cccccccccccccccc}1&0&0&1&1&0&0&1&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&1&0&0&0&1&0\\ 1&0&0&1&1&0&0&1&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&1&0&0&1&0&0&1&1&1\end{array}\right].

Then W_{2}=\delta_{16}\{1,4,5,8,11,14,15,16\}, and set

\Omega(2):=W_{2}\backslash(W_{1}\cup W_{0})=\delta_{16}\{1,4,5,8,11,15\}.

We calculate that

T_{\Omega(2)}=\left[\begin{array}{cccccccccccccccc}0&1&1&0&0&1&1&0&0&0&1&1&0&0&1&1\\ 1&0&0&1&1&0&0&1&1&1&0&0&1&1&0&0\\ 0&1&1&0&0&0&0&0&0&0&1&1&1&1&0&0\\ 1&0&0&1&0&1&1&0&1&1&0&0&0&0&0&0\end{array}\right].

One sees that

W_{3}=\delta_{16}\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16\},

and

\Omega(3)=W_{3}\backslash(W_{2}\cup W_{1}\cup W_{0})=\delta_{16}\{3,9\}.

Since

\Omega(0)\cup\Omega(1)\cup\Omega(2)\cup\Omega(3)=\Delta_{16},

we conclude that BCN (30) is state feedback stabilizable to \mathcal{M}.

Afterwards, we construct the state feedback stabilizer.

Since U(X) for X\in\Omega(0)\cup\Omega(1) is determined by T_{\Omega(0)}, we choose one feasible solution as follows:

\begin{array}{l}u(\delta_{16}^{2})=\delta_{4}^{4}\in\delta_{4}\{2,4\},\\ u(\delta_{16}^{6})=\delta_{4}^{3}\in\delta_{4}\{2,3\},\\ u(\delta_{16}^{7})=\delta_{4}^{3},\\ u(\delta_{16}^{10})=\delta_{4}^{1}\in\delta_{4}\{1,3\},\\ u(\delta_{16}^{12})=\delta_{4}^{4}\in\delta_{4}\{2,4\},\\ u(\delta_{16}^{14})=\delta_{4}^{1},\\ u(\delta_{16}^{16})=\delta_{4}^{2}.\end{array} (111)

Using T_{\Omega(1)}, we can determine U(X) for X\in\Omega(2), that is,

\begin{array}{l}u(\delta_{16}^{1})=\delta_{4}^{1}\in\delta_{4}\{1,3\},\\ u(\delta_{16}^{4})=\delta_{4}^{1}\in\delta_{4}\{1,3\},\\ u(\delta_{16}^{5})=\delta_{4}^{1}\in\delta_{4}\{1,3\},\\ u(\delta_{16}^{8})=\delta_{4}^{4}\in\delta_{4}\{1,3,4\},\\ u(\delta_{16}^{11})=\delta_{4}^{2}\in\delta_{4}\{2,4\},\\ u(\delta_{16}^{15})=\delta_{4}^{4}\in\delta_{4}\{2,4\}.\end{array} (118)

Using T_{\Omega(2)}, we can determine U(X) for X\in\Omega(3) as

\begin{array}{l}u(\delta_{16}^{3})=\delta_{4}^{3}\in\delta_{4}\{1,3\},\\ u(\delta_{16}^{9})=\delta_{4}^{2}\in\delta_{4}\{2,4\}.\end{array} (121)

Assume

u(t)=M_{G}x(t),

where M_{G}\in{\mathcal{L}}_{4\times 16}. Summarizing the choices (111)-(121), a feasible control is

M_{G}=\delta_{4}[1,2,3,1,1,3,3,4,2,1,2,4,4,1,4,2]. (122)

We conclude that (122) provides a state feedback stabilizer for BCN (30).

Finally, we may check the closed-loop system. It is

x(t+1)=M_{F}u(t)x(t)=M_{F}M_{G}x(t)x(t)=M_{F}M_{G}PR_{16}x(t):=M_{C}x(t). (125)

It is easy to calculate that

M_{C}=\delta_{16}[2,7,4,2,2,12,12,13,4,7,2,6,12,7,10,6]. (126)

The state transition graph of this closed-loop network is described in Fig. 2, which shows that there is only one attractor, C=\{\delta_{16}^{6},\delta_{16}^{12}\}. Hence, all trajectories converge to C\subset W_{0}.

Figure 2: State Transition Graph of (125)

From (111)-(121) it is easy to see that there are 6144 feasible state feedback stabilizers.

6 Stabilization of General Logical Systems

The method discussed in the previous sections is also applicable to k-valued or mix-valued logic. In this section we give an example to demonstrate this; a numerical sketch reproducing part of the computation is given after the example.

Example 1

Consider a mix-valued logical network

\begin{cases}X_{1}(t+1)=X_{1}(t)\diamondsuit X_{2}(t),\\ X_{2}(t+1)=X_{1}(t)\Box U(t),\end{cases} (127)

where X_{1}(t)\in{\mathcal{D}}_{2}, X_{2}(t),U(t)\in{\mathcal{D}}_{3}, \diamondsuit:{\mathcal{D}}_{2}\times{\mathcal{D}}_{3}\rightarrow{\mathcal{D}}_{2}, \Box:{\mathcal{D}}_{2}\times{\mathcal{D}}_{3}\rightarrow{\mathcal{D}}_{3}, with their structure matrices given by

M_{\diamondsuit}=\delta_{2}[1,1,2,2,2,1]; (128)
M_{\Box}=\delta_{3}[1,2,3,2,3,1]. (129)
  • (i)

    Is it possible to stabilize (127) to (1,1)\sim\delta_{6}^{1}?

    It is easy to calculate that

    x(t+1)=M_{F}u(t)x(t), (130)

    where

    M_{F}=\delta_{6}[1,1,4,5,5,2,2,2,5,6,6,3,3,3,6,4,4,1].

    Set \Omega(0)=W_{0}=\{(1,1)\}. Using (130), the truth matrix with respect to \Omega(0) is

    T_{\Omega(0)}=\begin{bmatrix}1&1&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&1\end{bmatrix}. (131)

    Hence,

    W_{1}=\delta_{6}\{1,2,6\}, (132)

    and

    \Omega(1)=W_{1}\backslash W_{0}=\delta_{6}\{2,6\}. (133)

    The truth matrix with respect to \Omega(1) is

    T_{\Omega(1)}=\begin{bmatrix}0&0&0&0&0&1\\ 1&1&0&1&1&0\\ 0&0&1&0&0&0\end{bmatrix}. (134)

    Hence,

    W_{2}=\delta_{6}\{1,2,3,4,5,6\}, (135)

    and

    \Omega(2)=W_{2}\backslash(W_{0}\cup W_{1})=\delta_{6}\{3,4,5\}. (136)

    Now

    \Omega(0)\cup\Omega(1)\cup\Omega(2)=\Delta_{6}.

    Hence, the mix-valued logical system (127) can be stabilized to (1,1)\sim\delta_{6}^{1}.

    Next, we construct a state feedback stabilizer. Similarly to the Boolean case, a feasible state feedback control can be obtained as

    u(t)=M_{G}x(t), (137)

    where

    M_{G}=\delta_{3}[1,1,3,2,2,3].

    To verify the result we consider the closed-loop network, which is

    x(t+1)=M_{F}M_{G}x^{2}(t)=M_{F}M_{G}PR_{6}x(t):=M_{c}x(t). (138)

    Then

    M_{c}=M_{F}M_{G}PR_{6}=\delta_{6}[1,1,6,6,6,1].

    Since

    M_{c}^{2}=\delta_{6}[1,1,1,1,1,1],

    one sees easily that after two steps the system is stabilized to \delta_{6}^{1}.

  • (ii)

    Is it possible to stabilize (127) to \mathcal{M}=\{(1,0),(0,0)\}? To verify this, we first calculate the truth matrix corresponding to the state set \mathcal{M}:

    T_{\mathcal{M}}=\begin{bmatrix}0&0&0&0&0&0\\ 0&0&0&1&1&1\\ 1&1&1&0&0&0\end{bmatrix}. (139)

    The maximum set is \Theta=\Delta_{6}, and \mathcal{M}\cap\Theta=\mathcal{M}; therefore, the state set \mathcal{M} is a control invariant set. From the truth matrix (139), one easily sees that all states of the logic system can be stabilized to the state set \mathcal{M} within one step, and the state feedback control can be obtained as

    u(t)=\delta_{3}[3,3,3,2,2,2]x(t). (140)

    Substituting this state feedback control into the original system, we obtain the closed-loop system

    x(t+1)=M_{F}M_{G}PR_{6}x(t)=M_{c}x(t), (141)

    where

    M_{c}=M_{F}M_{G}PR_{6}=\delta_{6}[3,3,6,6,6,3].

    Fig. 3 is the state transition graph of the closed-loop network (141). It shows the convergence of this system to \mathcal{M}.

    Figure 3: State Transition Graph of (141)
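As mentioned at the beginning of this section, the computations above are easy to reproduce. The following illustrative sketch (ours, using the identification x=x_{1}\ltimes x_{2}, i.e., x\sim\delta_{6}^{3(i_{1}-1)+i_{2}}) recomputes M_F in (130) from the structure matrices (128)-(129) and checks the closed-loop matrix of (138):

```python
M_dia = [1, 1, 2, 2, 2, 1]       # M_diamond = delta_2[1,1,2,2,2,1], columns ordered by (x1, x2)
M_box = [1, 2, 3, 2, 3, 1]       # M_box     = delta_3[1,2,3,2,3,1], columns ordered by (x1, u)

MF = []
for j in range(1, 4):                        # u = delta_3^j
    for i in range(1, 7):                    # x = delta_6^i
        i1, i2 = (i - 1) // 3 + 1, (i - 1) % 3 + 1
        n1 = M_dia[3 * (i1 - 1) + (i2 - 1)]  # X1(t+1) = X1 diamond X2
        n2 = M_box[3 * (i1 - 1) + (j - 1)]   # X2(t+1) = X1 box U
        MF.append(3 * (n1 - 1) + n2)
print(MF)     # [1,1,4,5,5,2, 2,2,5,6,6,3, 3,3,6,4,4,1] as in (130)

MG = [1, 1, 3, 2, 2, 3]                      # the feedback (137)
Mc = [MF[6 * (MG[i] - 1) + i] for i in range(6)]
print(Mc)     # [1, 1, 6, 6, 6, 1]: M_c of (138); applying it twice sends every state to delta_6^1
```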

7 Conclusion

In this paper, the Ledley antecedence/consequence solution has been applied to the design of state feedback stabilizers for generic logic systems. To achieve this goal, the original theory has been extended in two ways. First, the preassigned logical functions are required to take values in an admissible set instead of satisfying a set of equations. Second, the domain of arguments is restricted to a subset of the state space instead of the whole state space. Based on this, the paper has shown that, by properly designing admissible sets for the logical functions of the system, the state feedback controls are obtained automatically when solving the subset antecedence solutions with restricted argument subsets. Using this approach, the stabilization of a BCN to a pre-assigned point and to a pre-assigned set can be achieved, respectively. An iterative algorithm has been designed to verify whether the problem is solvable; when it is solvable, the algorithm is also capable of providing all (time-optimal) state feedback stabilizers. Various examples have been given to demonstrate the effectiveness of the proposed technique for Boolean control networks, as well as for k-valued and mix-valued logical networks.

The technique introduced in this paper is new and useful. It shows that state feedback controls can be obtained via properly designed admissible sets and restricted sets. The method provided in this paper has a universal character: via a properly designed sequence of subsets, state feedback controls may be obtained for various purposes (e.g., output feedback stabilization, tracking).

References

  • [1] D. Cheng, H. Qi, “Controllability and observability of Boolean control networks”, in Automatica, vol. 45, no.7, pp. 1659-1667, Jul. 2009.
  • [2] D. Cheng, H. Qi, Z. Li, Analysis and Control of Boolean Networks, A Semi-tensor Product Approach, Springer, London, 2011.
  • [3] D. Cheng, H. Qi, Z. Li, J. Liu, “Stability and stabilization of Boolean networks”, in Int. J. Robust Nonlin., vol. 21, no. 2, pp. 134-156, Jan. 2011.
  • [4] D. Cheng, “Disturbance decoupling of Boolean control networks”, in IEEE Trans. Automat. Contr., vol. 56, no. 1, pp. 2-10, May, 2010.
  • [5] D. Cheng, H. Qi, Y. Zhao, An Introduction to Semi-tensor Product of Matrices and Its Applications, World Scientific, Singapore, 2012.
  • [6] H. Chen, L. Sun, Y. Liu, “Partial stability and stabilization of Boolean networks”, in Int. J. Syst. Sci., vol. 47, no. 9, pp. 2119-2127, Jul. 2016.
  • [7] E. Fornasini, M.E. Valcher, “On the periodic trajectories of Boolean control networks”, in Automatica, vol. 49, no.5, pp. 1506-1509, May, 2013.
  • [8] E. Fornasini, M.E. Valcher, “Observability, reconstructability and state observers of Boolean control networks”, in IEEE Trans. Automat. Contr., vol. 58, no. 6, pp. 1390-1401, Nov. 2013.
  • [9] E. Fornasini, M.E. Valcher, “Optimal control of Boolean control networks”, in IEEE Trans. Automat. Contr., vol. 59, no. 5, pp. 1258-1270, Dec. 2014.
  • [10] E. Fornasini, M.E. Valcher, “Recent developments in Boolean networks control”, in J. Control. Decis., vol. 3, no. 1, pp. 1-18, Jan. 2016.
  • [11] S. Fu, J. Zhao, J. Wang, “Input-output decoupling control design for switched Boolean control networks”, in Automatica, vol. 355, no. 17, pp. 8576-8596, Nov. 2018.
  • [12] Y. Guo, P. Wang, W. Gui, C. Yang, “Set stability and set stabilization of Boolean control networks based on invariant subsets”, in Automatica, vol. 61, pp. 106-112, Nov. 2015.
  • [13] S. Huang, D. Ingber, “Shape-dependent control of cell growth, differentiation, and apoptosis: switching between attractors in cell regulatory networks”, in Exp. Cell Res., vol. 261, no. 1, pp. 91-103, Nov. 2000.
  • [14] S. Kauffman, “Metabolic stability and epigenesis on randomly constructed genetic nets”, in J. Theor. Biol., vol. 22, no. 3, pp. 437-467, Mar. 1969.
  • [15] S. Kauffman, The Origins of Order: Self-organization and Selection in Evolution, Oxford Univ. Press, London, 1993.
  • [16] K.H. Kim. Boolean Matrix Theory and Applications, New York: Marcel Dekker Inc., 1982.
  • [17] R.S. Ledley, “Digital computational methods in symbolic logic with examples in biochemistry”, in Proc. Nat. Acad. Sci. USA, vol. 41, no. 7, pp. 498-511, Jul. 1955.
  • [18] R.S. Ledley, “Logic and Boolean algebra in medical science”, in Proc. Conf. Appl. Undergraduate Math, Atlanta, GA, 1973.
  • [19] H. Li, Y. Wang, “Lyapunov-based stability and construction of Lyapunov functions for Boolean networks”, in SIAM J. Control. Optim., vol. 55, no. 6, pp. 3437-3457, 2017.
  • [20] R. Li, M. Yang, T. Chu, “State feedback stabilization for Boolean control networks”, in IEEE Trans. Automat. Contr., vol. 58, no. 7, pp. 1853-1857, Jan. 2017.
  • [21] H. Li, G. Zhao, M. Meng, J. Feng, “A survey on applications of semi-tensor product method in engineering”, in Sci. China Inform. Sci., vol. 61, no. 1, pp. 010202, Jan. 2018.
  • [22] J. Lu, H. Li, Y. Liu, F. Li, “Survey on semi-tensor product method with its applications in logical networks and other finite-valued systems”, in IET Control. Theory Appl., vol. 11, no. 13, pp. 2040-2047, May, 2017.
  • [23] M. Meng, J. Lam, J. Feng, K. Cheung, “Stability and stabilization of Boolean networks with stochastic delay”, in IEEE Trans. Automat. Contr., vol. 65, no. 4, pp. 790-796, May, 2019.
  • [24] A. Muhammad, A. Rushdi, F.A. M. Ghaleb, “A tutorial exposition of semi-tensor products of matrices with a stress on their representation of Boolean function”, in J. King Saud Univ. Sci, vol. 5, pp. 3-30, 2016.
  • [25] H. Qi, Y. Qiao, “Dynamics and control of singular Boolean networks”, in Asia J. Control, vol.21, no.6, pp.2604-2613, Nov. 2019.
  • [26] Y. Qiao, H. Qi, D. Cheng, “Partition-based solution of static logical networks with applications”, in IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 4, pp. 1252-1262, Mar. 2018.
  • [27] F. Robert, Discrete Iterations, A Metric Study, Translated by J. Rokne, Springer-Verlag, Berlin, 1986.
  • [28] X. Xu, Y. Hong, “Matrix expression and reachability analysis of finite automata”, in J. Contr. Theory Appl., vol. 10, no. 2, pp. 210-215, May, 2012.
  • [29] Y. Yan, Z. Chen, Z. Liu, “Semi-tensor product approach to controllability and stabilizability of finite automata”, in J. Sys. Eng. Electr., vol. 26, no. 1, pp. 134-141, Mar. 2015.
  • [30] G. Zhao, H. Li, P. Duan, FE. Alsaadi, “Survey on applications of semi-tensor product method in networked evolutionary games”, in J. Appl. Anal., vol. 10, no. 1, pp. 32-54, Feb. 2020.
  • [31] Y. Zhao, Z. Li, D. Cheng, “Optimal Control of Logical Control Networks”, in IEEE Trans. Automat. Contr., vol. 56, no. 8, pp. 1766-1776, Nov. 2010.
  • [32] J. Zhong, D. Ho, J. Lu, W. Xu , “Global robust stability and stabilization of Boolean network with disturbance”, in Automatica, vol. 84, pp. 142-148, Oct. 2017.
  • [33] J. Zhong, D. Lin, “Decomposition of nonlinear feedback registers based on Boolean networks”, in Sci. China, Inform. Sci., vol. 62, no. 3, pp. 039110, Mar. 2019.
{IEEEbiography}

Yingzhe Jia (S’18) received the B.E. degree in electrical engineering from Shandong University, China, in 2015, and the Ph.D. degree in electrical and electronic engineering from the University of Manchester, Manchester, U.K., in 2020. He is currently a research associate with the School of Mathematics, Shandong University, China. His research interests include stabilization of logic control networks, application of game theory in power systems, etc.

{IEEEbiography}

Daizhan Cheng (F’05) received the Bachelor’s degree in mechanical engineering from Tsinghua University, Beijing, China, the M.S. degree in mathematics from the Graduate School, Chinese Academy of Sciences, Beijing, China, and the Ph.D. degree in systems science and mathematics from Washington University, St. Louis, MO, USA, in 1970, 1981, and 1985, respectively.

Since 1990 he has been a Professor both with the Institute of Systems Science, National University of Singapore, Singapore, and Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China. He has authored or coauthored 12 books, more than 240 journal papers, and more than 130 conference papers. His current research interests include nonlinear control systems, switched systems, Hamiltonian systems, Boolean control networks, and game theory.

Dr. Cheng was the recipient of the Second Grade National Natural Science Award of China in 2008 and 2014, and the Automatica 2008 to 2010 Best Theory/Methodology Paper Award, bestowed by IFAC, in 2011. He was the Chairman of the Technical Committee on Control Theory, Chinese Association of Automation from 2003 to 2010, and a Member of the IEEE CSS Board of Governors in 2009. He was an IFAC Fellow in 2008 and an IFAC Council Member from 2011 to 2014.

{IEEEbiography}

Jun-e Feng received the Ph.D. degree in cybernetics from Shandong University, Jinan, China, in 2003.

From 2006 to 2007, she was a Visiting Scholar with the Massachusetts Institute of Technology, Cambridge, MA, USA, and in 2013 a Visiting Scholar with the University of Hong Kong, Hong Kong. She is currently a Professor with the School of Mathematics, Shandong University. Her research interests include singular systems, multiagent systems, logic control networks, etc.