
Monotonous Parameter Estimation of One Class of Nonlinearly Parameterized Regressions without Overparameterization

Anton Glushchenko aiglush@ipu.ru    Konstantin Lastochkin lastconst@ipu.ru Laboratory 7, V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow, Russia
Abstract

An estimation law for the vector of unknown parameters $\theta$ is proposed for one class of nonlinearly parameterized regression equations $y(t)=\Omega(t)\Theta(\theta)$. We restrict our attention to parameterizations that are widely obtained in practical scenarios when polynomials in $\theta$ are used to form $\Theta(\theta)$. For them we introduce a new "linearizability" assumption that a mapping from the overparameterized vector of parameters $\Theta(\theta)$ to the original one $\theta$ exists in terms of standard algebraic functions. Under such assumption and the weak requirement of regressor finite excitation, on the basis of the dynamic regressor extension and mixing technique we propose a procedure to reduce the nonlinear regression equation to a linear parameterization without application of singularity-causing operations and without the need to identify the overparameterized parameters vector. As a result, an estimation law with exponential convergence rate is derived, which, unlike known solutions, (i) does not require the strict P-monotonicity condition to be met or a priori information about $\theta$ to be known, and (ii) ensures elementwise monotonicity of the parameter error vector. The effectiveness of our approach is illustrated with both an academic example and a 2-DOF robot manipulator control problem.

keywords:
parameter estimation; nonlinear regression model; overparametrization; finite excitation; adaptive control.
thanks: This research was financially supported in part by Grants Council of the President of the Russian Federation (project MD-1787.2022.4). The material in this paper was not presented at any IFAC meeting. Corresponding author A. Glushchenko. Tel. +79102266946.


1 Introduction

In the majority of applications, real technical systems have a limited number of significant physical parameters. At the same time, mathematical models of these systems, written in the state-space or Euler-Lagrange form, are described by equations with overparameterization, i.e., with a larger number of new virtual parameters that are nonlinearly related to the original ones [6], [12].

As far as the classic methods of identification theory and adaptive control are concerned, each parameter of a mathematical model is considered to be unique and independent (decoupled) from the others. As the system order and, consequently, the number of the above-mentioned virtual parameters grow, this leads to the well-known [6] shortcomings, which make it difficult to apply the basic estimation laws:

S1. Slower convergence and more stringent excitation conditions due to the need to solve the identification problem in a larger parameter space.

S2. Necessity to apply projection operators for online estimation of the system physical parameters.

To overcome these problems, it has been proposed [1], [8], [9], [10] to take the relationship between the unknown parameters into account when designing the estimation law. In [1] the dynamic regressor extension and mixing (DREM) technique is applied to "isolate the good mappings" from virtual to physical parameters, and the strong P-monotonicity property [11] is utilized to achieve consistent parameter estimation for nonlinearly parameterized regressions. In case the regressor is non-square-integrable, the solution [1] ensures asymptotic convergence of the parameter identification error. The requirement of strong P-monotonicity has turned out to be rather strict for some applications, e.g., composite control of Euler-Lagrange systems [9], [12] and adaptive observation of the windmill power coefficient [2]. For some, mainly polynomial, mappings it is possible to relax this condition using a special monotonizability assumption [2], [8], [9]. The relaxation mechanism is based on the search for a bijective substitution such that the new nonlinear mapping satisfies the strong P-monotonicity condition. However, the solution from [8], [9] has several key problems:

P1. The convergence of the parametric error is guaranteed only for hard-to-validate non-square-integrable regressors (Proposition 4 in [9]);

P2. Only the weak property of a non-increasing norm of the parameter error vector is guaranteed, not elementwise monotonicity (Remark 8 in [9]);

P3. The estimation law requires a priori information about the uncertainty parameters (for example, Lemma 2 in [9]);

P4. The calculation of the system physical parameters from the obtained estimates can lead to singularities and sometimes requires application of a projection operator (for example, see the definition of $\mathcal{D}^{I}$ in Lemma 2 of [9]);

P5. Due to P2 and P4 the transient behavior of the parametric error is unpredictable, and a singularity may occur if one wants to use $\mathcal{D}^{I}$.

In a recent paper [10] a new estimation law has been proposed that solves P1 and ensures exponential convergence of the parametric error when the condition of regressor finite excitation, which is more realistic for some practical scenarios, is satisfied.

The motivation for this study is to solve all problems P1-P5 for one class of nonlinearly parameterized regression equations (NLPRE).

Notation and Definitions. Further the following notation is used: $\left|\cdot\right|$ is the absolute value, $\left\|\cdot\right\|$ is a suitable norm of $(\cdot)$, $I_{n\times n}=I_{n}$ is an $n\times n$ identity matrix, $0_{n\times n}$ is an $n\times n$ zero matrix, $0_{n}$ stands for a zero vector of length $n$, $\det\{\cdot\}$ stands for the matrix determinant, $\mathrm{adj}\{\cdot\}$ represents the adjugate matrix. Denote $\mathfrak{H}\left[\cdot\right]:=\frac{1}{p+k}\left[\cdot\right]$ as a stable operator ($k>0$ and $p:=d/dt$). For a mapping $\mathcal{F}\colon\mathbb{R}^{n}\mapsto\mathbb{R}^{n}$ we denote its Jacobian by $\nabla_{x}\mathcal{F}(x)=\frac{\partial\mathcal{F}}{\partial x}(x)$. We also use the fact that for all (possibly singular) $n\times n$ matrices $M$ the following holds: $\mathrm{adj}\{M\}M=\det\{M\}I_{n\times n}$.

Definition. A regressor $\omega(t)\in\mathbb{R}^{n\times m}$ is finitely exciting ($\omega(t)\in\mathrm{FE}$) over a time range $[t_{r}^{+},\,t_{e}]$ if there exist $t_{r}^{+}\geq 0$, $t_{e}>t_{r}^{+}$ and $\alpha$ such that the following inequality holds:

$$\int_{t_{r}^{+}}^{t_{e}}\omega(\tau)\omega^{\mathrm{T}}(\tau)\,d\tau\geq\alpha I_{n},\qquad(1)$$

where $\alpha>0$ is the excitation level.
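For intuition, condition (1) can be checked numerically by integrating the Gram matrix of the regressor and inspecting its smallest eigenvalue. Below is a minimal sketch in Python; the regressor is the one later used in the academic example of Section 4.1, and the helper name excitation_level is ours, not from the paper:

```python
import numpy as np

def excitation_level(omega, t_start, t_end, steps=10_000):
    """lambda_min of the Gram matrix int omega(t) omega^T(t) dt (Riemann sum)."""
    ts, dt = np.linspace(t_start, t_end, steps, retstep=True)
    gram = sum(np.outer(omega(t), omega(t)) for t in ts) * dt
    return np.linalg.eigvalsh(gram).min()

# Omega^T(t) of the academic example (16); its components are linearly
# independent functions of time, so the regressor is FE over [0, 10]
omega = lambda t: np.array([np.exp(-t), np.sin(t), 1.0])
print(excitation_level(omega, 0.0, 10.0))  # positive => (1) holds for some alpha > 0
```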

2 Problem Statement

The following NLPRE is considered:

$$y(t)=\Omega(t)\Theta(\theta),\qquad(2)$$

where $y(t)\in\mathbb{R}^{n}$ and $\Omega(t)\in\mathbb{R}^{n\times p}$ are the measurable regressand and regressor, respectively, $\theta\in D_{\theta}\subset\mathbb{R}^{q}$ is a vector of unknown time-invariant parameters, $\Theta\colon\mathbb{R}^{q}\mapsto\mathbb{R}^{p}$ is a known mapping, and $p>q$. The problem is to estimate the parameters $\theta$ using $y(t)$ and $\Omega(t)$ such that:

$$\lim_{t\to\infty}\|\hat{\theta}(t)-\theta\|=\lim_{t\to\infty}\|\tilde{\theta}(t)\|=0\ (\mathrm{exp}),\qquad(3a)$$
$$\forall t_{a}\geq t_{b},\ \forall i\in\{1,\ldots,q\}\quad|\tilde{\theta}_{i}(t_{a})|\leq|\tilde{\theta}_{i}(t_{b})|,\qquad(3b)$$

where $\hat{\theta}(t)$ is an estimate of the unknown parameters, $\tilde{\theta}_{i}(t)$ is the estimation error of the $i$th parameter from $\theta$, and $(\mathrm{exp})$ is an abbreviation for exponential rate of convergence.

The feasibility conditions for the problem (3a) are:

FC1. $\Omega(t)\in\mathrm{FE}$, i.e., the condition of identifiability of the overparameterized parameters $\Theta(\theta)$.

FC2. $D_{\theta}:=\{\theta\in\mathbb{R}^{q}\colon\det\{\nabla_{\theta}\psi(\theta)\}\neq 0\}$, where $\psi(\theta)=\mathcal{L}\Theta(\theta)\in\mathcal{C}^{k}$, i.e., the existence of an inverse mapping $\mathcal{F}\colon\mathbb{R}^{q}\mapsto\mathbb{R}^{q}$ that reconstructs the unknown parameters $\theta=\mathcal{F}(\psi)$ from the "good" elements $\psi(\theta)$ handpicked from $\Theta(\theta)$ by $\mathcal{L}\in\mathbb{R}^{q\times p}$.

When FC1-FC2 are met¹, the parameters $\Theta(\theta)$ can be obtained and recalculated into $\theta$ (possibly, only asymptotically). However, the main contribution of this paper is to solve all problems P1-P5 and shortcomings S1-S2 of the existing solutions and consequently ensure the elementwise monotonicity (3b) and obtain $\theta$ without estimation of $\Theta(\theta)$ and the substitution $\mathcal{F}(\mathcal{L}\hat{\Theta}(t))$.

¹ It should be understood that, if the inverse function $\mathcal{F}(\psi)$ does not exist, then there is no way to obtain $\theta$ from $\Theta(\theta)$.

3 Main Result

To facilitate the proposed estimation design, in addition to FC1-FC2, the class of mappings $\Theta(\theta)$ and respective inverse functions $\mathcal{F}(\psi)$ to which we restrict our attention is defined in the following linearizing assumption.

Assumption 1. There exist $\mathcal{G}\colon\mathbb{R}^{q}\mapsto\mathbb{R}^{q\times q}$, $\mathcal{S}\colon\mathbb{R}^{q}\mapsto\mathbb{R}^{q}$, $\Pi_{\theta}\colon\mathbb{R}\mapsto\mathbb{R}^{q\times q}$, $\mathcal{T}_{\mathcal{G}}\colon\mathbb{R}^{\Delta_{\mathcal{G}}}\mapsto\mathbb{R}^{q\times q}$, $\mathcal{T}_{\mathcal{S}}\colon\mathbb{R}^{\Delta_{\mathcal{S}}}\mapsto\mathbb{R}^{q}$, $\Xi_{\mathcal{G}}\colon\mathbb{R}\mapsto\mathbb{R}^{\Delta_{\mathcal{G}}\times q}$, $\Xi_{\mathcal{S}}\colon\mathbb{R}\mapsto\mathbb{R}^{\Delta_{\mathcal{S}}\times q}$ such that for all $\Delta(t)\in\mathbb{R}$ the following holds:

$$\begin{array}{c}\mathcal{S}(\psi)=\mathcal{G}(\psi)\mathcal{F}(\psi)=\mathcal{G}(\psi)\theta,\\ \Pi_{\theta}(\Delta)\mathcal{G}(\psi)=\mathcal{T}_{\mathcal{G}}(\Xi_{\mathcal{G}}(\Delta)\psi),\\ \Pi_{\theta}(\Delta)\mathcal{S}(\psi)=\mathcal{T}_{\mathcal{S}}(\Xi_{\mathcal{S}}(\Delta)\psi),\end{array}\qquad(4)$$

where

$$\begin{array}{c}\det\{\Pi_{\theta}(\Delta)\}\geq\Delta^{\ell_{\theta}}(t),\ \ell_{\theta}\geq 1,\quad\mathrm{rank}\{\mathcal{G}(\psi)\}=q,\\ \Xi_{(.)}(\Delta)=\overline{\Xi}_{(.)}(\Delta)\Delta(t)\in\mathbb{R}^{\Delta_{(.)}\times q},\\ \Xi_{(.)ij}(\Delta)=c_{ij}\Delta^{\ell_{ij}}(t),\ c_{ij}\in\{0,1\},\ \ell_{ij}\geq 1,\end{array}$$

and all above-mentioned mappings are known².

² Assumption 1 is not restrictive and can be easily verified via direct inspection of the mapping $\mathcal{F}(\psi)$.

Assumption 1 is met in case polynomials in $\theta$ are used to form $\Theta(\theta)$, and consequently the inverse transform function $\mathcal{F}(\psi)$ can be computed using algebraic functions.

Example. For the vector $\psi(\theta)=\mathrm{col}\{\theta_{1}\theta_{2}+\theta_{1}^{2},\ \theta_{2}+\theta_{1}\}$ the mappings from (4) take the form:

$$\mathcal{S}(\psi)=\begin{bmatrix}\psi_{1}\\ \psi_{2}^{2}-\psi_{1}\end{bmatrix},\quad\mathcal{G}(\psi)=\begin{bmatrix}\psi_{2}&0\\ 0&\psi_{2}\end{bmatrix},\quad\Pi_{\theta}(\Delta)=\begin{bmatrix}\Delta&0\\ 0&\Delta^{2}\end{bmatrix},$$
$$\Xi_{\mathcal{S}}(\Delta)=\begin{bmatrix}\Delta&0\\ \Delta^{2}&0\\ 0&\Delta\end{bmatrix},\quad\Xi_{\mathcal{G}}(\Delta)=\begin{bmatrix}0&\Delta\\ 0&\Delta^{2}\end{bmatrix},$$
$$\mathcal{T}_{\mathcal{G}}(\Xi_{\mathcal{G}}(\Delta)\psi)=\begin{bmatrix}\Delta\psi_{2}&0\\ 0&\Delta^{2}\psi_{2}\end{bmatrix},\quad\mathcal{T}_{\mathcal{S}}(\Xi_{\mathcal{S}}(\Delta)\psi)=\begin{bmatrix}\Delta\psi_{1}\\ \Delta^{2}\psi_{2}^{2}-\Delta^{2}\psi_{1}\end{bmatrix}.\ \blacktriangledown\qquad(5)$$
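The identities (4) for this example can be verified symbolically; a minimal sketch (sympy assumed available, all symbol names ours):

```python
import sympy as sp

th1, th2, Delta = sp.symbols('theta1 theta2 Delta')
theta = sp.Matrix([th1, th2])
psi1 = th1*th2 + th1**2          # psi_1(theta)
psi2 = th2 + th1                 # psi_2(theta)

S = sp.Matrix([psi1, psi2**2 - psi1])
G = sp.diag(psi2, psi2)
Pi = sp.diag(Delta, Delta**2)

# first identity of (4): S(psi) = G(psi) * theta
assert (S - G*theta).expand() == sp.zeros(2, 1)

# remaining identities of (4), expressed through Y_psi = Delta*psi, cf. (8)
Y1, Y2 = Delta*psi1, Delta*psi2
TG = sp.diag(Y2, Delta*Y2)
TS = sp.Matrix([Y1, Y2**2 - Delta*Y1])
assert (Pi*G - TG).expand() == sp.zeros(2, 2)
assert (Pi*S - TS).expand() == sp.zeros(2, 1)
print("Assumption 1 identities hold for the example mapping (5)")
```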

Assumption 1 sets the conditions to obtain the following linearly parameterized regression equation from $\psi(\theta)$:

$$\mathcal{T}_{\mathcal{S}}(\Xi_{\mathcal{S}}(\Delta)\psi)=\mathcal{T}_{\mathcal{G}}(\Xi_{\mathcal{G}}(\Delta)\psi)\,\theta.\qquad(6)$$

Taking into consideration that the following equalities hold in accordance with Assumption 1:

$$\Xi_{\mathcal{S}}(\Delta)=\overline{\Xi}_{\mathcal{S}}(\Delta)\Delta,\quad\Xi_{\mathcal{G}}(\Delta)=\overline{\Xi}_{\mathcal{G}}(\Delta)\Delta,\qquad(7)$$

equation (6) is rewritten as:

$$\mathcal{T}_{\mathcal{S}}(\overline{\Xi}_{\mathcal{S}}(\Delta)\mathcal{Y}_{\psi})=\mathcal{T}_{\mathcal{G}}(\overline{\Xi}_{\mathcal{G}}(\Delta)\mathcal{Y}_{\psi})\,\theta,\qquad(8)$$

where $\mathcal{Y}_{\psi}(t)=\Delta(t)\psi(\theta)$ is the unmeasurable linear regression equation with respect to $\psi(\theta)$.

Example (continued). For $\psi(\theta)=\mathrm{col}\{\theta_{1}\theta_{2}+\theta_{1}^{2},\ \theta_{2}+\theta_{1}\}$ the mappings from (8) take the form:

$$\overline{\Xi}_{\mathcal{S}}(\Delta)=\begin{bmatrix}1&0\\ \Delta&0\\ 0&1\end{bmatrix},\quad\overline{\Xi}_{\mathcal{G}}(\Delta)=\begin{bmatrix}0&1\\ 0&\Delta\end{bmatrix},$$
$$\mathcal{T}_{\mathcal{G}}(\overline{\Xi}_{\mathcal{G}}(\Delta)\mathcal{Y}_{\psi})=\begin{bmatrix}\mathcal{Y}_{2\psi}&0\\ 0&\Delta\mathcal{Y}_{2\psi}\end{bmatrix},\quad\mathcal{T}_{\mathcal{S}}(\overline{\Xi}_{\mathcal{S}}(\Delta)\mathcal{Y}_{\psi})=\begin{bmatrix}\mathcal{Y}_{1\psi}\\ \mathcal{Y}_{2\psi}^{2}-\Delta\mathcal{Y}_{1\psi}\end{bmatrix}.\ \blacktriangledown$$

Thus, if Assumption 1 is satisfied, then, having the equation for $\mathcal{Y}_{\psi}(t)$ and the known mappings from (4) at hand, the regression equation with nonlinear parameterization (2) can be transformed into a new one with linear parameterization (8). That is the reason why Assumption 1 is called "linearizing".

Using (2), the regression equation with measurable $\mathcal{Y}_{\psi}(t)$ and $\Delta(t)\geq 0$ can be obtained with the help of the DREM procedure [1]. Towards this end, we introduce the following dynamic extension:

$$\begin{array}{c}\dot{\overline{y}}(t)=e^{-\sigma(t-t_{0})}\Omega^{\mathrm{T}}(t)y(t),\quad\overline{y}(t_{0})=0_{p},\\ \dot{\overline{\Omega}}(t)=e^{-\sigma(t-t_{0})}\Omega^{\mathrm{T}}(t)\Omega(t),\quad\overline{\Omega}(t_{0})=0_{p\times p},\end{array}\qquad(9)$$

and apply a mixing procedure to $\overline{y}(t)$:

$$\begin{array}{c}\mathcal{Y}_{\psi}(t)=\Delta(t)\psi(\theta),\\ \mathcal{Y}_{\psi}(t):=\mathcal{L}\,\mathrm{adj}\{\overline{\Omega}(t)\}\overline{y}(t),\quad\Delta(t):=\det\{\overline{\Omega}(t)\}.\end{array}\qquad(10)$$
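To make the construction (9), (10) concrete, here is a minimal numerical sketch for the academic example of Section 4.1 (forward-Euler integration; the true parameters and $\sigma$ match (16) and (21), while the step size and horizon are our illustrative choices):

```python
import numpy as np

theta = np.array([1.0, 2.0])                        # true parameters
Theta = np.array([theta[0]*theta[1] + theta[0]**2,  # Theta(theta) from (16)
                  theta[1] + theta[0],
                  np.cos(theta[0])])
L = np.hstack([np.eye(2), np.zeros((2, 1))])        # psi = L*Theta, see FC2

sigma, dt, T = 1.0, 1e-4, 10.0
y_bar, Omega_bar = np.zeros(3), np.zeros((3, 3))

for k in range(int(T/dt)):
    t = k*dt
    Omega = np.array([[np.exp(-t), np.sin(t), 1.0]])  # regressor from (16)
    y = Omega @ Theta                                 # regressand (2)
    w = np.exp(-sigma*t)
    y_bar += dt*w*(Omega.T @ y)                       # extension (9)
    Omega_bar += dt*w*(Omega.T @ Omega)

Delta = np.linalg.det(Omega_bar)                      # mixing (10)
adj = Delta*np.linalg.inv(Omega_bar)     # adj{M} = det(M)*inv(M), M nonsingular
Y_psi = L @ adj @ y_bar                               # = Delta * psi(theta)
print(Y_psi / Delta)                     # recovers psi(theta) = [3, 3]
```

The division by $\Delta$ in the last line is only for illustration; by Proposition 1 below, $\Delta(t)$ stays separated from zero after the excitation interval, and the proposed law never divides by it.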

The following proposition has been proven in [4], [5] for the scalar regressor $\Delta(t)$ obtained by (9) and (10).

Proposition 1. If $\Omega(t)\in\mathrm{FE}$, then $\Delta(t)\geq\Delta_{LB}>0$ for all $t\geq t_{e}$.

So, the signals $\mathcal{T}_{\mathcal{S}}(\overline{\Xi}_{\mathcal{S}}(\Delta)\mathcal{Y}_{\psi})$ and $\mathcal{T}_{\mathcal{G}}(\overline{\Xi}_{\mathcal{G}}(\Delta)\mathcal{Y}_{\psi})$ can be computed through equations (9) and (10). Then the mixing procedure is applied anew:

$$\begin{array}{c}\mathcal{Y}_{\theta}(t)=\mathcal{M}(t)\theta,\\ \mathcal{Y}_{\theta}(t):=\mathrm{adj}\{\mathcal{T}_{\mathcal{G}}(\overline{\Xi}_{\mathcal{G}}(\Delta)\mathcal{Y}_{\psi})\}\,\mathcal{T}_{\mathcal{S}}(\overline{\Xi}_{\mathcal{S}}(\Delta)\mathcal{Y}_{\psi}),\\ \mathcal{M}(t):=\det\{\mathcal{T}_{\mathcal{G}}(\overline{\Xi}_{\mathcal{G}}(\Delta)\mathcal{Y}_{\psi})\}.\end{array}\qquad(11)$$
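Continuing the numerical sketch above, the second mixing step (11) with the example mappings reads:

```python
# continues the sketch after (10); Y_psi and Delta are defined there
Y1, Y2 = Y_psi
TG = np.diag([Y2, Delta*Y2])              # T_G(bar Xi_G(Delta) Y_psi)
TS = np.array([Y1, Y2**2 - Delta*Y1])     # T_S(bar Xi_S(Delta) Y_psi)
M = np.linalg.det(TG)                     # scalar regressor M(t)
Y_theta = M*np.linalg.inv(TG) @ TS        # adj{T_G} T_S, cf. (11)
print(Y_theta / M)                        # recovers theta = [1, 2]
```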

Having the linear regression equation (11) at hand, the estimation law to identify the unknown parameters is introduced on the basis of the standard gradient descent method:

$$\dot{\hat{\theta}}(t)=\dot{\tilde{\theta}}(t)=-\gamma\mathcal{M}(t)\left(\mathcal{M}(t)\hat{\theta}(t)-\mathcal{Y}_{\theta}(t)\right),\qquad(12)$$

where $\gamma>0$ is an adaptive gain and $\hat{\theta}(t_{0})=\hat{\theta}_{0}$ is the initial condition.
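As a sanity check, the law (12) can be integrated numerically. A self-contained toy sketch (the scalar regressor $\mathcal{M}(t)$ below is an illustrative stand-in for the one produced by (11), and $\gamma$ is chosen for a fixed-step Euler integrator, not taken from the paper):

```python
import numpy as np

theta_true = np.array([1.0, 2.0])
M_of_t = lambda t: 1.0 - np.exp(-t)        # toy nonzero scalar regressor
Y_of_t = lambda t: M_of_t(t)*theta_true    # consistent with (11)

gamma, dt, T = 5.0, 1e-3, 10.0
theta_hat = np.zeros(2)
for k in range(int(T/dt)):
    t = k*dt
    M = M_of_t(t)
    # gradient law (12): d/dt theta_hat = -gamma*M*(M*theta_hat - Y_theta)
    theta_hat += dt*(-gamma*M*(M*theta_hat - Y_of_t(t)))

print(theta_hat)  # each |theta_tilde_i| decays monotonically, cf. (13)
```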

The properties of the law (12) are considered in the following theorem.

Theorem 1. If FC1-FC2 and Assumption 1 are met, then goals (3a) and (3b) are achieved.

Proof. The solution of the differential equation (12) for all $t\geq t_{0}$ is written as:

$$\tilde{\theta}_{i}(t)=e^{-\gamma\int_{t_{0}}^{t}\mathcal{M}^{2}(\tau)d\tau}\,\tilde{\theta}_{i}(t_{0}),\qquad(13)$$

from which $\forall t_{a}\geq t_{b},\ \forall i\in\{1,\ldots,q\}\ \ |\tilde{\theta}_{i}(t_{a})|\leq|\tilde{\theta}_{i}(t_{b})|$.

Following Assumption 1, it holds that $\det\{\Pi_{\theta}(\Delta)\}\geq\Delta^{\ell_{\theta}}(t)$ and $\mathrm{rank}\{\mathcal{G}(\psi)\}=q$, while $\Delta(t)\geq\Delta_{LB}>0$ for all $t\geq t_{e}$ owing to Proposition 1; then for all $t\geq t_{e}$ we can write the following expression for $\mathcal{M}(t)$:

$$|\mathcal{M}|=\left|\det\{\mathcal{T}_{\mathcal{G}}(\overline{\Xi}_{\mathcal{G}}(\Delta)\mathcal{Y}_{\psi})\}\right|=\left|\det\{\Pi_{\theta}(\Delta)\}\right|\cdot\left|\det\{\mathcal{G}(\psi)\}\right|\geq\Delta_{LB}^{\ell_{\theta}}\left|\det\{\mathcal{G}(\psi)\}\right|>0,\qquad(14)$$

which, in its turn, allows one to rewrite the solution (13) for all $t\geq t_{e}$ as:

$$|\tilde{\theta}_{i}(t)|\leq e^{-\gamma\Delta_{LB}^{2\ell_{\theta}}\det^{2}\{\mathcal{G}(\psi)\}(t-t_{e})}\,|\tilde{\theta}_{i}(t_{0})|,\qquad(15)$$

from which it follows that $\lim_{t\to\infty}\|\tilde{\theta}(t)\|=0\ (\mathrm{exp})$.

This completes the proof of Theorem 1.

Therefore, if the mapping $\mathcal{F}(\psi)$ satisfies the premises of Assumption 1, then, in accordance with the extension (9) and the mixing procedures (10) and (11), the estimation law (12) can be designed ensuring that the goals (3a) and (3b) are achieved. Note that, in contrast to [10], in addition to providing properties (3a) and (3b), the proposed law does not use a priori information about lower and upper bounds of the parameters (P3) in the design procedure³ and does not include singularity-causing division operations (P4-P5).

³ It should be noted that the proposed law requires only the knowledge that $\theta$ lies in the safe domain $D_{\theta}$ from FC2.

4 Numerical Experiment

4.1 Academic example

Using an academic example, the proposed identification method has been compared with the gradient law and the one proposed in [9]. The regressor and the mapping were defined as follows:

$$\Omega(t)=\begin{bmatrix}e^{-t}\\ \sin(t)\\ 1\end{bmatrix}^{\mathrm{T}},\quad\Theta(\theta)=\begin{bmatrix}\theta_{1}\theta_{2}+\theta_{1}^{2}\\ \theta_{2}+\theta_{1}\\ \cos(\theta_{1})\end{bmatrix},\qquad(16)$$

where FC1 was met, and the premises of FC2 were satisfied in case $\theta\in D_{\theta}:=\{\theta\in\mathbb{R}^{q}\colon\theta_{2}\neq-\theta_{1}\}$.

According to the proposed approach, the matrix $\mathcal{L}=\begin{bmatrix}I_{2}&0_{2}\end{bmatrix}$ was introduced to implement the mixing procedure (10), and the mappings from Assumption 1 that were necessary to implement (12) were defined as in the above-given example (see (5)).

In accordance with the "monotonizability" assumption, the following change of variables was introduced to implement the estimation law from [9]:

$$\eta=\mathcal{D}(\theta)=\mathrm{col}\{\theta_{1},\ \theta_{1}+\theta_{2}\},\quad\theta=\mathcal{D}^{I}(\eta)=\mathrm{col}\{\eta_{1},\ \eta_{2}-\eta_{1}\},\qquad(17)$$

which allowed one to rewrite $\Theta(\theta)$ as $(\Theta\circ\mathcal{D}^{I})(\eta)=\mathrm{col}\{\eta_{1}\eta_{2},\ \eta_{2},\ \cos(\eta_{1})\}$ and ensure that there existed $\rho>0$ such that the strong P-monotonicity condition [11] for the mapping $\mathcal{W}(\eta)=\mathcal{L}(\Theta\circ\mathcal{D}^{I})(\eta)=(\psi\circ\mathcal{D}^{I})(\eta)$:

$$(a-b)^{\mathrm{T}}P(\mathcal{W}(a)-\mathcal{W}(b))\geq\rho|a-b|^{2}>0,\quad\forall a,b\in\mathbb{R}^{2},\ a\neq b,\qquad(18)$$

was met for $P=\begin{bmatrix}\kappa&0\\ 0&1\end{bmatrix}$, $\kappa\geq\frac{\theta_{1\mathrm{max}}^{2}}{4(\theta_{1\mathrm{min}}+\theta_{2\mathrm{min}})}$.

According to [9] and using (10), the parameter estimation law was rewritten as:

$$\hat{\theta}(t)=\mathcal{D}^{I}(\hat{\eta}),\quad\dot{\hat{\eta}}(t)=\gamma_{\eta}P\Delta(t)\left(\mathcal{Y}_{\psi}(t)-\Delta(t)\mathcal{W}(\hat{\eta})\right).\qquad(19)$$

The classic gradient-based estimation law was defined as:

$$\hat{\theta}(t)=\begin{bmatrix}\frac{\hat{\Theta}_{1}(t)}{\hat{\Theta}_{2}(t)}\\ \hat{\Theta}_{2}(t)-\frac{\hat{\Theta}_{1}(t)}{\hat{\Theta}_{2}(t)}\end{bmatrix},\quad\dot{\hat{\Theta}}(t)=-\Gamma\Omega^{\mathrm{T}}(t)\left(\Omega(t)\hat{\Theta}(t)-y(t)\right).\qquad(20)$$

It should be noted that, in contrast to (12), the law (19) required information about the lower bounds $\theta_{1\mathrm{min}}$, $\theta_{2\mathrm{min}}$, while the law (20) included the division operation. To conduct the experiment, the unknown parameters $\theta$ and the parameters of the filters (9) and the laws (12), (19), (20) were set as follows:

$$\begin{array}{c}\theta_{1}=1,\ \theta_{2}=2,\\ \sigma=1,\ \gamma=10^{13},\ \gamma_{\eta}=10^{5},\ \Gamma=10I_{3},\ \kappa=10,\\ \hat{\theta}(0)=\hat{\eta}(0)=0_{2},\ \hat{\Theta}(0)=\begin{bmatrix}0&1&0\end{bmatrix}^{\mathrm{T}}.\end{array}\qquad(21)$$

The initial conditions for (20) were chosen by trial and error so as to meet the condition $\hat{\Theta}_{2}(t)\neq 0$. Figure 1 depicts the transients of the estimates obtained with the help of (12), (19), and (20).

Figure 1: Transient behavior of $\hat{\theta}(t)$.

Estimates obtained with the laws (12) and (19) exponentially converged to the true values, since the condition $\Omega(t)\in\mathrm{FE}$ was met. At the same time, the estimates by (20) did not converge to the true values since $\Omega(t)\notin\mathrm{PE}$, i.e., the regressor was not persistently exciting. The simulation results confirmed that the goals (3a) and (3b) were achieved and demonstrated the advantages of the proposed solution in comparison with both the classic gradient identifier with overparameterization (20) and the law (19) from [9].
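For reference, the whole pipeline (9)-(12) for this example can be reproduced with a minimal simulation. A sketch (forward-Euler integration with the parameters from (16) and (21); the step size, horizon, and the guard against the initially singular $\overline{\Omega}(t)$ are our implementation choices):

```python
import numpy as np

theta = np.array([1.0, 2.0])
Theta = np.array([theta[0]*theta[1] + theta[0]**2,
                  theta[0] + theta[1], np.cos(theta[0])])
L = np.hstack([np.eye(2), np.zeros((2, 1))])

sigma, gamma, dt, T = 1.0, 1e13, 1e-4, 10.0       # gamma as in (21)
y_bar, Omega_bar = np.zeros(3), np.zeros((3, 3))
theta_hat = np.zeros(2)                           # theta_hat(0) = 0_2

for k in range(int(T/dt)):
    t = k*dt
    Omega = np.array([[np.exp(-t), np.sin(t), 1.0]])
    y = Omega @ Theta
    w = np.exp(-sigma*t)
    y_bar += dt*w*(Omega.T @ y)                   # extension (9)
    Omega_bar += dt*w*(Omega.T @ Omega)
    Delta = np.linalg.det(Omega_bar)              # mixing (10)
    if abs(Delta) < 1e-10:
        continue                                  # regressor not excited yet
    Y1, Y2 = L @ (Delta*np.linalg.inv(Omega_bar)) @ y_bar
    TG = np.diag([Y2, Delta*Y2])                  # mixing (11) via (5)
    TS = np.array([Y1, Y2**2 - Delta*Y1])
    M = np.linalg.det(TG)
    Y_theta = M*np.linalg.inv(TG) @ TS
    theta_hat += dt*(-gamma*M*(M*theta_hat - Y_theta))   # law (12)

print(theta_hat)   # close to theta = [1, 2]
```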

4.2 2-DOF robot manipulator

A problem of adaptive control of a 2-DOF robot manipulator with uncertainty has been considered:

$$M(q)\ddot{q}+C(q,\dot{q})\dot{q}+\nabla U(q)=u,\qquad(22)$$
$$M(q)=\begin{bmatrix}\Theta_{1}(\theta)+2\Theta_{2}(\theta)\cos(q_{2})&\Theta_{3}(\theta)+\Theta_{2}(\theta)\cos(q_{2})\\ \Theta_{3}(\theta)+\Theta_{2}(\theta)\cos(q_{2})&\Theta_{3}(\theta)\end{bmatrix},$$
$$C(q,\dot{q})=\begin{bmatrix}-\Theta_{2}(\theta)\sin(q_{2})\dot{q}_{2}&-\Theta_{2}(\theta)\sin(q_{2})(\dot{q}_{1}+\dot{q}_{2})\\ \Theta_{2}(\theta)\sin(q_{2})\dot{q}_{1}&0\end{bmatrix},$$
$$\nabla U(q)=\begin{bmatrix}\Theta_{4}(\theta)g\cos(q_{1}+q_{2})+\Theta_{5}(\theta)g\cos(q_{1})\\ \Theta_{4}(\theta)g\cos(q_{1}+q_{2})\end{bmatrix},$$
$$\Theta(\theta)=\begin{bmatrix}\theta_{2}^{2}\theta_{4}+\theta_{1}^{2}(\theta_{3}+\theta_{4})\\ \theta_{1}\theta_{2}\theta_{4}\\ \theta_{2}^{2}\theta_{4}\\ \theta_{2}\theta_{4}\\ \theta_{1}(\theta_{3}+\theta_{4})\end{bmatrix},$$

where $q\in\mathbb{R}^{2}$ is the vector of generalized coordinates, $u\in\mathbb{R}^{2}$ is the control vector, $M\colon\mathbb{R}^{2}\mapsto\mathbb{R}^{2\times 2}$ is the generalized inertia matrix, which is positive definite and assumed to be bounded, $C\colon\mathbb{R}^{2}\times\mathbb{R}^{2}\mapsto\mathbb{R}^{2\times 2}$ represents the Coriolis and centrifugal forces matrix, and $U\colon\mathbb{R}^{2}\mapsto\mathbb{R}$ is the potential energy function.

The goal was stated as $\lim_{t\to\infty}\mathrm{col}\{\tilde{q},\ \dot{\tilde{q}}\}=0$, where $\tilde{q}=q-q_{*}$ is the state tracking error and $q_{*}$ is a reference trajectory. The certainty-equivalence Slotine-Li controller [13] that ensured the achievement of the above-mentioned goal had the form [9]:

$$u=W(q,\dot{q},t)\Theta(\hat{\theta})+K_{1}s,\quad s=\dot{\tilde{q}}+K_{2}\tilde{q},\quad W(q,\dot{q},t)=\begin{bmatrix}W_{11}&W_{12}&W_{13}&W_{14}&W_{15}\\ W_{21}&W_{22}&W_{23}&W_{24}&W_{25}\end{bmatrix},\qquad(23)$$

with $W_{11}=\ddot{q}_{r1}$, $W_{12}=\cos(q_{2})(2\ddot{q}_{r1}+\ddot{q}_{r2})-\sin(q_{2})(\dot{q}_{2}\dot{q}_{r1}+(\dot{q}_{1}+\dot{q}_{2})\dot{q}_{r2})$, $W_{13}=\ddot{q}_{r2}$, $W_{14}=W_{24}=g\cos(q_{1}+q_{2})$, $W_{15}=g\cos(q_{1})$, $W_{21}=W_{25}=0$, $W_{22}=\cos(q_{2})\ddot{q}_{r1}+\sin(q_{2})\dot{q}_{1}\dot{q}_{r1}$, $W_{23}=\ddot{q}_{r1}+\ddot{q}_{r2}$, where $\dot{q}_{r}=\dot{q}_{*}-K_{2}\tilde{q}$.

The estimates of the unknown parameters $\theta$ with an exponential or asymptotic rate of convergence were required to implement (23). Using the measurable signals $q$, $\dot{q}$ and the torque $u$ and the results of Proposition 7 from [9], the regression model (2) was parameterized as follows:

$$y(t)=\mathfrak{H}[u],\quad\Omega(t)=\mathfrak{H}\begin{bmatrix}\Omega_{11}&\Omega_{12}&\Omega_{13}&\Omega_{14}&\Omega_{15}\\ \Omega_{21}&\Omega_{22}&\Omega_{23}&\Omega_{24}&\Omega_{25}\end{bmatrix},\qquad(24)$$

with $\Omega_{11}=p\dot{q}_{1}$, $\Omega_{12}=p\cos(q_{2})(2\dot{q}_{1}+\dot{q}_{2})$, $\Omega_{13}=p\dot{q}_{2}$, $\Omega_{14}=\Omega_{24}=W_{14}$, $\Omega_{15}=W_{15}$, $\Omega_{21}=\Omega_{25}=0$, $\Omega_{22}=p\cos(q_{2})\dot{q}_{1}+\sin(q_{2})(\dot{q}_{1}^{2}+\dot{q}_{1}\dot{q}_{2})$, $\Omega_{23}=p(\dot{q}_{1}+\dot{q}_{2})$. In accordance with the "monotonizability" assumption, the following change of variables was introduced to implement the identification law from [9]:

$$\eta=\mathcal{D}(\theta)=\mathrm{col}\{\theta_{1},\ \theta_{2},\ \theta_{2}\theta_{4},\ \theta_{1}(\theta_{3}+\theta_{4})\},\quad\theta=\mathcal{D}^{I}(\eta)=\mathrm{col}\left\{\eta_{1},\ \eta_{2},\ \tfrac{\eta_{4}}{\eta_{1}}-\tfrac{\eta_{3}}{\eta_{2}},\ \tfrac{\eta_{3}}{\eta_{2}}\right\},\qquad(25)$$

which allowed one to rewrite $\Theta(\theta)$ as $(\Theta\circ\mathcal{D}^{I})(\eta)=\mathrm{col}\{\eta_{2}\eta_{3}+\eta_{1}\eta_{4},\ \eta_{1}\eta_{3},\ \eta_{2}\eta_{3},\ \eta_{3},\ \eta_{4}\}$ and ensure that there existed a constant $\rho>0$ such that the strong P-monotonicity condition [11] for the mapping $\mathcal{W}(\eta)=C(\Theta\circ\mathcal{D}^{I})(\eta)$:

$$(a-b)^{\mathrm{T}}P(\mathcal{W}(a)-\mathcal{W}(b))\geq\rho|a-b|^{2}>0,\quad\forall a,b\in\mathbb{R}^{4},\ a\neq b,\qquad(26)$$

was met for

$$C=\begin{bmatrix}I_{4}&0_{4}\end{bmatrix}\begin{bmatrix}0_{4}&I_{4}\\ 1&0_{1\times 4}\end{bmatrix},\quad P=\mathrm{diag}\{\kappa,\ \kappa,\ 0,\ 0\},\quad\kappa\geq\tfrac{1}{4\theta_{4}^{m}}\left[\theta_{2}^{M}+\tfrac{(\theta_{1}^{M})^{2}}{\theta_{2}^{m}}\right].$$

According to [9] and using (9), the estimation law was defined as:

$$\hat{\theta}(t)=\mathcal{D}^{I}(\hat{\eta}),\quad\dot{\hat{\eta}}(t)=\gamma_{\eta}P\Delta(t)\left(C\,\mathrm{adj}\{\overline{\Omega}(t)\}\overline{y}(t)-\Delta(t)\mathcal{W}(\hat{\eta})\right).\qquad(27)$$

Following the proposed method of identification, the vector $\psi(\theta)$ from FC2 and the mappings from Assumption 1 took the form:

$$\begin{array}{c}\psi(\theta)=\mathrm{col}\{\Theta_{1}(\theta),\ \Theta_{2}(\theta),\ \Theta_{3}(\theta),\ \Theta_{5}(\theta)\},\\ \mathcal{G}(\psi)=\mathrm{diag}\{\psi_{4},\ \psi_{4}\psi_{2},\ (\psi_{1}-\psi_{3})^{2}\psi_{3},\ (\psi_{1}-\psi_{3})^{2}\psi_{3}\},\\ \mathcal{S}(\psi)=\mathrm{col}\{\psi_{1}-\psi_{3},\ (\psi_{1}-\psi_{3})\psi_{3},\ (\psi_{1}-\psi_{3})\psi_{3}\psi_{4}^{2}-\psi_{4}^{2}\psi_{2}^{2},\ \psi_{4}^{2}\psi_{2}^{2}\},\\ \mathcal{T}_{\mathcal{G}}(\overline{\Xi}_{\mathcal{G}}(\Delta)\mathcal{Y}_{\psi})=\mathrm{diag}\{\mathcal{Y}_{4\psi},\ \mathcal{Y}_{4\psi}\mathcal{Y}_{2\psi},\ (\mathcal{Y}_{1\psi}-\mathcal{Y}_{3\psi})^{2}\Delta\mathcal{Y}_{3\psi},\ (\mathcal{Y}_{1\psi}-\mathcal{Y}_{3\psi})^{2}\Delta\mathcal{Y}_{3\psi}\},\\ \mathcal{T}_{\mathcal{S}}(\overline{\Xi}_{\mathcal{S}}(\Delta)\mathcal{Y}_{\psi})=\mathrm{col}\{\mathcal{Y}_{1\psi}-\mathcal{Y}_{3\psi},\ (\mathcal{Y}_{1\psi}-\mathcal{Y}_{3\psi})\mathcal{Y}_{3\psi},\ (\mathcal{Y}_{1\psi}-\mathcal{Y}_{3\psi})\mathcal{Y}_{3\psi}\mathcal{Y}_{4\psi}^{2}-\mathcal{Y}_{4\psi}^{2}\mathcal{Y}_{2\psi}^{2},\ \mathcal{Y}_{4\psi}^{2}\mathcal{Y}_{2\psi}^{2}\}.\end{array}\qquad(28)$$
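The first identity of Assumption 1 for the mappings (28) can again be verified symbolically; a minimal sketch (sympy assumed, all names ours):

```python
import sympy as sp

t1, t2, t3, t4 = sp.symbols('theta1:5')
theta = sp.Matrix([t1, t2, t3, t4])
# psi = col{Theta_1, Theta_2, Theta_3, Theta_5} from (28)
p1 = t2**2*t4 + t1**2*(t3 + t4)
p2 = t1*t2*t4
p3 = t2**2*t4
p4 = t1*(t3 + t4)

G = sp.diag(p4, p4*p2, (p1 - p3)**2*p3, (p1 - p3)**2*p3)
S = sp.Matrix([p1 - p3,
               (p1 - p3)*p3,
               (p1 - p3)*p3*p4**2 - p4**2*p2**2,
               p4**2*p2**2])
# S(psi) = G(psi)*theta, the first identity of (4)
assert (S - G*theta).expand() == sp.zeros(4, 1)
print("S(psi) = G(psi)*theta holds for the manipulator mappings (28)")
```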

Note that, unlike (12), the law (27) requires information about the bounds $\theta_{1}\leq\theta_{1}^{M}$, $\theta_{2}^{m}\leq\theta_{2}\leq\theta_{2}^{M}$, $\theta_{4}^{m}\leq\theta_{4}$ and uses the singularity-burdened division operation in the mapping $\mathcal{D}^{I}(\hat{\eta})$. To conduct the experiment, the unknown parameters $\theta$ and the parameters of the control law (23), the filters (9) and the laws (12), (27) were set as follows:

$$\begin{array}{c}\theta_{1}=0.7,\ \theta_{2}=0.8,\ \theta_{3}=1.5,\ \theta_{4}=0.5,\ g=9.8,\\ K_{1}=3I_{2},\ K_{2}=I_{2},\ \sigma=1,\ \kappa=10,\\ \hat{\eta}_{i}(0)=0.1,\ \hat{\theta}(0)=\mathcal{D}^{I}(\hat{\eta}(0)),\\ \gamma:=\tfrac{10}{1+\mathcal{M}^{2}(t)},\quad\gamma_{\eta}:=\tfrac{5}{1+\Delta^{2}(t)}.\end{array}\qquad(29)$$

It is worth mentioning that the applicability and safety of using a time-varying adaptive gain in the certainty-equivalence indirect control problem was shown, for instance, in Proposition 6 from [9]. So the above-presented proof of Theorem 1 holds mutatis mutandis for this simulation example.

Figure 2 presents the transients of the estimates $\hat{\theta}(t)$ obtained with the help of the laws (12) and (27), as well as the errors $\tilde{q}(t)$, $\dot{\tilde{q}}(t)$ for the implementations of (23) with (12) and of (23) with (27).

Figure 2: Transient behavior of $\hat{\theta}(t)$ and $\tilde{q}(t)$, $\dot{\tilde{q}}(t)$.

The simulation results confirmed the effectiveness of the proposed estimation law. Owing to the elementwise monotonicity of $\tilde{\theta}(t)$, the overshoot of $\dot{\tilde{q}}(t)$ was reduced in comparison with the control law (23) implemented with (27).

In addition, unlike (27), the proposed law (i) does not require special selection of $\hat{\theta}(0)$ and is implementable under any initial conditions (the law (27) does not allow one to choose $\hat{\eta}_{1}(0)=0$ or $\hat{\eta}_{2}(0)=0$ due to the definition of the mapping $\hat{\theta}(0)=\mathcal{D}^{I}(\hat{\eta}(0))$), and (ii) does not require the lower and upper bounds $\theta_{4}^{m}$, $\theta_{2}^{M}$, $\theta_{1}^{M}$, $\theta_{2}^{m}$.

5 Conclusion

An estimation law of the unknown parameters for one class of NLPRE was proposed. In contrast to the existing solutions, the elementwise monotonicity of the parametric error was ensured. The necessary and sufficient implementability conditions for the developed law were: (i) the regressor finite-excitation requirement, (ii) the existence of an inverse function from the overparameterized parameters to the physical ones, (iii) the use of only polynomial functions in $\theta$ to form the overparameterization. The results can be applied to improve the quality of solutions of adaptive control and observation problems from recent studies [2], [3], [7], [9].

References

  • [1] S. Aranovskiy, A. Bobtsov, R. Ortega, and A. Pyrkin. Parameters estimation via dynamic regressor extension and mixing. In Proc. Amer. Control Conf., pages 6971–6976, 2016.
  • [2] A. Bobtsov, R. Ortega, S. Aranovskiy, and R. Cisneros. On-line estimation of the parameters of the windmill power coefficient. Systems & Control Letters, 164:105242, 2022.
  • [3] R. Cisneros and R. Ortega. Identification of nonlinearly parameterized nonlinear dissipative systems. IFAC-PapersOnLine, 55(12):79–84, 2022.
  • [4] A.I. Glushchenko and K.A. Lastochkin. Unknown piecewise constant parameters identification with exponential rate of convergence. Int. J. of Adaptive Control and Signal Proc., 37(1):315–346, 2023.
  • [5] A.I. Glushchenko, K.A. Lastochkin, and V.A. Petrov. Exponentially stable adaptive control. Part I. Time-invariant plants. Autom. and Remote Control, 83(4):548–578, 2022.
  • [6] L. Ljung. System Identification: Theory for the User. Prentice Hall, New Jersey, 1987.
  • [7] R. Ortega, A. Bobtsov, R. Costa-Castello, and N. Nikolaev. Parameter estimation of two classes of nonlinear systems with non-separable nonlinear parameterizations. arXiv preprint arXiv:2211.06455, 2022. https://arxiv.org/abs/2211.06455.
  • [8] R. Ortega, V. Gromov, E. Nuño, A. Pyrkin, and J.G. Romero. Parameter estimation of nonlinearly parameterized regressions: application to system identification and adaptive control. IFAC-PapersOnLine, 53(2):1206–1212, 2020.
  • [9] R. Ortega, V. Gromov, E. Nuño, A. Pyrkin, and J.G. Romero. Parameter estimation of nonlinearly parameterized regressions without overparameterization: application to adaptive control. Automatica, 127:109544, 2021.
  • [10] R. Ortega, J.G. Romero, and S. Aranovskiy. A new least squares parameter estimator for nonlinear regression equations with relaxed excitation conditions and forgetting factor. Systems & Control Letters, 169:105377, 2022.
  • [11] A. Pavlov, A. Pogromsky, N. van de Wouw, and H. Nijmeijer. Convergent dynamics, a tribute to Boris Pavlovich Demidovich. Systems & Control Letters, 52(3):257–261, 2004.
  • [12] J.G. Romero, R. Ortega, and A. Bobtsov. Parameter estimation and adaptive control of Euler-Lagrange systems using the power balance equation parameterisation. Int. J. of Control, pages 1-13, 2021.
  • [13] J.-J.E. Slotine and W. Li. Adaptive manipulator control: A case study. IEEE Trans. on Automatic Control, 33(11):995-1003, 1988.