
On algebraic time-derivative estimation
and deadbeat state reconstruction

Johann Reger and Jerome Jouffroy

Johann Reger is head of the Control Engineering Group, Ilmenau University of Technology, Gustav-Kirchhoff-Str. 1, D-98693 Ilmenau, Germany (e-mail: reger@ieee.org). Jerome Jouffroy is with the Mads Clausen Institute, University of Southern Denmark, Alsion 2, DK-6400 Sønderborg, Denmark (e-mail: jerome@mci.sdu.dk).
Abstract

This paper places into perspective the so-called algebraic time-derivative estimation method recently introduced by Fliess and co-authors with standard results from linear state-space theory for control systems. In particular, it is shown that the algebraic method can essentially be seen as a special case of deadbeat state estimation based on the reconstructibility Gramian of the considered system.

I Introduction

In the past few years, the algebraic approach to estimation in control systems proposed by Fliess and co-workers has generated a number of interesting results for different estimation problems in dynamical systems, such as state estimation, parametric identification, and fault diagnosis, to name but a few (see [13, 11, 9, 8] and references therein). Loosely speaking, this new estimation approach is mainly based on the robust computation of the time-derivatives of a noisy signal using a finite weighted combination of time-integrations of this signal. These results, obtained through the use of differential algebra and operational calculus [20], allow one to obtain an estimate of the time-derivative of a particular order in an arbitrarily small amount of time [12].

Questions arise as to how the above relates to more classical results of automatic control, and in particular to linear system theory. The present paper contributes to this discussion by showing that the algebraic time-derivative estimation method, as presented in [21] and references therein, can essentially be seen as a special case of previously known state-space results exhibiting a deadbeat property.

After this introduction, we briefly recall in Section II the main results of the algebraic time-derivative estimation method. Then, in Section III, we recall a few results of linear observability theory and show how in particular the reconstructibility Gramian can be related to the algebraic method. We end this paper with a few additional remarks on how to relate further extensions of the algebraic approach with different areas of control systems theory.

Parts of this study were presented, albeit in German, in Reger and Jouffroy [24].

II Algebraic time-derivative estimation

The algebraic derivative estimation techniques have been presented in various styles and frameworks, mostly based on abstract algebra and operational calculus. Because of its practical interest, we recall here only the main result for a moving-horizon version of the approach (see [21] and [35]). However, note that the results of the present paper also apply readily to the earlier expanding-horizon versions that can be found in [7] or [13].

Consider a real-valued, N-th degree polynomial function of time

y(t)=\sum_{i=0}^{N}\frac{a_{i}}{i!}\,t^{i}  (1)

where the terms a_{i} are unknown constant coefficients. The goal is to obtain estimates of the time-derivatives of y(t), up to order N.

In [7, 6, 22], Fliess and co-workers proposed to do so by, roughly speaking, resorting to algebraic combinations of moving-horizon time-integrations of the available signal y(t). Let us briefly recall these results in the following theorem [21, 35].

Theorem 1

For all t\geq T, the j-th order time-derivative estimate \hat{y}^{(j)}(t), j=0,1,2,\ldots,N, of the polynomial signal y(t) as defined in (1) satisfies the convolution

\hat{y}^{(j)}(t)=\int_{0}^{T}H_{j}(T,\tau)\,y(t-\tau)\,d\tau\,,\quad j=0,1,\ldots,N  (2)

where the convolution kernel

H_{j}(T,\tau)=\frac{(N+j+1)!\,(N+1)!}{T^{N+j+1}}\sum_{\kappa_{1}=0}^{N-j}\sum_{\kappa_{2}=0}^{j}\frac{(T-\tau)^{\kappa_{1}+\kappa_{2}}\,(-\tau)^{N-\kappa_{1}-\kappa_{2}}}{\kappa_{1}!\,\kappa_{2}!\,(N-j-\kappa_{1})!\,(j-\kappa_{2})!\,(N-\kappa_{1}-\kappa_{2})!\,(\kappa_{1}+\kappa_{2})!\,(N-\kappa_{1}+1)}  (3)

depends on the order j of the time derivative to be estimated and on an arbitrary constant time window length T>0. \square


For the interested reader, as well as for the sake of completeness, a way to derive the results of Theorem 1 is given in Appendix A.
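
To make the estimator concrete, the following minimal Python sketch (the test polynomial, window length and quadrature grid are arbitrary illustrative choices, not from the original text) evaluates the kernel (3) and approximates the convolution (2) by trapezoidal quadrature:

    import numpy as np
    from math import factorial as f

    def H(j, N, T, tau):
        """Kernel H_j(T, tau) of eq. (3); tau may be a numpy array."""
        s = sum((T - tau)**(k1 + k2) * (-tau)**(N - k1 - k2)
                / (f(k1) * f(k2) * f(N - j - k1) * f(j - k2)
                   * f(N - k1 - k2) * f(k1 + k2) * (N - k1 + 1))
                for k1 in range(N - j + 1) for k2 in range(j + 1))
        return f(N + j + 1) * f(N + 1) / T**(N + j + 1) * s

    # Test signal (1) with N = 2, a_0 = 1, a_1 = 2, a_2 = 3; estimate y'(t) at t = 1.
    N, j, T, t = 2, 1, 0.5, 1.0
    tau = np.linspace(0.0, T, 2001)
    w = np.full(tau.size, T / (tau.size - 1)); w[0] /= 2; w[-1] /= 2  # trapezoid weights
    y = 1 + 2 * (t - tau) + 1.5 * (t - tau)**2            # samples of y(t - tau)
    print(np.sum(w * H(j, N, T, tau) * y))                # approx y'(1) = 2 + 3*1 = 5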

Thus, considering for example the degree-one polynomial

y(t)=a_{0}+a_{1}\,t  (4)

applying Theorem 1 would simply give us the following first-order time-derivative estimate

\hat{\dot{y}}(t)=\int_{0}^{T}\frac{6}{T^{3}}\big(T-2\tau\big)\,y(t-\tau)\,d\tau\,.  (5)

The effect of the time-integration is obviously to dampen the impact of the measurement noise on the estimate. Note that this feature can also be used to filter out noise from the signal y(t) itself, as the zero-order time-derivative estimator would be

\hat{y}(t)=\int_{0}^{T}\frac{2}{T^{2}}\big(2T-3\tau\big)\,y(t-\tau)\,d\tau  (6)

as obtained, once again, from Theorem 1.
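
These two kernels can also be checked symbolically against the general expression (3); the short sympy script below (an illustrative verification, not part of the original derivation) confirms that for N = 1 the kernel (3) reduces exactly to the integrands of (5) and (6):

    import sympy as sp

    T, tau = sp.symbols('T tau', positive=True)

    def H(j, N):
        # Kernel (3) as a symbolic expression in T and tau.
        s = sum((T - tau)**(k1 + k2) * (-tau)**(N - k1 - k2)
                / (sp.factorial(k1) * sp.factorial(k2) * sp.factorial(N - j - k1)
                   * sp.factorial(j - k2) * sp.factorial(N - k1 - k2)
                   * sp.factorial(k1 + k2) * (N - k1 + 1))
                for k1 in range(N - j + 1) for k2 in range(j + 1))
        return sp.factorial(N + j + 1) * sp.factorial(N + 1) / T**(N + j + 1) * s

    print(sp.simplify(H(1, 1) - 6/T**3 * (T - 2*tau)))    # 0, matches (5)
    print(sp.simplify(H(0, 1) - 2/T**2 * (2*T - 3*tau)))  # 0, matches (6)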

III From deadbeat reconstruction of the state to the algebraic method

As will be seen, the above may be related in several ways to more traditional results of classical linear control theory. To this end, consider now the following linear time-varying system

\dot{\mathbf{x}}(t)=\mathbf{A}(t)\,\mathbf{x}(t)  (7)
y(t)=\mathbf{C}(t)\,\mathbf{x}(t)  (8)

where \mathbf{x}(t)\in\mathbb{R}^{N+1} and y(t)\in\mathbb{R}. Note that while the form of system (7)-(8) was chosen for the sake of simplicity and ease of presentation, the discussion of the present section extends readily to systems with multiple inputs and outputs.

Then let us briefly recall a few elements pertaining to the notion of state reconstructibility [14, 1, 23]. As noted in Willems and Mitter [34], this property has been quite overlooked in the control literature, possibly because of its equivalence with observability for linear continuous-time systems. Loosely speaking, we say that system (7)-(8) is reconstructible on [t_{0},t_{1}] if \mathbf{x}(t_{1}) can be obtained from the measurements y(t) for t\in[t_{0},t_{1}].

A standard way of determining \mathbf{x}(t_{1}) is to first write the following expression for the output

y(\tau)=\mathbf{C}(\tau)\,\mathbf{\Phi}(\tau,t_{1})\,\mathbf{x}(t_{1})  (9)

where \mathbf{\Phi}(\tau,t) is the transition matrix of (7). Then, left-multiply (9) by \mathbf{\Phi}^{\mathrm{T}}(\tau,t_{1})\,\mathbf{C}^{\mathrm{T}}(\tau) and integrate to get

\int_{t_{0}}^{t_{1}}\mathbf{\Phi}^{\mathrm{T}}(\tau,t_{1})\,\mathbf{C}^{\mathrm{T}}(\tau)\,y(\tau)\,d\tau=\left(\int_{t_{0}}^{t_{1}}\mathbf{\Phi}^{\mathrm{T}}(\tau,t_{1})\,\mathbf{C}^{\mathrm{T}}(\tau)\,\mathbf{C}(\tau)\,\mathbf{\Phi}(\tau,t_{1})\,d\tau\right)\mathbf{x}(t_{1})\,.  (10)

Since \mathbf{x}(t_{1}) in (10) is a constant term with respect to the integral, it can be isolated, and we finally get, for an estimate \hat{\mathbf{x}}(t_{1}) of \mathbf{x}(t_{1}),

\hat{\mathbf{x}}(t_{1}):=\mathbf{W}_{\mathrm{r}}^{-1}(t_{0},t_{1})\int_{t_{0}}^{t_{1}}\mathbf{\Phi}^{\mathrm{T}}(\tau,t_{1})\,\mathbf{C}^{\mathrm{T}}(\tau)\,y(\tau)\,d\tau  (11)

where

\mathbf{W}_{\mathrm{r}}(t_{0},t_{1})=\int_{t_{0}}^{t_{1}}\mathbf{\Phi}^{\mathrm{T}}(\tau,t_{1})\,\mathbf{C}^{\mathrm{T}}(\tau)\,\mathbf{C}(\tau)\,\mathbf{\Phi}(\tau,t_{1})\,d\tau  (12)

is the reconstructibility Gramian.

In textbook treatments of observability, developments such as the above are mostly used, through the observability counterpart of (12), to check whether a system is observable (resp. reconstructible) or not. However, as noted in [5, p. 158] for the observability case, expression (12) can also be used to actually compute \hat{\mathbf{x}}(t_{1}), as integration will smooth out high-frequency noise.

The above results are well known, even if not as widely used for state estimation as linear asymptotic observers. The former, however, have the interesting property of providing an estimate of \mathbf{x}(t_{1}) in finite time, the window [t_{0},t_{1}] being constrained only by the invertibility of (12).
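
As an illustration of this deadbeat property, the following sketch (the test system, window and quadrature grid are arbitrary choices, not from the original text) approximates (11)-(12) by trapezoidal quadrature and recovers the state at t_{1} from output data on [t_{0},t_{1}] alone:

    import numpy as np
    from scipy.linalg import expm

    def deadbeat_estimate(A, C, y, t0, t1, n=2000):
        """Approximate x_hat(t1) from (11)-(12) by trapezoidal quadrature."""
        taus = np.linspace(t0, t1, n + 1)
        w = np.full(n + 1, (t1 - t0) / n); w[0] /= 2; w[-1] /= 2
        Wr = np.zeros((A.shape[0], A.shape[0])); b = np.zeros(A.shape[0])
        for tau, wk in zip(taus, w):
            CPhi = (C @ expm(A * (tau - t1))).ravel()  # C(tau) Phi(tau, t1)
            Wr += wk * np.outer(CPhi, CPhi)            # Gramian integrand, eq. (12)
            b  += wk * CPhi * y(tau)                   # output integral in eq. (11)
        return np.linalg.solve(Wr, b)

    # Test: the output is generated by a known initial state; the estimate matches x(t1).
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    C = np.array([[1.0, 0.0]])
    x0 = np.array([1.0, -1.0])
    y = lambda t: (C @ expm(A * t) @ x0)[0]
    print(deadbeat_estimate(A, C, y, 0.0, 1.0))  # approx expm(A) @ x0
    print(expm(A * 1.0) @ x0)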


Interestingly, these two features of the above Gramian-based estimation (deadbeat property and time-integration) coincide with those of algebraic time-derivative estimation.

Let us push the comparison a little further in a simple way by first noticing that the degree-one polynomial (4) of our example can be put into state-space phase-variable form with matrices

\mathbf{A}=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad\mathbf{C}=\begin{pmatrix}1&0\end{pmatrix},  (13)

with state \mathbf{x}(t)=(y(t),\dot{y}(t))^{\mathrm{T}} and initial conditions \mathbf{x}(0)=(a_{0},a_{1})^{\mathrm{T}}.

Then, compute an estimate of \mathbf{x}(t) using (11) and (12). To do so, use the fact that the matrices in (13) are time-invariant and that \mathbf{A}^{2}=\mathbf{0} to obtain

\mathbf{\Phi}(\tau,t_{1})=e^{\mathbf{A}(\tau-t_{1})}=\mathbf{I}+(\tau-t_{1})\,\mathbf{A}=\begin{pmatrix}1&\tau-t_{1}\\0&1\end{pmatrix}  (14)

which implies that

\mathbf{C}\,\mathbf{\Phi}(\tau,t_{1})=\begin{pmatrix}1&\tau-t_{1}\end{pmatrix}\,.  (15)

Letting t_{0}=t-T (with T>0 fixed) and t_{1}=t, we then obtain from (12) the following Gramian

\mathbf{W}_{\mathrm{r}}(t-T,t)=\begin{pmatrix}T&-\frac{T^{2}}{2}\\-\frac{T^{2}}{2}&\frac{T^{3}}{3}\end{pmatrix}  (16)

which in turn is used, in combination with (11), to get

\hat{\mathbf{x}}(t)=\begin{pmatrix}\hat{y}(t)\\\hat{\dot{y}}(t)\end{pmatrix}=\begin{pmatrix}\frac{4}{T}&\frac{6}{T^{2}}\\\frac{6}{T^{2}}&\frac{12}{T^{3}}\end{pmatrix}\int_{t-T}^{t}\begin{pmatrix}1\\\tau-t\end{pmatrix}y(\tau)\,d\tau\,.  (17)

Hence, similarly to the previous section, an estimate of the derivatives of a degree-one polynomial can be obtained with time-integrations of the measured signal, albeit this time using tools from classical control theory.

Note, interestingly, that in this particular example there is more than a mere similarity. Indeed, after a simple change of variable \sigma=t-\tau in (17), we find exactly the same expressions as (5) and (6).
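
This can be verified symbolically; the short sympy check below (illustrative only, not part of the original text) inverts the Gramian (16), recovers the matrix appearing in (17), and confirms that the substitution \sigma=t-\tau in the second row of (17) reproduces the kernel of (5):

    import sympy as sp

    T, t, tau, sigma = sp.symbols('T t tau sigma', positive=True)
    Wr = sp.Matrix([[T, -T**2/2], [-T**2/2, T**3/3]])       # Gramian (16)
    Wr_inv = sp.simplify(Wr.inv())
    print(Wr_inv)  # Matrix([[4/T, 6/T**2], [6/T**2, 12/T**3]]), as in (17)

    # Second row of Wr_inv times (1, tau - t)^T, then sigma = t - tau:
    kernel = (Wr_inv[1, 0] + Wr_inv[1, 1] * (tau - t)).subs(tau, t - sigma)
    print(sp.simplify(kernel - 6/T**3 * (T - 2*sigma)))     # 0, i.e. the kernel of (5)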

The above second-order case can be generalized to obtain the j-th time-derivative of any polynomial simply by specializing \mathbf{A}(t) and \mathbf{C}(t) in (7)-(8) to a state-space description of the polynomial (1), which yields, in phase-variable form, the (N+1)\times(N+1) matrix

\mathbf{A}=\begin{pmatrix}0&1&0&\cdots&0\\0&0&1&\ddots&0\\\vdots&\vdots&\ddots&\ddots&0\\0&0&0&\cdots&1\\0&0&0&\cdots&0\end{pmatrix}  (18)

and the (N+1)-dimensional row vector

\mathbf{C}=\begin{pmatrix}1&0&\cdots&0\end{pmatrix}  (19)

associated with the state vector \mathbf{x}(t)=(y(t),\dot{y}(t),\ldots,y^{(j)}(t),\ldots,y^{(N)}(t))^{\mathrm{T}}.

After several steps in line with the previous second-order example, we obtain, similarly to Section II, an expression of the jj-th time-derivative of a polynomial signal (1) based on the reconstructibility Gramian. This is summarized in the following theorem.


Theorem 2

For all t\geq T, the j-th order time-derivative estimate \hat{y}^{(j)}(t), j=0,1,2,\ldots,N, of the polynomial signal y(t) as defined in (1) satisfies the convolution

\hat{y}^{(j)}(t)=\int_{0}^{T}G_{j}(T,\sigma)\,y(t-\sigma)\,d\sigma\,,\quad j=0,1,\ldots,N  (20)

where the convolution kernel

G_{j}(T,\sigma)=\frac{(N+j+1)!}{T^{j+1}\,j!\,(N-j)!}\sum_{k=0}^{N}\frac{(-1)^{k}\,(N+k+1)!}{(j+k+1)\,(N-k)!\,(k!)^{2}}\left(\frac{\sigma}{T}\right)^{k}  (21)

depends on the order j of the time derivative to be estimated and on an arbitrary constant time window length T>0. \square


Proof:

Broadly speaking, the proof is based on obtaining a closed-form expression corresponding to equations (11) and (12) for the particular case with matrices (18) and (19).

Since this system is LTI, the corresponding transition matrix results from the matrix exponential of (18), i.e.

e^{\mathbf{A}t}=\begin{pmatrix}1&t&t^{2}/2&t^{3}/6&\cdots&t^{N}/N!\\0&1&t&t^{2}/2&\cdots&t^{N-1}/(N-1)!\\0&0&1&t&\cdots&t^{N-2}/(N-2)!\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&0&\cdots&t\\0&0&0&0&\cdots&1\end{pmatrix}  (22)

which is then used to obtain the state-transition matrix

\mathbf{\Phi}(\tau,t_{1})=e^{\mathbf{A}(\tau-t_{1})}\,.  (23)

Consequently, the entries of the (N+1)\times(N+1) reconstructibility Gramian (12) read

\left[W_{\mathrm{r}}\right]_{ij}(t_{0},t_{1})=\int_{t_{0}}^{t_{1}}\frac{(\tau-t_{1})^{i+j-2}}{(i-1)!\,(j-1)!}\,d\tau=\frac{-(t_{0}-t_{1})^{i+j-1}}{(i-1)!\,(j-1)!\,(i+j-1)}\,.  (24)

In view of (11), the inversion of this Gramian is required. Its entries are provided in closed form by Lemma 1 in Appendix B, that is

\left[W_{\mathrm{r}}^{-1}\right]_{ij}(t_{0},t_{1})=\frac{(i-1)!\,(j-1)!\,(i+j-1)}{(t_{1}-t_{0})^{i+j-1}}\binom{N+i}{N+1-j}\binom{N+j}{N+1-i}\binom{i+j-2}{i-1}^{2}.  (25)

Hence, using (25) together with the particular form of the transition matrix (23), the (i+1)-th component of \hat{\mathbf{x}}(t_{1}) follows from (11) as

\hat{x}_{i+1}(t_{1})=\int_{t_{0}}^{t_{1}}\sum_{j=0}^{N}\left[W_{\mathrm{r}}^{-1}\right]_{i+1,j+1}(t_{0},t_{1})\,\frac{(\tau-t_{1})^{j}}{j!}\,y(\tau)\,d\tau\,.  (26)

In other words, the j-th time-derivative estimate of y(t) at time t=t_{1} can be obtained from the convolution

y^{(j)}(t_{1})=\int_{t_{0}}^{t_{1}}\bar{G}_{j}(t_{1},t_{0},\tau)\,y(\tau)\,d\tau\,,\quad j=0,1,\ldots,N  (27)

where

\bar{G}_{j}(t_{1},t_{0},\tau)=\frac{(N+j+1)!}{(t_{1}-t_{0})^{j+1}\,j!\,(N-j)!}\sum_{k=0}^{N}\frac{(-1)^{k}\,(N+k+1)!}{(j+k+1)\,(N-k)!\,(k!)^{2}}\left(\frac{t_{1}-\tau}{t_{1}-t_{0}}\right)^{k}.  (28)

A receding-horizon version of equation (27) can then be obtained as follows: let t_{0}=t-T (with T>0 fixed) and t_{1}=t. Then apply the change of variable \sigma=t-\tau to obtain (20) and (21), which completes the proof of the theorem. ∎


As might be expected from the above discussion and the second-order example, it is possible to show an equivalence between the algebraic estimator of Section II and the one of Theorem 2, for all N. We make this statement precise in the following theorem.


Theorem 3

Let H_{j}(T,\tau) and G_{j}(T,\tau) be defined as in (3) and (21), respectively. Then for T>0, \tau\in[0,T] and N\in\{0,1,2,\ldots\},

H_{j}(T,\tau)=G_{j}(T,\tau),\quad j\in\{0,1,2,\ldots,N\}\,.  (29)

\square


Proof:

Theorem 3 follows from Riesz' representation theorem [26], which states that for every continuous linear functional f on a Hilbert space \mathcal{H}, a unique p\in\mathcal{H} exists such that

f(q)=\langle p,q\rangle\qquad\forall q\in\mathcal{H}\,,  (30)

where \langle\cdot\,,\cdot\rangle denotes the inner product on \mathcal{H}.

In order to prepare the ground for applying this theorem, first note that for T>0 fixed, the expressions H_{j}(T,\tau) and G_{j}(T,\tau), given by (3) and (21), are polynomials in \tau of degree N. Furthermore, for t fixed, y(t-\tau) is a polynomial in \tau of degree N which, in view of (1), spans \mathcal{H}_{N}, i.e. the Hilbert space of polynomials of degree at most N equipped with the real-valued inner product

\langle p,q\rangle:=\int_{0}^{T}p(\tau)\,q(\tau)\,d\tau\,,\quad p,q\in\mathcal{H}_{N}\,.  (31)

Hence, for T>0 fixed, H_{j}(T,\tau)\in\mathcal{H}_{N} and G_{j}(T,\tau)\in\mathcal{H}_{N}. Moreover, letting q(\tau):=y(t-\tau) with fixed t\geq T, we have that q\in\mathcal{H}_{N}.

In accordance with (2) and (20), let

f_{H_{j}}(q):=\int_{0}^{T}H_{j}(T,\tau)\,q(\tau)\,d\tau  (32)

and

f_{G_{j}}(q):=\int_{0}^{T}G_{j}(T,\tau)\,q(\tau)\,d\tau  (33)

for j=0,1,2,\ldots,N.

Thus, Theorems 1 and 2 imply that for any q\in\mathcal{H}_{N}

f_{H_{j}}(q)=f_{G_{j}}(q)\,,\quad j=0,1,2,\ldots,N\,.  (34)

Since H_{j}(T,\tau)\in\mathcal{H}_{N} and G_{j}(T,\tau)\in\mathcal{H}_{N} for T>0 fixed, the uniqueness of p in Riesz' theorem shows that

H_{j}(T,\tau)\equiv G_{j}(T,\tau)  (35)

for j=0,1,2,\ldots,N, under the assumptions of Theorem 3. ∎

Note that other proofs of the previous theorem are also possible. For example, a somewhat more component-wise proof, based on modern computer algebra proof techniques [33], is presented in [25] by showing specifically how the terms in (3) relate to those of (21).
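
A quick numerical spot-check of this equivalence is also instructive; the following script (window length and parameter ranges are arbitrary choices, not from the original text) evaluates both kernels on a grid and confirms that they agree to machine precision:

    import numpy as np
    from math import factorial as f

    def H(j, N, T, tau):  # kernel (3)
        return f(N + j + 1) * f(N + 1) / T**(N + j + 1) * sum(
            (T - tau)**(k1 + k2) * (-tau)**(N - k1 - k2)
            / (f(k1) * f(k2) * f(N - j - k1) * f(j - k2)
               * f(N - k1 - k2) * f(k1 + k2) * (N - k1 + 1))
            for k1 in range(N - j + 1) for k2 in range(j + 1))

    def G(j, N, T, tau):  # kernel (21)
        return f(N + j + 1) / (T**(j + 1) * f(j) * f(N - j)) * sum(
            (-1)**k * f(N + k + 1) / ((j + k + 1) * f(N - k) * f(k)**2) * (tau / T)**k
            for k in range(N + 1))

    T = 0.5
    tau = np.linspace(0.0, T, 101)
    assert all(np.allclose(H(j, N, T, tau), G(j, N, T, tau))
               for N in range(5) for j in range(N + 1))
    print("H_j and G_j agree for N = 0, ..., 4")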

IV Additional remarks

In addition to the main result of Section II, Fliess et al. proposed several extensions or modifications, several of which also have connections with different areas of control systems theory. Let us briefly consider some of them in the following remarks.

For instance, note that an expanding-horizon version of the algebraic method was first introduced in [12], which corresponds to letting t_{0}=0 and t_{1}=t in the reconstructibility Gramian perspective. In this case, an equivalence similar to Theorem 3 can still be obtained. Furthermore, note that, interestingly, letting \mathbf{S}(t):=\mathbf{W}_{\mathrm{r}}(0,t), and differentiating \mathbf{S}(t) and the product \mathbf{S}(t)\,\hat{\mathbf{x}}(t) with respect to time using a few standard manipulations, we obtain

\dot{\mathbf{S}}(t)=-\mathbf{A}^{\mathrm{T}}(t)\,\mathbf{S}(t)-\mathbf{S}(t)\,\mathbf{A}(t)+\mathbf{C}^{\mathrm{T}}(t)\,\mathbf{C}(t)  (36)

and

\dot{\hat{\mathbf{x}}}(t)=\left(\mathbf{A}(t)-\mathbf{S}^{-1}(t)\,\mathbf{C}^{\mathrm{T}}(t)\,\mathbf{C}(t)\right)\hat{\mathbf{x}}(t)+\mathbf{S}^{-1}(t)\,\mathbf{C}^{\mathrm{T}}(t)\,y(t)  (37)

which bear a strong resemblance to the information form of the continuous-time Kalman filter [19, 15] for system (7)-(8) with additive noise v(t)\in\mathbb{R} of identity covariance, \mathbf{R}=\mathbf{I}, on the measurement equation (8). This in turn shows that, thanks to a simple modification of Theorem 3 for expanding horizons, links with optimal estimation could be obtained, even though the derivations and motivations for the algebraic method are clearly different (see in particular [12]).
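
To illustrate, the sketch below integrates (36) together with the vector b(t)=\int_{0}^{t}\mathbf{\Phi}^{\mathrm{T}}(\tau,t)\,\mathbf{C}^{\mathrm{T}}\,y(\tau)\,d\tau, which satisfies \dot{b}=-\mathbf{A}^{\mathrm{T}}b+\mathbf{C}^{\mathrm{T}}y by the same kind of manipulation, and then forms \hat{\mathbf{x}}(t)=\mathbf{S}^{-1}(t)\,b(t) as in (11); forward Euler and the step size are illustrative choices:

    import numpy as np

    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # system (13)
    C = np.array([[1.0, 0.0]])
    y = lambda t: 1.0 + 2.0 * t              # signal (4) with a_0 = 1, a_1 = 2

    dt, n = 1e-4, 10000                      # expanding horizon [0, 1]
    S = np.zeros((2, 2)); b = np.zeros((2, 1))
    for k in range(n):
        t = k * dt
        S += dt * (-A.T @ S - S @ A + C.T @ C)   # eq. (36)
        b += dt * (-A.T @ b + C.T * y(t))        # companion integral of (11)
    print(np.linalg.solve(S, b).ravel())         # approx [y(1), y'(1)] = [3.0, 2.0]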

As another example, one could consider the state estimation problem for a MIMO time-invariant system with inputs:

\dot{\mathbf{x}}(t)=\mathbf{A}\,\mathbf{x}(t)+\mathbf{B}\,\mathbf{u}(t)  (38)
\mathbf{y}(t)=\mathbf{C}\,\mathbf{x}(t)  (39)

For such systems, Byrski et al. derived a so-called moving window observer [2], [3], [4]. In order to briefly sketch the result, let

\mathbf{\Psi}(t)=e^{\mathbf{\Omega}t}  (40)

with

\mathbf{\Omega}=\begin{pmatrix}\mathbf{A}&\mathbf{B}\mathbf{B}^{\mathrm{T}}\\\mathbf{C}^{\mathrm{T}}\mathbf{C}&-\mathbf{A}^{\mathrm{T}}\end{pmatrix}\,.  (41)

Assume that the output and the input of system (38)-(39) may be used for the reconstruction of the state; consequently, we may allow for input- and output-sided deterministic disturbances, bounded in an \mathcal{L}_{2}-norm sense. Moreover, assume that the pair (\mathbf{C},\mathbf{A}) is observable. Then the moving window observer that minimizes the estimation error \hat{\mathbf{x}}-\mathbf{x} on the moving fixed time horizon [t-T,t] is given by

\hat{\mathbf{x}}(t)=\int_{0}^{T}\mathbf{G}_{1}(T,T-\tau)\,\mathbf{y}(t-\tau)\,d\tau+\int_{0}^{T}\mathbf{G}_{2}(T,T-\tau)\,\mathbf{u}(t-\tau)\,d\tau  (42)

where

\mathbf{G}_{1}(T,t)=e^{\mathbf{A}T}\left(\int_{0}^{T}\mathbf{\Psi}_{11}^{\mathrm{T}}(\tau)\,\mathbf{C}^{\mathrm{T}}\mathbf{C}\,e^{\mathbf{A}\tau}\,d\tau\right)^{-1}\mathbf{\Psi}_{11}^{\mathrm{T}}(t)\,\mathbf{C}^{\mathrm{T}}  (43)
\mathbf{G}_{2}(T,t)=e^{\mathbf{A}T}\left(\int_{0}^{T}\mathbf{\Psi}_{11}^{\mathrm{T}}(\tau)\,\mathbf{C}^{\mathrm{T}}\mathbf{C}\,e^{\mathbf{A}\tau}\,d\tau\right)^{-1}\mathbf{\Psi}_{21}^{\mathrm{T}}(t)\,\mathbf{B}  (44)

In the case of an input-free SISO system of the particular form (18) and (19), we have that \mathbf{\Psi}_{11}(t)=e^{\mathbf{A}t} and the matrix \mathbf{G}_{2} vanishes. As a consequence, algebraic derivative estimation may be seen as a very special particularization of equation (42).
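
This reduction is easy to check numerically; the snippet below (an illustrative check for N = 2, not from the original text) builds \mathbf{\Omega} from (41) with \mathbf{B}=\mathbf{0} and confirms that the upper-left block of \mathbf{\Psi}(t) equals e^{\mathbf{A}t}:

    import numpy as np
    from scipy.linalg import expm

    N = 2
    A = np.diag(np.ones(N), 1)                  # phase-variable form (18)
    C = np.zeros((1, N + 1)); C[0, 0] = 1.0     # row vector (19)
    B = np.zeros((N + 1, 1))                    # input-free case

    Omega = np.block([[A, B @ B.T], [C.T @ C, -A.T]])   # eq. (41)
    t = 0.7
    Psi11 = expm(Omega * t)[:N + 1, :N + 1]
    print(np.allclose(Psi11, expm(A * t)))      # True: Psi_11(t) = e^{At}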

In another extension, presented in [22], the authors propose to further reduce the impact of measurement noise on the estimates by using additional integrations. This is also possible with the Gramian point of view, as both sides of (10) can easily be time-integrated several additional times with respect to t_{0}, as opposed to only once to obtain \mathbf{x}(t_{1}); in fact, even filter operations with respect to the variable t_{0} can be applied to both sides of (10), so as to generate a variety of further estimators. Once again, an equivalence between this result of the algebraic approach and a particularization of a reconstructibility perspective can be obtained. More generally, we can for example insert in (10) another kernel \lambda(\tau,t_{0}) as follows

\hat{\mathbf{x}}(t_{1}):=\mathbf{W}_{\lambda}^{-1}(t_{0},t_{1})\int_{t_{0}}^{t_{1}}\lambda(\tau,t_{0})\,\mathbf{\Phi}^{\mathrm{T}}(\tau,t_{1})\,\mathbf{C}^{\mathrm{T}}(\tau)\,y(\tau)\,d\tau  (45)

where

\mathbf{W}_{\lambda}(t_{0},t_{1})=\int_{t_{0}}^{t_{1}}\lambda(\tau,t_{0})\,\mathbf{\Phi}^{\mathrm{T}}(\tau,t_{1})\,\mathbf{C}^{\mathrm{T}}(\tau)\,\mathbf{C}(\tau)\,\mathbf{\Phi}(\tau,t_{1})\,d\tau,  (46)

so as to shape the desired response with respect to measurement noise.

Finally, and although it is clearly beyond the scope of the present paper, note that because of the convolution form of algebraic estimation (2), the latter can also be connected with Finite Impulse Response (FIR) differentiators, on which numerous studies and results have been published (see [17], [31] and references therein). The minor difference is that these differentiators are usually described in a discrete-time framework, although it is clear that a comparison similar to the present paper could also be carried out in discrete time.

In particular, it might be of interest to compare the latest extension of the algebraic estimation approach, where time-delays are considered to improve the results, with FIR differentiator designs addressing the same issue that have been proposed over the past few years (see for example [32] and [27]).

Acknowledgments

The authors would like to express their gratitude to Håkan Hjalmarsson whose comments and suggestions were very helpful in improving the present paper. Johann Reger thanks Peter Caines and Jessy Grizzle for valuable discussions. The work of Johann Reger was partially supported by fellowships of the German Academic Exchange Service (DAAD), grant D/07/40582, and by the Max Planck Institute for Dynamics of Complex Technical Systems in Magdeburg, Germany.

Appendix

IV-A Proof of Theorem 1

The following proof resorts to standard techniques from operational calculus. To this end, we rephrase eq. (1) in the Laplace domain as

Y(s)=\sum_{i=0}^{N}\frac{y^{(i)}(0)}{s^{i+1}}\,,  (47)

where the coefficients a_{i} are identified with y^{(i)}(0). In order to single out a particular term, y^{(j)}(0), first multiply (47) by s^{N+1},

s^{N+1}\,Y(s)=\sum_{i=0}^{N}y^{(i)}(0)\,s^{N-i}\,,  (48)

which results in a polynomial form in s on the right side of (48). To eliminate the terms y^{(j+1)}(0),\ldots,y^{(N)}(0), differentiate (48) N-j times with respect to s (see [10] for a first presentation of the idea). This yields

\frac{d^{N-j}}{ds^{N-j}}\left(s^{N+1}Y(s)\right)=\sum_{i=0}^{j}y^{(i)}(0)\,\frac{(N-i)!}{(j-i)!}\,s^{j-i}\,.  (49)

In the next step, we proceed to a similar treatment to eliminate the remaining constant terms y^{(0)}(0), y^{(1)}(0), \ldots, y^{(j-1)}(0). But before doing so, premultiply (49) by 1/s, i.e.

\frac{1}{s}\frac{d^{N-j}}{ds^{N-j}}\left(s^{N+1}Y(s)\right)=\frac{(N-j)!}{s}\,y^{(j)}(0)+\sum_{i=0}^{j-1}y^{(i)}(0)\,\frac{(N-i)!}{(j-i)!}\,s^{j-i-1}  (50)

which is done to prevent y^{(j)}(0) from cancellation due to the subsequent j-fold differentiation with respect to s. Indeed, the latter operation finally gives

\frac{d^{j}}{ds^{j}}\left(\frac{1}{s}\frac{d^{N-j}}{ds^{N-j}}\left(s^{N+1}Y(s)\right)\right)=\frac{(-1)^{j}\,j!\,(N-j)!}{s^{j+1}}\,y^{(j)}(0).  (51)

This equation could readily be transformed back into the time domain. However, the left side of (51) contains the monomial s^{N}, i.e. an N-fold differentiation with respect to time in the time domain, meaning that any high-frequency noise corrupting y(t) would be amplified as a result. Note that a similar idea can also be found in [30, p. 17-18]. In order to avoid the explicit use of these time derivatives, premultiply (51) by 1/s^{N+1}, thus implying that y(t) will be integrated at least once. We therefore obtain

\frac{1}{s^{N+1}}\frac{d^{j}}{ds^{j}}\left(\frac{1}{s}\frac{d^{N-j}}{ds^{N-j}}\left(s^{N+1}Y(s)\right)\right)=\frac{(-1)^{j}\,j!\,(N-j)!}{s^{N+j+2}}\,y^{(j)}(0)  (52)

where it can be seen that the term y^{(j)}(0) depends only on a finite number of operations on the signal Y(s), as shown in [21, 35].
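
The annihilation steps above can be verified symbolically for small N; the sympy script below (an illustrative check for N = 2, not part of the original derivation) confirms identity (51) for every j:

    import sympy as sp

    s = sp.symbols('s')
    N = 2
    a = sp.symbols('a0:3')                             # a_i identified with y^(i)(0)
    Y = sum(a[i] / s**(i + 1) for i in range(N + 1))   # eq. (47)

    for j in range(N + 1):
        lhs = sp.diff(sp.diff(s**(N + 1) * Y, s, N - j) / s, s, j)
        rhs = (-1)**j * sp.factorial(j) * sp.factorial(N - j) / s**(j + 1) * a[j]
        assert sp.simplify(lhs - rhs) == 0
    print("eq. (51) holds for N = 2")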

Before performing the backward transform into the time domain, rearrange the left-hand side of (52) by applying Leibniz' formula for the differentiation of products twice. This results in

\frac{1}{s^{N+1}}\frac{d^{j}}{ds^{j}}\left(\frac{1}{s}\frac{d^{N-j}}{ds^{N-j}}\left(s^{N+1}Y(s)\right)\right)=\sum_{\kappa_{1}=0}^{N-j}\sum_{\kappa_{2}=0}^{j}\binom{N-j}{\kappa_{1}}\binom{j}{\kappa_{2}}\frac{(N+1)!}{(N-\kappa_{1}-\kappa_{2})!\,(N-\kappa_{1}+1)}\frac{1}{s^{\kappa_{1}+\kappa_{2}+1}}\frac{d^{N-\kappa_{1}-\kappa_{2}}}{ds^{N-\kappa_{1}-\kappa_{2}}}\,Y(s)  (53)

which, in view of the right hand side of (52), implies in turn

\frac{1}{s^{N+j+2}}\,y^{(j)}(0)=\frac{(-1)^{j}}{j!\,(N-j)!}\sum_{\kappa_{1}=0}^{N-j}\sum_{\kappa_{2}=0}^{j}\binom{N-j}{\kappa_{1}}\binom{j}{\kappa_{2}}\frac{(N+1)!}{(N-\kappa_{1}-\kappa_{2})!\,(N-\kappa_{1}+1)}\frac{1}{s^{\kappa_{1}+\kappa_{2}+1}}\frac{d^{N-\kappa_{1}-\kappa_{2}}}{ds^{N-\kappa_{1}-\kappa_{2}}}\,Y(s)\,.  (54)

Eq. (54) is now transformed back into the time domain, using the inverse Laplace transform formula

\mathcal{L}^{-1}\left[\frac{1}{s^{i+1}}\frac{d^{j}}{ds^{j}}Y(s)\right]=\int_{0}^{t}\frac{(t-\tau)^{i}\,(-\tau)^{j}}{i!}\,y(\tau)\,d\tau  (55)

we obtain

\hat{y}^{(j)}(0)=\int_{0}^{t}H_{j}(t,\tau)\,y(\tau)\,d\tau\,,\quad j=0,1,\ldots,N  (56)

with

H_{j}(t,\tau)=\frac{(N+j+1)!\,(N+1)!\,(-1)^{j}}{t^{N+j+1}}\sum_{\kappa_{1}=0}^{N-j}\sum_{\kappa_{2}=0}^{j}\frac{(t-\tau)^{\kappa_{1}+\kappa_{2}}\,(-\tau)^{N-\kappa_{1}-\kappa_{2}}}{\kappa_{1}!\,\kappa_{2}!\,(N-j-\kappa_{1})!\,(j-\kappa_{2})!\,(N-\kappa_{1}-\kappa_{2})!\,(\kappa_{1}+\kappa_{2})!\,(N-\kappa_{1}+1)}  (57)

The results obtained above thus give an estimate \hat{y}^{(j)}(0) at time t=0 from the polynomial signal y, see (1), taken on the interval [0,t]. In order to get a moving-horizon and causal version of these results, first replace t with -T, where T is a positive constant [7, 6], and simplify using the fact that

(-1)\,H_{j}(-T,-\tau)=(-1)^{j}\,H_{j}(T,\tau)\,.  (58)

Finally, by shifting the y-values by t, Theorem 1 is immediate. ∎

IV-B Lemma for the Proof of Theorem 2

Lemma 1 (Inverse of \mathbf{W}_{\mathrm{r}}(t_{0},t_{1}))

Let the entries of the matrix \mathbf{W}_{\mathrm{r}}(t_{0},t_{1}) be given as in (24). The entries of its inverse are

\left[W_{\mathrm{r}}^{-1}\right]_{ij}(t_{0},t_{1})=\frac{(i-1)!\,(j-1)!\,(i+j-1)}{(t_{1}-t_{0})^{i+j-1}}\binom{N+i}{N+1-j}\binom{N+j}{N+1-i}\binom{i+j-2}{i-1}^{2}.  (59)

\square


Proof:

In light of equation (24), first left- and right-multiply \mathbf{W}_{\mathrm{r}}(t_{0},t_{1}) by a diagonal matrix \mathbf{M} whose entries are

M_{ij}=\frac{(i-1)!}{(t_{0}-t_{1})^{i}}\,\delta_{ij}  (60)

where \delta_{ij} is the Kronecker delta. Then, proceed with computing the following matrix product in component form as

[(t_{1}-t_{0})\,\mathbf{M}\,\mathbf{W}_{\mathrm{r}}(t_{0},t_{1})\,\mathbf{M}]_{ij}=(t_{1}-t_{0})\sum_{k,l=1}^{N+1}M_{ik}\left[W_{\mathrm{r}}\right]_{kl}(t_{0},t_{1})\,M_{lj}=(t_{1}-t_{0})\sum_{k,l=1}^{N+1}\frac{(i-1)!}{(t_{0}-t_{1})^{i}}\,\delta_{ik}\,\frac{-(t_{0}-t_{1})^{k+l-1}}{(k-1)!\,(l-1)!\,(k+l-1)}\,\frac{(l-1)!}{(t_{0}-t_{1})^{l}}\,\delta_{lj}=\frac{1}{i+j-1}  (61)

whose result can be recognized as the entries of an (N+1)\times(N+1) Hilbert matrix, hereafter denoted \mathbf{H}. The entries of the inverse of \mathbf{H} are known to be [29]

\left[H^{-1}\right]_{ij}=(-1)^{i+j}\,(i+j-1)\binom{N+i}{N+1-j}\binom{N+j}{N+1-i}\binom{i+j-2}{i-1}^{2}  (62)

and by computing

\mathbf{W}_{\mathrm{r}}^{-1}(t_{0},t_{1})=(t_{1}-t_{0})\,\mathbf{M}\,\mathbf{H}^{-1}\,\mathbf{M}  (63)

we obtain (25), which completes the proof of the Lemma. ∎
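
The closed-form inverse (62) is readily double-checked numerically; the snippet below (an illustrative check for N = 4, not part of the original proof) builds the Hilbert matrix and its claimed inverse entrywise and verifies the product:

    import numpy as np
    from math import comb

    n = 5                                        # (N+1) x (N+1) with N = 4
    Hm = np.array([[1.0 / (i + j - 1) for j in range(1, n + 1)]
                   for i in range(1, n + 1)])    # entries (61)
    Hinv = np.array([[(-1)**(i + j) * (i + j - 1)
                      * comb(n - 1 + i, n - j) * comb(n - 1 + j, n - i)
                      * comb(i + j - 2, i - 1)**2
                      for j in range(1, n + 1)]
                     for i in range(1, n + 1)])  # entries (62)
    print(np.allclose(Hm @ Hinv, np.eye(n)))     # True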

References

  • [1] R. W. Brockett, Finite Dimensional Linear Systems. Wiley, 1970.
  • [2] W. Byrski and S. Fuksa, “Optimal finite parameter observer. An Application to synthesis of stabilizing feedback for a linear system,” Control and Cybernetics, vol. 13, no. 1–2, pp. 72–83, 1984.
  • [3] S. Fuksa and W. Byrski, “General approach to linear optimal estimator of finite number of parameters,” IEEE Transactions on Automatic Control, vol. 29, no. 5, pp. 470–472, 1984.
  • [4] W. Byrski, “Integral description of the optimal state observers,” Proc. of 2nd European Control Conference (ECC), Groningen, 1993.
  • [5] C.-T. Chen, Linear system theory and design. Oxford Univ. Press, 1999.
  • [6] M. Fliess, C. Join, M. Mboup, and A. Sedoglavic, “Estimation des dérivées d’un signal multidimensionnel avec applications aux images et aux vidéos,” Actes 20e Coll. GRETSI, Louvain-la-Neuve, 2005.
  • [7] M. Fliess, C. Join, M. Mboup, and H. Sira-Ramírez, “Analyse et représentation de signaux transitoires: application à la compression, au débruitage et à la détection de ruptures,” Actes 20e Coll. GRETSI, Louvain-la-Neuve, 2005.
  • [8] M. Fliess, C. Join, and H. Sira-Ramírez, “Robust residual generation for linear fault diagnosis: an algebraic setting with examples,” Int. Journal of Control, vol. 77, pp. 1223–1242, 2004.
  • [9] M. Fliess, C. Join, and H. Sira-Ramírez, “Non-linear estimation is easy,”  Int. Journal of Modelling, Identification and Control, vol. 3, 2008.
  • [10] M. Fliess, M. Mboup, H. Mounier, and H. Sira-Ramírez, “Questioning some paradigms of signal processing via concrete examples,” in Algebraic Methods in Flatness, Signal Processing and State Estimation, H. Sira-Ramírez, G. Silva-Navarro (Eds.), Innovación Editorial Lagares, Mexico, pp. 1–21, 2003.
  • [11] M. Fliess and H. J. Sira-Ramírez, “An algebraic framework for linear identification,” ESAIM Control Optim. Calc. Variat., vol. 9, 2003.
  • [12] M. Fliess and H. J. Sira-Ramírez, “State reconstructors: a possible alternative to asymptotic observers and Kalman filters,” Proceedings of CESA, 2003.
  • [13] M. Fliess and H. Sira Ramírez, “Control via state estimations of some nonlinear systems,” IFAC Symposium on Nonlinear Control Systems (NOLCOS 2004), Stuttgart, 2004.
  • [14] R. E. Kalman, P. L. Falb, and M. A. Arbib, Topics in Mathematical System Theory. McGraw-Hill, 1969.
  • [15] P. G. Kaminski, A. E. Bryson and S. F. Schmidt, “Discrete square root filtering: a survey of current techniques,” IEEE Transactions on Automatic Control, vol. 16, no. 6, pp. 727-736, 1971.
  • [16] H. Khalil, Nonlinear systems (2nd ed.). Prentice-Hall, New York, 1996.
  • [17] I. R. Khan and R. Ohba, “New design of full band differentiators based on Taylor series,” IEE Proceedings Vision, Image & Signal Processing, vol. 146, pp. 185–189, 1999.
  • [18] M. Krstić, I. Kanellakopoulos and P. Kokotović, Nonlinear and adaptive control design. Wiley Interscience, New York, 1995.
  • [19] W. H. Kwon, P. S. Kim and P. G. Park, “A receding horizon Kalman FIR filter for linear continuous-time systems,” IEEE Transactions on Automatic Control, vol. 44, no. 11, pp. 2115–2120, 1999.
  • [20] M. Mboup, “Parameter estimation via differential algebra and operational calculus,” in preparation, 2007.
  • [21] M. Mboup, C. Join, and M. Fliess, “A revised look at numerical differentiation with an application to nonlinear feedback control,” 15th Mediterranean Conference on Control and Automation, Athens, 2007.
  • [22] A. Neves, M. Mboup, and M. Fliess, “An algebraic identification method for the demodulation of QPSK signal through a convolutive channel,” European Signal Proc. Conf. (EUSIPCO), Vienna, 2004.
  • [23] J. O’Reilly, Observers for Linear Systems, Academic Press, 1983.
  • [24] J. Reger and J. Jouffroy, “Algebraische Ableitungsschätzung im Kontext der Rekonstruierbarkeit,” [in German], at-Automatisierungstechnik, vol. 56, no. 6, pp. 324–331, 2008.
  • [25] J. Reger and J. Jouffroy, “Algebraic Time-Derivative Estimation and Deadbeat State Reconstruction,” Technical Report CGR-07-09, University of Michigan, USA, 2007, http://arxiv.org/abs/0710.0010
  • [26] F. Riesz and B. Sz.-Nagy, Functional Analysis. Fredrick Ungar, New York, 1955.
  • [27] S. Samadi and A. Nishihara, “Explicit formula for predictive FIR filters and differentiators using Hahn polynomials,” IEICE Transactions, vol. E90-A, no. 8, pp. 1511–1518, 2007.
  • [28] S. Sastry, Nonlinear systems. Springer, 1999.
  • [29] L. R. Savage and E. Lukas, “Tables of inverses of finite segments of the Hilbert matrix” in Contributions to the Solutions of Systems of Linear Equations and the Determination of Eigenvalues, O. Taussky (Editor), National Bureau of Standards Applied Mathematics Series, vol. 39, pp. 105–108, 1954.
  • [30] E. D. Sontag, Mathematical Control Theory (2nd ed.). Springer, 1998.
  • [31] C.-C. Tseng, “Digital differentiator design using fractional delay filter and limit computation,” IEEE Transactions on Circuits and Systems–I:Regular Papers, vol. 52, no. 10, pp. 2248–2259, 2002.
  • [32] S. Valiviita and O. Vainio, “Delayless differentiation algorithm and its efficient implementation for motion control applications,” IEEE Transactions on Instrumentation and Measurement, vol. 48, no. 5, pp. 967–971, 1999.
  • [33] H. Wilf and D. Zeilberger, “An algebraic proof theory for hypergeometric (ordinary and “q”) multisum/integral identities,” Inventiones Mathematicae, vol. 108, pp. 575–633, 1992.
  • [34] J. C. Willems and S. K. Mitter, “Controllability, observability, pole allocation, and state reconstruction,” IEEE Transactions on Automatic Control, vol. 16, no. 6, pp. 582–595, 1971.
  • [35] J. Zehetner, J. Reger, and M. Horn, “A Derivative Estimation Toolbox based on Algebraic Methods — Theory and Practice,” IEEE Int. Conf. on Control Applications, Singapore, 2007.