
Exponential quadrature rules without order reduction

B. Cano
IMUVA, Departamento de Matemática Aplicada,
Facultad de Ciencias, Universidad de Valladolid,
Paseo de Belén 7, 47011 Valladolid,
Spain
and
M. J. Moreta

IMUVA, Departamento de Fundamentos del Análisis Económico I,
Facultad de Ciencias Económicas y Empresariales, Universidad Complutense de Madrid,
Campus de Somosaguas, Pozuelo de Alarcón, 28223 Madrid,
Spain.
Corresponding author. Emails: bego@mac.uva.es, mjesusmoreta@yahoo.es
Abstract

In this paper a technique is suggested to integrate linear initial boundary value problems with exponential quadrature rules in such a way that the order in time is as high as possible. A thorough error analysis is given both for the classical approach of integrating the problem firstly in space and then in time and for doing it in the reverse order in a suitable manner. Time-dependent boundary conditions are considered with both approaches, and full discretization formulas are given to implement the methods once the quadrature nodes have been chosen for the time integration and a particular (although very general) scheme is selected for the space discretization. Numerical experiments are shown which corroborate that, for example, with the suggested technique, order $2s$ is obtained when choosing the $s$ nodes of the Gaussian quadrature rule.

1 Introduction

Due to the recent development and improvement of Krylov methods [7, 11], exponential quadrature rules have become a valuable tool to integrate linear initial value partial differential problems [9]. This is because the linear and stiff part of the problem can be 'exactly' integrated in an efficient way through exponential operators. Moreover, when the source term is nontrivial, a variation-of-constants formula and the interpolation of that source term at several nodes lead to the appearance of some $\varphi_j$-operators, to which Krylov techniques can also be applied.

However, in the literature [9], these methods have always been applied and analysed either on the initial value problem or on the initial boundary value problem with vanishing or periodic boundary conditions; more precisely, when the linear and stiff operator is the infinitesimal generator of a strongly continuous semigroup in a certain Banach space. In such a case, it has been proved [9] that the exponential quadrature rule converges with global order $s$ if $s$ is the number of nodes used for the interpolation of the source term.

There are no results specifically concerning how to deal with these methods when integrating the most common case of non-vanishing and time-dependent boundary conditions. The only related reference is that of Lawson quadrature rules [3, 4, 10], which differ from these methods in that not only the source term but the whole integrand appearing in the variation-of-constants formula is interpolated. In that way, with the latter methods, $\{\varphi_j\}$-operators (with $j\geq 1$) do not turn up; only exponential functions (those corresponding to $j=0$) are present. In [3] a thorough error analysis is given which studies the strong order reduction which turns up with Lawson methods even in the vanishing boundary conditions case unless some even more artificial additional vanishing boundary conditions are satisfied. Nevertheless, in [4], a technique is suggested to avoid that order reduction under homogeneous boundary conditions and even to tackle the time-dependent boundary case without order reduction. Moreover, the analysis there also includes the error coming from the space discretization.

The aim of this paper is to generalize that technique to the most common quadrature rules which are used in the literature and which also use $\{\varphi_j\}$-operators. Besides, we also include the error coming from the space discretization, not only when avoiding order reduction but also with the classical approach. We will see that, with the technique suggested in this paper, we manage to recover the order of the classical interpolatory quadrature rule, which implies that, by choosing the $s$ nodes carefully, we can get even order $2s$ in time.

The paper is structured as follows. Section 2 gives some preliminaries on the abstract framework in Banach spaces which is used for the time integration of the problem, on the definition of the exponential quadrature rules and on the general hypotheses which are used for the abstract space discretization. Then, Section 3 describes and makes a thorough error analysis of the classical method of lines, which integrates the problem firstly in space and then in time. Both vanishing and nonvanishing boundary conditions are considered there and several different results are obtained depending on the specific accuracy of the quadrature rule and on whether a parabolic assumption is satisfied. After that, the technique which is suggested in the paper to improve the order of accuracy in time is well described in Section 4. It consists of discretizing firstly in time with suitable boundary conditions and then in space. Therefore, the analysis is firstly performed on the local error of the time semidiscretization and then on the local and global error of the full scheme (27),(32),(33). Finally, Section 5 shows some numerical results which corroborate the theoretical results of the previous sections.

2 Preliminaries

As in [4], we consider the linear abstract initial boundary value problem in the complex Banach space $X$

\begin{array}{rcl} u'(t)&=&Au(t)+f(t),\quad 0\leq t\leq T,\\ u(0)&=&u_0,\\ \partial u(t)&=&g(t), \end{array} \qquad (4)

where $D(A)$ is a dense subspace of $X$ and the linear operators $A$ and $\partial$ satisfy the following assumptions:

  1. (A1)

    The boundary operator $\partial: D(A)\subset X\to Y$ is onto, where $Y$ is another Banach space.

  2. (A2)

    $\mathrm{Ker}(\partial)$ is dense in $X$ and $A_0: D(A_0)=\mathrm{Ker}(\partial)\subset X\to X$, the restriction of $A$ to $\mathrm{Ker}(\partial)$, is the infinitesimal generator of a $C_0$-semigroup $\{e^{tA_0}\}_{t\geq 0}$ in $X$, whose type is denoted by $\omega$ and is assumed to be negative.

  3. (A3)

    If $z\in\mathbb{C}$ satisfies $\mathrm{Re}(z)>\omega$ and $v\in Y$, then the steady state problem

    \begin{array}{rcl} Ax&=&zx,\\ \partial x&=&v, \end{array}

    possesses a unique solution denoted by $x=K(z)v$. Moreover, the linear operator $K(z):Y\to D(A)$ satisfies

    \|K(z)v\|_X\leq L\|v\|_Y,

    where the constant $L$ holds for any $z$ such that $\mathrm{Re}(z)>\omega_0>\omega$.

As precisely stated in [4, 12], with these assumptions, problem (4) is well-posed in the $BV/L^{\infty}$ sense. Moreover, as $\omega$ is negative, the semigroup $e^{tA_0}$ decays exponentially when $t\to\infty$ and the operator $A_0$ is invertible.

We also recall that, because of hypothesis (A2), $\{\varphi_j(tA_0)\}_{j=0}^{\infty}$ are bounded operators for $t>0$, where $\{\varphi_j\}$ are the standard functions which are used in exponential methods [9]:

\varphi_j(tA_0)=\frac{1}{t^j}\int_0^t e^{(t-\tau)A_0}\frac{\tau^{j-1}}{(j-1)!}\,d\tau. \qquad (5)

It is well-known that they can be calculated in a recursive way through the formula

\varphi_{k+1}(z)=\frac{\varphi_k(z)-1/k!}{z},\quad z\neq 0,\qquad \varphi_{k+1}(0)=\frac{1}{(k+1)!},\qquad \varphi_0(z)=e^z, \qquad (6)

and that these functions are bounded on the complex plane when $\mathrm{Re}(z)\leq 0$.
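
For reference, the following is a minimal sketch (ours, not from the paper) of how the recursion (6) can be evaluated in practice for a scalar or diagonalized argument. The small-argument Taylor branch is our own safeguard against the cancellation in (6); the name phi_functions and the cutoff are assumptions for illustration.

```python
import numpy as np
from math import factorial

def phi_functions(z, jmax):
    """Evaluate phi_0(z), ..., phi_jmax(z) for a scalar (possibly complex) z.
    For moderate |z| the recurrence (6) is used; for small |z| we switch to the
    Taylor series phi_j(z) = sum_m z**m/(m+j)! to avoid cancellation."""
    phis = np.empty(jmax + 1, dtype=complex)
    if abs(z) > 1e-2:
        phis[0] = np.exp(z)
        for k in range(jmax):
            phis[k + 1] = (phis[k] - 1.0 / factorial(k)) / z
    else:
        for j in range(jmax + 1):
            phis[j] = sum(z**m / factorial(m + j) for m in range(20))
    return phis
```

Applied entrywise to the eigenvalues of a symmetric discretization matrix, this also gives a simple (if not the most efficient) way of applying $\varphi_j(kA_{h,0})$ to vectors.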

We also assume that the solution of (4) satisfies that, for a natural number $p$,

A^{p+1-j}u^{(j)}\in C([0,T],X),\quad 0\leq j\leq p+1. \qquad (7)

When $A$ is a differential space operator, this assumption implies that the time derivatives of the solution are regular in space, but without imposing any restriction on the boundary. Because of Theorem 3.1 in [1], this assumption is satisfied when the data $u_0$, $f$ and $g$ are regular and satisfy certain natural compatibility conditions at the boundary. Moreover, as from (4),

u^{(j)}(t)=\sum_{l=0}^{j-1}A^lf^{(j-1-l)}(t)+A^ju(t),\quad 0\leq j\leq p+1, \qquad (8)

the following crucial boundary values

\partial A^ju(t)=g^{(j)}(t)-\sum_{l=0}^{j-1}\partial A^lf^{(j-1-l)}(t),\quad 0\leq j\leq p+1,

can be calculated from the given data in problem (4).

We will focus on exponential quadrature rules [9] to integrate (4) in time. When applied to a finite-dimensional linear problem like

U'(t)=BU(t)+F(t), \qquad (9)

where $B$ is a matrix, these rules correspond to interpolating $F$ at the $s$ nodes $\{c_i\}_{i=1}^s$ in the integral of the equality

U(t_{n+1})=e^{kB}U(t_n)+k\int_0^1 e^{k(1-\theta)B}F(t_n+\theta k)\,d\theta, \qquad (10)

which is satisfied by the solutions of (9) when $t_{n+1}=t_n+k$. This yields

U^{n+1}=e^{kB}U^n+k\sum_{i=1}^{s}b_i(kB)F(t_n+c_ik),

with weights

b_i(kB)=\int_0^1 e^{k(1-\theta)B}l_i(\theta)\,d\theta,

where $l_i$ are the Lagrange interpolation polynomials corresponding to the nodes $\{c_i\}_{i=1}^s$. We will define the values $\{a_{ij}\}_{i,j=1}^s$ in such a way that

l_i(\theta)=a_{i,1}+a_{i,2}\theta+a_{i,3}\frac{\theta^2}{2}+\dots+a_{i,s}\frac{\theta^{s-1}}{(s-1)!}. \qquad (11)

From this,

b_i(\tau B)=\int_0^1 e^{\tau(1-\theta)B}l_i(\theta)\,d\theta=\frac{1}{\tau}\int_0^\tau e^{(\tau-\sigma)B}l_i\Big(\frac{\sigma}{\tau}\Big)\,d\sigma=\sum_{j=1}^{s}a_{i,j}\varphi_j(\tau B),

for the functions $\varphi_j$ in (5), and the final formula for the integration of (9) is

U^{n+1}=e^{kB}U^n+k\sum_{i,j=1}^{s}a_{i,j}\varphi_j(kB)F(t_n+c_ik). \qquad (12)
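
As a concrete illustration (ours, not taken from the paper), the coefficients $a_{ij}$ of (11) can be computed from the Lagrange polynomials of arbitrary nodes $c_i\in[0,1]$; the function name quadrature_coefficients is an assumption for this sketch.

```python
import numpy as np
from math import factorial

def quadrature_coefficients(c):
    """Coefficients a_{ij} defined through (11): the i-th Lagrange polynomial on
    the nodes c = (c_1, ..., c_s) is l_i(theta) = sum_j a_{ij} theta^{j-1}/(j-1)!."""
    s = len(c)
    a = np.zeros((s, s))
    for i in range(s):
        others = [c[m] for m in range(s) if m != i]
        num = np.poly(others) if others else np.array([1.0])   # monic, highest power first
        den = np.prod([c[i] - cm for cm in others]) if others else 1.0
        coeffs = (num / den)[::-1]                              # lowest power first
        for j in range(len(coeffs)):
            a[i, j] = factorial(j) * coeffs[j]
    return a

# Example: the two Gaussian nodes used later for s = 2.
print(quadrature_coefficients(np.array([0.5 - np.sqrt(3)/6, 0.5 + np.sqrt(3)/6])))
```

These coefficients, together with a routine applying $\varphi_j(kB)$ to vectors, are all that is needed to implement (12).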

We will consider an abstract spatial discretization which satisfies the same hypotheses as in [4] (Section 4.1) and which includes a wide range of techniques. In such a way, for each parameter $h$ in a sequence $\{h_j\}_{j=1}^{\infty}$ such that $h_j\to 0$, $X_h\subset X$ is a finite dimensional space which approximates $X$ when $h_j\to 0$, and the elements in $D(A_0)$ are approximated in a subspace $X_{h,0}$. The norm in $X_h$ is denoted by $\|\cdot\|_h$. The operator $A$ is then approximated by $A_h$, $A_0$ by $A_{h,0}$, and the solution of the elliptic problem

Aw=F,\qquad \partial w=g,

is approximated by $R_hw+Q_hg$, where $R_hw\in X_{h,0}$ is called the elliptic projection, $Q_hg\in X_h$ discretizes the boundary values, and the following is satisfied:

A_{h,0}R_hw+A_hQ_hg=L_hF, \qquad (13)

for a projection operator $L_h:X\to X_{h,0}$. We will also use $P_h=L_h-L_hQ_h\partial$ and we recall part of hypothesis (H3) in [4], which states that, for a subspace $Z$ of $X$ with norm $\|\cdot\|_Z$, whenever $u\in Z$,

\|A_{h,0}(R_h-P_h)u\|_h\leq\varepsilon_h\|u\|_Z, \qquad (14)

for $\varepsilon_h$ decreasing with $h$ and, therefore, this gives a bound for the error in the space discretization of the operator $A$.

Moreover, we will assume that this additional hypothesis is satisfied:

  1. (HS)

    $\|A_{h,0}^{-1}A_hQ_h\|_h$ is bounded independently of $h$ for small enough $h$. Considering (13), this in fact corresponds to a discrete maximum principle, which simulates the continuous maximum principle satisfied because of (A3) when $z=0$.

3 Classical approach: Discretizing firstly in space and then in time

When considering vanishing boundary conditions in (4) (as has classically been done in the literature with exponential methods [9]), discretizing first in space and then in time leads to the following semidiscrete problem in $X_{h,0}$:

\begin{array}{rcl} U_h'(t)&=&A_{h,0}U_h(t)+L_hf(t),\\ U_h(0)&=&L_hu(0). \end{array}

When integrating this problem with an exponential quadrature rule which is based on $s$ nodes (12), the following scheme arises:

U_h^{n+1}=e^{kA_{h,0}}U_h^n+k\sum_{i,j=1}^{s}a_{i,j}\varphi_j(kA_{h,0})L_hf(t_n+c_ik). \qquad (15)
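
For the model problem of Section 5 with homogeneous Dirichlet conditions and second-order finite differences, one step of (15) can be sketched as follows, applying $\varphi_j(kA_{h,0})$ through the eigendecomposition of the symmetric matrix $A_{h,0}$ and reusing phi_functions and quadrature_coefficients from the sketches above; classical_step and f_int are our own names, not the paper's.

```python
import numpy as np

def classical_step(Uh, tn, k, lam, Q, c, a, f_int):
    """One step of (15): lam, Q is the eigendecomposition of A_{h,0} (symmetric),
    c are the quadrature nodes, a = quadrature_coefficients(c), and f_int(t)
    returns f(., t) at the interior grid points (i.e. L_h f)."""
    s = len(c)
    Phi = np.array([phi_functions(k * lj, s).real for lj in lam]).T   # (s+1, N)
    apply_phi = lambda m, v: Q @ (Phi[m] * (Q.T @ v))                 # phi_m(kA_{h,0}) v
    Unew = apply_phi(0, Uh)                                           # e^{kA_{h,0}} U_h^n
    for i in range(s):
        Fi = f_int(tn + c[i] * k)
        for j in range(1, s + 1):
            Unew += k * a[i, j - 1] * apply_phi(j, Fi)
    return Unew
```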

Denoting by $\rho_{h,n+1}$ the difference $U_h(t_{n+1})-\bar{U}_h^{n+1}$, where $\bar{U}_h^{n+1}$ is the result of applying (15) from $U_h(t_n)$ instead of from $U_h^n$, and by $e_{h,n+1}$ the difference $U_h(t_{n+1})-U_h^{n+1}$, the following result follows:

Theorem 1.

Whenever $g(t)=0$ in (4), $u\in C([0,T],Z)$ and $f\in C^s([0,T],X)$,

  1. (i)

    $\rho_{h,n}=O(k^{s+1})$,

  2. (ii)

    $e_{h,n}=O(k^s)$,

  3. (iii)

    $L_hu(t_n)-U_h^n=O(k^s+\varepsilon_h)$,

where the constants in Landau notation are independent of $k$ and $h$.

Proof.

(i) comes from the fact that the difference between $f(t_n+k\tau)$ and its interpolant $I(f(t_n+k\tau))$ at those nodes is $O(k^s)$. More explicitly, by using (10) and the definition of $\bar{U}_h^{n+1}$,

\rho_{h,n+1}=U_h(t_{n+1})-\bar{U}_h^{n+1}=k\int_0^1 e^{k(1-\theta)A_{h,0}}L_h[f(t_n+k\theta)-I(f(t_n+k\theta))]\,d\theta. \qquad (16)

Now, taking into account hypotheses (H1)-(H2) in [4], $e^{k(1-\tau)A_{h,0}}$ and $L_h$ are bounded uniformly in $h$, and the result follows.

Then, (ii) is deduced from the classical argument for the global error once the local error is bounded. Finally, (iii) comes from (ii) and the decomposition

L_hu(t_n)-U_h^n=[L_hu(t_n)-U_h(t_n)]+[U_h(t_n)-U_h^n],

by noticing that, for the first term, as $g=0$, it happens that

\begin{array}{rcl} L_h\dot{u}(t)-\dot{U}_h(t)&=&A_{h,0}(R_hu(t)-U_h(t))=A_{h,0}(L_hu(t)-U_h(t))+A_{h,0}(R_hu(t)-P_hu(t)),\\ L_hu(0)-U_h(0)&=&0. \end{array}

Then, because of (14), $L_hu(t)-U_h(t)=\int_0^t e^{(t-s)A_{h,0}}O(\varepsilon_h)\,ds=O(\varepsilon_h)$.

We also have this finer result, which implies global order $s+1$ under more restrictive hypotheses.

Theorem 2.

Let us assume that $g(t)=0$ in (4), $u$ belongs to $C([0,T],Z)$, $f$ to $C^{s+2}([0,T],X)$, the interpolatory quadrature rule which is based on $\{c_i\}_{i=1}^s$ integrates exactly polynomials of degree less than or equal to $s$, and this bound holds:

\Big\|kA_{h,0}\sum_{r=1}^{n-1}e^{rkA_{h,0}}\Big\|_h\leq C,\quad 0\leq nk\leq T. \qquad (17)

Then,

  1. (i)

    $A_{h,0}^{-1}\rho_{h,n}=O(k^{s+2})$,

  2. (ii)

    $e_{h,n}=O(k^{s+1})$,

  3. (iii)

    $L_hu(t_n)-U_h^n=O(k^{s+1}+\varepsilon_h)$,

where the constants in Landau notation are independent of $k$ and $h$.

Proof.

To prove (i), it suffices to consider the following formula for the interpolation error, which is valid when $f\in C^{s+1}$:

f(t_n+k\theta)-I(f(t_n+k\theta))=k^s\Big[f^{(s)}(t_n)\prod_{i=1}^{s}(\theta-c_i)+O(k)\Big].

Then, substituting in (16) and multiplying by $A_{h,0}^{-1}$,

\begin{array}{rcl}
A_{h,0}^{-1}\rho_{h,n+1}&=&\displaystyle k^{s+2}\int_0^1(1-\theta)\,(k(1-\theta)A_{h,0})^{-1}[e^{k(1-\theta)A_{h,0}}-I]\,L_h\Big[f^{(s)}(t_n)\prod_{i=1}^{s}(\theta-c_i)+O(k)\Big]d\theta\\
&&\displaystyle+\,k^{s+2}\int_0^1 k^{-1}A_{h,0}^{-1}L_h\Big[f^{(s)}(t_n)\prod_{i=1}^{s}(\theta-c_i)+O(k)\Big]d\theta\\
&=&\displaystyle k^{s+2}\int_0^1(1-\theta)\varphi_1(k(1-\theta)A_{h,0})L_hf^{(s)}(t_n)\prod_{i=1}^{s}(\theta-c_i)\,d\theta\\
&&\displaystyle+\,k^{s+1}\Big(\int_0^1\prod_{i=1}^{s}(\theta-c_i)\,d\theta\Big)A_{h,0}^{-1}L_hf^{(s)}(t_n)+O(k^{s+2})=O(k^{s+2}),
\end{array}

where we have used (6), (H1)-(H2) in [4] and the fact that the integral in brackets vanishes because the interpolatory quadrature rule is exact for polynomials of degree $s$.

As for (ii), a summation-by-parts argument like that given in [5] for splitting exponential methods also applies here because of hypothesis (17) and the fact that $f\in C^{s+2}$. Finally, (iii) follows in the same way as in the proof of Theorem 1.

Remark 3.

Notice that, when $A_{h,0}$ is symmetric with negative eigenvalues and $\|\cdot\|_h$ is the discrete $L^2$-norm, $\|kA_{h,0}\sum_{r=1}^{n-1}e^{rkA_{h,0}}\|_h$ coincides with its spectral radius. As, for each eigenvalue $\lambda_h$,

\Big|k\lambda_h\sum_{r=1}^{n-1}e^{rk\lambda_h}\Big|=k|\lambda_h|\frac{e^{k\lambda_h}-e^{t_n\lambda_h}}{1-e^{k\lambda_h}},

and this is bounded on the negative real axis, (17) follows. In fact, this bound has been proved in [8] for analytic semigroups, covering the case in which (4) corresponds to parabolic problems. Therefore it seems natural that it is also satisfied by a suitable space discretization.
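
A quick numerical check (ours, not part of the paper) of the boundedness of the scalar quantity above on the negative real axis:

```python
import numpy as np

# For lambda < 0, |k*lambda * sum_{r=1}^{n-1} e^{r k lambda}|
#   = k|lambda| (e^{k lambda} - e^{t_n lambda}) / (1 - e^{k lambda}).
vals = []
for klam in -np.logspace(-6, 4, 60):            # k*lambda from about -1e-6 to -1e4
    for n in [2, 10, 10**3, 10**6]:
        vals.append(abs(klam) * (np.exp(klam) - np.exp(n * klam)) / (1 - np.exp(klam)))
print(max(vals))   # stays below 1, uniformly in k, lambda and n, as (17) requires
```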

On the other hand, when $g\not\equiv 0$ in (4), the semidiscretized problem which arises is

\begin{array}{rcl} U_h'(t)&=&A_{h,0}U_h(t)+A_hQ_hg(t)+L_hQ_h(\partial f(t)-g'(t))+P_hf(t),\\ U_h(0)&=&P_hu(0). \end{array} \qquad (20)

In a similar way as before, the local error would be given by

\begin{array}{l}
\displaystyle k\int_0^1 e^{k(1-\theta)A_{h,0}}\Big[A_hQ_h[g(t_n+k\theta)-I(g(t_n+k\theta))]\\
\displaystyle\qquad+\,L_hQ_h[\partial f(t_n+k\theta)-g'(t_n+k\theta)-I(\partial f(t_n+k\theta)-g'(t_n+k\theta))]\\
\displaystyle\qquad+\,P_h[f(t_n+k\theta)-I(f(t_n+k\theta))]\Big]d\theta. \qquad (21)
\end{array}

Again, when $g\in C^{s+1}([0,T],Y)$ and $f\in C^s([0,T],X)$, the error of interpolation will be $O(k^s)$. However, although $L_hQ_h$ and $P_h$ are bounded [4], $A_hQ_h$ is no longer bounded. That is why we state the following result, which in fact bounds $A_{h,0}^{-1}\rho_{h,n}$ by using (HS) and whose proof for the global error is the same as in Theorem 2.

Theorem 4.

Let us assume that $g(t)\not\equiv 0$ in (4), $u$ belongs to $C([0,T],Z)$, $g$ to $C^{s+2}([0,T],Y)$, $f$ to $C^{s+1}([0,T],X)$, and the bound (17) holds. Then,

  1. (i)

    $A_{h,0}^{-1}\rho_{h,n}=O(k^{s+1})$,

  2. (ii)

    $e_{h,n}=O(k^s)$,

  3. (iii)

    $L_hu(t_n)-U_h^n=O(k^s+\varepsilon_h)$,

where the constants in Landau notation are independent of $k$ and $h$.

Remark 5.

As in Remark 3, if $A_{h,0}$ is a symmetric matrix with negative eigenvalues and the discrete $L^2$-norm is considered, $\|kA_{h,0}e^{k(1-\theta)A_{h,0}}\|_h$ coincides with its spectral radius. As, for each eigenvalue $\lambda_h$ of $A_{h,0}$,

\int_0^1 k|\lambda_h|e^{k(1-\theta)\lambda_h}\,d\theta=\int_0^1\frac{d}{d\theta}\big(e^{k(1-\theta)\lambda_h}\big)\,d\theta=1-e^{k\lambda_h}\leq 1,

considering this in the first part of (21) explains that the local error $\rho_{h,n}$ behaves as $O(k^s)$ under the rest of the hypotheses of Theorem 4.

In any case, we want to remark in this section that accuracy has been lost with respect to the vanishing boundary conditions case since order reduction turns up at least for the local error and, in many cases, also for the global error.

4 Suggested approach: Discretizing firstly in time and then in space

In this section, we directly tackle the nonvanishing boundary conditions case by discretizing in a suitable way firstly in time and then in space. We will see that we manage to get at least the same order as with the classical approach when vanishing boundary conditions are present, and sometimes an even much higher order.

Let us suggest how to apply the exponential quadrature rule (12) directly to (4). When $g=0$, $B$ in (12) is directly substituted by $A_0$ and there is no problem because $e^{kA_0}$ and $\varphi_j(kA_0)$ are perfectly well defined on $X$. However, doing that makes no sense when $g\neq 0$ because $A$ is no longer $A_0$. For Lawson methods, for which just exponential functions appear, instead of $e^{\tau A_0}\alpha$ it was suggested in [4] to consider $v_0(\tau)$ as the solution of

\begin{array}{rcl} v_0'(\tau)&=&Av_0(\tau),\\ v_0(0)&=&\alpha,\\ \partial v_0(\tau)&=&\sum_{l=0}^{p}\frac{\tau^l}{l!}\partial A^l\alpha, \end{array} \qquad (22)

whenever $\alpha\in D(A^p)$. In such a way, if $\alpha\in D(A^{p+1})$,

v_0(\tau)=\sum_{l=0}^{p}\frac{\tau^l}{l!}A^l\alpha+\tau^{p+1}\varphi_{p+1}(\tau A_0)A^{p+1}\alpha, \qquad (23)

which resembles the formal analytic expansion of the exponential of $\tau A$ applied to $\alpha$. In this manuscript, whenever $\alpha\in D(A^p)$, for $j=1,\dots,s$, instead of $\varphi_j(\tau A_0)\alpha$ we suggest considering the following functions:

v_j(\tau)=\sum_{l=0}^{p-1}\frac{\tau^l}{(l+j)!}A^l\alpha+\tau^p\varphi_{p+j}(\tau A_0)A^p\alpha. \qquad (24)

This resembles the formal analytic expansion of $\varphi_j$ when evaluated at $\tau A$ and applied to $\alpha$. (Notice that, for $j=0$, this would correspond to (23) with $p$ changed to $p-1$. As the functions $\varphi_j$ are multiplied by $k$ in (12), we need one term less in this expansion.)

Therefore, imitating (12), we suggest considering as continuous numerical approximation $u_{n+1}$, obtained from the previous $u_n$,

u_{n+1}=\sum_{l=0}^{p}\frac{k^l}{l!}A^lu_n+k^{p+1}\varphi_{p+1}(kA_0)A^{p+1}u_n+k\sum_{i,j=1}^{s}a_{i,j}\bigg[\sum_{l=0}^{p-1}\frac{k^l}{(l+j)!}A^lf(t_n+c_ik)+k^p\varphi_{p+j}(kA_0)A^pf(t_n+c_ik)\bigg]. \qquad (25)

It is not practical to discretize this formula directly in space since numerical differentiation is an ill-posed problem. However, if we seek a differential equation satisfied by the functions (24) and apply a numerical integrator to it, there should be no problems. For that, let us first consider the following lemma.

Lemma 6.

For $j\geq 1$,

\frac{d}{d\tau}\varphi_j(\tau A_0)=\Big(A_0-\frac{j}{\tau}I\Big)\varphi_j(\tau A_0)+\frac{1}{(j-1)!\,\tau}I,\quad \tau>0.
Proof.

It suffices to differentiate considering (5),

\begin{array}{rcl}
\displaystyle\frac{d}{d\tau}\bigg[\frac{1}{\tau^j}\int_0^\tau e^{(\tau-\theta)A_0}\frac{\theta^{j-1}}{(j-1)!}\,d\theta\bigg]
&=&\displaystyle-\frac{j}{\tau^{j+1}}\int_0^\tau e^{(\tau-\theta)A_0}\frac{\theta^{j-1}}{(j-1)!}\,d\theta+\frac{1}{\tau^j}\frac{\tau^{j-1}}{(j-1)!}I+\frac{1}{\tau^j}\int_0^\tau A_0e^{(\tau-\theta)A_0}\frac{\theta^{j-1}}{(j-1)!}\,d\theta\\
&=&\displaystyle\Big(A_0-\frac{j}{\tau}I\Big)\varphi_j(\tau A_0)+\frac{1}{(j-1)!\,\tau}I.
\end{array}
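
As an aside, the identity of Lemma 6 can be checked numerically for a small negative-definite test matrix, evaluating $\varphi_j$ through its Taylor series and comparing a centered difference quotient with the right-hand side. This check is ours and plays no role in the analysis; the names and parameters are arbitrary.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A0 = -(M @ M.T) - np.eye(5)                     # a negative-definite test matrix

def phi_j_matrix(tau, j, nterms=60):
    """phi_j(tau*A0) via the Taylor series phi_j(Z) = sum_m Z^m/(m+j)!."""
    Z, P, Zm = tau * A0, np.zeros((5, 5)), np.eye(5)
    for m in range(nterms):
        P += Zm / float(factorial(m + j))
        Zm = Zm @ Z
    return P

tau, j, d = 0.7, 2, 1e-5
lhs = (phi_j_matrix(tau + d, j) - phi_j_matrix(tau - d, j)) / (2 * d)
rhs = (A0 - (j / tau) * np.eye(5)) @ phi_j_matrix(tau, j) \
      + np.eye(5) / (factorial(j - 1) * tau)
print(np.max(np.abs(lhs - rhs)))                # small residual, of finite-difference size
```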

From here, the next result follows:

Lemma 7.

The function $v_j(\tau)$ in (24) satisfies the following initial boundary value problem:

\begin{array}{rcl} v_j'(\tau)&=&\Big(A-\frac{j}{\tau}\Big)v_j(\tau)+\frac{1}{(j-1)!\,\tau}\alpha,\quad\tau>0,\\ v_j(0)&=&\frac{1}{j!}\alpha,\\ \partial v_j(\tau)&=&\sum_{l=0}^{p-1}\frac{\tau^l}{(l+j)!}\partial A^l\alpha. \end{array} \qquad (26)
Proof.

Notice that, using Lemma 6,

\begin{array}{rcl}
v_j'(\tau)&=&\displaystyle\sum_{l=1}^{p-1}\frac{l\tau^{l-1}}{(l+j)!}A^l\alpha+p\tau^{p-1}\varphi_{p+j}(\tau A_0)A^p\alpha+\tau^p\Big[\Big(A_0-\frac{p+j}{\tau}I\Big)\varphi_{p+j}(\tau A_0)+\frac{1}{(p+j-1)!\,\tau}I\Big]A^p\alpha\\
&=&\displaystyle\sum_{l=1}^{p-1}\frac{l\tau^{l-1}}{(l+j)!}A^l\alpha+\frac{\tau^{p-1}}{(p+j-1)!}A^p\alpha-j\tau^{p-1}\varphi_{p+j}(\tau A_0)A^p\alpha+\tau^pA_0\varphi_{p+j}(\tau A_0)A^p\alpha.
\end{array}

On the other hand,

\begin{array}{l}
\displaystyle\Big(A-\frac{j}{\tau}\Big)v_j(\tau)+\frac{1}{(j-1)!\,\tau}\alpha\\
\displaystyle\quad=\sum_{l=0}^{p-1}\frac{\tau^l}{(l+j)!}A^{l+1}\alpha+\tau^pA\varphi_{p+j}(\tau A_0)A^p\alpha-j\sum_{l=1}^{p-1}\frac{\tau^{l-1}}{(l+j)!}A^l\alpha-j\tau^{p-1}\varphi_{p+j}(\tau A_0)A^p\alpha\\
\displaystyle\quad=\sum_{m=1}^{p-1}\frac{m}{(m+j)!}\tau^{m-1}A^m\alpha+\frac{\tau^{p-1}}{(p-1+j)!}A^p\alpha-j\tau^{p-1}\varphi_{p+j}(\tau A_0)A^p\alpha+\tau^pA\varphi_{p+j}(\tau A_0)A^p\alpha,
\end{array}

where the change $m=l+1$ has been used in the second line for the first sum. Therefore, the lemma is proved taking also into account that $\varphi_{p+j}(\tau A_0)A^p\alpha\in D(A_0)$.

With Lawson methods [4], starting from a previous approximation $U_{h,n}$ to $P_hu(t_n)$ and discretizing (22) in space with $\alpha=u(t_n)$ in $\partial v_0(\tau)$ led to a term like

\begin{array}{rcl} V_{h,n,0}(k)&=&\displaystyle e^{kA_{h,0}}U_{h,n}+\sum_{j=1}^{p}k^j\varphi_j(kA_{h,0})[A_hQ_h\partial A^{j-1}u(t_n)-L_hQ_h\partial A^ju(t_n)]\\ &&\displaystyle+\,k^{p+1}\varphi_{p+1}(kA_{h,0})A_hQ_h\partial A^pu(t_n), \end{array} \qquad (27)

which is the approximation corresponding to the first term in (12).

For the rest of the terms in (12), we suggest discretizing (26) in space with $\alpha=f(t_n+c_ik)$. In such a way, the following system turns up:

\begin{array}{rcl}
V_{h,j,n,i}'(\tau)+L_hQ_h\partial\hat{v}_{j,n,i}'(\tau)&=&\displaystyle A_{h,0}V_{h,j,n,i}(\tau)+A_hQ_h\partial\hat{v}_{j,n,i}(\tau)-\frac{j}{\tau}[V_{h,j,n,i}(\tau)+L_hQ_h\partial\hat{v}_{j,n,i}(\tau)]\\
&&\displaystyle+\,\frac{1}{(j-1)!\,\tau}[P_hf(t_n+c_ik)+L_hQ_h\partial f(t_n+c_ik)],\\
V_{h,j,n,i}(0)+L_hQ_h\partial\hat{v}_{j,n,i}(0)&=&\displaystyle\frac{1}{j!}L_hf(t_n+c_ik),
\end{array}

where

\hat{v}_{j,n,i}(\tau)=\sum_{l=0}^{p-1}\frac{\tau^l}{(l+j)!}A^lf(t_n+c_ik).

This can be rewritten as

\begin{array}{rcl}
V_{h,j,n,i}'(\tau)&=&\displaystyle\Big(A_{h,0}-\frac{j}{\tau}I\Big)V_{h,j,n,i}(\tau)+A_hQ_h\partial\hat{v}_{j,n,i}(\tau)+\frac{1}{(j-1)!\,\tau}P_hf(t_n+c_ik)\\
&&\displaystyle+\,L_hQ_h\partial\Big[\frac{1}{(j-1)!\,\tau}f(t_n+c_ik)-\frac{j}{\tau}\hat{v}_{j,n,i}(\tau)-\hat{v}_{j,n,i}'(\tau)\Big],\\
V_{h,j,n,i}(0)&=&\displaystyle\frac{1}{j!}P_hf(t_n+c_ik).
\end{array} \qquad (28)

With the same arguments as in Lemma 6, $\varphi_j(\tau A_{h,0})P_hf(t_n+c_ik)$ is the solution of

\begin{array}{rcl} W_{h,j,n,i}'(\tau)&=&\Big(A_{h,0}-\frac{j}{\tau}I\Big)W_{h,j,n,i}(\tau)+\frac{1}{(j-1)!\,\tau}P_hf(t_n+c_ik),\\ W_{h,j,n,i}(0)&=&\frac{1}{j!}P_hf(t_n+c_ik). \end{array}

Therefore, in order to solve (28), we are interested in finding

Z_{h,j,n,i}(\tau)=V_{h,j,n,i}(\tau)-W_{h,j,n,i}(\tau), \qquad (29)

which is the solution of

\begin{array}{rcl}
Z_{h,j,n,i}'(\tau)&=&\displaystyle\Big(A_{h,0}-\frac{j}{\tau}I\Big)Z_{h,j,n,i}(\tau)+A_hQ_h\partial\hat{v}_{j,n,i}(\tau)\\
&&\displaystyle+\,L_hQ_h\partial\Big[\frac{1}{(j-1)!\,\tau}f(t_n+c_ik)-\frac{j}{\tau}\hat{v}_{j,n,i}(\tau)-\hat{v}_{j,n,i}'(\tau)\Big],\\
Z_{h,j,n,i}(0)&=&0.
\end{array} \qquad (30)

Now, using the first line of (26) for the boundary with $\alpha=f(t_n+c_ik)$,

\begin{array}{rcl}
\displaystyle\partial\Big[\frac{1}{(j-1)!\,\tau}f(t_n+c_ik)-\frac{j}{\tau}\hat{v}_{j,n,i}(\tau)-\hat{v}_{j,n,i}'(\tau)\Big]&=&-\partial Av_{j,n,i}(\tau)\\
&=&\displaystyle-\sum_{l=0}^{p-1}\frac{\tau^l}{(l+j)!}\partial A^{l+1}f(t_n+c_ik)-\tau^p\partial A_0\varphi_{p+j}(\tau A_0)A^pf(t_n+c_ik),
\end{array}

the fact that

\begin{array}{l}
\displaystyle\tau^pA_0\frac{1}{\tau^{p+j}}\int_0^\tau e^{(\tau-\sigma)A_0}\frac{\sigma^{p+j-1}}{(p+j-1)!}A^pf(t_n+c_ik)\,d\sigma\\
\displaystyle\quad=-\frac{1}{\tau^j}e^{(\tau-\sigma)A_0}\frac{\sigma^{p+j-1}}{(p+j-1)!}\Big|_{\sigma=0}^{\sigma=\tau}A^pf(t_n+c_ik)+\frac{1}{\tau^j}\int_0^\tau e^{(\tau-\sigma)A_0}\frac{\sigma^{p+j-2}}{(p+j-2)!}A^pf(t_n+c_ik)\,d\sigma\\
\displaystyle\quad=-\frac{\tau^{p-1}}{(p+j-1)!}A^pf(t_n+c_ik)+\tau^{p-1}\varphi_{p+j-1}(\tau A_0)A^pf(t_n+c_ik),
\end{array}

and that the boundary of the second term vanishes, it follows that

\partial\Big[\frac{1}{(j-1)!\,\tau}f(t_n+c_ik)-\frac{j}{\tau}\hat{v}_{j,n,i}(\tau)-\hat{v}_{j,n,i}'(\tau)\Big]=-\sum_{l=0}^{p-2}\frac{\tau^l}{(l+j)!}\partial A^{l+1}f(t_n+c_ik).

Using this in (30),

\begin{array}{rcl}
Z_{h,j,n,i}(\tau)&=&\displaystyle\int_0^\tau e^{\int_\theta^\tau(A_{h,0}-\frac{j}{\sigma}I)\,d\sigma}\bigg[\sum_{l=0}^{p-2}\frac{\theta^l}{(l+j)!}\big[A_hQ_h\partial A^lf(t_n+c_ik)-L_hQ_h\partial A^{l+1}f(t_n+c_ik)\big]\\
&&\displaystyle\hspace{3.4cm}+\,\frac{\theta^{p-1}}{(p-1+j)!}A_hQ_h\partial A^{p-1}f(t_n+c_ik)\bigg]d\theta\\
&=&\displaystyle\sum_{l=0}^{p-2}\int_0^\tau e^{A_{h,0}(\tau-\theta)}\frac{\theta^{j+l}}{\tau^j(l+j)!}\big[A_hQ_h\partial A^lf(t_n+c_ik)-L_hQ_h\partial A^{l+1}f(t_n+c_ik)\big]\,d\theta\\
&&\displaystyle+\,\int_0^\tau e^{A_{h,0}(\tau-\theta)}\frac{\theta^{p-1+j}}{\tau^j(p-1+j)!}A_hQ_h\partial A^{p-1}f(t_n+c_ik)\,d\theta\\
&=&\displaystyle\sum_{l=0}^{p-2}\tau^{l+1}\varphi_{j+l+1}(\tau A_{h,0})\big[A_hQ_h\partial A^lf(t_n+c_ik)-L_hQ_h\partial A^{l+1}f(t_n+c_ik)\big]\\
&&\displaystyle+\,\tau^p\varphi_{p+j}(\tau A_{h,0})A_hQ_h\partial A^{p-1}f(t_n+c_ik).
\end{array} \qquad (31)

Therefore, using (29),

\begin{array}{rcl}
V_{h,j,n,i}(k)&=&\displaystyle\varphi_j(kA_{h,0})P_hf(t_n+c_ik)\\
&&\displaystyle+\,\sum_{l=0}^{p-2}k^{l+1}\varphi_{j+l+1}(kA_{h,0})\big[A_hQ_h\partial A^lf(t_n+c_ik)-L_hQ_h\partial A^{l+1}f(t_n+c_ik)\big]\\
&&\displaystyle+\,k^p\varphi_{p+j}(kA_{h,0})A_hQ_h\partial A^{p-1}f(t_n+c_ik),
\end{array} \qquad (32)

and the overall exponential quadrature rule would be given by

\begin{array}{rcl} U_{h,0}&=&P_hu_0,\\ U_{h,n+1}&=&V_{h,n,0}(k)+k\sum_{i,j=1}^{s}a_{i,j}V_{h,j,n,i}(k), \end{array} \qquad (33)

with $V_{h,n,0}(k)$ in (27) and $V_{h,j,n,i}(k)$ in (32).
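
To make the implementation concrete, the following sketch assembles (27), (32) and (33) for the model problem of Section 5 (the 1D heat equation with Dirichlet conditions and standard second-order finite differences), for which $L_hQ_h\partial\equiv 0$ and $A_hQ_h$ simply places the boundary data, divided by $h^2$, in the first and last interior entries. The operators $\varphi_m(kA_{h,0})$ are applied through the eigendecomposition of the symmetric matrix $A_{h,0}$, reusing the scalar phi_functions routine sketched in Section 2. All names (heat_dirichlet_matrices, suggested_step, f_int, bdry_Al_u, bdry_Al_f) are ours, not the paper's, and this is only a sketch under those assumptions.

```python
import numpy as np

def heat_dirichlet_matrices(N):
    """Second-order finite differences for u_xx on (0,1) with N interior nodes:
    returns h and the eigendecomposition of the symmetric matrix A_{h,0}."""
    h = 1.0 / (N + 1)
    A0 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
          + np.diag(np.ones(N - 1), -1)) / h**2
    lam, Q = np.linalg.eigh(A0)
    return h, lam, Q

def suggested_step(Uh, tn, k, h, lam, Q, c, a, p, f_int, bdry_Al_u, bdry_Al_f):
    """One step of (33), built from (27) and (32), with L_h Q_h d = 0 and P_h f
    equal to f at the interior nodes.  bdry_Al_u(l, t) and bdry_Al_f(l, t) must
    return the (left, right) boundary values of A^l u(t) and A^l f(t), which are
    available from the data as explained after (8)."""
    N, s = len(Uh), len(c)
    jmax = max(p + 1, p + s)
    Phi = np.array([phi_functions(k * lj, jmax).real for lj in lam]).T  # (jmax+1, N)
    apply_phi = lambda m, v: Q @ (Phi[m] * (Q.T @ v))                   # phi_m(kA_{h,0}) v
    AhQh = lambda g: np.concatenate(([g[0]], np.zeros(N - 2), [g[1]])) / h**2
    # V_{h,n,0}(k), formula (27)
    V0 = apply_phi(0, Uh)
    for j in range(1, p + 1):
        V0 += k**j * apply_phi(j, AhQh(bdry_Al_u(j - 1, tn)))
    V0 += k**(p + 1) * apply_phi(p + 1, AhQh(bdry_Al_u(p, tn)))
    # k * sum_{i,j} a_{ij} V_{h,j,n,i}(k), formulas (32)-(33)
    Unew = V0
    for i in range(s):
        ti = tn + c[i] * k
        for j in range(1, s + 1):
            V = apply_phi(j, f_int(ti))
            for l in range(p - 1):                       # l = 0, ..., p-2
                V += k**(l + 1) * apply_phi(j + l + 1, AhQh(bdry_Al_f(l, ti)))
            V += k**p * apply_phi(p + j, AhQh(bdry_Al_f(p - 1, ti)))
            Unew += k * a[i, j - 1] * V
    return Unew
```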

4.1 Time semidiscretization error

Let us first study just the error after time discretization. The local truncation error is well-known to be given by $\rho_n=u(t_{n+1})-\bar{u}_{n+1}$, where $\bar{u}_{n+1}$ is given by expression (25) substituting $u_n$ by $u(t_n)$.

Let us first consider the following general result, which will allow us to conclude more particular results depending on the choice of the values $\{c_i\}_{i=1}^s$.

Lemma 8.

Under the assumptions of regularity (7), the local truncation error satisfies

\rho_n=\sum_{m=1}^{p}k^m\bigg[\sum_{r=0}^{m-1}\Big(\frac{1}{m!}-\frac{1}{r!}\sum_{l=1}^{s}\frac{1}{(m-r-1+l)!}\sum_{i=1}^{s}c_i^ra_{il}\Big)A^{m-r-1}f^{(r)}(t_n)\bigg]+O(k^{p+1}).
Proof.

Notice that $\bar{u}_{n+1}$ can be written as

\begin{array}{rcl}
\bar{u}_{n+1}&=&\displaystyle u(t_n)+\sum_{j=1}^{p}k^j\bigg[\frac{1}{j!}A^ju(t_n)+\sum_{i,l=1}^{s}a_{i,l}\frac{1}{(j-1+l)!}A^{j-1}f(t_n+c_ik)\bigg]+O(k^{p+1})\\
&=&\displaystyle u(t_n)+\sum_{j=1}^{p}k^j\bigg[\frac{1}{j!}A^ju(t_n)+\sum_{i,l=1}^{s}a_{i,l}\frac{1}{(j-1+l)!}\sum_{r=0}^{p-j}\frac{c_i^rk^r}{r!}A^{j-1}f^{(r)}(t_n)\bigg]+O(k^{p+1})\\
&=&\displaystyle u(t_n)+\sum_{m=1}^{p}k^m\bigg[\frac{1}{m!}A^mu(t_n)+\sum_{i,l=1}^{s}a_{i,l}\sum_{r=0}^{m-1}\frac{c_i^r}{(m-r-1+l)!\,r!}A^{m-r-1}f^{(r)}(t_n)\bigg]+O(k^{p+1}),
\end{array}

where the Taylor expansion of $f(t_n+c_ik)$ has been used, as well as changes of subindexes.

As, according to (8),

\begin{array}{rcl}
u(t_{n+1})&=&\displaystyle u(t_n)+\sum_{m=1}^{p}\frac{k^m}{m!}u^{(m)}(t_n)+O(k^{p+1})\\
&=&\displaystyle u(t_n)+\sum_{m=1}^{p}\frac{k^m}{m!}\Big[A^mu(t_n)+\sum_{r=0}^{m-1}A^{m-r-1}f^{(r)}(t_n)\Big]+O(k^{p+1}),
\end{array}

the result follows.

Theorem 9.

If $p=s$ in (7) and (25), for any nodes $\{c_i\}_{i=1}^s$, $\rho_n=O(k^{s+1})$.

Proof.

It suffices to take into account that any polynomial of degree $\leq s-1$ coincides with its interpolant on the nodes $\{c_i\}_{i=1}^s$. Therefore, for $r\leq s-1$, $\sum_{i=1}^{s}c_i^rl_i(\theta)=\theta^r$. Using (11), this implies that

\sum_{i=1}^{s}c_i^r\frac{a_{i,l}}{(l-1)!}=\left\{\begin{array}{lcl}0&\mbox{if}&l\neq r+1,\\ 1&\mbox{if}&l=r+1,\end{array}\right.

or equivalently, for $r\leq s-1$,

\sum_{i=1}^{s}c_i^ra_{i,r+1}=r!,\qquad \sum_{i=1}^{s}c_i^ra_{i,l}=0\ \mbox{ whenever }l\neq r+1.

Substituting this in the expression for $\rho_n$ in Lemma 8 with $p=s$, all the terms in brackets vanish and the result follows.
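
The identities used in this proof are easy to verify numerically with the quadrature_coefficients sketch from Section 2, for instance for the three Simpson nodes (a check of ours, not part of the paper):

```python
import numpy as np
from math import factorial

# Check: sum_i c_i^r a_{i,r+1} = r! and sum_i c_i^r a_{i,l} = 0 for l != r+1, r <= s-1.
c = np.array([0.0, 0.5, 1.0])
a = quadrature_coefficients(c)
s = len(c)
for r in range(s):
    sums = np.array([np.sum(c**r * a[:, l]) for l in range(s)])
    expected = np.zeros(s); expected[r] = factorial(r)
    print(r, np.max(np.abs(sums - expected)))   # ~1e-16 for each r
```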

Theorem 10.

If $p=s+1$ in (7) and (25) and the nodes $\{c_i\}_{i=1}^s$ are such that the interpolatory quadrature rule which is based on them is exact for polynomials of degree $\leq s$, $\rho_n=O(k^{s+2})$.

Proof.

With the same argument as in the previous lemma, all the terms in brackets in the expression of $\rho_n$ in Lemma 8 vanish for $m\leq s$. Then, for $m=s+1$, the term in parenthesis vanishes for the same reason when $r\leq s-1$. It just suffices to see what happens when $m=s+1$ and $r=s$. But, as the quadrature rule which is based on $\{c_i\}_{i=1}^s$ is assumed to be exact for the polynomial $\theta^s$,

\frac{1}{s+1}=\int_0^1\theta^s\,d\theta=\int_0^1\sum_{i=1}^{s}c_i^sl_i(\theta)\,d\theta=\sum_{i=1}^{s}c_i^s\sum_{l=1}^{s}\frac{a_{i,l}}{l!}.

From this, the result also directly follows.

We now state the following much more general result:

Theorem 11.

Whenever the nodes $\{c_i\}_{i=1}^s$ are such that the interpolatory quadrature rule which is based on them is exact for polynomials of degree $\leq p-1$, considering that value of $p$ in (7) and (25), $\rho_n=O(k^{p+1})$.

Proof.

It suffices to notice that, for $0\leq r\leq m-1$ with $m\leq p$, due to the hypothesis,

\int_0^1\int_0^{u_1}\dots\int_0^{u_{m-r-1}}\theta^r\,d\theta\,du_{m-r-1}\dots du_1=\int_0^1\int_0^{u_1}\dots\int_0^{u_{m-r-1}}\sum_{i=1}^{s}c_i^rl_i(\theta)\,d\theta\,du_{m-r-1}\dots du_1.

Now, the left-hand side term above can inductively be proved to be

\frac{1}{r+1}\,\frac{1}{r+2}\cdots\frac{1}{m},

and the right-hand side can be written as

\begin{array}{l}
\displaystyle\int_0^1\int_0^{u_1}\dots\int_0^{u_{m-r-1}}\sum_{i=1}^{s}c_i^r\sum_{l=1}^{s}a_{i,l}\frac{\theta^{l-1}}{(l-1)!}\,d\theta\,du_{m-r-1}\dots du_1\\
\displaystyle\quad=\sum_{l=1}^{s}\Big(\sum_{i=1}^{s}c_i^ra_{i,l}\Big)\int_0^1\int_0^{u_1}\dots\int_0^{u_{m-r-1}}\frac{\theta^{l-1}}{(l-1)!}\,d\theta\,du_{m-r-1}\dots du_1\\
\displaystyle\quad=\sum_{l=1}^{s}\Big(\sum_{i=1}^{s}c_i^ra_{i,l}\Big)\frac{1}{(l-1+m-r)!}.
\end{array}

Then, using Lemma 8, the result directly follows.

From this, the following interesting results are achieved:

Corollary 12.
  • (i)

    For the $s$ nodes corresponding to a Gaussian quadrature rule, considering $p=2s$ in (7) and (25), $\rho_n=O(k^{2s+1})$.

  • (ii)

    For the $s$ nodes corresponding to a Gauss-Lobatto quadrature rule, considering $p=2s-2$ in (7) and (25), $\rho_n=O(k^{2s-1})$.

Remark 13.

Due to the fact that the last node of one step is the first node of the following one, the nodes corresponding to the Gauss-Lobatto quadrature rule have the advantage that just $s(s-1)$ (instead of $s^2$) terms of the form $V_{h,n,j,i}$ must be calculated in (33).

4.2 Full discretization error

Let us also consider the error which arises when discretizing (22) and (26) in space.

4.2.1 Local error

To define the local error after full discretization, we consider

\bar{U}_{h,n+1}=\bar{V}_{h,n,0}(k)+k\sum_{i,j=1}^{s}a_{i,j}\bar{V}_{h,n,j,i}(k),

where

  1. (i)

    $\bar{V}_{h,n,0}(\tau)$ is the solution of

    \begin{array}{rcl} \bar{V}_{h,n,0}'(\tau)+L_hQ_h\partial\hat{v}_{n,0}'(\tau)&=&A_{h,0}\bar{V}_{h,n,0}(\tau)+A_hQ_h\partial\hat{v}_{n,0}(\tau),\\ \bar{V}_{h,n,0}(0)&=&R_hu(t_n), \end{array}

    with $\hat{v}_{n,0}(\tau)=\sum_{l=0}^{p}\frac{\tau^l}{l!}A^lu(t_n)$.

  2. (ii)

    $\bar{V}_{h,n,j,i}(\tau)$ is the solution of (28) substituting $P_hf(t_n+c_ik)$ by $R_hf(t_n+c_ik)$. More precisely,

    \begin{array}{rcl}
    \bar{V}_{h,j,n,i}'(\tau)&=&\displaystyle\Big(A_{h,0}-\frac{j}{\tau}I\Big)\bar{V}_{h,j,n,i}(\tau)+A_hQ_h\partial\hat{v}_{j,n,i}(\tau)+\frac{1}{(j-1)!\,\tau}R_hf(t_n+c_ik)\\
    &&\displaystyle+\,L_hQ_h\partial\Big[\frac{1}{(j-1)!\,\tau}f(t_n+c_ik)-\frac{j}{\tau}\hat{v}_{j,n,i}(\tau)-\hat{v}_{j,n,i}'(\tau)\Big],\\
    \bar{V}_{h,j,n,i}(0)&=&\displaystyle\frac{1}{j!}R_hf(t_n+c_ik).
    \end{array} \qquad (34)

Then, we define $\rho_{h,n}=R_hu(t_{n+1})-\bar{U}_{h,n+1}$ and the following is satisfied.

Theorem 14.

Let us assume that, apart from the hypotheses of Section 2, $u$ and $f$ in (4) satisfy

A^ju\in C([0,T],Z),\quad j=0,\dots,p+1,\qquad A^jf\in C([0,T],Z),\quad j=0,\dots,p. \qquad (35)

Then, $\rho_{h,n}=O(k\varepsilon_h+\|\rho_n\|)$, where the constant in Landau notation is independent of $k$ and $h$ and the bounds in Section 4.1 hold for $\rho_n$.

Proof.

By definition,

\begin{array}{rcl} \rho_{h,n}&=&(R_hu(t_{n+1})-R_h\bar{u}_{n+1})+(R_h\bar{u}_{n+1}-\bar{U}_{h,n+1})\\ &=&R_h\rho_n+(R_h\bar{u}_{n+1}-\bar{U}_{h,n+1}), \end{array} \qquad (36)

where $\bar{u}_n$ and $\rho_n$ are those defined in Section 4.1. The fact that (35) is satisfied implies that $\bar{u}_{n+1}$ belongs to $Z$ and therefore $\rho_n\in Z$. Moreover, $\|\rho_n\|_Z=O(\|\rho_n\|)$ and, using the same proof as that of Theorem 11 in [4],

R_h\rho_n=O(\|\rho_n\|). \qquad (37)

On the other hand,

R_h\bar{u}_{n+1}-\bar{U}_{h,n+1}=R_h\bar{v}_{0,n}-\bar{V}_{h,n,0}(k)+k\sum_{i,j=1}^{s}a_{i,j}[R_hv_{j,n,i}(k)-\bar{V}_{h,j,n,i}(k)], \qquad (38)

where $\bar{v}_{0,n}$ corresponds to (23) with $\alpha=u(t_n)$ and $v_{j,n,i}(\tau)$ corresponds to (24) with $\alpha=f(t_n+c_ik)$. In the same way as in the proof of Theorem 4.4 in [4],

R_h\bar{v}_{0,n}-\bar{V}_{h,n,0}(k)=O(k\varepsilon_h). \qquad (39)

Moreover, using Lemma 7,

\begin{array}{rcl}
R_hv_{j,n,i}'(\tau)&=&\displaystyle R_hAv_{j,n,i}(\tau)-\frac{j}{\tau}R_hv_{j,n,i}(\tau)+\frac{1}{(j-1)!\,\tau}R_hf(t_n+c_ik)\\
&=&\displaystyle P_hAv_{j,n,i}(\tau)+(R_h-P_h)Av_{j,n,i}(\tau)-\frac{j}{\tau}R_hv_{j,n,i}(\tau)+\frac{1}{(j-1)!\,\tau}R_hf(t_n+c_ik)\\
&=&\displaystyle A_{h,0}R_hv_{j,n,i}(\tau)+A_hQ_h\partial\hat{v}_{j,n,i}(\tau)-L_hQ_h\partial Av_{j,n,i}(\tau)\\
&&\displaystyle+\,(R_h-P_h)Av_{j,n,i}(\tau)-\frac{j}{\tau}R_hv_{j,n,i}(\tau)+\frac{1}{(j-1)!\,\tau}R_hf(t_n+c_ik),
\end{array}

and taking the difference with (34), it follows that

R_hv_{j,n,i}'(\tau)-\bar{V}_{h,n,j,i}'(\tau)=\Big(A_{h,0}-\frac{j}{\tau}I\Big)(R_hv_{j,n,i}(\tau)-\bar{V}_{h,n,j,i}(\tau))+(R_h-P_h)Av_{j,n,i}(\tau),

where we have used that

\partial\Big[Av_{j,n,i}(\tau)+\frac{1}{(j-1)!\,\tau}f(t_n+c_ik)-\frac{j}{\tau}\hat{v}_{j,n,i}(\tau)-\hat{v}_{j,n,i}'(\tau)\Big]=0

because of Lemma 7. Now, due to the same lemma and (34), $R_hv_{j,n,i}(0)-\bar{V}_{h,n,j,i}(0)=0$, and therefore

\begin{array}{rcl} R_hv_{j,n,i}(k)-\bar{V}_{h,n,j,i}(k)&=&\displaystyle\int_0^k e^{(k-\tau)A_{h,0}}\frac{\tau^j}{k^j}(R_h-P_h)Av_{j,n,i}(\tau)\,d\tau\\ &=&k\varphi_{j+1}(kA_{h,0})O(\varepsilon_h)=O(k\varepsilon_h). \end{array} \qquad (40)

Here we have used that $Av_{j,n,i}\in Z$ because of (35) and Lemma 3.3 in [4]. Finally, gathering (36)-(40), the result follows.

4.2.2 Global error

We now study the global error, which we define as $e_{h,n}=P_hu(t_n)-U_{h,n}$.

Theorem 15.

Under the same assumptions of Theorem 14,

e_{h,n}=O\Big(\frac{1}{k}\max_{0\leq l\leq n-1}\|\rho_l\|+\varepsilon_h\Big),

where the constant in Landau notation is independent of $k$ and $h$ and the bounds in Section 4.1 hold for $\rho_l$.

Proof.

As in the proof of Theorem 4.5 in [4],

\begin{array}{rcl} e_{h,n+1}&=&(P_hu(t_{n+1})-R_hu(t_{n+1}))+R_hu(t_{n+1})-U_{h,n+1}\\ &=&O(\varepsilon_h)+R_hu(t_{n+1})-U_{h,n+1}. \end{array} \qquad (41)

The difference is that now, using (33),

\begin{array}{rcl} R_hu(t_{n+1})-U_{h,n+1}&=&\rho_{h,n}+\bar{U}_{h,n+1}-U_{h,n+1}\\ &=&\rho_{h,n}+\bar{V}_{h,n,0}(k)-V_{h,n,0}(k)+k\sum_{i,j=1}^{s}a_{ij}(\bar{V}_{h,j,n,i}(k)-V_{h,j,n,i}(k)). \end{array}

As in [4],

\bar{V}_{h,n,0}-V_{h,n,0}=e^{kA_{h,0}}(R_hu(t_n)-U_{h,n}).

As for $\bar{V}_{h,j,n,i}(k)-V_{h,j,n,i}(k)$, taking the difference between (34) and (28),

\begin{array}{rcl} \bar{V}_{h,j,n,i}'(\tau)-V_{h,j,n,i}'(\tau)&=&\Big(A_{h,0}-\frac{j}{\tau}\Big)(\bar{V}_{h,j,n,i}(\tau)-V_{h,j,n,i}(\tau))+\frac{1}{(j-1)!\,\tau}(R_h-P_h)f(t_n+c_ik),\\ \bar{V}_{h,j,n,i}(0)-V_{h,j,n,i}(0)&=&\frac{1}{j!}(R_h-P_h)f(t_n+c_ik). \end{array}

Considering then an analogue of Lemma 6 substituting $A_0$ by $A_{h,0}$ and taking into account that $\varphi_j(0)=1/j!$ (6),

\bar{V}_{h,j,n,i}(k)-V_{h,j,n,i}(k)=\varphi_j(kA_{h,0})(R_h-P_h)f(t_n+c_ik)=O(\varepsilon_h).

Therefore,

R_hu(t_{n+1})-U_{h,n+1}=e^{kA_{h,0}}(R_hu(t_n)-U_{h,n})+\rho_{h,n}+O(k\varepsilon_h).

This implies that

R_hu(t_{n+1})-U_{h,n+1}=e^{t_{n+1}A_{h,0}}(R_hu(0)-U_{h,0})+O\Big(\frac{1}{k}\max_{0\leq l\leq n}\|\rho_{h,l}\|+\varepsilon_h\Big),

which, together with the first line of (33), (14), (41) and Theorem 14, implies the result.

5 Numerical experiments

In this section we will show some numerical experiments which corroborate the previous results. For that, we have considered parabolic problems with homogeneous and non-homogeneous Dirichlet boundary conditions, for which $X=L^2(\Omega)$ for a certain spatial domain $\Omega$ and $g\in H^{\frac{1}{2}}(\Omega)$. The fact that these problems fit well into the theory of abstract initial boundary value problems is justified in [4, 13]. Moreover, other types of boundary conditions can also be considered, although we restrict ourselves here to Dirichlet boundary conditions just for the sake of brevity.

As for the space discretization, we have considered here both standard symmetric second-order finite differences and collocation spectral methods in one dimension. For the former, it was already justified in [4] that the hypotheses required on the space discretization are satisfied, at least for the discrete $L^2$-norm, with $Z=H^4(\Omega)$ and $\varepsilon_h=O(h^2)$. Besides, a discrete maximum principle (hypothesis (HS)) is well known to apply [14]. With the collocation spectral methods, those hypotheses are also valid with the discrete $L^2$-norm associated to the corresponding Gauss-Lobatto quadrature rule ($\|\cdot\|_{h,GL}$), $Z=H^m(\Omega)$ and $\varepsilon_h=O(J^{2-m})$ [2, 6], where $J+1$ is the number of collocation nodes, which is clearly inversely proportional to the diameter $h$ of the space grid. In such a way, the more regular the functions are, the more quickly the numerical solution of the elliptic problems converges to the exact solution.

Besides, although in the collocation case the matrix $A_{h,0}$ is no longer symmetric, Remarks 3 and 5 still apply. Notice that, for every matrix $B$ of dimension $(J-1)\times(J-1)$,

\|B\|_{h,GL}=\|D_JBD_J^{-1}\|_h, \qquad (42)

where $D_J$ denotes the diagonal matrix which contains the square roots of the coefficients of the quadrature rule corresponding to the interior Gauss-Lobatto nodes $\{x_j\}_{j=1}^{J-1}$. (We will denote those coefficients by $\{\alpha_j\}_{j=1}^{J-1}$.) Because of this, when $D_JBD_J^{-1}$ is symmetric, $\|B\|_{h,GL}=\rho(B)$. The fact that $D_JA_{h,0}D_J^{-1}$ is symmetric comes from the following: notice that $(A_h)_{i,j}=L_j''(x_i)$, where $\{L_j(x)\}$ are the Lagrange polynomials associated to the interior Gauss-Lobatto nodes and those at the boundary. As $\{L_j(x)\}_{j=1}^{J-1}$ vanish at the boundary, integrating by parts, for every $i,j\in\{1,\dots,J-1\}$,

\int L_j''(x)L_i(x)\,dx=-\int L_j'(x)L_i'(x)\,dx.

As the integrand on the left-hand side is a polynomial of degree $2J-2$, the corresponding Gauss-Lobatto quadrature rule integrates it exactly. Therefore,

\alpha_iL_j''(x_i)=-\int L_j'(x)L_i'(x)\,dx=\alpha_jL_i''(x_j),

where, for the last equality, the roles of $i$ and $j$ have been interchanged. From this, and using (42) again,

\Big\|kA_{h,0}\sum_{r=1}^{n-1}e^{rkA_{h,0}}\Big\|_{h,GL}=\Big\|kD_JA_{h,0}D_J^{-1}\sum_{r=1}^{n-1}e^{rkD_JA_{h,0}D_J^{-1}}\Big\|_h.

As $D_JA_{h,0}D_J^{-1}$ is symmetric, the matrix inside $\|\cdot\|_h$ is also symmetric and therefore

\Big\|kA_{h,0}\sum_{r=1}^{n-1}e^{rkA_{h,0}}\Big\|_{h,GL}=\rho\Big(kD_JA_{h,0}D_J^{-1}\sum_{r=1}^{n-1}e^{rkD_JA_{h,0}D_J^{-1}}\Big)=\rho\Big(kA_{h,0}\sum_{r=1}^{n-1}e^{rkA_{h,0}}\Big).

Secondly, the eigenvalues of $A_{h,0}$ are negative. This is due to the following: for every polynomial $p(x)\not\equiv 0$ which vanishes at the boundary,

\int p''(x)p(x)\,dx=-\int[p'(x)]^2\,dx<0.

Considering $p(x)=\sum_{i=1}^{J-1}\beta_iL_i(x)$ and using the Gauss-Lobatto quadrature rule and the definition of the Lagrange polynomials,

\sum_{k=1}^{J-1}\alpha_k\Big(\sum_{i=1}^{J-1}\beta_iL_i''(x_k)\Big)\Big(\sum_{j=1}^{J-1}\beta_jL_j(x_k)\Big)=\sum_{i,k=1}^{J-1}\alpha_k\beta_i\beta_kL_i''(x_k)<0.

This can be rewritten as $\vec{\beta}^TD_J^2A_{h,0}\vec{\beta}<0$ for every vector $\vec{\beta}\neq\vec{0}$, or equivalently, $(D_J\vec{\beta})^TD_JA_{h,0}D_J^{-1}(D_J\vec{\beta})<0$, which implies that $D_JA_{h,0}D_J^{-1}$ has negative eigenvalues and so does $A_{h,0}$.

For both types of discretizations considered here, $L_hQ_h\partial\equiv 0$ and therefore formulas (27) and (32) simplify a little. However, other discretizations (such as those considered in [4]) are also possible, for which that simplification cannot be made.

In all cases, we have considered the one-dimensional problem

\begin{array}{rcl} u_t(x,t)&=&u_{xx}(x,t)+f(x,t),\quad 0\leq t\leq 1,\quad 0\leq x\leq 1,\\ u(x,0)&=&u_0(x),\\ u(0,t)&=&g_0(t),\qquad u(1,t)=g_1(t), \end{array} \qquad (43)

with the functions $f$, $u_0$, $g_0$ and $g_1$ chosen so that $u(x,t)=x(1-x)e^{-t}$ or $u(x,t)=e^{x-t}$ is the solution of the problem. These functions satisfy the regularity hypotheses (7) and (35) for any natural number $p$.
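
For instance, for the non-homogeneous solution $u(x,t)=e^{x-t}$, all the data and the boundary values $\partial A^lu$ and $\partial A^lf$ needed in (27) and (32) are available in closed form (a sketch with our own naming, feeding the full-discretization sketch of Section 4):

```python
import numpy as np

# Data for the exact solution u(x,t) = exp(x - t) of (43): f = u_t - u_xx = -2 exp(x - t),
# and A^l u = exp(x - t), A^l f = -2 exp(x - t) for every l, so the boundary values
# required by the suggested scheme are explicit.
u_exact   = lambda x, t: np.exp(x - t)
f_source  = lambda x, t: -2.0 * np.exp(x - t)
bdry_Al_u = lambda l, t: (np.exp(-t), np.exp(1.0 - t))                  # boundary of A^l u
bdry_Al_f = lambda l, t: (-2.0 * np.exp(-t), -2.0 * np.exp(1.0 - t))    # boundary of A^l f
```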

5.1 Trapezoidal rule

We begin by considering the trapezoidal rule in time and second-order finite differences in space. We have taken $h=10^{-3}$ so that the error in space is negligible. The trapezoidal rule corresponds to $s=2$ but is just exact for polynomials of degree $\leq 1$. Therefore, one of the hypotheses of Theorem 2 is not satisfied and we can just apply Theorem 1 when discretizing firstly in space and then in time with the solution which satisfies $g_0(t)=g_1(t)=0$. That theorem states that, with respect to the time stepsize $k$, the local and global errors should show orders $3$ and $2$ respectively, and we can check in Table 1 that this really happens. For the same problem, but applying the technique which is suggested in this paper (33) with $p=2$, Theorems 9, 14 and 15 state that the local and global errors should also show orders $3$ and $2$ respectively, and that is what we can in fact observe in the same table. We can see that, although the local order is a bit clearer with the suggested technique, the size of the errors is slightly bigger with the suggested approach. Therefore, it seems that, in this particular problem, the error constants are bigger with the suggested technique and it is not worth the additional cost of calculating terms which contain $\varphi_3(kA_{h,0})$ and $\varphi_4(kA_{h,0})$.

The comparison is more advantageous for the suggested technique when the solution does not vanish at the boundary. Then, Theorem 4 and Remark 5 state that the local and global errors should show order $2$ with the classical approach, and that can be checked in Table 2. However, with the suggested strategy, as in the vanishing boundary conditions case, the theorems in this paper prove local order $3$ and global order $2$, which can again be checked in the same table. The fact that we manage to increase the order of the local error makes the global errors, although always of order $2$, smaller with the suggested technique than with the classical approach. Nevertheless, the comparison between both techniques becomes more beneficial for the technique suggested in the paper when the classical (non-exponential) order of the quadrature rule increases.
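
A rough convergence check along the lines of the non-homogeneous experiment (Table 2) can be assembled from the sketches above (quadrature_coefficients, heat_dirichlet_matrices, suggested_step and the data for $u(x,t)=e^{x-t}$). It is only a sketch under those assumptions: the observed global order should approach 2, but the exact error values depend on implementation details (and, for the smallest steps, on the $O(h^2)$ space error) and need not reproduce the table.

```python
import numpy as np

c = np.array([0.0, 1.0]); a = quadrature_coefficients(c); p = 2   # trapezoidal nodes
N = 999                                                           # h = 1e-3 as in the text
h, lam, Q = heat_dirichlet_matrices(N)
x = np.linspace(h, 1.0 - h, N)
errs = []
for n_steps in [10, 20, 40, 80]:
    k = 1.0 / n_steps
    U = u_exact(x, 0.0)                                            # U_{h,0} = P_h u_0
    for n in range(n_steps):
        U = suggested_step(U, n * k, k, h, lam, Q, c, a, p,
                           lambda t: f_source(x, t), bdry_Al_u, bdry_Al_f)
    errs.append(np.max(np.abs(U - u_exact(x, 1.0))))
print(errs)
print(np.log2(np.array(errs[:-1]) / np.array(errs[1:])))           # observed orders ~ 2
```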

        |        Classical approach            |        Suggested approach
  k     | Loc. err.   ord.  Glob. err.  ord.   | Loc. err.   ord.  Glob. err.  ord.
  1/10  | 8.0170e-5         5.5395e-5          | 1.5334e-4         9.8091e-5
  1/20  | 1.2961e-5   2.6   1.3953e-5   2.0    | 1.9139e-5   3.0   1.8575e-5   2.4
  1/40  | 1.8644e-6   2.8   3.4952e-6   2.0    | 2.3902e-6   3.0   4.0262e-6   2.2
  1/80  | 2.5316e-7   2.9   8.7426e-7   2.0    | 2.9862e-7   3.0   9.3773e-7   2.1
  1/160 | 3.3354e-8   2.9   2.1860e-7   2.0    | 3.7318e-8   3.0   2.2635e-7   2.0
  1/320 | 4.3171e-9   3.0   5.4651e-8   2.0    | 4.6641e-9   3.0   5.5610e-8   2.0

Table 1: Trapezoidal rule, $h=10^{-3}$, $u(x,t)=x(1-x)e^{-t}$.
        |      Classical approach           |      Suggested approach
k       | Loc. err.   ord.  Glob. err.  ord.| Loc. err.   ord.  Glob. err.  ord.
1/10    | 7.4531e-4         4.8108e-4       | 2.0144e-4         1.3742e-4
1/20    | 1.5446e-4   2.3   1.2476e-4   2.0 | 2.8775e-5   2.8   3.0165e-5   2.2
1/40    | 3.3863e-5   2.2   3.2074e-5   2.0 | 3.9084e-6   2.9   7.0453e-6   2.1
1/80    | 7.3906e-6   2.2   8.1809e-6   2.0 | 5.1475e-7   2.9   1.6979e-6   2.0
1/160   | 1.5848e-6   2.2   2.0770e-6   2.0 | 6.6122e-8   3.0   4.1324e-7   2.0
1/320   | 3.3666e-7   2.2   5.2777e-7   2.0 | 8.1531e-9   3.0   9.8459e-8   2.1
Table 2: Trapezoidal rule, h=10^{-3}, u(x,t)=e^{x-t}
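The 'ord.' columns in these and the following tables are presumably the usual estimated orders obtained by comparing the errors at consecutive (halved) stepsizes, p \approx \log_2(err(k)/err(k/2)); the following minimal Python check reproduces, for instance, the classical local-error column of Table 2.

    import numpy as np

    def observed_orders(errors):
        # estimated order between consecutive rows when the stepsize is halved:
        # p ~ log2( err(k) / err(k/2) )
        e = np.asarray(errors, dtype=float)
        return np.log2(e[:-1] / e[1:])

    # local errors of the classical approach in Table 2
    err = [7.4531e-4, 1.5446e-4, 3.3863e-5, 7.3906e-6, 1.5848e-6, 3.3666e-7]
    print(np.round(observed_orders(err), 1))   # -> [2.3 2.2 2.2 2.2 2.2]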

5.2 Simpson's rule

In this subsection we consider Simpson's rule in time and a spectral collocation method in space with 40 nodes, so that the error in space is negligible. As Simpson's rule corresponds to s=3 and the interpolatory quadrature rule based on those 3 nodes is exact for polynomials of degree \leq 3, we can take p=4 in Theorem 10 and achieve orders 5 and 4 for the local and global errors respectively with the technique suggested here. However, with the classical approach, at least in the common case that g(t)\not\equiv 0, Theorem 4 and Remark 5 give just order 3 for both the local and global errors. These results can be checked in Table 3. Moreover, the size of the global error is smaller with the suggested technique even for the larger time stepsizes.
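For reference, interpolating the source at the three Simpson nodes {0, 1/2, 1} in the variation-of-constants formula gives the classical exponential quadrature step for the semidiscrete system U' = A_{h,0} U + F(t) in the form (a standard computation, cf. [9])

U_{n+1} = e^{kA_{h,0}} U_n + k [ (\varphi_1 - 3\varphi_2 + 4\varphi_3)(kA_{h,0}) F(t_n) + (4\varphi_2 - 8\varphi_3)(kA_{h,0}) F(t_n + k/2) + (4\varphi_3 - \varphi_2)(kA_{h,0}) F(t_n + k) ],

where F(t) collects the source values and, in the non-vanishing boundary case, the boundary contributions of the space discretization; the weights reduce to the classical Simpson weights 1/6, 4/6, 1/6 as kA_{h,0} tends to 0.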

We also want to remark here that the trapezoidal and Simpson rules correspond to Gauss-Lobatto quadrature rules with s=2 and s=3 respectively and therefore Corollary 12 (ii) and Remark 13 apply.

        |      Classical approach           |      Suggested approach
k       | Loc. err.   ord.  Glob. err.  ord.| Loc. err.    ord.  Glob. err.   ord.
1/2     | 8.0718e-4         4.9507e-4       | 2.7496e-4          1.6821e-4
1/4     | 8.3265e-5   3.3   4.2862e-5   3.5 | 8.5778e-6    5.0   4.4156e-6    5.2
1/8     | 8.6214e-6   3.3   4.0561e-6   3.4 | 2.6785e-7    5.0   1.5345e-7    4.8
1/16    | 1.0500e-6   3.0   4.4103e-7   3.2 | 8.3681e-9    5.0   6.7803e-9    4.5
1/32    | 1.1622e-7   3.2   4.6864e-8   3.2 | 2.6148e-10   5.0   3.5342e-10   4.3
1/64    | 1.2378e-8   3.2   4.9423e-9   3.2 | 8.1711e-12   5.0   2.0189e-11   4.1
Table 3: Simpson's rule, J=39, u(x,t)=e^{x-t}

5.3 Gaussian rules

In order to achieve the highest accuracy for a given number of nodes, we consider in this subsection Gaussian quadrature rules, more precisely those corresponding to s=1,2,3,4. As space discretization, we have considered again the same spectral collocation method of the previous subsection. Following Corollary 12 (i) and Theorem 14, even for non-vanishing boundary conditions, taking p=2s in (27) and (32) the local error in time should show order 2s+1 and the global error, using Theorem 15, order 2s. This should be compared with the order s+1 which is proved for the classical approach when g(t)\equiv 0 in Theorem 2 and with the order s for the local and global errors when g(t)\not\equiv 0, which comes from Theorem 4 and Remark 5. Tables 4 and 5 show the results which correspond to s=1 and s=2 respectively for the vanishing boundary conditions case. Although for s=1 there is no improvement in the global order with the suggested technique, the errors are a bit smaller. Of course, the benefits are more evident with s=2. For the non-vanishing boundary conditions case, Tables 6, 7, 8 and 9 show the results which correspond to s=1,2,3 and 4 respectively. When avoiding order reduction, the results are much better than with the classical approach: not only is the order higher, but the size of the errors is also smaller from the very beginning. We notice that the observed global order is even a bit better than expected for the first values of k.
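For reference, the nodes of these rules are the Gauss-Legendre nodes mapped from [-1,1] to [0,1]; the following small Python sketch lists them for s=1,...,4 using numpy's Legendre routines.

    import numpy as np

    def gauss_nodes_01(s):
        # nodes and weights of the s-point Gauss-Legendre rule mapped to [0,1]
        x, w = np.polynomial.legendre.leggauss(s)
        return 0.5 * (x + 1.0), 0.5 * w

    for s in (1, 2, 3, 4):
        c, _ = gauss_nodes_01(s)
        print(s, np.round(c, 6))
    # 1 [0.5]                                    (midpoint rule)
    # 2 [0.211325 0.788675]
    # 3 [0.112702 0.5      0.887298]
    # 4 [0.069432 0.330009 0.669991 0.930568]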

        |      Classical approach           |      Suggested approach
k       | Loc. err.   ord.  Glob. err.  ord.| Loc. err.   ord.  Glob. err.  ord.
1/4     | 8.2639e-3         4.3743e-3       | 4.9483e-3         2.5685e-3
1/8     | 1.8874e-3   2.1   1.1548e-3   1.9 | 6.4427e-4   2.9   3.7820e-4   2.8
1/16    | 3.6716e-4   2.4   2.9791e-4   2.0 | 8.2199e-5   3.0   6.9202e-5   2.4
1/32    | 7.6916e-5   2.2   7.6389e-5   2.0 | 1.0381e-5   3.0   1.4677e-5   2.2
1/64    | 1.6803e-5   2.2   1.9445e-5   2.0 | 1.3043e-6   3.0   3.3796e-6   2.1
1/128   | 3.6111e-6   2.2   4.9232e-6   2.0 | 1.6345e-7   3.0   8.1115e-7   2.1
Table 4: Midpoint rule, J=39, u(x,t)=x(1-x)e^{-t}
        |      Classical approach           |      Suggested approach
k       | Loc. err.   ord.  Glob. err.  ord.| Loc. err.    ord.  Glob. err.   ord.
1/2     | 8.9292e-4         5.4788e-4       | 8.2591e-5          5.0573e-5
1/4     | 8.6746e-5   3.4   4.5296e-5   3.6 | 2.8465e-6    4.9   1.4782e-6    5.1
1/8     | 8.1186e-6   3.4   3.9933e-6   3.5 | 9.3457e-8    4.9   5.4928e-8    4.7
1/16    | 1.0237e-6   3.0   4.3449e-7   3.2 | 2.9938e-9    5.0   2.5254e-9    4.4
1/32    | 1.1415e-7   3.2   4.6048e-8   3.2 | 9.4726e-11   5.0   1.3424e-10   4.2
1/64    | 1.2112e-8   3.2   4.8418e-9   3.2 | 2.9786e-12   5.0   7.7190e-12   4.1
Table 5: Gaussian rule with s=2, J=39, u(x,t)=x(1-x)e^{-t}
        |      Classical approach           |      Suggested approach
k       | Loc. err.   ord.  Glob. err.  ord.| Loc. err.   ord.  Glob. err.  ord.
1/8     | 6.6985e-2         2.8814e-2       | 1.5650e-3         8.6254e-4
1/16    | 3.0444e-2   1.1   1.2167e-2   1.2 | 1.8673e-4   3.1   1.4048e-4   2.6
1/32    | 1.3218e-2   1.2   5.1128e-3   1.2 | 2.2356e-5   3.1   2.7661e-5   2.3
1/64    | 5.6269e-3   1.2   2.1472e-3   1.2 | 2.6947e-6   3.1   6.1319e-6   2.2
1/128   | 2.3791e-3   1.2   9.0138e-4   1.2 | 3.2718e-7   3.0   1.4450e-6   2.1
1/256   | 1.0015e-3   1.2   3.7813e-4   1.2 | 3.9993e-8   3.0   3.5085e-7   2.0
Table 6: Midpoint rule, J=39, u(x,t)=e^{x-t}
        |      Classical approach           |      Suggested approach
k       | Loc. err.   ord.  Glob. err.  ord.| Loc. err.    ord.  Glob. err.   ord.
1/2     | 1.9328e-2         1.1743e-2       | 7.9408e-4          4.8503e-4
1/4     | 4.8715e-3   2.0   2.3068e-3   2.3 | 2.2565e-5    5.1   1.1356e-5    5.4
1/8     | 1.1460e-3   2.1   4.7811e-4   2.3 | 6.3799e-7    5.1   3.3399e-7    5.1
1/16    | 2.5199e-4   2.2   9.8750e-5   2.3 | 1.8167e-8    5.1   1.2423e-8    4.7
1/32    | 5.4061e-5   2.2   2.0543e-5   2.3 | 5.2222e-10   5.1   5.7608e-10   4.4
1/64    | 1.1476e-5   2.2   4.2930e-6   2.3 | 1.4918e-11   5.1   2.9580e-11   4.3
Table 7: Gaussian rule with s=2, J=39, u(x,t)=e^{x-t}
        |      Classical approach           |      Suggested approach
k       | Loc. err.   ord.  Glob. err.  ord.| Loc. err.    ord.  Glob. err.   ord.
1/2     | 8.4732e-4         5.1405e-4       | 3.0107e-6          1.8363e-6
1/4     | 1.0734e-4   3.0   5.0710e-5   3.3 | 2.0423e-8    7.2   1.0082e-8    7.5
1/8     | 1.2260e-5   3.1   5.1107e-6   3.3 | 1.3840e-10   7.2   6.6752e-11   7.2
1/16    | 1.3379e-6   3.2   5.2398e-7   3.3 | 9.4087e-13   7.2   5.3198e-13   7.0
1/32    | 1.4335e-7   3.2   5.4413e-8   3.3 | 6.4275e-15   7.2   5.3534e-15   6.6
1/64    | 1.5182e-8   3.2   5.6735e-9   3.3 | 4.4259e-17   7.2   6.6517e-17   6.3
Table 8: Gaussian rule with s=3, J=39, u(x,t)=e^{x-t}
        |      Classical approach             |      Suggested approach
k       | Loc. err.    ord.  Glob. err.   ord.| Loc. err.    ord.  Glob. err.   ord.
1/2     | 2.7746e-5          1.6829e-5        | 8.1253e-9          4.9495e-9
1/4     | 1.7137e-6    4.0   8.0951e-7    4.4 | 1.3502e-11   9.2   6.5604e-12   9.6
1/8     | 9.6826e-8    4.1   4.0363e-8    4.3 | 2.2443e-14   9.2   1.0107e-14   9.3
1/16    | 5.2833e-9    4.2   2.0690e-9    4.3 | 3.7267e-17   9.2   1.7380e-17   9.2
1/32    | 2.8240e-10   4.2   1.0719e-10   4.3 | 6.2266e-20   9.2   3.5644e-20   8.9
Table 9: Gaussian rule with s=4, J=39, u(x,t)=e^{x-t}

Finally, although it is not an aim of this paper, in order to compare roughly the results in terms of computational cost, let us concentrate on Gaussian quadrature rules of the same order 2s when integrating a non-vanishing boundary value problem. When considering 2s nodes with the classical approach, 2s evaluations of the source term f must be made at each step and the 2s operators {\varphi_j(kA_{h,0})}_{j=1}^{2s} are needed; these are multiplied by vectors whose components, in principle, all vary at each step. However, with the suggested technique and s nodes, just s evaluations of the source term f must be made, although the 3s operators {\varphi_j(kA_{h,0})}_{j=1}^{3s} are needed. Nevertheless, of these 3s operators, just the first s are multiplied by vectors which change independently in all their components at each step. The other 2s are multiplied by vectors which just contain information on the boundary. Therefore, with finite differences many components vanish and, with Gauss-Lobatto spectral methods, those vectors are just a time-dependent linear combination of two vectors which do not change with time. With Gauss-Lobatto methods, as A_{h,0} is not sparse but its size is moderate, we have calculated once and for all at the very beginning e^{kA_{h,0}}, \varphi_j(kA_{h,0}), j=1,...,s, and the two necessary vectors derived from \varphi_j(kA_{h,0}), j=s+1,...,3s. Then, in (33), the terms containing the former require O(J^2) operations at each step while the terms containing the latter require just O(J) operations. With finite differences, as the matrix A_{h,0} is sparse and usually bigger, we have applied general Krylov subroutines [11] to calculate all the required terms at each step.
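A minimal sketch of the precomputation just described is given below; the names and the splitting are illustrative (the actual terms come from (33)), and phi_of_matrix stands for any routine that evaluates \varphi_j of a matrix, for instance the eigendecomposition-based one of the earlier sketch for dense Gauss-Lobatto matrices or a Krylov code [11] for sparse finite-difference ones.

    import numpy as np

    def precompute_operators(A, k, s, v1, v2, phi_of_matrix):
        # One-off precomputation for a dense, moderate-sized A_{h,0}:
        #  - e^{kA}, phi_1(kA), ..., phi_s(kA) are kept as full matrices, since each
        #    step multiplies them by vectors all of whose components change
        #    (O(J^2) work per matrix-vector product);
        #  - for j = s+1, ..., 3s only the vectors phi_j(kA) v1 and phi_j(kA) v2 are
        #    kept, because those operators only ever act on time-dependent linear
        #    combinations of the two fixed boundary-profile vectors v1, v2
        #    (O(J) work per step).
        full_mats = [phi_of_matrix(j, k * A) for j in range(s + 1)]
        bnd_vecs = []
        for j in range(s + 1, 3 * s + 1):
            P = phi_of_matrix(j, k * A)   # formed once; only its action on v1, v2 is stored
            bnd_vecs.append((P @ v1, P @ v2))
        return full_mats, bnd_vecs

At each step, the dominant cost is then the s+1 dense matrix-vector products with the stored full matrices, while the boundary terms only require combining the stored pairs of vectors with two time-dependent scalars.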

We offer a particular comparison for order 2 with the Gauss-Lobatto spectral space discretization on the one hand and second-order finite differences on the other, considering the implementation described above in each case. In Figure 1 we can see that, for the former, the suggested technique costs less than half as much as the classical one; with the latter, in Figure 2, the comparison is not so advantageous for the suggested technique, but it is still cheaper than the classical approach. We also remark that, in any case, the more expensive the source function f is to evaluate, the more advantageous the suggested technique with s nodes will be against the classical approach with 2s nodes.

Figure 1: Error against CPU time when integrating problem (43) with exact solution u(x,t)=e^{x-t}, using the Gauss-Lobatto spectral method in space and, in time, the classical approach with the Gaussian rule with s=2 (pink circles) or the suggested technique for the midpoint rule (blue asterisks)
Figure 2: Error against CPU time when integrating problem (43) with exact solution u(x,t)=e^{x-t}, using second-order finite differences in space and, in time, the classical approach with the Gaussian rule with s=2 (pink circles) or the suggested technique for the midpoint rule (blue asterisks)

References

  • [1] I. Alonso–Mallo, Rational methods with optimal order of convergence for partial differential equations, Appl. Num. Math. 35 (1999), 117–131.
  • [2] I. Alonso–Mallo, B. Cano and J. C. Jorge, Spectral-Fractional Step Runge-Kutta Discretizations for Initial Boundary Value Problems with Time-Dependent Boundary Conditions, Math. Comput. 73 (2004), 1801–1825.
  • [3] I. Alonso–Mallo, B. Cano and N. Reguera, Analysis of order reduction when integrating linear initial boundary value problems with Lawson methods, Appl. Numer. Math. 118 (2017), 64–74.
  • [4] I. Alonso–Mallo, B. Cano and N. Reguera, Avoiding order reduction when integrating linear initial boundary value problems with Lawson methods, IMA J. Numer. Anal., doi: 10.1093/imanum/drw052.
  • [5] I. Alonso–Mallo, B. Cano and N. Reguera, Avoiding order reduction when integrating reaction-diffusion boundary value problems with exponential splitting methods, arXiv:submit/1679590.
  • [6] C. Bernardi and Y. Maday, Approximations spectrales de problèmes aux limites elliptiques, Springer-Verlag France, Paris, 1992. MR 94f:65112.
  • [7] T. Gockler and V. Grimm, Convergence analysis of an extended Krylov subspace method for the approximation of operator functions in exponential integrators, SIAM J. Numer. Anal. 51 (4) (2013), 2189–2213.
  • [8] M. Hochbruck and A. Ostermann, Exponential Runge-Kutta methods for parabolic problems, Appl. Numer. Math. 53 (2005), no. 2-4, 323–339.
  • [9] M. Hochbruck and A. Ostermann, Exponential integrators, Acta Numerica (2010) 209–286.
  • [10] J. D. Lawson, Generalized Runge-Kutta processes for stable systems with large Lipschitz constants, SIAM J. Numer. Anal. 4 (1967) 372–380.
  • [11] J. Niesen and W. M. Wright, Algorithm 919: a Krylov subspace algorithm for evaluating the φ\varphi-functions appearing in exponential integrators, ACM Trans. Math. Software 38, no. 3, Art. 22 (2012).
  • [12] C. Palencia and I. Alonso–Mallo, Abstract initial boundary value problems, Proc. Royal Soc. Edinburgh 124A (1994), 879–908.
  • [13] A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, (Springer-Verlag, Berlin, 1994).
  • [14] J. C. Strikwerda, Finite Difference Schemes and Partial Differential Equations, (Wadsworth & Brooks, United States of America, 1989).