
Error propagation in an explicit and an implicit numerical method for Volterra integro-differential equations

J. S. C. Prentice
Senior Research Officer
Mathsophical Ltd.
Johannesburg, South Africa
Email: jpmsro@mathsophical.com
Abstract

We study error propagation in both an explicit and an implicit method for solving Volterra integro-differential equations. We determine the relationship between local and global errors. We derive upper bounds for the global error, and show that both methods are expected to be first-order. A few numerical examples illustrate our results.

1 Introduction

Recently, we presented explicit and implicit numerical methods for solving the Volterra integro-differential equation

y^{(n)}(x) = f(x, y) + \int_{x_0}^{x} K(x, y(t), t)\, dt, \qquad x > x_0,    (1)

using numerous examples to demonstrate the performance of the methods, and studying their stability [1, 2]. In this paper, we investigate the propagation of numerical error in these methods. Not only is this an interesting study in its own right, it also allows us to learn about upper bounds on the global error, and about the order of the error.

2 Notation and terminology

We deviate slightly from the notation used in our previous work: here, w denotes the approximate solution, and y denotes the true solution. The nodes are labelled as

x_0 < x_1 < x_2 < \ldots < x_i < x_{i+1} < \ldots < x_f

and h is the uniform spacing between the nodes (the stepsize). We focus our attention on the case n = 1 in (1). We note that f and K are assumed to be suitably smooth so as to yield a unique solution and, in particular, K is not singular anywhere on the interval of integration.

The explicit method is given by

w_{i+1} = w_i + h f(x_i, w_i) + \frac{h^2}{2}\left(\sum_{j=0}^{i} 2K(x_i, w_j, x_j) - K(x_i, w_0, x_0) - K(x_i, w_i, x_i)\right) \equiv M_E(w_i),    (2)

where we have implicitly defined M_E(w_i).

The implicit method is given by

w_{i+1} = w_i + h f(x_{i+1}, w_{i+1}) + \frac{h^2}{2}\left(\sum_{j=0}^{i+1} 2K(x_{i+1}, w_j, x_j) - K(x_{i+1}, w_0, x_0) - K(x_{i+1}, w_{i+1}, x_{i+1})\right) \equiv M_I(w_i),    (3)

where we have implicitly defined M_I(w_i).

We also define M_E(y_i) and M_I(y_i) by applying the same formulas to the exact solution:

M_E(y_i) \equiv y_i + h f(x_i, y_i) + \frac{h^2}{2}\left(\sum_{j=0}^{i} 2K(x_i, y_j, x_j) - K(x_i, y_0, x_0) - K(x_i, y_i, x_i)\right)

M_I(y_i) \equiv y_i + h f(x_{i+1}, y_{i+1}) + \frac{h^2}{2}\left(\sum_{j=0}^{i+1} 2K(x_{i+1}, y_j, x_j) - K(x_{i+1}, y_0, x_0) - K(x_{i+1}, y_{i+1}, x_{i+1})\right)

The global error at x_{i+1} is defined as

\Delta_{i+1} = w_{i+1} - y_{i+1} = M_E(w_i) - y_{i+1} \quad \text{(explicit method)}
\Delta_{i+1} = w_{i+1} - y_{i+1} = M_I(w_i) - y_{i+1} \quad \text{(implicit method)}

and the local error at x_{i+1} is defined as

\varepsilon_{i+1} = M_E(y_i) - y_{i+1} \quad \text{(explicit method)}
\varepsilon_{i+1} = M_I(y_i) - y_{i+1} \quad \text{(implicit method)}

The local error has the form

\varepsilon_{i+1} = \varepsilon_{i+1}^D + \varepsilon_{i+1}^Q = O(h^2)

where \varepsilon_{i+1}^D is the error associated with the Euler approximation to the derivative in the IDE, and \varepsilon_{i+1}^Q is the error associated with the composite Trapezium approximation to the integral in the IDE. These are O(h), at worst, but on multiplication by h, as required by the structure of the methods, they acquire, at worst, an O(h^2) character. The precise form of these errors will not concern us here; it is enough for our purposes to accept that \varepsilon_{i+1} = O(h^2). Nevertheless, we discuss this matter to some extent in the Appendix.
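The O(h^2) character of the local error is easy to check numerically. The sketch below (our own construction) computes \varepsilon_1 = M_E(y_0) - y(x_1) for the test equation (8) of Section 4.2 with lambda = -1, gamma = -2; since the Trapezium bracket in (2) vanishes at i = 0, this is simply 2 + h\lambda - y(h). Halving h should reduce \varepsilon_1 by roughly a factor of four.

```python
import math

lam, gam = -1.0, -2.0          # test-equation parameters (lambda^2 + 4*gamma < 0)
omega = math.sqrt(abs(lam * lam + 4.0 * gam)) / 2.0
exact = lambda x: 2.0 * math.exp(lam * x / 2.0) * math.cos(omega * x)

def local_error_1(h):
    # M_E(y_0) - y_1: one explicit step (2) from the exact value y(0) = 2;
    # the Trapezium bracket is empty at i = 0
    return 2.0 + h * lam * (2.0 - 1.0) - exact(h)

ratio = local_error_1(0.01) / local_error_1(0.005)  # expect roughly 2^2 = 4
```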

3 Analysis - explicit case

We consider the explicit case first, and do so in detail. The parameters \xi and \eta denote values appropriate for the various Taylor residual terms that arise.

3.1 Error propagation

At x_1 we have

w_1 = w_0 + h f(x_0, w_0)
\Rightarrow \Delta_1 + y_1 = \Delta_0 + y_0 + h f(x_0, \Delta_0 + y_0)
 = \Delta_0 + y_0 + h f(x_0, y_0) + \Delta_0 h f_y(x_0, \xi_0)
\Rightarrow \Delta_1 = \left[y_0 + h f(x_0, y_0) - y_1\right] + \Delta_0\left(1 + h f_y(x_0, \xi_0)\right)
 = \left[M_E(y_0) - y_1\right] + \Delta_0\left(1 + h f_y(x_0, \xi_0)\right)
 = \varepsilon_1 + \Delta_0\left(1 + h f_y(x_0, \xi_0)\right).

At x_2 we have

w_2 = w_1 + h f(x_1, w_1) + \frac{h^2}{2} K(x_0, w_0, x_0) + \frac{h^2}{2} K(x_1, w_1, x_1)
\Rightarrow \Delta_2 + y_2 = \Delta_1 + y_1 + h f(x_1, \Delta_1 + y_1) + \frac{h^2}{2} K(x_0, \Delta_0 + y_0, x_0) + \frac{h^2}{2} K(x_1, \Delta_1 + y_1, x_1)
 = \Delta_1 + y_1 + h f(x_1, y_1) + \Delta_1 h f_y(x_1, \xi_1) + \frac{h^2}{2}\left(K(x_0, y_0, x_0) + \Delta_0 K_y(x_0, \eta_0, x_0) + K(x_1, y_1, x_1) + \Delta_1 K_y(x_1, \eta_1, x_1)\right)
\Rightarrow \Delta_2 = \left[y_1 + h f(x_1, y_1) + \frac{h^2}{2}\left(K(x_0, y_0, x_0) + K(x_1, y_1, x_1)\right) - y_2\right] + \Delta_1\left(1 + h f_y(x_1, \xi_1) + \frac{h^2}{2} K_y(x_1, \eta_1, x_1)\right) + \Delta_0 \frac{h^2}{2} K_y(x_0, \eta_0, x_0)
 = \left[M_E(y_1) - y_2\right] + \Delta_1\left(1 + h f_y(x_1, \xi_1) + \frac{h^2}{2} K_y(x_1, \eta_1, x_1)\right) + \Delta_0 \frac{h^2}{2} K_y(x_0, \eta_0, x_0)
 = \varepsilon_2 + \Delta_1\left(1 + h f_y(x_1, \xi_1) + \frac{h^2}{2} K_y(x_1, \eta_1, x_1)\right) + \sum_{j=0}^{0} \Delta_j \frac{h^2}{2} K_y(x_j, \eta_j, x_j),

and at x_3 we have

w_3 = w_2 + h f(x_2, w_2) + \frac{h^2}{2} K(x_0, w_0, x_0) + h^2 K(x_1, w_1, x_1) + \frac{h^2}{2} K(x_2, w_2, x_2)
\Rightarrow \Delta_3 + y_3 = \Delta_2 + y_2 + h f(x_2, \Delta_2 + y_2) + \frac{h^2}{2} K(x_0, \Delta_0 + y_0, x_0) + h^2 K(x_1, \Delta_1 + y_1, x_1) + \frac{h^2}{2} K(x_2, \Delta_2 + y_2, x_2)
 = \Delta_2 + y_2 + h f(x_2, y_2) + \Delta_2 h f_y(x_2, \xi_2) + \frac{h^2}{2}\left(K(x_0, y_0, x_0) + \Delta_0 K_y(x_0, \eta_0, x_0) + 2K(x_1, y_1, x_1) + 2\Delta_1 K_y(x_1, \eta_1, x_1) + K(x_2, y_2, x_2) + \Delta_2 K_y(x_2, \eta_2, x_2)\right)
\Rightarrow \Delta_3 = \left[y_2 + h f(x_2, y_2) + \frac{h^2}{2}\left(K(x_0, y_0, x_0) + 2K(x_1, y_1, x_1) + K(x_2, y_2, x_2)\right) - y_3\right] + \Delta_2\left(1 + h f_y(x_2, \xi_2) + \frac{h^2}{2} K_y(x_2, \eta_2, x_2)\right) + \Delta_1 h^2 K_y(x_1, \eta_1, x_1) + \Delta_0 \frac{h^2}{2} K_y(x_0, \eta_0, x_0)
 = \left[M_E(y_2) - y_3\right] + \Delta_2\left(1 + h f_y(x_2, \xi_2) + \frac{h^2}{2} K_y(x_2, \eta_2, x_2)\right) + \Delta_1 h^2 K_y(x_1, \eta_1, x_1) + \Delta_0 \frac{h^2}{2} K_y(x_0, \eta_0, x_0)
 = \varepsilon_3 + \Delta_2\left(1 + h f_y(x_2, \xi_2) + \frac{h^2}{2} K_y(x_2, \eta_2, x_2)\right) + \sum_{j=0}^{1} \Delta_j \frac{h^2}{2} K_y(x_j, \eta_j, x_j) + \sum_{j=1}^{1} \Delta_j \frac{h^2}{2} K_y(x_j, \eta_j, x_j).

In general, for i > 1, we have

\Delta_{i+1} = \varepsilon_{i+1} + \Delta_i\left(1 + h f_y(x_i, \xi_i) + \frac{h^2}{2} K_y(x_i, \eta_i, x_i)\right) + \frac{h^2}{2}\left(\sum_{j=0}^{i-1} \Delta_j K_y(x_j, \eta_j, x_j) + \sum_{j=1}^{i-1} \Delta_j K_y(x_j, \eta_j, x_j)\right)
\Rightarrow \Delta_{i+1} = \widetilde{\varepsilon}_{i+1} + \widetilde{\alpha}_i \Delta_i,    (4)

where

\widetilde{\varepsilon}_{i+1} \equiv \varepsilon_{i+1} + \frac{h^2}{2}\left(\sum_{j=0}^{i-1} \Delta_j K_y(x_j, \eta_j, x_j) + \sum_{j=1}^{i-1} \Delta_j K_y(x_j, \eta_j, x_j)\right)
 = \varepsilon_{i+1} + \frac{h^2}{2}\left(\sum_{j=0}^{i-1} 2\Delta_j K_y(x_j, \eta_j, x_j) - \Delta_0 K_y(x_0, \eta_0, x_0)\right)
 = \varepsilon_{i+1} + h^2 \sum_{j=1}^{i-1} \Delta_j K_y(x_j, \eta_j, x_j) \quad \text{if } \Delta_0 = 0,

and

\widetilde{\alpha}_i \equiv 1 + h f_y(x_i, \xi_i) + \frac{h^2}{2} K_y(x_i, \eta_i, x_i) = 1 + h\left(f_y(x_i, \xi_i) + \frac{h}{2} K_y(x_i, \eta_i, x_i)\right).

Note that \widetilde{\varepsilon}_{i+1} is not a local error, but it is convenient to combine the terms in this way, as we shall soon see. Also, we can write

\widetilde{\varepsilon}_{i+1} = \left(C_{i+1}^1 + \sum_{j=0}^{i-1} \Delta_j K_y(x_j, \eta_j, x_j)\right) h^2 \equiv \left(C_{i+1}^1 + C_{i+1}^2\right) h^2 = \widetilde{C}_{i+1} h^2

wherein the coefficients C_{i+1}^1, C_{i+1}^2 and \widetilde{C}_{i+1} have been implicitly defined. Equation (4) is the defining expression for the propagation of error in the explicit method.

In the remainder of this paper, we will assume \Delta_0 = 0.

3.2 Upper bounds

Assume f_y + \frac{h}{2} K_y > 0. With

\widetilde{\varepsilon}_{\max} \equiv \max_{[x_0, x_f]}\left|\widetilde{\varepsilon}_i\right| = \max_{[x_0, x_f]}\left|\widetilde{C}_i\right| h^2 \equiv \widetilde{C} h^2
\widetilde{\alpha} \equiv 1 + \max_{[x_0, x_f]}\left(h f_y + \frac{h^2}{2} K_y\right) \equiv 1 + hL
\Rightarrow L = \max_{[x_0, x_f]}\left(f_y + \frac{h}{2} K_y\right)

we find

\left|\Delta_{i+1}\right| \leqslant \widetilde{\varepsilon}_{\max}\left(1 + \widetilde{\alpha} + \widetilde{\alpha}^2 + \ldots + \widetilde{\alpha}^i\right)
 = \widetilde{\varepsilon}_{\max}\left(\frac{\widetilde{\alpha}^{i+1} - 1}{\widetilde{\alpha} - 1}\right)
 = \frac{\widetilde{\varepsilon}_{\max}}{hL}\left(\left(1 + hL\right)^{i+1} - 1\right)
 = \frac{\widetilde{\varepsilon}_{\max}}{hL}\left(\left(1 + \frac{(i+1)hL}{i+1}\right)^{i+1} - 1\right)
 = \frac{\widetilde{C} h^2}{hL}\left(\left(1 + \frac{(x_{i+1} - x_0)L}{i+1}\right)^{i+1} - 1\right)
 \approx \frac{\widetilde{C} h}{L}\left(e^{(x_{i+1} - x_0)L} - 1\right) \quad \text{for large } i.    (5)

If f_y + \frac{h}{2} K_y < 0 we define L \equiv -\max_{[x_0, x_f]}\left|f_y + \frac{h}{2} K_y\right|. We can then choose h so that \widetilde{\alpha} = 1 + hL > 0, and we then find

\left|\Delta_{i+1}\right| \lesssim \left|\frac{\widetilde{C} h}{L}\left(e^{(x_{i+1} - x_0)L} - 1\right)\right| \approx \left|\frac{\widetilde{C}}{L}\right| h \quad \text{if } f_y + \frac{h}{2} K_y \ll 0.    (6)

If f_y + \frac{h}{2} K_y = 0 we have \widetilde{\alpha} = 1 and so

\left|\Delta_{i+1}\right| \leqslant \widetilde{\varepsilon}_{\max} \underbrace{\left(1 + 1 + 1 + \ldots + 1\right)}_{i+1 \text{ times}} = \widetilde{C} h (i+1) h = \widetilde{C}\left(x_{i+1} - x_0\right) h.

We see that all of these bounds exhibit a first-order, O(h), character.
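The geometric accumulation behind these bounds can be verified directly. The sketch below (our own construction, with artificial constants C and L standing in for the problem-dependent quantities) iterates the recursion \Delta_{i+1} = \varepsilon + \widetilde{\alpha}\Delta_i with \varepsilon = C h^2, \widetilde{\alpha} = 1 + hL and \Delta_0 = 0, and compares the result with the asymptotic form (5):

```python
import math

C, L = 1.0, 2.0                 # artificial values for illustration (f_y + (h/2)K_y > 0)
x0, xf, h = 0.0, 1.0, 1e-4
n = int((xf - x0) / h)
eps, alpha = C * h * h, 1.0 + h * L

delta = 0.0
for _ in range(n):              # Delta_{i+1} = eps + alpha * Delta_i, Delta_0 = 0
    delta = eps + alpha * delta

geometric = eps * (alpha**n - 1.0) / (alpha - 1.0)      # partial geometric sum
bound = (C * h / L) * (math.exp((xf - x0) * L) - 1.0)   # asymptotic bound (5)
```

For small h the iterated value, the geometric-series formula, and the exponential bound agree closely, since (1 + hL)^n \to e^{L(x_f - x_0)}.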

3.3 Order

Assume h is sufficiently small so that \widetilde{\alpha}_i \approx 1. Then

\Delta_1 = \widetilde{\varepsilon}_1 + \widetilde{\alpha}_0 \Delta_0 = \widetilde{\varepsilon}_1 = \left(C_1^1 + \Delta_0 K_y(x_0, \eta_0, x_0)\right) h^2 = C_1^1 h^2
\Delta_2 = \widetilde{\varepsilon}_2 + \widetilde{\alpha}_1 \Delta_1 \approx \left(C_2^1 + \Delta_0 K_y(x_0, \eta_0, x_0) + \Delta_1 K_y(x_1, \eta_1, x_1)\right) h^2 + \Delta_1
 = \left(C_2^1 + C_1^1 h^2 K_y(x_1, \eta_1, x_1)\right) h^2 + C_1^1 h^2 \approx \left(C_2^1 + C_1^1\right) h^2
 = \left(\frac{C_2^1 + C_1^1}{2}\right) 2h^2 = \left(\frac{C_2^1 + C_1^1}{2}\right)(2h) h
\Delta_3 = \left(\frac{C_3^1 + C_2^1 + C_1^1}{3}\right) 3h^2 = \left(\frac{C_3^1 + C_2^1 + C_1^1}{3}\right)(3h) h
\vdots
\Delta_{i+1} = \left(\frac{C_{i+1}^1 + \ldots + C_2^1 + C_1^1}{i+1}\right)(i+1) h^2 = \left(\frac{C_{i+1}^1 + \ldots + C_2^1 + C_1^1}{i+1}\right)\left((i+1)h\right) h

Now, let x_d > x_0 and choose h_1 so that

x_d = x_0 + m_1 h_1.

Hence,

\Delta_{m_1} = \left(\frac{C_{m_1}^1 + \ldots + C_2^1 + C_1^1}{m_1}\right)(m_1 h_1) h_1 = \left(\frac{C_{m_1}^1 + \ldots + C_2^1 + C_1^1}{m_1}\right)(x_d - x_0) h_1.

Now, choose m_2 \neq m_1 and h_2 such that

x_d = x_0 + m_2 h_2.

Note that

m_1 h_1 = m_2 h_2 \Rightarrow h_2 = \left(\frac{m_1}{m_2}\right) h_1.

Hence,

\Delta_{m_2} = \left(\frac{C_{m_2}^1 + \ldots + C_2^1 + C_1^1}{m_2}\right)(m_2 h_2) h_2 = \left(\frac{C_{m_2}^1 + \ldots + C_2^1 + C_1^1}{m_2}\right)(x_d - x_0) h_2
 = \left(\frac{C_{m_2}^1 + \ldots + C_2^1 + C_1^1}{m_2}\right)(x_d - x_0)\left(\frac{m_1}{m_2}\right) h_1 \approx \Delta_{m_1}\left(\frac{m_1}{m_2}\right)

so that the global error at x_d scales in the same way as the stepsize, i.e. the global error is first-order. This aligns with the O(h) nature of the upper bounds considered earlier. Note that this implies that the explicit method is convergent (\Delta \rightarrow 0 as h \rightarrow 0).
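This first-order behaviour is straightforward to confirm numerically. The following sketch (our own construction) applies the explicit method (2) to the test equation (8) of Section 4.2 with lambda = -1, gamma = -2, and checks that halving the stepsize roughly halves the maximum global error:

```python
import math

lam, gam = -1.0, -2.0
omega = math.sqrt(abs(lam * lam + 4.0 * gam)) / 2.0
exact = lambda x: 2.0 * math.exp(lam * x / 2.0) * math.cos(omega * x)

def max_error(h, xf=1.0):
    # explicit method (2) applied to the test equation (8), y(0) = 2
    N = int(round(xf / h))
    x = [i * h for i in range(N + 1)]
    w = [2.0] + [0.0] * N
    for i in range(N):
        quad = sum(2.0 * gam * w[j] for j in range(i + 1)) - gam * w[0] - gam * w[i]
        w[i + 1] = w[i] + h * lam * (w[i] - 1.0) + 0.5 * h * h * quad
    return max(abs(w[i] - exact(x[i])) for i in range(N + 1))

ratio = max_error(0.02) / max_error(0.01)   # expect roughly 2 for a first-order method
```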

4 Analysis - implicit case

4.1 Error propagation

For the implicit method we have

w_1 = w_0 + h f(x_1, w_1) + \frac{h^2}{2}\left(K(x_1, w_0, x_0) + K(x_1, w_1, x_1)\right)

which gives (with \Delta_0 = 0)

\Delta_1 + y_1 = y_0 + h f(x_1, \Delta_1 + y_1) + \frac{h^2}{2}\left(K(x_1, y_0, x_0) + K(x_1, \Delta_1 + y_1, x_1)\right)
\Rightarrow \Delta_1 = \frac{y_0 + h f(x_1, y_1) + \frac{h^2}{2}\left(K(x_1, y_0, x_0) + K(x_1, y_1, x_1)\right) - y_1}{1 - h f_y(x_1, \xi_1) - \frac{h^2}{2} K_y(x_1, \eta_1, x_1)}
\Rightarrow \Delta_1 = \widetilde{\widetilde{\alpha}}_1\left(M_I(y_0) - y_1\right) = \widetilde{\widetilde{\alpha}}_1 \varepsilon_1,

wherein we have implicitly defined \widetilde{\widetilde{\alpha}}_1.

At x_2 we find

\Delta_2 = \frac{\varepsilon_2 + \Delta_1\left(1 + h^2 K_y(x_1, \eta_1, x_1)\right)}{1 - h f_y(x_2, \xi_2) - \frac{h^2}{2} K_y(x_2, \eta_2, x_2)}

and at x_3 we find

\Delta_3 = \frac{\varepsilon_3 + \Delta_2\left(1 + h^2 K_y(x_2, \eta_2, x_2)\right) + \Delta_1 h^2 K_y(x_1, \eta_1, x_1)}{1 - h f_y(x_3, \xi_3) - \frac{h^2}{2} K_y(x_3, \eta_3, x_3)}.

In general (for i > 1), we have

\Delta_{i+1} = \frac{\varepsilon_{i+1} + \sum_{j=1}^{i-1} \Delta_j h^2 K_y(x_j, \eta_j, x_j) + \Delta_i\left(1 + h^2 K_y(x_i, \eta_i, x_i)\right)}{1 - h f_y(x_{i+1}, \xi_{i+1}) - \frac{h^2}{2} K_y(x_{i+1}, \eta_{i+1}, x_{i+1})}
\Rightarrow \Delta_{i+1} = \widetilde{\widetilde{\varepsilon}}_{i+1} + \widetilde{\widetilde{\alpha}}_i \Delta_i,    (7)

where

\widetilde{\widetilde{\varepsilon}}_{i+1} \equiv \frac{\varepsilon_{i+1} + \sum_{j=1}^{i-1} \Delta_j h^2 K_y(x_j, \eta_j, x_j)}{1 - h f_y(x_{i+1}, \xi_{i+1}) - \frac{h^2}{2} K_y(x_{i+1}, \eta_{i+1}, x_{i+1})}
\widetilde{\widetilde{\alpha}}_i \equiv \frac{1 + h^2 K_y(x_i, \eta_i, x_i)}{1 - h f_y(x_{i+1}, \xi_{i+1}) - \frac{h^2}{2} K_y(x_{i+1}, \eta_{i+1}, x_{i+1})}.

4.2 Upper bound

We are most likely to use the implicit method when the IDE is stiff (both f_y < 0 and K_y < 0). Hence, it is instructive to apply (7) to the test equation [2]

y'(x) = \lambda\left(y(x) - 1\right) + \gamma \int_0^x y(t)\, dt    (8)
y(0) = 2, \quad \lambda < 0, \quad \gamma < 0

(where f = \lambda(y(x) - 1) and K = \gamma y(t)), with solution

y(x) = e^{m_1 x} + e^{m_2 x}
m_1 = \frac{\lambda - \sqrt{\lambda^2 + 4\gamma}}{2}, \quad m_2 = \frac{\lambda + \sqrt{\lambda^2 + 4\gamma}}{2}

when m_1 and m_2 are real (\lambda^2 + 4\gamma \geqslant 0), and

y(x) = 2 e^{\lambda x / 2} \cos\left(\frac{\sqrt{\left|\lambda^2 + 4\gamma\right|}}{2} x\right)

when m_1 and m_2 are complex (\lambda^2 + 4\gamma < 0).
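The closed-form solution is easily verified. The sketch below (our own check) takes \lambda = -3, \gamma = -2, for which \lambda^2 + 4\gamma = 1 \geqslant 0 gives the real roots m_1 = -2, m_2 = -1, and confirms that y(x) = e^{m_1 x} + e^{m_2 x} satisfies (8), using the closed-form antiderivative of y for the integral term:

```python
import math

lam, gam = -3.0, -2.0
disc = math.sqrt(lam * lam + 4.0 * gam)          # = 1, the real-root case
m1, m2 = (lam - disc) / 2.0, (lam + disc) / 2.0  # m1 = -2, m2 = -1

y = lambda x: math.exp(m1 * x) + math.exp(m2 * x)
dy = lambda x: m1 * math.exp(m1 * x) + m2 * math.exp(m2 * x)
# integral of y from 0 to x, in closed form
iy = lambda x: (math.exp(m1 * x) - 1.0) / m1 + (math.exp(m2 * x) - 1.0) / m2

# residual of (8): y'(x) - lambda*(y(x) - 1) - gamma * int_0^x y(t) dt
residual = max(abs(dy(x) - lam * (y(x) - 1.0) - gam * iy(x))
               for x in [0.1 * k for k in range(11)])
```

Note that y(0) = 2 and y'(0) = m_1 + m_2 = \lambda, consistent with the initial condition in (8).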

With Z \equiv h\lambda and W \equiv h^2\gamma, we define L by

1 + hL \equiv \frac{1 + h^2 K_y}{1 - h f_y - \frac{h^2}{2} K_y} = \frac{1 + W}{1 - Z - \frac{W}{2}}
\Rightarrow L = \frac{Z + \frac{3W}{2}}{h\left(1 - Z - \frac{W}{2}\right)}.

Since ZZ and WW are both negative, LL is negative, too. Also

L = \frac{Z + \frac{3W}{2}}{h\left(1 - Z - \frac{W}{2}\right)} = \frac{h\lambda + \frac{3h^2\gamma}{2}}{h\left(1 - h\lambda - \frac{h^2\gamma}{2}\right)} = \frac{\lambda + \frac{3h\gamma}{2}}{1 - h\lambda - \frac{h^2\gamma}{2}}.

With

\widetilde{\widetilde{\varepsilon}}_{\max} \equiv \max_{[x_0, x_f]}\left|\widetilde{\widetilde{\varepsilon}}_i\right| \equiv \max_{[x_0, x_f]}\left|\widetilde{\widetilde{C}}_i\right| h^2 \equiv \widetilde{\widetilde{C}} h^2
\widetilde{\widetilde{\alpha}} \equiv 1 + hL

we find

\left|\Delta_{i+1}\right| \leqslant \widetilde{\widetilde{\varepsilon}}_{\max}\left|1 + \widetilde{\widetilde{\alpha}} + \widetilde{\widetilde{\alpha}}^2 + \ldots + \widetilde{\widetilde{\alpha}}^i\right|
 = \widetilde{\widetilde{\varepsilon}}_{\max}\left|\frac{\widetilde{\widetilde{\alpha}}^{i+1} - 1}{\widetilde{\widetilde{\alpha}} - 1}\right|
 = \frac{\widetilde{\widetilde{\varepsilon}}_{\max}}{h|L|}\left|\left(1 + hL\right)^{i+1} - 1\right|
 = \frac{\widetilde{\widetilde{\varepsilon}}_{\max}}{h|L|}\left|\left(1 + \frac{(i+1)hL}{i+1}\right)^{i+1} - 1\right|
 = \frac{\widetilde{\widetilde{C}} h^2}{h|L|}\left|\left(1 + \frac{(x_{i+1} - x_0)L}{i+1}\right)^{i+1} - 1\right|
 \approx \left|\frac{\widetilde{\widetilde{C}} h}{L}\left(e^{(x_{i+1} - x_0)L} - 1\right)\right| \quad \text{for large } i.

Since L < 0, we note that

\left|\frac{\widetilde{\widetilde{C}} h}{L}\left(e^{(x_{i+1} - x_0)L} - 1\right)\right| \rightarrow \left|\frac{\widetilde{\widetilde{C}} h}{L}\right|

if L \ll 0, and/or if x_{i+1} - x_0 becomes large (similar to the case considered in (6)).

4.3 Order

To analyze the order of the implicit method, we assume that h is small enough so that

\widetilde{\widetilde{\varepsilon}}_{i+1} \approx \varepsilon_{i+1} + \sum_{j=1}^{i-1} \Delta_j h^2 K_y(x_j, \eta_j, x_j)
\widetilde{\widetilde{\alpha}}_i \approx 1.

Similar reasoning to the explicit case can now be used to find that the implicit method is expected to be first-order. Furthermore, this implies that the implicit method is convergent.
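The first-order behaviour of the implicit method can also be confirmed numerically. The sketch below (our own construction) applies method (3) to the test equation (8) with lambda = -1, gamma = -2, solving each step by fixed-point iteration (adequate here, since h|lambda| is small), and checks that halving h roughly halves the maximum global error:

```python
import math

lam, gam = -1.0, -2.0
omega = math.sqrt(abs(lam * lam + 4.0 * gam)) / 2.0
exact = lambda x: 2.0 * math.exp(lam * x / 2.0) * math.cos(omega * x)

def max_error(h, xf=1.0):
    # implicit method (3) applied to the test equation (8), y(0) = 2
    N = int(round(xf / h))
    x = [i * h for i in range(N + 1)]
    w = [2.0] + [0.0] * N
    for i in range(N):
        known = sum(2.0 * gam * w[j] for j in range(i + 1)) - gam * w[0]
        z = w[i]
        for _ in range(200):  # fixed-point solve of the implicit step
            z_new = w[i] + h * lam * (z - 1.0) + 0.5 * h * h * (known + gam * z)
            if abs(z_new - z) < 1e-13:
                z = z_new
                break
            z = z_new
        w[i + 1] = z
    return max(abs(w[i] - exact(x[i])) for i in range(N + 1))

ratio = max_error(0.02) / max_error(0.01)   # expect roughly 2
```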

5 Comments

For the explicit method, given that \Delta_1 = \varepsilon_1 = \varepsilon_1^D + \varepsilon_1^Q, and given that all subsequent global errors are written in terms of local errors and prior global errors, we have that \Delta_i is a function of the local errors \varepsilon^D and \varepsilon^Q, the Jacobians f_y and K_y, and the stepsize h. For example, we find

\Delta_4 = \varepsilon_4 + \widetilde{\alpha}_3 \varepsilon_3 + \widetilde{\alpha}_3 \widetilde{\alpha}_2 \varepsilon_2 + \widetilde{\alpha}_3 \widetilde{\alpha}_2 \widetilde{\alpha}_1 \varepsilon_1 + h^2\left(\left(1 + \widetilde{\alpha}_3\right)\varepsilon_1 K_y^1 + \left(\varepsilon_2 + \widetilde{\alpha}_1 \varepsilon_1\right) K_y^2\right)

where

\widetilde{\alpha}_k = 1 + h f_y(x_k, \xi_k) + \frac{h^2}{2} K_y(x_k, \eta_k, x_k)
K_y^k \equiv K_y(x_k, \eta_k, x_k).

Similar expressions obtain for \Delta_5, \Delta_6 and so on, and also for the implicit method. It is interesting to note that, if the global error \Delta_i is known and the Jacobians can be reliably estimated (such as for the test equation), then the local errors \varepsilon_i (for the explicit method) can be estimated via the sequence

\varepsilon_1 = \Delta_1,
\varepsilon_2 = \Delta_2 - \widetilde{\alpha}_1 \Delta_1,    (9)
\varepsilon_3 = \Delta_3 - \widetilde{\alpha}_2 \Delta_2 - h^2 \Delta_1 K_y^1,
\varepsilon_4 = \Delta_4 - \widetilde{\alpha}_3 \Delta_3 - h^2 \Delta_1 K_y^1 - h^2 \Delta_2 K_y^2

and so on. A similar sequence can be found for the implicit method.
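For the linear test equation the Jacobians are exact (f_y = \lambda, K_y = \gamma), so sequence (9) recovers the local errors exactly. The sketch below (our own construction) runs the explicit method, forms \Delta_i against the exact solution, applies (9), and compares with the directly computed local errors \varepsilon_{i+1} = M_E(y_i) - y_{i+1}:

```python
import math

lam, gam, h, N = -1.0, -2.0, 0.05, 20
omega = math.sqrt(abs(lam * lam + 4.0 * gam)) / 2.0
exact = lambda x: 2.0 * math.exp(lam * x / 2.0) * math.cos(omega * x)

x = [i * h for i in range(N + 1)]
y = [exact(xi) for xi in x]

def step(v, i):
    # one explicit step (2) from values v_0 .. v_i (K = gam * y for the test equation)
    quad = sum(2.0 * gam * v[j] for j in range(i + 1)) - gam * v[0] - gam * v[i]
    return v[i] + h * lam * (v[i] - 1.0) + 0.5 * h * h * quad

w = [2.0] + [0.0] * N
for i in range(N):
    w[i + 1] = step(w, i)
delta = [w[i] - y[i] for i in range(N + 1)]

# direct local errors: eps_{i+1} = M_E(y_i) - y_{i+1}
eps_direct = [0.0] + [step(y, i) - y[i + 1] for i in range(N)]

# recovered local errors via (9); alpha_k and K_y^k are exact for a linear problem
alpha = 1.0 + h * lam + 0.5 * h * h * gam
eps_rec = [0.0] * (N + 1)
for i in range(1, N + 1):
    tail = h * h * gam * sum(delta[j] for j in range(1, i - 1))
    eps_rec[i] = delta[i] - alpha * delta[i - 1] - tail

mismatch = max(abs(eps_rec[i] - eps_direct[i]) for i in range(1, N + 1))
```

Since every Taylor remainder is exact in the linear case, the two sets of local errors agree to machine precision.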

6 Numerical examples

A few simple examples, using the test equation, will serve to illustrate some of the aspects of our analysis.

  1. Figure 1. Here, we solve (8) with \lambda = -100, \gamma = -200 and h = 5\times 10^{-3} using the explicit method. The stepsize is small enough to ensure a stable solution. We show |\Delta_i| (the solid red line, labelled E), and the quantity \left|\widetilde{C}_i h / L\right| determined using (5), i.e.

\left|\frac{\widetilde{C}_i h}{L}\right| = \frac{\left|\Delta_i\right|}{\left|e^{(x_{i+1} - x_0)L} - 1\right|}.

We indicate \left|\widetilde{C}_i h / L\right| with the blue dots (labelled C), which appear to be superimposed on the curve for |\Delta_i|. This is because L = f_y + \frac{h}{2} K_y = \lambda + \frac{h}{2}\gamma \ll 0, so that \left|e^{(x_{i+1} - x_0)L} - 1\right| \approx 1. From this curve we estimate \max\left|\widetilde{C}_i h / L\right| = 0.0041, and we plot 0.0041 \left|e^{(x_{i+1} - x_0)L} - 1\right| as the upper bound (labelled U) on |\Delta_i|.

  2. Figure 2. We solve (8) with \lambda = -100, \gamma = -200 and h = 5\times 10^{-2} using the explicit method. The stepsize is not small enough to ensure a stable solution. The labelling follows that of Figure 1. We estimate \max\left|\widetilde{C}_i h / L\right| = 1.9\times 10^{61}. As before, L \ll 0 \Rightarrow \left|e^{(x_{i+1} - x_0)L} - 1\right| \approx 1, so that the curve for \left|\widetilde{C}_i h / L\right| is superimposed on the curve for |\Delta_i|.

  3. Figure 3. We use the explicit method with \lambda = 1, \gamma = 2 and h = 5\times 10^{-3}. We do not have \left|e^{(x_{i+1} - x_0)L} - 1\right| \approx 1, and so curve C differs from curve E. We estimate \max\left|\widetilde{C}_i h / L\right| = 2.5\times 10^{-4}, yielding the upper bound U.

  4. Figure 4. Here, we solve the test equation with \lambda = -1, \gamma = -2 and h = 5\times 10^{-3} using the implicit method. We show the signed global error \Delta_i, and \widetilde{C}_i h / L. We see that \Delta_i = 0 when \widetilde{C}_i h / L = 0, as we would expect. We estimate \max\left|\widetilde{C}_i h / L\right| = 1.14\times 10^{-8}, yielding the upper and lower bounds (U and -U). The oscillatory character of the error is due to the oscillatory nature of the solution.

  5. Figure 5. We solve the test equation with \lambda = -1, \gamma = -2 and h = 5\times 10^{-3} using the explicit method. The upper plot shows the global error \Delta_i, and the lower plot shows the local error \varepsilon_i, determined using (9).

7 Conclusion

We have investigated error propagation in an explicit and an implicit method for solving integro-differential equations of the Volterra type. We have derived upper bounds for the global error, and shown that both methods are expected to be first-order. With respect to (1), we have considered the case n = 1. For n > 1, we would need to solve a system of IDEs, and future work will center on error propagation in such systems, and in systems of IDEs in general.

References

  • [1] J.S.C. Prentice, An Euler-type method for Volterra integro-differential equations, arXiv.org (Cornell University Library), 2023, 11p, [arXiv:2306.02547]
  • [2] J.S.C. Prentice, Stability analysis of an implicit and explicit numerical method for Volterra integro-differential equations with kernel K(x,y(t),t),K(x,y(t),t), arXiv.org (Cornell University Library), 2023, 10p, [arXiv:2306.12600]

8 Appendix

8.1 Local order

The implicit method

yi+1=yi+hf(xi+1,yi+1)+h22(j=0i+12K(xi+1,yj,xj)K(xi+1,y0,x0)K(xi+1,yi+1,xi+1))y_{i+1}=y_{i}+hf\left(x_{i+1},y_{i+1}\right)+\frac{h^{2}}{2}\left(\sum\limits_{j=0}^{i+1}2K\left(x_{i+1},y_{j},x_{j}\right)-K\left(x_{i+1},y_{0},x_{0}\right)-K\left(x_{i+1},y_{i+1},x_{i+1}\right)\right)

is derived from

yi+1yih=f(xi+1,yi+1)+h2(j=0i+12K(xi+1,yj,xj)K(xi+1,y0,x0)K(xi+1,yi+1,xi+1))\frac{y_{i+1}-y_{i}}{h}=f\left(x_{i+1},y_{i+1}\right)+\frac{h}{2}\left(\sum\limits_{j=0}^{i+1}2K\left(x_{i+1},y_{j},x_{j}\right)-K\left(x_{i+1},y_{0},x_{0}\right)-K\left(x_{i+1},y_{i+1},x_{i+1}\right)\right)

where the LHS is the Euler approximation to the first derivative, and the second term on the RHS is the composite Trapezium Rule, which approximates the integral in (1). As is well known, the error in the Euler approximation is O(h)O\left(h\right) and the error in the composite Trapezium Rule is O(h2).O\left(h^{2}\right). In other words, we have

yi+1yih+O(h)=f(xi+1,yi+1)+h2(j=0i+12K(xi+1,yj,xj)K(xi+1,y0,x0)K(xi+1,yi+1,xi+1))+O(h2)\frac{y_{i+1}-y_{i}}{h}+O\left(h\right)=f\left(x_{i+1},y_{i+1}\right)+\frac{h}{2}\left(\sum\limits_{j=0}^{i+1}2K\left(x_{i+1},y_{j},x_{j}\right)-K\left(x_{i+1},y_{0},x_{0}\right)-K\left(x_{i+1},y_{i+1},x_{i+1}\right)\right)+O\left(h^{2}\right)

which gives

yi+1yi+O(h2)=hf(xi+1,yi+1)+h22(j=0i+12K(xi+1,yj,xj)K(xi+1,y0,x0)K(xi+1,yi+1,xi+1))+O(h3).y_{i+1}-y_{i}+O\left(h^{2}\right)=hf\left(x_{i+1},y_{i+1}\right)+\frac{h^{2}}{2}\left(\sum\limits_{j=0}^{i+1}2K\left(x_{i+1},y_{j},x_{j}\right)-K\left(x_{i+1},y_{0},x_{0}\right)-K\left(x_{i+1},y_{i+1},x_{i+1}\right)\right)+O\left(h^{3}\right).

The sum of the O(h2)O\left(h^{2}\right) term and the O(h3)O\left(h^{3}\right) term is the local error εi+1,\varepsilon_{i+1}, so that

εi+1=O(h2)εi+1D+O(h3)εi+1Q=O(h2)\varepsilon_{i+1}=\underset{\varepsilon_{i+1}^{D}}{\underbrace{O\left(h^{2}\right)}}+\underset{\varepsilon_{i+1}^{Q}}{\underbrace{O\left(h^{3}\right)}}=O\left(h^{2}\right)

for the implicit method.
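To make the implicit scheme concrete, here is a minimal Python sketch for the n = 1 case, with the true values y_j replaced by the computed values w_j. The scheme is implicit in w_{i+1}, which appears in both f and K; we resolve this by simple fixed-point iteration, a choice made here purely for illustration rather than one prescribed by the method. The test problem y'(x) = 1 + ∫₀ˣ y(t) dt, y(0) = 0, with exact solution sinh(x), is our own and does not appear in the paper.

```python
import math

def implicit_vide(f, K, x0, y0, h, N, tol=1e-12, maxit=100):
    """Implicit method for y'(x) = f(x,y) + int_{x0}^{x} K(x,y(t),t) dt, n = 1.

    Each step solves the implicit relation for w_{i+1} by fixed-point iteration.
    """
    xs = [x0 + i * h for i in range(N + 1)]
    ws = [y0]
    for i in range(N):
        xi1 = xs[i + 1]
        # Part of the trapezium sum not involving the unknown w_{i+1}.
        # The j = i+1 term contributes 2K - K = K(x_{i+1}, w_{i+1}, x_{i+1}).
        S_known = sum(2 * K(xi1, ws[j], xs[j]) for j in range(i + 1)) - K(xi1, ws[0], xs[0])
        w = ws[i]  # initial guess for the fixed-point iteration
        for _ in range(maxit):
            w_new = ws[i] + h * f(xi1, w) + (h * h / 2) * (S_known + K(xi1, w, xi1))
            if abs(w_new - w) < tol:
                break
            w = w_new
        ws.append(w_new)
    return xs, ws

# Test problem: y'(x) = 1 + int_0^x y(t) dt, y(0) = 0; exact solution y = sinh(x).
xs, ws = implicit_vide(lambda x, y: 1.0, lambda x, y, t: y, 0.0, 0.0, 0.01, 100)
err = abs(ws[-1] - math.sinh(1.0))
```

The fixed-point map has a contraction factor of order h for smooth f and K, so a few iterations per step suffice once h is small.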

For the explicit method

yi+1=yi+hf(xi,yi)+h22(j=0i2K(xi,yj,xj)K(xi,y0,x0)K(xi,yi,xi))y_{i+1}=y_{i}+hf\left(x_{i},y_{i}\right)+\frac{h^{2}}{2}\left(\sum\limits_{j=0}^{i}2K\left(x_{i},y_{j},x_{j}\right)-K\left(x_{i},y_{0},x_{0}\right)-K\left(x_{i},y_{i},x_{i}\right)\right)

the composite Trapezium Rule is written

h2(j=0i+12K(xi+1,yj,xj)K(xi+1,y0,x0)K(xi+1,yi+1,xi+1))+O(h2)\displaystyle\frac{h}{2}\left(\sum\limits_{j=0}^{i+1}2K\left(x_{i+1},y_{j},x_{j}\right)-K\left(x_{i+1},y_{0},x_{0}\right)-K\left(x_{i+1},y_{i+1},x_{i+1}\right)\right)+O\left(h^{2}\right)
=h2(j=0i2K(xi,yj,xj)K(xi,y0,x0)K(xi,yi,xi))+h2(K(xi,yi,xi)K(xi+1,yi+1,xi+1))+O(h2)\displaystyle=\frac{h}{2}\left(\sum\limits_{j=0}^{i}2K\left(x_{i},y_{j},x_{j}\right)-K\left(x_{i},y_{0},x_{0}\right)-K\left(x_{i},y_{i},x_{i}\right)\right)+\frac{h}{2}\left(K\left(x_{i},y_{i},x_{i}\right)-K\left(x_{i+1},y_{i+1},x_{i+1}\right)\right)+O\left(h^{2}\right)
=h2(j=0i2K(xi,yj,xj)K(xi,y0,x0)K(xi,yi,xi))+O(h)+O(h2)O(h).\displaystyle=\frac{h}{2}\left(\sum\limits_{j=0}^{i}2K\left(x_{i},y_{j},x_{j}\right)-K\left(x_{i},y_{0},x_{0}\right)-K\left(x_{i},y_{i},x_{i}\right)\right)+\underset{O\left(h\right)}{\underbrace{O\left(h\right)+O\left(h^{2}\right)}}.

Consequently, the local error for the explicit method has the form

εi+1=O(h2)εi+1D+O(h2)εi+1Q=O(h2).\varepsilon_{i+1}=\underset{\varepsilon_{i+1}^{D}}{\underbrace{O\left(h^{2}\right)}}+\underset{\varepsilon_{i+1}^{Q}}{\underbrace{O\left(h^{2}\right)}}=O\left(h^{2}\right).

It is worth noting that, for both methods, εi+10\varepsilon_{i+1}\rightarrow 0 as h0,h\rightarrow 0, implying consistency.
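The first-order global behaviour can also be checked numerically. The sketch below implements the explicit method for n = 1 and applies it to a test problem of our own choosing (not one from the paper): y'(x) = 1 + ∫₀ˣ y(t) dt, y(0) = 0, whose exact solution is sinh(x). Halving h should roughly halve the endpoint error.

```python
import math

def explicit_vide(f, K, x0, y0, h, N):
    """Explicit method for y'(x) = f(x,y) + int_{x0}^{x} K(x,y(t),t) dt, n = 1."""
    xs = [x0 + i * h for i in range(N + 1)]
    ws = [y0]
    for i in range(N):
        xi = xs[i]
        # Composite Trapezium Rule over [x0, x_i] (zero when i = 0,
        # since the interval of integration is then empty).
        S = sum(2 * K(xi, ws[j], xs[j]) for j in range(i + 1))
        S -= K(xi, ws[0], xs[0]) + K(xi, ws[i], xs[i])
        ws.append(ws[i] + h * f(xi, ws[i]) + (h * h / 2) * S)
    return xs, ws

# Test problem: y'(x) = 1 + int_0^x y(t) dt, y(0) = 0; exact solution y = sinh(x).
f = lambda x, y: 1.0
K = lambda x, y, t: y
errs = []
for h in (0.1, 0.05, 0.025):
    xs, ws = explicit_vide(f, K, 0.0, 0.0, h, round(1.0 / h))
    errs.append(abs(ws[-1] - math.sinh(1.0)))
ratios = [e1 / e2 for e1, e2 in zip(errs, errs[1:])]  # expect values near 2
```

Successive error ratios near 2 under halving of h are consistent with the first-order global behaviour derived above.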

8.2 Roundoff

For completeness, we can include a roundoff term μi\mu_{i} in each local error, as in

ε~i\displaystyle\widetilde{\varepsilon}_{i} ε~i+μi\displaystyle\rightarrow\widetilde{\varepsilon}_{i}+\mu_{i}
ε~~i\displaystyle\widetilde{\widetilde{\varepsilon}}_{i} ε~~i+μi.\displaystyle\rightarrow\widetilde{\widetilde{\varepsilon}}_{i}+\mu_{i}.

This will simply lead to terms of the form

(C~hL+μihL)(e(xi+1x0)L1)\displaystyle\left(\frac{\widetilde{C}h}{L}+\frac{\mu_{i}}{hL}\right)\left(e^{\left(x_{i+1}-x_{0}\right)L}-1\right)
(C~~hL+μihL)(e(xi+1x0)L1)\displaystyle\left(\frac{\widetilde{\widetilde{C}}h}{L}+\frac{\mu_{i}}{hL}\right)\left(e^{\left(x_{i+1}-x_{0}\right)L}-1\right)

in the upper bounds derived earlier. If |hL|1,\left|hL\right|\ll 1, the roundoff component could be significant, but this is unlikely in practice, particularly since numerical precision can be controlled via the variable-precision arithmetic (VPA) capabilities of modern software. In other words, we can make μi\mu_{i} as small as we need it to be, possibly at the cost of some computational efficiency.
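The claim that μi\mu_{i} can be made as small as needed is easy to demonstrate. The sketch below uses Python's standard decimal module as a stand-in for VPA (an assumption about tooling, not software used in the paper): a long Euler-type recursion is run at low and high working precision and compared against a high-precision reference.

```python
from decimal import Decimal, getcontext

def recurse(h, n, prec):
    """n steps of w <- w + h*w (Euler for y' = y) at the given decimal precision."""
    getcontext().prec = prec  # every arithmetic operation rounds to 'prec' digits
    w, hd = Decimal(1), Decimal(h)
    for _ in range(n):
        w = w + hd * w
    return w

ref = recurse("0.001", 1000, 50)  # high-precision reference (roundoff negligible)
lo = recurse("0.001", 1000, 8)    # low precision: accumulated roundoff is visible
hi = recurse("0.001", 1000, 28)   # higher precision: roundoff shrinks accordingly
getcontext().prec = 50            # compare the runs at the reference precision
```

Raising the working precision shrinks the per-step roundoff, at the cost of slower arithmetic, which is exactly the trade-off described above.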