
1 INRIA, France, Firstname.Name@inria.fr  2 École Normale Supérieure, Firstname.Name@ens.fr

Evaluation of Chebyshev polynomials on intervals and application to root finding

Viviane Ledoux 1,2    Guillaume Moroz 1
Abstract

In approximation theory, it is standard to approximate functions by polynomials expressed in the Chebyshev basis. Evaluating a polynomial $f$ of degree $n$ given in the Chebyshev basis can be done in $O(n)$ arithmetic operations using the Clenshaw algorithm. Unfortunately, the evaluation of $f$ on an interval $I$ using the Clenshaw algorithm with interval arithmetic returns an interval of width exponential in $n$. We describe a variant of the Clenshaw algorithm based on ball arithmetic that returns an interval of width quadratic in $n$ for an interval of small enough width. As an application, our variant of the Clenshaw algorithm can be used to design an efficient root-finding algorithm.

Keywords:
Clenshaw algorithm · Chebyshev polynomials · Root finding · Ball arithmetic · Interval arithmetic.

1 Introduction

Clenshaw showed in 1955 that any polynomial given in the form

p(x) = \sum_{i=0}^{n} a_i T_i(x)    (1)

can be evaluated at a value $x$ with a single loop, using the following functions defined by recurrence:

u_k(x) = \begin{cases}
0 & \text{if } k = n+1 \\
a_n & \text{if } k = n \\
2x\,u_{k+1}(x) - u_{k+2}(x) + a_k & \text{if } 1 \leq k < n \\
x\,u_1(x) - u_2(x) + a_0 & \text{if } k = 0
\end{cases}    (2)

such that $p(x) = u_0(x)$.
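For concreteness, the recurrence (2) translates into the following short Python routine; this is our own illustrative sketch, and the function name and list-based interface are not part of the paper.

```python
def clenshaw(a, x):
    """Evaluate p(x) = sum_{k=0}^{n} a[k]*T_k(x) with the Clenshaw recurrence (2)."""
    n = len(a) - 1
    u1, u2 = 0.0, 0.0                       # u_{k+1} and u_{k+2}
    for k in range(n, 0, -1):               # k = n, n-1, ..., 1
        u1, u2 = 2 * x * u1 - u2 + a[k], u1
    return x * u1 - u2 + a[0]               # u_0(x) = p(x)
```

On point values this agrees (up to rounding) with numpy.polynomial.chebyshev.chebval(x, a).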

Unfortunately, if we use Equation (2) with interval arithmetic directly, the result can be an interval of size exponentially larger than the input, as illustrated in Example 1.

Example 1

Let $\varepsilon > 0$ be a positive real number, and let $x$ be the interval $[\frac{1}{2}-\varepsilon, \frac{1}{2}+\varepsilon]$ of width $2\varepsilon$. Assuming that $a_n = 1$, we can see that $u_{n-1}$ is an interval of width $4\varepsilon$. Then, by recurrence, we observe that $u_{n-k}$ is an interval of width at least $4\varepsilon F_k$, where $(F_n)_{n\in\mathbb{N}}$ denotes the Fibonacci sequence, even if $a_i = 0$ for all $i < n$.

Note that the base of the exponential growth is even higher when $x$ is closer to $1$. These numerical instabilities also appear with floating-point arithmetic near $1$ and $-1$, as analyzed in [4].
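The blow-up of Example 1 is easy to reproduce with a naive interval version of recurrence (2). The following self-contained Python sketch is our own illustration; it represents intervals as plain (lo, hi) pairs and ignores outward rounding.

```python
def iadd(x, y): return (x[0] + y[0], x[1] + y[1])
def isub(x, y): return (x[0] - y[1], x[1] - y[0])
def imul(x, y):
    p = (x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1])
    return (min(p), max(p))

def clenshaw_interval(a, x):
    """Recurrence (2) evaluated blindly with interval arithmetic; x = (lo, hi)."""
    n = len(a) - 1
    u1, u2 = (0.0, 0.0), (0.0, 0.0)
    for k in range(n, 0, -1):
        u1, u2 = iadd(isub(imul(imul((2.0, 2.0), x), u1), u2), (a[k], a[k])), u1
    return iadd(isub(imul(x, u1), u2), (a[0], a[0]))

eps = 1e-12
n = 40
a = [0.0] * n + [1.0]                # a_n = 1 and a_i = 0 for i < n, as in Example 1
lo, hi = clenshaw_interval(a, (0.5 - eps, 0.5 + eps))
print(hi - lo)   # many orders of magnitude larger than the input width 2e-12:
                 # the widths grow like the Fibonacci numbers, as in Example 1
```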

To work around the numerical instabilities near $1$ and $-1$, Reinsch suggested a variant of the Clenshaw algorithm [4, 7]. Let $d_n(x) = a_n$ and $u_n(x) = a_n$, and for $k$ between $0$ and $n-1$, define $d_k(x)$ and $u_k(x)$ by recurrence as follows:

\begin{cases}
d_k(x) = 2(x-1)\,u_{k+1}(x) + d_{k+1}(x) + a_k \\
u_k(x) = d_k(x) + u_{k+1}(x)
\end{cases}    (3)

Computing $p(x)$ with this recurrence is numerically more stable near $1$. However, this algorithm does not solve the problem of exponential growth illustrated in Example 1.
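For completeness, here is a minimal Python sketch of the Reinsch recurrence (3); it is our own illustration, not code from the paper. Note that (3) is the special case $a = 1$ (hence $\gamma = \overline{\gamma} = 1$) of the generalization (5) given in Section 2.1, and one can check that the $u_k$ of (3) satisfy the three-term recurrence of (2), so $p(x)$ is recovered with the final step $x\,u_1(x) - u_2(x) + a_0$.

```python
def clenshaw_reinsch(a, x):
    """Reinsch's variant (3) of the Clenshaw recurrence, more stable near x = 1."""
    n = len(a) - 1
    if n == 0:
        return a[0]
    d, u, u_prev = a[n], a[n], 0.0           # d_n, u_n, and u_{n+1}
    for k in range(n - 1, 0, -1):            # k = n-1, ..., 1
        u_prev = u                           # keep u_{k+1} (ends up holding u_2)
        d = 2 * (x - 1) * u_prev + d + a[k]  # d_k
        u = d + u_prev                       # u_k = d_k + u_{k+1}
    return x * u - u_prev + a[0]             # p(x) = x*u_1 - u_2 + a_0
```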

Our first main result is a generalization of Equation (3) to any value in the interval $[-1,1]$. This leads to Algorithm 1, which returns intervals with tighter radii, as analyzed in Lemma 2. Our second main result is the use of classical backward error analysis to derive Algorithm 2, which gives even tighter radii. Then, in Section 3, we use the new evaluation algorithm to design a root solver for Chebyshev series, detailed in Algorithm 3.

2 Evaluation of Chebyshev polynomials on intervals

2.1 Forward error analysis

In this section we assume that we want to evaluate a Chebyshev polynomial on the interval $I$. Let $a$ be the center of $I$ and $r$ be its radius. Furthermore, let $\gamma$ and $\overline{\gamma}$ be the two conjugate complex roots of the equation:

X^2 - 2aX + 1 = 0.    (4)

In particular, using Vieta's formulas that relate the coefficients of a polynomial to its roots, $\gamma$ satisfies $\gamma + \overline{\gamma} = 2a$ and $\gamma\overline{\gamma} = 1$.

Let $z_n(x) = a_n$ and $u_n(x) = a_n$, and for $k$ between $0$ and $n-1$, define $z_k(x)$ and $u_k(x)$ by recurrence as follows:

\begin{cases}
z_k(x) = 2(x-a)\,u_{k+1}(x) + \gamma\,z_{k+1}(x) + a_k \\
u_k(x) = z_k(x) + \overline{\gamma}\,u_{k+1}(x)
\end{cases}    (5)

Using Equation (4), we can check that the $u_k$ satisfy the recurrence relation $u_k(x) = 2x\,u_{k+1}(x) - u_{k+2}(x) + a_k$, such that $p(x) = x\,u_1(x) - u_2(x) + a_0$.
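In plain complex floating-point arithmetic (without the ball radii yet), the recurrence (5) can be sketched as follows in Python; the function name and the guard for $|a| \le 1$ are ours.

```python
import math

def clenshaw_generalized(coeffs, x, a):
    """Evaluate p(x) with the recurrence (5), centred at a (here |a| <= 1);
    gamma = a + i*sqrt(1 - a^2) is a root of X^2 - 2aX + 1 = 0, of modulus 1."""
    n = len(coeffs) - 1
    if n == 0:
        return coeffs[0]
    gamma = complex(a, math.sqrt(max(0.0, 1.0 - a * a)))
    z = complex(coeffs[n])                   # z_n (complex)
    u, u_prev = float(coeffs[n]), 0.0        # u_n (real) and u_{n+1}
    for k in range(n - 1, 0, -1):            # k = n-1, ..., 1
        u_prev = u
        z = 2 * (x - a) * u_prev + gamma * z + coeffs[k]     # z_k
        u = (z + gamma.conjugate() * u_prev).real            # u_k is real in exact arithmetic
    return x * u - u_prev + coeffs[0]        # p(x) = x*u_1 - u_2 + a_0
```

For $x = a$ the $u_k$ computed here coincide with the centers $u_k(a)$ used in Algorithms 1 and 2 below; only the real parts of the $u_k$ are kept, matching Lemma 1 where the $u_k$ live in real balls and the $z_k$ in complex balls.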

Let $(e_k)$ and $(f_k)$ be two sequences of positive real numbers. Let $\mathcal{B}_{\mathbb{R}}(a,r)$ and $\mathcal{B}_{\mathbb{R}}(u_k(a), e_k)$ represent the intervals $[a-r, a+r]$ and $[u_k(a)-e_k, u_k(a)+e_k]$. Let $\mathcal{B}_{\mathbb{C}}(z_k(a), f_k)$ be the complex ball of center $z_k(a)$ and radius $f_k$.

Our goal is to compute recurrence formulas on the $e_k$ and the $f_k$ such that:

\begin{cases}
z_k(\mathcal{B}_{\mathbb{R}}(a,r)) \subset \mathcal{B}_{\mathbb{C}}(z_k(a), f_k) \\
u_k(\mathcal{B}_{\mathbb{R}}(a,r)) \subset \mathcal{B}_{\mathbb{R}}(u_k(a), e_k).
\end{cases}    (6)
Lemma 1

Let $e_n = 0$ and $f_n = 0$, and for $n > k \geq 1$:

\begin{cases}
f_k = 2r\,|u_{k+1}(a)| + 2r\,e_{k+1} + f_{k+1} \\
e_k = \min\left(e_{k+1} + f_k,\ \frac{f_k}{\sqrt{1-a^2}}\right) \text{ if } |a| < 1, \text{ else } e_k = e_{k+1} + f_k
\end{cases}    (7)

Then, $(e_k)$ and $(f_k)$ satisfy Equation (6).

Proof (sketch)

For the inclusion $z_k(\mathcal{B}_{\mathbb{R}}(a,r)) \subset \mathcal{B}_{\mathbb{C}}(z_k(a), f_k)$, note that $\gamma$ has modulus $1$, so that the radius of $\gamma z_{k+1}$ is the same as the radius of $z_{k+1}$ when using ball arithmetic. The remaining terms bounding the radius of $z_k$ follow from the standard rules of interval arithmetic.

For the inclusion $u_k(\mathcal{B}_{\mathbb{R}}(a,r)) \subset \mathcal{B}_{\mathbb{R}}(u_k(a), e_k)$, note that the error segment on $u_k$ is included in the Minkowski sum of a disk of radius $f_k$ and a segment of radius $e_{k+1}$, denoted by $M$. If $\theta$ is the angle of the segment with the horizontal, we have $\cos\theta = a$. We conclude that the intersection of $M$ with a horizontal line is a segment of radius at most $\min(e_{k+1} + f_k, \frac{f_k}{\sqrt{1-a^2}})$.

Corollary 1

Let $\mathcal{B}_{\mathbb{R}}(u,e) = \texttt{BallClenshawForward}((a_0,\ldots,a_n), a, r)$ be the result of Algorithm 1. Then

p(\mathcal{B}_{\mathbb{R}}(a,r)) \subset \mathcal{B}_{\mathbb{R}}(u,e).

Moreover, the following lemma bounds the radius of the ball returned by Algorithm 1.

Lemma 2

Let $\mathcal{B}_{\mathbb{R}}(u,e) = \texttt{BallClenshawForward}((a_0,\ldots,a_n), a, r)$ be the result of Algorithm 1, and let $M$ be an upper bound on $|u_k(a)|$ for $1 \leq k \leq n$. Assume that $\varepsilon_k < Mr$ for $1 \leq k \leq n$. Then

\left\{\begin{array}{ll}
e < 2Mn^2 r & \text{if } n < \frac{1}{2\sqrt{1-a^2}} \\
e < 9Mn\,\frac{r}{\sqrt{1-a^2}} & \text{if } \frac{1}{2\sqrt{1-a^2}} \leq n < \frac{\sqrt{1-a^2}}{2r} \\
e < 2M\left[\left(1+\frac{2r}{\sqrt{1-a^2}}\right)^{n} - 1\right] & \text{if } \frac{\sqrt{1-a^2}}{2r} < n
\end{array}\right.
Proof (sketch)

We distinguish two cases. First, if $n < \frac{1}{2\sqrt{1-a^2}}$, we focus on the relation $e_k \leq e_{k+1} + f_k + Mr$, and we prove by descending recurrence that $e_k \leq 2M(n-k)^2 r$ and $f_k \leq 2Mr(2(n-k-1)+1)$.

For the case $\frac{1}{2\sqrt{1-a^2}} \leq n$, we use the relation $e_k \leq \frac{f_k}{\sqrt{1-a^2}} + Mr$, which we substitute in the recurrence relation defining $f_k$ to get $f_k \leq 2rM + \frac{2r}{\sqrt{1-a^2}} f_{k+1} + f_{k+1} + Mr\sqrt{1-a^2}$. We can check by recurrence that $f_k \leq \frac{3}{2} M \sqrt{1-a^2}\left[\left(1+\frac{2r}{\sqrt{1-a^2}}\right)^{n} - 1\right]$, which allows us to conclude for the case $\frac{\sqrt{1-a^2}}{2r} \leq n$. Finally, when $\frac{1}{2\sqrt{1-a^2}} \leq n < \frac{\sqrt{1-a^2}}{2r}$, we observe that $\left(1+\frac{2r}{\sqrt{1-a^2}}\right)^{n} - 1 \leq n\exp(1)\frac{2r}{\sqrt{1-a^2}}$, which leads to the bound for the last case.

Algorithm 1 Clenshaw evaluation algorithm, forward error
function BallClenshawForward((a_0, ..., a_n), a, r)
  ▷ Computation of the centers u_k
  u_{n+1} ← 0
  u_n ← a_n
  for k in n-1, n-2, ..., 1 do
    u_k ← 2a u_{k+1} − u_{k+2} + a_k
    ε_k ← bound on the rounding error for u_k
  u_0 ← a u_1 − u_2 + a_0
  ε_0 ← bound on the rounding error for u_0

  ▷ Computation of the radii e_k
  f_n ← 0
  e_n ← 0
  for k in n-1, n-2, ..., 1 do
    f_k ← 2r|u_{k+1}| + 2r e_{k+1} + f_{k+1}
    e_k ← min(e_{k+1} + f_k, f_k/\sqrt{1-a^2}) + ε_k
  f_0 ← r|u_1| + 2r e_1 + f_1
  e_0 ← min(e_1 + f_0, f_0/\sqrt{1-a^2}) + ε_0
  return \mathcal{B}_{\mathbb{R}}(u_0, e_0)
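A direct Python transcription of Algorithm 1 could look as follows. This is our own sketch: it ignores the rounding-error bounds $\varepsilon_k$ (i.e. it treats the floating-point computation of the centers as exact), so the returned ball is only guaranteed up to rounding.

```python
import math

def ball_clenshaw_forward(coeffs, a, r):
    """Sketch of Algorithm 1 (BallClenshawForward), eps_k bounds ignored.
    Returns (u0, e0): p([a-r, a+r]) is contained in [u0-e0, u0+e0] up to rounding."""
    n = len(coeffs) - 1
    # centers u_k: plain Clenshaw recurrence evaluated at the center a
    u = [0.0] * (n + 3)
    u[n] = coeffs[n]
    for k in range(n - 1, 0, -1):
        u[k] = 2 * a * u[k + 1] - u[k + 2] + coeffs[k]
    u[0] = a * u[1] - u[2] + coeffs[0]

    # radii e_k and f_k: recurrence of Lemma 1
    inv = 1.0 / math.sqrt(1.0 - a * a) if abs(a) < 1 else None
    e, f = 0.0, 0.0                                   # e_n = f_n = 0
    for k in range(n - 1, 0, -1):
        f = 2 * r * abs(u[k + 1]) + 2 * r * e + f     # f_k
        e = min(e + f, f * inv) if inv else e + f     # e_k
    f = r * abs(u[1]) + 2 * r * e + f                 # f_0
    e = min(e + f, f * inv) if inv else e + f         # e_0
    return u[0], e
```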

2.2 Backward error analysis

In the literature, we can find an error analysis of the Clenshaw algorithm [3]. The main idea is to add the errors appearing at each step of the Clenshaw algorithm to the input coefficients. Thus, the approximate result corresponds to the exact result for an approximate input. Finally, the error bound is obtained as the evaluation of a Chebyshev polynomial. This error analysis can be used directly to derive Algorithm 2, which evaluates a polynomial in the Chebyshev basis on an interval.

Lemma 3

Let $e_n = 0$ and for $n > k \geq 1$:

e_k = 2r\,|u_{k+1}(a)| + e_{k+1}    (8)

and $e_0 = r|u_1(a)| + e_1$. Then $(e_k)$ satisfies $u_k(\mathcal{B}_{\mathbb{R}}(a,r)) \subset \mathcal{B}_{\mathbb{R}}(u_k(a), e_k)$.

Proof (sketch)

In the case where the computations are performed without errors, D. Elliott [3, Equation (4.9)] showed that for $\gamma = \tilde{x} - x$ we have:

p(\tilde{x}) - p(x) = 2\gamma \sum_{i=0}^{n} u_i(\tilde{x})\,T_i(x) - \gamma\,u_1(\tilde{x})

In the case where $\tilde{x} = a$ and $x \in \mathcal{B}_{\mathbb{R}}(a,r)$, we have $|\gamma| \leq r$ and $|T_i(x)| \leq 1$, which implies $e_k \leq r|u_1(a)| + \sum_{i=2}^{n} 2r|u_i(a)|$.

Corollary 2

Let $\mathcal{B}_{\mathbb{R}}(u,e) = \texttt{BallClenshawBackward}((a_0,\ldots,a_n), a, r)$ be the result of Algorithm 2, and let $M$ be an upper bound on $|u_k(a)|$ for $1 \leq k \leq n$. Assume that $\varepsilon_k < Mr$ for $1 \leq k \leq n$. Then $e < 3Mnr$.

Algorithm 2 Clenshaw evaluation algorithm, backward error
function BallClenshawBackward((a_0, ..., a_n), a, r)
  ▷ Computation of the centers u_k
  u_{n+1} ← 0
  u_n ← a_n
  for k in n-1, n-2, ..., 1 do
    u_k ← 2a u_{k+1} − u_{k+2} + a_k
    ε_k ← bound on the rounding error for u_k
  u_0 ← a u_1 − u_2 + a_0
  ε_0 ← bound on the rounding error for u_0

  ▷ Computation of the radii e_k
  e_n ← 0
  for k in n-1, n-2, ..., 1 do
    e_k ← e_{k+1} + 2r|u_{k+1}| + ε_k
  e_0 ← e_1 + r|u_1| + ε_0
  return \mathcal{B}_{\mathbb{R}}(u_0, e_0)
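Algorithm 2 transcribes similarly; as before, this sketch is ours and ignores the rounding-error bounds $\varepsilon_k$.

```python
def ball_clenshaw_backward(coeffs, a, r):
    """Sketch of Algorithm 2 (BallClenshawBackward), eps_k bounds ignored."""
    n = len(coeffs) - 1
    # centers u_k: plain Clenshaw recurrence evaluated at the center a
    u = [0.0] * (n + 3)
    u[n] = coeffs[n]
    for k in range(n - 1, 0, -1):
        u[k] = 2 * a * u[k + 1] - u[k + 2] + coeffs[k]
    u[0] = a * u[1] - u[2] + coeffs[0]

    # radii e_k: backward-error recurrence of Lemma 3
    e = 0.0                               # e_n = 0
    for k in range(n - 1, 0, -1):
        e = e + 2 * r * abs(u[k + 1])     # e_k
    e = e + r * abs(u[1])                 # e_0
    return u[0], e
```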

3 Application to root finding

For classical polynomials, numerous solvers exist in the literature, such as those described in [5] for example. For polynomials in the Chebyshev basis, several approaches exist that reduce the problem to complex polynomial root finding [1] or to complex eigenvalue computations [2], among others.

In this section, we experiment with a direct subdivision algorithm based on interval evaluation, detailed in Algorithm 3. This algorithm is implemented and publicly available in the software clenshaw [6].

Algorithm 3 Subdivision algorithm for root finding
(a_0, ..., a_n) represents the Chebyshev polynomial approximating f(x)
(b_0, ..., b_n) represents the Chebyshev polynomial approximating df/dx(x)
Res is a list of isolating intervals for the roots of f in [−1, 1]
function SubdivideClenshaw((a_0, ..., a_n), (b_0, ..., b_n))
  ▷ Partition [−1, 1] into intervals where f either has constant sign or is monotone
  L ← [\mathcal{B}_{\mathbb{R}}(0, 1)]
  Partition ← []
  while L is not empty do
    \mathcal{B}_{\mathbb{R}}(a, r) ← pop the first element of L
    \mathcal{B}_{\mathbb{R}}(f, s) ← BallClenshaw((a_0, ..., a_n), a, r)
    \mathcal{B}_{\mathbb{R}}(df, t) ← BallClenshaw((b_0, ..., b_n), a, r)
    if f − s > 0 then
      append the pair (\mathcal{B}_{\mathbb{R}}(a, r), "plus") to Partition
    else if f + s < 0 then
      append the pair (\mathcal{B}_{\mathbb{R}}(a, r), "minus") to Partition
    else if df − t > 0 or df + t < 0 then
      append the pair (\mathcal{B}_{\mathbb{R}}(a, r), "monotone") to Partition
    else
      \mathcal{B}_1, \mathcal{B}_2 ← subdivide(\mathcal{B}_{\mathbb{R}}(a, r))
      append \mathcal{B}_1, \mathcal{B}_2 to L

  ▷ Compute the sign of f at the boundaries
  \mathcal{B}_{\mathbb{R}}(f, s) ← BallClenshaw((a_0, ..., a_n), −1, 0)
  append the pair (\mathcal{B}_{\mathbb{R}}(−1, 0), sign(\mathcal{B}_{\mathbb{R}}(f, s))) to Partition
  \mathcal{B}_{\mathbb{R}}(f, s) ← BallClenshaw((a_0, ..., a_n), 1, 0)
  append the pair (\mathcal{B}_{\mathbb{R}}(1, 0), sign(\mathcal{B}_{\mathbb{R}}(f, s))) to Partition

  ▷ Recover the root isolating intervals
  Partition ← sort Partition
  Res ← the "monotone" intervals of Partition whose adjacent intervals have opposite signs
  return Res
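A simplified Python transcription of Algorithm 3 could look as follows; it is our own sketch, not the code of [6]. It reuses ball_clenshaw_forward from the sketch of Algorithm 1 (the pseudocode's generic BallClenshaw), and adds a minimum-radius cut-off, not present in the pseudocode above, so the loop terminates even when an interval can be neither signed nor certified monotone.

```python
def subdivide_clenshaw(a_coeffs, b_coeffs, min_radius=1e-12):
    """Sketch of Algorithm 3: a_coeffs approximates f, b_coeffs approximates f'.
    Returns isolating intervals (lo, hi) for the roots of f in [-1, 1]."""
    work = [(0.0, 1.0)]          # balls (center, radius); start with [-1, 1]
    partition = []               # pairs ((center, radius), label)
    while work:
        c, r = work.pop()        # order of processing does not matter here
        f, s = ball_clenshaw_forward(a_coeffs, c, r)
        df, t = ball_clenshaw_forward(b_coeffs, c, r)
        if f - s > 0:
            partition.append(((c, r), "plus"))
        elif f + s < 0:
            partition.append(((c, r), "minus"))
        elif df - t > 0 or df + t < 0:
            partition.append(((c, r), "monotone"))
        elif r < min_radius:
            partition.append(((c, r), "unknown"))    # cut-off added for termination
        else:
            work.append((c - r / 2, r / 2))
            work.append((c + r / 2, r / 2))
    # sign of f at the two boundaries
    for x in (-1.0, 1.0):
        f, s = ball_clenshaw_forward(a_coeffs, x, 0.0)
        partition.append(((x, 0.0), "plus" if f - s > 0 else "minus"))
    partition.sort(key=lambda item: item[0][0])
    # a "monotone" cell isolates a root exactly when its neighbours have opposite signs
    res = []
    for i in range(1, len(partition) - 1):
        (c, r), label = partition[i]
        if label == "monotone" and \
           {partition[i - 1][1], partition[i + 1][1]} == {"plus", "minus"}:
            res.append((c - r, c + r))
    return res
```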

We applied this approach to Chebyshev polynomials whose coefficients are independently and identically distributed according to the normal distribution with mean 0 and variance 1.

As illustrated in Figure 1, our code performs significantly better than the classical companion matrix approach. In particular, we could solve polynomials of degree 90000 in the Chebyshev basis in less than 5 seconds, and polynomials of degree 5000 in 0.043 seconds, on a quad-core Intel(R) i7-8650U CPU at 1.9 GHz. For comparison, the standard numpy function chebroots took more than 65 seconds for polynomials of degree 5000. Moreover, using least-squares fitting on the last ten values, we observe that our approach has an experimental complexity closer to Θ(n^{1.67}), whereas the companion matrix approach has a complexity closer to Θ(n^{2.39}).
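For reference, this kind of experiment can be reproduced along the following lines; this is a hypothetical harness of ours, not the benchmark code of [6]. numpy's chebroots implements the companion-matrix approach used as the baseline, and chebder gives the derivative in the Chebyshev basis.

```python
import numpy as np
import numpy.polynomial.chebyshev as C

rng = np.random.default_rng(0)
n = 500                                     # the paper's experiments go up to degree 90000 with [6]
coeffs = rng.standard_normal(n + 1)         # i.i.d. N(0, 1) Chebyshev coefficients
dcoeffs = C.chebder(coeffs)                 # derivative, still in the Chebyshev basis

# subdivision approach (Python sketch of Algorithm 3 above)
intervals = subdivide_clenshaw(list(coeffs), list(dcoeffs))

# companion-matrix baseline: keep the real roots lying in [-1, 1]
roots = C.chebroots(coeffs)
real_roots = [z.real for z in roots if abs(z.imag) < 1e-10 and -1.0 <= z.real <= 1.0]
print(len(intervals), len(real_roots))
```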

Figure 1: Time for isolating the roots of a random Chebyshev polynomial, on a quad-core Intel(R) i7-8650U CPU at 1.9 GHz, with 16 GB of RAM.

References

  • [1] Boyd, J.: Computing zeros on a real interval through Chebyshev expansion and polynomial rootfinding. SIAM Journal on Numerical Analysis 40(5), 1666–1682 (2002). https://doi.org/10.1137/S0036142901398325
  • [2] Boyd, J.: Finding the zeros of a univariate equation: Proxy rootfinders, Chebyshev interpolation, and the companion matrix. SIAM Review 55(2), 375–396 (2013). https://doi.org/10.1137/110838297
  • [3] Elliott, D.: Error analysis of an algorithm for summing certain finite series. Journal of the Australian Mathematical Society 8(2), 213–221 (1968). https://doi.org/10.1017/S1446788700005267
  • [4] Gentleman, W.M.: An error analysis of Goertzel’s (Watt’s) method for computing Fourier coefficients. The Computer Journal 12(2), 160–164 (01 1969). https://doi.org/10.1093/comjnl/12.2.160
  • [5] Kobel, A., Rouillier, F., Sagraloff, M.: Computing real roots of real polynomials … and now for real! In: Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation. pp. 303–310. ISSAC ’16, ACM, New York, NY, USA (2016). https://doi.org/10.1145/2930889.2930937
  • [6] Moroz, G.: Clenshaw 0.1 (Dec 2019). https://doi.org/10.5281/zenodo.3571248, https://gitlab.inria.fr/gmoro/clenshaw
  • [7] Oliver, J.: An Error Analysis of the Modified Clenshaw Method for Evaluating Chebyshev and Fourier Series. IMA Journal of Applied Mathematics 20(3), 379–391 (11 1977). https://doi.org/10.1093/imamat/20.3.379