New error bounds for Legendre approximations of differentiable functions
Abstract
In this paper we present a new perspective on error analysis for Legendre approximations of differentiable functions. We start by introducing a sequence of Legendre-Gauss-Lobatto (LGL) polynomials and proving several of their theoretical properties, including an explicit and optimal upper bound on their maximum value. We then apply these properties to derive a new explicit bound for the Legendre coefficients of differentiable functions. Building on this, we establish an explicit and optimal error bound for Legendre approximations in the $L^2$ norm, and an explicit and optimal error bound in the $L^\infty$ norm under the condition that the maximum error is attained in the interior of the interval. Illustrative examples are provided to demonstrate the sharpness of our new results.
Keywords: Legendre approximations, differentiable functions, Legendre coefficients, Legendre-Gauss-Lobatto functions, optimal convergence rates.
AMS classifications: 41A25, 41A10
1 Introduction
Legendre approximations are among the most fundamental tools in scientific computing, underpinning Gauss-type quadrature and finite element and spectral methods for the numerical solution of differential equations (see, e.g., [3, 6, 7, 14]). One of their most remarkable advantages is that their accuracy depends solely upon the smoothness of the underlying function. From both theoretical and applied perspectives, it is therefore of particular importance to study error estimates for the various Legendre approximation methods, such as Legendre projection and Legendre interpolation.
Over the past one hundred years, there has been continuing interest in developing error estimates and error bounds for classical spectral approximations (i.e., Chebyshev, Legendre, Jacobi, Laguerre and Hermite approximations), and many results can be found in monographs on approximation theory, orthogonal polynomials and spectral methods (see, e.g., [3, 4, 8, 11, 14, 15, 16]). However, the existing results have several typical drawbacks: (i) optimal error estimates for Laguerre and Hermite approximations are far from satisfactory; (ii) error estimates and error bounds for Legendre and, more generally, Gegenbauer and Jacobi approximations of differentiable functions may be suboptimal. Indeed, for the former, little literature is available on optimal error estimates or sharp error bounds for Laguerre and Hermite interpolation. For the latter, even for the very simple function $f(x) = |x|$, the error bounds in the maximum norm for its Legendre approximation of degree $n$ given in [10, 17, 20] are suboptimal, since they all behave like $O(n^{-1/2})$ as $n \to \infty$, while the actual convergence rate is $O(n^{-1})$ (see [22, Theorem 3]). In view of these drawbacks, optimal error estimates and sharp error bounds for classical spectral approximations have received renewed interest in recent years. We refer to the monograph [16] and the literature [2, 10, 17, 19, 20, 22, 23, 24, 25, 26] for more detailed discussions.
Let $P_k(x)$ denote the Legendre polynomial of degree $k$, normalized so that $P_k(1) = 1$. It is well known that the sequence $\{P_k\}_{k \ge 0}$ forms a complete orthogonal system in $L^2([-1,1])$ and
$$\int_{-1}^{1} P_j(x) P_k(x)\,\mathrm{d}x = \frac{2}{2k+1}\,\delta_{jk}, \tag{1.1}$$
where $\delta_{jk}$ is the Kronecker delta. For any $f \in L^2([-1,1])$, its Legendre projection of degree $n$ is defined by
$$f_n(x) = \sum_{k=0}^{n} a_k P_k(x), \qquad a_k = \left(k + \frac{1}{2}\right) \int_{-1}^{1} f(x) P_k(x)\,\mathrm{d}x. \tag{1.2}$$
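For readers who wish to experiment with these quantities, the following sketch (ours, not part of the original analysis) computes the coefficients $a_k$ in (1.2) by Gauss-Legendre quadrature and evaluates the projection $f_n$; the test function and the quadrature size are arbitrary choices.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def legendre_coefficients(f, n, quad_nodes=500):
    """Approximate a_0, ..., a_n of (1.2) by Gauss-Legendre quadrature."""
    x, w = leggauss(quad_nodes)
    fx = f(x)
    a = np.empty(n + 1)
    for k in range(n + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0                               # coefficient vector selecting P_k
        a[k] = (k + 0.5) * np.sum(w * fx * legval(x, ek))
    return a

f = lambda x: np.exp(x)                           # an arbitrary smooth test function
a = legendre_coefficients(f, 10)
t = np.linspace(-1.0, 1.0, 1001)
print(np.max(np.abs(f(t) - legval(t, a))))        # error of the degree-10 projection
```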
In order to analyze the error of Legendre projections in both the $L^2$ and $L^\infty$ norms, sharp estimates of the Legendre coefficients play an important role (see, e.g., [10, 17, 19, 20, 22, 24, 25]). Indeed, these estimates are useful not only for understanding the convergence rates of Legendre projections, but also for estimating the degree of the Legendre projection needed to approximate $f$ within a prescribed accuracy. In recent years, sharp estimates of the Legendre coefficients have experienced rapid development. In the case when $f$ is analytic inside and on the Bernstein ellipse
$$E_\rho = \left\{ z \in \mathbb{C} : z = \frac{1}{2}\left(u + u^{-1}\right),\ |u| = \rho \right\}, \tag{1.3}$$
for some $\rho > 1$, an explicit and sharp bound for the Legendre coefficients was given in [22, Lemma 2]:
(1.4)
where $M(E_\rho)$ denotes the maximum of $|f(z)|$ on $E_\rho$ and $L(E_\rho)$ is the length of the circumference of $E_\rho$. As a direct consequence, it was proved in [22, Theorem 2] that, for each $n$,
(1.5)
Moreover, another direct consequence of (1.4) is the following error bound for the Legendre projection in the $L^\infty$ norm:
(1.6)
In the case when $f$ is differentiable but not analytic on the interval $[-1,1]$, upper bounds for the Legendre coefficients were extensively studied in [10, 17, 20, 25]. However, the bounds in [17, 20] depend on semi-norms of high-order derivatives of $f$, which may be overestimates, especially when the singularity of $f$ is close to the endpoints, and the bounds in [10, 25] involve ratios of gamma functions, which are less favorable since their asymptotic behavior and computation still require further treatment.
In this paper, we give a new perspective on error bounds of Legendre approximations for differentiable functions. We start by introducing the Legendre-Gauss-Lobatto (LGL) polynomials
$$\phi_n(x) := \int_{-1}^{x} P_n(t)\,\mathrm{d}t = \frac{P_{n+1}(x) - P_{n-1}(x)}{2n+1}, \qquad n \ge 1, \tag{1.9}$$
and then proving several theoretical properties of these polynomials, including a differential recurrence relation and an optimal and explicit bound for their maximum value on $[-1,1]$. Based on the LGL polynomials, we obtain a new explicit and sharp bound for the Legendre coefficients of differentiable functions, which is sharper than the result in [20] and more informative than the results in [10, 25]. Building on this new bound, we then establish an explicit and optimal error bound for Legendre projections in the $L^2$ norm, and an explicit and optimal error bound for Legendre projections in the $L^\infty$ norm whenever the maximum error of $f - f_n$ is attained in the interior of $[-1,1]$. We emphasize that, in contrast to the results in [10, 25], which involve ratios of gamma functions, our results are more explicit and informative.
This paper is organized as follows. In section 2 we prove some theoretical properties of the LGL polynomials. In section 3 we establish a new bound for the Legendre coefficients of differentiable functions, which improves the existing result in [20]; building on this bound, we establish some new error bounds for Legendre projections in both the $L^2$ and $L^\infty$ norms. In section 4 we present two extensions, namely the extension of the LGL polynomials to Gegenbauer-Gauss-Lobatto functions and optimal convergence rates of LGL interpolation and differentiation for analytic functions. Finally, we give some concluding remarks in section 5.
2 Properties of Legendre-Gauss-Lobatto polynomials
In this section, we establish some theoretical properties of the LGL polynomials $\phi_n(x)$ defined in (1.9). Our main result is stated in the following theorem.
Theorem 2.1.
Let $\phi_n(x)$ be defined as in (1.9). Then the following properties hold:

(i) $\phi_n(x)$ is an even function for odd $n$ and an odd function for even $n$, and $\phi_n(\pm 1) = 0$ for $n \ge 1$.

(ii) The following differential recurrence relation holds:

(2.1)

(iii) Let $n \ge 2$ and let $x_1 < x_2 < \cdots$ be the zeros of $P_n(x)$ on the interval $[0, 1)$. Then $|\phi_n(x)|$ attains its local maximum values at these points and

(2.2)

(iv) The maximum value of $|\phi_n(x)|$ on $[-1,1]$ satisfies

(2.3)

and the bound on the right-hand side is optimal in the sense that it cannot be improved further.
Proof.
As for (i), both assertions follow from the properties of Legendre polynomials, namely $P_n(-x) = (-1)^n P_n(x)$ and $P_n(\pm 1) = (\pm 1)^n$. As for (ii), we obtain from [15, Equation (4.7.29)] that

$$P_n(x) = \frac{P_{n+1}'(x) - P_{n-1}'(x)}{2n+1}. \tag{2.4}$$
The identity (2.1) follows immediately from (2.4). As for (iii), from (1.9) and (2.4) we know that $\phi_n'(x) = P_n(x)$, and hence $|\phi_n(x)|$ attains its local maximum values at the zeros of $P_n(x)$. Since $|\phi_n(x)|$ is an even function on $[-1,1]$, we only need to consider the maximum values of $|\phi_n(x)|$ at the nonnegative zeros of $P_n(x)$. To this end, we introduce an auxiliary function
(2.5)
A direct calculation of the derivative of the auxiliary function by using (2.4) gives
(2.6)
where we have used the first equality of [15, Equation (4.7.27)] in the last step. From (2.6) it is easy to see that the derivative is negative on $(0,1)$, and thus the auxiliary function is strictly decreasing on the interval $[0,1]$. Combining this with (2.5) gives the desired result (2.2). As for (iv), it follows from (2.2) that
(2.7)
Now, we first consider the case where $n$ is odd. In this case, we know that the maximum of $|\phi_n(x)|$ is attained at $x = 0$, and thus

A straightforward calculation shows that the resulting sequence is strictly increasing, and thus

This proves the case of odd $n$. We next consider the case where $n$ is even. Recalling that the auxiliary function is strictly decreasing on the interval $[0,1]$, we obtain that

A straightforward calculation shows that the resulting sequence is strictly increasing, and thus

This proves the case of even $n$. Since the constant in (2.3) is the limit of both sequences, the bound in (2.3) is optimal in the sense that it cannot be reduced further. This completes the proof. ∎
In Figure 1 we plot the function $\phi_n(x)$ and its local maximum points for several values of $n$. Indeed, it is easily seen that the sequence of local maximum values of $|\phi_n(x)|$ is strictly increasing, which coincides with result (iii) of Theorem 2.1. To verify the sharpness of result (iv) of Theorem 2.1, we plot in Figure 2 the scaled function for several values of $n$. Clearly, we observe that the maximum value of the scaled function approaches one as $n \to \infty$, which implies that the inequality (2.3) is quite sharp.
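The sharpness of (2.3) is also easy to probe numerically. The sketch below evaluates $\phi_n$ through the representation $\phi_n(x) = (P_{n+1}(x) - P_{n-1}(x))/(2n+1)$ in (1.9) and prints the scaled maximum $n^{3/2} \max_x |\phi_n(x)|$, which settles towards a constant in accordance with part (iv) of Theorem 2.1.

```python
import numpy as np
from scipy.special import eval_legendre

def phi(n, x):
    # LGL polynomial via the representation in (1.9)
    return (eval_legendre(n + 1, x) - eval_legendre(n - 1, x)) / (2 * n + 1)

x = np.linspace(-1.0, 1.0, 200001)
for n in [10, 40, 160, 640]:
    m = np.max(np.abs(phi(n, x)))
    print(n, m, n**1.5 * m)   # scaled maximum approaches a constant
```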
Remark 2.2.
In the proof of Theorem 2.1, we have actually proved the following sharper inequality:
(2.10)
Since the bound on the right-hand side of (2.10) involves a ratio of gamma functions, we have established the simpler upper bound (2.3) for the maximum of $|\phi_n(x)|$. On the other hand, we mention that a rough estimate for this maximum has been given in the classical monograph [15, Theorem 7.33.3]:
(2.11)
where the constant factor in the bound is independent of $n$. Comparing (2.11) with (2.3), it is clear that the latter is more precise than the former.
Remark 2.3.
The polynomials $\phi_n(x)$ can also be viewed as weighted Gegenbauer polynomials or as Sobolev orthogonal polynomials. Indeed, from (4.2) in Section 4 we know that each $\phi_n(x)$ with $n \ge 1$ can be expressed as a weighted Gegenbauer polynomial up to a constant factor. On the other hand, it is easily verified that the $\phi_n$ are orthogonal with respect to a Sobolev inner product of the form
and hence they can also be viewed as Sobolev orthogonal polynomials.
3 A new explicit bound for Legendre coefficients of differentiable functions
In this section, we establish a new explicit bound for the Legendre coefficients of differentiable functions with the help of the properties of the LGL polynomials. As will be shown later, our new result is sharper than the existing result in [20] and more informative than the result in [25].
Let the total variation of a function $g$ on $[-1,1]$ be defined by
$$V(g) := \sup_{-1 = t_0 < t_1 < \cdots < t_m = 1} \sum_{j=1}^{m} \left| g(t_j) - g(t_{j-1}) \right|, \tag{3.1}$$
where the supremum is taken over all partitions $\{t_j\}$ of the interval $[-1,1]$.
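In the experiments below, the total variation can be approximated by summing increments over a fine partition. The following helper is a simple sketch of ours, adequate for the piecewise-smooth functions considered in this paper.

```python
import numpy as np

def total_variation(g, a=-1.0, b=1.0, m=200001):
    """Approximate the total variation (3.1) of g on [a, b] over a fine partition."""
    x = np.linspace(a, b, m)
    return float(np.sum(np.abs(np.diff(g(x)))))

print(total_variation(np.abs))    # V of |x| on [-1, 1] equals 2
print(total_variation(np.sign))   # a single jump of height 2 also gives 2
```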
Theorem 3.1.
Suppose that $f, f', \ldots, f^{(m-1)}$ are absolutely continuous on $[-1,1]$ and that $f^{(m)}$ is of bounded variation on $[-1,1]$ for some nonnegative integer $m$, i.e., $V(f^{(m)}) < \infty$. Then, for each $k \ge m + 1$,

(3.2)

where the product is assumed to be one whenever it is empty.
Proof.
We prove the inequality (3.2) by induction on $m$. Invoking (2.4) and using integration by parts, we obtain

where we have used Theorem 2.1, and the last integral is a Riemann-Stieltjes integral (see, e.g., [13, Chapter 12]). Furthermore, taking advantage of the standard inequality for Riemann-Stieltjes integrals (see, e.g., [13, Theorem 12.15]) and making use of Theorem 2.1, we have

This proves the case $m = 0$. In the case $m = 1$, making use of (2.1) and employing integration by parts once again, we obtain

from which, using again the inequality for Riemann-Stieltjes integrals together with (2.3), we find that

This proves the case $m = 1$. In the case $m = 2$, we can continue the above process to obtain

from which we infer that

This proves the case $m = 2$. For $m \ge 3$, repeated application of integration by parts brings in higher derivatives of $f$ and the corresponding total variations up to $V(f^{(m)})$. Hence we obtain the desired result (3.2), and this ends the proof. ∎
Remark 3.2.
An explicit bound for the Legendre coefficients was established in [20, Theorem 2.2]:
(3.3)
where the bound is stated in terms of a weighted semi-norm of a higher-order derivative of $f$. Comparing (3.2) and (3.3), it is easily seen that the weighted semi-norm is replaced by the total variation $V(f^{(m)})$, and (3.2) is better than (3.3) because the total variation never exceeds the corresponding weighted semi-norm. Moreover, the following bound was given in [25, Corollary 1]:
(3.4)
Comparing (3.2) with (3.4), one can easily check that both results are about equally accurate in the sense that their ratio tends to one as $k \to \infty$. However, our result (3.2) is more explicit and informative.
Example 3.3.
We consider the following example
(3.5)
It is clear that this function is absolutely continuous on $[-1,1]$ and that its derivative is of bounded variation on $[-1,1]$. Moreover, a direct calculation gives the total variation of its derivative, and thus
The upper bound in (3.3) can be written as
Comparing the two bounds, it is easily verified that the former is always better than the latter for all admissible parameters; in particular, the former remains bounded while the latter blows up as the singularity approaches an endpoint. Figure 3 shows the exact Legendre coefficients and the bound for three values of the parameter. It can be seen that the bound is quite sharp whenever the singularity is not close to the endpoints, and is only slightly overestimated otherwise.
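As an illustration in the spirit of Example 3.3, the following sketch uses the hypothetical test function $f(x) = |x - \tau|$ with $\tau = 1/4$ (an assumption of ours), for which $f$ is absolutely continuous and $f'$ is of bounded variation. It checks that $k^{3/2} |a_k|$ remains bounded, which is the decay implied by Theorem 3.1 in this setting.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

xq, wq = leggauss(4000)
tau = 0.25                          # hypothetical singularity location
fx = np.abs(xq - tau)
n = 200
a = np.array([(k + 0.5) * np.sum(wq * fx * legval(xq, np.eye(n + 1)[k]))
              for k in range(n + 1)])
k = np.arange(2, n + 1)
print(np.max(k**1.5 * np.abs(a[2:])))   # stays bounded, consistent with (3.2)
```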
Example 3.4.
We consider the truncated power function
(3.8)
where the singularity parameter lies in $(-1,1)$. It is clear that $f$ and $f'$ are absolutely continuous on $[-1,1]$ and that $f''$ is of bounded variation on $[-1,1]$. Moreover, a direct calculation gives the total variation of $f''$, and thus
In Figure 4 we illustrate the exact Legendre coefficients and the above bound for three values of the parameter. Clearly, we can see that our new bound is quite sharp whenever the singularity is not close to the endpoints and is only slightly overestimated otherwise.
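In the spirit of Example 3.4, a truncated-power stand-in with one more degree of smoothness (a hypothetical choice of ours) can be checked in the same way; with $f, f'$ absolutely continuous and $f''$ of bounded variation, Theorem 3.1 now implies an extra power of $k$ in the decay.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

xq, wq = leggauss(4000)
tau = 0.25
fx = np.where(xq > tau, (xq - tau)**2, 0.0)   # stand-in truncated power, m = 2
n = 200
a = np.array([(k + 0.5) * np.sum(wq * fx * legval(xq, np.eye(n + 1)[k]))
              for k in range(n + 1)])
k = np.arange(3, n + 1)
print(np.max(k**2.5 * np.abs(a[3:])))   # stays bounded, matching (3.2) with m = 2
```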
In the following, we apply Theorem 3.1 to establish some new explicit error bounds for Legendre projections in the $L^2$ and $L^\infty$ norms.
Theorem 3.5.
Suppose that $f, f', \ldots, f^{(m-1)}$ are absolutely continuous on $[-1,1]$ and that $f^{(m)}$ is of bounded variation on $[-1,1]$ for some nonnegative integer $m$, i.e., $V(f^{(m)}) < \infty$. Then the following hold:

(i) For $n \ge m + 1$,

(3.9)

(ii) For $n \ge m + 1$,

(3.12)

and the product is assumed to be one whenever it is empty.
Proof.
In Figure 5 we show the actual error of $f - f_n$ in the $L^2$ norm and the error bound (3.9) as functions of $n$ for two test functions. Clearly, we can see that the error bound (3.9) is optimal up to a constant factor.
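An experiment in the spirit of Figure 5 can be reproduced along the following lines; the test function is again the stand-in $f(x) = |x - 1/4|$ (our assumption), for which the $L^2$ error is expected to decay like $n^{-3/2}$.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

xq, wq = leggauss(4000)
f = lambda t: np.abs(t - 0.25)      # stand-in test function (m = 1)
fx = f(xq)
for n in [16, 32, 64, 128]:
    a = np.array([(k + 0.5) * np.sum(wq * fx * legval(xq, np.eye(n + 1)[k]))
                  for k in range(n + 1)])
    err = np.sqrt(np.sum(wq * (fx - legval(xq, a))**2))   # L^2 norm of f - f_n
    print(n, err, n**1.5 * err)     # scaled error is roughly constant
```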
Remark 3.6.
Recently, the following error bound was proved in [10, Theorem 3.4]:
(3.13)
Comparing (3.9) with (3.13), it is easily verified that their ratio tends to one as $n \to \infty$, and thus both bounds are almost identical for large $n$. Whenever $n$ is small, direct calculations show that (3.13) is slightly sharper than (3.9). On the other hand, it is easily seen that our bound (3.9) is more explicit and informative.
Remark 3.7.
In the case where $f$ is piecewise analytic on $[-1,1]$ and has continuous derivatives up to order $m - 1$ for some $m \ge 1$, the current author has proved in [22, Theorem 3] that the optimal rate of convergence of $f - f_n$ in the maximum norm is $O(n^{-m})$. For such functions, however, the rate of convergence predicted by the error bound (3.12) is only $O(n^{-m+1/2})$. Hence, the error bound (3.12) is suboptimal in the sense that it overestimates the actual error by a factor of $O(\sqrt{n})$.
We further ask: how can one derive an optimal error bound for $f - f_n$ in the $L^\infty$ norm? In the following, we shall establish a weighted inequality for the error of $f - f_n$ on $[-1,1]$ and an explicit and optimal error bound for $f - f_n$ in the $L^\infty$ norm under the condition that the maximum error of $f - f_n$ is attained in the interior of $[-1,1]$.
Theorem 3.8.
Suppose that $f, f', \ldots, f^{(m-1)}$ are absolutely continuous on $[-1,1]$ and that $f^{(m)}$ is of bounded variation on $[-1,1]$ for some $m \ge 1$, i.e., $V(f^{(m)}) < \infty$. Then the following hold:

(i) For $x \in (-1,1)$, we have

(3.14)

(ii) If the maximum error of $f - f_n$ is attained at some $x^* \in (-1,1)$, then, for $n \ge m + 1$, we have

(3.15)
Proof.
We first consider the proof of (3.14). Recall the Bernstein-type inequality for Legendre polynomials (see [1] or [12, Equation (18.14.7)]):
$$(1 - x^2)^{1/4}\,|P_n(x)| \le \sqrt{\frac{2}{\pi}}\left(n + \frac{1}{2}\right)^{-1/2}, \qquad x \in [-1,1], \tag{3.16}$$
and the bound on the right-hand side is optimal in the sense that the exponent $-1/2$ cannot be improved and the constant $\sqrt{2/\pi}$ is best possible. Consequently, using the above inequality and Theorem 3.1, we deduce that
(3.17)
This proves (3.14). As for (3.15), noting that the maximum error of $f - f_n$ is attained at $x^*$, we have
Combining the last inequality with Theorem 3.1 and (3.16), and using a similar process as in (3.17), gives (3.15). This ends the proof. ∎
Remark 3.9.
For functions with interior singularities, the pointwise error analysis developed in [23, 24] shows that the maximum error of $f - f_n$ is actually determined by the errors at these interior singularities for moderate and large values of $n$. In this case, the error bound in (3.15) is optimal in the sense that it cannot be improved with respect to $n$ up to a constant factor; see Figure 6 for an illustration.
In Figure 6 we show the maximum error of $f - f_n$ and the error bound (3.15) as functions of $n$ for two test functions. For these two functions, the maximum errors of $f - f_n$ are attained at their respective interior singularities for moderate and large values of $n$, and thus the error bound (3.15) applies to both examples. We see from Figure 6 that the error bound (3.15) is indeed optimal with respect to $n$ up to a constant factor.
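The premise of (3.15), namely that the maximum error is attained in the interior, can be checked directly. The sketch below (again with a stand-in test function, our assumption) locates the maximizer of the pointwise error; for an interior singularity it sits at the singularity itself.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

xq, wq = leggauss(4000)
f = lambda t: np.abs(t - 0.25)      # interior singularity at x = 1/4
n = 64
a = np.array([(k + 0.5) * np.sum(wq * f(xq) * legval(xq, np.eye(n + 1)[k]))
              for k in range(n + 1)])
t = np.linspace(-1.0, 1.0, 100001)
err = np.abs(f(t) - legval(t, a))
print(t[np.argmax(err)], err.max())   # maximizer lands near the singularity
```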
4 Extensions
In this section we present two extensions of our results in Theorem 2.1, including Gegenbauer-Gauss-Lobatto (GGL) functions and optimal convergence rates of Legendre-Gauss-Lobatto interpolation and differentiation for analytic functions.
4.1 Gegenbauer-Gauss-Lobatto functions
We introduce the Gegenbauer-Gauss-Lobatto (GGL) functions of the form
(4.1)
where $C_n^{(\lambda)}(x)$ is the Gegenbauer polynomial of degree $n$ defined in [12, Equation (18.5.9)] and $\omega_\lambda(x) = (1 - x^2)^{\lambda - 1/2}$ is the Gegenbauer weight function. Moreover, by using [12, Equation (18.9.8)], they can also be written as
(4.2)
Notice that $\phi_n^{(\lambda)}(x)$ coincides with $\phi_n(x)$ whenever $\lambda = 1/2$, and thus the GGL functions can be viewed as a generalization of the LGL polynomials. We are now ready to prove the following theorem.
Theorem 4.1.
Let $\phi_n^{(\lambda)}$ be the function defined in (4.1) or (4.2), and let $\lambda > 0$ and $n \ge 1$.

(i) $|\phi_n^{(\lambda)}(x)|$ is an even function and $\phi_n^{(\lambda)}(\pm 1) = 0$ for all $n$.

(ii) For all $n$, the derivative of $\phi_n^{(\lambda)}$ satisfies

(4.3)

(iii) For all $n$, the following differential recurrence relation holds:

(4.4)

(iv) Let $x_1 < x_2 < \cdots$ be the zeros of the derivative of $\phi_n^{(\lambda)}$ on the interval $[0, 1)$. Then $|\phi_n^{(\lambda)}(x)|$ attains its local maximum values at these points and

(4.5)

for $0 < \lambda < 1$, and

(4.6)

for $\lambda > 1$.

(v) For $0 < \lambda < 1$, the maximum value of $|\phi_n^{(\lambda)}(x)|$ satisfies

(4.9)

For $\lambda > 1$, the maximum value of $|\phi_n^{(\lambda)}(x)|$ satisfies

(4.10)
Proof.
As for (i), the evenness of $|\phi_n^{(\lambda)}(x)|$ follows from the symmetry of the Gegenbauer polynomials (i.e., $C_n^{(\lambda)}(-x) = (-1)^n C_n^{(\lambda)}(x)$ for all $n$), and the boundary values $\phi_n^{(\lambda)}(\pm 1) = 0$ follow from (4.2). As for (ii), from [12, Equation (18.9.20)] we know that

(4.11)

The combination of (4.11) and (4.2) proves (4.3). As for (iii), it is a direct consequence of (4.1) and (4.3). Now we consider the proof of (iv). Since $|\phi_n^{(\lambda)}(x)|$ is even, we only need to consider its maximum values at the nonnegative critical points. Similar to the argument in the proof of Theorem 2.1, we introduce the following auxiliary function:

(4.12)

Combining (4.2), (4.3) and (4.11), and after some calculations, we get

(4.13)

where we have used [12, Equation (18.9.7)] in the last step. Furthermore, invoking [12, Equations (18.9.8) and (18.9.1)], and after some lengthy but elementary calculations, we obtain

(4.14)

It is easily seen that the auxiliary function is strictly decreasing on $[0,1]$ whenever $0 < \lambda < 1$ and strictly increasing on $[0,1]$ whenever $\lambda > 1$, and thus the inequalities (4.5) and (4.6) follow immediately. As for (v), we first consider the case $0 < \lambda < 1$. From (iv) we infer that

In the case when $n$ is odd, it is easily seen that the maximum is attained at $x = 0$, and thus

This proves the case of odd $n$. In the case when $n$ is even, noticing that the auxiliary function is strictly decreasing on the interval $[0,1]$, we obtain

This proves the case of even $n$. Finally, we consider the case $\lambda > 1$. On the one hand, the corresponding bound follows immediately from (4.6). On the other hand, from [15, Theorem 8.9.1] we obtain that

Combining these with (4.2) gives the desired estimate. This completes the proof. ∎
As a direct consequence of Theorem 4.1, we have the following corollary.
Corollary 4.2.
For $0 < \lambda < 1$, we have
(4.17)
Remark 4.3.
Based on a Nicholson-type formula for Gegenbauer polynomials, Durand proved the following inequality for $0 < \lambda < 1$ (see [5, Equation (19)]):
(4.18)
Comparing (4.18) with (4.17), it is easily seen that both bounds are the same whenever $n$ is even. In the case of odd $n$, however, direct calculations show that

and thus we have derived an improved bound for $|\phi_n^{(\lambda)}(x)|$ whenever $n$ is odd.
Remark 4.4.
Making use of the asymptotic expansion of the ratio of gamma functions (see, e.g., [12, Equation (5.11.13)]), it follows that
(4.19)
Clearly, the asymptotic behavior of the bound for large $n$ can be read off from (4.19). Direct calculations further show that the relevant sequences are strictly decreasing for some values of $\lambda$ and strictly increasing for others; in the remaining range of $\lambda$, there exists a positive integer beyond which the sequences are strictly increasing.
4.2 Optimal convergence rates of Legendre-Gauss-Lobatto interpolation and differentiation for analytic functions
Legendre-Gauss-Lobatto interpolation is widely used in the numerical solution of differential and integral equations (see, e.g., [3, 6, 14]). Let $p_n(f)$ be the unique polynomial of degree at most $n$ that interpolates $f$ at the zeros of $\phi_n(x)$; these zeros are precisely the LGL points, and $p_n(f)$ is the LGL interpolant of degree $n$. If $f$ is analytic inside and on the ellipse $E_\rho$ for some $\rho > 1$, convergence rates of LGL interpolation and differentiation in the maximum norm have been thoroughly studied in [26] with the help of Hermite's contour integral. In particular, the following error bounds were proved (specializing [26, Theorem 4.3] to the Legendre case):
(4.20)
and
(4.21)
where $c$ is a generic positive constant and the $x_j$ are the LGL points (i.e., the zeros of $\phi_n(x)$). The above results give the rate of convergence of $p_n(f)$ in the maximum norm and the corresponding rate for the maximum error of LGL spectral differentiation.
In the following, we shall improve the results (4.20) and (4.21) by using Theorem 2.1, and show that the superfluous factor in (4.20) can actually be removed and that the corresponding factor in (4.21) can be improved. We state our main results in the following theorem.
Theorem 4.5.
If $f$ is analytic inside and on the ellipse $E_\rho$ for some $\rho > 1$, then
(4.22)
and
(4.23)
where $c$ is a generic positive constant and the distance is measured from the interval $[-1,1]$ to the ellipse $E_\rho$.
Proof.
From the Hermite integral formula [4, Theorem 3.6.1] we know that the remainder of the LGL interpolant can be written as
(4.24)
from which we can deduce immediately that
(4.25)
where we used (2.3) in the last step. Moreover, combining (4.24) with (2.4) we obtain
(4.26)
where we have used the fact that $\phi_n'(x) = P_n(x)$ in the last step. In order to establish sharp error bounds for LGL interpolation and differentiation, it is necessary to find the minimum value of $|\phi_n(z)|$ for $z \in E_\rho$. Owing to [21, Equation (5.14)] we infer that
and, after some calculations, we obtain that
(4.27)
for $\rho > 1$. Combining this with (4.25) and (4.26) gives the desired results. This ends the proof. ∎
Remark 4.6.
In the proof of Theorem 4.5, we have used an asymptotic estimate of the minimum value of $|\phi_n(z)|$ for $z \in E_\rho$. Now we provide a more detailed observation on this issue. Parameterizing the ellipse $E_\rho$ by $z = \frac{1}{2}(\rho e^{i\theta} + \rho^{-1} e^{-i\theta})$ with $\theta \in [0, 2\pi)$, we plot $|\phi_n(z)|$ in Figure 7 for several values of $n$ and $\rho$. Clearly, we observe that the minimum value of $|\phi_n(z)|$ over $E_\rho$ is always attained at the same location on the ellipse. This observation inspires us to raise the following conjecture:
Conjecture: For all $n \ge 1$ and $\rho > 1$,
(4.28)
To provide some insight into this conjecture, after some calculations we obtain

It is easily seen that the minimum values of $|\phi_1(z)|$ and $|\phi_2(z)|$ on $E_\rho$ are always attained at the conjectured point, which confirms the above conjecture for $n = 1, 2$. For $n \ge 3$, however, $\phi_n(z)$ involves a rather lengthy expression, and it would be infeasible to find the minimum value of $|\phi_n(z)|$ from its explicit expression. We will pursue the proof of this conjecture in future work.
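The conjecture can also be probed numerically for larger $n$. The sketch below evaluates $\phi_n$ on $E_\rho$ through the three-term Legendre recurrence (which is valid for complex arguments) together with the representation in (1.9), and reports where the minimum of $|\phi_n(z)|$ is attained on the ellipse.

```python
import numpy as np

def legendre_complex(m, z):
    """Return [P_0(z), ..., P_m(z)] via the three-term recurrence (complex-safe)."""
    P = [np.ones_like(z), z]
    for k in range(1, m):
        P.append(((2 * k + 1) * z * P[k] - k * P[k - 1]) / (k + 1))
    return P

theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
for rho in [1.2, 1.5, 2.0]:
    for n in [5, 10, 20]:
        z = 0.5 * (rho * np.exp(1j * theta) + np.exp(-1j * theta) / rho)   # E_rho
        P = legendre_complex(n + 1, z)
        phi = (P[n + 1] - P[n - 1]) / (2 * n + 1)
        jmin = np.argmin(np.abs(phi))
        print(rho, n, theta[jmin])   # angle of the minimizer on E_rho
```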
Example 4.7.
We consider the following Runge function
(4.29)
It is easily verified that this function has a pair of complex-conjugate poles, and thus the rate of convergence of the LGL interpolants is $O(\rho^{-n})$, where $\rho$ is the parameter of the Bernstein ellipse passing through the poles. In Figure 8 we illustrate the maximum errors of the LGL interpolants for two values of the parameter. In our implementation, the LGL interpolants are computed by using the second barycentric formula
$$p_n(f)(x) = \frac{\displaystyle \sum_{j=0}^{n} \frac{w_j}{x - x_j}\, f(x_j)}{\displaystyle \sum_{j=0}^{n} \frac{w_j}{x - x_j}}, \tag{4.30}$$
where the $w_j$ are the barycentric weights of the LGL points; the algorithm for computing these weights is described in [18]. Clearly, we see that the numerical results are in good agreement with our theoretical analysis.
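A self-contained sketch of this procedure is given below. Note that, as an assumption of ours, the barycentric weights are computed generically from the node set in $O(n^2)$ operations rather than by the explicit formulas of [18], and the Runge-type test function is an arbitrary choice.

```python
import numpy as np
from numpy.polynomial.legendre import legder, legroots

def lgl_points(n):
    """The n + 1 LGL points: x = -1, 1 together with the zeros of P_n'."""
    e = np.zeros(n + 1); e[n] = 1.0               # Legendre coefficients of P_n
    return np.concatenate(([-1.0], legroots(legder(e)), [1.0]))

def bary_weights(x):
    """Generic O(n^2) barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    w = np.array([1.0 / np.prod(xj - np.delete(x, j)) for j, xj in enumerate(x)])
    return w / np.abs(w).max()                    # rescaling leaves (4.30) unchanged

def bary_eval(x, fx, w, t):
    """Evaluate the interpolant at the points t by the second barycentric formula."""
    d = t[:, None] - x[None, :]
    with np.errstate(divide='ignore', invalid='ignore'):
        p = (w * fx / d).sum(axis=1) / (w / d).sum(axis=1)
    hit = np.isclose(d, 0.0).any(axis=1)          # t coincides with a node
    for i in np.nonzero(hit)[0]:
        p[i] = fx[np.abs(x - t[i]).argmin()]
    return p

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)           # a Runge-type stand-in
x = lgl_points(40); w = bary_weights(x)
t = np.linspace(-1.0, 1.0, 10001)
print(np.max(np.abs(f(t) - bary_eval(x, f(x), w, t))))
```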
In Figure 9 we illustrate the maximum errors of LGL spectral differentiation. In our implementation, we first compute the barycentric weights by the algorithm in [18] and then compute the differentiation matrix by
$$D_{jk} = \begin{cases} \dfrac{w_k / w_j}{x_j - x_k}, & j \neq k, \\[1mm] -\displaystyle\sum_{i \neq j} D_{ji}, & j = k, \end{cases} \tag{4.33}$$
Finally, the derivative values are obtained by multiplying the differentiation matrix with the vector of function values at the LGL points. From (4.23) we know the predicted rate of convergence of LGL spectral differentiation, and we can observe from Figure 9 that this prediction is consistent with the measured errors.
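The differentiation step can be sketched as follows, with the matrix assembled in the form (4.33); the smooth test function $e^x$ is an arbitrary choice of ours.

```python
import numpy as np
from numpy.polynomial.legendre import legder, legroots

def lgl_points(n):
    e = np.zeros(n + 1); e[n] = 1.0
    return np.concatenate(([-1.0], legroots(legder(e)), [1.0]))

def diff_matrix(x):
    """Barycentric differentiation matrix in the form (4.33)."""
    w = np.array([1.0 / np.prod(xj - np.delete(x, j)) for j, xj in enumerate(x)])
    n = len(x)
    D = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if j != k:
                D[j, k] = (w[k] / w[j]) / (x[j] - x[k])
        D[j, j] = -D[j].sum()                     # each row annihilates constants
    return D

x = lgl_points(40)
D = diff_matrix(x)
print(np.max(np.abs(D @ np.exp(x) - np.exp(x))))  # spectral accuracy for exp
```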
5 Conclusion
In this paper, we have studied the error analysis of Legendre approximations of differentiable functions from a new perspective. We introduced the sequence of LGL polynomials and proved several of their theoretical properties. Based on these properties, we derived a new explicit bound for the Legendre coefficients of differentiable functions. We then obtained an optimal error bound for Legendre projections in the $L^2$ norm and an optimal error bound for Legendre projections in the $L^\infty$ norm under the condition that the maximum error of the Legendre projection is attained in the interior of $[-1,1]$. Numerical examples were provided to demonstrate the sharpness of our new results. Finally, we presented two extensions of our analysis, including Gegenbauer-Gauss-Lobatto (GGL) functions and optimal convergence rates of Legendre-Gauss-Lobatto interpolation and differentiation for analytic functions.
In future work, we will explore the extensions of the current study to a more general setting, such as finding some new upper bounds for the weighted Jacobi polynomials (see, e.g., [9]) and establishing some sharp error bounds for Gegenbauer approximations of differentiable functions.
Acknowledgements
This work was supported by the National Natural Science Foundation of China under grant 11671160. The author wishes to thank the editor and two anonymous referees for their valuable comments on the manuscript.
References
- [1] V. A. Antonov and K. V. Holševnikov, An estimate of the remainder in the expansion of the generating function for the Legendre polynomials (Generalization and improvement of Bernstein’s inequality), Vestnik Leningrad Univ. Math., 13:163–166, 1981.
- [2] I. Babuška and H. Hakula, Pointwise error estimate of the Legendre expansion: The known and unknown features, Comput. Methods Appl. Mech. Engrg., 345(1):748–773, 2019.
- [3] C. Canuto, M. Y. Hussaini, A. Quarteroni and T. A. Zang, Spectral Methods: Fundamentals in Single Domains, Springer, 2006.
- [4] P. J. Davis, Interpolation and Approximation, Dover Publications, New York, 1975.
- [5] L. Durand, Nicholson-type integrals for products of Gegenbauer functions and related topics, Theory and Application of Special Functions, Edited by Richard A. Askey, pp.353–374, Academic Press, New York, 1975.
- [6] A. Ern and J.-L. Guermond, Finite elements I: Approximation and Interpolation, Vol. 72 of Texts in Applied Mathematics, Springer, Cham, 2021.
- [7] W. Gautschi, Orthogonal Polynomials: Computation and Approximation, Oxford University Press, London, 2004.
- [8] D. Jackson, The Theory of Approximation, American Mathematical Society Colloquium Publications, Volume XI, New York, 1930.
- [9] I. Krasikov, An upper bound on Jacobi polynomials, J. Approx. Theory, 149:116–130, 2007.
- [10] W.-J. Liu, L.-L. Wang and B.-Y. Wu, Optimal error estimates for Legendre expansions of singular functions with fractional derivatives of bounded variation, Adv. Comput. Math., 47, Article 79, 2021.
- [11] J. C. Mason and D. C. Handscomb, Chebyshev Polynomials, Chapman and Hall/CRC, Boca Raton, 2003.
- [12] F. W. J. Olver, D. W. Lozier, R. F. Boisvert and C. W. Clark, NIST Handbook of Mathematical Functions, Cambridge University Press, 2010.
- [13] M. H. Protter and C. B. Morrey, A First Course in Real Analysis, Second Edition, Springer-Verlag, New York, 1991.
- [14] J. Shen, T. Tang and L.-L. Wang, Spectral Methods: Algorithms, Analysis and Applications, Springer, Heidelberg, 2011.
- [15] G. Szegő, Orthogonal Polynomials, Vol. 23, 4th Edition, Amer. Math. Soc., Providence, RI, 1975.
- [16] L. N. Trefethen, Approximation Theory and Approximation Practice, Extended Edition, SIAM, Philadelphia, 2019.
- [17] H.-Y. Wang and S.-H. Xiang, On the convergence rates of Legendre approximation, Math. Comp., 81(278):861–877, 2012.
- [18] H.-Y. Wang, D. Huybrechs and S. Vandewalle, Explicit barycentric weights for polynomial interpolation in the roots or extrema of classical orthogonal polynomials, Math. Comp., 83(290):2893–2914, 2014.
- [19] H.-Y. Wang, On the optimal estimates and comparison of Gegenbauer expansion coefficients, SIAM J. Numer. Anal., 54(3):1557–1581, 2016.
- [20] H.-Y. Wang, A new and sharper bound for Legendre expansion of differentiable functions, Appl. Math. Lett., 85:95–102, 2018.
- [21] H.-Y. Wang and L. Zhang, Jacobi polynomials on the Bernstein ellipse, J. Sci. Comput., 75:457–477, 2018.
- [22] H.-Y. Wang, How much faster does the best polynomial approximation converge than Legendre projections?, Numer. Math., 147:481–503, 2021.
- [23] H.-Y. Wang, Optimal rates of convergence and error localization of Gegenbauer projections, IMA J. Numer. Anal., https://doi.org/10.1093/imanum/drac047, 2022.
- [24] H.-Y. Wang, Analysis of error localization of Chebyshev spectral approximations, SIAM J. Numer. Anal., 61(2):952–972, 2023.
- [25] S.-H. Xiang and G.-D. Liu, Optimal decay rates on the asymptotics of orthogonal polynomial expansions for functions of limited regularities, Numer. Math., 145:117–148, 2020.
- [26] Z.-Q. Xie, L.-L. Wang and X.-D. Zhao, On exponential convergence of Gegenbauer interpolation and spectral differentiation, Math. Comp., 82(282):1017–1036, 2013.