Numerical weighted integration of functions having mixed smoothness
Dinh Dũng
Information Technology Institute, Vietnam National University, Hanoi
144 Xuan Thuy, Cau Giay, Hanoi, Vietnam
Email: dinhzung@gmail.com
Abstract
We investigate the approximation of weighted integrals over for integrands from weighted Sobolev spaces of mixed smoothness. We prove upper and lower bounds on the convergence rate of optimal quadratures with respect to the number of integration nodes for functions from these spaces. In the one-dimensional case , we obtain the right convergence rate of optimal quadratures. For , the upper bound is achieved by sparse-grid quadratures with integration nodes on step hyperbolic crosses in the function domain .
Keywords and Phrases: Numerical multivariate weighted integration; Quadrature; Weighted Sobolev space of mixed smoothness; Step hyperbolic crosses of integration nodes; Convergence rate.
MSC (2020): 65D30; 65D32; 41A25; 41A55.
1 Introduction
The aim of the present paper is to investigate the approximation of weighted integrals over for integrands lying in weighted Sobolev spaces of mixed smoothness .
We want to give upper and lower bounds of the approximation error for optimal quadratures with integration nodes over the unit ball of .
We first introduce weighted Sobolev spaces of mixed smoothness.
Let
(1.1)
where
(1.2)
is a univariate Freud-type weight. The most important parameter in the weight is . The parameter , which produces only a positive constant in the weight, is introduced for a certain normalization; for instance, for the standard Gaussian weight , which is one of the most important weights. In what follows, we fix the parameters and for simplicity, drop them from the notation.
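For orientation, a univariate Freud-type weight of the kind described here can be written explicitly as
\[
w(x) \;=\; \exp\bigl(-a|x|^{\lambda} + b\bigr), \qquad \lambda > 1,\ a > 0,\ b \in \mathbb{R};
\]
with $\lambda = 2$, $a = 1/2$ and $b = -\tfrac{1}{2}\log(2\pi)$ this recovers the standard Gaussian density $(2\pi)^{-1/2}e^{-x^2/2}$. This explicit form is offered only as a reading consistent with the surrounding description, not as a quotation of (1.2).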
Let and be a Lebesgue measurable set on .
We denote by the weighted space of all functions on such that the norm
(1.3)
is finite. For , we define the weighted Sobolev space of mixed smoothness as the normed space of all functions such that the weak (generalized) partial derivative belongs to for every satisfying the inequality . The norm of a function in this space is defined by
(1.4)
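One standard way to write such a mixed-smoothness norm (a sketch under the assumption that the derivatives involved are $D^{\boldsymbol{\alpha}}f$ with $0 \le \alpha_i \le r$ for every $i$; the exact form of (1.4) may differ) is
\[
\|f\|_{W^r_p(\mathbb{R}^d, w)} \;=\; \Bigl(\sum_{\boldsymbol{\alpha} \in \mathbb{N}_0^d,\ \|\boldsymbol{\alpha}\|_\infty \le r} \bigl\|D^{\boldsymbol{\alpha}} f\bigr\|_{L_p(\mathbb{R}^d, w)}^p\Bigr)^{1/p}.
\]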
It is useful to notice that any function is equivalent in the sense of the Lebesgue measure to a continuous (not necessarily bounded) function on ; see Lemma 3.1 below. Hence, throughout the present paper, we always assume that the functions are continuous. We need this assumption for quadratures to be well defined for functions .
Let be the standard -dimensional Gaussian measure with the density function
The well-known spaces and
which are used in many applications, are defined in the same way by replacing the norm (1.3) with the norm
The spaces and can be seen as the and ,
where
for a fixed .
In the present paper, we are interested in approximation of weighted integrals
(1.5)
for functions lying in the space
.
To approximate them we use quadratures of the form
(1.6)
where are the integration nodes and the integration weights. For convenience, we allow some of the integration nodes to coincide.
Let be a set of continuous functions on . Denote by the family of all quadratures of the form (1.6) with . The optimality of quadratures from for is measured by
(1.7)
We recall that the space is defined as the classical Sobolev space of mixed smoothness by replacing with in (1.4), where, as usual, denotes the Lebesgue space of functions on equipped with the usual -integral norm.
For approximation of integrals
over the set , we need natural modifications of the definitions (1.6) and (1.7) for functions on and for a set of functions on . For simplicity, we will drop from these notations if there is no misunderstanding.
We first briefly describe the main results of the present paper and then give comments on related works.
For a normed space of functions on , the boldface denotes the unit ball in . Throughout the present paper we make use of the notation
For the set , we prove the upper and lower bounds
(1.8)
in particular, in the case of Gaussian measure
(1.9)
In the one-dimensional case, we prove the right convergence rate
(1.10)
The difference between the upper and lower bounds in (1.8) is the logarithmic factor
.
There is a large number of works on high-dimensional unweighted integration over the unit -cube for functions having a mixed smoothness (see [2, 5, 12] for results and bibliography). However, there are only a few works on high-dimensional weighted integration for functions having a mixed smoothness. The problem of optimal weighted integration (1.5)–(1.7) has been studied in [6, 7, 4] for functions in certain Hermite spaces, in particular, the space which coincides with in terms of norm equivalence.
It has been proven in [4] that
Recently, in [1, Theorem 2.3] for the space with and , we have constructed an asymptotically optimal quadrature of the form (1.6) which gives the asymptotic order
(1.11)
The results (1.9) and (1.11) show a substantial difference of the convergence rates between the cases and .
In constructing the asymptotically optimal quadrature in (1.11), we used a technique of collaging a quadrature for the Sobolev spaces on the unit -cube onto the integer-shifted -cubes. Unfortunately, this technique is not suitable for constructing a quadrature realizing the upper bound in (1.8) for the space
which is the largest among the spaces with . This requires a different technique based on the well-known Smolyak algorithm. Such a quadrature relies on sparse grids of integration nodes which are step hyperbolic crosses in the function domain , and on a generalization of the results on univariate numerical integration by truncated Gaussian quadratures from [3]. To prove the lower bounds in (1.8) and (1.10), we adapt a traditional technique of constructing, for arbitrary integration nodes, a fooling function vanishing at these nodes.
It is interesting to compare the results (1.9) and (1.11) on with known results on
for the unweighted Sobolev space of mixed smoothness . For , there holds the asymptotic order
and for and , there hold the bounds
which are so far the best known (see, e.g., [2, Chapter 8] for details). Hence we can see that and have the same asymptotic order in the case , and very different lower and upper bounds in both the power and logarithmic terms in the case .
The right asymptotic orders of both and are still open problems (cf. [2, Open Problem 1.9]).
The problem of numerical integration considered in the present paper is related to the research direction of optimal approximation and integration for functions having mixed smoothness on one hand, and the other research direction of univariate weighted polynomial approximation and integration on , on the other hand. For survey and bibliography, we refer the reader to the books [2, 12] on the first direction, and [11, 9, 8] on the second one.
The paper is organized as follows. In Section 2, we prove the asymptotic order
of and construct asymptotically optimal quadratures. In Section 3, we prove upper and lower bounds of for , and construct quadratures which give the upper bound. Section 4 is devoted to some extensions of the results in the previous sections to Markov-Sonin weights.
Notation.
Denote ; for , ,
, . For , the inequality means for every . For , denote if , and if . We use letters and to denote general
positive constants which may take different values. For the quantities and depending on
, , ,
we write , , ( is specially dropped),
if there exists some constant such that
for all , , (the notation has the obvious opposite meaning), and
if
and . Denote by the cardinality of the set .
For a Banach space , denote by the boldface the unit ball in .
2 One-dimensional integration
In this section, for one-dimensional numerical integration, we prove the asymptotic order of and present some asymptotically optimal quadratures.
We start this section with a well-known inequality, given in the following lemma, which follows directly from the definition (1.7) and which is quite useful for proving lower bounds on .
Lemma 2.1
Let be a set of continuous functions on . Then we have
(2.1)
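In its standard form (the classical fooling-function argument; the symbols $\mathrm{Int}_n(F)$, $w$ and $\mathbb{R}^d$ below stand in for the notation of (1.7) and (1.1), so this display is a sketch rather than a quotation of (2.1)), the inequality reads
\[
\mathrm{Int}_n(F) \;\ge\; \inf_{x_1,\dots,x_n}\; \sup\Bigl\{\Bigl|\int_{\mathbb{R}^d} f(\boldsymbol{x})\, w(\boldsymbol{x})\, d\boldsymbol{x}\Bigr| : f \in F,\ f(x_j) = 0,\ j = 1,\dots,n\Bigr\},
\]
since every quadrature of the form (1.6) returns zero for any integrand vanishing at all of its nodes.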
We now consider the problem of approximation of integral (1.5) for univariate functions from
. Let be the sequence of orthonormal polynomials with respect to the weight . In the classical quadrature theory, a possible choice of integration nodes is to take the zeros of the polynomials .
Denote by , the positive zeros of , and by the negative ones (if is odd, then is also a zero of ). These zeros are located as
(2.2)
with a positive constant independent of (see, e.g., [8, (4.1.32)]). Here is the Mhaskar-Rakhmanov-Saff number, which is
(2.3)
and is the gamma function. Notice that the formula (2.3) is given in [8, (4.1.4)] for the weight . Inspecting the definition of the Mhaskar-Rakhmanov-Saff number (see, e.g., [8, Page 116]), one easily verifies that it still holds true for the general weight for any and .
For a continuous function on , the classical Gaussian quadrature is defined as
(2.4)
where are the corresponding Cotes numbers. This quadrature is based on Lagrange interpolation (for details, see, e.g., [11, 1.2. Interpolation and quadrature]). Unfortunately, it does not give the optimal convergence rate for functions from ; see Remark 2.3 below.
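To make the construction concrete, the following minimal Python sketch implements the quadrature (2.4) in the special case of the standard Gaussian weight, for which the nodes are the zeros of the (probabilists') Hermite polynomials and the weights are the corresponding Cotes numbers; the function name, the choice of NumPy's Hermite routines and the normalization by $\sqrt{2\pi}$ are illustrative choices and not notation from the paper.

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_quadrature_gaussian_weight(f, m):
    # Nodes: zeros of the degree-m probabilists' Hermite polynomial, i.e. the
    # orthogonal polynomials for exp(-x**2/2); weights: the Cotes numbers.
    nodes, weights = hermegauss(m)
    # Renormalize so the rule integrates against the standard Gaussian density
    # (2*pi)**(-1/2) * exp(-x**2/2).
    weights = weights / np.sqrt(2.0 * np.pi)
    return np.dot(weights, f(nodes))

# Example: the Gaussian-weighted integral of x**2 equals 1.
print(gauss_quadrature_gaussian_weight(lambda x: x ** 2, 20))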
In [3], for the weight , the authors proposed truncated Gaussian quadratures which not only improve the convergence rate but also give the asymptotic order of as shown in Theorem 2.2 below. Let us introduce in the same manner truncated Gaussian quadratures for the weight
with any and .
Throughout this paper, we fix a number with , and denote by the smallest integer satisfying .
It is useful to remark that
(2.5)
where is the distance between consecutive zeros of the polynomial . These relations were proven in [3, (13)] for the weight .
From their proofs there, one can easily see that they still hold true for the general case of the weight . By (2.2) and (2.5), for sufficiently large we have that
(2.6)
with a positive constant depending on and only.
For a continuous function on , consider
the truncated Gaussian quadrature
(2.7)
Notice that the number of samples in the quadrature is strictly smaller than , the number of samples in the quadrature . However, due to (2.6) it has the asymptotic order as tends to infinity.
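Continuing the previous sketch, a hedged Python illustration of the truncation step in (2.7) is given below: for a fixed truncation parameter $\theta \in (0,1)$ we simply discard all Gauss-Hermite nodes of modulus larger than $\theta$ times a proxy for the Mhaskar-Rakhmanov-Saff number, where the largest node is used as that proxy purely for illustration (the value $\theta = 0.7$ is likewise an arbitrary illustrative choice).

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def truncated_gauss_quadrature(f, m, theta=0.7):
    nodes, weights = hermegauss(m)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the Gaussian density
    # Keep only the "bulk" nodes: |x| <= theta * (largest node), the largest node
    # serving as a computable stand-in for the Mhaskar-Rakhmanov-Saff number.
    keep = np.abs(nodes) <= theta * np.max(np.abs(nodes))
    return np.dot(weights[keep], f(nodes[keep])), int(keep.sum())

value, used = truncated_gauss_quadrature(lambda x: np.cos(x), 60)
print(value, used)   # value is close to exp(-1/2); used is noticeably smaller than 60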
Theorem 2.2
For any , let be the largest integer such that . Then the quadratures , , are asymptotically optimal for and
(2.8)
Proof. For , there holds the inequality
(2.9)
with some constants and independent of and . This inequality was proven in [3, Corollary 4] for the weight . Inspecting the proof of [3, Corollary 4], one can easily see that this inequality is also true for a weight of the form (1.1) with any and . The inequality (2.9) implies the upper bound in (2.8):
The lower bound in (2.8) is already contained in Theorem 3.8 below. Since its proof is much simpler for the case , let us present it separately. In order to prove the lower bound in (2.8) we will apply Lemma 2.1. Let be arbitrary points. For a given , we put and , . Then there is with
such that the interval does not contain any point from the set .
Take a nonnegative function , , and put
Define the functions and on by
and
Let us estimate the norm . For a given with , we have
(2.10)
By a direct computation we find that for ,
(2.11)
where if , and if ,
(2.12)
and are polynomials in the variables and of degree at most with respect to each variable.
Hence, we obtain
Remark 2.3
In the case when is the Gaussian density, the truncated Gaussian quadratures in Theorem 2.2 give
(2.14)
On the other hand, for the full Gaussian quadratures , it has been proven in [3, Proposition 1] that the convergence rate is , which is much worse than the convergence rate of as in (2.14) for .
3 High-dimensional integration
In this section, for high-dimensional numerical integration (), we prove upper and lower bounds of and construct quadratures based on step-hyperbolic-cross grids of integration nodes which give the upper bounds. To do this we need some auxiliary lemmata.
Lemma 3.1
Let . Then any function is equivalent in the sense of the Lebesgue measure to a continuous function on .
Proof. We prove this lemma in the particular case when , and . The general case can be proven in a similar way.
Fix and define for ,
We first prove that is continuously embedded into the space where , is any positive number and is the Banach space of continuous functions on equipped with the norm
Since the subspace of infinitely differentiable functions with compact support is dense in both the Banach spaces and , to prove this continuous embedding, it is sufficient to show the inequality
(3.1)
For , denote by the th partial derivative of . Taking a function , we have that for ,
From the continuous embedding of into it follows that
any function is equivalent in the sense of the Lebesgue measure to a continuous (not necessarily bounded) function on . Hence we obtain the claim of the lemma for since has been taken arbitrarily and the restriction of a function to belongs to .
Importantly, as noticed in the Introduction, from Lemma 3.1 we can assume that the functions are continuous. This allows us to correctly define quadratures for them.
For and , let be defined by , and by , . With an abuse of notation we write .
Lemma 3.2
Let , and . Assume that is a function on such that for every , .
Put for and ,
Then for every and almost every
.
Proof. Taking arbitrary test functions and and defining
, we have that .
For and with , and otherwise, we derive that
Hence,
for almost every .
This means that the weak derivative exists for almost every
which coincides with .
Moreover, for almost every since
by the assumption for every .
Assume that there exists a sequence of quadratures with
(3.2)
such that
(3.3)
for some number and constant .
Based on a sequence of the form (3.2) satisfying (3.3), we construct quadratures on by using the well-known Smolyak algorithm. We define for , the one-dimensional operators
and
For , the -dimensional operators , and are defined as the tensor product of one-dimensional operators:
(3.4)
where and
the univariate operators , and
are successively applied to the univariate functions , and , respectively, by considering them as
functions of variable with the other variables held fixed. The operators , and are well-defined for continuous functions on , in particular for ones from .
Notice that if is a continuous function on , then is a quadrature on which is given by
(3.5)
where
and the summation means that the sum is taken over all such that . Hence we derive that
(3.6)
where is defined by , , and , . We also have
(3.7)
where .
Notice that as mappings from to , the operators , and possess commutative and associative properties with respect to applying the component operators , and in the following sense. We have for any ,
and for any reordered sequence of ,
(3.8)
These properties directly follow from
(3.5)–(3.7).
Proof. The case of the lemma is as in (3.3) by the assumption. For simplicity we prove the lemma for the case . The general case can be proven in the same way by induction on . We make use of the temporary notation:
From Lemmata 3.1 and 3.2 it follows that for every . Notice that is a function in the variable only. Hence, by (3.3)
we obtain that
We say that , , if and only if for every .
Lemma 3.4
Under the assumption (3.2)–(3.3), we have that
for every ,
(3.9)
with absolute convergence of the series, and
(3.10)
Proof. The operator can be represented in the form
Therefore, by using Lemma 3.3 we derive that for every and ,
which proves (3.10) and hence the absolute convergence of the series in (3.9) follows.
Notice that
where recall is defined by , , and , .
By using Lemma 3.3 we derive for and ,
which is going to when . This together with the obvious equality
We now define an algorithm for quadrature on sparse grids, adapted from the algorithm for sampling recovery initiated by Smolyak (for details see [2, Sections 4.2 and 5.3]). For , we define the operator
From (3.6) we can see that is a quadrature on of the form (1.6):
(3.11)
where
and
is a finite set.
The set of integration nodes in this quadrature
is a step hyperbolic cross in the function domain .
The number of integration nodes in the quadrature is
which can be estimated as
(3.12)
As commented in the Introduction, this quadrature plays a crucial role in the proof of the upper bound in (1.8), the main result of the present paper.
From Theorem 2.2 we can see that the truncated Gaussian quadratures
form a sequence of the form (3.2) satisfying (3.3) with .
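The following Python sketch illustrates how such a sequence of one-dimensional rules is combined by the Smolyak-type construction (3.11) into a sparse-grid quadrature; it is a sketch under the assumptions that the level-$k$ one-dimensional rule uses roughly $2^k$ nodes and that the truncated Gauss-Hermite rule of the previous sketches may stand in for the rules of (3.2), with the convention that the rule of level $-1$ is zero.

import itertools
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def rule_1d(k, theta=0.7):
    # One-dimensional rule of level k: truncated Gauss-Hermite rule with 2**k
    # nodes before truncation, returned as (nodes, weights); level -1 is empty.
    if k < 0:
        return np.array([]), np.array([])
    x, w = hermegauss(2 ** k)
    w = w / np.sqrt(2.0 * np.pi)
    keep = np.abs(x) <= theta * np.max(np.abs(x))
    return x[keep], w[keep]

def delta_1d(k):
    # Difference rule Delta_k = Q_k - Q_{k-1}, encoded as a signed node/weight list.
    x1, w1 = rule_1d(k)
    x0, w0 = rule_1d(k - 1)
    return np.concatenate([x1, x0]), np.concatenate([w1, -w0])

def smolyak_quadrature(f, n, d):
    # Q_n f = sum over |k|_1 <= n of (Delta_{k_1} x ... x Delta_{k_d}) f,
    # where f takes an array of points of shape (N, d).
    total = 0.0
    for k in itertools.product(range(n + 1), repeat=d):
        if sum(k) > n:
            continue
        axes = [delta_1d(kj) for kj in k]
        grids = np.meshgrid(*[a[0] for a in axes], indexing="ij")
        weights = np.ones_like(grids[0])
        for j, (_, wj) in enumerate(axes):
            shape = [1] * d
            shape[j] = len(wj)
            weights = weights * wj.reshape(shape)
        points = np.stack([g.ravel() for g in grids], axis=-1)
        total += np.dot(weights.ravel(), f(points))
    return total

# Example (d = 2): the Gaussian-weighted integral of x1**2 * x2**2 equals 1.
print(smolyak_quadrature(lambda p: p[:, 0] ** 2 * p[:, 1] ** 2, n=6, d=2))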
Remark 3.7
The technique for proving the upper bound (3.13) is analogous to a general technique for establishing upper bounds of the error of unweighted sampling recovery by Smolyak algorithms of functions having mixed smoothness on a bounded domain (see, e.g., [2, Section 5.3] and [13, Section 6.9] for details).
Theorem 3.8
We have that
(3.14)
Proof. Let us first prove the upper bound in (4.2). We will construct a quadrature of the form (3.11) which realizes it. In order to do this, we take the truncated Gaussian quadrature defined in (2.7). For every , let be the largest number such that . Then we have
.
For the sequence of quadratures with
This means that the assumption (3.2)–(3.3) holds for . To prove the upper bound in (4.2) we approximate the integral
by the quadrature which is formed from the sequence .
For every , let be the largest number such that . Then the corresponding operator defines a quadrature belonging to . From (3.12) it follows
We now prove the lower bound in (4.2) by using the inequality (2.1) in Lemma 2.1. For , we define the set
Then we have
(3.15)
Indeed, we have the inclusion
and
Hence,
We prove the converse inequality
by induction on the dimension . It is obvious for . Assuming that this inequality is true for , we check
it for , . Fix a positive number with . We have, by the induction assumption,
For a given , let be arbitrary points. Denote by the smallest number such that . We define the -parallelepiped for of size
by
Since , there exists a multi-index
such that does not contain any point from .
As in the proof of Theorem 2.2, we take a nonnegative function , , and put
(3.16)
For , we define the univariate functions in variable by
(3.17)
Then the multivariate functions and on are defined by
and
(3.18)
Let us estimate the norm . For every with
, we prove the inequality
(3.19)
We have
(3.20)
Similarly to (2.10)–(2.13) we derive that for every ,
where
and are polynomials in the variables and of degree at most with respect to each variable.
This together with (3.16)–(3.17) and the inequalities and yields that
(3.21)
Since and , we have that
, and consequently,
This equality, the estimates (3.21) and the inequalities and
yield that
Let us analyse some properties of the quadratures and their integration nodes which give the upper bound in (4.2).
1. The set of integration nodes in the quadratures which are formed from the non-equidistant zeros of the orthonormal polynomials ,
is a step hyperbolic cross on the function domain . This is in contrast to the classical theory of approximation of multivariate periodic functions having mixed smoothness, for which the classical step hyperbolic crosses of integer points are on the frequency domain (see, e.g., [2, Section 2.3] for details). The terminology 'step hyperbolic cross' of integration nodes is borrowed from there. In Figure 1, in particular, the step hyperbolic cross in the right picture is designed for the Hermite weight (). The set also completely differs from the classical Smolyak grids of fractional dyadic points on the function domain (see Figure 2 for ) which are used in sparse-grid sampling recovery and numerical integration for functions having a mixed smoothness (see, e.g., [2, Section 5.3] for details).
2. The set is very sparsely distributed inside the -cube
for some constant .
Its diameter, which is the length of its symmetry axes, is , i.e., the size of .
The number of integration nodes in is . For the integration nodes , we have that
On the other hand, the diameter of goes to as .
Figure 1: Step hyperbolic crosses (): a classical step hyperbolic cross (left) and a Hermite step hyperbolic cross (right).
Figure 2: A Smolyak grid ().
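To illustrate properties 1 and 2 numerically, the short Python sketch below enumerates, for $d = 2$, the union over $|k|_1 \le n$ of the tensor products of level-$k$ one-dimensional node sets and prints its cardinality; the one-dimensional node sets are again the illustrative truncated Gauss-Hermite nodes used above, so the printed counts only indicate the rough growth suggested by (3.12), not exact constants.

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def nodes_1d(k, theta=0.7):
    # Level-k one-dimensional node set: truncated Gauss-Hermite nodes with 2**k
    # points before truncation (an illustrative stand-in for the nodes of (2.7)).
    x, _ = hermegauss(2 ** k)
    return x[np.abs(x) <= theta * np.max(np.abs(x))]

def step_hyperbolic_cross_grid(n):
    # Union over k1 + k2 <= n of the tensor products of the level-k node sets;
    # this is the step-hyperbolic-cross grid of integration nodes for d = 2.
    grid = set()
    for k1 in range(n + 1):
        for k2 in range(n + 1 - k1):
            for x in nodes_1d(k1):
                for y in nodes_1d(k2):
                    grid.add((round(float(x), 12), round(float(y), 12)))
    return grid

for n in (4, 6, 8):
    print(n, len(step_hyperbolic_cross_grid(n)))   # grows roughly like 2**n * n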
4 Extension to Markov-Sonin weights
In this section, we extend the results of the previous sections to Markov-Sonin weights. A univariate Markov-Sonin weight is a function of the form
(here is indicated in the notation to distinguish Markov-Sonin weights from the Freud-type weight ).
A -dimensional Markov-Sonin weight is defined as
Markov-Sonin weights are not of the form (1.2) and have a singularity at .
We will keep all the notation and definitions of Sections
1–3 with replaced by , pointing out some necessary modifications.
Denote and
. Besides the spaces and we consider also the spaces and which are defined in a similar manner. For the space , we require one of the following restrictions on and to be satisfied:
(i)
;
(ii)
and is not an integer, for , the derivative can be extended to a continuous function on for all such that
.
Let be the sequence of orthonormal polynomials with respect to the weight .
Denote again by , the positive zeros of , and by the negative ones (if is odd, then is also a zero of ).
If is even, we add .
These nodes are located as
with a positive constant independent of (the Mhaskar-Rakhmanov-Saff number is
).
In the case (i), the truncated Gaussian quadrature is defined by
and in the case (ii) by
where are the corresponding Cotes numbers.
In the same way, by using related results in [10], we can prove the following counterparts of Theorems 2.2 and 3.8 for the unit ball
of the Markov-Sonin weighted Sobolev space of mixed smoothness .
Theorem 4.1
For any , let be the largest integer such that . Then the quadratures , are asymptotically optimal for and
Theorem 4.2
We have that
Acknowledgments:
This research is funded by Vietnam Ministry of Education and Training under Grant No. B2023-CTT-08. A part of this work was done when the author was working at the Vietnam Institute for Advanced Study in Mathematics (VIASM). He would like to thank the VIASM for providing a fruitful research environment and working conditions. The author especially thanks Dr. Nguyen Van Kien for drawing Figures 1 and 2.
References
[1]
D. Dũng and V. K. Nguyen.
Optimal numerical integration and approximation of functions on
equipped with Gaussian measure.
arXiv:2207.01155 [math.NA], 2022.
[2]
D. Dũng, V. N. Temlyakov, and T. Ullrich.
Hyperbolic Cross Approximation.
Advanced Courses in Mathematics - CRM Barcelona,
Birkhäuser/Springer, 2018.
[3]
B. Della Vecchia and G. Mastroianni.
Gaussian rules on unbounded intervals.
J. Complexity, 19:247–258, 2003.
[4]
J. Dick, C. Irrgeher, G. Leobacher, and F. Pillichshammer.
On the optimal order of integration in Hermite spaces with finite
smoothness.
SIAM J. Numer. Anal., 56:684–707, 2018.
[5]
J. Dick, F. Y. Kuo, and I. H. Sloan.
High-dimensional integration: the quasi-Monte Carlo way.
Acta Numer., 22:133–288, 2013.
[6]
C. Irrgeher, P. Kritzer, G. Leobacher, and F. Pillichshammer.
Integration in Hermite spaces of analytic functions.
J. Complexity, 31:308–404, 2015.
[7]
C. Irrgeher and G. Leobacher.
High-dimensional integration on the , weighted Hermite
spaces, and orthogonal transforms.
J. Complexity, 31:174–205, 2015.
[8]
P. Junghanns, G. Mastroianni, and I. Notarangelo.
Weighted Polynomial Approximation and Numerical Methods for
Integral Equations.
Birkhäuser, 2021.
[9]
D. S. Lubinsky.
A survey of weighted polynomial approximation with exponential
weights.
Surveys in Approximation Theory, 3:1–105, 2007.
[10]
G. Mastroianni and D. Occorsio.
Markov-Sonin Gaussian rule for singular functions.
J. Comput. Appl. Math., 169(1):197–212, 2004.
[11]
H. N. Mhaskar.
Introduction to the Theory of Weighted Polynomial
Approximation.
World Scientific, Singapore, 1996.
[12]
V. N. Temlyakov.
Multivariate Approximation.
Cambridge University Press, 2018.
[13]
R. Tripathy and I. Bilionis.
Deep UQ: Learning deep neural network surrogate models for high
dimensional uncertainty quantification.
J. Comput. Phys., 375:565–588, 2018.