Optimal sampling for least-squares approximation
Abstract
Least-squares approximation is one of the most important methods for recovering an unknown function from data. While in many applications the data is fixed, in many others there is substantial freedom to choose where to sample. In this paper, we review recent progress on optimal sampling for (weighted) least-squares approximation in arbitrary linear spaces. We introduce the Christoffel function as a key quantity in the analysis of (weighted) least-squares approximation from random samples, then show how it can be used to construct sampling strategies that possess near-optimal sample complexity: namely, the number of samples scales log-linearly in , the dimension of the approximation space. We discuss a series of variations, extensions and further topics, and throughout highlight connections to approximation theory, machine learning, information-based complexity and numerical linear algebra. Finally, motivated by various contemporary applications, we consider a generalization of the classical setting where the samples need not be pointwise samples of a scalar-valued function, and the approximation space need not be linear. We show that even in this significantly more general setting suitable generalizations of the Christoffel function still determine the sample complexity. This provides a unified procedure for designing improved sampling strategies for general recovery problems. This article is largely self-contained, and intended to be accessible to nonspecialists.
1 Introduction
Least-squares approximation is the process of fitting an unknown function from samples by computing a best -norm fit in a given subspace, which is often termed the approximation space. Least squares is a classical topic, yet it remains one of the most widely-used tools in applied mathematics, computer science, engineering and numerous other disciplines. For the data scientist, it is almost always one’s first ‘go-to’ method when trying to fit a function to data.
In many data-fitting problems, the samples are fixed. However, many other problems offer substantial flexibility to choose where to sample. When data is also expensive to acquire – which, despite claims about ‘big data’, is often the case in applications in science and engineering – we are naturally led to the following questions. How many samples do we need – or, in other words, what is the sample complexity – and how should we best choose them? This is by no means a new question. It arises in many different guises in different fields, including optimal design of experiments in statistics, active learning in machine learning, optimal sensor placement in sampling theory and signal processing, and optimal (standard) information in information-based complexity.
The purpose of this article is to survey recent advances made in the last 5-10 years in optimal sampling, as we shall term it from now on, which has been motivated by certain function approximation problems in high dimensions. Such methods are in essence importance sampling techniques, where samples are drawn randomly from a probability measure chosen specifically for the given approximation space. Throughout, our aim is to ensure quasi-optimal recovery (in an appropriate sense) with near-optimal sample complexity.
1.1 Overview
After a short literature review (§2), this article commences with a formulation and review of (weighted) least-squares approximation (§3). We then discuss multivariate polynomial approximation (§4), this being one of the main motivating examples for this work. The next two sections contain the core developments of this article. We describe the theory of least-squares approximation with random sampling and introduce the so-called Christoffel function, which plays a key role in its analysis (§5). We then show that sampling from a measure proportional to this function leads to provably near-optimal sampling (§6). The power of such a result lies in its generality: this strategy is near optimal for any linear approximation space. Next, we consider the matter of how much can be gained through this approach in comparison to Monte Carlo sampling, i.e., i.i.d. random sampling from some underlying probability measure (§7). Monte Carlo sampling is ubiquitous in applications, especially in high-dimensional approximation tasks. Yet, as we discuss, sample complexity bounds for this naïve sampling strategy can be arbitrarily bad. Once more, we see that the Christoffel function plays a key role in analyzing the sample complexity. Having done this, we then conclude this part of the article by discussing a series of further topics (§8). In particular, we describe very recent advances in optimal (as opposed to near-optimal) sampling and its connections to sampling numbers in information-based complexity and the study of sampling discretizations in approximation theory. We also discuss connections to matrix sketching via leverage score sampling, as well as various practical considerations.
The majority of this article considers linear approximation spaces, i.e., finite-dimensional subspaces of functions. However, modern applications increasingly make use of nonlinear spaces. Moreover, in many applications the object to recover may not be a scalar-valued function, and samples may not be simple pointwise evaluations. We conclude this article by describing a recent framework for optimal sampling with general linear samples and nonlinear approximation spaces (§9). We discuss how many of the key ideas seen in linear spaces, such as Christoffel functions, naturally extend to this general setting. Finally, we end with some concluding thoughts (§10).
1.2 Scope and target audience
In this article, we focus on the foundational techniques and theory. After a brief review in the next section, we largely omit applications, although this is currently an active area of interest. This article is intended to be accessible to nonspecialists. We build most concepts up from first principles, relying on basic knowledge only. In order to make it as self-contained as possible, proofs of most of the results shown in this work are given in an appendix.
2 Literature review
We commence with a short discussion of relevant literature. Additional literature on variations, extensions and further topics can be found in §8.
Least squares is a classical topic, with origins tracing back to the work of Gauss and Legendre [111]. Starting in the early 2010s, and motivated by problems in parametric and stochastic Differential Equations (DEs), there was a resurgence of research on this topic, focusing on high- and infinite-dimensional function approximation, and typically involving polynomial spaces. Key works in this direction include [33, 34, 91]. This resurgence was based on least squares with random sampling, inspired by Monte Carlo quadrature and its ability to integrate functions without succumbing to the curse of dimensionality. However, it is worth noting that the goal of least-squares approximation is to achieve quasi-optimal rates of convergence with respect to the approximation space. Typically, the resulting rate will exceed the error rate for Monte Carlo quadrature.
As noted above, Monte Carlo sampling generically leads to suboptimal sample complexity bounds for least-squares approximation. This observation led to a concerted effort to develop practical sampling strategies with better performance (see [5, §8.1.1] for an overview), culminating in the near-optimal strategies which are the basis of this work. These were developed in [35], but also appeared slightly earlier in [67] in the case of (total degree) polynomial spaces.
At a similar time, related techniques under the name leverage score sampling – which are based on the classical topic of statistical leverage – have become increasingly popular in machine learning and data science. In particular, leverage score sampling is an effective tool for matrix sketching [50, 79, 124]. As we comment in §8, it can also be viewed as a special case of the techniques described in this article, corresponding to functions defined over a discrete domain.
Finally, we remark on some applications. As observed, this work is closely related to optimal design of experiments and optimal sensor placement in sampling theory and signal processing – both large areas with countless applications that we shall not attempt to review. However, this specific line of research emerged out of computing polynomial approximations to high-dimensional functions arising in parametric and stochastic DEs [26, 33, 90], and this remains a key area of application. See [36, 66, 5, 67, 64, 97] and references therein. For other surveys focused on multivariate polynomial approximation and parametric and stochastic DEs, see [64, 66] and [5, Chpt. 5].
Recently, these techniques have also been applied to the closely related problem of numerical integration (cubature) [95, 88]. There are also emerging applications in Trefftz methods for solving Helmholtz equations [106] and methods for option pricing in finance [1, 57]. On the theoretical side, this line of work has also spurred recent advances in approximation theory (so-called sampling discretizations) and information-based complexity (so-called sampling numbers). We discuss these topics further in §8. Related ideas have also been used in sampling theory [18]. We also note that Christoffel functions are useful tools for empirical inference in data analysis [75].
Finally, through the close connection to leverage score sampling, there are manifold applications in machine learning and data science. These include randomized numerical linear algebra [124, 82], kernel methods [16, 20, 54, 55, 93] and active learning [20, 31, 43, 54, 78]. Moreover, the generalization we describe in §9 opens the door to applications in many seemingly unrelated areas, such as inverse problems in imaging [13].
3 Preliminaries
Let be a measure space and be the Lebesgue space of square-integrable functions with respect to . Typically, in this work, . For convenience, we assume that is a finite measure () and, therefore, without loss of generality, that is a probability measure (). It is possible to consider infinite measures, but for ease of exposition we shall not do this.
Given , we consider sampling measures . These are assumed to be such that is a probability space for every . We also make the following assumption.
Assumption 3.1 (Absolute continuity and positivity).
The additive mixture
is absolutely continuous with respect to and its Radon–Nikodym derivative is strictly positive almost everywhere on .
This assumption allows us to write
(3.1) |
where the density (the Radon–Nikodym derivative) is measurable, positive almost everywhere and satisfies
(3.2) |
In what follows it will often be more convenient to work with the reciprocal of this function. We define the weight function as , .
Given sampling measures , we now draw samples , , independently from these measures and consider noisy measurements of an unknown function of the form
(3.3) |
Typically, we will assume that so that the samples (3.3) are almost surely well defined.
We consider a bounded, adversarial noise model, where the ’s are not assumed to be random, but are assumed to be small in magnitude. Our aim is to derive error bounds in which the noise term scales like the -norm
of the noise vector . Random noise models (including unbounded noise) can also be considered (see [89, 35] and [5, Rem. 5.1]).
Summary of the main notation used in this article:
- Measure space
- Lebesgue space of square-integrable functions with respect to the measure
- Norm on the Lebesgue space
- Inner product on the Lebesgue space
- Function to approximate
- Sampling measures
- Weight function, given by (3.1)
- Sample points, drawn independently from the sampling measures
- Noisy samples of the function to approximate
- Finite-dimensional subspace in which to approximate
- Dimension of the approximation space
- Matrix and measurement vector of the algebraic least-squares problem, given by (3.8)
- Semi-inner product and seminorm defined by the sample points, given by (3.11) and (3.12), respectively
3.1 Weighted least-squares approximation
Let be an arbitrary -dimensional subspace, where , in which we seek to approximate the unknown from the measurements (3.3). We term the approximation space. In this work, we consider general approximation spaces. In particular, this means that interpolation – which generally requires carefully-constructed points sets – may not be possible [47]. Instead, we consider the weighted least-squares approximation
(3.4) |
Note that the loss function is well defined almost surely (for fixed and weight function as above), since pointwise evaluations of and are well-defined almost surely.
Remark 3.2 (The scaling factor)
The scaling factors in (3.4) are motivated by noticing that
(3.5) where the second equality is due to (3.1). Thus, in the noiseless case, (3.4) can be considered as an empirical approximation to the continuous least-squares approximation
(3.6) i.e., the best approximation to from in the -norm. In particular, if , then the minimizers of (3.4) converge almost surely to the minimizer of (3.6) as [64].
The objective of this article is to describe how to choose the measures to achieve the most sample-efficient approximation. We shall compare such approaches against the standard approach of Monte Carlo sampling, i.e., i.i.d. random sampling from . This is equivalent to setting
which leads, via (3.1), to . Thus, (3.4) becomes an unweighted least-squares approximation.
Remark 3.3 (Hierarchical approximation)
Often, rather than a fixed subspace , one may wish to construct a sequence of approximations in a nested collection of subspaces
of dimension . In this case, given numbers satisfying , , one aims to design nested collections of sample points
We write for the ensuing (weighted) least-squares approximations, where is constructed from the sample points . Nestedness implies that samples are recycled at each iteration – a highly desirable property in the setting of limited data. We term such a procedure a hierarchical (also known as a progressive [37] or sequential [19]) approximation scheme.
3.2 Reformulations of (3.4)
Given a basis for , the problem (3.4) is readily reformulated as an algebraic least-squares problem for the coefficients of . This takes the form
(3.7) |
where
(3.8) |
To be precise, every minimizer satisfying (3.4) has coefficients that satisfy (3.7) and vice versa. Classical least-squares analysis asserts that any vector satisfying (3.7) is also a solution of the normal equations
(3.9) |
and vice versa. Rewriting the normal equations in terms of functions also leads to the following variational form of (3.4).
(3.10) |
This is equivalent to (3.9) in the same sense as before. Here we wrote
(3.11) |
for the discrete semi-inner product induced by the sample points and the weight function (whenever defined). For convenience, we shall denote the corresponding seminorm as
(3.12) |
In the noiseless case , the formulation (3.10) asserts that is precisely the orthogonal projection of onto with respect to the discrete semi-inner product (3.11). Since (3.11) is an empirical approximation to the continuous inner product (recall (3.5)), this sheds further light on why minimizers of (3.4) converge to (3.6) (the orthogonal projection in the -inner product).
Remark 3.4 (Numerical considerations)
Fast numerical computations are not the primary concern of this article. However, we note that (3.7) can be solved using standard linear algebra techniques. Since the matrix is generally dense and unstructured, each matrix-vector multiplication involves floating-point operations (flops). Hence, when using an iterative method such as conjugate gradients, the number of flops that suffice to compute to an error of (in the norm ) is roughly , where is the condition number of . In §5 we see that the sufficient conditions that ensure accuracy and stability of the approximation also guarantee that is well conditioned.
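To make the algebraic formulation concrete, the following sketch (in Python, with all names our own) assembles a least-squares matrix and measurement vector with the scaling suggested by (3.4) and solves the resulting problem (3.7) with a dense solver. As an illustrative choice only, the basis is the orthonormal Chebyshev basis with respect to the arcsine measure on [-1,1], the points are drawn from the uniform measure, and the weight function is the corresponding reciprocal density; none of these choices is prescribed by the text.

```python
import numpy as np

# Illustrative, not prescribed: orthonormal Chebyshev basis w.r.t. the arcsine
# probability measure rho on [-1,1]; sample points drawn from the uniform
# measure mu; weight w = (d mu / d rho)^{-1}.
n, m = 10, 100
rng = np.random.default_rng(0)

def phi(j, t):
    return np.ones_like(t) if j == 0 else np.sqrt(2.0) * np.cos(j * np.arccos(t))

def w(t):
    # reciprocal of the Radon-Nikodym derivative of mu (uniform) w.r.t. rho (arcsine)
    return 2.0 / (np.pi * np.sqrt(1.0 - t**2))

y = rng.uniform(-1.0, 1.0, m)              # sample points drawn i.i.d. from mu
f = lambda t: np.exp(t)                    # function to approximate
e = 1e-6 * rng.standard_normal(m)          # small (bounded) noise

# Weighted least-squares matrix and measurement vector with the sqrt(w/m) scaling
scale = np.sqrt(w(y) / m)
A = np.column_stack([scale * phi(j, y) for j in range(n)])
b = scale * (f(y) + e)

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)   # solves (3.7) / normal equations (3.9)
print("condition number of A:", np.linalg.cond(A))
```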
3.3 Key terminology
We now introduce some key terminology that will be used from now on. First, we say that the approximation is -quasi-optimal or -quasi-optimal if, in the absence of noise,
respectively (note that the term instance optimality is also sometimes used [47]). Obviously the former is stronger – therefore, achieving it will be our main objective. We say that the recovery is stable if, in the presence of noise, the recovery error depends on , i.e.,
where is some best approximation error term. Finally, we say that a sampling strategy (i.e., a collection of measures ) has near-optimal sample complexity or optimal sample complexity if, respectively, or samples suffice for obtaining a quasi-optimal and stable approximation, for some constant . This is sometimes referred to as rate optimality [47].
4 Application to multivariate polynomial approximation
We now introduce an important example considered in this paper, namely, multivariate polynomial approximation in dimensions.
4.1 Spaces of multivariate polynomials
Let be a domain and a measure. Let be a finite set of multi-indices with . We consider the polynomial space
(4.1) |
Here, we use the notation for the -dimensional variable, for a multi-index and . There are several standard choices for the index set . In low dimensions, one may consider the (isotropic) tensor-product or total degree
(4.2) |
index sets of order . Unfortunately, the cardinality of these index sets scales poorly with respect to (for fixed ). A better choice in moderate dimensions is the hyperbolic cross index set
(4.3) |
However, as the dimension increases, this too may become too large to use. Since high-dimensional functions often have a very anisotropic dependence with respect to the coordinate variables, in higher dimensions one may look to consider anisotropic versions of these index sets. Given an anisotropy parameter with (understood componentwise) and (not necessarily an integer), the corresponding anisotropic index sets are defined as
and
Notice that the isotropic index sets are recovered by setting (the vector of ones).
The choice of index set is not the focus of this paper. For more discussion, see, e.g., [64] and [5, Chpts. 2 & 5]. We remark, however, that all index sets defined above are examples of lower (also known as monotone or downward closed) sets. A set is lower if whenever and (understood componentwise, once more), one also has .
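The following sketch gives a naive enumeration of the isotropic index sets defined above by filtering a candidate grid; the hyperbolic cross test uses one common convention. It is intended only to make the definitions concrete for small dimensions and orders, and all names are our own.

```python
import itertools
import numpy as np

def index_set(d, order, kind="total_degree"):
    """Enumerate multi-indices of the given type by filtering {0,...,order}^d.

    A brute-force construction, feasible only for small d and order.
    """
    indices = []
    for nu in itertools.product(range(order + 1), repeat=d):
        nu = np.array(nu)
        if kind == "tensor_product":
            keep = True                              # the candidate grid is the tensor-product set
        elif kind == "total_degree":
            keep = np.sum(nu) <= order
        elif kind == "hyperbolic_cross":
            keep = np.prod(nu + 1) <= order + 1      # one common convention
        else:
            raise ValueError(kind)
        if keep:
            indices.append(tuple(int(k) for k in nu))
    return indices

for kind in ["tensor_product", "total_degree", "hyperbolic_cross"]:
    print(kind, "cardinality:", len(index_set(d=3, order=4, kind=kind)))
```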
4.2 Multivariate orthogonal polynomials on tensor-product domains
As we shall see in the next section, orthonormal bases play a key role in least-squares approximation from random samples and, in particular, optimal sampling. It is therefore convenient to be able to readily compute orthonormal polynomial bases for the space .
Fortunately, when is lower and and are of tensor-product type, such polynomials are easily generated via tensor-products of univariate orthogonal polynomials. For concreteness, let
where, for each , and is a probability measure on . Then, under mild conditions on (see, e.g., [112, §2.2]), there exists a unique sequence of orthonormal polynomials
where, for each , is a polynomial of degree . Using this, one immediately obtains an orthonormal basis of via tensor products. Specifically,
What about the subspace introduced in (4.1)? Fortunately, whenever is a lower set, the functions with indices also form an orthonormal basis for this space. In other words,
See, e.g., [14, Proof of Thm. 6.5]. This property, combined with the tensor-product structure of the basis functions, makes optimal sampling and least-squares approximation in the subspaces computationally feasible and, in many cases, straightforward, for tensor-product domains and measures. See §8 for some further discussion.
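As a minimal illustration of this construction, the sketch below evaluates a tensor-product orthonormal basis for the uniform probability measure on [-1,1]^d, using the fact that the classical Legendre polynomial of degree k, rescaled by sqrt(2k+1), is orthonormal in each variable. The index set can be any lower set, such as those produced by the previous sketch; all names here are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_orthonormal_1d(k, t):
    # sqrt(2k+1) * P_k(t) is orthonormal w.r.t. the uniform probability measure dt/2 on [-1,1]
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt(2 * k + 1) * legval(t, c)

def tensor_basis(S, Y):
    """Evaluate the tensor-product orthonormal basis indexed by S at points Y (shape (m, d))."""
    return np.column_stack([
        np.prod([legendre_orthonormal_1d(nu_k, Y[:, k]) for k, nu_k in enumerate(nu)], axis=0)
        for nu in S
    ])

# Quick orthonormality check by Monte Carlo quadrature on [-1,1]^2
rng = np.random.default_rng(1)
Y = rng.uniform(-1.0, 1.0, (200000, 2))
S = [(0, 0), (1, 0), (0, 1), (2, 1)]              # any lower set will do
B = tensor_basis(S, Y)
print(np.round(B.T @ B / Y.shape[0], 2))          # approximately the identity matrix
```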
To conclude this section, we list several standard families of univariate measures and their corresponding orthogonal polynomials. Consider a compact interval, which without loss of generality we take to be . Given parameters , the Jacobi (probability) measure is given by
This measure generates the Jacobi polynomials for general and the ultraspherical polynomials when . Of particular interest are the following cases.
- , which corresponds to the arcsine measure . This yields the Chebyshev polynomials of the first kind.
- , which corresponds to the uniform measure . This yields the Legendre polynomials.
- , which corresponds to the measure . This yields the Chebyshev polynomials of the second kind.
We will consider these polynomials later in this paper. We will also briefly discuss certain unbounded domains. Here two common examples are as follows.
- over , which yields the Hermite polynomials.
- over , which yields the Laguerre polynomials.
5 Theory of weighted least-squares approximation from random samples
5.1 Basic accuracy and stability guarantee
Accuracy and stability of the weighted least-squares approximation are controlled by the following discrete stability constants:
In other words, and are the optimal constants in the norm equivalence
(5.1) |
In approximation theory, this is known as a sampling discretization [68] or a (weighted) Marcinkiewicz–Zygmund inequality [116]. Squaring and writing out the discrete semi-norm, (5.1) is equivalent to
(5.2) |
Hence, the existence of finite, positive constants implies that is a norm over the -dimensional space , with being the constants of the norm equivalence.
Note that if is an orthonormal basis for , then it is straightforward to show that
(5.3) |
where is the least-squares matrix (3.8).
Lemma 5.1 (Accuracy and stability of weighted least squares).
Let , , be sample points at which both and any are finite, and be such that , . Suppose that . Then the weighted least-squares problem (3.4) has a unique solution . Moreover, this solution satisfies
(5.4) |
where . Also, if is an orthonormal basis of then the condition number of the least-squares matrix (3.8) satisfies .
This result is a standard exercise. In particular, the condition number statement follows immediately from (5.3). We include a short proof of the other parts in the appendix for completeness. We also observe that this result holds for arbitrary weight functions and sample points satisfying the stipulated assumptions. At this stage, we do not require the sample points to be random. This will be used in the next subsection to derive concrete sample complexity estimates.
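The identity (5.3) is also easy to check numerically: with an orthonormal basis and the sqrt(w/m) scaling, the discrete stability constants coincide with the extreme singular values of the least-squares matrix. The sketch below does this for the orthonormal Chebyshev basis with points drawn from the arcsine measure (so that the weight function is identically one); the setup is illustrative, not prescribed.

```python
import numpy as np

n, m = 10, 200
rng = np.random.default_rng(2)

# Points drawn from the arcsine measure on [-1,1], for which the Chebyshev
# basis below is orthonormal, so that the weight function is identically one.
y = np.cos(np.pi * rng.uniform(0.0, 1.0, m))

def phi(j, t):
    return np.ones_like(t) if j == 0 else np.sqrt(2.0) * np.cos(j * np.arccos(t))

A = np.column_stack([phi(j, y) / np.sqrt(m) for j in range(n)])

s = np.linalg.svd(A, compute_uv=False)
alpha, beta = s.min(), s.max()        # discrete stability constants, via (5.3)
print("alpha =", alpha, "beta =", beta, "cond(A) =", beta / alpha)
```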
Remark 5.2 (The noise bound)
On the face of it, the noise term is undesirable since coefficients corresponding to large values of are more heavily weighted than others. We will take this into account later when we construct near-optimal sampling measures. Specifically, in §6.1 we construct sampling measures that lead to log-linear sample complexity and for which . Hence, the noise term in this case.
5.2 The (reciprocal) Christoffel function
We now return to the main setting of this paper, where the sample points are drawn randomly and independently with , , for measures satisfying Assumption 3.1. Our aim is to analyze the sample complexity of weighted least-squares approximation. In view of Lemma 5.1, this involves first bounding the discrete stability constants and .
A key tool in this analysis is the Christoffel function of . Christoffel functions are well-known objects in approximation theory [98, 125], where they are typically considered in the context of spaces spanned by algebraic polynomials. It transpires that Christoffel functions – or, more precisely, their reciprocals – are also fundamentally associated with random sampling for least-squares approximation.
Definition 5.3 (Christoffel function).
Let . The (reciprocal) Christoffel function of is the function defined by
(5.5) |
In other words, measures how large in magnitude an element of can be at in relation to its -norm. This function also admits an explicit expression. Let be an arbitrary orthonormal basis of . Then it is a short exercise to show that
(5.6) |
Often taken as the definition of , this formulation will be useful in our subsequent analysis. It also emphasizes the fact that is precisely the diagonal of the reproducing kernel of in [98, §3].
For reasons that will become clear soon, we are particularly interested in the maximal behaviour of the function , where is the weight function defined by (3.1). We therefore let
(5.7) |
To continue the connection with approximation theory, it is worth noting that is the optimal constant in the (weighted) Nikolskii-type inequality (see, e.g., [92] and references therein),
(5.8) |
Thus, measures how large the scaled element can be uniformly in relation to the -norm of . It is important to observe that
(5.9) |
for any weight function and -dimensional subspace . This bound follows by integrating both sides with respect to the measure and noticing that , the latter being an immediate consequence of (5.6). This gives
where in the last equality we used (3.2) and the fact that .
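As an illustration, the sketch below evaluates the reciprocal Christoffel function via (5.6) for the space spanned by the first n orthonormal Legendre polynomials under the uniform probability measure on [-1,1], and estimates the constant (5.7) on a fine grid in the unweighted case w ≡ 1. The grid maximum equals n² here, attained at the endpoints, consistent with the lower bound (5.9); the names and grid resolution are our own.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def christoffel_reciprocal(n, t):
    """Sum of the squares of the first n orthonormal Legendre polynomials, as in (5.6)."""
    K = np.zeros_like(t)
    for k in range(n):
        c = np.zeros(k + 1)
        c[k] = 1.0
        K += (2 * k + 1) * legval(t, c) ** 2
    return K

n = 10
t = np.linspace(-1.0, 1.0, 10001)     # fine grid standing in for the essential supremum
K = christoffel_reciprocal(n, t)
kappa = K.max()                       # unweighted case, w identically one
print("n =", n, " estimated constant =", kappa)   # prints 100.0, i.e. n^2, attained at t = +-1
```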
5.3 Bounding the discrete stability constants
The following result establishes a key relationship between the Christoffel function and the sample complexity of weighted least-squares approximation.
Theorem 5.4 (Estimates for and in probability).
This result is well known. In view of (5.3), its proof relies on bounding the maximum and minimum eigenvalues of . This is achieved by using what have now become quite standard matrix concentration inequalities, such as the matrix Chernoff bound [118, Thm. 1.1] (see also Theorem A.1). See the appendix for a proof.
Remark 5.5 (One-sided estimates)
The conclusions of Lemma 5.1 only rely on bounding the lower discrete stability constant from below. This can be done with a slightly smaller sampling condition than (5.11). It follows readily from the proof of Theorem 5.4 that with probability at least , whenever
However, bounding from above yields a bound on the condition number of (see Lemma 5.1), which, as discussed in Remark 3.4, is important for numerical purposes.
5.4 Error bounds in probability
We next combine Lemma 5.1 and Theorem 5.4 to obtain error bounds for weighted least-squares approximation. We split these bounds into two types: error bounds in probability (this subsection) and error bounds in expectation (the next subsection). In these two subsections, we will strive for generality by tracking the dependence in these bounds on the parameter appearing in Theorem 5.4. However, it is generally informative to think of this as a fixed scalar, e.g., .
Corollary 5.6 (First uniform error bound in probability).
Let , be a finite-dimensional subspace with and be probability measures satisfying Assumption 3.1. Consider sample points drawn randomly and independently with , , where
and and are as in (3.1) and (5.7), respectively. Then the following hold with probability at least . For any and that is defined everywhere in , the weighted least-squares approximation is unique and satisfies
(5.12) |
where . Moreover, if is an orthonormal basis of , then the condition number of the least-squares matrix (3.8) satisfies .
This result follows immediately from Lemma 5.1 and Theorem 5.4 via the estimate
Now suppose that (for concreteness) and assume further that , a.e. . This will be the case in Section 6 when we construct near-optimal sampling measures. Then (5.12) reads
Hence the approximation is stable and -quasi-optimal. In some problems, the difference between the - and -norms may be of little consequence. For example, in the case of polynomial approximation of holomorphic functions in low dimensions, the best approximation error decays exponentially fast with respect to in either norm (see, e.g., [5, §3.5-3.6]). On the other hand, for high-dimensional holomorphic functions or functions of finite regularity in any dimension, the best approximation errors decay algebraically fast, with, typically, the -norm error decaying at least slower than the -norm error (see, e.g., [5, §3.8-3.9]). Thus, the crude bound (5.12) may underestimate the convergence rate of the least-squares approximation. Motivated by these considerations, we next discuss how to establish -quasi-optimality results.
Remark 5.7 (Uniform versus nonuniform)
Corollary 5.6 is a uniform result, in the sense that a single random draw of the sample points suffices for all functions. We next discuss a nonuniform results, in which the error bound holds with high probability for each fixed function. Uniform bounds are desirable in many applications, as it means that the same sample points (which may correspond to, e.g., sensor locations) can be re-used for approximating multiple functions. Theoretically, it also means that one can achieve worst-case error bounds. Indeed, let be a set of functions that are defined everywhere and for which
Typically, may be a unit ball of some Banach space – for example, the space of Sobolev functions . Then, ignoring noise for simplicity and assuming as before that and , a.e. , Corollary 5.6 implies the following uniform bound with high probability:
As we discuss in §8, this has implications in the study of sampling numbers in information-based complexity and the efficacy of pointwise samples (standard information).
Remark 5.8 (The term )
As an alternative to assuming that , a.e. , one may also bound the term by assuming that contains a function with and , a.e. . This holds, for example, whenever the constant function . In this case,
However, as noted in Remark 5.2, we can always construct so that the former assumption holds.
We now present a nonuniform ‘in probability’ bound that provides -quasi-optimality, at the expense of a poor scaling with respect to the failure probability . The proof is based on Markov’s inequality, which, roughly speaking, is used to bound the discrete error term arising in (5.4).
Corollary 5.9 (First nonuniform error bound in probability).
Let , , be a finite-dimensional subspace with and be probability measures satisfying Assumption 3.1. Consider sample points drawn randomly and independently with , , where
(5.13) |
and and are as in (3.1) and (5.7), respectively. Then the following hold with probability at least . For any , the weighted least-squares approximation is unique and satisfies
(5.14) |
Moreover, if is an orthonormal basis of , then .
Suppose again that and , . Then this bound implies that
While stable and -quasi-optimal, the scaling with respect to is undesirable. To obtain an -independent bound, this suggests we either need to be exponentially large in , or impose an additional constraint on that . Neither is a desirable outcome.
One possible way to circumvent this issue involves using Bernstein’s inequality instead of Markov’s inequality. This exploits the fact that the discrete term in (5.4) is a sum of independent random variables with bounded variance. It therefore concentrates exponentially fast (in ) around its mean (recall (3.5)). This leads to the following result, which is also nonuniform.
Corollary 5.10 (Second nonuniform error bound in probability).
This result asserts a mixed type of quasi-optimality, involving the -norm and a (weighted) -norm divided by the factor . Notice that the factor can be removed whenever , a.e. , as will be the case later. Therefore, consider, as in Remark 5.4, a case where the -norm best approximation error decays algebraically fast in . As we noted therein, the -norm best approximation error often decays slower than the former. Hence, one may choose in (5.15) to show that achieves the same algebraic convergence rate in -norm as the -norm best approximation in . This approach has also been used in the related context of function approximation via compressed sensing in [110] and [5, §7.6].
5.5 Error bounds in expectation
To obtain error bounds in expectation, we need to modify the least-squares estimator to avoid the ‘bad’ regime where the discrete stability constants can be poorly behaved. In this section, we proceed as in [47, §2.2], which is based on [34, 35].
Let be an orthonormal basis of . First, we notice that (5.10) holds whenever
where is the discrete Gram matrix
It is then possible to show the following bound.
Lemma 5.11.
This lemma can be used to construct two estimators with error bounds in expectation. The first is the conditioned estimator [35], which is defined as
Computing this estimator requires one to evaluate
Having done this, one simply sets when and otherwise.
The conditioned estimator has the disadvantage that it requires an orthonormal basis for to be known – a property that may not hold in practice (see §8). This can be avoided by using a truncated estimator. This approach assumes an a priori bound for of the form
for some known . Now define the truncation operator by
Then this estimator is defined as
Note that one can also construct a truncated estimator with respect to the other norms. The -norm has also been commonly used for this purpose [34, 47].
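The two modified estimators are simple to realize once the least-squares machinery is in place. The sketch below is schematic only: the acceptance test and the truncation rule are simplified stand-ins for the precise conditions in the text, and the threshold names are our own.

```python
import numpy as np

def conditioned_estimator(A, b, delta=0.5):
    """Return the least-squares coefficients if the discrete Gram matrix A^T A is
    within delta of the identity (in the spectral norm), and the zero estimator otherwise.
    Assumes A is built from an orthonormal basis with the sqrt(w/m) scaling."""
    G = A.T @ A
    if np.linalg.norm(G - np.eye(G.shape[1]), 2) <= delta:
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs
    return np.zeros(A.shape[1])

def truncate(values, L):
    """Pointwise truncation of the values of the least-squares fit, given an
    a priori bound L on the target function (one common choice of truncation)."""
    return np.sign(values) * np.minimum(np.abs(values), L)
```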
Theorem 5.12 (Nonuniform error bound in expectation).
Let , , be a finite-dimensional subspace with and be probability measures satisfying Assumption 3.1. Consider sample points drawn randomly and independently with , , where
(5.17) |
and and are as in (3.1) and (5.7), respectively. Then
The same bound holds for the , except with the final term replaced by .
Observe that the factor
as . Hence, this bound asserts -quasi-optimality of the two estimators (with constant approaching ) up to the term.
Remark 5.13 (Removing the term)
As discussed in [47], the additional term in the error bound may cause problems when striving to achieve rate-optimal bounds in the absence of noise. In particular, if the best approximation error decays exponentially fast (or faster) in for some then achieving the same rate for the least-squares approximation would require setting . This adds an additional multiplicative factor of in the sample complexity bound (5.17), thus prohibiting rate optimality (recall that ). One way to remove this term, which was introduced in [65], is to repeatedly redraw the sample points until the condition is met, and then use the resulting points to construct the weighted least-squares estimator . By doing this, one can achieve a similar error bound in expectation, except without the term (the term now only influences the expected number of redraws needed to achieve ). See [65, 47, 48] for further information.
6 Christoffel sampling
We now come to the crux of this article, which is to devise random sampling schemes that achieve near-optimal sample complexity bounds.
6.1 Optimal choice of weight function via the Christoffel function
The results shown in the previous section relate the number of measurements to the constant . Hence, our goal is to choose a weight function that minimizes this constant. Recall that
A natural first choice involves selecting
Applying the normalization condition (3.2) (recall that ) and the fact that (recall (5.6)), we obtain
(6.1) |
This choice is quite popular in the literature. However, it requires the additional assumption that almost everywhere. This is a rather mild assumption, which is equivalent to requiring that for almost every there exists a for which (in particular, it holds under the assumption made in Remark 5.4). If this holds, then (6.1) is the optimal choice of , since achieves the optimal lower bound (recall (5.9)).
However, this choice may not be desirable for the reasons considered in Remark 5.2. If is small at , then becomes large and potentially blows up the noise. Fortunately, this issue can be resolved, and the above assumption removed, by slightly changing to
(6.2) |
This avoids the assumption on and leads to a bounded weight function satisfying . The only cost is suboptimality by a factor of , i.e., . The reader will likely notice that one could replace the in (6.2) with a weighted combination for any , giving and . For simplicity, we consider the factor throughout.
This aside, having chosen as in (6.1) (one could also consider (6.2)), to achieve near-optimal sampling we wish to select measures such that (3.1) holds, i.e.,
(6.3) |
If the measures are chosen so that this holds, then the various sample complexity estimates of the previous section are near-optimal in . Indeed, letting , we see that the condition
suffices to ensure the various ‘in probability’ or ‘in expectation’ bounds given in the previous section.
6.2 Christoffel sampling
There are several ways to achieve (6.3). Arguably the simplest involves setting
(6.4) |
However, this strategy is not well suited to the case of hierarchical approximation (Remark 3.3). Indeed, let be the optimal sampling measure for , as defined in (6.4) with . Now suppose that the first points , , where is as in (6.4) for . Then we would like to recycle these points when constructing the second set of sample points . However, since in general, these points are drawn with respect to the wrong measure for near-optimal sampling in the subspace . Thus, it is not possible to achieve near-optimal sampling simply by augmenting the set with new points.
One strategy to overcome this limitation involves interpreting as an additive mixture involving and a certain update measure . One can then use this to construct a sampling procedure that recycles ‘most’ of the first points, while ensuring that the overall sample is drawn i.i.d. from [19]. An alternative approach, introduced in [86], involves choosing measures according to the individual basis functions. Let be an orthonormal basis of and suppose that for some . Then we define
(6.5) |
Observe that
due to (5.6). Hence (6.3) also holds for this choice, guaranteeing near-optimal sampling. Moreover, as each sampling measure corresponds to a single basis function, rather than an additive mixture of basis functions as in (6.4), this approach readily lends itself to hierarchical approximation. See [6, 86] for further details. Note that the distributions corresponding to these measures are known as induced distributions [96, 64], as they are induced by the orthonormal basis .
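When no closed-form sampler is available, a simple surrogate is to discretize. The sketch below, which is our own schematic construction in the spirit of the grid-based approach discussed in §8, draws points from a fine grid with probabilities proportional to the reciprocal Christoffel function of a Legendre space, assigns the weights w = n/K, and forms the resulting weighted least-squares fit; all names and parameters are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def orthonormal_legendre_matrix(n, t):
    cols = []
    for k in range(n):
        c = np.zeros(k + 1)
        c[k] = 1.0
        cols.append(np.sqrt(2 * k + 1) * legval(t, c))
    return np.column_stack(cols)

n, m, M = 10, 60, 20000
rng = np.random.default_rng(3)

# Fine grid of M points drawn i.i.d. from rho (uniform on [-1,1]) as a surrogate domain
grid = rng.uniform(-1.0, 1.0, M)
B = orthonormal_legendre_matrix(n, grid)
K = np.sum(B**2, axis=1)                      # reciprocal Christoffel function, via (5.6)

# Christoffel sampling: grid points drawn with probability proportional to K
p = K / K.sum()
idx = rng.choice(M, size=m, replace=True, p=p)
y, w = grid[idx], n / K[idx]                  # sample points and weights w = n / K

# Weighted least-squares fit of a test function
f = lambda t: 1.0 / (1.0 + 25.0 * t**2)
A = orthonormal_legendre_matrix(n, y) * np.sqrt(w / m)[:, None]
b = np.sqrt(w / m) * f(y)
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print("cond(A) =", np.linalg.cond(A))
```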
6.3 Uniform error bounds in probability
To conclude this section, we now describe how a further modification of the near-optimal sampling measure can lead to uniform bounds in probability that improve on the somewhat crude bounds shown in Corollary 5.6 and achieve something close to -quasi-optimality. This section is based on techniques developed in [72, 73] to estimate sampling numbers. See also §8 for additional discussion.
For this, we assume that there is an orthonormal basis and that
(6.6) |
For convenience, given let
(6.7) |
where is the th coefficient of . The second equality is due to Parseval’s identity. We now construct the measure. Define sets
and consider a sequence with . Then let
(6.8) |
Theorem 6.1.
Let , , be an orthonormal basis and be as in (6.6). Let and for , where is such that . Consider sample points , , where is as in (6.8) and
(6.9) |
Then the following holds with probability at least . For any that is defined everywhere in and for which and any noise , the weighted least-squares approximation is unique and satisfies
(6.10) |
where is a constant depending on and only. Moreover, the condition number of the least-squares matrix (3.8) satisfies .
To put this result into context, consider a class of functions that are defined everywhere on and for which
for some and . This holds, for instance, in the case of polynomial approximation when is a unit ball of functions of finite regularity. Then the right-hand side of (6.10) satisfies
Hence, with probability at least , one obtains a matching error bound for the least-squares estimator (up to constants), uniformly for functions , with near-optimal sample complexity.
7 Improvement over Monte Carlo sampling
Having derived near-optimal sampling strategies, in this section we discuss how this compares against standard Monte Carlo sampling, i.e., i.i.d. random sampling from the measure . Recall that in this case, the weight function is precisely , meaning that Monte Carlo sampling corresponds to an unweighted least-squares approximation. Theorem 5.4 asserts that the sample complexity of Monte Carlo sampling therefore depends on the unweighted quantity
(7.1) |
In other words, it is the maximal behaviour of the Christoffel function , or equivalently, the optimal constant in the unweighted Nikolskii-type inequality (5.8).
7.1 Bounded orthonormal systems
Recall that for any , and therefore, . On the other hand, if for some , then Monte Carlo sampling is already a near-optimal strategy, and there may be little need to optimize the sampling measure (besides reducing the constant ). This situation occurs whenever has an orthonormal basis that is uniformly bounded. Such a basis is sometimes referred to as a bounded orthonormal system (see, e.g., [58, Chpt. 12] or [5, Chpt. 6]). Specifically, if
(7.2) |
then it follows immediately from (5.6) that . Subspaces of trigonometric polynomials are a standard example for which this property holds (with ). Closely related are the Chebyshev polynomials of the first kind (see §4.2). The orthonormal Chebyshev polynomials on are defined by
Therefore they satisfy (7.2) . In dimensions, the tensor-product Chebyshev polynomial basis on satisfies (7.2) with . Hence it is also a bounded orthonormal system, albeit with a constant that grows exponentially fast as .
7.2 Arbitrarily-bad sample complexity bounds
Unfortunately, bounded orthonormal bases are quite rare in practice. It is also straightforward to construct subspaces for which can be arbitrarily large in comparison to . One way to do this involves the Legendre polynomials. Unlike the Chebyshev polynomials, the univariate Legendre polynomials grow with their degree, and satisfy (see, e.g., [5, §2.2.2])
(7.3) |
Therefore, for any set , the space has constant given by (this follows from (5.6) and (7.1)). The following result is now immediate.
Proposition 7.1.
There exists a probability space such that the following holds. For every and , there exists a subspace of dimension such that .
The reason for this poor behaviour is that Legendre polynomials are highly peaked near the endpoints . Specifically, the th such polynomial has absolute value at , but is uniformly within compact subsets of . This points towards a general observation. We expect poor scaling of , and therefore poor performance of Monte Carlo sampling, whenever has an orthonormal basis that is highly localized around a common shared point.
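The contrast between the two cases is easy to see numerically: the orthonormal Chebyshev polynomials are uniformly bounded by sqrt(2), whereas the orthonormal Legendre polynomials grow like sqrt(2k+1), with the maximum attained at the endpoints, as in (7.3). A minimal check (our own):

```python
import numpy as np
from numpy.polynomial.legendre import legval

t = np.linspace(-1.0, 1.0, 5001)
for k in [1, 5, 20, 50]:
    c = np.zeros(k + 1)
    c[k] = 1.0
    leg_k = np.sqrt(2 * k + 1) * legval(t, c)           # orthonormal Legendre
    cheb_k = np.sqrt(2.0) * np.cos(k * np.arccos(t))    # orthonormal Chebyshev (first kind)
    print(f"k={k:2d}  max|Legendre| = {np.abs(leg_k).max():6.2f}"
          f"  (sqrt(2k+1) = {np.sqrt(2 * k + 1):5.2f})"
          f"  max|Chebyshev| = {np.abs(cheb_k).max():.2f}")
```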
7.3 Sample complexity bounds for polynomial spaces
Following §4, we now describe a series of further bounds for in the case of multivariate polynomial spaces , focusing on the case where is a lower set.
Chebyshev polynomials. As noted previously, any subspace
of multivariate Chebyshev polynomials satisfies the exponentially-large (in ) bound . Fortunately, if is a lower set, then one has the -independent bound (see, e.g., [5, Prop. 5.13])
This bound is sharp up to a constant, i.e., there is a lower set with . For , this bound is sharper than the previous bounds .
Improvements such as this are typical when lower set structure is imposed – we will see another instance of this in a moment. It is one of the features that makes lower sets desirable for high-dimensional approximation. The underlying reason is that a lower set cannot contain too many ‘large’ (or even nonzero) indices.
Legendre polynomials. Because of (5.6), (7.1) and (7.3), for any subspace of Legendre polynomials, one has
which can be arbitrarily large in comparison to . However, lower set structure leads to a dramatic improvement over the general case. If is lower, then the following sharp upper bound holds (see, e.g., [5, Prop. 5.17])
(7.4) |
Ultraspherical and Jacobi polynomials. The situation for Jacobi polynomials with is similar to that of Legendre polynomials. The quantity can be made arbitrarily large for general . However, if is lower then one has the finite bound
(7.5) |
for Jacobi polynomials (see [85, Thm. 9]) and
(7.6) |
for ultraspherical polynomials (see [85, Thm. 8]).
Remark 7.2 (Sharpness of the rates)
The bounds (7.4)–(7.6) imply that a superlinear sample complexity suffices for stable and accurate polynomial approximation with random samples drawn from Jacobi measures. These rates are also necessary. This was recently shown in [127] in the case, based on earlier work [108, 15] on deterministic points that are equidistributed with respect to such measures. Specifically, [127] shows that choosing , where , necessarily implies exponential instability of the least-squares approximation (or, indeed, any other approximation method that achieves similar accuracy).
Bounded, non-tensor product domains. Several of these bounds generalize to bounded non-tensor product domains [109, 45, 14, 47]. If is the uniform measure and is bounded and satisfies the inner cone condition (see, e.g., [2, §4.11]), then
See [47, Thm. 5.4]. Thus, the same quadratic growth occurs for the total degree index set. Further, if has boundary, then [47, Thm. 5.6]
See [47] and references therein for further results in this direction. These bounds apply only to the total degree index set (4.2). For arbitrary lower sets, one has (see [14, Thm. 2.4])
whenever has the -rectangle property: there is a such that for any there exists an axis-aligned rectangle containing for which .
Hermite and Laguerre polynomials. Unfortunately, this analysis of Monte Carlo sampling says nothing about Hermite and Laguerre polynomial approximations, for the simple reason that such polynomials are not uniformly bounded, and therefore the constant (7.1) satisfies . The sample complexity of Hermite and Laguerre polynomial approximation is poorly understood in comparison to that of Jacobi polynomials. A number of empirical studies have suggested a super-polynomial or exponential sample complexity (in , for fixed ). But relatively few theoretical estimates exist. See [83, 64, 67, 113] and [5, Rem. 5.18]. Suffice it to say, though, Hermite and Laguerre polynomial approximations are examples where one stands to gain substantially from Christoffel sampling and, as such, they have often been used to illustrate its efficacy [64, 67, 35].
7.4 Summary
We summarize this section as follows. In general, the sample complexity of Monte Carlo sampling depends on the -norm of the Christoffel function, and it is easy to construct examples where this is arbitrarily large in comparison to . Moreover, these scenarios arise in various polynomial approximation problems, especially when considering unbounded domains.
8 Variations, extensions and further topics
To conclude this first part of the article, we now discuss a number of issues not considered so far, along with some variations and extensions.
Sampling from the near-optimal measures
A key practical issue is drawing samples from the measure (6.4) or (6.5) in a computationally efficient manner. As observed in [35], (6.4) is an additive mixture of measures of the form (6.5). Hence it is enough to be able to draw samples from the induced distributions , where is any orthonormal basis for . A prerequisite for this is, of course, the existence of an explicit orthonormal basis. This is often the case – for example, in the setting considered in §4.2, which involves tensor products of orthogonal polynomials, for which fast algorithms for sampling from the induced distributions exist [96] – but not always. One such case is polynomial approximation on nontensorial domains. Orthogonal polynomials can be defined explicitly for certain nontensorial domains, e.g., simplices, balls and spheres [52]. Yet, this is impossible in general.
A simple way to address this problem is to replace by a fine grid of points drawn i.i.d. from and replace the continuous measure by the discrete uniform measure supported on [6, 87]. Given a nonorthogonal basis for , now considered a subspace of , one may construct an orthonormal basis via numerical linear algebra tools such as classical QR factorizations or SVDs, or through more recent approaches such as V+A (Vandermonde with Arnoldi) [129]. Sampling from the Christoffel sampling measure is then straightforward, since it is also a discrete measure supported on .
This approach, which is a form of leverage score sampling (see next), ensures accurate and stable recovery in the -norm. To guarantee the same in the original -norm one needs to ensure that is large enough. Since is obtained by Monte Carlo sampling, this is governed, as in §7 and specifically (7.1), by the constant (7.1). This is another reason why analyzing the maximal behaviour of the Christoffel function is important, since it can inform the size of the grid needed in such a construction. See [6, 87] for further details.
Note that the computation of an orthonormal basis over is a purely offline cost. It does not involve additional evaluations of the target function , which are often the main computational bottleneck in practice. Of course, theoretical bounds for the Christoffel function may not be available or, if available, may result in a value of that is too large for practical computations. This has spurred several recent works [122, 47] which develop more refined strategies for constructing the grid than Monte Carlo sampling.
An alternative to this approach involves using a structured (e.g. Quasi-Monte Carlo) grid for [22]. This has the additional benefit of ensuring that the algebraic least-squares problem can be solved efficiently via Fast Fourier Transforms (FFT).
Connections to matrix sketching and leverage score sampling
Let , where , and . In applications in data science, it may be infeasible to solve the ‘full’ least squares problem . Matrix sketching involves using a sketching matrix (a matrix with one nonzero per row) and solving the sketched problem . The objective is to find with a minimal number of rows such that
In random sketching, one considers a discrete probability distribution on with , , draws and then sets and otherwise. A near-optimal choice of distribution is provided by the leverage scores , , of the matrix . These are given by
(8.1) |
where is any matrix whose columns form an orthonormal basis for . The resulting procedure is the well-known technique of leverage score sampling [50, 79, 124].
Leverage score sampling can be considered as a special case of Christoffel sampling involving the discrete set . Note that any vector can be viewed as a function via the relation and vice versa. One now defines the subspace to cast the above problem into the form introduced in §3. In particular, the values of the Christoffel function over the set are precisely the leverage scores (8.1) of the matrix . We refer to [8, Sec. A.2] and [80] for details.
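The following sketch illustrates leverage score sampling for the sketched problem described above: the scores are computed from a thin QR factorization (one choice of orthonormal basis for the column space, as in (8.1)), rows are drawn i.i.d. from the resulting distribution, and the sketched system is rescaled accordingly. The rescaling convention and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, k = 5000, 20, 200                    # tall N-by-n problem, sketch of k rows

A = rng.standard_normal((N, n)) * np.logspace(0, -3, N)[:, None]   # badly scaled rows
b = A @ rng.standard_normal(n) + 1e-3 * rng.standard_normal(N)

# Leverage scores via a thin QR factorization, as in (8.1): ell_i = ||Q[i, :]||_2^2
Q, _ = np.linalg.qr(A, mode="reduced")
scores = np.sum(Q**2, axis=1)              # the scores sum to n
p = scores / scores.sum()

# Random sketch: draw rows with probability p_i and rescale by 1/sqrt(k * p_i)
rows = rng.choice(N, size=k, replace=True, p=p)
scale = 1.0 / np.sqrt(k * p[rows])
A_sk, b_sk = scale[:, None] * A[rows], scale * b[rows]

x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sk, *_ = np.linalg.lstsq(A_sk, b_sk, rcond=None)
print("relative difference:", np.linalg.norm(x_sk - x_full) / np.linalg.norm(x_full))
```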
sampling strategies, frame subsampling and Kadison–Singer
The Christoffel sampling schemes described in this paper are near-optimal, in the sense that the sample complexity is log-linear in . Due to the coupon collector’s problem, this is the best achievable when using i.i.d. samples (see, e.g., [48, Rem. 3]).
This limitation has spurred a recent line of work on methods that have linear sample complexity in , i.e., optimal up to possible constants, and even schemes that can achieve interpolation, i.e., . This is based on techniques introduced in [81, 24] on frame subsampling. Here, given a frame of vectors in , one asks whether it is possible to extract a subset of size that, after potential reweighting, still constitutes a frame. This problem is closely related to Weaver’s conjecture (see [123, Thm. 2] or [81, Conj. 1.2]). Weaver’s conjecture (now theorem) is equivalent to the Kadison–Singer problem, and forms the basis of the proof in [81] for the latter. The connection to sampling for least-squares approximation comes from the Marcinkiewicz–Zygmund inequality (5.1), which can be recast as a frame condition for the vectors defined by , .
The work of [81] (as well as extensions due to [99]) has been used to show the existence of sampling points with sample complexity for an arbitrary subspace . See [77, 115, 94, 72, 73, 48, 49] and references therein. Unfortunately, these works are impractical, as the computational cost in constructing the subsample is (at least) exponential in . Fortunately, recent progress has been made by using the approach of [24], as well as techniques from [76], leading to practical algorithms that run in polynomial time. See [23] and [46] for two such approaches, as well as [32] for related work in the discrete setting. A significant result of [46] is that it provides polynomial-time algorithms that also work down to the interpolation regime , albeit with constants in the error bounds that grow algebraically with .
Sampling numbers and information-based complexity
Another major area of focus in the last several years has been the use of (weighted) least-squares approximation, Christoffel sampling and subsampling techniques to provide new results in information-based complexity [102, 101]. In this line of work, one considers a compact subset of a Banach space, then studies objects such as the (linear) sampling numbers for . These measure how well (in a worst-case sense over ) one can approximate functions in from samples using arbitrary (linear) reconstruction maps. New results have emerged that bound the sampling numbers (for arbitrary ) in terms of its Kolmogorov -width, i.e., the best approximation error (uniformly over ) that can be achieved by any -dimensional subspace. These results show that pointwise samples (known as standard information) can constitute near-optimal information for recovery. Some of the core ideas of this work can be found in Theorem 6.1, including the construction of the measure (6.8), which is due to [72, 73]. Note that sampling numbers are often formulated with respect to the -norm (as in this article), but recent works also consider other -norms – in particular, the uniform norm. For a selection of the many recent results in this direction, see [23, 49, 70, 48, 115, 77, 94, 72, 73] and references therein.
Sampling discretizations
In tandem with these efforts, there has also been a focus on the development and systematic study of sampling discretizations using these ideas, both in the -norm such as in (5.1) and in other -norms. We refer to [68, 116] for reviews, as well as [61, 39, 40, 77], and references therein. Note that -norm sampling discretizations are related to the construction of weakly admissible meshes. See [126] for recent work on weakly admissible meshes that employs similar techniques.
As we have seen, sampling discretizations are sufficient conditions for accurate and stable recovery via (weighted) least squares. However, they are also necessary conditions for stable recovery by any method. Modifying [4, Rem. 6.2], let be an arbitrary reconstruction map and suppose that is -accurate over , i.e.,
(8.2) |
Note that this holds with for least squares whenever , due to Lemma 5.1. Now let be any set of functions that are defined at the and suppose that . Then it is a short argument to show that the -Lipschitz constant
of the map restricted to the domain satisfies
where is the lower constant in the sampling discretization (5.1). It follows that a reconstruction map cannot be both accurate (even over , as in (8.2)) and stable without a sampling discretization.
Alternative sampling methods
Many other sampling methods have been proposed over the last decade, especially in the context of high-dimensional polynomial approximation. However, these generally lack near-optimal sample complexity guarantees. See [5, §8.1.1] and [66] and references therein for an overview of the many different approaches considered.
A limitation of Christoffel sampling is that i.i.d. points may cluster, thereby reducing the practical efficiency of the scheme. Most recently, a number of works have explored ideas such as volume sampling [44] using determinantal point processes to overcome this limitation. These are well-known concepts in machine learning [74, 43], in which non-independent samples are drawn from a measure that promotes repulsion between the points. This transpires to be closely related to Christoffel sampling, since the marginals of the sample points follow the same distribution. The application of volume sampling to least-squares approximation in arbitrary subspaces has been considered in [25, 121] for reproducing kernel Hilbert spaces and [100] for general spaces, along with its theoretical analysis and comparison with Christoffel sampling. Despite practical benefits, however, it is as yet unclear whether these methods offer theoretical advantages over Christoffel sampling [100].
Adaptive methods
Finally, we briefly mention the prospect of adaptive methods. While these methods typically lack full theoretical guarantees, they can prove extremely effective in practice. In a variation of Remark 3.3, in an adaptive scheme one also chooses each subspace adaptively based on the previous approximation . We term such a procedure an adaptive approximation scheme. This can be done using greedy methods [114], as in [84, 86] (which are themselves based on adaptive quadrature routines [60]), when given a dictionary of candidate basis functions to use to build the spaces . This aside, adaptive methods can also be used when constructing approximations in complex, nonlinear approximation spaces. See §10 for further discussion.
9 Beyond linear spaces and pointwise samples
Up to now, we have considered approximating an unknown function from a collection of pointwise samples in an -dimensional approximation space . In this final section, we introduce a general framework that significantly extends this setup. This section is primarily based on the framework introduced in [8], which was then further extended in [9]. Unlike in previous sections, our presentation will now be less thorough: we aim to convey the main ideas without the full details or variations. See [8, 9] for in-depth treatments, and [53, 120] for related work.
There are four main extensions we now address. First, the target object need not be a scalar-valued function, but simply an element of a Hilbert space . Second, the measurements arise as evaluations of arbitrary linear operators, which may be scalar-, vector- or function space-valued. Third, there may be multiple distinct processes generating the measurements. And finally, the approximation space need not be a finite-dimensional subspace of . Examples that motivate these generalizations are discussed in §9.2.
9.1 The general framework
Let be a probability space, be a separable Hilbert space and consider a normed vector subspace of , termed the object space. Let be the number of measurement processes. For each , let be a probability space, which we term the measurement domain, be a Hilbert space, which we term the measurement space, and
be a mapping from to the space of bounded linear operators , which we term the sampling operator.
For each , let be such that is a probability space. We assume that is absolutely continuous with respect to and its Radon–Nikodym derivative is strictly positive almost everywhere on . By definition
Note that, in what follows, we use or to denote the variable in , rather than . For convenience, we also define the weight function by , .
Let , where is the number of measurements in the th measurement process. With this setup in hand, we now draw samples independently with , , and consider the measurements
(9.1)
where is a noise term. Finally, we let be the approximation space (which now need not be a linear space) and we consider the empirical least-squares fit
(9.2)
Note that computing solutions to (9.2) may be very challenging numerically when is a nonlinear set. However, this is highly dependent on the choice of , and therefore not the focus of this article.
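To make the framework concrete, the following sketch (in Python, with illustrative names only) assembles and solves the empirical least-squares fit (9.2) in the simplest case where the model class is a linear span. It assumes the natural weighted objective, i.e., each residual block is scaled by the square root of the weight divided by the number of samples in that process, and that every sampling operator returns a finite measurement vector.

```python
import numpy as np

def general_weighted_lsq(basis, sample_sets, operators, weights, data):
    """Sketch of the empirical least-squares fit (9.2) when the model class is a
    *linear* span of the given basis elements. All argument names are illustrative:
      basis        : list of n elements of the object space
      sample_sets  : for each process c, the drawn sample points Y_1^c, ..., Y_{m_c}^c
      operators    : for each c, a callable y -> L_c(y), where L_c(y)(u) returns a
                     finite measurement vector (an assumed simplification)
      weights      : for each c, the weight function w_c
      data         : for each c, the noisy measurements d_1^c, ..., d_{m_c}^c
    Returns the coefficients of the least-squares fit in the given basis."""
    blocks, rhs = [], []
    for Y, L, w, d_c in zip(sample_sets, operators, weights, data):
        m_c = len(Y)
        for y, d in zip(Y, d_c):
            scale = np.sqrt(w(y) / m_c)
            Ly = L(y)
            # one block of rows: the sampling operator applied to each basis element
            blocks.append(scale * np.stack([np.atleast_1d(Ly(phi)) for phi in basis], axis=1))
            rhs.append(scale * np.atleast_1d(d))
    A = np.vstack(blocks)              # stacked design matrix, one row block per sample
    b = np.concatenate(rhs)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

In the single-process, pointwise-sampling case of example (i) below, where the sampling operator returns the value of the function at a point, this reduces to the usual (weighted) least-squares fit of §3.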
9.2 Examples
As shown in [8, 9], this framework includes many problems of practical relevance. We now summarize several such examples. We start by showing that it generalizes the setup of §3.
(i) Scalar-valued function approximation from pointwise samples. The setup of §3 can be considered in the general framework as follows. Let , , , , and (with the Euclidean inner product). We then define the sampling operator as the pointwise sampling operator for and . Note that the measurements (9.1) and least-squares fit (9.2) reduce to (3.3) and (3.4), respectively.
(ii) Function approximation from gradient-augmented samples. A simple modification of (i) involves sampling both the function and its gradient. This arises in various applications, including parametric PDEs and UQ [17, 107, 104, 103], seismology, Physics-Informed Neural Networks (PINNs) for PDEs [56, 128] and deep learning [38]. This problem can be cast into the general framework. Suppose that . We then let , be the Sobolev space of order one, , and be defined by .
The main difference with (i) is that the samples are vector-valued. Further generalizations are also possible. For example, in some cases it may be too expensive to evaluate the gradient at every point. Let be the number of function samples and be the number of function-plus-gradient samples. As shown in [8, Sec. B.7], we can consider this as a multimodal sampling problem with , , and sampling operators and .
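As a concrete, hedged illustration of this example in one dimension, the following sketch fits a Chebyshev expansion to function and gradient samples by stacking the corresponding row blocks (the basis, target function and sample sizes are placeholders, and the weights are set to one for simplicity).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hedged 1D illustration of gradient-augmented least squares: stack rows of basis
# values on top of rows of basis derivatives.
rng = np.random.default_rng(0)
n, m = 10, 60
y = rng.uniform(-1.0, 1.0, m)                        # i.i.d. sample points

f  = lambda t: np.exp(t) * np.sin(3 * t)             # illustrative target
df = lambda t: np.exp(t) * (np.sin(3 * t) + 3 * np.cos(3 * t))

V  = C.chebvander(y, n - 1)                          # values T_j(y_i)
dV = np.stack([C.chebval(y, C.chebder(np.eye(n)[j])) for j in range(n)], axis=1)  # T_j'(y_i)

A = np.vstack([V, dV])                               # function rows, then gradient rows
b = np.concatenate([f(y), df(y)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

t = np.linspace(-1, 1, 400)
print("max error:", np.max(np.abs(C.chebval(t, coef) - f(t))))
```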
(iii) Hilbert-valued function approximation from pointwise samples. In some applications, the unknown is a function taking values in a Hilbert space . This arises in, for instance, parametric PDEs and UQ, where is the parametric solution map of a PDE whose (weak) solutions take values in . Approximating such a function from pointwise samples is easily considered in this framework. We use the setup of the scalar-valued case described above, except with replaced by . One can also consider gradient-augmented samples, much as in the previous example. A further extension to this problem is that of operator learning [28, 69], in which the unknown is an operator between two Hilbert spaces. This also fits within this framework.
(iv) Image reconstruction. We now briefly describe a seemingly quite different problem, based on [8, Sec. C]. This example highlights that the general framework can handle both discrete and continuous settings, and measurements that do not arise as pointwise samples. Consider a discrete -dimensional image of size , which we may vectorize and express as a vector , where . Let be the matrix of the -dimensional discrete Fourier transform. In Fourier imaging [13], the goal is to recover from a subset of its frequencies. If , , is the set of frequencies sampled, then the measurements of are
where is noise and is a matrix that selects the rows of corresponding to the indices in . Fourier imaging arises in various applications, including Magnetic Resonance Imaging (MRI), Nuclear Magnetic Resonance (NMR) and radio interferometry [13]. A key question is how to best choose within any relevant physical constraints. As described in [8, Sec. C], this problem can be cast into the general framework. The framework can also handle various practical constraints – for instance, the fact that MR imaging devices cannot sample individual frequencies, but may only sample along piecewise smooth curves in frequency space, which can be handled by considering vector-valued measurements. Moreover, the framework can also handle the more advanced scenario of parallel MRI, where coils simultaneously acquire measurements.
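The following sketch illustrates the measurement model in this example: an orthonormal 2D discrete Fourier transform of a vectorized image, subsampled at frequencies drawn from a variable-density scheme (the image, density and noise level are placeholders chosen purely for illustration).

```python
import numpy as np

# Hedged sketch of subsampled Fourier measurements with a variable-density scheme.
rng = np.random.default_rng(1)
N = 64
X = rng.standard_normal((N, N))                       # placeholder image

kx, ky = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
p = 1.0 / (1.0 + (N * np.hypot(kx, ky)) ** 2)         # density decaying away from zero frequency
p /= p.sum()

m = 800                                               # number of sampled frequencies
idx = rng.choice(N * N, size=m, replace=False, p=p.ravel())

F = np.fft.fft2(X, norm="ortho").ravel()              # full orthonormal DFT, vectorized
e = 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
y = F[idx] + e                                        # noisy subsampled measurements
```

Recovery then proceeds by a least-squares (or regularized) fit over the chosen approximation space, with the frequency-selection operator playing the role of the sampling operator in (9.1).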
(v) Other examples. Various other families of problems can be considered within this framework. For instance, as discussed in [9], many standard measurement constructions in compressed sensing [58] become special cases of this approach. One can also readily consider related problems such as matrix completion and, more generally, matrix recovery from linear measurements [42]. Various other problems in sampling theory and signal processing, such as so-called mobile sampling [62], also fit into this framework. It can also incorporate recovery problems involving function averages [30], as well as techniques such as stratification and antithetics, which are common variance reduction techniques in Monte Carlo integration [105].
(vi) Nonlinear approximation spaces. Many recovery problems call for nonlinear approximation spaces. A standard example is the sparse regression problem. Here, one may consider the setup of §3 (or a more general setup, as in (ii)–(v)), but, rather than a linear subspace, one defines
where and is some known dictionary of functions. The sparse regression problem has been studied extensively (see [10] and references therein), especially in the context of dictionaries of polynomials, where it is termed sparse polynomial approximation [5]. However, many other nonlinear approximations are widely used in practice. A partial list includes various ‘structured’ sparse models, such as joint, group or block sparsity or sparsity in levels [21, 29, 51, 117], low-rank matrix or tensor models [42], single- [59] or multi-layer neural networks, tensor networks [119, 53], Fourier sparse functions [54] and spaces defined by generative models [27].
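As a hedged illustration of how one might fit such an s-sparse model in practice, the following sketch implements generic orthogonal matching pursuit over a dictionary matrix built from samples of the dictionary elements; it is an illustration only, and not the specific estimator analyzed in [10].

```python
import numpy as np

def omp(A, b, s):
    """Orthogonal matching pursuit: a greedy s-sparse least-squares fit.
    A : (m, N) matrix of dictionary elements evaluated at the sample points
    b : (m,) vector of (weighted) samples
    s : target sparsity"""
    residual, support = b.copy(), []
    coef_S = np.zeros(0)
    for _ in range(s):
        # select the dictionary column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # refit by least squares on the current support
        coef_S, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef_S
    x = np.zeros(A.shape[1])
    x[support] = coef_S
    return x
```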
9.3 The generalized Christoffel function
The key tool in our analysis is a generalization of the Christoffel function (Definition 5.3).
Definition 9.1 (Generalized Christoffel function).
Let be a normed vector space, be a Hilbert space, be a measure space, be such that the function is measurable for every and suppose that , . The generalized Christoffel function of with respect to is the function defined by
Notice that reduces to the standard Christoffel function (5.5) in the case of (i) above. In general, measures how large the measurement of an arbitrary can be (in norm) at an index in relation to the norm of . For instance, in the Fourier imaging problem (iv) it measures how large the Fourier transform can be at a given frequency for an element of the approximation space in relation to its norm. We remark in passing that inherits some of the properties of the standard Christoffel function. See, e.g., [8, Lem. E.1].
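When the model class is a linear subspace spanned by a basis that is orthonormal in the object space, the generalized Christoffel function at a given index equals the largest eigenvalue of the Gram matrix of the measured basis elements. The following sketch computes it under the (assumed) simplification that the sampling operator returns a finite measurement vector.

```python
import numpy as np

def generalized_christoffel(basis, L, y):
    """Hedged sketch of the generalized Christoffel function for a *linear* space
    spanned by a basis that is orthonormal in the object space:
        K(y) = sup { ||L(y)(u)||^2 / ||u||^2 : u in the span, u != 0 },
    which, under the assumption that L(y)(phi) returns a finite vector, equals the
    largest eigenvalue of the Gram matrix G_{jk} = <L(y)(phi_j), L(y)(phi_k)>."""
    B = np.stack([np.atleast_1d(L(y)(phi)) for phi in basis], axis=1)   # d x n
    G = B.conj().T @ B
    return float(np.linalg.eigvalsh(G)[-1])
```

In the scalar pointwise-sampling case of (i), where the operator returns the value of the function at a point, this expression reduces to the sum of the squared orthonormal basis functions, i.e., the standard Christoffel function (5.5).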
Much like in (5.7), given a nonnegative weight function , we also define
(9.3)
9.4 Theoretical guarantee
We now present a theoretical guarantee for this framework. We first require several assumptions.
Assumption 9.2 (Nondegeneracy of the sampling operators).
For each and , the map is measurable and is integrable. Moreover, there are constants such that
(9.4)
We remark in passing that the lower bound (9.4) is, in fact, only required to hold for , where is the difference set (see the proof of Theorem 9.4 below, as well as that of [9, Thm. E.2]). This assumption says that the action of the sampling operators preserves the norm of any , up to constants. Note that it holds trivially with in the standard problem (i), since in that case we have and
All other examples discussed in §9.2 can also be formulated so that this assumption holds.
Recall that in the standard case studied earlier, stability and accuracy are ensured by the sampling discretization (5.2). The middle term in this inequality is an empirical approximation to the -norm. An analogous construct is crucial to the analysis of this general setting, and arises by making an empirical approximation to the integrals in (9.4). Given , we say that empirical nondegeneracy holds for with constants if
(9.5)
This can be seen as a generalization of the well-known Restricted Isometry Property (RIP) in compressed sensing [58]. In the case of sparse regression (see (vi) above), it is sometimes termed a universal sampling discretization [41].
Assumption 9.3 (Union of subspaces model).
The set satisfies the following.
-
(i)
is a cone, i.e., for any and .
-
(ii)
, where each is a subspace of dimension .
This trivially holds with and when is an -dimensional subspace. In general, Assumption 9.3 is an extension of the union-of-subspaces model, which is well known in the context of compressed sensing [21, 51]. It includes many of the examples of nonlinear model classes used in practice, such as the sparse regression problem and many of its generalizations (see (vi) above).
Finally, for the result below we also need a truncation operator. Similar to §5.5, this is the operator defined by , where . Given a minimizer of (9.2), we define . Note that we do not consider the other estimator introduced in §5.5, although it could readily be formulated in this setting. The reason is that when is a nonlinear space, it is generally not possible to certify (in polynomial time) that (9.5) holds, which, like (5.2), is the key ingredient for accurate and stable recovery. In sparse recovery, for instance, this is equivalent to certifying that a given matrix has the RIP – a well-known NP-hard problem.
Theorem 9.4.
9.5 Christoffel sampling
Much as in §6, we can use Theorem 9.4 to optimize the sampling measures . Let in the case of (9.6) or in the case of (9.7). Then, using (9.3), we now choose
which gives the sampling measures
We term this Christoffel sampling. (Note that we may assume without loss of generality that . If not, the sampling operator simply yields zero measurements over the space almost everywhere, and can therefore be excluded. Nondegeneracy (9.4) implies that there is at least one sampling operator yielding nonzero measurements over .) Substituting this into (9.6) yields the measurement condition
(9.8)
or, in the case of (9.7),
(9.9)
This approach is ‘optimal’ in the sense that it minimizes (up to a factor of ) the bound (9.6) over all possible sampling measures . When is an -dimensional subspace – in which case and in Assumption 9.3 can be chosen as – it is a short argument using the nondegeneracy condition (9.4) to see that
(see [8, Cor. 4.7]). Hence, if each is chosen proportional to the right-hand side in (9.9), then the total number of measurements satisfies the near-optimal log-linear scaling
Unfortunately, in the general case of a nonlinear set , there is no clear way to relate the integral in (9.8) to explicit quantities such as and . However, it is possible to show the bound
(see [8, Cor. 4.7] once more), where we recall that is the number of subspaces in Assumption 9.3(ii). For fixed and small , this is near-optimal. However, in cases such as sparse regression, . Fortunately, a more refined analysis is possible in these cases. See [10, 8, 9] for discussion.
While it is difficult to provide explicit measurement conditions in the general case, it is possible to gain some insight into why Christoffel sampling, in general, improves over Monte Carlo sampling, i.e., the case where , . Since in this case, (9.3) and Theorem 9.4 provide measurement conditions for Monte Carlo sampling of the form
and likewise in the case of (9.7). Therefore, comparing with (9.8), the improvement of Christoffel sampling can, in general terms, be equated to the difference between the supremum of the (generalized) Christoffel function and its integral (mean). In particular, if this function is sharply peaked, then we expect significant improvements, while if it is approximately flat, then we expect less improvement. Such observations are witnessed in numerical experiments [8, 9].
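As a simple, hedged numerical illustration of this comparison, consider the setting of example (i) with the uniform measure on $[-1,1]$ and the span of the first $n$ Legendre polynomials. The Christoffel function then has mean $n$ but supremum $n^2$, and the sketch below also draws Christoffel samples by rejection sampling.

```python
import numpy as np
from numpy.polynomial import legendre

# Uniform measure on [-1,1]; orthonormal basis phi_j = sqrt(2j+1) P_j, so that
# K(y) = sum_j (2j+1) P_j(y)^2, with mean n and supremum n^2 (attained at y = +/-1).
n = 20

def K(y):
    V = legendre.legvander(np.atleast_1d(y), n - 1)          # P_j(y)
    return (V**2 * (2 * np.arange(n) + 1)).sum(axis=1)

grid = np.linspace(-1, 1, 10001)
print("mean of K ~", K(grid).mean(), "  sup of K =", K(1.0)[0])   # ~ n versus n^2

# Draw from the Christoffel sampling measure dmu = (K/n) drho by rejection sampling
rng = np.random.default_rng(0)
samples = []
while len(samples) < 1000:
    t = rng.uniform(-1, 1)
    if rng.uniform() < K(t)[0] / n**2:                       # accept with prob K(t)/sup K
        samples.append(t)
samples = np.array(samples)                                  # clusters near the endpoints
```

The factor-$n$ gap between the supremum and the mean is precisely the improvement in the corresponding measurement condition that Christoffel sampling offers over Monte Carlo sampling in this case.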
10 Conclusions and outlook
In this article, we have surveyed recent advances in optimal sampling for least-squares approximation, focusing on the key role that the Christoffel function plays in both understanding the sample complexity in general, and in constructing near-optimal sampling strategies. We have also seen in §9 how these ideas naturally extend to more general types of measurements and play a key role in the sample complexity even for nonlinear spaces. We now offer some concluding thoughts.
First, although the picture for pointwise sampling in linear spaces is increasingly mature, there remain various open questions. While optimal (i.e., ) sampling strategies that are practical (i.e., implementable in polynomial time) are now known (recall §8), future investigations are needed on their practical efficacy, especially in the interpolation regime. There are also open questions about uniform recovery, which are especially relevant to sampling numbers and questions in information-based complexity. Finally, the question of optimal sampling with hierarchical or adaptive schemes has not yet been addressed.
By contrast, nonlinear approximation spaces pose many more open problems. First, even in relatively well-studied settings such as sparse regression, it is unknown whether Christoffel sampling generally leads to near-optimal sample complexity bounds. This issue is discussed in detail in [10]. Less is known about more complicated nonlinear spaces. See [8, 9] for discussion. Second, there is also the practical matter of drawing samples from the resulting Christoffel sampling measure. This is very dependent on the particular nonlinear space and samples under consideration, and may well be highly nontrivial. Finding practical surrogate sampling measures is an interesting open problem.
In the nonlinear setting, it is worth noting that Christoffel sampling is well suited only when the approximation space has low intrinsic complexity that is comparable to the number of samples (e.g., in the case where is a linear space with ). It is not well suited for ‘complex’ approximation spaces, such as spaces of deep neural networks [8] or low-rank tensor networks [53]. However, Christoffel sampling can still be implemented in an adaptive manner in such cases. Here, one alternates between adding samples and learning an approximation, and at each stage uses a linearization to obtain an intermediate linear approximation space over which Christoffel sampling can be performed. This idea was developed for approximating functions and solving PDEs via deep learning in [7] and [8, Sec. D], and later extended to more general approximation spaces in [63].
Finally, it is worth noting that Christoffel sampling, in any of its guises, is not a panacea. Depending on the problem or target function, there may be little or no benefit over standard Monte Carlo sampling. This is relevant in various applications, as Monte Carlo samples are often encountered in practice [3]. This leads to another interesting line of research, namely understanding function classes for which Monte Carlo sampling is near-optimal. One such case is holomorphic function approximation in high dimensions. In a series of works [3, 12, 11], it has been shown that Monte Carlo sampling is near-optimal information for classes of infinite-dimensional holomorphic functions arising in parametric DE problems. In particular, Monte Carlo sampling yields error bounds that are within a polylogarithmic factor of the adaptive -width for such classes. For other work in this direction involving functions in Sobolev spaces, see [71].
Acknowledgements
The idea for this article came from a talk given at the 2023 Foundations of Computational Mathematics conference. The author would like to thank the conference organizers, as well as the various participants for insightful comments and questions. Parts of this article were written during a visit to the Isaac Newton Institute for Mathematical Sciences, Cambridge for the programme Discretization and recovery in high-dimensional spaces. The author would like to thank the institute for support and hospitality, and the programme participants for providing a stimulating environment. Finally, he would like to thank Simone Brugiapaglia, Matthew J. Colbrook, Maksym Neyra–Nesterenko and Daniel Fassler for helpful feedback.
This work was supported by EPSRC grant number EP/R014604/1 and NSERC grant number RGPIN-2021-611675.
References
- [1] D. Ackerer and D. Filipović. Option pricing with orthogonal polynomial expansions. Math. Finance, 30(1):47–84, 2019.
- [2] R. A. Adams and J. J. F. Fournier. Sobolev Spaces, volume 140 of Pure Appl. Math. (Amst.). Academic Press, Amsterdam, Netherlands, 2nd edition, 2003.
- [3] B. Adcock and S. Brugiapaglia. Monte Carlo is a good sampling strategy for polynomial approximation in high dimensions. arXiv:2208.09045, 2023.
- [4] B. Adcock, S. Brugiapaglia, N. Dexter, and S. Moraga. Learning smooth functions in high dimensions: from sparse polynomials to deep neural networks. In S. Mishra and A. Townsend, editors, Numerical Analysis Meets Machine Learning, volume 25 of Handbook of Numerical Analysis, pages 1–52. Elsevier, 2024.
- [5] B. Adcock, S. Brugiapaglia, and C. G. Webster. Sparse Polynomial Approximation of High-Dimensional Functions. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2022.
- [6] B. Adcock and J. M. Cardenas. Near-optimal sampling strategies for multivariate function approximation on general domains. SIAM J. Math. Data Sci., 2(3):607–630, 2020.
- [7] B. Adcock, J. M. Cardenas, and N. Dexter. CAS4DL: Christoffel adaptive sampling for function approximation via deep learning. Sampl. Theory Signal Process. Data Anal., 20:21, 2022.
- [8] B. Adcock, J. M. Cardenas, and N. Dexter. CS4ML: A general framework for active learning with arbitrary data based on Christoffel functions. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 19990–20037, 2023.
- [9] B. Adcock, J. M. Cardenas, and N. Dexter. A unified framework for learning with nonlinear model classes from arbitrary linear samples. In International Conference on Machine Learning, 2024.
- [10] B. Adcock, J. M. Cardenas, N. Dexter, and S. Moraga. Towards optimal sampling for learning sparse approximation in high dimensions, chapter 2, pages 9–77. Number 191 in Springer Optim. Appl. Springer, Cham, 2022.
- [11] B. Adcock, N. Dexter, and S. Moraga. Optimal approximation of infinite-dimensional holomorphic functions II: recovery from i.i.d. pointwise samples. arXiv:2310.16940, 2023.
- [12] B. Adcock, N. Dexter, and S. Moraga. Optimal approximation of infinite-dimensional holomorphic functions. Calcolo, 61(1):12, 2024.
- [13] B. Adcock and A. C. Hansen. Compressive Imaging: Structure, Sampling, Learning. Cambridge University Press, Cambridge, UK, 2021.
- [14] B. Adcock and D. Huybrechs. Approximating smooth, multivariate functions on irregular domains. Forum Math. Sigma, 8:e26, 2020.
- [15] B. Adcock, R. Platte, and A. Shadrin. Optimal sampling rates for approximating analytic functions from pointwise samples. IMA J. Numer. Anal., 39(3):1360–1390, 2019.
- [16] A. Alaoui and M. W. Mahoney. Fast randomized kernel ridge regression with statistical guarantees. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
- [17] A. K. Alekseev, I. M. Navon, and M. E. Zelentsov. The estimation of functional uncertainty using polynomial chaos and adjoint equations. Internat. J. Numer. Methods Fluids, 67(3):328–341, 2011.
- [18] J. Antezana, D. Carbajal, and J. L. Romero. Random periodic sampling patterns for shift-invariant spaces. IEEE Trans. Inf. Theory, 70(6):3855–3863, 2024.
- [19] B. Arras, M. Bachmayr, and A. Cohen. Sequential sampling for optimal weighted least squares approximations in hierarchical spaces. SIAM J. Math. Data Sci., 1(1):189–207, 2019.
- [20] H. Avron, M. Kapralov, C. Musco, C. Musco, A. Velingker, and A. Zandieh. A universal sampling method for reconstructing signals with simple Fourier transforms. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, pages 1051–1063, New York, NY, USA, 2019. Association for Computing Machinery.
- [21] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hedge. Model-based compressive sensing. IEEE Trans. Inf. Theory, 56(4):1982–2001, 2010.
- [22] F. Bartel, L. Kämmerer, D. Potts, and T. Ullrich. On the reconstruction of functions from values at subsampled quadrature points. Math. Comp., 93(346):785–809, 2024.
- [23] F. Bartel, M. Schäfer, and T. Ullrich. Constructive subsampling of finite frames with applications in optimal function recovery. Appl. Comput. Harmon. Anal., 65:209–248, 2023.
- [24] J. Batson, D. A. Spielman, and N. Srivastava. Twice-Ramanujan sparsifiers. SIAM Rev., 56(2):315–334, 2014.
- [25] A. Belhadji, R. Bardenet, and P. Chainais. Signal reconstruction using determinantal sampling. arXiv:2310.0943, 2023.
- [26] M. Berveiller, B. Sudret, and M. Lemaire. Stochastic finite element: a non intrusive approach by regression. Eur. J. Comput. Mech., 15(1-3):81–92, 2006.
- [27] A. Bora, A. Jalal, E. Price, and A. G. Dimakis. Compressed sensing using generative models. In International Conference on Machine Learning, pages 537–546, 2017.
- [28] N. Boullé and A. Townsend. A mathematical guide to operator learning. In S. Mishra and A. Townsend, editors, Numerical Analysis Meets Machine Learning, volume 25 of Handbook of Numerical Analysis, pages 83–125. Elsevier, 2024.
- [29] A. Bourrier, M. E. Davies, T. Peleg, P. Pérez, and R. Gribonval. Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems. IEEE Trans. Inf. Theory, 60(12):7928–7946, 2014.
- [30] L. B. Bruno and W. Erb. Polynomial interpolation of function averages on interval segments. SIAM J. Numer. Anal., 62(4):1759–1781, 2024.
- [31] S. Chen, R. Varma, A. Singh, and J. Kovačević. A statistical perspective of sampling scores for linear regression. In 2016 IEEE International Symposium on Information Theory (ISIT), pages 1556–1560, 2016.
- [32] X. Chen and E. Price. Active regression via linear-sample sparsification. In A. Beygelzimer and D. Hsu, editors, Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pages 663–695. PMLR, 2019.
- [33] A. Chkifa, A. Cohen, G. Migliorati, F. Nobile, and R. Tempone. Discrete least squares polynomial approximation with random evaluations – application to parametric and stochastic elliptic PDEs. ESAIM. Math. Model. Numer. Anal., 49(3):815–837, 2015.
- [34] A. Cohen, M. A. Davenport, and D. Leviatan. On the stability and accuracy of least squares approximations. Found. Comput. Math., 13:819–834, 2013.
- [35] A. Cohen and G. Migliorati. Optimal weighted least-squares methods. SMAI J. Comput. Math., 3:181–203, 2017.
- [36] A. Cohen and G. Migliorati. Near-optimal approximation methods for elliptic PDEs with lognormal coefficients. Math. Comp., 92(342):1665–1691, 2023.
- [37] Albert Cohen and Ronald DeVore. Approximation of high-dimensional parametric PDEs. Acta Numer., 24:1–159, 2015.
- [38] W. M. Czarnecki, S. Osindero, M. Jaderberg, G. Swirszcz, and R. Pascanu. Sobolev training for neural networks. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
- [39] F. Dai, A. Prymak, A. Shadrin, V. Temlyakov, and S. Tikhonov. Entropy numbers and Marcinkiewicz-type discretization. J. Funct. Anal., 281(6):109090, 2021.
- [40] F. Dai, A. Prymak, A. Shadrin, V. Temlyakov, and S. Tikhonov. Sampling discretization of integral norms. Constr. Approx., 54(3):455–471, 2021.
- [41] F. Dai and V. Temlyakov. Universal sampling discretization. Constr. Approx., 58:589–613, 2023.
- [42] M. A. Davenport and J. Romberg. An overview of low-rank matrix recovery from incomplete observations. IEEE J. Sel. Topics Signal Process., 10(4):602–622, 2016.
- [43] M. Derezinski, M. K. Warmuth, and D. J. Hsu. Leveraged volume sampling for linear regression. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
- [44] A. Deshpande, L. Rademacher, S. Vempala, and G. Wang. Matrix approximation and projective clustering via volume sampling. Theory Comput., 2(1):225–247, 2006.
- [45] Z. Ditzian. On Nikol’skii inequalities for domains in . Constr. Approx., 44:23–51, 2016.
- [46] M. Dolbeault and A. Chkifa. Randomized least-squares with minimal oversampling and interpolation in general spaces. SIAM J. Numer. Anal., 62(4):1515–1538, 2024.
- [47] M. Dolbeault and A. Cohen. Optimal sampling and Christoffel functions on general domains. Constr. Approx., 56:121–163, 2021.
- [48] M. Dolbeault and A. Cohen. Optimal pointwise sampling for approximation. J. Complexity, 68:101602, 2022.
- [49] M. Dolbeault, D. Krieg, and M. Ullrich. A sharp upper bound for sampling numbers in . Appl. Comput. Harmon. Anal., 63:113–134, 2023.
- [50] P. Drineas, M. Magdon-Ismail, M. W. Mahoney, and D. P. Woodruff. Fast approximation of matrix coherence and statistical leverage. J. Mach. Learn. Res., 13:3475–4506, 2012.
- [51] M. F. Duarte and Y. C. Eldar. Structured compressed sensing: from theory to applications. IEEE Trans. Signal Process., 59(9):4053–4085, 2011.
- [52] C. F. Dunkl and Y. Xu. Orthogonal Polynomials of Several Variables. Cambridge University Press, 2nd edition, 2014.
- [53] M. Eigel, R. Schneider, and P. Trunschke. Convergence bounds for empirical nonlinear least-squares. ESAIM Math. Model. Numer. Anal., 56(1):79–104, 2022.
- [54] T. Erdelyi, C. Musco, and C. Musco. Fourier sparse leverage scores and approximate kernel learning. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 109–122. Curran Associates, Inc., 2020.
- [55] M. Fanuel, J. Schreurs, and J. A. K. Suykens. Nyström landmark sampling and regularized Christoffel functions. Mach. Learn., 111:2213–2254, 2022.
- [56] X. Feng and L. Zeng. Gradient-enhanced deep neural network approximations. J. Mach. Learn. Model. Comput., 3(4):73–91, 2022.
- [57] D. Filipović, K. Glau, Y. Nakatsukasa, and F. Statti. Weighted Monte Carlo with least squares and randomized extended Kaczmarz for option pricing. arXiv:1910.07241, 2019.
- [58] S. Foucart and H. Rauhut. A Mathematical Introduction to Compressive Sensing. Appl. Numer. Harmon. Anal. Birkhäuser, New York, NY, 2013.
- [59] A. Gajjar, C. Musco, and C. Hegde. Active learning for single neuron models with Lipschitz non-linearities. In F. Ruiz, J. Dy, and J.-W. van de Meent, editors, Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, pages 4101–4113. PMLR, 2023.
- [60] T. Gerstner and M. Griebel. Dimension-adaptive tensor-product quadrature. Computing, 71:65–87, 2003.
- [61] K. Gröchenig. Sampling, Marcinkiewicz—Zygmund inequalities, approximation, and quadrature rules. J. Approx. Theory, 257:105455, 2020.
- [62] K. Gröchenig, J. L. Romero, J. Unnikrishnan, and M. Vetterli. On minimal trajectories for mobile sampling of bandlimited fields. Appl. Comput. Harmon. Anal., 39(3):487–510, 2015.
- [63] R. Gruhlke, A. Nouy, and P. Trunschke. Optimal sampling for stochastic and natural gradient descent. arXiv:2402.03113, 2024.
- [64] L. Guo, A. Narayan, and T. Zhou. Constructing least-squares polynomial approximations. SIAM Rev., 62(2):483–508, 2020.
- [65] C. Haberstich, A. Nouy, and G. Perrin. Boosted optimal weighted least-squares. Math. Comp., 91:1281–1315, 2022.
- [66] M. Hadigol and A. Doostan. Least squares polynomial chaos expansion: a review of sampling strategies. Comput. Methods Appl. Mech. Engrg., 332:382–407, 2018.
- [67] J. Hampton and A. Doostan. Coherence motivated sampling and convergence analysis of least squares polynomial chaos regression. Comput. Methods Appl. Mech. Engrg., 290:73–97, 2015.
- [68] B. Kashin, E. Kosov, I. Limonova, and V. Temlyakov. Sampling discretization and related problems. J. Complexity, 71:101653, 2022.
- [69] N. B. Kovachki, S. Lanthaler, and A. M. Stuart. Operator learning: algorithms and analysis. In S. Mishra and A. Townsend, editors, Numerical Analysis Meets Machine Learning, volume 25 of Handbook of Numerical Analysis, pages 419–467. Elsevier, 2024.
- [70] D. Krieg, K. Pozharska, M. Ullrich, and T. Ullrich. Sampling recovery in and other norms. arXiv:2305.07539, 2023.
- [71] D. Krieg and M. Sonnleitner. Random points are optimal for the approximation of Sobolev functions. IMA J. Numer. Anal., 44(3):1346–1371, 2024.
- [72] D. Krieg and M. Ullrich. Function values are enough for -approximation. Found. Comput. Math., 21(4):1141–1151, 2021.
- [73] D. Krieg and M. Ullrich. Function values are enough for -approximation: Part II. J. Complexity, 66:101569, 2021.
- [74] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2-3):123–286, 2012.
- [75] J. B. Lasserre, E. Pauwels, and M. Putinar. The Christoffel–Darboux Kernel for Data Analysis. Cambridge University Press, Cambridge, UK, 2022.
- [76] Y. T. Lee and H. Sun. Constructing linear-sized spectral sparsification in almost-linear time. SIAM J. Comput., 47(6):2315–2336, 2018.
- [77] I. Limonova and V. Temlyakov. On sampling discretization in . J. Math. Anal. Appl., 515(2):126457, 2022.
- [78] P. Ma, M. W. Mahoney, and B. Yu. A statistical perspective on algorithmic leveraging. J. Mach. Learn. Res., 16:861–911, 2015.
- [79] M. W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3:123–224, 2011.
- [80] O. Malik, Y. Xu, N. Cheng, S. Becker, A. Doostan, and A. Narayan. Fast algorithms for monotone lower subsets of Kronecker least squares problems. arXiv:2209.05662, 2022.
- [81] A. W. Marcus, D. A. Spielman, and N. Srivastava. Interlacing families II: mixed characteristic polynomials and the Kadison–Singer problem. Ann. Math., 182(1):327–350, 2015.
- [82] P.-G. Martinsson and J. A. Tropp. Randomized numerical linear algebra: foundations and algorithms. Acta Numer., 32:403–572, 2020.
- [83] G. Migliorati. Polynomial approximation by means of the random discrete projection and application to inverse problems for PDEs with stochastic data. PhD thesis, Politecnico di Milano, 2013.
- [84] G. Migliorati. Adaptive polynomial approximation by means of random discrete least squares. In Assyr Abdulle, Simone Deparis, Daniel Kressner, Fabio Nobile, and Marco Picasso, editors, Numerical Mathematics and Advanced Applications – ENUMATH 2013, pages 547–554, Cham, Switzerland, 2015. Springer.
- [85] G. Migliorati. Multivariate Markov-type and Nikolskii-type inequalities for polynomials associated with downward closed multi-index sets. J. Approx. Theory, 189:137–159, 2015.
- [86] G. Migliorati. Adaptive approximation by optimal weighted least squares methods. SIAM J. Numer. Anal, 57(5):2217–2245, 2019.
- [87] G. Migliorati. Multivariate approximation of functions on irregular domains by weighted least-squares methods. IMA J. Numer. Anal., 41(2):1293–1317, 2021.
- [88] G. Migliorati and F. Nobile. Stable high-order randomized cubature formulae in arbitrary dimension. J. Approx. Theory, 275:105706, 2022.
- [89] G. Migliorati, F. Nobile, and R. Tempone. Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points. J. Multivariate Anal., 142:167–182, 2015.
- [90] G. Migliorati, F. Nobile, E. von Schwerin, and R. Tempone. Approximation of quantities of interest in stochastic PDEs by the random discrete projection on polynomial spaces. SIAM J. Sci. Comput., 35(3):A1440–A1460, 2013.
- [91] G. Migliorati, F. Nobile, E. von Schwerin, and R. Tempone. Analysis of the discrete projection on polynomial spaces with random evaluations. Found. Comput. Math., 14:419–456, 2014.
- [92] G. V. Milovanović, D. S. Mitrinović, and M. T. Rassias. Topics in Polynomials: Extremal Problems, Inequalities, Zeros. World Scientific, Singapore, 1994.
- [93] C. Musco and C. Musco. Recursive Sampling for the Nyström Method. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
- [94] N. Nagel, M. Schäfer, and T. Ullrich. A new upper bound for sampling numbers. Found. Comput. Math. (in press), 2021.
- [95] Y. Nakatsukasa. Approximate and integrate: variance reduction in Monte Carlo integration via function approximation. arXiv:1806.05492, 2018.
- [96] A. Narayan. Computation of induced orthogonal polynomial distributions. Electron. Trans. Numer. Anal., 50:71–97, 2018.
- [97] A. Narayan and T. Zhou. Stochastic collocation on unstructured multivariate meshes. Commun. Comput. Phys., 18(1):1–36, 2015.
- [98] P. Nevai. Géza Freud, orthogonal polynomials and Christoffel functions. A case study. J. Approx. Theory, 48(1):3–167, 1986.
- [99] S. Nitzan, A. Olevskii, and A. Ulanovskii. Exponential frames on unbounded sets. Proc. Amer. Math. Soc., 144(1):109–118, 2016.
- [100] A. Nouy and B. Michel. Weighted least-squares approximation with determinantal point processes and generalized volume sampling. arXiv:2312.14057, 2023.
- [101] E. Novak and H. Woźniakowski. Tractability of Multivariate Problems, Volume I: Linear Information. Number 6 in EMS Tracts in Mathematics. European Mathematical Society Publishing House, Zürich, Switzerland, 2008.
- [102] E. Novak and H. Woźniakowski. Tractability of Multivariate Problems, Volume II: Standard Information for functionals. Number 12 in EMS Tracts in Mathematics. European Mathematical Society Publishing House, Zürich, Switzerland, 2010.
- [103] T. O’Leary-Roseberry, P. Chen, U. Villa, and O. Ghattas. Derivative-informed neural operator: an efficient framework for high-dimensional parametric derivative learning. J. Comput. Phys., 496:112555, 2024.
- [104] T. O’Leary-Roseberry, U. Villa, P. Chen, and O. Ghattas. Derivative-informed projected neural networks for high-dimensional parametric maps governed by PDEs. Comput. Methods Appl. Mech. Engrg., 388(1):114199, 2022.
- [105] A. B. Owen. Monte Carlo Theory, Methods and Examples. https://artowen.su.domains/mc/, 2013.
- [106] E. Parolin, D. Huybrechs, and A. Moiola. Stable approximation of Helmholtz solutions in the disk by evanescent plane waves. ESAIM Math. Model. Numer. Anal., 57:3499–3536, 2023.
- [107] J. Peng, J. Hampton, and A. Doostan. On polynomial chaos expansion via gradient-enhanced -minimization. J. Comput. Phys., 310:440–458, 2016.
- [108] R. Platte, L. N. Trefethen, and A. Kuijlaars. Impossibility of fast stable approximation of analytic functions from equispaced samples. SIAM Rev., 53(2):308–318, 2011.
- [109] A. Prymak and O. Usoltseva. Christoffel functions on planar domains with piecewise smooth boundary. Acta Math. Hungar., 158:216–234, 2019.
- [110] H. Rauhut and R. Ward. Interpolation via weighted minimization. Appl. Comput. Harmon. Anal., 40(2):321–351, 2016.
- [111] S. M. Stigler. Gauss and the invention of least squares. Ann. Statist., pages 465–474, 1981.
- [112] G. Szegö. Orthogonal Polynomials, volume 23 of Amer. Math. Soc. Colloq. Publ. American Mathematical Society, Providence, RI, 4th edition, 1975.
- [113] T. Tang and T. Zhou. On discrete least-squares projection in unbounded domain with random evaluations and its application to parametric uncertainty quantification. SIAM J. Sci. Comput., 36(5):A2272–A2295, 2014.
- [114] V. Temlyakov. Greedy Approximation. Cambridge University Press, Cambridge, UK, 2011.
- [115] V. Temlyakov. On optimal recovery in . J. Complexity, 65:101545, 2021.
- [116] V. N. Temlyakov. The Marcinkiewicz-type discretization theorems. Constr. Approx., 48(2):337–369, 2018.
- [117] Y. Traonmilin and R. Gribonval. Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all. Appl. Comput. Harmon. Anal., 45(1):170–205, 2018.
- [118] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Found. Comput. Math., 12:389–434, 2012.
- [119] P. Trunschke. Convergence bounds for nonlinear least squares and applications to tensor recovery. arXiv:2108.05237, 2021.
- [120] P. Trunschke. Convergence bounds for local least squares approximation. arXiv:2208.10954, 2022.
- [121] P. Trunschke and A. Nouy. Almost sure quasi-optimal approximation in reproducing kernel Hilbert spaces. arXiv:2407.0667, 2024.
- [122] P. Trunschke and A. Nouy. Optimal sampling for least squares approximation with general dictionaries. arXiv:2407.0781, 2024.
- [123] N. Weaver. The Kadison–Singer problem in discrepancy theory. Discrete Math., 278:227–239, 2004.
- [124] D. P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):1–157, 2014.
- [125] Y. Xu. Christoffel functions and Fourier series for multivariate orthogonal polynomials. J. Approx. Theory, 82(2):205–239, 1995.
- [126] Y. Xu and A. Narayan. Randomized weakly admissible meshes. J. Approx. Theory, 285:105835, 2023.
- [127] Z. Xu and X. Zhang. Stability of least square approximation under random sampling. arXiv:2407.1022, 2024.
- [128] J. Yu, L. Lu, X. Meng, and G. E. Karniadakis. Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems. Comput. Methods Appl. Mech. Engrg., 393:114823, 2022.
- [129] W. Zhu and Y. Nakatsukasa. Convergence and near-optimal sampling for multivariate function approximations in irregular domains via Vandermonde with Arnoldi. arXiv:2301.12241, 2023.
Appendix A Proofs
In this appendix we give proofs for various results in the paper. We commence with Lemma 5.1. This is a standard result. We include a short proof for completeness.
Proof of Lemma 5.1.
Recall that , where is the matrix defined in (3.8). This matrix is full rank since and , and therefore the LS problem has a unique solution.
We next prove Theorem 5.4. This has now become a standard exercise involving the matrix Chernoff bound (see, e.g., [118, Thm. 1.1]), which we repeat here for convenience. This bound was first used in the context of least-squares approximation from i.i.d. samples in [34].
Theorem A.1 (Matrix Chernoff bound).
Let be independent, self-adjoint random matrices of dimension . Assume that
almost surely for each , and define
Then, for ,
and, for ,
Proof of Theorem 5.4.
Let be an orthonormal basis for and be as in (3.8). Observe that
is a sum of independent random matrices. Also, orthonormality of the basis functions implies that (5.3) holds, i.e.,
We now wish to apply Theorem A.1. Due to (3.1), the fact that and orthonormality, we have
Hence and therefore . Next, let be arbitrary. Then
We immediately deduce that . Moreover, Parseval’s identity implies that . Hence, using this and (5.5) and (5.7), we obtain
Since was arbitrary and , we conclude that
We are now ready to apply Theorem A.1. Using the union bound, we have
where and . Notice that and , where is as in (5.11). Therefore
where in the last step we used (5.11). This completes the proof. ∎
We next prove Corollary 5.9. For this, we require the following lemma, which is based on [86] (which is, in turn, based on [34]).
Lemma A.2.
Proof.
Proof of Corollary 5.9.
The condition on and Theorem 5.4 imply that (5.10) holds with probability at least . Therefore Lemma 5.1 asserts that the weighted least-squares problem has a unique solution for any function that is defined at the sample points and any noise vector. Let . Then is defined at the with probability one. Now consider arbitrary noise and write for the corresponding weighted least-squares approximation from noisy samples . Let be the (unique) element such that and write . Let be the least-squares approximation to from noiseless samples and be the least-squares approximation to the zero function from the noisy samples . Notice that , since the weighted least-squares approximation is a projection in the discrete inner product. Since the least-squares approximation is also linear, we have . This gives
(A.2)
Lemma 5.1 and the fact that (5.10) holds imply that
(A.3)
Now consider the term . Writing and and using the variational form (3.10) gives
Now Parseval’s identity and the fact that gives
i.e.,
(A.4)
We now bound this term in probability. Consider the random variable . Since is the orthogonal projection of onto , and Lemma A.2 implies that
Hence, by Markov’s inequality
We deduce that
with probability at least . Substituting this and (A.3) into (A.2) we deduce, after an application of the union bound, that
with probability at least . This completes the proof. ∎
Proof of Corollary 5.10.
Let be the event that (5.10) holds. Let be a polynomial attaining the infimum in (5.16). Now let be the event that
Suppose that and occur. Then the definitions of , and Lemma 5.1 imply that
This yields the desired bound (5.16). Hence, by the union bound, it suffices to show that .
The fact that follows immediately from the first condition on in (5.15) and Theorem 5.4. We now consider . Define the random variables
Notice that
due to (3.5), and therefore
The idea now is to use Bernstein’s inequality to estimate the random variable . Let
and notice that and almost surely. Hence almost surely. We also have almost surely, and therefore
Since , we may apply Bernstein’s inequality (see, e.g., [58, Cor. 7.31]) to get
Set . Then it is a short argument involving the second condition in (5.15) to show that . Therefore,
with probability at least . Substituting the values for , and using the inequality , we see that
with probability at least . Therefore, , as required. ∎
We now prove Lemma 5.11 and Theorem 5.12. These ideas go back to [34], but the specific arguments are based on [35].
Proof of Lemma 5.11.
Proof of Theorem 5.12.
Let be the event that . Suppose that occurs. Then
Here, the second bound follows from the facts that and is a contraction in the norm. On the other hand, if does not occur then we have
Now observe that , due to the assumption on and Theorem 5.4. The result now follows by the law of total expectation and Lemma 5.11. ∎
Proof of Theorem 6.1.
The weight function corresponding to (6.8) is given by
(A.5)
Therefore, by (5.6), we have , which gives
We deduce from Theorem 5.4 and (6.9) that with probability at least . Using this and the fact that by construction, we deduce from Lemma 5.1 that
for any and . Let be the coefficients of and be the best approximation to from . Then this gives
(A.6)
For , define the matrices
Then
where is as in (6.7).
We now wish to use the matrix Chernoff bound to estimate . Observe that
Now, as in the proof of Theorem 5.4, note that and
The matrices are nonnegative definite and satisfy, for any ,
Using (A.5) and taking the supremum over all such with , we deduce that
Hence, the matrix Chernoff bound (Theorem A.1) gives that
where . Now let , so that . We want to choose so that . Using the bound (6.9), we see that
provided
Now is increasing and for . Therefore it suffices to take
Now we recall that , the definition of and , to get, after some algebra,
for some numerical constant . To summarize, we have shown that
Taking the union bound and recalling that , we deduce that
with probability at least and a potentially different numerical constant . Now consider . The terms are monotonically nonincreasing in . Therefore, for we have
Hence
We deduce that
where in the final step we used the fact that for to deduce that the final sum converges. Substituting this into (A.6) now gives the result. ∎
Proof of Theorem 9.4.
Much like in Lemma 5.1, if (9.5) holds with then the estimator given by (9.2) satisfies (see [8, Lem. E.5])
where, for convenience, we define and
Now let be the event that , where is as in (9.4) (the choice of here is arbitrary), and consider the estimator . We argue similarly to the proofs of Lemma 5.11 and Theorem 5.12. Since is a contraction, we have
Now fix . Then
Now observe that
where in the last step we used nondegeneracy (Assumption 9.2). We also have
Here we used the definition of the weight functions and the fact that each is a probability measure. We deduce that
Hence, the result follows, provided .
To show that , we appeal to [9, Thm. E.2], using conditions (b) and (c) defined therein. The remainder of the proof involves showing how to recast the setup considered in §9.1 as a special case of that considered in [9]. Let . Now, for , let be the distribution of operators in defined by if
Now, following [9, Ex. 2.5], define by if for . Doing this, the setup of §9.1 is a special case of [9] with the family . In particular, nondegeneracy in the sense of [9, eqn. (1.1)] is implied by (9.4).
We now apply [9, Thm. E.2], and specifically, parts (b) and (c), with and . This implies that (9.5) holds, provided
(A.7)
or, with as in Assumption 9.3(ii),
(A.8)
where is the so-called variation, as defined in [9, §3.1] and, for , . Consider any such set . Using this and the definition of , we see that
Since is linear, we deduce that
where is as in (9.3). Using this and setting, respectively, or we see that (A.7) is equivalent to (9.6) and (A.8) is equivalent to (9.7). The result now follows. ∎