Distance and Kernel-Based Measures for Global and Local Two-Sample Conditional Distribution Testing
Abstract
Testing the equality of two conditional distributions is crucial in various modern applications, including transfer learning and causal inference. Despite its importance, this fundamental problem has received surprisingly little attention in the literature. This work aims to present a unified framework based on distance and kernel methods for both global and local two-sample conditional distribution testing. To this end, we introduce distance and kernel-based measures that characterize the homogeneity of two conditional distributions. Drawing from the concept of conditional U-statistics, we propose consistent estimators for these measures. Theoretically, we derive the convergence rates and the asymptotic distributions of the estimators under both the null and alternative hypotheses. Utilizing these measures, along with a local bootstrap approach, we develop global and local tests that can detect discrepancies between two conditional distributions at global and local levels, respectively. Our tests demonstrate reliable performance through simulations and real data analyses.
Keywords: Conditional distribution; Generalized energy distance; Kernel smoothing; Maximum mean discrepancy; Two-sample testing; U-statistics.
1 Introduction
The canonical setting for (nonparametric) two-sample testing focuses on assessing the equality of two unconditional distributions. In many contemporary applications, however, one is instead interested in testing the equality of two conditional distributions. Consider two independent data sets $\{(X_{si}, Y_{si})\}_{i=1}^{n_s}$ for $s = 1, 2$. Here $X_{si} \in \mathcal{X} \subseteq \mathbb{R}^p$ and $Y_{si} \in \mathcal{Y}$, where $\mathcal{Y}$ is allowed to be a general metric space (see Remark 2). For $s = 1, 2$, assume that $\{(X_{si}, Y_{si})\}_{i=1}^{n_s}$ are independent and identically distributed (i.i.d.) samples of $(X_s, Y_s)$, where $\mu_s$ is the marginal distribution of $X_s$ and $P_{Y_s|X_s}$ is the conditional distribution of $Y_s$ given $X_s$. To ensure that the testing problem considered below is nontrivial, we assume that $\mu_1$ and $\mu_2$ are equivalent, i.e., $\mu_1 \ll \mu_2$ and $\mu_2 \ll \mu_1$, where $\mu_1 \ll \mu_2$ means that $\mu_1$ is absolutely continuous with respect to $\mu_2$. We aim to test the following hypothesis, which we call the global two-sample conditional distribution testing problem,
$$H_0: P_{Y_1|X_1}(\cdot \mid x) = P_{Y_2|X_2}(\cdot \mid x) \ \text{ for } \mu_1\text{-almost all } x \quad \text{versus} \quad H_1: \text{otherwise}. \qquad (1)$$
Due to the equivalence between $\mu_1$ and $\mu_2$, the hypotheses in (1) can be equivalently formulated by replacing $\mu_1$ with $\mu_2$.
We want to emphasize that the marginal distributions of $X$ from the two populations may differ (i.e., $\mu_1 \neq \mu_2$ is allowed), as seen in the motivating examples below. Thus, $H_0$ in (1) is not equivalent to the equality of the two joint distributions, and unconditional two-sample tests for the equality of two joint distributions (i.e., tests of $P_{(X_1, Y_1)} = P_{(X_2, Y_2)}$) are generally not applicable in our context. Applying such tests would result in a failure to control the type I error when $\mu_1 \neq \mu_2$. Besides, $Y$ and $X$ denote two generic random variables and do not necessarily correspond to response and covariates, respectively. For instance, in the prior shift example below, $Y$ are covariates, and $X$ is a response in (1).
Hypothesis (1) is central to many important problems in econometrics, machine learning, and statistics. For example, in transfer learning, the prior and covariate shift assumptions are commonly employed to tackle distributional differences between source and target populations (Kouw and Loog,, 2018). The prior shift assumption asserts that the conditional distribution of the covariates given the response is identical in both populations while allowing for a shift in the marginal distributions of the response. Conversely, the covariate shift assumption posits that the conditional distribution of the response given the covariates remains invariant across source and target populations, but the marginal distributions of the covariates can differ. Both assumptions are widely adopted in the literature; see, e.g., Huang et al., (2024); Lee et al., (2024) for the prior shift assumption, and Shimodaira, (2000); Tibshirani et al., (2019); Liu et al., (2023); Ma et al., (2023) for the covariate shift assumption. Despite their prevalence, there is a paucity of work that formally validates these assumptions. Both of them can be framed as testing the equality of two conditional distributions, as in (1). Such tests are essential to the validity of methods developed under the prior or covariate shift assumptions.
Another motivating example comes from causal inference. Testing hypotheses in the context of treatment effect analysis has always been of interest (Imbens and Wooldridge,, 2009, Sections 3.3 and 5.12). Consider the standard setup based on the potential outcome framework (Rubin,, 1974). Suppose $\{(Y_i, D_i, X_i)\}_{i=1}^{n}$ are i.i.d. observations of $(Y, D, X)$, where $Y$ is the observed outcome of interest, $D$ denotes a binary treatment (1: treated, 0: untreated), and $X$ are pretreatment covariates. For each subject, we define a pair of potential outcomes, $(Y(1), Y(0))$, that would be observed if the subject had been given treatment, $Y(1)$, and control, $Y(0)$. One may be interested in testing the null hypothesis that the conditional distribution of $Y(1)$ given $X$ is the same as that of $Y(0)$ given $X$ (Imbens and Wooldridge,, 2009), i.e., zero conditional distributional treatment effect. Under the prevalent assumptions of consistency, $Y = DY(1) + (1 - D)Y(0)$, and no unmeasured confounding, $(Y(1), Y(0)) \perp\!\!\!\perp D \mid X$, the conditional distributions of $Y(1)$ and $Y(0)$ given $X$ can be identified as those of $Y$ given $(X, D = 1)$ and $Y$ given $(X, D = 0)$, respectively. Therefore, it can be formulated as testing hypothesis (1) with $\{(X_i, Y_i) : D_i = 1\}$ and $\{(X_i, Y_i) : D_i = 0\}$ being the two sets of independent samples.
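For illustration, a minimal sketch of how the two samples entering (1) can be formed from (hypothetical, simulated) observational data under this identification; all names and the data-generating choices below are ours, not part of the method:

```python
import numpy as np

# Hypothetical observational data: covariates X, binary treatment D, outcome Y.
rng = np.random.default_rng(0)
n, p = 500, 2
X = rng.normal(size=(n, p))
D = rng.binomial(1, 0.5, size=n)
Y = X @ np.array([1.0, -0.5]) + rng.normal(size=n)

# Under consistency and no unmeasured confounding, the treated and control
# subsamples identify the conditional laws of Y(1) and Y(0) given X, so the
# two independent data sets for testing hypothesis (1) are:
X1, Y1 = X[D == 1], Y[D == 1]   # treated sample
X2, Y2 = X[D == 0], Y[D == 0]   # control sample
```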
When the null hypothesis of the global two-sample conditional distribution testing is rejected, one might wish to pinpoint local regions where the two conditional distributions are significantly different. Specifically, for a fixed $x$ in the support of $\mu_1$ (or equivalently, $\mu_2$), we are interested in testing
$$H_0(x): P_{Y_1|X_1}(\cdot \mid x) = P_{Y_2|X_2}(\cdot \mid x) \quad \text{versus} \quad H_1(x): P_{Y_1|X_1}(\cdot \mid x) \neq P_{Y_2|X_2}(\cdot \mid x), \qquad (2)$$
which we refer to as the local two-sample conditional distribution testing problem, in contrast to the global problem (1). This problem is novel and is partly motivated by Duong, (2013) and Kim et al., (2019) in the context of unconditional two-sample testing. With a little extra effort, our framework can readily handle the local testing problem (2).
Recently, distance and kernel-based measures have received considerable attention in both the statistics and machine learning communities (Székely and Rizzo,, 2017; Muandet et al.,, 2017). These measures have been applied to a wide range of classical and modern hypothesis testing problems, including two-sample testing (Székely et al.,, 2004; Baringhaus and Franz,, 2004; Gretton et al.,, 2012), goodness-of-fit testing (Székely and Rizzo,, 2005; Balasubramanian et al.,, 2021), independence testing (Székely et al.,, 2007; Gretton et al.,, 2007; Chakraborty and Zhang,, 2019; Ke and Yin,, 2020; Deb et al.,, 2020), conditional independence testing (Fukumizu et al.,, 2007; Zhang et al.,, 2012; Wang et al.,, 2015; Sheng and Sriperumbudur,, 2023) and high-dimensional two sample and dependence testing (Zhang et al.,, 2018; Yao et al.,, 2018; Zhu et al.,, 2020; Chakraborty and Zhang,, 2021). In this paper, we establish a distance and kernel-based framework to tackle both the global and local two-sample conditional distribution testing problems (1) and (2). To achieve this, we introduce the conditional generalized energy distance (3) and the conditional maximum mean discrepancy (4), along with their integrated version (5), which fully characterize the homogeneity of two conditional distributions. Additionally, we show the equivalence between the conditional generalized energy distance and the conditional maximum mean discrepancy. Building upon estimators of these measures, we develop global and local tests capable of detecting discrepancies between two conditional distributions at global and local levels, respectively.
Our estimation strategy employs a combination of U-statistics and kernel smoothing, initially introduced in the so-called conditional U-statistics by Stute, (1991) and later applied to different problems by Wang et al., (2015) and Ke and Yin, (2020). To highlight our theoretical contributions, we summarize the distinct asymptotic behaviors of our global and local test statistics under both the null and alternative hypotheses in Table 1. It is noteworthy that local tests based on distance and kernel measures have not been previously explored in the literature. Furthermore, while Wang et al., (2015) and Ke and Yin, (2020) only provided the asymptotic distributions of their global test statistics under the null, we offer additional insights by systematically studying the properties of our statistics under both the null and alternative hypotheses. As a side note, we identify certain gaps in the derivations of the asymptotic null distribution in both Wang et al., (2015) and Ke and Yin, (2020), with details provided in Section 4.
Our theoretical analysis indicates that the implementation of kernel smoothing in U-statistic estimators of distance and kernel-based measures has two significant effects. On the one hand, as shown in Table 1, U-statistic estimators are no longer unbiased, and undersmoothing is required to mitigate the bias introduced by kernel smoothing. On the other hand, owing to kernel smoothing, the U-statistic estimators for distance and kernel-based measures become non-degenerate under the null hypotheses, which is in contrast to the unconditional situation (Székely et al.,, 2004; Gretton et al.,, 2012). Nevertheless, when undersmoothing is employed, the first-order projections in the Hoeffding decomposition of U-statistic estimators turn out to be asymptotically negligible under the null, and thus the asymptotic null distributions are determined by the second-order projections in the Hoeffding decomposition. For further discussion, please refer to Sections 3-4.
Table 1: Summary of the asymptotic behaviors of the global and local test statistics. The precise convergence rates and undersmoothing conditions are given in Theorems 3-4 and 7-8.

                          Global test                 Local test
                          Null       Alternative      Null                          Alternative
Asymptotic distribution   Normal     Normal           Weighted sum of chi-squares   Normal
We now discuss related work and highlight several notable features of our framework. Although problems (1) and (2) are fundamental in various modern applications, surprisingly, there are very few methods available to test the equality of two conditional distributions. In the nonparametric testing literature, existing methods mainly focus on testing the equality of conditional moments of $Y$ given $X$. Specifically, most studies aim at testing the equality of conditional means, also known as the comparison of regression curves, as seen in Hall and Hart, (1990); Kulasekera, (1995); Dette and Munk, (1998); Lavergne, (2001); Neumeyer and Dette, (2003), among others (see Section 7 in González-Manteiga and Crujeiras, (2013) for a detailed review). Similarly, in the causal inference literature, hypothesis testing has largely been limited to the conditional average treatment effect (see, e.g., Crump et al., (2008)). The methods we develop in this paper can detect general discrepancies between two conditional distributions, beyond specific moments. Lee, (2009) and Chang et al., (2015) proposed nonparametric tests for the null hypothesis of zero conditional distributional treatment effect. Their tests are built upon a Mann-Whitney statistic and cumulative distribution functions, respectively, and thus are only applicable for univariate $Y$. In our framework, $Y$ is allowed to take values in a general metric space, while $X$ can be multivariate. Very recently, Hu and Lei, (2024) proposed a test for the hypothesis (1) using techniques from conformal prediction. However, their test requires (possibly unbalanced) sample splitting, and its performance relies on high quality density ratio estimators (see Assumption 2(b) and related discussion therein). Chen et al., (2022) and Chatterjee et al., (2024) considered the paired-sample problem, which is very different from our two-sample setting. Notably, the local testing problem (2) is new and has not been addressed in the literature. Our framework can accommodate both global and local two-sample conditional distribution testing.
Notation. For $s = 1, 2$, let $f_s(x, y)$, $f_s(x)$ and $f_s(y \mid x)$ be the joint probability density function of $(X_s, Y_s)$, the marginal probability density function of $X_s$ and the conditional probability density function of $Y_s$ given $X_s$, respectively (assuming the existence of these densities). For two sequences of real numbers $\{a_n\}$ and $\{b_n\}$, we write $a_n \asymp b_n$ if and only if $a_n = O(b_n)$ and $b_n = O(a_n)$. The symbols $\stackrel{p}{\to}$ and $\stackrel{d}{\to}$ stand for convergence in probability and in distribution, respectively.
2 Preliminaries
This section provides an overview of the generalized energy distance and maximum mean discrepancy, as well as their equivalence, which has been extensively discussed in Sejdinovic et al., (2013).
For a non-empty set $\mathcal{Z}$, a non-negative function $\rho: \mathcal{Z} \times \mathcal{Z} \to [0, \infty)$ is called a semimetric on $\mathcal{Z}$ if for any $z, z' \in \mathcal{Z}$, it satisfies (i) $\rho(z, z') = 0$ if and only if $z = z'$ and (ii) $\rho(z, z') = \rho(z', z)$. Then $(\mathcal{Z}, \rho)$ is said to be a semimetric space. The semimetric space $(\mathcal{Z}, \rho)$ is said to have negative type if for all $n \geq 2$, $z_1, \ldots, z_n \in \mathcal{Z}$, and $\alpha_1, \ldots, \alpha_n \in \mathbb{R}$ with $\sum_{i=1}^n \alpha_i = 0$, we have $\sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j \rho(z_i, z_j) \leq 0$. Define $\mathcal{M}_\rho^\theta(\mathcal{Z}) = \{\nu \in \mathcal{M}(\mathcal{Z}) : \int \rho^\theta(z, z_0)\, d\nu(z) < \infty \text{ for some } z_0 \in \mathcal{Z}\}$ for some $\theta > 0$, where $\mathcal{M}(\mathcal{Z})$ denotes the set of all probability measures on $\mathcal{Z}$. Suppose $P, Q \in \mathcal{M}_\rho^1(\mathcal{Z})$; then the generalized energy distance defined below satisfies $D_\rho(P, Q) \geq 0$ when $\rho$ has negative type. We say that $(\mathcal{Z}, \rho)$ has strong negative type if it has negative type and the equality $D_\rho(P, Q) = 0$ holds only when $P = Q$. The generalized energy distance between $P, Q \in \mathcal{M}_\rho^1(\mathcal{Z})$ is defined as (Sejdinovic et al.,, 2013)
$$D_\rho(P, Q) = 2\,\mathbb{E}\,\rho(W, W') - \mathbb{E}\,\rho(W, \tilde{W}) - \mathbb{E}\,\rho(W', \tilde{W}'),$$
where $W, \tilde{W} \stackrel{\text{i.i.d.}}{\sim} P$ and $W', \tilde{W}' \stackrel{\text{i.i.d.}}{\sim} Q$. If $(\mathcal{Z}, \rho)$ is of strong negative type, we have $D_\rho(P, Q) \geq 0$ and the equality holds if and only if $P = Q$. Every separable Hilbert space is of strong negative type (Lyons,, 2013). In particular, Euclidean spaces are separable Hilbert spaces, and thus the generalized energy distance generalizes the usual energy distance, introduced by Székely et al., (2004) and independently by Baringhaus and Franz, (2004), from Euclidean spaces to semimetric spaces of strong negative type.
Let $\mathcal{H}$ be a Hilbert space of real-valued functions defined on $\mathcal{Z}$. A function $k: \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$ is a reproducing kernel of $\mathcal{H}$ if (i) $k(\cdot, z) \in \mathcal{H}$ for all $z \in \mathcal{Z}$, and (ii) $\langle f, k(\cdot, z) \rangle_{\mathcal{H}} = f(z)$ for all $f \in \mathcal{H}$, $z \in \mathcal{Z}$, where $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ is the inner product associated with $\mathcal{H}$. If $\mathcal{H}$ has a reproducing kernel, it is said to be a reproducing kernel Hilbert space (RKHS). According to the Moore-Aronszajn theorem (Berlinet and Thomas-Agnan,, 2011), for every symmetric, positive definite function (henceforth kernel) $k: \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$, there exists an associated RKHS $\mathcal{H}_k$ with reproducing kernel $k$. For $\theta > 0$, define $\mathcal{M}_k^\theta(\mathcal{Z}) = \{\nu \in \mathcal{M}(\mathcal{Z}) : \int k^\theta(z, z)\, d\nu(z) < \infty\}$. The kernel mean embedding of $P \in \mathcal{M}_k^{1/2}(\mathcal{Z})$ into the RKHS $\mathcal{H}_k$ is defined by the Bochner integral $\Pi_k(P) = \int_{\mathcal{Z}} k(\cdot, z)\, dP(z)$. Besides, the kernel $k$ is said to be characteristic if the mapping $P \mapsto \Pi_k(P)$ is injective. Conditions under which kernels are characteristic have been studied by Sriperumbudur et al., (2008, 2010), and examples include the Gaussian kernel and the Laplace kernel. The maximum mean discrepancy (MMD) between $P, Q \in \mathcal{M}_k^{1/2}(\mathcal{Z})$ is given by (Gretton et al.,, 2012)
$$\mathrm{MMD}(P, Q) = \|\Pi_k(P) - \Pi_k(Q)\|_{\mathcal{H}_k}.$$
When the kernel $k$ is characteristic, we have $\mathrm{MMD}(P, Q) = 0$ if and only if $P = Q$. Also, the following alternative representation of the squared MMD is useful:
$$\mathrm{MMD}^2(P, Q) = \mathbb{E}\,k(W, \tilde{W}) + \mathbb{E}\,k(W', \tilde{W}') - 2\,\mathbb{E}\,k(W, W'),$$
where $W, \tilde{W} \stackrel{\text{i.i.d.}}{\sim} P$ and $W', \tilde{W}' \stackrel{\text{i.i.d.}}{\sim} Q$.
Let $\rho$ be a semimetric of negative type on $\mathcal{Z}$. For any $z_0 \in \mathcal{Z}$, the function $k(z, z') = \{\rho(z, z_0) + \rho(z', z_0) - \rho(z, z')\}/2$ is positive definite, and is said to be the distance-induced kernel induced by $\rho$ and centered at $z_0$. Correspondingly, for a kernel $k$ on $\mathcal{Z}$, the function $\rho(z, z') = k(z, z) + k(z', z') - 2k(z, z')$ defines a valid semimetric of negative type on $\mathcal{Z}$, and we say that $k$ generates $\rho$. It is clear that every distance-induced kernel induced by $\rho$ also generates $\rho$. Theorem 22 in Sejdinovic et al., (2013) establishes the equivalence between the generalized energy distance and MMD. Specifically, suppose $P, Q \in \mathcal{M}_\rho^1(\mathcal{Z})$ and let $k$ be any kernel that generates $\rho$; then $D_\rho(P, Q) = 2\,\mathrm{MMD}^2(P, Q)$.
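As a quick numerical check of this equivalence (a minimal sketch: the sample sizes, distributions, the center $z_0$, and the V-statistic averaging below are arbitrary illustrative choices):

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
W = rng.normal(size=(200, 2))               # sample from P
V = rng.normal(loc=0.5, size=(150, 2))      # sample from Q

def energy_distance(a, b):
    # Plug-in estimate of D_rho(P, Q) with rho the Euclidean distance.
    return 2 * cdist(a, b).mean() - cdist(a, a).mean() - cdist(b, b).mean()

def mmd2(a, b, kernel):
    # Plug-in estimate of MMD^2(P, Q) for a given kernel.
    return kernel(a, a).mean() + kernel(b, b).mean() - 2 * kernel(a, b).mean()

z0 = np.zeros((1, 2))                       # center of the distance-induced kernel
def dist_kernel(a, b):
    # k(z, z') = {rho(z, z0) + rho(z', z0) - rho(z, z')} / 2
    return (cdist(a, z0) + cdist(b, z0).T - cdist(a, b)) / 2

print(energy_distance(W, V), 2 * mmd2(W, V, dist_kernel))
```

The two printed numbers agree up to floating point, since the identity $D_\rho = 2\,\mathrm{MMD}^2$ also holds exactly at the level of these plug-in averages.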
3 Conditional generalized energy distance and conditional maximum mean discrepancy
Let $\rho$ be a semimetric on $\mathcal{Y}$. We define the conditional generalized energy distance at $x$ as the generalized energy distance between $P_{Y_1|X_1 = x}$ and $P_{Y_2|X_2 = x}$.
Definition 1.
Assume $\mathbb{E}[\rho(Y_s, \tilde{Y}_s) \mid X_s = \tilde{X}_s = x] < \infty$ for $s = 1, 2$, and $\mathbb{E}[\rho(Y_1, Y_2) \mid X_1 = X_2 = x] < \infty$. The conditional generalized energy distance between $P_{Y_1|X_1}$ and $P_{Y_2|X_2}$ at $x$ is defined as
$$\mathrm{CED}(x) = 2\,\mathbb{E}[\rho(Y_1, Y_2) \mid X_1 = X_2 = x] - \mathbb{E}[\rho(Y_1, \tilde{Y}_1) \mid X_1 = \tilde{X}_1 = x] - \mathbb{E}[\rho(Y_2, \tilde{Y}_2) \mid X_2 = \tilde{X}_2 = x], \qquad (3)$$
where $(\tilde{X}_1, \tilde{Y}_1)$ and $(\tilde{X}_2, \tilde{Y}_2)$ are i.i.d. copies of $(X_1, Y_1)$ and $(X_2, Y_2)$, respectively.
When $\mathcal{Y} = \mathbb{R}^q$ and $\rho$ corresponds to the Euclidean distance, (3) also has a nice interpretation in terms of conditional characteristic functions. Specifically, for $s = 1, 2$ and $t \in \mathbb{R}^q$, the conditional characteristic function of $Y_s$ given $X_s = x$ is defined as $\varphi_s(t \mid x) = \mathbb{E}[e^{\iota \langle t, Y_s \rangle} \mid X_s = x]$, where $\iota = \sqrt{-1}$ is the imaginary unit. When $\rho(y, y') = |y - y'|_q$ with $|\cdot|_q$ being the Euclidean norm, we have
$$\mathrm{CED}(x) = \frac{1}{c_q} \int_{\mathbb{R}^q} \frac{|\varphi_1(t \mid x) - \varphi_2(t \mid x)|^2}{|t|_q^{q+1}}\, dt,$$
where $c_q = \pi^{(1+q)/2} / \Gamma\{(1+q)/2\}$ is a constant with $\Gamma(\cdot)$ being the gamma function. The proof of this fact follows a similar approach to Proposition 1 in Székely and Rizzo, (2017), and the details are omitted for brevity.
As the counterpart to the distance-based metric (3), we now introduce a kernel-based metric to measure the discrepancy between the two conditional distributions. Let $\mathcal{H}_k$ be an RKHS associated with a kernel $k$ on $\mathcal{Y}$.
Definition 2.
Assume $\mathbb{E}[k(Y_s, \tilde{Y}_s) \mid X_s = \tilde{X}_s = x] < \infty$ for $s = 1, 2$, and $\mathbb{E}[k(Y_1, Y_2) \mid X_1 = X_2 = x] < \infty$. The conditional maximum mean discrepancy (CMMD) between $P_{Y_1|X_1}$ and $P_{Y_2|X_2}$ at $x$ is defined as the square root of
$$\mathrm{CMMD}^2(x) = \mathbb{E}[k(Y_1, \tilde{Y}_1) \mid X_1 = \tilde{X}_1 = x] + \mathbb{E}[k(Y_2, \tilde{Y}_2) \mid X_2 = \tilde{X}_2 = x] - 2\,\mathbb{E}[k(Y_1, Y_2) \mid X_1 = X_2 = x], \qquad (4)$$
where $(\tilde{X}_1, \tilde{Y}_1)$ and $(\tilde{X}_2, \tilde{Y}_2)$ are i.i.d. copies of $(X_1, Y_1)$ and $(X_2, Y_2)$, respectively.
While conducting this research, we came across Park and Muandet, (2020), where the authors present the CMMD (4) in a slightly different form. However, their estimation strategy is entirely different from ours. Neither the convergence rate nor the asymptotic distribution of their estimator is provided. By contrast, we derive the exact convergence rate and the asymptotic distribution of our statistic under both the null and alternative hypotheses. In addition, we establish a unified framework for both the distance and kernel-based measures. Furthermore, it is important to note that the CMMD is a metric indexed by $x$, and the single measure (5) introduced in Section 4, which integrates the CMMD with a weight function, is not discussed in Park and Muandet, (2020).
The conditional generalized energy distance and CMMD both serve to characterize the homogeneity of two conditional distributions. Besides, we can demonstrate the equivalence between the conditional generalized energy distance and CMMD. As a result, we will focus on the CMMD for the remainder of this paper.
Theorem 1.
1. When the semimetric $\rho$ is of strong negative type, for any $x$ such that $\mathbb{E}[\rho(Y_s, \tilde{Y}_s) \mid X_s = \tilde{X}_s = x] < \infty$ for $s = 1, 2$ and $\mathbb{E}[\rho(Y_1, Y_2) \mid X_1 = X_2 = x] < \infty$, we have $\mathrm{CED}(x) \geq 0$, and $\mathrm{CED}(x) = 0$ if and only if $H_0(x)$ in (2) holds.

2. When the kernel $k$ is characteristic, for any $x$ such that $\mathbb{E}[k(Y_s, \tilde{Y}_s) \mid X_s = \tilde{X}_s = x] < \infty$ for $s = 1, 2$ and $\mathbb{E}[k(Y_1, Y_2) \mid X_1 = X_2 = x] < \infty$, we have $\mathrm{CMMD}(x) = 0$ if and only if $H_0(x)$ in (2) holds.

3. Let $\rho$ be a semimetric of negative type on $\mathcal{Y}$. Suppose the moment conditions above hold for $s = 1, 2$. If $k$ is a kernel that generates $\rho$, i.e., $\rho(y, y') = k(y, y) + k(y', y') - 2k(y, y')$, then
$$\mathrm{CED}(x) = 2\,\mathrm{CMMD}^2(x).$$
Remark 1.
While this paper employs characteristic kernels to fully capture the homogeneity of two conditional distributions, it is also possible to utilize non-characteristic kernels to detect specific moment discrepancies. For instance, by using the linear kernel $k(y, y') = \langle y, y' \rangle$, we obtain $\mathrm{CMMD}^2(x) = \|\mathbb{E}[Y_1 \mid X_1 = x] - \mathbb{E}[Y_2 \mid X_2 = x]\|^2$, which can be used to compare two conditional means. This illustrates the broad range of applications for our framework.
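For concreteness, the one-step derivation behind this display, using only the independence of the i.i.d. copies in (4):
$$
\mathrm{CMMD}^2(x) = \|\mathbb{E}[Y_1 \mid X_1 = x]\|^2 + \|\mathbb{E}[Y_2 \mid X_2 = x]\|^2 - 2\big\langle \mathbb{E}[Y_1 \mid X_1 = x], \mathbb{E}[Y_2 \mid X_2 = x] \big\rangle = \big\|\mathbb{E}[Y_1 \mid X_1 = x] - \mathbb{E}[Y_2 \mid X_2 = x]\big\|^2,
$$
since, e.g., $\mathbb{E}[\langle Y_1, \tilde{Y}_1 \rangle \mid X_1 = \tilde{X}_1 = x] = \langle \mathbb{E}[Y_1 \mid X_1 = x], \mathbb{E}[\tilde{Y}_1 \mid \tilde{X}_1 = x] \rangle$ by the independence of $(X_1, Y_1)$ and $(\tilde{X}_1, \tilde{Y}_1)$.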
Remark 2.
Note that the definitions of the conditional generalized energy distance and CMMD, as well as the estimation procedures to be introduced, are applicable to a general metric space $\mathcal{Y}$. This implies that our framework is capable of handling cases where $Y$ is a non-Euclidean object, such as a curve, image, or even topological data encountered in various applications.
Now, we present an estimator for the CMMD. Let $u_k$ denote the $k$th component of $u \in \mathbb{R}^p$. For $u \in \mathbb{R}^p$, define $K_h(u)$ as
$$K_h(u) = \prod_{k=1}^{p} \frac{1}{h} K\!\left(\frac{u_k}{h}\right),$$
where $K(\cdot)$ is a univariate kernel function satisfying Assumption 1 below, and $h > 0$ is the bandwidth parameter. For ease of presentation, here we use the same bandwidth for each component of $X$. In practice, one should always allow for varying bandwidths across the components of $X$. Besides, it is important to note that the smoothing kernel $K$ applied to $X$ differs from the reproducing kernel $k$ applied to $Y$, and we employ different criteria for selecting these two kernels.
Motivated by the representation of the CMMD in terms of the conditional moments given in (4), we propose the following estimator for the CMMD:
$$\widehat{\mathrm{CMMD}}^2(x) = \frac{\sum_{i \neq j} K_h(X_{1i} - x) K_h(X_{1j} - x)\, k(Y_{1i}, Y_{1j})}{\sum_{i \neq j} K_h(X_{1i} - x) K_h(X_{1j} - x)} + \frac{\sum_{i \neq j} K_h(X_{2i} - x) K_h(X_{2j} - x)\, k(Y_{2i}, Y_{2j})}{\sum_{i \neq j} K_h(X_{2i} - x) K_h(X_{2j} - x)} - \frac{2 \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} K_h(X_{1i} - x) K_h(X_{2j} - x)\, k(Y_{1i}, Y_{2j})}{\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} K_h(X_{1i} - x) K_h(X_{2j} - x)},$$
which we call the sample conditional maximum mean discrepancy. This estimation strategy, which combines U-statistics and kernel smoothing, has been adopted in previous studies such as Wang et al., (2015); Ke and Yin, (2020). It first appeared in the so-called conditional U-statistics (Stute,, 1991), which generalize the Nadaraya-Watson estimate of a regression function in the same way as Hoeffding's classical U-statistic generalizes the sample mean.
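A minimal numpy sketch of this estimation strategy (the Gaussian kernels, the common bandwidth, and all function names below are illustrative choices, not the paper's exact implementation; the point $x$ is assumed to lie in a region containing data):

```python
import numpy as np

def prod_kernel(X, x, h):
    # Product smoothing kernel K_h(X_i - x) over the p components (Gaussian K).
    u = (X - x) / h
    return np.exp(-0.5 * np.sum(u**2, axis=1)) / (np.sqrt(2 * np.pi) * h) ** X.shape[1]

def rbf(Ya, Yb, sigma):
    # Gaussian reproducing kernel k applied to the responses (n x q arrays).
    d2 = np.sum(Ya**2, 1)[:, None] + np.sum(Yb**2, 1)[None, :] - 2 * Ya @ Yb.T
    return np.exp(-d2 / (2 * sigma**2))

def cmmd2_hat(X1, Y1, X2, Y2, x, h, sigma):
    w1, w2 = prod_kernel(X1, x, h), prod_kernel(X2, x, h)
    K11, K22, K12 = rbf(Y1, Y1, sigma), rbf(Y2, Y2, sigma), rbf(Y1, Y2, sigma)
    # Locally weighted U-statistic estimates of the three conditional moments
    # in (4); the diagonals are excluded in the within-sample terms.
    W11 = np.outer(w1, w1); np.fill_diagonal(W11, 0.0)
    W22 = np.outer(w2, w2); np.fill_diagonal(W22, 0.0)
    W12 = np.outer(w1, w2)
    return ((W11 * K11).sum() / W11.sum()
            + (W22 * K22).sum() / W22.sum()
            - 2.0 * (W12 * K12).sum() / W12.sum())
```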
We shall make the following assumptions throughout the analysis.
Assumption 1.
The univariate smoothing kernel $K(\cdot)$ is of order $\nu$ in the sense that $\int K(u)\, du = 1$, $\int u^l K(u)\, du = 0$ for $1 \leq l \leq \nu - 1$, and $0 < |\int u^\nu K(u)\, du| < \infty$. Also, $K$ is bounded with $\int K^2(u)\, du < \infty$ and $\int |u|^\nu |K(u)|\, du < \infty$.
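For reference, a sketch of univariate Gaussian kernels of orders $\nu = 2$ and $\nu = 4$; the fourth-order form $K_4(u) = \tfrac{1}{2}(3 - u^2)\phi(u)$ is the standard higher-order construction from the kernel-smoothing literature (e.g., Li and Racine,, 2007):

```python
import numpy as np

def gauss2(u):
    # Second-order Gaussian kernel: integrates to 1, first moment zero.
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def gauss4(u):
    # Fourth-order Gaussian kernel: moments of orders 1 through 3 vanish,
    # while the fourth moment is nonzero, matching Assumption 1 with nu = 4.
    return 0.5 * (3.0 - u**2) * gauss2(u)
```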
Assumption 2.
For $s = 1, 2$, $h \to 0$ and $n_s h^p \to \infty$ as $n_s \to \infty$.
Assumption 3.
The marginal densities $f_1(x)$ and $f_2(x)$ are $\nu$-times continuously differentiable.
Assumption 4.
The bivariate functions $\mathbb{E}[k(Y_s, \tilde{Y}_t) \mid X_s = x, \tilde{X}_t = x']$, $s, t \in \{1, 2\}$, are $\nu$-times continuously differentiable with respect to $(x, x')$.
Assumptions 1-3 are standard in the literature (see, e.g., Section 1.11 in Li and Racine, (2007)), allowing for multivariate and higher-order kernels. Assumption 4 can be seen as the counterpart of the usual smoothness condition imposed on the regression function in classical kernel regression. A similar assumption is used in Stute, (1991).
In what follows, we present the consistency of the sample CMMD in Theorem 2, while Theorems 3 and 4 provide its asymptotic distributions under $H_1(x)$ and $H_0(x)$ in (2), respectively.
Theorem 3.
As shown in the proof in the appendix, the bias of $\widehat{\mathrm{CMMD}}^2(x)$ induced by kernel smoothing is of order $h^\nu$, and its standard deviation is of order $(N h^p)^{-1/2}$ with $N = n_1 + n_2$. Thus, under $H_1(x)$ in (2), $\widehat{\mathrm{CMMD}}^2(x) - \mathrm{CMMD}^2(x) = O_p(h^\nu + (N h^p)^{-1/2})$, which is the same convergence rate as that of classical kernel regression (Ullah and Pagan,, 1999) and conditional U-statistics (Stute,, 1991). The condition $N h^{p + 2\nu} \to 0$ as $N \to \infty$ in Theorem 3 requires undersmoothing in order to make the estimator asymptotically unbiased. Undersmoothing, coupled with the use of a higher-order kernel, is commonly employed in nonparametric and semiparametric estimation to reduce the bias of estimators involving kernel smoothing (Li and Racine,, 2007). The same type of condition is used in Section 3.4 of Ullah and Pagan, (1999) for the central limit theorem (CLT) of classical kernel regression and in Stute, (1991) for the CLT of conditional U-statistics.
Theorem 4.
As demonstrated in Theorem 4, the sample CMMD converges at a faster rate under $H_0(x)$ in (2) than under $H_1(x)$ in (2). This explains why the undersmoothing condition in Theorem 4 is more stringent than that in Theorem 3. Compared with the unconditional scenario (Gretton et al.,, 2012), our results differ in several aspects. First, the convergence rates are distinct. Second, our spectral decomposition is performed with respect to the conditional distribution. Third, in contrast to the i.i.d. standard normal random variables in Gretton et al., (2012), the two sets of normal coefficients in our limiting weighted sum of chi-squares have different variances when $n_1 \neq n_2$.
4 Integrated conditional maximum mean discrepancy
The CMMD is indexed by $x$, and thus is not a single number. We can obtain a single measure by integrating the squared CMMD with some weight function $w(x)$, i.e.,
$$\int \mathrm{CMMD}^2(x)\, w(x)\, dx,$$
which we call the integrated conditional maximum mean discrepancy (ICMMD). It follows directly from Theorem 1 that the ICMMD fully characterizes the homogeneity of two conditional distributions.
Theorem 5.
When $w$ is a non-negative function with the same support as $\mu_1$ (or equivalently, $\mu_2$) and the kernel $k$ is characteristic, we have $\mathrm{ICMMD} = 0$ if and only if $H_0$ in (1) holds.
Motivated by Su and White, (2007); Wang et al., (2015); Ke and Yin, (2020), we consider a weight function $w$ built from the marginal densities $f_1$ and $f_2$, and define
$$\mathrm{ICMMD} = \int \mathrm{CMMD}^2(x)\, w(x)\, dx. \qquad (5)$$
This particular choice of weight function circumvents the random denominator issue. Otherwise, to deal with densities near zero and mitigate significant bias, additional stringent assumptions or trimming schemes may need to be used. For example, Yin and Yuan, (2020) assumes that the density functions $f_1$ and $f_2$ are bounded away from zero, which can be quite restrictive.
We employ the same estimation strategy as in Section 3 and propose the following estimator for the ICMMD, which is a U-statistic by symmetrization (see the proof of Theorem 6):
We call this statistic, denoted $\widehat{\mathrm{ICMMD}}$, the sample integrated conditional maximum mean discrepancy.
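As a rough illustration only, and not the symmetrized U-statistic proposed above: if one adopts, say, the weight $w = f_1 f_2$, then (5) equals $\mathbb{E}_{X \sim f_1}[\mathrm{CMMD}^2(X) f_2(X)]$, which suggests the following plug-in Monte Carlo approximation (the weight choice, the kernel density estimate, and all names here are assumptions of this sketch):

```python
import numpy as np

def icmmd_hat_plugin(X1, X2, cmmd2_fn, h):
    # Approximates (5) with w = f1 * f2 by averaging CMMD^2(x) * f2_hat(x)
    # over the first sample; cmmd2_fn maps a point x to a local CMMD^2
    # estimate (e.g., the cmmd2_hat sketch from Section 3).
    p = X2.shape[1]
    def f2_hat(x):
        # Product-Gaussian kernel density estimate of f2 at x.
        u = (X2 - x) / h
        return np.mean(np.exp(-0.5 * np.sum(u**2, 1))) / (np.sqrt(2 * np.pi) * h) ** p
    return float(np.mean([cmmd2_fn(x) * f2_hat(x) for x in X1]))
```

For example, one may call `icmmd_hat_plugin(X1, X2, lambda x: cmmd2_hat(X1, Y1, X2, Y2, x, h, sigma), h)`.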
Remark 3.
Theorem 6 asserts that $\widehat{\mathrm{ICMMD}}$ is a consistent estimator of the ICMMD. By examining the Hoeffding decomposition of $\widehat{\mathrm{ICMMD}}$, Theorem 7 describes its asymptotic distribution under the alternative hypothesis $H_1$ in (1), while Theorem 8 characterizes its asymptotic null distribution under $H_0$ in (1). To get these results, we need to strengthen Assumptions 3-4 in the following manner.
Assumption 5.
In addition to Assumption 3, we require the derivatives of $f_1$ and $f_2$ to be bounded uniformly over $x$.
Assumption 6.
In addition to Assumption 4, we require the derivatives of the bivariate functions defined therein to belong to the set .
Assumption 5 is standard in the literature, as seen in previous works such as Lee, (2009); Wang et al., (2015). Meanwhile, Assumption 6 is mild in that it does not necessitate bounded derivatives of the conditional densities or moments. Wang et al., (2015) assumes that the conditional densities are $\nu$-times continuously differentiable with respect to $x$ and have bounded derivatives. However, under their assumption, the proof in Wang et al., (2015) implicitly requires the distance function to be integrable with respect to the Lebesgue measure (not some probability measure), which does not hold unless the support is bounded. Indeed, by examining the proofs presented in Wang et al., (2015) and Ke and Yin, (2020), it appears that conditions analogous to our Assumption 6 are necessary.
Theorem 7.
As shown in the proof in the appendix, the bias of $\widehat{\mathrm{ICMMD}}$ is of order $h^\nu$ and its standard deviation under $H_1$ in (1) is of order $N^{-1/2}$. Hence, under $H_1$ in (1), $\widehat{\mathrm{ICMMD}} - \mathrm{ICMMD} = O_p(h^\nu + N^{-1/2})$, which is the same convergence rate as that of the test statistic in Lee, (2009). Again, undersmoothing is required, and the same condition has also been used in Lee, (2009). Furthermore, we emphasize that undersmoothing is not needed for establishing the consistency of the sample CMMD and ICMMD.
Theorem 8.
As shown in the proof in the appendix, under $H_0$ in (1) the first-order projections of $\widehat{\mathrm{ICMMD}}$ are asymptotically negligible and the second-order projections dominate. Thus, the convergence rate of $\widehat{\mathrm{ICMMD}}$ under $H_0$ in (1) is $(N h^{p/2})^{-1}$. Because of the integration of the CMMD over $x$, the convergence rate of the global statistic is faster than that of the local statistic under both the null and alternative hypotheses. Besides, instead of a weighted sum of chi-squares, the global statistic has a limiting normal distribution under the null, which can be proved by the martingale CLT (see, e.g., Corollary 3.1 of Hall and Heyde, (2014)).
Theorems 3-4 and 7-8 demonstrate that kernel smoothing has two key effects on the asymptotic behavior of distance and kernel-based measures:
• Kernel smoothing introduces biases of the order $h^\nu$. To ensure valid inference, undersmoothing or other bias reduction techniques are necessary to achieve asymptotic unbiasedness. For the local statistic $\widehat{\mathrm{CMMD}}^2(x)$, Theorem 4 and Assumption 2 require that the bandwidth in the statistic be chosen such that $N h^{p + \nu} \to 0$, where $N = n_1 + n_2$. There is no restriction on the relationship between the order $\nu$ of the univariate smoothing kernel and the dimension $p$ of $X$. For the global statistic $\widehat{\mathrm{ICMMD}}$, Theorem 8 and Assumption 2 dictate that the bandwidth be chosen such that $N h^{p/2 + \nu} \to 0$. Together with $N h^p \to \infty$, this condition requires $\nu > p/2$, implying that higher-order kernels become necessary as the dimension $p$ increases. We note that Wang et al., (2015) simply used a second-order smoothing kernel throughout, which is not suitable for $p \geq 4$ (see Theorem 7 therein).

• Due to kernel smoothing, the U-statistic estimators $\widehat{\mathrm{CMMD}}^2(x)$ and $\widehat{\mathrm{ICMMD}}$ are non-degenerate under the null hypotheses, as shown in the proof in the appendix. This is in sharp contrast to the unconditional scenario, where the U-statistic estimators for the energy distance and maximum mean discrepancy are degenerate under the null (Székely et al.,, 2004; Gretton et al.,, 2012). Nonetheless, when undersmoothing is employed, a more in-depth analysis reveals that the first-order projections in their Hoeffding decompositions are asymptotically negligible under the null, compared with the second-order projections. Therefore, their asymptotic null distributions are determined by the second-order projections in the Hoeffding decompositions. Specifically, we have $\widehat{\mathrm{CMMD}}^2(x) = O_p((N h^p)^{-1})$ under $H_0(x)$ in (2), instead of $O_p((N h^p)^{-1/2})$. Additionally, we have $\widehat{\mathrm{ICMMD}} = O_p((N h^{p/2})^{-1})$ under $H_0$ in (1), instead of $O_p(N^{-1/2})$.
Remark 4.
Our work utilizes techniques similar to those used by Wang et al., (2015) and Ke and Yin, (2020), including U-statistics and kernel smoothing. However, we believe that there exists a gap in the derivations of the asymptotic null distribution in both studies. Specifically, Theorem 7 of Wang et al., (2015) and Theorem 9 of Ke and Yin, (2020) both rely on Lemma B.4 of Fan and Li, (1996), which establishes the CLT for degenerate U-statistics under certain conditions. To invoke this lemma, one must demonstrate that the first-order projection of the U-statistic is negligible relative to the second-order projection under the null hypothesis. However, Wang et al., (2015) merely established a weaker bound on the first-order projection, while Ke and Yin, (2020) incorrectly claimed that it vanishes exactly. As a result, both studies overlook the crucial undersmoothing step necessary for valid inference.
5 Applications to global and local two-sample conditional distribution testing
Theorems 6-8 collectively suggest that $\widehat{\mathrm{ICMMD}}$ is a suitable test statistic for testing (1). Nevertheless, it is impractical to use Theorem 8 to compute the p-value, since it is arduous to estimate the nuisance quantities appearing in the limiting distribution. Moreover, it is widely acknowledged that a nonparametric test that relies on asymptotic normal approximation may perform poorly in finite samples (Su and White,, 2007). Consequently, we resort to the local bootstrap proposed by Paparoditis and Politis, (2000). This approach has been widely employed in other works involving conditional distributions; see, e.g., Su and White, (2008); Huang, (2010); Bouezmarni et al., (2012); Su and Spindler, (2013); Taamouti et al., (2014); Wang et al., (2015). One can follow the aforementioned references to verify the asymptotic validity of this bootstrap method in our framework. Define
$$\tilde{F}(\cdot \mid x) = \frac{\sum_{s=1}^{2} \sum_{i=1}^{n_s} K_g(X_{si} - x)\, \delta_{Y_{si}}(\cdot)}{\sum_{s=1}^{2} \sum_{i=1}^{n_s} K_g(X_{si} - x)}, \qquad (6)$$
where $\delta_{Y_{si}}$ denotes a point mass at $Y_{si}$, and $g > 0$ is a bandwidth. Essentially, $\tilde{F}(\cdot \mid x)$ is a discrete distribution that assigns the probability $K_g(X_{si} - x) / \sum_{t=1}^{2} \sum_{j=1}^{n_t} K_g(X_{tj} - x)$ to the observation $Y_{si}$. Then the following steps outline the procedure for the global two-sample conditional distribution test:
(i) Calculate the global test statistic, i.e., the sample ICMMD $\widehat{\mathrm{ICMMD}}$.

(ii) For $s = 1, 2$ and $i = 1, \ldots, n_s$, draw $Y_{si}^* \sim \tilde{F}(\cdot \mid X_{si})$. Calculate the global test statistic $\widehat{\mathrm{ICMMD}}^*$ using the local bootstrap samples $\{(X_{si}, Y_{si}^*)\}$, which retain the marginal distributions of $X_1$ and $X_2$ but impose the null restriction (i.e., the same conditional distribution of $Y$ given $X$).

(iii) Repeat step (ii) $B$ times, and collect $\widehat{\mathrm{ICMMD}}^{*(1)}, \ldots, \widehat{\mathrm{ICMMD}}^{*(B)}$. Then, the bootstrap-based p-value of the global test is given by
$$p = \frac{1 + \sum_{b=1}^{B} \mathbf{1}\{\widehat{\mathrm{ICMMD}}^{*(b)} \geq \widehat{\mathrm{ICMMD}}\}}{1 + B}.$$
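A compact sketch of steps (i)-(iii), assuming a generic `statistic` callable, a Gaussian smoothing kernel in (6) with pilot bandwidth `g`, two-dimensional response arrays, and the common $(1 + \text{count})/(1 + B)$ p-value convention (all of these are illustrative choices):

```python
import numpy as np

def local_bootstrap_pvalue(X1, Y1, X2, Y2, statistic, g, B=299, seed=0):
    rng = np.random.default_rng(seed)
    T_obs = statistic(X1, Y1, X2, Y2)                    # step (i)
    Xp, Yp = np.vstack([X1, X2]), np.vstack([Y1, Y2])    # pooled sample
    def draw(Xs):
        # For each X_si, draw Y* from the discrete distribution (6), which
        # weights the pooled responses by K_g(X_pool - X_si).
        out = np.empty((len(Xs), Yp.shape[1]))
        for i, x in enumerate(Xs):
            u = (Xp - x) / g
            w = np.exp(-0.5 * np.sum(u**2, axis=1))
            out[i] = Yp[rng.choice(len(Yp), p=w / w.sum())]
        return out
    T_star = np.array([statistic(X1, draw(X1), X2, draw(X2))   # step (ii)
                       for _ in range(B)])
    return (1.0 + np.sum(T_star >= T_obs)) / (1.0 + B)         # step (iii)
```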
Theorems 2-4 support the use of $\widehat{\mathrm{CMMD}}^2(x)$ as the test statistic for testing (2). Unfortunately, the asymptotic null distribution of $\widehat{\mathrm{CMMD}}^2(x)$ in Theorem 4 is not pivotal and involves infinitely many nuisance parameters. To tackle this issue, we once again utilize the local bootstrap method to calculate the p-value, replacing the $\widehat{\mathrm{ICMMD}}$ in the above steps (i)-(iii) with $\widehat{\mathrm{CMMD}}^2(x)$.
6 Numerical studies
6.1 Monte Carlo simulation
In this subsection, we assess the finite-sample performance of the proposed methodologies through several simulation examples. In Examples 1-3, we compare our global test (denoted as TCDT) with the test using conformal prediction (Hu and Lei,, 2024) (denoted as CONF). The performance of the proposed local test is examined in Examples 4-5.
For our proposed global and local tests, the kernel function for local smoothing in the test statistic is chosen to be the Gaussian kernel. In Examples 1 and 3-5, we use the standard Gaussian kernel of order $\nu = 2$. In Example 2, a higher-order Gaussian kernel of order $\nu = 4$ is applied, owing to the larger dimension of $X$ therein. We take an undersmoothing bandwidth for the global testing problem (1) following the guidance from Theorem 8, and likewise for the local testing problem (2) following Theorem 4, with the bandwidth scaled by the sample standard deviation of $X$. We set the constant factor $c$ in the bandwidth to 1 for both global and local tests hereafter. The effect of different values of $c$ is explored in Example 6 of the appendix. The RKHS kernel $k$ is chosen to be the popular Gaussian kernel with the bandwidth chosen by the median heuristic (Gretton et al.,, 2012). For the local bootstrap procedure (6), we take the Gaussian kernel with bandwidth $g$. For CONF, we consider estimating the marginal and conditional density ratios both by kernel logistic regression, with tuning parameters selected by the data-driven out-of-sample cross entropy loss as Hu and Lei, (2024) suggested. An equal data-splitting ratio is taken for CONF. We calculate the p-value of the proposed methods via the local bootstrap procedure with 299 replications, while CONF's p-value is calculated from its asymptotic distribution as stated in Hu and Lei, (2024).
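For example, the median heuristic for the RKHS bandwidth may be computed as follows (a common convention; the exact scaling of the Gaussian kernel varies across references):

```python
import numpy as np
from scipy.spatial.distance import pdist

def median_heuristic(Y1, Y2):
    # Bandwidth = median of pairwise Euclidean distances in the pooled responses.
    return np.median(pdist(np.vstack([Y1, Y2])))

# One convention for the Gaussian RKHS kernel with this bandwidth:
# k(y, y') = exp(-||y - y'||^2 / (2 * sigma^2)), sigma = median_heuristic(Y1, Y2).
```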
Example 1.
In this example, we focus on the univariate regression framework
$$Y_{si} = g_s(X_{si}) + \varepsilon_{si}, \quad i = 1, \ldots, n_s, \; s = 1, 2, \qquad (7)$$
where $X_{si}$ are univariate i.i.d. observations, $\varepsilon_{si}$ are independent random noises, and $g_s(\cdot)$ denotes the regression function, for $s = 1, 2$. Here the distributions of $X_{1i}$ and $X_{2i}$ differ, which corresponds to $\mu_1 \neq \mu_2$. For both $s = 1, 2$, the noise distribution is the same.
We first consider the scenarios in which the two conditional distributions differ only in the conditional means (i.e., $g_1 \neq g_2$). Three settings of the regression functions are considered as follows:
where $\delta$ is a parameter controlling the signal strength of the difference. Notice that $\delta = 0$ corresponds to the null hypothesis, and a larger $\delta$ corresponds to a greater difference. To examine and compare the empirical sizes and powers of different methods, we consider two configurations of sample sizes, and vary the signal strength $\delta$ from zero upward. Table 2 and Figure 1 summarize the empirical sizes and powers under the significance level 0.05 via 1000 simulations.

One can observe that TCDT can control the empirical size below the nominal level in all cases, while CONF tends to fail in controlling the type I error. A possible reason is that, as pointed out by Assumption 2(b) in Hu and Lei, (2024), CONF relies on accurate density ratio estimators, and possibly requires unbalanced sample splitting. In the linear regression setting (1.1), CONF achieves higher power than TCDT. In settings (1.2) and (1.3), where the oscillating curve and exponential curve are considered, TCDT demonstrates an advantage in power compared to CONF, with well-controlled type I error.
Next we consider the scenarios in which the two conditional distributions differ solely in conditional variances. Let the regression curves be identical quadratic functions across the two samples. The following three cases are considered:
Here $\delta$ is the parameter controlling the signal strength of the difference. The first setting represents the homogeneous case, while the latter two represent the heterogeneous cases. We consider the same configurations of sample sizes as before, and vary the signal strength $\delta$. Results of 1000 simulations are shown in Table 2 and Figure 1.
It can be observed that CONF generally attains an empirical size higher than 0.05. Besides, it is evident that TCDT shows a steeper ascending trend in the power curve than CONF. The good performance of TCDT holds across different settings of conditional variances, indicating that TCDT is capable of handling various types of alternative hypotheses.
Example 2.
Similar to Example 1, we now consider settings where the dimension of $X$ is larger than one. The covariate distributions differ across the two samples, and for both $s = 1, 2$ we set errors with heavy tails. We let the difference between the two conditional distributions reside solely in the regression functions $g_1$ and $g_2$. Specifically,
where $\delta$ is the signal strength. We consider two configurations of sample sizes, and vary $\delta$ from zero upward. We conduct the simulations 1000 times under significance level 0.05. Table 3 and Figure 2 show the empirical sizes and powers. In this setting with multivariate $X$, TCDT equipped with the higher-order Gaussian smoothing kernel achieves better power performance compared to CONF, while still guaranteeing type I error control.
Table 3: Empirical sizes in Example 2 (two sample-size configurations).

Sample Size    TCDT     CONF
               0.019    0.018
               0.018    0.014

Example 3.
We further consider a scenario that appears in transfer learning: testing whether the conditional distribution of the covariates given the response is the same for the two populations (i.e., the prior shift assumption). To generate data, conversely to (7), we consider the multivariate regression model of $X$ on $Y$:
where , is the multivariate error, and is the multivariate regression function. Let for both , where with . We consider the case, by setting and . Let and , where , and controls the signal for the difference of the conditional distributions between two samples. Then
where the perturbation enters through the Hadamard square and Hadamard division of the relevant vectors, and $\delta$ controls the signal for the difference of the conditional distributions between the two samples. We consider two choices of the dimension and three configurations of sample sizes, and vary the signal strength $\delta$ from zero upward. For each setting, we repeat 1000 simulations to obtain the empirical size ($\delta = 0$) and power ($\delta > 0$) of TCDT and CONF under the significance level of 0.05. The results are shown in Table 4 and Figure 3. Both methods can control the size in this setting. It can be observed that TCDT achieves greater power than CONF.
Table 4: Empirical sizes in Example 3 (two dimension settings, three sample-size configurations).

Sample Size    TCDT     CONF     TCDT     CONF
               0.035    0.047    0.013    0.025
               0.033    0.051    0.021    0.020
               0.039    0.048    0.025    0.019

Example 4.
To examine the performance of the proposed test for the local testing problem (2), we first consider a data-generating setting under the univariate regression framework (7) in which the two samples differ only in the conditional means. When the local point $x$ is an integer, the null hypothesis $H_0(x)$ holds. The signal is cyclically increasing and decreasing between each pair of consecutive integer local points.
We then conduct the proposed test on local points ranging over an interval with spacing 0.1 and obtain the empirical power or size for each local point, with a total of 1000 simulations and a significance level of 0.05. The results are summarized in Figure 4. The empirical sizes at the integer (null) local points stay close to the nominal level. It can be seen that the size can be controlled around the nominal level of 0.05, and the power increases as the local signal strengthens. The power loss in the regions where the magnitude of $x$ is large may be due to the fewer sample points near the boundary.

Example 5.
We further consider a bivariate case to show how the proposed local test performs. Specifically, let the covariates be i.i.d. and follow a bivariate truncated standard normal distribution, with the same error distribution for both $s = 1, 2$. In this setting, the null $H_0(x)$ in (2) holds when the local point $x$ is in the first or third quadrant, and does not hold when $x$ is in the second or fourth quadrant.
We generate the samples and conduct the proposed local test at a fixed uniform grid of points over the support, under the significance level of 0.05. The result is shown in Figure 5 for a single data generation. The proposed test is capable of correctly rejecting at most grid points within the region under the alternative hypothesis, without producing any false rejections of $H_0(x)$.
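Assuming the `cmmd2_hat` and `local_bootstrap_pvalue` sketches from the earlier sections are in scope, a grid evaluation of the local test might look like the following (names and tuning constants are illustrative):

```python
import numpy as np

def local_test_grid(X1, Y1, X2, Y2, grid, h, sigma, g, alpha=0.05):
    # Run the local bootstrap test at each grid point and record rejections.
    results = []
    for x in grid:
        stat = lambda A1, B1, A2, B2: cmmd2_hat(A1, B1, A2, B2, x, h, sigma)
        pval = local_bootstrap_pvalue(X1, Y1, X2, Y2, stat, g)
        results.append((x, pval, pval < alpha))
    return results
```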

6.2 Ethanol data example
We now apply the proposed test to analyze the ethanol data (Cleveland,, 1993; Kulasekera,, 1995; Kulasekera and Wang,, 1997). The response variable NOx is the concentration of nitric oxide plus the concentration of nitrogen dioxide in the exhaust of an experimental engine when the engine is set at various equivalence ratios ($E$) and compression ratios ($C$). These oxides of nitrogen are major air pollutants, and the experiment is to determine the dependency of the concentration of oxides of nitrogen on various engine settings. The pairs $(E, \mathrm{NOx})$ are divided into two groups according to low compression ratio (Low) and high compression ratio (High). We model NOx given $E$ for the two types of compression ratios and compare the two conditional distributions. The Low group has 39 observations, while the High group has 49 observations. Figure 6 shows the scatter plot for the two groups. For small values of $E$, the Low group tends to give lower oxides concentrations than the High group, i.e., the two conditional distributions are different. However, for large values of $E$, the Low and High groups seem to display the same conditional distribution.

To validate these findings, we first apply our global test, using the same configuration as specified in Example 1 except for a larger number of local bootstrap replications. The resulting p-value provides strong evidence against the equality of the two conditional distributions. Besides, we conduct the local tests on $E$ ranging from 0.75 to 1.15, and plot the corresponding curve of p-values against $E$ in Figure 6. The p-values fall below 0.05 over the lower portion of this range. This suggests that, locally at these points, the two conditional distributions differ significantly. These results are consistent with what we observed from the scatter plot. Hence, our method works well for data sets with small sample sizes.
6.3 Airfoil data example
We also validate the performance of the proposed global test through the airfoil dataset (Brooks et al.,, 2014). The dataset was collected by NASA to study the sound pressure of different airfoils, and has been explored by Tibshirani et al., (2019) and Hu and Lei, (2024) as well. It consists of 1503 observations, with a univariate response (scaled sound pressure level) and 5-dimensional covariates (log frequency, angle of attack, chord length, free-stream velocity, and suction side log displacement thickness). We standardize each covariate to have zero mean and unit variance.
Since the dataset does not come as two samples, we adopt the following settings, similar to Hu and Lei, (2024), to partition the data into two samples so that our proposed global test can be applied.
1. Random partition and exponential tilting on covariates. Following Tibshirani et al., (2019), we first randomly partition the dataset into two parts, and then sample points without replacement from the second part to build the second sample. The sampling is conducted with probabilities proportional to an exponential tilt $\exp(x^\top \theta)$ of the covariates (a sketch of this subsampling is given after this list).

2. Random partition and exponential tilting on response. Similar to Setting 1, except that the sampling is conducted with probabilities proportional to an exponential tilt of the response.

3. Response-based partition. We partition the dataset into two subsets based on the value of the "sound" variable. The first sample contains the points with smaller values, and the second sample contains the rest. To avoid singularity between the two populations, we randomly select a small fraction of observations in each of the two samples and then flip their groups.

4. Chord-based partition. Similar to Setting 3, except that the dataset is partitioned based on the value of the "chord" variable.
We conduct the proposed global test at significance level 0.05 to test the equality between the two conditional distributions of the response given the covariates in Settings 1 and 3, using a higher-order Gaussian smoothing kernel, and to test the equality between the two conditional distributions of the covariates given the response in Settings 2 and 4, using a second-order Gaussian smoothing kernel. Other implementation details are the same as those in the simulations. Notice that Settings 1-2 are under the null hypothesis, while Settings 3-4 are under the alternative hypothesis. For Settings 1-2, we repeat the data generation 500 times, and show the empirical rejection rate in Table 5. For Settings 3-4, since there is only a single deterministic generation of the two samples except for a small fraction of group flipping, we conduct the proposed global test once, and show the p-value in Table 5.
7 Discussions
Several research directions warrant further exploration. First, it would be valuable to extend the framework to $K$-sample conditional distribution testing for $K > 2$ and establish connections with distance-based multivariate analysis of variance (Rizzo and Székely,, 2010). In addition to kernel smoothing, alternative machine learning techniques, such as random forests and neural networks, could be considered for estimating the CMMD and ICMMD. These methods are anticipated to offer better performance for higher-dimensional $X$, but determining the asymptotic distributions of the corresponding test statistics presents significant challenges. Lastly, compared to the sample CMMD, a local polynomial-based statistic may be preferable for addressing local testing problems, particularly when $x$ lies on the boundary of the region of interest. Such a statistic could potentially be helpful in the regression-discontinuity design (Calonico et al.,, 2014).
Appendix A Computation of the sample CMMD and ICMMD
For the sample CMMD , note that, for ,
and, for ,
We first calculate and , which can be computed in and , respectively. Then it can be seen that the overall cost of computing the statistic is .
For the sample ICMMD , note that, for ,
for ,
and, for and ,
We first calculate
Each of these sequences can be computed with operations. Then, it can be seen that the overall cost of computing the statistic is .
Appendix B Ancillary results
Below we list some relevant results about the generalized U-statistic in Lee, (2019).
Definition B.1 (Section 2.2, Lee, (2019)).
Assume that and are i.i.d. samples from distributions and respectively, and is independent of . Let be a function of arguments
which is symmetric in and . The generalized U-statistic based on is a statistic of the form
where the summation is over all -subsets of and -subsets of . Then is an unbiased estimator of .
Definition B.2 (Section 2.2, Lee, (2019)).
For and , define
and
Theorem B.1 (Theorem 2 in Section 2.2, Lee, (2019)).
Definition B.3 (Section 2.2, Lee, (2019)).
Let denote the distribution function of a single point mass at . For and , define
Theorem B.2 (Hoeffding decomposition, Theorem 3 in Section 2.2, Lee, (2019)).
The generalized U-statistic in Definition B.1 admits the representation
where is the generalized U-statistic based on in Definition B.3 and is given by
Moreover, the functions satisfy
(i) ;
(ii) for all integers and sets unless , , and .
The generalized U-statistics are thus all uncorrelated. Their variances are given by
where .
Appendix C Technical details
For and , denote .
Proof of Theorem 1.
The facts 1 and 2 directly follow from the definitions of semimetric of strong negative type and characteristic kernel.
Suppose generates , i.e., . Then
where we have used the fact that . ∎
Proof of Theorem 2.
Define the generalized U-statistic
where
Under Assumptions 1-3, we have
It suffices to show that
(A.1) |
For the first part of (A.1), we have
We first consider the term
Let , , and . We have
where follows from change of variables, and holds by Taylor’s theorem and Assumptions 1 and 3-4. Similarly, one can verify that
Thus, by Assumption 2,
For the second part of (A.1), following Definition B.2 and Theorem B.1 (), we have
We first calculate
As for , it can be expanded into several terms, and each of these terms can be shown to be of order . We only present the proof for the term below. Derivations for the other terms are similar. Note that
where follows from the same change of variables technique as in the proof of the first part of (A.1) above. Hence, .
Analogously, using change of variables, one can verify that , , , , , and . Therefore, by Assumption 2,
This completes the proof. ∎
Proof of Theorem 3.
Recall from the proof of Theorem 2 that
Following Definition B.3 and Theorem B.2, we have the Hoeffding decomposition for the U-statistic :
where
and is the remainder term. Furthermore, we give the explicit expression of (the expression of is similar and omitted):
(A.2) |
By similar calculations as in the proof of Theorem 2, we have
(A.3) | ||||
(A.4) |
under in (2). Applying Lyapunov CLT as in Section 3.4 of Ullah and Pagan, (1999),
The condition that for some used in Ullah and Pagan, (1999) is implied by our Assumption 1 that is bounded and .
Also, one can show that
Thus,
Proof of Theorem 4.
Since $\mathrm{CMMD}^2(x) = 0$ under $H_0(x)$ in (2), following the proof of Theorems 2-3, we have
and the Hoeffding decomposition for :
where
and is the remainder term, with a slight abuse of notation.
Under $H_0(x)$ in (2), one can verify that and , and thus the U-statistic is non-degenerate. Nonetheless, a more in-depth analysis reveals that and can be asymptotically negligible under $H_0(x)$ in (2). Specifically, by similar calculations as in the proof of Theorem 2,
Under $H_0(x)$ in (2), we have , and for all at the fixed $x$. Hence, for the first-order projection in (A.2), we have
under $H_0(x)$ in (2). It implies that
and thus,
Analogously, one can show that under $H_0(x)$ in (2). Following the proof of Theorem 3, we have , , and . Hence, when undersmoothing is employed, the asymptotic distribution of $\widehat{\mathrm{CMMD}}^2(x)$ is determined by the second-order projections, instead of the first-order projections.
The proof below is essentially similar to that in Appendix B.1 of Gretton et al., (2012). The main difference lies in that the spectral decomposition is performed with respect to the conditional distribution. Define the double centered version of :
where . As , the kernel is square integrable with respect to . Then admits a spectral decomposition
(A.5) |
where are the eigenvalues and are the corresponding orthonormal eigenfunctions of with respect to the conditional distribution , i.e.,
Proof of Theorem 6.
Note that
where
It suffices to show that
(A.6) |
For the first part of (A.6), we have
We first consider the term
Let , , and . We have
where follows from change of variables, and holds by Taylor’s theorem and Assumptions 1 and 5-6. Similarly, one can verify that
Thus, by Assumption 2,
For the second part of (A.6), following Definition B.2 and Theorem B.1 (), we have
We first calculate
As for , it can be expanded into several terms, and each of these terms can be shown to be of order or . We only present the proof for the term below. Derivations for the other terms are similar. Note that
where follows from the same change of variables technique as in the proof of the first part of (A.6) above. Hence, .
Analogously, using change of variables, one can verify that , , , , , , and . Therefore, by Assumption 2,
This completes the proof. ∎
Proof of Theorem 7.
Following Definition B.3 and Theorem B.2, we have the Hoeffding decomposition for the U-statistic :
where
and is the remainder term. Furthermore, we give the explicit expression of (the expression of is similar and omitted):
(A.7) |
Proof of Theorem 8.
Following the proof of Theorem 7, we have the Hoeffding decomposition
where is the remainder term, with slight abuse of notation.
Under $H_0$ in (1), one can verify that and , and thus the U-statistic is non-degenerate. Nonetheless, as in the local case, and can be asymptotically negligible under $H_0$ in (1). Specifically, by similar calculations as in the proof of Theorem 6,
Under $H_0$ in (1), we have for all . Hence, for the first-order projection in (A.7),
under $H_0$ in (1). It implies that
and thus,
Analogously, one can show that under $H_0$ in (1). As in the local case, when undersmoothing is used, the asymptotic distribution of $\widehat{\mathrm{ICMMD}}$ is determined by the second-order projections under $H_0$ in (1), since , and .
We have
and aim to show that
where
and
(A.10) | ||||
(A.11) | ||||
(A.12) |
Let for and for , and for ,
Write . Define for , and for . Let be the -algebra generated by for . It is straightforward that for any , and is of zero mean and square integrable. Also, for , we have . Hence, for each , is a square integrable martingale of zero mean. As
its asymptotic normality can be established by Corollary 3.1 in Hall and Heyde, (2014), i.e.,
which can be done with routine verification of the following conditions:
In our case, we have
where we have used Assumption 2. The tedious calculations are omitted here.
Appendix D Additional simulations and results
Example 6.
We explore the effect of taking different values of the constant factor $c$ in the bandwidth for the proposed global test, under the same settings as in Example 1. The empirical size table and power curves under the significance level of 0.05 for the proposed TCDT with three values of $c$ are displayed in Table 6 and Figure 7, respectively.

We find that, regardless of the value of $c$, the empirical size of TCDT remains controlled below 0.05. However, different values of $c$ lead to different power performance. In Settings (1.1) and (1.2), TCDT achieves higher power with a larger $c$, while in Setting (1.3) it performs better in terms of power with a smaller $c$. To achieve a balance, we set $c = 1$ for all the settings in the numerical studies.
References
- Balasubramanian et al., (2021) Balasubramanian, K., Li, T., and Yuan, M. (2021). On the optimality of kernel-embedding based goodness-of-fit tests. The Journal of Machine Learning Research, 22(1):1–45.
- Baringhaus and Franz, (2004) Baringhaus, L. and Franz, C. (2004). On a new multivariate two-sample test. Journal of Multivariate Analysis, 88(1):190–206.
- Berlinet and Thomas-Agnan, (2011) Berlinet, A. and Thomas-Agnan, C. (2011). Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media.
- Bouezmarni et al., (2012) Bouezmarni, T., Rombouts, J. V., and Taamouti, A. (2012). Nonparametric copula-based test for conditional independence with applications to Granger causality. Journal of Business & Economic Statistics, 30(2):275–287.
- Brooks et al., (2014) Brooks, T., Pope, D., and Marcolini, M. (2014). Airfoil Self-Noise. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C5VW2C.
- Calonico et al., (2014) Calonico, S., Cattaneo, M. D., and Titiunik, R. (2014). Robust nonparametric confidence intervals for regression-discontinuity designs. Econometrica, 82(6):2295–2326.
- Chakraborty and Zhang, (2019) Chakraborty, S. and Zhang, X. (2019). Distance metrics for measuring joint dependence with application to causal inference. Journal of the American Statistical Association.
- Chakraborty and Zhang, (2021) Chakraborty, S. and Zhang, X. (2021). A new framework for distance and kernel-based metrics in high dimensions. Electronic Journal of Statistics, 15(2):5455–5522.
- Chang et al., (2015) Chang, M., Lee, S., and Whang, Y.-J. (2015). Nonparametric tests of conditional treatment effects with an application to single-sex schooling on academic achievements. The Econometrics Journal, 18(3):307–346.
- Chatterjee et al., (2024) Chatterjee, A., Niu, Z., and Bhattacharya, B. B. (2024). A kernel-based conditional two-sample test using nearest neighbors (with applications to calibration, regression curves, and simulation-based inference). arXiv preprint arXiv:2407.16550.
- Chen et al., (2022) Chen, M., Tian, T., Zhu, J., Pan, W., and Wang, X. (2022). Paired-sample tests for homogeneity with/without confounding variables. Statistics and Its Interface, 15(3):335–348.
- Cleveland, (1993) Cleveland, W. S. (1993). Visualizing data. Hobart Press.
- Crump et al., (2008) Crump, R. K., Hotz, V. J., Imbens, G. W., and Mitnik, O. A. (2008). Nonparametric tests for treatment effect heterogeneity. The Review of Economics and Statistics, 90(3):389–405.
- Deb et al., (2020) Deb, N., Ghosal, P., and Sen, B. (2020). Measuring association on topological spaces using kernels and geometric graphs. arXiv preprint arXiv:2010.01768.
- Dette and Munk, (1998) Dette, H. and Munk, A. (1998). Nonparametric comparison of several regression functions: exact and asymptotic theory. The Annals of Statistics, 26(6):2339–2368.
- Duong, (2013) Duong, T. (2013). Local significant differences from nonparametric two-sample tests. Journal of Nonparametric Statistics, 25(3):635–645.
- Fan and Li, (1996) Fan, Y. and Li, Q. (1996). Consistent model specification tests: omitted variables and semiparametric functional forms. Econometrica: Journal of the Econometric Society, pages 865–890.
- Fukumizu et al., (2007) Fukumizu, K., Gretton, A., Sun, X., and Schölkopf, B. (2007). Kernel measures of conditional dependence. Advances in neural information processing systems, 20.
- González-Manteiga and Crujeiras, (2013) González-Manteiga, W. and Crujeiras, R. M. (2013). An updated review of goodness-of-fit tests for regression models. Test, 22(3):361–411.
- Gretton et al., (2012) Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., and Smola, A. (2012). A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773.
- Gretton et al., (2007) Gretton, A., Fukumizu, K., Teo, C., Song, L., Schölkopf, B., and Smola, A. (2007). A kernel statistical test of independence. Advances in neural information processing systems, 20.
- Hall and Hart, (1990) Hall, P. and Hart, J. D. (1990). Bootstrap test for difference between means in nonparametric regression. Journal of the American Statistical Association, 85(412):1039–1049.
- Hall and Heyde, (2014) Hall, P. and Heyde, C. C. (2014). Martingale limit theory and its application. Academic press.
- Hu and Lei, (2024) Hu, X. and Lei, J. (2024). A two-sample conditional distribution test using conformal prediction and weighted rank sum. Journal of the American Statistical Association, 119(546):1136–1154.
- Huang et al., (2024) Huang, M.-Y., Qin, J., and Huang, C.-Y. (2024). Efficient data integration under prior probability shift. Biometrics, 80(2):ujae035.
- Huang, (2010) Huang, T.-M. (2010). Testing conditional independence using maximal nonlinear conditional correlation. The Annals of Statistics, 38(4):2047–2091.
- Imbens and Wooldridge, (2009) Imbens, G. W. and Wooldridge, J. M. (2009). Recent developments in the econometrics of program evaluation. Journal of Economic Literature, 47(1):5–86.
- Ke and Yin, (2020) Ke, C. and Yin, X. (2020). Expected conditional characteristic function-based measures for testing independence. Journal of the American Statistical Association.
- Kim et al., (2019) Kim, I., Lee, A. B., and Lei, J. (2019). Global and local two-sample tests via regression. Electronic Journal of Statistics, 13(2):5253–5305.
- Kouw and Loog, (2018) Kouw, W. M. and Loog, M. (2018). An introduction to domain adaptation and transfer learning. arXiv preprint arXiv:1812.11806.
- Kulasekera, (1995) Kulasekera, K. (1995). Comparison of regression curves using quasi-residuals. Journal of the American Statistical Association, 90(431):1085–1093.
- Kulasekera and Wang, (1997) Kulasekera, K. and Wang, J. (1997). Smoothing parameter selection for power optimality in testing of regression curves. Journal of the American Statistical Association, 92(438):500–511.
- Lavergne, (2001) Lavergne, P. (2001). An equality test across nonparametric regressions. Journal of Econometrics, 103(1-2):307–344.
- Lee, (2019) Lee, A. J. (2019). U-statistics: Theory and Practice. Routledge.
- Lee, (2009) Lee, M.-j. (2009). Non-parametric tests for distributional treatment effect for randomly censored responses. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(1):243–264.
- Lee et al., (2024) Lee, S.-h., Ma, Y., and Zhao, J. (2024). Doubly flexible estimation under label shift. Journal of the American Statistical Association, pages 1–13.
- Li and Racine, (2007) Li, Q. and Racine, J. S. (2007). Nonparametric econometrics: theory and practice. Princeton University Press.
- Liu et al., (2023) Liu, M., Zhang, Y., Liao, K. P., and Cai, T. (2023). Augmented transfer regression learning with semi-non-parametric nuisance models. Journal of Machine Learning Research, 24(293):1–50.
- Lyons, (2013) Lyons, R. (2013). Distance covariance in metric spaces. The Annals of Probability, 41(5):3284–3305.
- Ma et al., (2023) Ma, C., Pathak, R., and Wainwright, M. J. (2023). Optimally tackling covariate shift in RKHS-based nonparametric regression. The Annals of Statistics, 51(2):738–761.
- Muandet et al., (2017) Muandet, K., Fukumizu, K., Sriperumbudur, B., Schölkopf, B., et al. (2017). Kernel mean embedding of distributions: A review and beyond. Foundations and Trends® in Machine Learning, 10(1-2):1–141.
- Neumeyer and Dette, (2003) Neumeyer, N. and Dette, H. (2003). Nonparametric comparison of regression curves: an empirical process approach. The Annals of Statistics, 31(3):880–920.
- Paparoditis and Politis, (2000) Paparoditis, E. and Politis, D. N. (2000). The local bootstrap for kernel estimators under general dependence conditions. Annals of the Institute of Statistical Mathematics, 52:139–159.
- Park and Muandet, (2020) Park, J. and Muandet, K. (2020). A measure-theoretic approach to kernel conditional mean embeddings. Advances in neural information processing systems, 33:21247–21259.
- Rizzo and Székely, (2010) Rizzo, M. L. and Székely, G. J. (2010). DISCO analysis: A nonparametric extension of analysis of variance. The Annals of Applied Statistics, pages 1034–1055.
- Rubin, (1974) Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688.
- Sejdinovic et al., (2013) Sejdinovic, D., Sriperumbudur, B., Gretton, A., and Fukumizu, K. (2013). Equivalence of distance-based and RKHS-based statistics in hypothesis testing. The Annals of Statistics, pages 2263–2291.
- Sheng and Sriperumbudur, (2023) Sheng, T. and Sriperumbudur, B. K. (2023). On distance and kernel measures of conditional dependence. Journal of Machine Learning Research, 24(7):1–16.
- Shimodaira, (2000) Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244.
- Sriperumbudur et al., (2008) Sriperumbudur, B. K., Gretton, A., Fukumizu, K., Lanckriet, G., and Schölkopf, B. (2008). Injective Hilbert space embeddings of probability measures. In 21st Annual Conference on Learning Theory (COLT 2008), pages 111–122. Omnipress.
- Sriperumbudur et al., (2010) Sriperumbudur, B. K., Gretton, A., Fukumizu, K., Schölkopf, B., and Lanckriet, G. R. (2010). Hilbert space embeddings and metrics on probability measures. The Journal of Machine Learning Research, 11:1517–1561.
- Stute, (1991) Stute, W. (1991). Conditional U-statistics. The Annals of Probability, pages 812–825.
- Su and Spindler, (2013) Su, L. and Spindler, M. (2013). Nonparametric testing for asymmetric information. Journal of Business & Economic Statistics, 31(2):208–225.
- Su and White, (2007) Su, L. and White, H. (2007). A consistent characteristic function-based test for conditional independence. Journal of Econometrics, 141(2):807–834.
- Su and White, (2008) Su, L. and White, H. (2008). A nonparametric Hellinger metric test for conditional independence. Econometric Theory, 24(4):829–864.
- Székely and Rizzo, (2005) Székely, G. J. and Rizzo, M. L. (2005). A new test for multivariate normality. Journal of Multivariate Analysis, 93(1):58–80.
- Székely and Rizzo, (2017) Székely, G. J. and Rizzo, M. L. (2017). The energy of data. Annual Review of Statistics and Its Application, 4(1):447–479.
- Székely et al., (2007) Székely, G. J., Rizzo, M. L., and Bakirov, N. K. (2007). Measuring and testing dependence by correlation of distances. The Annals of Statistics, 35(6):2769–2794.
- Székely et al., (2004) Székely, G. J., Rizzo, M. L., et al. (2004). Testing for equal distributions in high dimension. InterStat, 5(16.10):1249–1272.
- Taamouti et al., (2014) Taamouti, A., Bouezmarni, T., and El Ghouch, A. (2014). Nonparametric estimation and inference for conditional density based Granger causality measures. Journal of Econometrics, 180(2):251–264.
- Tibshirani et al., (2019) Tibshirani, R. J., Foygel Barber, R., Candes, E., and Ramdas, A. (2019). Conformal prediction under covariate shift. Advances in neural information processing systems, 32.
- Ullah and Pagan, (1999) Ullah, A. and Pagan, A. (1999). Nonparametric econometrics. Cambridge University Press, Cambridge.
- Wang et al., (2015) Wang, X., Pan, W., Hu, W., Tian, Y., and Zhang, H. (2015). Conditional distance correlation. Journal of the American Statistical Association, 110(512):1726–1734.
- Yao et al., (2018) Yao, S., Zhang, X., and Shao, X. (2018). Testing mutual independence in high dimension via distance covariance. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 80(3):455–480.
- Yin and Yuan, (2020) Yin, X. and Yuan, Q. (2020). A new class of measures for testing independence. Statistica Sinica, 30(4):2131–2154.
- Zhang et al., (2012) Zhang, K., Peters, J., Janzing, D., and Schölkopf, B. (2012). Kernel-based conditional independence test and application in causal discovery. arXiv preprint arXiv:1202.3775.
- Zhang et al., (2018) Zhang, X., Yao, S., and Shao, X. (2018). Conditional mean and quantile dependence testing in high dimension. The Annals of Statistics, 46(1):219–246.
- Zhu et al., (2020) Zhu, C., Zhang, X., Yao, S., and Shao, X. (2020). Distance-based and RKHS-based dependence metrics in high dimension. The Annals of Statistics, 48(6):3366–3394.