Orthogonal Causal Calibration
Abstract
Estimates of heterogeneous treatment effects such as conditional average treatment effects (CATEs) and conditional quantile treatment effects (CQTEs) play an important role in real-world decision making. Given this importance, one should ensure these estimates are calibrated. While there is a rich literature on calibrating estimators of non-causal parameters, very few methods have been derived for calibrating estimators of causal parameters, or more generally estimators of quantities involving nuisance parameters.
In this work, we develop general algorithms for reducing the task of causal calibration to that of calibrating a standard (non-causal) predictive model. Throughout, we study a notion of calibration defined with respect to an arbitrary, nuisance-dependent loss $\ell$, under which we say an estimator is calibrated if its predictions cannot be changed on any level set to decrease loss. For losses satisfying a condition called universal orthogonality, we present a simple algorithm that transforms partially-observed data into generalized pseudo-outcomes and applies any off-the-shelf calibration procedure. For losses satisfying a weaker condition called conditional orthogonality, we provide a similar sample splitting algorithm that performs empirical risk minimization over an appropriately defined class of functions. Convergence of both algorithms follows from a generic, two-term upper bound on the calibration error of any model: one term that measures the error in estimating unknown nuisance parameters and another that measures calibration error in a hypothetical world where the learned nuisances are true. We demonstrate the practical applicability of our results in experiments on both observational and synthetic data. Our results are exceedingly general, showing that essentially any existing calibration algorithm can be used in causal settings, with additional loss only arising from errors in nuisance estimation.
1 Introduction
Estimates of heterogeneous causal effects such as conditional average treatment effects (CATEs), conditional average causal derivatives (CACDs), and conditional quantile treatment effects (CQTEs) play a pervasive role in understanding various statistical, scientific, and economic problems. Due to the partially-observed nature of causal data, estimating causal quantities is naturally more difficult than estimating non-causal ones. Despite this, a vast literature has developed focused on estimating causal effects using both classical statistical methods [34, 36, 46, 1, 19] and modern machine learning (ML) approaches [18, 7, 6, 53, 14, 30]. In this work, we focus on a complementary and relatively understudied direction — developing algorithms for calibrating estimates of heterogeneous causal effects.
Calibration is a notion of model consistency that has been extensively studied both in theory and practice by the general ML community [23, 43, 39, 42, 56, 31, 50, 29, 13, 12, 17]. Classically, a model $f$ predicting an outcome $Y$ from covariates $X$ is said to be calibrated if, when it makes a given prediction, the observed outcome on average equals said prediction, i.e. if
$$\mathbb{E}[Y \mid f(X)] = f(X) \quad \text{almost surely}.$$
There are deep connections between calibrated predictions and optimal downstream decision-making [48, 47]. Namely, for many utility functions, when a decision maker is equipped with calibrated predictions, the optimal mapping from prediction to decision simply treats those predictions as if they were the ground truth. Given that models predicting heterogeneous causal effects or counterfactual quantities are leveraged in decision-making in domains such as medicine [15, 37], advertising and e-commerce [32, 20], and public policy [61], it is essential that these estimates are calibrated. In non-causal settings, there exists a plethora of simple algorithms for performing calibration, such as histogram binning [25], isotonic calibration [2, 58], linear calibration, and Platt scaling [51, 45]. These algorithms are all univariate regression procedures, regressing an observed outcome onto a model prediction over an appropriately-defined function class. However, given the partially-observed nature of causal problems, it is non-obvious how to extend these algorithms to calibrating estimates of general causal effects.
To illustrate the importance and difficulty of calibrating causal models, we look at the example of a doctor using model predictions to aid in prescribing medicine to prevent heart attacks. The doctor may have access to observations of the form $Z = (X, A, Y)$, where $X$ represents covariates, $A \in \{0, 1\}$ is a binary treatment, and $Y$ represents an observed outcome. For instance, a doctor may use a patient's age, height, and blood pressure (covariates $X$) to decide whether or not to give medicine (treatment $A$) to a patient in order to prevent a heart attack (outcome $Y$). To aid in their decision making, the doctor may want to employ a model $\theta$ predicting the conditional average treatment effect (CATE) $\theta^*$, which is defined as
$$\theta^*(x) := \mathbb{E}\big[Y(1) - Y(0) \mid X = x\big],$$
where $Y(1)$ and $Y(0)$ denote the potential outcomes under treatment and control.
In this example, the CATE measures the difference in probability of a heart attack under treatment and control. Naturally, a doctor may decide to prescribe medication to a patient if $\theta(x) < 0$, i.e. if the model predicts the probability of a heart attack will go down under treatment. If the model is miscalibrated—for instance if we have
$$\mathbb{E}\big[Y(1) - Y(0) \mid \theta(X) < 0\big] > 0,$$
the doctor may actually harm patients by prescribing medication! If $\theta$ were calibrated, such a risk would not occur. However, the doctor cannot run one of the aforementioned calibration algorithms because they only ever observe the outcome under treatment ($Y(1)$) or control ($Y(0)$), never the target individual treatment effect $Y(1) - Y(0)$.
To work around this, one typically needs to estimate nuisance parameters — functions of the underlying data-generating distribution that are used to “de-bias” partial observations (either $Y(1)$ or $Y(0)$ in the previous example) into pseudo-outcomes: quantities that, on average, look like the target treatment effect (here $\theta^*(X)$). For the CATE, these nuisance functions are the propensity score $\pi^*(x) := \mathbb{P}(A = 1 \mid X = x)$ and the expected outcome mapping $\mu^*(x, a) := \mathbb{E}[Y \mid X = x, A = a]$, and pseudo-outcomes are of the form [35]
$$\chi(Z) = \mu^*(X, 1) - \mu^*(X, 0) + \frac{A - \pi^*(X)}{\pi^*(X)\big(1 - \pi^*(X)\big)}\big(Y - \mu^*(X, A)\big).$$
Naturally, the nuisances and pseudo-outcomes will be different for other heterogeneous causal effects, and can typically be derived by the statistician.
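To make this construction concrete, the following is a minimal sketch of how one might compute these doubly-robust pseudo-outcomes in Python from plug-in nuisance estimates; the function name, array interface, and clipping safeguard are our own illustrative choices rather than a prescribed implementation.

```python
import numpy as np

def dr_pseudo_outcomes(A, Y, pi_hat, mu0_hat, mu1_hat, clip=1e-3):
    """Doubly-robust (AIPW) pseudo-outcomes chi(Z) for the CATE.

    A, Y:    observed treatments (0/1) and outcomes, shape (n,)
    pi_hat:  estimated propensity scores P(A=1 | X), shape (n,)
    mu0_hat: estimated outcome regressions E[Y | X, A=0], shape (n,)
    mu1_hat: estimated outcome regressions E[Y | X, A=1], shape (n,)
    """
    pi = np.clip(pi_hat, clip, 1 - clip)         # guard against extreme propensities
    mu_obs = np.where(A == 1, mu1_hat, mu0_hat)  # estimate of E[Y | X, A] at observed arm
    return mu1_hat - mu0_hat + (A - pi) / (pi * (1 - pi)) * (Y - mu_obs)
```

By the tower rule, $\mathbb{E}[\chi(Z) \mid X] = \theta^*(X)$ when the nuisances are correct, so regressing pseudo-outcomes onto model predictions behaves, on average, like regressing the unobservable treatment effect itself.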
What is known about causal calibration? In the setting of the above CATE example, if we assume conditional ignorability (i.e., that the potential outcomes $Y(1)$ and $Y(0)$ are conditionally independent of the treatment $A$ given covariates $X$), van der Laan et al. [58] show that by estimating the propensity score and expected outcome mappings, one can perform isotonic regression of the doubly-robust pseudo-outcomes onto model predictions to ensure asymptotic calibration. However, this result is limited in application, and doesn't extend to other heterogeneous effects of interest like conditional average causal derivatives, conditional local average treatment effects, or even CATEs in the presence of unobserved confounding (in which case one may need instrumental variables). Furthermore, the results of van der Laan et al. [58] don't allow a learner to calibrate using other algorithms like Platt scaling, histogram binning, or even simple linear regression. While one could derive individual results for calibrating an estimate of “parameter A” under “algorithm B”, this would likely lead to a repeated reinventing of the wheel. The question considered in this paper is thus as follows: can one construct a framework that allows a statistician to calibrate estimates of general heterogeneous causal effects using arbitrary, off-the-shelf calibration algorithms?
1.1 Our Contributions
In this paper, we reduce the problem of causal calibration, or calibrating estimates of heterogeneous causal effects, to the well-studied problem of calibrating non-causal predictive models. We assume the scientist is interested in calibrating an estimate $\theta$ of some heterogeneous causal effect $\theta^*$ that can be specified as the conditional minimizer of a loss function, i.e. $\theta^*(x) \in \arg\min_z \mathbb{E}[\ell(z, \nu^*; Z) \mid X = x]$. Here, $\ell$ involves a nuisance component, and we assume that there exists a true, unknown nuisance parameter $\nu^*$. We say $\theta$ is perfectly calibrated with respect to $\ell$ if $\mathbb{E}[\partial\ell(\theta(X), \nu^*; Z) \mid \theta(X)] = 0$ almost surely (here, $\partial\ell$ denotes the partial derivative of $\ell$ with respect to its first argument $z$), and that $\theta$ is approximately calibrated if the error
$$\mathrm{Cal}(\theta; \ell, \nu) := \Big(\mathbb{E}\Big[\mathbb{E}\big[\partial\ell(\theta(X), \nu; Z) \mid \theta(X)\big]^2\Big]\Big)^{1/2}$$
is small when $\nu$ is set to the true nuisance $\nu^*$. In words, $\theta$ is calibrated if the predictions of $\theta$ are “unimprovable” with respect to current level sets, a condition similar to the one outlined by Noarov and Roth [47] and the concept of swap regret [4, 17].
As a concrete example, the CATE $\theta^*$ is the conditional minimizer of the doubly-robust loss given by $\ell_{\mathrm{DR}}(z, (\pi, \mu); Z) := (z - \chi(Z; \pi, \mu))^2$, where $\chi(Z; \pi, \mu)$ is the pseudo-outcome above and $\pi^*$ and $\mu^*$ are as defined above. Perfect calibration for a CATE estimate $\theta$ becomes $\mathbb{E}[\chi(Z; \pi^*, \mu^*) \mid \theta(X)] = \theta(X)$ almost surely, and approximate calibration becomes smallness of $\big(\mathbb{E}\big[\big(\mathbb{E}[\chi(Z; \pi^*, \mu^*) \mid \theta(X)] - \theta(X)\big)^2\big]\big)^{1/2}$.
Our reduction and algorithms are based around a robustness condition on the underlying loss called Neyman orthogonality. We say a loss is Neyman orthogonal if, for all $\nu \in \mathcal{N}$,
$$\mathbb{E}\Big[D_\nu\,\partial\ell(\theta^*(X), \nu^*; Z)[\nu - \nu^*] \;\Big|\; X\Big] = 0, \tag{1}$$
i.e. if $\ell$ is insensitive to small estimation errors in the nuisance parameters around the true nuisance $\nu^*$ and loss-minimizing parameter $\theta^*$ (here, $D_\nu$ denotes a Gateaux derivative). In our work, we consider two mild variants of Neyman orthogonality. The first is universal orthogonality [18], a stronger notion of orthogonality in which $\theta^*$ in Equation (1) can be replaced by any estimate $\theta$. Examples of causal effects that minimize such a loss are CATEs, conditional average causal derivatives (CACDs), and conditional local average treatment effects (CLATEs). Another condition we consider is called conditional orthogonality, in which instead of conditioning on covariates in Equation (1), one conditions on a post-processing $\theta(X)$ of covariates instead. Conditional orthogonality is a natural condition for calibration tasks, as we ultimately care about assessing the quality of an estimator conditional on its own predictions. An example causal parameter that can be specified as the minimizer of a conditionally orthogonal loss is the conditional quantile under treatment (CQUT). Our specific contributions, broken down by assumption on the loss, are as follows:
1. In Section 3, we study calibration with respect to universally orthogonal loss functions. We present a sample splitting algorithm (Algorithm 1) that uses black-box ML algorithms to estimate unknown nuisance functions on the first half of the data, transforms the second half into de-biased “pseudo-outcomes”, and then applies any off-the-shelf calibration algorithm on the second half of the data treating pseudo-outcomes as true labels. We provide high-probability, finite-sample guarantees for our algorithm that only depend on convergence rates for the black-box nuisance estimation and calibration algorithms. We additionally provide a cross-calibration algorithm (Algorithm 2) that makes more efficient use of the data and show that our calibration algorithms do not significantly increase the risk of a model (Appendix C).
2. In Section 4, we study the calibration of models predicting effects that minimize a conditionally orthogonal loss function. In this more general setting, we describe a similar sample splitting algorithm that uses half of the data to learn nuisances and then runs an off-the-shelf algorithm for “generalized” calibration on the second half of the data. We provide examples of such calibration algorithms in Appendix D, which are motivated by extending algorithms like isotonic calibration and linear calibration to general losses. We also prove finite-sample guarantees for this algorithm and construct a similar cross-calibration algorithm (Algorithm 4).
3. Lastly, in Section 5, we empirically evaluate the performance of the aforementioned algorithms on a mix of observational and semi-synthetic data. In one experiment, we use observational data alongside Algorithm 2 to show how one can calibrate models predicting the CATE of 401(k) eligibility and CLATE of 401(k) participation on an individual's net total financial assets. Likewise, in another experiment on synthetically generated data, we show how Algorithm 4 can be used to both decrease calibration error and average loss for models predicting CQUTs.
The design of our algorithms is inspired by a generic upper bound on the calibration error of any estimator $\theta$. We show that this calibration error can be bounded above by two decoupled terms: one involving nuisance estimation error and another representing calibration error under the orthogonalized loss evaluated at the learned nuisances.
Informal Theorem 1.
Suppose $\theta$ is some estimator, $\ell$ is some base loss, and $\widetilde\ell$ is the corresponding orthogonalized loss. Let $\nu^*$ denote the true, unknown nuisance functions, and $\widehat\nu$ arbitrary nuisance estimates. We have
$$\mathrm{Cal}(\theta; \widetilde\ell, \nu^*) \;\lesssim\; \underbrace{\mathrm{Err}(\widehat\nu, \nu^*)}_{\text{nuisance estimation error}} \;+\; \underbrace{\mathrm{Cal}(\theta; \widetilde\ell, \widehat\nu)}_{\text{calibration error under learned nuisances}}.$$
We view Informal Theorem 1 as a “change of measure” or “change of nuisance” result, allowing the learner to essentially pretend our learned nuisances represent reality while only paying a small error for misestimation. In many settings where there are two nuisance functions (e.g. $\nu = (\nu_1, \nu_2)$), we will have $\mathrm{Err}(\widehat\nu, \nu^*) \lesssim \big\|(\widehat\nu_1 - \nu_1^*)(\widehat\nu_2 - \nu_2^*)\big\|$, and thus the error will be small if we estimate at least one nuisance function sufficiently well. This in particular is the case for the aforementioned CATE, where $\nu = (\pi, \mu)$, as noted above. More broadly, we will have $\mathrm{Err}(\widehat\nu, \nu^*) \lesssim \|\widehat\nu - \nu^*\|^2$. While simple in appearance, the above bound depends on deep connections between Neyman orthogonality and calibration error. We prove a bound of the above form in each of the main sections of the paper. This decoupled bound naturally suggests using some fraction of the data to learn nuisance parameters, thus minimizing the first term, and then using fresh data with the learned nuisances to perform calibration, thus minimizing the second term.
Given that our framework and results are quite general, we go through concrete examples to help with building intuition. In fact, we show that the work of van der Laan et al. [58] can be seen as a special instantiation of our framework.
1.2 Related Work
Calibration:
Important to our work is the vast literature on calibration. Calibration was first considered in the context of producing calibrated probabilities, both in the online [12, 17] and i.i.d. [51, 62] settings, but has since been considered in other contexts such as distribution calibration [54], threshold calibration [52, 59], and parity calibration [11]. Calibration is typically orthogonal to model training, and usually occurs as a simple post-processing routine. Some well-known algorithms for post-hoc calibration include Platt scaling [51, 26], histogram binning [62, 25], and isotonic regression [63, 2]. Many of these algorithms simultaneously offer strong theoretical guarantees (see Gupta [24] for an overview) and strong empirical performance when applied to practically-relevant ML models [23]. We view our work as complementary to existing, non-causal results on calibration. Our two-step algorithm allows a practitioner to directly apply any of the above listed algorithms, inheriting existing error guarantees so long as nuisance estimation is efficiently performed.
Double/debiased Machine Learning:
In our work, we also draw heavily from the literature on double/debiased machine learning [6, 9, 8]. Methods relating to double machine learning aim to eschew classical non-parametric assumptions (e.g. Donsker properties) on nuisance functions, often through simple sample splitting schemes [28, 34, 3]. In particular, if target population parameters are estimated using a Neyman orthogonal loss function [44, 18], then these works show that empirical estimates of the population parameters converge rapidly to either a population or conditional loss minimizer.
Of the various works related to double/debiased machine learning, we draw most heavily on ideas from the framework of orthogonal statistical learning [18]. In their work, Foster and Syrgkanis [18] develop a simple two-step framework for statistical learning in the presence of nuisance estimation. In particular, they show that when the underlying loss is Neyman orthogonal, the excess risk can be bounded by two decoupled error terms: the error from estimating nuisances and the error incurred by applying a learning algorithm with a fixed nuisance. Following its introduction, the orthogonal statistical learning framework has found applications in tasks such as the design of causal random forests [49] and causal model ensembling via Q-aggregation [40]. In this work, we show that central ideas from orthogonal statistical learning are naturally applicable to the problem of calibrating estimators of causal parameters.
Lastly, our work can be seen as a significant generalization of existing results on the calibration of causal parameters. Primarily, we compare our results to the work of van der Laan et al. [58]. In their work, the authors construct a sample-splitting scheme for calibrating estimates of conditional average treatment effects (CATEs). The specific algorithm leveraged by the authors uses one half of the data to estimate nuisance parameters, namely propensities and expected outcomes under control/treatment. After nuisances are learned, the algorithm transforms the second half of the data into pseudo-observations and runs isotonic regression as a calibration procedure. Our results are applicable to estimates of any causal parameter that can be specified as the population minimizer of a loss function, not just CATEs. Additionally, our generic procedure allows the scientist to plug in any black-box method for calibration, not just a specific algorithm such as isotonic regression. Likewise, our work is also significantly more general than the work of Leng and Dimmery [41], who provide a maximum-likelihood-based approach for performing linear calibration (a weaker notion of calibration) of CATE estimates.
2 Calibration of Causal Effects
We are interested in calibrating some estimator/ML model $\theta$ that is predicting some heterogeneous causal effect $\theta^*$. We assume $\theta^*$ is specified as the conditional minimizer of some loss $\ell$, i.e.
$$\theta^*(x) \in \arg\min_{z \in \mathbb{R}} \mathbb{E}\big[\ell(z, \nu^*; Z) \mid X = x\big].$$
Here, $\ell$ is some generic loss function involving a nuisance. We assume $\mathcal{Z}$ is some space containing observations, and write $Z$ as a prototypical random element from this space, and $\mathbb{P}$ as the distribution on $\mathcal{Z}$ from which $Z$ is drawn. In the above, $\mathcal{N}$ is some space of nuisance functions, which are of the form $\nu : \mathcal{X}' \to \mathbb{R}^k$. We assume that there is some true nuisance parameter $\nu^* \in \mathcal{N}$, but that this parameter is unknown to the learner and must be estimated. We generally assume $\mathcal{N}$ is a subset of $L^2(\mathbb{P}_{X'})$, and so as a norm we can consider $\|\nu\| := \mathbb{E}\big[\|\nu(X')\|_2^2\big]^{1/2}$, where $\|\cdot\|_2$ denotes the standard Euclidean norm on $\mathbb{R}^k$. Given a function $F : \mathcal{N} \to \mathbb{R}$ and functions $\nu, \bar\nu \in \mathcal{N}$, we let $D_\nu F(\nu)[\bar\nu - \nu]$ and $D^2_\nu F(\nu)[\bar\nu - \nu, \bar\nu - \nu]$ denote respectively the first and second Gateaux derivatives of $F$ at $\nu$ in the direction $\bar\nu - \nu$.
We typically have $Z = (X, A, Y)$, where $X$ represents covariates, $A$ represents treatment, and $Y$ represents an outcome in an experiment. More generally, we assume the nested structure $\mathcal{X} \subseteq \mathcal{X}' \subseteq \mathcal{Z}$, where intuitively $\mathcal{X}$ represents the space of covariates or features, $\mathcal{X}'$ represents an extended set of parameters on which the true nuisance parameter may also depend (e.g. instrumental variables, whether or not an individual actually accepted a treatment), and $\mathcal{Z}$ may contain additional observable information (e.g. outcome under the given treatment). We write the marginal distributions of $X$ and $X'$ respectively as $\mathbb{P}_X$ and $\mathbb{P}_{X'}$. We typically write $\ell(\theta, \nu; Z)$ instead of $\ell(\theta(X), \nu(X'); Z)$ for succinctness, and we let $\partial\ell$ be the partial derivative of $\ell$ with respect to its first argument, $z$.
In our work, we consider a general definition of calibration error that holds for any loss involving a nuisance component. This general notion of calibration, which is similar to notions that have been considered in Noarov and Roth [47], Gopalan et al. [22], Foster and Vohra [16], and Globus-Harris et al. [21], implies an estimator cannot be “improved” on any level set of its prediction.
Definition 2.1.
Let $\theta$ be an estimator, $\ell$ a nuisance-dependent loss function, and $\nu$ a fixed nuisance parameter. We define the calibration error of $\theta$ with respect to $\ell$ and $\nu$ to be
$$\mathrm{Cal}(\theta; \ell, \nu) := \Big(\mathbb{E}\Big[\mathbb{E}\big[\partial\ell(\theta(X), \nu; Z) \mid \theta(X)\big]^2\Big]\Big)^{1/2}.$$
We say $\theta$ is perfectly calibrated if $\mathrm{Cal}(\theta; \ell, \nu^*) = 0$, where $\nu^*$ is the true, unknown nuisance parameter.
In the special case where the loss is the squared loss $\ell(z; Z) = (z - Y)^2$ and doesn't involve a nuisance component, we recover a more classical notion of calibration error.
Definition 2.2 (Classical calibration error).
Let $\theta$ be a fixed estimator, and let $(X, Y) \sim \mathbb{Q}$, where $\mathbb{Q}$ is some arbitrary distribution on $\mathcal{X} \times \mathbb{R}$. The classical calibration error is defined by
$$\mathrm{Cal}(\theta; \mathbb{Q}) := \Big(\mathbb{E}_{\mathbb{Q}}\Big[\big(\mathbb{E}_{\mathbb{Q}}[Y \mid \theta(X)] - \theta(X)\big)^2\Big]\Big)^{1/2},$$
where we make dependence on the underlying distribution $\mathbb{Q}$ explicit for convenience.
We will leverage the classical calibration error in the sequel when reasoning about the convergence of our sample splitting algorithm. The key for now is that Definition 2.1 generalizes the above definition to arbitrary losses involving a nuisance component.
We are always interested in controlling $\mathrm{Cal}(\theta; \ell, \nu^*)$. In words, $\theta$ is calibrated if, on each level set $\{x : \theta(x) = t\}$, there is no constant value to which we can switch the prediction to obtain lower loss. We can glean further semantic meaning from looking at several additional examples below.
Example 2.3.
Below, we almost always assume observations are of the form $Z = (X, A, Y)$, where $X$ are covariates, $A \in \{0, 1\}$ indicates treatment, and $Y$ indicates an outcome. We assume conditional ignorability of the treatment given covariates. The one exception is for conditional local average treatment effects, when assigned treatment may be ignored by the subject.
1. Conditional Average Treatment Effect: Recall from earlier that the CATE can be specified as the conditional minimizer of the doubly-robust loss given by $\ell_{\mathrm{DR}}(z, (\pi, \mu); Z) := (z - \chi(Z; \pi, \mu))^2$, with $\chi(Z; \pi, \mu)$ given by
$$\chi(Z; \pi, \mu) := \mu(X, 1) - \mu(X, 0) + \frac{A - \pi(X)}{\pi(X)\big(1 - \pi(X)\big)}\big(Y - \mu(X, A)\big). \tag{2}$$
Here, the true nuisances are $\pi^*(x) = \mathbb{P}(A = 1 \mid X = x)$ and $\mu^*(x, a) = \mathbb{E}[Y \mid X = x, A = a]$. It is clear perfect calibration becomes $\mathbb{E}[\chi(Z; \pi^*, \mu^*) \mid \theta(X)] = \theta(X)$ almost surely.
2. Conditional Average Causal Derivative: In a setting where treatments are not binary, but rather continuous, it no longer makes sense to consider treatment effects as a difference. Instead, we can consider the conditional average causal derivative, which is defined by
$$\theta^*(x) := \mathbb{E}\Big[\tfrac{\partial}{\partial a}\mu^*(X, a)\big|_{a = A} \;\Big|\; X = x\Big].$$
$\theta^*$ is in fact the conditional minimizer of the loss $\ell_{\mathrm{CACD}}(z, (\mu, p); Z) := (z - \chi(Z; \mu, p))^2$ given by the pseudo-outcome
$$\chi(Z; \mu, p) := \tfrac{\partial}{\partial a}\mu(X, a)\big|_{a = A} - \frac{\tfrac{\partial}{\partial a}p(A \mid X)}{p(A \mid X)}\big(Y - \mu(X, A)\big),$$
where $\mu^*$ is again the expected outcome mapping and $p^*(a \mid x)$ is the density of the treatment given covariates (we could alternatively directly estimate the ratio nuisance $\tfrac{\partial}{\partial a}p(a \mid x)/p(a \mid x)$ — more on this later). Naturally, $\theta$ is perfectly calibrated with respect to $\ell_{\mathrm{CACD}}$ if
$$\mathbb{E}\big[\chi(Z; \mu^*, p^*) \mid \theta(X)\big] = \theta(X) \quad \text{almost surely}.$$
3. Conditional Local Average Treatment Effect: In settings with non-compliance, the prescribed treatment $A \in \{0, 1\}$ given to an individual may not be equivalent to the received treatment $T \in \{0, 1\}$. Formally, we have $Z = (X, A, T, Y)$, where we assume $T = T(A)$ and $Y = Y(T)$, where $T(a)$ and $Y(t)$ represent potential treatments and outcomes for treatment assignment $a$ and received treatment $t$. We also assume monotonicity, i.e. that $T(1) \ge T(0)$ almost surely, and that the propensity for the recommended treatment is known. The parameter of interest here is
$$\theta^*(x) := \mathbb{E}\big[Y(1) - Y(0) \mid T(1) > T(0),\ X = x\big],$$
which is identified (following standard computations, see Lan and Syrgkanis [40]) as
$$\theta^*(x) = \frac{\mathbb{E}[Y \mid X = x, A = 1] - \mathbb{E}[Y \mid X = x, A = 0]}{\mathbb{E}[T \mid X = x, A = 1] - \mathbb{E}[T \mid X = x, A = 0]}. \tag{3}$$
It follows that $\theta^*$ conditionally minimizes a somewhat complicated doubly-robust weighted squared loss $\ell_{\mathrm{CLATE}}$ (Equation (4); see Lan and Syrgkanis [40] for its exact form). Calibration with respect to $\ell_{\mathrm{CLATE}}$ becomes
$$\mathbb{E}\big[\partial\ell_{\mathrm{CLATE}}(\theta(X), \nu^*; Z) \mid \theta(X)\big] = 0 \quad \text{almost surely}.$$
4. Conditional Quantile Under Treatment: Lastly, we consider the conditional $\gamma$th quantile under treatment which, assuming $Y(1)$ admits a conditional Lebesgue density, is specified as $\theta^*(x) := F^{-1}_{Y(1) \mid X = x}(\gamma)$, where $F_{Y(1) \mid X = x}$ denotes the conditional CDF of $Y(1)$ given covariates $X = x$. More generally, the $\gamma$th quantile under treatment is specified as
$$\theta^*(x) \in \arg\min_{z} \mathbb{E}\Big[g^*(X)\,A\,\ell_\gamma(z; Y) \;\Big|\; X = x\Big].$$
In the above, $\ell_\gamma$ denotes the $\gamma$-pinball loss, which is defined as
$$\ell_\gamma(z; y) := \max\big\{\gamma(y - z),\ (\gamma - 1)(y - z)\big\},$$
where the true, unknown nuisance is the inverse propensity score $g^*(x) := 1/\pi^*(x)$, where $\pi^*(x) = \mathbb{P}(A = 1 \mid X = x)$. A direct computation yields that calibration under this weighted pinball loss becomes
$$\mathbb{P}\big(Y(1) \le \theta(X) \mid \theta(X)\big) = \gamma \quad \text{almost surely}.$$
The first three losses considered above are “easy” to calibrate with respect to, as $\ell_{\mathrm{DR}}$, $\ell_{\mathrm{CACD}}$, and $\ell_{\mathrm{CLATE}}$ are universally orthogonal losses, a concept discussed in Section 3. The inverse-propensity-weighted pinball loss, on the other hand, is the quintessential example of a “hard” loss to calibrate with respect to. Handling more complicated losses like this is the subject of Section 4.
3 Calibration for Universally Orthogonal Losses
In this section, we develop algorithms for calibrating estimates of causal effects that are specified as conditional minimizers of universally orthogonal losses. Universal orthogonality, first introduced in Foster and Syrgkanis [18], can be viewed as a robustness property of losses that are “close” to squared losses. Heuristically, a loss is universally orthogonal if it is insensitive to small errors in estimating the nuisance functions regardless of the current estimate of the conditional loss minimizer, i.e. Equation (1) holds when $\theta^*$ is replaced by any function $\theta$. We formalize this in the following definition.
Definition 3.1 (Universal Orthogonality).
Let $\ell$ be a loss involving nuisance, and let $\nu^*$ denote the true nuisance parameter associated with $\ell$. We say $\ell$ is universally orthogonal if, for any estimate $\theta$ and any $\nu \in \mathcal{N}$, we have
$$\mathbb{E}\Big[D_\nu\,\partial\ell(\theta(X), \nu^*; Z)[\nu - \nu^*] \;\Big|\; X\Big] = 0 \quad \text{almost surely}.$$
By direct computation, one can verify that $\ell_{\mathrm{DR}}$, $\ell_{\mathrm{CACD}}$, and $\ell_{\mathrm{CLATE}}$ all satisfy Definition 3.1, whereas the inverse-propensity-weighted pinball loss does not — this aligns with the idea of universally orthogonal losses behaving like squared losses. The following example gives a general class of losses that are universally orthogonal.
Example 3.2.
Assume that we have a vector of nuisances $\nu = (\nu_1, \dots, \nu_m)$ for some $m \ge 1$. Further, assume there is some function $\chi : \mathcal{Z} \times \mathcal{N} \to \mathbb{R}$ such that (here, $\chi$ can be thought of as a generalized pseudo-outcome built from the (conditional) Riesz representer of the relevant linear functional)
$$\mathbb{E}\big[\chi(Z; \nu^*) \mid X\big] = \theta^*(X) \quad \text{almost surely}.$$
Then, any loss that obeys the score equation
$$\partial\ell(z, \nu; Z) = 2\big(z - \chi(Z; \nu)\big) \tag{5}$$
with $\mathbb{E}\big[D_\nu\chi(Z; \nu^*)[\nu - \nu^*] \mid X\big] = 0$ will satisfy Definition 3.1. We think of $\chi(Z; \nu)$ as representing a generalized pseudo-outcome that, on average, looks like the desired heterogeneous causal effect. In fact, any loss of the above form is equivalent to a loss that looks like $(z - \chi(Z; \nu))^2$.
All universally orthogonal losses we have seen up until this point can be written in terms of pseudo-outcomes in the canonical form above. We note the particular settings of $\chi$ and $\nu$ for each of these losses in Table 1. Typically, we assume the statistician will estimate $\chi$ and $\nu$ by plugging in estimates for each unknown constituent function, e.g. by plugging in estimates $\widehat\pi$ and $\widehat\mu$ for $\pi^*$ and $\mu^*$ in the CATE example.
[Table 1: pseudo-outcome $\chi$ and nuisance vector $\nu$ for each of the universally orthogonal losses $\ell_{\mathrm{DR}}$, $\ell_{\mathrm{CACD}}$, and $\ell_{\mathrm{CLATE}}$.]
Universal orthogonality allows us to bound $\mathrm{Cal}(\theta; \ell, \nu^*)$ above by two decoupled terms. The first, denoted $\mathrm{Err}(\nu, \nu^*)$, intuitively measures the distance between some fixed nuisance estimate $\nu$ and the unknown, true nuisance parameter $\nu^*$. This error is second order in nature, generally depending on the squared norm of the nuisance estimation error. The second term is $\mathrm{Cal}(\theta; \ell, \nu)$, and represents the calibration error of $\theta$ in a reality where the learned nuisances were actually the true nuisances. As mentioned in the introduction, we view our bound as a “change of nuisance” result (akin to change of measure), allowing the learner to pay an upfront price for nuisance misestimation and then subsequently reason about the calibration error under potentially incorrect, learned nuisances. We prove the following in Appendix A.
Theorem 3.3.
Let $\ell$ be universally orthogonal, per Definition 3.1. Let $\nu^*$ denote the true nuisance parameters associated with $\ell$. Suppose the second Gateaux derivative $D^2_\nu\,\partial\ell(\theta(X), \bar\nu; Z)[\nu - \nu^*, \nu - \nu^*]$ exists for any $\bar\nu \in [\nu, \nu^*]$. Then, for any $\theta$ and $\nu \in \mathcal{N}$, we have
$$\mathrm{Cal}(\theta; \ell, \nu^*) \le \mathrm{Cal}(\theta; \ell, \nu) + \mathrm{Err}(\nu, \nu^*),$$
where $\mathrm{Err}(\nu, \nu^*) := \frac{1}{2}\Big(\mathbb{E}\Big[\sup_{\bar\nu \in [\nu, \nu^*]} \mathbb{E}\big[D^2_\nu\,\partial\ell(\theta(X), \bar\nu; Z)[\nu - \nu^*, \nu - \nu^*] \mid X\big]^2\Big]\Big)^{1/2}$ (for $\nu, \nu' \in \mathcal{N}$, we let $[\nu, \nu']$ denote the interval $\{t\nu + (1 - t)\nu' : t \in [0, 1]\}$).
While the expression defining $\mathrm{Err}(\nu, \nu^*)$ looks unpalatable, when the score is linear in the pseudo-outcome (as in Example 3.2), this term can quite generally be bounded above by the cross-error in nuisance estimation, i.e. the quantity $\big(\mathbb{E}\big[\mathbb{E}[\chi(Z; \nu) - \chi(Z; \nu^*) \mid X]^2\big]\big)^{1/2}$. The formal statement of this fact is presented in Proposition 3.4 below. More broadly, if the conditional Hessian satisfies $\mathbb{E}\big[D^2_\nu\,\partial\ell(\theta(X), \bar\nu; Z)[\nu - \nu^*, \nu - \nu^*] \mid X\big] \lesssim \|\nu - \nu^*\|^2$ for any $\theta$ and $\bar\nu \in [\nu, \nu^*]$, then we simply have $\mathrm{Err}(\nu, \nu^*) \lesssim \|\nu - \nu^*\|^2$. These sorts of second-order bounds appear in other works on causal estimation [18, 58, 40, 6].
We view the above result as a major generalization of the main result (Theorem 1) of van der Laan et al. [58], which shows a similar bound for measuring the calibration error of estimates of conditional average treatment effects when calibration is performed via isotonic regression. Our result, which more explicitly leverages the concept of Neyman orthogonality, can be used to recover that of van der Laan et al. [58] as a special case, including the error rate in nuisance estimation.
Proposition 3.4.
Suppose the loss $\ell$ satisfies the score condition outlined in Equation (5) and suppose $\nu \mapsto \chi(Z; \nu)$ is linear. Then, we have
$$\mathrm{Err}(\nu, \nu^*) \lesssim \Big(\mathbb{E}\Big[\mathbb{E}\big[\chi(Z; \nu) - \chi(Z; \nu^*) \mid X\big]^2\Big]\Big)^{1/2},$$
where $\nu^*$ represents the true, unknown nuisance parameters and $\nu$ represents arbitrary, fixed nuisance estimates. If instead one has $\mathbb{E}\big[D^2_\nu\,\partial\ell(\theta(X), \bar\nu; Z)[\nu - \nu^*, \nu - \nu^*] \mid X\big] \lesssim \|\nu - \nu^*\|^2$ for any $\theta$ and $\bar\nu \in [\nu, \nu^*]$, then one has
$$\mathrm{Err}(\nu, \nu^*) \lesssim \|\nu - \nu^*\|^2.$$
3.1 Sample Splitting Algorithm
Throughout the remainder of this section, we make the following assumption on the loss $\ell$. Any loss of the form presented in Equation (5) naturally satisfies the following assumption, and thus our analysis is applicable to the losses $\ell_{\mathrm{DR}}$, $\ell_{\mathrm{CACD}}$, and $\ell_{\mathrm{CLATE}}$ described earlier.
Assumption 1 (Linear Score).
We assume there is some function $\chi : \mathcal{Z} \times \mathcal{N} \to \mathbb{R}$ such that the loss $\ell$ satisfies
$$\partial\ell(z, \nu; Z) = 2\big(z - \chi(Z; \nu)\big)$$
for any $z \in \mathbb{R}$ and $\nu \in \mathcal{N}$.
The largely theoretical bound presented in Theorem 3.3 suggests a natural algorithm for performing causal calibration. First, a learner should use some fraction (say half) of the data to produce a nuisance estimate $\widehat\nu$ using any black-box algorithm. Then, the learner should transform the second half of the data into generalized pseudo-outcomes $\chi(Z_i; \widehat\nu)$ using the learned nuisances. Finally, they should apply some off-the-shelf calibration algorithm to the transformed data points. We formalize this in Algorithm 1.
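The following is a minimal end-to-end sketch of this procedure for the CATE setting of Example 2.3, reusing the illustrative `dr_pseudo_outcomes` helper from the introduction and using scikit-learn's gradient boosting and isotonic regression as stand-ins for the black-box nuisance estimation and calibration subroutines; none of these specific choices are prescribed by the algorithm.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.isotonic import IsotonicRegression

def calibrate_cate(theta, X, A, Y, seed=0):
    """Sketch of Algorithm 1: split the data, estimate nuisances on fold 1,
    form pseudo-outcomes on fold 2, and calibrate theta against them."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    fold1, fold2 = idx[: len(Y) // 2], idx[len(Y) // 2:]

    # Step 1: black-box nuisance estimation on the first fold.
    pi_model = GradientBoostingClassifier().fit(X[fold1], A[fold1])
    mu_model = GradientBoostingRegressor().fit(
        np.column_stack([X[fold1], A[fold1]]), Y[fold1])

    # Step 2: generalized pseudo-outcomes on the second fold.
    X2, A2, Y2 = X[fold2], A[fold2], Y[fold2]
    pi_hat = pi_model.predict_proba(X2)[:, 1]
    mu1_hat = mu_model.predict(np.column_stack([X2, np.ones(len(A2))]))
    mu0_hat = mu_model.predict(np.column_stack([X2, np.zeros(len(A2))]))
    chi = dr_pseudo_outcomes(A2, Y2, pi_hat, mu0_hat, mu1_hat)

    # Step 3: any off-the-shelf calibrator, treating chi as the label.
    iso = IsotonicRegression(out_of_bounds="clip").fit(theta(X2), chi)
    return lambda x: iso.predict(theta(x))
```

Swapping the final step for histogram binning, Platt scaling, or linear regression requires no change to the rest of the pipeline, which is precisely the point of the reduction.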
Algorithm 1 generalizes the main algorithm in van der Laan et al. [58] (Algorithm 1) to allow for the calibration of an estimate of any heterogeneous causal effect using any off-the-shelf nuisance estimation and calibration algorithms. To make efficient use of the data, we recommend instead running the cross calibration procedure outlined in Algorithm 2. We do not provide a convergence analysis for this algorithm.
We now prove convergence guarantees for Algorithm 1. We start by enumerating several assumptions needed to guarantee convergence.
Assumption 2.
Let $\mathcal{A}_{\mathrm{nuis}}$ be a nuisance estimation algorithm taking in an arbitrary number of points, and let $\mathcal{A}_{\mathrm{cal}}$ be a calibration algorithm taking some initial estimator and an arbitrary number of covariate/label pairs. We assume

1. For any distribution $\mathbb{P}$ on $\mathcal{Z}$, i.i.d. samples $Z_1, \dots, Z_n \sim \mathbb{P}$, and failure probability $\delta \in (0, 1)$, we have
$$\|\widehat\nu - \nu^*\| \le g_1(n, \delta) \quad \text{with probability at least } 1 - \delta,$$
where $\widehat\nu := \mathcal{A}_{\mathrm{nuis}}(Z_1, \dots, Z_n)$ and $g_1$ is some rate function.

2. For any distribution $\mathbb{Q}$ on $\mathcal{X} \times \mathbb{R}$, i.i.d. samples $(X_1, Y_1), \dots, (X_n, Y_n) \sim \mathbb{Q}$, initial estimator $\theta$, and failure probability $\delta \in (0, 1)$, we have
$$\mathrm{Cal}(\widehat\theta; \mathbb{Q}) \le g_2(n, \delta) \quad \text{with probability at least } 1 - \delta,$$
where $\widehat\theta := \mathcal{A}_{\mathrm{cal}}\big(\theta, (X_1, Y_1), \dots, (X_n, Y_n)\big)$ and $g_2$ is some rate function.
We briefly parse the above assumptions. For the first assumption, when the nuisances are conditional means, densities, or probabilities, one can directly apply ML, non-parametric, or semi-parametric methods to estimate the unknown nuisances. For instance, if the nuisances are assumed to satisfy Hölder continuity assumptions or are assumed to belong to a ball in a reproducing kernel Hilbert space, one can apply classical kernel smoothing methods or kernel ridge regression respectively to estimate the unknown parameters at optimal rates [58, 57, 60]. Further, many well-known calibration algorithms satisfy the second assumption, often in a manner that doesn't depend on the underlying distribution $\mathbb{Q}$. For instance, results in Gupta and Ramdas [25] on calibration error bounds directly imply that if $\mathcal{A}_{\mathrm{cal}}$ is taken to be uniform mass/histogram binning, then the rate function can be taken as $g_2(n, \delta) = C\|Y\|_\infty\sqrt{B\log(B/\delta)/n}$, where $B$ denotes the number of bins/buckets, $\|Y\|_\infty$ denotes the essential supremum of the labels (which will be finite so long as nuisances and observations are bounded), and the unknown constant $C$ is independent of $n$, $B$, and $\delta$. Likewise, the convergence in probability results for isotonic calibration proven by van der Laan et al. [58] can naturally be extended to high-probability guarantees using standard concentration of measure results.
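To make the second assumption concrete, here is a minimal sketch of a uniform-mass (histogram) binning calibrator of the kind analyzed by Gupta and Ramdas [25]; the interface and default bin count are our own illustrative choices.

```python
import numpy as np

def uniform_mass_binning(preds, labels, n_bins=10):
    """Fit histogram binning: bin edges are empirical quantiles of the
    predictions, and each bin predicts the mean label falling inside it."""
    edges = np.quantile(preds, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # cover the whole real line
    which = np.searchsorted(edges, preds, side="right") - 1
    # assumes predictions are continuous enough that no bin is empty
    bin_means = np.array([labels[which == b].mean() for b in range(n_bins)])

    def calibrated(p):
        return bin_means[np.searchsorted(edges, p, side="right") - 1]
    return calibrated
```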
Theorem 3.5.
Suppose Assumptions 1 and 2 hold, and let $\widehat\theta$ denote the output of Algorithm 1 run with $n$ samples for nuisance estimation and $n$ samples for calibration. Then, with probability at least $1 - 2\delta$,
$$\mathrm{Cal}(\widehat\theta; \ell, \nu^*) \lesssim g_2(n, \delta) + \mathrm{Err}(\widehat\nu, \nu^*),$$
where, under the conditions of Proposition 3.4, $\mathrm{Err}(\widehat\nu, \nu^*) \lesssim g_1(n, \delta)^2$.
We prove the above theorem in Appendix B. The above result can be thought of as an analogue of Theorem 1 of Foster and Syrgkanis [18], which shows a similar bound on excess parameter risk, and also a generalization of Theorem 1 of van der Laan et al. [58], which shows a similar bound when isotonic regression is used to calibrate CATE estimates.
We note that while we state our bounds and assumptions in terms of high-probability guarantees, we could have equivalently assumed convergence in probability of the nuisance estimation and calibration algorithms at appropriately chosen rate functions (where the calibration rate function would have to hold independently of the distribution of pseudo-outcomes) to obtain convergence in probability guarantees. Such rate functions exist for isotonic calibration and histogram binning. This, for instance, would be useful if one wanted to apply the results on the convergence of isotonic regression due to van der Laan et al. [58].
Lastly, we note that it is desirable for calibration algorithms to possess a “do no harm” guarantee, which ensures that the risk of the calibrated parameter is not much larger than the risk of the original parameter. We present such a guarantee in Theorem C.1 in Appendix C, which follows using standard risk bounding techniques due to Foster and Syrgkanis [18].
4 Calibration for Conditionally Orthogonal Losses
We now consider the more challenging problem where the causal effect is not the minimizer of a universally orthogonal loss. To aid in our exposition, we introduce calibration functions. In short, the calibration function gives a canonical choice of a post-processing $\kappa_\theta$ such that $\kappa_\theta \circ \theta$ is perfectly calibrated. While computing the calibration function exactly requires knowledge of the data-generating distribution, it can be approximated in finitely-many samples.
Definition 4.1 (Calibration Function).
Given any estimator $\theta$ and nuisance $\nu \in \mathcal{N}$, we define the calibration function for $\theta$ at $\nu$ as the mapping $\kappa_{\theta, \nu}$ given by
$$\kappa_{\theta, \nu}(t) := \arg\min_{z \in \mathbb{R}} \mathbb{E}\big[\ell(z, \nu; Z) \mid \theta(X) = t\big].$$
In particular, when $\nu = \nu^*$, we call $\kappa_\theta := \kappa_{\theta, \nu^*}$ the true calibration function.
As hinted, first-order optimality conditions alongside the tower rule for conditional expectations imply that $\mathbb{E}\big[\partial\ell(\kappa_\theta(\theta(X)), \nu^*; Z) \mid \theta(X)\big] = 0$ for any $\theta$. This, in particular, implies that $\kappa_\theta \circ \theta$ is perfectly calibrated. As an example, when a loss satisfies Assumption 1, $\kappa_{\theta, \nu}(t) = \mathbb{E}\big[\chi(Z; \nu) \mid \theta(X) = t\big]$.
We now ask under what general assumptions on $\ell$ we can achieve calibration. In analogy with Foster and Syrgkanis [18], who consider the general task of empirical risk minimization in the presence of nuisance, the hope would be to weaken the assumption of universal orthogonality to that of Neyman orthogonality. Two commonly-used definitions for Neyman orthogonality (a marginal version and a version conditional on covariates) are provided below.
Definition 4.2 (Neyman Orthogonality).
We say $\ell$ is Neyman orthogonal conditional on covariates (or marginally) if, for all $\nu \in \mathcal{N}$, we have
$$\mathbb{E}\Big[D_\nu\,\partial\ell(\theta^*(X), \nu^*; Z)[\nu - \nu^*] \;\Big|\; X\Big] = 0 \qquad \Big(\text{resp. } \mathbb{E}\Big[D_\nu\,\partial\ell(\theta^*(X), \nu^*; Z)[\nu - \nu^*]\Big] = 0\Big),$$
where $\theta^*$ denotes the conditional loss minimizer and $\nu^*$ denotes the true nuisances (respectively the marginal minimizer and marginal nuisances for the latter).
Neyman orthogonality is useful for a task such as risk minimization because it allows the statistician to relate the risk under the true nuisances to the risk under the computed nuisances up to second order errors. Why do we need two separate conditions on the loss? In general, the conditions in Definition 4.2 are not equivalent. To illustrate this, we can look at the example of the conditional/marginal quantile under treatment. Recalling that we defined the pinball loss $\ell_\gamma$, it is not hard to see that both quantiles minimize (conditionally and marginally, respectively) the weighted loss
$$\ell_\gamma^{\mathrm{IPW}}(z, g; Z) := g(X)\,A\,\ell_\gamma(z; Y),$$
where $g^*(x) = 1/\pi^*(x)$ denotes the inverse propensity. Straightforward computation yields that $\ell_\gamma^{\mathrm{IPW}}$ is Neyman orthogonal conditional on covariates, but not marginally orthogonal. However, as noted in Kallus et al. [33], one can define a more complicated loss $\ell_\gamma^{\mathrm{or}}$ by performing a first order correction, specified via its first-argument derivative:
$$\partial\ell_\gamma^{\mathrm{or}}\big(z, (g, q); Z\big) := g(X)\,A\,\big(\mathbf{1}\{Y \le z\} - \gamma\big) + \big(1 - g(X)\,A\big)\big(q(X, z) - \gamma\big), \tag{6}$$
where $q$ is an additional, CDF-like nuisance that must be estimated from the data. One can check that $\ell_\gamma^{\mathrm{or}}$ satisfies the second condition of Definition 4.2, i.e. marginal Neyman orthogonality.
In calibration, we care about the quality of a model conditional on its own predictions. More specifically, given any initial model $\theta$, the goal of any calibration algorithm (e.g. histogram binning, isotonic regression) is to compute the calibration function (Definition 4.1); we remind the reader that $\kappa_\theta \circ \theta$ is perfectly calibrated. If $\theta$ were a constant function, then $\kappa_\theta(\theta(X))$ would be the marginal loss minimizer, and thus we would want to leverage a loss satisfying marginal orthogonality. Likewise, if $\theta$ were roughly of the same complexity as $\theta^*$, we may want to leverage a loss satisfying the form of Neyman orthogonality conditional on covariates. In general, the complexity of the initial estimate will interpolate between these two extremes.
For a variant of Neyman orthogonality to thus be useful, we would need the cross-derivative to vanish (a) when evaluated at the calibration function instead of the conditional or marginal minimizer and (b) when the expectation is taken conditionally on the prediction $\theta(X)$ instead of either conditionally on $X$ or marginally. The extra structure provided by universal orthogonality allowed us to side-step this issue, as the cross-derivative of the loss vanished when evaluated at any estimate of $\theta^*$ so long as nuisances were estimated correctly. The following, quite technical condition will give us the structure we need to perform calibration of estimates of more general heterogeneous causal effects.
Definition 4.3 (Conditional Orthogonality).
Suppose $\ell$ is some initial loss with true nuisance $\nu^*$. Define the “corrected” loss $\widetilde\ell$ by
$$\widetilde\ell(z, \nu; Z) := \ell(z, \nu; Z) + \alpha(z, \nu; Z),$$
where $\alpha$ is any correction term satisfying $\mathbb{E}[\partial\alpha(z, \nu^*; Z) \mid X] = 0$. Then, we say $\widetilde\ell$ is conditionally orthogonal if, for any estimator $\theta$, there exists an additional nuisance $q_\theta$ such that
$$\mathbb{E}\Big[D_\nu\,\partial\widetilde\ell\big(\kappa_\theta(\theta(X)), (\nu^*, q_\theta); Z\big)[\nu - \nu^*] \;\Big|\; \theta(X)\Big] = 0$$
for all $\nu \in \mathcal{N}$, where $\kappa_\theta$ denotes the true calibration function for $\theta$.
Definition 4.3 may be difficult to parse, but we can work through several examples to gain some intuition. Returning to the example of the conditional quantile under treatment and the (corrected) pinball loss defined above, we simply took the correction term (at the level of scores) to be $(1 - g(X)A)(q(X, z) - \gamma)$. From a straightforward calculation, one can check $\ell_\gamma^{\mathrm{or}}$ satisfies Definition 4.3 with additional nuisance given by the conditional CDF $q^*(x, t) := \mathbb{P}(Y(1) \le t \mid X = x)$.
More broadly, given some initial loss $\ell$, one can use the Riesz representation theorem to obtain a representer $a_0$ satisfying
$$\mathbb{E}\big[\partial\ell(z, \nu^*; Z) \mid X\big] = a_0(X, z)$$
almost surely. Then, if we can find some observable, mean-one re-weighting variable $W$ (i.e. $\mathbb{E}[W \mid X] = 1$) under which the score is fully observed, we can simply take the score-level correction $(1 - W)\,a(X, z)$ for an estimate $a$ of $a_0$, which gives us a “corrected” loss whose score is $\partial\ell(z, \nu; Z) + (1 - W)\,a(X, z)$ — exactly the construction in the pinball example above, with $W = g(X)A$ and $a_0(X, z) = q^*(X, z) - \gamma$.
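As a concrete instance, the following sketch evaluates the corrected pinball score of Equation (6) given plug-in estimates of the inverse propensity $g$ and the CDF-like nuisance $q$; the function names and array interface are illustrative.

```python
import numpy as np

def corrected_pinball_score(theta_x, A, Y, g_hat, q_hat, gamma):
    """Score (first-argument derivative) of the corrected pinball loss.

    theta_x: current quantile predictions theta(X), shape (n,)
    g_hat:   estimated inverse propensities 1 / P(A=1 | X), shape (n,)
    q_hat:   estimated CDF values P(Y(1) <= theta(X) | X), shape (n,)
    """
    # IPW part: g(X) * A * (1{Y <= theta(X)} - gamma)
    ipw = g_hat * A * ((Y <= theta_x).astype(float) - gamma)
    # First-order correction: (1 - g(X) * A) * (q(X, theta(X)) - gamma),
    # which has conditional mean zero at the true nuisances
    correction = (1.0 - g_hat * A) * (q_hat - gamma)
    return ipw + correction
```

An estimator $\theta$ is then (approximately) calibrated with respect to $\ell_\gamma^{\mathrm{or}}$ when the average of this score is near zero on every level set of $\theta$.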
We conclude by pointing out the following observations about calibration with respect to such losses $\widetilde\ell$, which follow immediately from Definition 4.3.
Corollary 4.4.
Suppose a loss $\widetilde\ell$ satisfies Definition 4.3. Then, the following hold:

1. The calibration function associated with $\widetilde\ell$ does not depend on the nuisance $\nu$ when the additional nuisance is set to $q_\theta$. Thus, we can write $\kappa_\theta$ in place of $\kappa_{\theta, (\nu, q_\theta)}$ for any $\nu \in \mathcal{N}$.

2. The calibration error of any estimate $\theta$, given by $\mathrm{Cal}\big(\theta; \widetilde\ell, (\nu, q_\theta)\big)$, also does not depend on $\nu$ when the additional nuisance is set to $q_\theta$. Thus, we can write $\mathrm{Cal}(\theta; \widetilde\ell, q_\theta)$ in place of $\mathrm{Cal}\big(\theta; \widetilde\ell, (\nu, q_\theta)\big)$ for any $\nu \in \mathcal{N}$.

3. Lastly, not only do $\ell$ and $\widetilde\ell$ possess the same conditional minimizer when evaluated at $\nu^*$ (regardless of the choice of correction term), but we also have
$$\mathrm{Cal}(\theta; \widetilde\ell, q_\theta) = \mathrm{Cal}(\theta; \ell, \nu^*),$$
where $\theta$ is arbitrary and $\mathrm{Cal}(\theta; \ell, \nu^*)$ denotes the calibration error under the original loss $\ell$.
4.1 A General Bound on Calibration Error
We now prove a decoupled bound on the calibration error under the assumption that the loss is conditionally orthogonal. This bound serves as a direct analogue of the one presented in Theorem 3.3, just in a more general setting. As we will see, bounding the calibration error of losses that are not universally orthogonalizable is a much more delicate task.

To prove our result, we will need to place convexity assumptions on the underlying loss. We note that these convexity assumptions are akin to those made in existing works, namely in the work of Foster and Syrgkanis [18].
Assumption 3.
We assume that the loss function conditioned on covariates is $\beta$-strongly convex, i.e. for any $\nu \in \mathcal{N}$ and any $z, z' \in \mathbb{R}$, we have
$$\mathbb{E}\big[\ell(z', \nu; Z) \mid X\big] \ge \mathbb{E}\big[\ell(z, \nu; Z) \mid X\big] + \mathbb{E}\big[\partial\ell(z, \nu; Z) \mid X\big](z' - z) + \frac{\beta}{2}(z' - z)^2.$$
Assumption 4.
We assume that the loss function conditioned on covariates is $M$-smooth, i.e. for any $\nu \in \mathcal{N}$ and any $z, z' \in \mathbb{R}$, we have
$$\mathbb{E}\big[\ell(z', \nu; Z) \mid X\big] \le \mathbb{E}\big[\ell(z, \nu; Z) \mid X\big] + \mathbb{E}\big[\partial\ell(z, \nu; Z) \mid X\big](z' - z) + \frac{M}{2}(z' - z)^2.$$
We now state the main theorem of this section. The bound below appears largely identical to the one presented in Theorem 3.3 modulo two minor differences. First, we pay a multiplicative factor of $M/\beta$ in both of the decoupled terms, which ultimately is just the condition number of the loss. Second, the error term is evaluated at the calibration function $\kappa_\theta$ instead of the parameter estimate $\theta$. This difference is due to the fact that, in the proof of Theorem 4.5, we must perform a functional Taylor expansion around $\kappa_\theta \circ \theta$ in order to invoke the orthogonality condition. This subtlety was absent in the case of universal orthogonality, as the cross-derivative was insensitive to nuisance misestimation for any parameter estimate. We ultimately view this difference as minor, as for many examples (e.g. the corrected pinball loss $\ell_\gamma^{\mathrm{or}}$) the dependence on $\kappa_\theta$ vanishes. We prove Theorem 4.5 in Appendix A.2.
Theorem 4.5.
Let $\widetilde\ell$ be a conditionally orthogonal loss (Definition 4.3) that is $\beta$-strongly convex (Assumption 3) and $M$-smooth (Assumption 4). Suppose the relevant second Gateaux derivatives exist. Then, for any estimate $\theta$ and nuisance parameter $\nu \in \mathcal{N}$, we have
$$\mathrm{Cal}(\theta; \widetilde\ell, q_\theta) \lesssim \frac{M}{\beta}\Big(\mathrm{Cal}\big(\theta; \widetilde\ell, (\nu, q_\theta)\big) + \mathrm{Err}_{\kappa_\theta}(\nu, \nu^*)\Big),$$
where $\nu^*$ denotes the true, unknown nuisance functions, $\kappa_\theta$ is the calibration function associated with $\theta$, and $\mathrm{Err}_{\kappa_\theta}$ is as defined in Theorem 3.3, but with the second Gateaux derivative evaluated at $\kappa_\theta(\theta(X))$ in place of $\theta(X)$.
Although the bound in Theorem 4.5 looks similar in spirit to the one presented in Theorem 3.3, there still remain questions to answer. For instance, what does the condition number $M/\beta$ look like for practically-relevant losses? Likewise, will the error term simplify into a cross-error term as in the case of universally orthogonalizable losses? We interpret Theorem 4.5 by spending some time looking at the example of the corrected pinball loss $\ell_\gamma^{\mathrm{or}}$.
Example 4.6.
First, for any fixed quantile $\gamma$, we assess the strong convexity/smoothness properties of $\ell_\gamma^{\mathrm{or}}$. Let $g$ represent any inverse-propensity estimate, and let $\pi^*$ represent the true propensity score. Assume $Y(1)$ admits a conditional density $p_{Y(1) \mid X}$ with respect to the Lebesgue measure on $\mathbb{R}$. Straightforward calculation yields
$$\frac{\partial^2}{\partial z^2}\,\mathbb{E}\big[\ell_\gamma^{\mathrm{or}}(z, (g, q^*); Z) \mid X\big] = g(X)\pi^*(X)\,p_{Y(1) \mid X}(z) + \big(1 - g(X)\pi^*(X)\big)\,p_{Y(1) \mid X}(z) = p_{Y(1) \mid X}(z).$$
Thus, if $p_{Y(1) \mid X}(y) \in [p_{\min}, p_{\max}]$ for all $y$ and $X$ for some $0 < p_{\min} \le p_{\max} < \infty$, then we have that $\ell_\gamma^{\mathrm{or}}$ satisfies Assumption 4 with $M = p_{\max}$ and Assumption 3 with $\beta = p_{\min}$.
We can further interpret the error term in the case of the corrected pinball loss, where again $q^*(x, t) = \mathbb{P}(Y(1) \le t \mid X = x)$. In particular, the score in Equation (6) decomposes as
$$\partial\ell_\gamma^{\mathrm{or}}\big(z, (g, q); Z\big) = \underbrace{g(X)A\big(\mathbf{1}\{Y \le z\} - \gamma\big)}_{\text{linear in } g} + \underbrace{\big(1 - g(X)A\big)\big(q(X, z) - \gamma\big)}_{\text{bilinear in } (g, q)}.$$
As the first term is linear in the nuisance estimate $g$, its second Gateaux derivative (with respect to the nuisances) is identically zero. Thus, the error term does not depend on $\kappa_\theta$, and we can write $\mathrm{Err}(\nu, \nu^*)$ instead. Further, using the same analysis as in Proposition 3.4 and writing $g^* = 1/\pi^*$, we have that
$$\mathrm{Err}(\nu, \nu^*) \lesssim \Big(\mathbb{E}\Big[\big(g(X) - g^*(X)\big)^2\big(q(X, \theta(X)) - q^*(X, \theta(X))\big)^2\Big]\Big)^{1/2},$$
where we recall that $\pi^*(x) = \mathbb{P}(A = 1 \mid X = x)$. Thus, even in the general case of conditional orthogonality, we can often obtain simple looking bounds on the error in nuisance estimation.
4.2 A Sample Splitting Algorithm
We conclude the section by presenting two algorithms for performing causal calibration with respect to conditionally orthogonal losses. As in Section 3, we first present a sample splitting algorithm (Algorithm 3) that enjoys finite sample convergence guarantees. We then present a corresponding cross-calibration algorithm that is likely more useful in practice.
Algorithm 3 is essentially a generalization of Algorithm 1 to general losses. The key difference is that we can no longer compute pseudo-outcomes for general losses. Instead, we assume that the calibration algorithm passed to Algorithm 3 can calibrate “with respect to general losses”. What does this mean? Many calibration algorithms, such as linear calibration, Platt scaling, and isotonic calibration, compute a mapping $\widehat\kappa$ satisfying
$$\widehat\kappa \in \arg\min_{\kappa \in \mathcal{K}} \frac{1}{n}\sum_{i=1}^n \ell_0\big(\kappa(\theta(X_i)); Y_i\big), \tag{7}$$
where $(X_1, Y_1), \dots, (X_n, Y_n)$ denotes a calibration dataset, $\ell_0$ is some appropriately-defined loss, and $\mathcal{K}$ is an appropriately chosen class of functions. Table 2 below outlines the choices of $\ell_0$ and $\mathcal{K}$ for common calibration algorithms.
Algorithm | Loss $\ell_0$ | Function class $\mathcal{K}$
---|---|---
Isotonic Calibration | squared loss | monotone non-decreasing functions
Linear Calibration | squared loss | affine maps $\kappa(t) = at + b$
Histogram binning | squared loss | piecewise-constant functions on a fixed binning
Platt Scaling | log loss | sigmoid maps $\kappa(t) = \sigma(at + b)$
Thus, for general losses involving nuisance, it makes sense that the calibration algorithm should compute the minimizer
$$\widehat\kappa \in \arg\min_{\kappa \in \mathcal{K}} \frac{1}{n}\sum_{i=1}^n \widetilde\ell\big(\kappa(\theta(X_i)), \widehat\nu; Z_i\big).$$
We outline a general template for such an algorithm in Algorithm 5 in Appendix D. In the above, $Z_1, \dots, Z_n$ is now the calibration sample and $\widehat\nu$ denotes a nuisance estimate which generically may depend on the current sample. Also in Appendix D, we prove the convergence of a simple, three-way sample splitting algorithm based on uniform mass binning, and in fact prove a finite-sample calibration error convergence guarantee for uniform mass binning as well. We do not include this algorithm in the main paper, as the focus of the work is on presenting a general framework for performing causal calibration.
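For intuition, here is a minimal sketch of such a generalized calibration step for linear calibration under the corrected pinball loss, reusing the illustrative `corrected_pinball_score` setup from above; we treat the estimated CDF nuisance as locally constant in its second argument so that the correction term integrates to a linear function — an assumption of this sketch, not of the general framework.

```python
import numpy as np
from scipy.optimize import minimize

def pinball(z, y, gamma):
    return np.maximum(gamma * (y - z), (gamma - 1.0) * (y - z))

def linear_calibrate_quantile(theta_x, A, Y, g_hat, q_hat, gamma):
    """Linear calibration for the corrected pinball loss: search over
    affine post-processings kappa(t) = a*t + b minimizing the empirical
    corrected loss, treating q_hat as locally constant in z."""
    def empirical_loss(ab):
        a, b = ab
        z = a * theta_x + b
        ipw_term = g_hat * A * pinball(z, Y, gamma)
        corr_term = (1.0 - g_hat * A) * (q_hat - gamma) * z  # integrated correction
        return np.mean(ipw_term + corr_term)

    # Nelder-Mead handles the non-smooth pinball loss without gradients
    res = minimize(empirical_loss, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
    a, b = res.x
    return lambda t: a * t + b
```

Note that injectivity of the learned affine map (when $a \ne 0$) is exactly the level-set-preserving property required by Assumption 5 below.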
We can similarly define a version of cross calibration for conditionally orthogonal losses, which we present in Algorithm 4.
We now focus our efforts on proving a convergence guarantee for Algorithm 3. We present a set of assumptions we need to prove our convergence guarantee.
Assumption 5.
Let $\mathcal{A}_{\mathrm{nuis}}$ be a nuisance estimation algorithm taking in an arbitrary number of points, and let $\mathcal{A}_{\mathrm{cal}}$ be a general loss calibration algorithm taking some initial estimator and a sequence of partially-evaluated loss functions $\widetilde\ell(\cdot, \widehat\nu; Z_i)$.

1. For any distribution $\mathbb{P}$ on $\mathcal{Z}$, i.i.d. samples $Z_1, \dots, Z_n \sim \mathbb{P}$, and failure probability $\delta \in (0, 1)$, we have
$$\|\widehat\nu - \nu^*\| \le g_1(n, \delta) \quad \text{with probability at least } 1 - \delta,$$
where $\widehat\nu := \mathcal{A}_{\mathrm{nuis}}(Z_1, \dots, Z_n)$ and $g_1$ is some rate function.

2. For any distribution $\mathbb{P}$ on $\mathcal{Z}$, i.i.d. samples $Z_1, \dots, Z_n \sim \mathbb{P}$, fixed nuisance $\nu$, initial estimator $\theta$, and failure probability $\delta \in (0, 1)$, we have
$$\mathrm{Cal}\big(\widehat\kappa \circ \theta; \widetilde\ell, \nu\big) \le g_2(n, \delta) \quad \text{with probability at least } 1 - \delta,$$
where $\widehat\kappa := \mathcal{A}_{\mathrm{cal}}\big(\theta, \widetilde\ell(\cdot, \nu; Z_1), \dots, \widetilde\ell(\cdot, \nu; Z_n)\big)$ and $g_2$ is some rate function.

3. With probability one, the output of $\mathcal{A}_{\mathrm{cal}}$ is of the form $\widehat\kappa = \iota$ for some injective mapping $\iota$.
The first two assumptions are direct analogues of those made in Section 3, giving the learner control over nuisance estimation and calibration rates. In general, $q_{\kappa \circ \theta} \ne q_\theta$ for arbitrary univariate mappings $\kappa$. This is a problem, as the learner will estimate the additional nuisance associated with the initial parameter, $q_\theta$, but Theorem 4.5 will be instantiated with respect to the nuisance $q_{\widehat\kappa \circ \theta}$. The following lemma shows that injectivity of $\widehat\kappa$ ensures $q_{\widehat\kappa \circ \theta} = q_\theta$.
Lemma 4.7.
Suppose $\widetilde\ell$ is conditionally orthogonal, and suppose $\theta$ and $\theta'$ have the same level sets, i.e. they satisfy $\theta^{-1}(\{\theta(x)\}) = (\theta')^{-1}(\{\theta'(x)\})$ for all $x$ (for a function $f$ that is not necessarily injective, we let $f^{-1}(\{y\}) := \{x : f(x) = y\}$). Then, the calibration functions satisfy $\kappa_\theta \circ \theta = \kappa_{\theta'} \circ \theta'$. Additionally, without loss of generality, we can assume $q_\theta = q_{\theta'}$.
Calibration algorithms that learn an injective post-processing mapping, such as Platt scaling and linear calibration, will preserve level sets. For an algorithm like isotonic calibration, one can either (a) estimate $q_\theta$ and hope the learned map is injective, or (b) if the map $\widehat\kappa$ learned via isotonic regression is not strictly increasing, release $\iota \circ \widehat\kappa$, where $\iota$ is any strictly increasing map. For algorithms such as histogram binning and uniform mass binning, one can first learn the level sets of $\widehat\kappa \circ \theta$, which are just based on the quantiles of the predictions of the initial estimator $\theta$. Then, these level sets entirely specify the target nuisance to estimate. We provide a version of Algorithm 3 in Appendix D that does this. We now provide a convergence guarantee for Algorithm 3: under Assumption 5, with probability at least $1 - 2\delta$, the output $\widehat\kappa \circ \theta$ of Algorithm 3 satisfies
$$\mathrm{Cal}\big(\widehat\kappa \circ \theta; \widetilde\ell, q_\theta\big) \lesssim \frac{M}{\beta}\big(g_2(n, \delta) + \mathrm{Err}(\widehat\nu, \nu^*)\big).$$
5 Experiments
We now evaluate the performance of our cross-calibration algorithms, namely Algorithms 2 and 4. We consider two settings. First, we examine the viability of our calibration algorithm for universally orthogonal losses using observational data. In particular, we show how Algorithm 2 can be used to calibrate estimates of the CATE of 401(k) eligibility and the conditional LATE of 401(k) participation on an individual's net financial assets. Second, we measure the ability of Algorithm 4 to calibrate estimates of conditional quantiles under treatment on synthetic data. We examine the performance of our algorithm for several quantiles, both in terms of calibration error and average loss.
5.1 Effects of 401(k) Participation/Eligibility on Financial Assets
First, we consider the task of constructing and calibrating estimates of the heterogeneous effect of 401(k) eligibility/participation on an individual's net financial assets. To do this, we use the 401(k) dataset leveraged in Chernozhukov and Hansen [5], Kallus et al. [33], and Chernozhukov et al. [10]. Since Chernozhukov and Hansen [5] argue that eligibility for 401(k) satisfies conditional ignorability given a sufficiently rich set of features (namely ‘age’, ‘inc’, ‘fsize’, ‘educ’, ‘db’, ‘marr’, ‘male’, ‘twoearn’, ‘pira’, ‘nohs’, ‘hs’, ‘smcol’, ‘col’, and ‘hown’), we aim to measure the CATE of eligibility on net financial assets. This ignorability is not known to be satisfied for 401(k) participation, and thus we instead aim to measure the conditional LATE of 401(k) participation on net financial assets with eligibility serving as an instrument. For each parameter (either CATE or conditional LATE), we randomly split the dataset into three folds of uneven size: we use 60% of the data to construct the initial parameter estimate, 25% of the data to perform calibration, and reserve 15% of the data as a test set.
Model Training and Calibration:
To fit the initial CATE/conditional LATE model, we split the training data randomly into evenly-sized folds. We use cross-fitting to construct appropriate pseudo-outcomes, i.e. for each fold $k$ we use data in all but the $k$th fold to estimate nuisances and then use these estimates to transform observations in the $k$th fold. In the case of the CATE, we produce estimates $\widehat\pi$, $\widehat\mu$ of the propensity score and expected outcome mapping using gradient-boosted decision and regression trees, respectively. We then produce pseudo-outcomes on the $k$th fold, per Equation (2), and use gradient-boosted regression trees to regress these pseudo-outcomes onto covariates.
In the case of the conditional LATE, as the instrument policy is not assumed to be known, instead of using the universally orthogonal loss discussed in Equation (4), we instead use a doubly-robust squared loss detailed in Syrgkanis et al. [55] and Lan and Syrgkanis [40], whose pseudo-outcomes are built from residualized outcomes, treatments, and instruments. We estimate all nuisances and construct pseudo-outcomes as in the case of the CATE, once again using either gradient-boosted regression or decision trees based on appropriateness. Again, we regress pseudo-outcomes onto covariates using gradient-boosted regression trees to obtain initial parameter estimates.
After initial models are trained, we run Algorithm 2 (cross-calibration) on the 25% of the data reserved for calibration, once again using cross-fitting folds. We estimate all nuisances again using either gradient-boosted decision or regression trees. We perform calibration using three different algorithms: isotonic calibration, histogram binning with a fixed number of buckets, and linear calibration, which performs simple linear regression (with intercept) of the constructed pseudo-outcomes onto the initial model predictions.
Comparing Calibration Error in Quartiles:

We assess calibration of both the pre- and post-calibration models by approximating the actual target treatment effect with the quartiles of the models' predictions. We let $\theta$ denote either the pre- or post-calibration model (either CATE or conditional LATE). Re-using the calibration dataset (25% of the data), we compute the order statistics of the model predictions and the corresponding empirical quartiles $\widehat q_{0.25} \le \widehat q_{0.5} \le \widehat q_{0.75}$. We then define four buckets based on these quartiles, i.e.
$$B_1 := (-\infty, \widehat q_{0.25}), \quad B_2 := [\widehat q_{0.25}, \widehat q_{0.5}), \quad B_3 := [\widehat q_{0.5}, \widehat q_{0.75}), \quad B_4 := [\widehat q_{0.75}, \infty).$$
Next, we use cross-fitting to transform the 15% of the data reserved for testing into pseudo-outcomes $\chi_i$ (in the same manner as discussed in the previous subsection). We assign each transformed sample in the test set to an appropriate bin based on the predicted value $\theta(X_i)$, and average the pseudo-outcomes falling into each bin. We then approximate the calibration error, which is computed as
$$\widehat{\mathrm{Cal}}(\theta) := \Big(\tfrac{1}{4}\sum_{b=1}^{4}\big(\bar\theta_b - \bar\chi_b\big)^2\Big)^{1/2},$$
where $\bar\theta_b$ denotes the average of the predictions $\theta(X_i)$ falling into bin $B_b$ and $\bar\chi_b$ denotes the average pseudo-outcome in bin $B_b$.
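A minimal sketch of this evaluation metric is below; the bucket construction mirrors the description above, and the equal weighting of buckets reflects the (equal-mass) quartile binning.

```python
import numpy as np

def quartile_calibration_error(theta_test, chi_test, theta_calib):
    """Quartile-binned calibration error.

    theta_test:  model predictions on the test set
    chi_test:    pseudo-outcomes on the test set
    theta_calib: model predictions on the calibration set (defines buckets)
    """
    edges = np.quantile(theta_calib, [0.25, 0.5, 0.75])
    which = np.searchsorted(edges, theta_test)  # bucket index in {0, 1, 2, 3}
    gaps = []
    for b in range(4):
        mask = which == b
        if mask.any():
            # squared gap between average prediction and average pseudo-outcome
            gaps.append((theta_test[mask].mean() - chi_test[mask].mean()) ** 2)
    return np.sqrt(np.mean(gaps))
```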
[Figure 1: box-and-whisker plots of the quartile-binned calibration error across repeated data splits, for the 401(k) eligibility CATE model and the 401(k) participation conditional LATE model, before and after calibration.]
We repeat the above experiment over repeated random splits of the initial 401(k) dataset. Figure 1 shows, in a box-and-whisker plot, the empirical distribution of the calibration errors over these runs for the base (uncalibrated) model and the model calibrated using each of the three aforementioned calibration algorithms. Each box is centered at the median calibration error, the bottom of the box is given by the first quartile of calibration error, and the top of the box is given by the third quartile of calibration error. The top (bottom) of the whiskers is given by the maximum (resp. minimum) of the observations falling within 1.5 times the interquartile range of the box.
From Figure 1, we see that all calibration algorithms result in noticeably smaller calibration error. In particular in the setting of the CATE of 401(k) eligibility on net financial assets, the third quartile for the calibration error using isotonic calibration and linear calibration falls below the first quartile of calibration error for the uncalibrated model. Likewise, in the LATE model, which aims to measure the effect of 401(k) participation on net financial assets, the third quartile of calibration error under histogram binning and linear calibration falls below the first quartile of calibration error for the uncalibrated model. This indicates that our calibration algorithms have a significant impact on the calibration error of the produced models.
5.2 Calibrating Estimates of Conditional Quantiles Under Treatment
We now show how Algorithm 4 (cross-calibration for conditionally orthogonal loss functions) can be used to calibrate estimates of the conditional $\gamma$th quantile under treatment. In particular, we study the impact that the sample size and chosen quantile $\gamma$ have on both calibration error and average loss.
For this experiment, we leverage synthetically generated data. At the start of the experiment, we generate two slope vectors $\beta_{\mathrm{out}}, \beta_{\mathrm{prop}} \in \mathbb{R}^d$ with i.i.d. entries. Then, for $i = 1, \dots, n$, we generate i.i.d. covariates $X_i$ and treatments $A_i$, where $A_i \mid X_i \sim \mathrm{Bernoulli}(\pi(X_i))$ and $\pi(x)$ is a logistic function of $x^\top\beta_{\mathrm{prop}}$ clipped to $[c, 1 - c]$. In the above, $c$ is a fixed clipping parameter. Finally, we generate potential outcomes under treatment and control as $Y_i(1) = Y_i(0) = X_i^\top\beta_{\mathrm{out}} + \varepsilon_i$, where the $\varepsilon_i$ are i.i.d. standard normal noise variables. We note that, because we are only interested in the conditional quantile under treatment, we make potential outcomes equivalent under both treatment and control for simplicity.
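This data-generating process might be sketched as follows; the dimension, clipping level, and standard-normal slope entries are illustrative stand-ins for the elided constants.

```python
import numpy as np

def generate_data(n, d=10, clip=0.1, seed=0):
    """Synthetic data: clipped logistic propensity, linear-Gaussian outcome."""
    rng = np.random.default_rng(seed)
    beta_out = rng.normal(size=d)    # slope vector for the outcome
    beta_prop = rng.normal(size=d)   # slope vector for the propensity
    X = rng.normal(size=(n, d))
    # clipping keeps the propensity bounded away from 0 and 1 (overlap)
    pi = np.clip(1.0 / (1.0 + np.exp(-X @ beta_prop)), clip, 1.0 - clip)
    A = rng.binomial(1, pi)
    Y = X @ beta_out + rng.normal(size=n)  # same outcome under both arms
    return X, A, Y
```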
Given data generated as above, we train an initial model using gradient-boosted regression trees. In more detail, in training these regression trees, we leverage the inverse-propensity-weighted pinball loss $g(X)A\,\ell_\gamma(z; Y)$, where we determine the nuisances using cross-fitting, i.e. we estimate the propensity using gradient-boosted decision trees on held-out folds. We note that this loss is Neyman orthogonal conditional on covariates, and thus using this loss allows for efficient estimation of the quantile under treatment. However, this loss is not conditionally orthogonal, and thus is a poor fit for performing calibration.
We then use additional samples generated in the same manner as above to perform calibration using Algorithm 4 and the corrected loss $\ell_\gamma^{\mathrm{or}}$ (outlined in Equation (6)). We estimate the inverse propensity $g^* = 1/\pi^*$ by using gradient-boosted decision trees to estimate the propensity $\pi^*$. We estimate the additional CDF-like nuisance by estimating $\mathbb{P}(Y \le \theta(X) \mid X, A = 1)$, again using gradient-boosted decision trees; heuristically, we are hoping that the initial prediction $\theta(X)$ is a “reasonable rough estimate” for the calibrated prediction $\kappa_\theta(\theta(X))$. After estimating nuisances, we run the cross-calibration algorithm using linear calibration. We repeat the entire above process many times for three quantile values $\gamma$, and plot both the empirical calibration errors and average losses below (measuring loss according to $\ell_\gamma^{\mathrm{or}}$ evaluated at the true nuisances).
Results:
Figure 2 displays the results of the experimental procedure outlined above. Displayed in the left-hand column are plots demonstrating the empirical calibration error at various sample sizes, averaged over the runs. Likewise, in the right-hand column, the sample loss is displayed, again averaged over the runs. We include point-wise valid 95% confidence intervals for all plots.
Regardless of the sample size and chosen quantile $\gamma$, we see a significant decrease in the calibration error. This shows that not only does Algorithm 4 exhibit favorable theoretical convergence guarantees, but it also offers strong performance in practice. Typically, one hopes calibration algorithms enjoy a “do no harm” property, i.e. that calibrating a parameter estimate will not significantly increase loss. While we do not formally prove this, the right-hand column of Figure 2 demonstrates that calibrating typically decreases loss, as desired. Moreover, the loss obtained by using part of the data to construct an initial estimate and the rest to perform calibration is comparable to the loss had all samples directly been used to estimate the unknown conditional parameter. This suggests that reserving some samples for calibration yields significantly lower calibration error without adversely affecting performance.
[Figure 2: empirical calibration error (left column) and average loss (right column) as a function of sample size, one row per quantile level $\gamma$, with point-wise 95% confidence intervals.]
6 Conclusion
In this work, we constructed a framework for calibrating general estimates of heterogeneous causal effects with respect to nuisance-dependent losses. By leveraging variants of Neyman orthogonality, we were able to bound the calibration error of an estimator by two decoupled terms. One term, roughly, represented the error in nuisance estimation, while the other term represented the calibration error of the estimator in a world where the learned nuisances were true. These bounds suggested natural sample-splitting and cross-calibration algorithms, for which we proved high-probability convergence guarantees. Our algorithms also admitted “do no harm” style guarantees. We empirically evaluated our algorithms in Section 5, in which we considered both observational and synthetic data.
While our provided contributions are quite general, there are still interesting directions for future work. First, in our work, we only measure the convergence of our algorithms via the $\ell_2$-calibration error. Depending on the situation, other notions of calibration error may be more appropriate. For instance, Gupta and Ramdas [25] analyze the convergence of histogram/uniform mass binning in terms of a bin-wise, $\ell_\infty$-type calibration error. Likewise, Globus-Harris et al. [21] study multicalibration error. We leave it as interesting future work to extend our results to these more general notions of calibration error. Further, it would be interesting to study how the calibration of estimates of heterogeneous causal effects impacts utility in decision making tasks.
7 Acknowledgments
VS is supported by NSF Award IIS 2337916.
References
- Abrevaya et al. [2015] Jason Abrevaya, Yu-Chin Hsu, and Robert P Lieli. Estimating conditional average treatment effects. Journal of Business & Economic Statistics, 33(4):485–505, 2015.
- Barlow and Brunk [1972] Richard E Barlow and Hugh D Brunk. The isotonic regression problem and its dual. Journal of the American Statistical Association, 67(337):140–147, 1972.
- Bickel et al. [1993] Peter J Bickel, Chris AJ Klaassen, Ya’acov Ritov, and Jon A Wellner. Efficient and adaptive estimation for semiparametric models, volume 4. Springer, 1993.
- Blum and Mansour [2007] Avrim Blum and Yishay Mansour. From external to internal regret. Journal of Machine Learning Research, 8(6), 2007.
- Chernozhukov and Hansen [2004] Victor Chernozhukov and Christian Hansen. The effects of 401(k) participation on the wealth distribution: an instrumental quantile regression analysis. Review of Economics and Statistics, 86(3):735–751, 2004.
- Chernozhukov et al. [2018] Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. Double/debiased machine learning for treatment and structural parameters, 2018.
- Chernozhukov et al. [2022a] Victor Chernozhukov, Whitney Newey, Víctor M Quintas-Martínez, and Vasilis Syrgkanis. RieszNet and ForestRiesz: Automatic debiased machine learning with neural nets and random forests. In International Conference on Machine Learning, pages 3901–3914. PMLR, 2022a.
- Chernozhukov et al. [2022b] Victor Chernozhukov, Whitney K Newey, and Rahul Singh. Automatic debiased machine learning of causal and structural effects. Econometrica, 90(3):967–1027, 2022b.
- Chernozhukov et al. [2022c] Victor Chernozhukov, Whitney K Newey, and Rahul Singh. Debiased machine learning of global and local parameters using regularized riesz representers. The Econometrics Journal, 25(3):576–601, 2022c.
- Chernozhukov et al. [2024] Victor Chernozhukov, Christian Hansen, Nathan Kallus, Martin Spindler, and Vasilis Syrgkanis. Applied causal inference powered by ML and AI. arXiv preprint arXiv:2403.02467, 2024.
- Chung et al. [2023] Youngseog Chung, Aaron Rumack, and Chirag Gupta. Parity calibration. arXiv preprint arXiv:2305.18655, 2023.
- Dawid [1982] A Philip Dawid. The well-calibrated bayesian. Journal of the American Statistical Association, 77(379):605–610, 1982.
- Dawid [1985] A Philip Dawid. Calibration-based empirical probability. The Annals of Statistics, 13(4):1251–1274, 1985.
- Fan et al. [2022] Qingliang Fan, Yu-Chin Hsu, Robert P Lieli, and Yichong Zhang. Estimation of conditional average treatment effects with high-dimensional data. Journal of Business & Economic Statistics, 40(1):313–327, 2022.
- Feng et al. [2012] Ping Feng, Xiao-Hua Zhou, Qing-Ming Zou, Ming-Yu Fan, and Xiao-Song Li. Generalized propensity score for estimating the average treatment effect of multiple treatments. Statistics in medicine, 31(7):681–697, 2012.
- Foster and Vohra [1999] Dean P Foster and Rakesh Vohra. Regret in the on-line decision problem. Games and Economic Behavior, 29(1-2):7–35, 1999.
- Foster and Vohra [1998] Dean P Foster and Rakesh V Vohra. Asymptotic calibration. Biometrika, 85(2):379–390, 1998.
- Foster and Syrgkanis [2023] Dylan J Foster and Vasilis Syrgkanis. Orthogonal statistical learning. The Annals of Statistics, 51(3):879–908, 2023.
- Frölich and Melly [2010] Markus Frölich and Blaise Melly. Estimation of quantile treatment effects with Stata. The Stata Journal, 10(3):423–457, 2010.
- Gao et al. [2024] Chen Gao, Yu Zheng, Wenjie Wang, Fuli Feng, Xiangnan He, and Yong Li. Causal inference in recommender systems: A survey and future directions. ACM Transactions on Information Systems, 42(4):1–32, 2024.
- Globus-Harris et al. [2023] Ira Globus-Harris, Declan Harrison, Michael Kearns, Aaron Roth, and Jessica Sorrell. Multicalibration as boosting for regression. arXiv preprint arXiv:2301.13767, 2023.
- Gopalan et al. [2024] Parikshit Gopalan, Michael Kim, and Omer Reingold. Swap agnostic learning, or characterizing omniprediction via multicalibration. Advances in Neural Information Processing Systems, 36, 2024.
- Guo et al. [2017] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International conference on machine learning, pages 1321–1330. PMLR, 2017.
- Gupta [2022] Chirag Gupta. Post-hoc calibration without distributional assumptions. PhD thesis, Carnegie Mellon University, Pittsburgh, PA, 2022.
- Gupta and Ramdas [2021] Chirag Gupta and Aaditya Ramdas. Distribution-free calibration guarantees for histogram binning without sample splitting. In International Conference on Machine Learning, pages 3942–3952. PMLR, 2021.
- Gupta and Ramdas [2023] Chirag Gupta and Aaditya Ramdas. Online platt scaling with calibeating. arXiv preprint arXiv:2305.00070, 2023.
- Gupta et al. [2020] Chirag Gupta, Aleksandr Podkopaev, and Aaditya Ramdas. Distribution-free binary classification: prediction sets, confidence intervals and calibration. Advances in Neural Information Processing Systems, 33:3711–3723, 2020.
- Hasminskii and Ibragimov [1979] Rafail Z Hasminskii and Ildar A Ibragimov. On the nonparametric estimation of functionals. In Proceedings of the Second Prague Symposium on Asymptotic Statistics, volume 473, pages 474–482. North-Holland Amsterdam, 1979.
- Hébert-Johnson et al. [2018] Ursula Hébert-Johnson, Michael Kim, Omer Reingold, and Guy Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In International Conference on Machine Learning, pages 1939–1948. PMLR, 2018.
- Heckman and Vytlacil [2005] James J Heckman and Edward Vytlacil. Structural equations, treatment effects, and econometric policy evaluation 1. Econometrica, 73(3):669–738, 2005.
- Hendrycks et al. [2019] Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. arXiv preprint arXiv:1912.02781, 2019.
- Hitsch et al. [2024] Günter J Hitsch, Sanjog Misra, and Walter W Zhang. Heterogeneous treatment effects and optimal targeting policy evaluation. Quantitative Marketing and Economics, 22(2):115–168, 2024.
- Kallus et al. [2019] Nathan Kallus, Xiaojie Mao, and Masatoshi Uehara. Localized debiased machine learning: Efficient inference on quantile treatment effects and beyond. arXiv preprint arXiv:1912.12945, 2019.
- Kennedy [2022] Edward H Kennedy. Semiparametric doubly robust targeted double machine learning: a review. arXiv preprint arXiv:2203.06469, 2022.
- Kennedy [2023] Edward H Kennedy. Towards optimal doubly robust estimation of heterogeneous causal effects. Electronic Journal of Statistics, 17(2):3008–3049, 2023.
- Kennedy et al. [2023] Edward H Kennedy, Sivaraman Balakrishnan, and LA Wasserman. Semiparametric counterfactual density estimation. Biometrika, 110(4):875–896, 2023.
- Kent et al. [2018] David M Kent, Ewout Steyerberg, and David Van Klaveren. Personalized evidence based medicine: predictive approaches to heterogeneous treatment effects. Bmj, 363, 2018.
- Kumar et al. [2019] Ananya Kumar, Percy S Liang, and Tengyu Ma. Verified uncertainty calibration. Advances in Neural Information Processing Systems, 32, 2019.
- Lakshminarayanan et al. [2017] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30, 2017.
- Lan and Syrgkanis [2023] Hui Lan and Vasilis Syrgkanis. Causal q-aggregation for cate model selection. arXiv preprint arXiv:2310.16945, 2023.
- Leng and Dimmery [2024] Yan Leng and Drew Dimmery. Calibration of heterogeneous treatment effects in randomized experiments. Information Systems Research, 2024.
- Malinin and Gales [2018] Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. Advances in neural information processing systems, 31, 2018.
- Minderer et al. [2021] Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the calibration of modern neural networks. Advances in Neural Information Processing Systems, 34:15682–15694, 2021.
- Neyman [1979] Jerzy Neyman. C(α) tests and their use. Sankhyā: The Indian Journal of Statistics, Series A (1961-2002), 41(1/2):1–21, 1979. ISSN 0581572X. URL http://www.jstor.org/stable/25050174.
- Niculescu-Mizil and Caruana [2005] Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd international conference on Machine learning, pages 625–632, 2005.
- Nie and Wager [2021] Xinkun Nie and Stefan Wager. Quasi-oracle estimation of heterogeneous treatment effects. Biometrika, 108(2):299–319, 2021.
- Noarov and Roth [2023] Georgy Noarov and Aaron Roth. The scope of multicalibration: Characterizing multicalibration via property elicitation. arXiv preprint arXiv:2302.08507, 2023.
- Noarov et al. [2023] Georgy Noarov, Ramya Ramalingam, Aaron Roth, and Stephan Xie. High-dimensional prediction for sequential decision making. arXiv preprint arXiv:2310.17651, 2023.
- Oprescu et al. [2019] Miruna Oprescu, Vasilis Syrgkanis, and Zhiwei Steven Wu. Orthogonal random forest for causal inference. In International Conference on Machine Learning, pages 4932–4941. PMLR, 2019.
- Ovadia et al. [2019] Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift. Advances in neural information processing systems, 32, 2019.
- Platt et al. [1999] John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61–74, 1999.
- Sahoo et al. [2021] Roshni Sahoo, Shengjia Zhao, Alyssa Chen, and Stefano Ermon. Reliable decisions with threshold calibration. Advances in Neural Information Processing Systems, 34:1831–1844, 2021.
- Semenova and Chernozhukov [2021] Vira Semenova and Victor Chernozhukov. Debiased machine learning of conditional average treatment effects and other causal functions. The Econometrics Journal, 24(2):264–289, 2021.
- Song et al. [2019] Hao Song, Tom Diethe, Meelis Kull, and Peter Flach. Distribution calibration for regression. In International Conference on Machine Learning, pages 5897–5906. PMLR, 2019.
- Syrgkanis et al. [2019] Vasilis Syrgkanis, Victor Lei, Miruna Oprescu, Maggie Hei, Keith Battocchi, and Greg Lewis. Machine learning estimation of heterogeneous treatment effects with instruments. Advances in Neural Information Processing Systems, 32, 2019.
- Thulasidasan et al. [2019] Sunil Thulasidasan, Gopinath Chennupati, Jeff A Bilmes, Tanmoy Bhattacharya, and Sarah Michalak. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. Advances in neural information processing systems, 32, 2019.
- Tsybakov [2009] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer series in statistics. Springer, 2009. ISBN 978-0-387-79051-0. doi: 10.1007/B13794. URL https://doi.org/10.1007/b13794.
- van der Laan et al. [2023] Lars van der Laan, Ernesto Ulloa-Pérez, Marco Carone, and Alex Luedtke. Causal isotonic calibration for heterogeneous treatment effects. arXiv preprint arXiv:2302.14011, 2023.
- Vovk [2002] Vladimir Vovk. On-line confidence machines are well-calibrated. In The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings., pages 187–196. IEEE, 2002.
- Wainwright [2019] Martin J Wainwright. High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge university press, 2019.
- Wilder and Welle [2024] Bryan Wilder and Pim Welle. Learning treatment effects while treating those in need. arXiv preprint arXiv:2407.07596, 2024.
- Zadrozny and Elkan [2001] Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In Icml, volume 1, pages 609–616, 2001.
- Zadrozny and Elkan [2002] Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 694–699, 2002.
Appendix A Calibration Error Decomposition Proofs
In this appendix, we prove the main error decompositions from Sections 3 and 4. These results provide two-term, decoupled bounds on the calibration error of an arbitrary, fixed parameter estimate: one term captures the calibration error that would remain were the learned nuisances correct, while the other measures the distance between the learned nuisances and the true, unknown nuisance parameters.
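Schematically, with $\theta$ a fixed parameter estimate, $g_0$ the true nuisances, and $\hat{g}$ their estimates (generic notation of our own for this summary), the decompositions proved below take the form
$$\mathrm{Cal}(\theta;\, \ell_{g_0}) \;\lesssim\; \underbrace{\mathrm{Cal}(\theta;\, \ell_{\hat{g}})}_{\text{calibration under learned nuisances}} \;+\; \underbrace{\mathrm{Err}(\hat{g}, g_0)}_{\text{nuisance estimation error}},$$
where the precise form of each term depends on which orthogonality condition is in force.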
A.1 Universal Orthogonality
We start by proving Theorem 3.3, which provides the claimed decoupled bound under the assumption that the loss is universally orthogonal (Definition 3.1). See 3.3
Proof of Theorem 3.3.
We start by adding a useful form of zero to the integrand, which yields
We bound the two resulting terms separately. As a first step in bounding the first term, note that by a second-order Taylor expansion with respect to the nuisance estimate with Lagrange remainder, we have
In the above, the first-order derivative (with respect to the nuisance) vanishes due to the assumption of Definition 3.1, because we have Taylor expanded around the true, unknown nuisance.
With this, we can apply the Cauchy-Schwarz inequality, which furnishes
In the second line, we apply Jensen’s inequality inside the conditional expectation.
Bounding the second term is more straightforward. Applying the Cauchy-Schwarz inequality, we have:
This line of reasoning, in total, yields that
Dividing through yields the claimed bound. ∎
A.2 Conditional Orthogonality
We now turn to proving the second error bound, which holds when the loss satisfies the weaker assumption of conditional orthogonality (Definition 4.3). We generically write the loss with a fixed pair of nuisance parameters plugged in. Further, for any fixed post-processing function, we consider the orthogonalized loss equipped with a nuisance parameter ensuring the vanishing cross-derivative condition. To prove Theorem 4.5, we will need two technical lemmas. In what follows, we remind the reader that the calibration function under the orthogonalized loss is specified by
We also recall an identity that holds for any estimate of the second nuisance parameter, which will be useful in the sequel.
The first lemma we prove measures the distance (in norm) between the true calibration function and the calibration function under any other nuisance pair, where the latter pair should be viewed as an arbitrary estimate. We can bound this distance in terms of the complicated-looking error term first introduced in Theorem 3.3. This term actually simplifies rather nicely, as seen earlier when we computed the corresponding quantity for the task of calibrating estimates of conditional quantiles under treatment.
Lemma A.1.
Let the post-processing be an arbitrary function, assume the loss is conditionally orthogonal, and let the associated pair of nuisance functions guarantee the vanishing cross-derivative condition in Definition 4.3. Fix some other pair of nuisance functions. Then, assuming the base loss satisfies strong convexity (Assumption 3), we have
where the error term is defined as in Theorem 3.3.
Proof.
First, strong convexity (Assumption 3), alongside its equivalent first-order characterization, yields:
In the above, the equality on the third line follows from the definition of the calibration function, as first-order optimality conditions imply the corresponding conditional derivative vanishes.
Rearranging the above inequality and taking absolute values yields
Next, observe that a second-order Taylor expansion (with respect to the nuisance pair) with Lagrange-form remainder, combined with conditional orthogonality (Definition 4.3), yields
where the intermediate nuisance lies on the segment between the two nuisance pairs. Consequently, we have
Noting the identity recalled above proves the claimed result. ∎
The second lemma we prove bounds the distance between the parameter estimate and the calibration function under a fixed pair of nuisances in terms of the calibration error.
Lemma A.2.
Fix an estimator, an arbitrary, fixed nuisance pair, and the calibration function associated with this pair. Assume the loss is strongly convex (Assumption 3). We have
Proof.
First, observe that from strong convexity (as in the proof of the above lemma), we have
Thus, dividing through and taking the absolute value yields:
We now integrate to get the desired result. In particular, we have that
∎
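For reference, the strong-convexity inequality invoked in the two preceding proofs can be written, in generic notation of our own with strong-convexity constant $\lambda$, as
$$\ell(a) - \ell(b) \;\ge\; \partial\ell(b)\,(a - b) + \frac{\lambda}{2}(a - b)^2.$$
Taking $b$ to be the calibration function and conditioning on a level set of the estimator, the first-order term vanishes by optimality, which leaves the quadratic lower bound used in both arguments.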
With the above two lemmas in hand, we can now prove Theorem 4.5, which we recall shows a decoupled bound on the calibration error of a parameter estimate with respect to a conditionally orthogonal loss function.
See 4.5
Proof of Theorem 4.5.
First, we note that by Corollary 4.4, we have that
for any second nuisance parameter, so it suffices to bound the analogous quantity under the learned nuisances. We have:
where the first equality follows from the above calculation, the second from the fact that the identity holds regardless of the choice of additional nuisance, and the third from a first-order Taylor expansion with Lagrange-form remainder. The final equality follows from adding and subtracting the appropriate term. Thus, we have
Appendix B Algorithm Convergence Proofs
B.1 Universal Orthogonality
We now restate and prove the convergence guarantee of the sample splitting algorithm for calibration with respect to universally orthogonalizable loss functions. The result below is largely just an application of Theorem 3.3, with the only caveat being that some care must be taken to handle the fact that the output parameter and the nuisance estimate are now random variables, not fixed parameters. See 3.5
Proof of Theorem 3.5.
First, observe that, under Assumption 1, for any fixed parameter and nuisance estimate, we have
where the outer expectation is over the joint distribution of the data, and the latter quantity is controlled with high probability by Assumption 2. Since the above equality holds for any fixed parameter and nuisance, it still holds when the parameter is replaced by the random output of Algorithm 1, and the nuisance by the corresponding estimate obtained from the first fold.
Under Assumption 1, it is also clear that the calibration function does not depend on the second nuisance component, which we therefore suppress going forward. Define the “bad” events corresponding to failure of nuisance estimation and failure of calibration. Clearly, the first part of Assumption 2 controls the probability of the former event. Likewise, the second part of Assumption 2 controls the conditional probability of the latter, as fixing the first fold fixes the learned nuisances, per Algorithm 1. Thus, applying the law of total probability, we have that the marginal probability of either bad event (over draws of both folds) is bounded by
This is because the folds are independent and conditioning on the first fold of observations fixes the nuisance estimate. Thus, on the “good” event, which occurs with probability at least one minus the sum of the failure probabilities, we have
where the first inequality follows from Theorem 3.3 and the second equality follows from the preamble at the beginning of this proof.
∎
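As an illustration of the sample-splitting scheme just analyzed, the following Python sketch instantiates it for the canonical CATE case: nuisances are fit on one fold, doubly-robust (AIPW) pseudo-outcomes are formed on the other, and an off-the-shelf calibrator (here, isotonic regression) is applied. The function names are ours, and `theta` denotes the initial CATE model treated as a black box.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.isotonic import IsotonicRegression

def calibrate_cate(theta, X, A, Y, seed=0):
    """Two-fold sample splitting for causal calibration of a CATE model (sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    f1, f2 = idx[: len(Y) // 2], idx[len(Y) // 2:]

    # Fold 1: estimate nuisances (propensity and outcome regressions).
    pi = GradientBoostingClassifier().fit(X[f1], A[f1])
    mu1 = GradientBoostingRegressor().fit(X[f1][A[f1] == 1], Y[f1][A[f1] == 1])
    mu0 = GradientBoostingRegressor().fit(X[f1][A[f1] == 0], Y[f1][A[f1] == 0])

    # Fold 2: form doubly-robust (AIPW) pseudo-outcomes.
    p = np.clip(pi.predict_proba(X[f2])[:, 1], 0.01, 0.99)
    m1, m0 = mu1.predict(X[f2]), mu0.predict(X[f2])
    phi = (m1 - m0
           + A[f2] / p * (Y[f2] - m1)
           - (1 - A[f2]) / (1 - p) * (Y[f2] - m0))

    # Off-the-shelf calibration: regress pseudo-outcomes on predictions.
    iso = IsotonicRegression(out_of_bounds="clip").fit(theta(X[f2]), phi)
    return lambda Xnew: iso.predict(theta(Xnew))
```

With isotonic regression as the calibrator, this coincides in spirit with causal isotonic calibration [58]; swapping in any other univariate calibrator leaves the structure unchanged.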
B.2 Conditional Orthogonality
We now prove the convergence guarantees for Algorithm 3. The proof is largely the same as the convergence proof for Algorithm 1, but we nonetheless include it for completeness. The key differences are that (a) we no longer have access to “pseudo-outcomes” that, in expectation, look like the target treatment effect, and (b) we must be careful, since the second nuisance parameter used in the definition of conditional orthogonality does not, in general, remain valid when the model is post-processed. Once again, the fixed initial estimate is treated as an input to Algorithm 3, and the random, calibrated estimate is the output of the algorithm. See 4.8
Proof of Theorem 4.8.
First, we observe that, because the calibrated estimate is an almost surely injective random post-processing of the initial estimate, the two have the same level sets. Consequently, by Lemma 4.7, we may work with the initial model’s second nuisance parameter without loss of generality. Further, we have equivalence of the calibration functions corresponding to the initial and calibrated estimates. Thus, we have
The rest of the proof is now more or less identical to that of Theorem 3.5, but we include it for completeness. Define the “bad” events corresponding to failure of nuisance estimation and failure of calibration. Clearly, the first part of Assumption 5 controls the probability of the former event. Likewise, the second part of Assumption 5 controls the conditional probability of the latter, as fixing the first fold fixes the learned nuisances, per Algorithm 3. Thus, applying the law of total probability, we have that the marginal probability of either bad event (over draws of both folds) is bounded by
This is because the folds are independent and conditioning on the first fold of observations fixes the nuisance estimate. Thus, on the “good” event, which occurs with probability at least one minus the sum of the failure probabilities, we have
where the first inequality follows from Theorem 4.5 and the second equality follows from the preamble at the beginning of this proof.
∎
Appendix C A Do No Harm Property for Universally Orthogonal Losses
Throughout the main body of the paper, we focused on deriving bounds on the excess calibration error of our causal calibration algorithms. One desideratum for any calibration algorithm is that the calibrated estimator does not have significantly higher loss than the original estimator. This property, called a “do no harm” property, is satisfied by many off-the-shelf calibration algorithms. We prove such a bound for universally orthogonal losses in this section. In what follows, the risk of a predictor with respect to a nuisance is its expected loss with that nuisance plugged in.
We start by proving a generic upper bound on the difference in risk between two arbitrary estimators. The argument presented below is similar in spirit to the proof of Theorem 2 in Foster and Syrgkanis [18].
Theorem C.1.
Let the loss be universally orthogonal, let there be two estimators, an estimated nuisance function, and the true nuisance function. Then, we have
where the constant may be taken as follows if we assume the loss is smooth in the nuisance, i.e. if we have
(8)
If we instead assume the linear score condition and that the relevant component of the loss is linear in the nuisance, then we take the alternative constant below.
Proof.
First, observe that we have the bound
Now, observe that by performing a second-order Taylor expansion with respect to the nuisance component, we have:
where the intermediate nuisance is understood in a point-wise sense. Thus, we can bound the sum:
We can use universal orthogonality to show the first order difference vanishes. In particular, we have
where the second equality holds for some intermediate nuisance (understood point-wise, for each argument) by performing a first-order Taylor expansion, and the final equality follows from universal orthogonality of the loss.
If we are in the first setting, and assume that the Hessian of the loss has a maximum eigenvalue uniformly bounded over predictions and nuisances, then we can bound the second-order difference above as
Else, if we instead assume the linear score condition outlined in Assumption 1 and further assume the loss gradient is linear in the nuisance, and thus can be expressed via some pseudo-outcome mapping, then it is clear we have, for any
Thus, by leveraging Jensen’s inequality, we can bound the second order difference by
which completes the proof.
∎
As before, with the above error decomposition in hand, we can prove a general convergence result given access to nuisance estimation and calibration algorithms satisfying certain high-probability convergence guarantees. Given the similarity of this result to Theorems 3.5 and 4.8, and the fact that Theorem C.1 does the majority of the heavy lifting, we omit the proof.
Theorem C.2.
Assume the same setup as Theorem C.1, and suppose we have access to a nuisance estimation algorithm and a general loss calibration algorithm satisfying the following.
1. For any distribution on the data, sample size, and failure probability, the nuisance estimation error is, with high probability, bounded by some rate function.
2. For any distribution on the data, loss, nuisance estimate, and failure probability, the calibration algorithm satisfies the corresponding high-probability guarantee, with error bounded by some rate function.
Then, with probability at least one minus the sum of the failure probabilities, we have
Appendix D Calibration Algorithms for General Losses
Here, we present analogues of classical calibration algorithms defined with respect to general losses involving nuisance components. In particular, we give generalizations of histogram binning/uniform mass binning, isotonic calibration, and linear calibration. We only prove convergence guarantees for one algorithm (uniform mass binning), but we experimentally showed in Section 5 that other algorithms (e.g. linear calibration) work well in practice.
We first present a general algorithm that computes a univariate post-processing over a suitable class of functions and then returns a calibrated parameter estimate.
Depending on how the class of post-processing functions is defined, we obtain generalizations of many classical algorithms; we refer the reader to Table 2 in the main paper for appropriate choices, and to the code sketch below for the generic pattern. As noted in Section 4, our convergence guarantee requires that we learn an injective post-processing map from model predictions to calibrated predictions. We discussed in detail how many common algorithms either naturally satisfy this desideratum or can easily be modified to do so. The one exception is uniform mass/histogram binning, which maps model predictions to a finite number of values. To address this, we prove the convergence of a separate, three-way sample splitting algorithm that performs uniform mass binning for general losses.
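The following Python sketch illustrates the generic pattern: empirical risk minimization over a parameterized family of univariate post-processings, with the nuisance-plugged-in loss supplied by the caller. The helper names and the choice of optimizer are illustrative assumptions of ours; instantiating the family with affine maps recovers linear calibration, while a sigmoid family recovers Platt scaling.

```python
import numpy as np
from scipy.optimize import minimize

def generic_calibrate(preds, data, loss, family, x0):
    """Post-processing ERM over a parameterized family of univariate maps.

    family(params, t): maps a raw prediction t to a post-processed one.
    loss(pred, z): per-observation loss with learned nuisances plugged in.
    """
    def objective(params):
        return np.mean([loss(family(params, t), z)
                        for t, z in zip(preds, data)])

    res = minimize(objective, x0=np.asarray(x0, dtype=float),
                   method="Nelder-Mead")
    return lambda t: family(res.x, t)

# Example: linear calibration against pseudo-outcomes phi under squared loss.
# affine = lambda p, t: p[0] + p[1] * t
# post = generic_calibrate(preds, phi, lambda f, y: (f - y) ** 2,
#                          affine, x0=[0.0, 1.0])
```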
D.1 An analogue of Algorithm 3 for UMB
We now develop a three-way sample splitting algorithm for causal calibration based on uniform mass/histogram binning [27, 25, 38]. As in the case of Algorithm 3, our algorithm (Algorithm 6 below) is implicitly based on the calibration error decomposition presented in Theorem 4.5. To prove the convergence of Algorithm 3, we needed to assume the calibrated estimate was an injective, randomized post-processing of the initial estimate. While this desideratum is satisfied by some calibration algorithms (e.g. linear calibration and Platt scaling) and can be made to hold for others (e.g. by slightly perturbing the post-processing map learned by isotonic calibration), it is clearly not a reasonable assumption for uniform mass/histogram binning. This is because, as the name suggests, binning algorithms “compress” the initial estimator into a pre-specified number of data-dependent bins.
Why did we assume the calibrated estimate was an injective post-processing of the initial model? In any causal calibration algorithm, we need to estimate the unknown nuisance functions. For conditionally orthogonal losses, there are two nuisance parameters we need to estimate. The first does not depend on either the initial model or the calibrated model, and thus can be readily estimated from data. However, the second depends on the calibrated model, and so in general cannot be reliably estimated from data in advance. Assuming injectivity of the post-processing map allowed us to conclude (via Lemma 4.7) that the second nuisance for the calibrated model coincides with that of the initial model. Since we know the initial model at the outset of the problem, it is reasonable to assume we can produce a convergent estimate of this nuisance.
The question is now what we can do for binning-based algorithms, which generally cannot be written as an injective post-processing of the initial model. The key idea is that, for histogram/uniform mass binning, the level sets of the calibrated model depend only on the order statistics of the initial model’s predictions, not on the nuisance parameters. Thus, we propose a natural three-way sample-splitting analogue of uniform mass binning for conditionally orthogonal losses.
Our algorithm works as follows. First, we use the covariates associated with one third of the data to compute the buckets/bins, which are determined by evenly-spaced order statistics of the initial model’s predictions. Fixing the level sets up front in this way in turn fixes the additional nuisance we need to estimate: by Lemma 4.7, any function with the same level sets as the initial model shares its second nuisance parameter. Then, we use the second third of the data to estimate the unknown nuisance functions. Finally, we use the learned nuisance parameters and the final third of the data to compute the empirical loss minimizer in each bucket. We formally state this heuristically-described procedure in Algorithm 6, and give a code sketch of the scheme below.
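A minimal sketch of the three-way scheme follows, assuming for concreteness a squared-error-style loss so that the per-bucket empirical minimizer reduces to a within-bucket mean of pseudo-outcomes computed with the split-two nuisances; all variable names are ours.

```python
import numpy as np

def umb_calibrate(preds_split1, preds_split3, phi_split3, B):
    """Uniform mass binning for a general loss (sketch of Algorithm 6).

    Split 1 fixes bucket edges via evenly spaced order statistics of the
    initial model's predictions; split 2 (not shown) fits the nuisances
    used to form the pseudo-outcomes phi_split3; split 3 computes the
    per-bucket empirical loss minimizer (here, the within-bucket mean).
    """
    # Interior bucket edges from evenly spaced order statistics.
    edges = np.quantile(preds_split1, np.linspace(0, 1, B + 1)[1:-1])
    bucket = np.searchsorted(edges, preds_split3)

    # Per-bucket empirical minimizers.
    values = np.array([
        phi_split3[bucket == b].mean() if np.any(bucket == b) else 0.0
        for b in range(B)
    ])

    def calibrated(t):
        # Map new predictions to their bucket's fitted value.
        return values[np.searchsorted(edges, t)]

    return calibrated
```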
We now present the set of assumptions that we will use to prove convergence of the above algorithm.
Assumption 6.
Let there be given a nuisance estimation algorithm and any initial estimator. We make the following assumptions.
1. The initial estimator is uniformly bounded.
2. The conditionally orthogonal loss function satisfies:
(a) For any prediction, the minimizer of the loss takes values in a bounded interval.
(b) For any prediction and nuisance pair, the partial derivative of the loss is uniformly bounded.
3. For any distribution on the data, sample size, and failure probability, with probability at least the corresponding level over the draws of an i.i.d. sample, the nuisance estimation error is bounded by some rate function.
The above assumptions are mostly standard and essentially say that (a) the initial estimate is bounded, (b) the partial derivative of the loss is bounded and the minimizer takes values in a bounded interval, and (c) we have access to an algorithm that can accurately estimate nuisance parameters.
For simplicity, we assume that the values Algorithm 6 assigns to the buckets are unique. This ensures two distinct buckets do not merge, which would invalidate our application of Lemma 4.7. If, in practice, two bucket values coincide, the learner can simply add arbitrarily small noise to one of them to guarantee uniqueness.
Assumption 7.
With probability 1 over the i.i.d. draws, the values assigned to the buckets are distinct.
We now state the main result of this subsection, a technical convergence guarantee for Algorithm 6. We prove Theorem D.1 (along with requisite lemmas and propositions) in the next subsection of this appendix.
Theorem D.1.
Fix any initial estimator, conditionally orthogonal loss function, and failure probabilities. Suppose Assumptions 3, 4, and 6 hold, and assume the sample size is sufficiently large. Then, with probability at least one minus the total failure probability over the randomness of the algorithm, the output of Algorithm 6 satisfies
where the constant bounds the partial derivative of the loss, as discussed in Assumption 6.
D.2 Proving Theorem D.1
We now pivot to the task of proving Theorem D.1. We first cite a result guaranteeing that, with high probability, close to an equal fraction of points will fall into each bucket.
Lemma D.2 ([38], Lemma 4.3; [27], Lemma 13).
For a universal constant, if the sample size is sufficiently large, the buckets produced in Algorithm 6 will satisfy
(9)
simultaneously for all buckets, with high probability over the randomness of the first split.
Next, we argue that, conditional on the random bins satisfying Lemma D.2, the population average derivative conditional on the observation falling into any given bucket will be close to the corresponding sample average.
Lemma D.3.
Fix any initial estimator, conditionally orthogonal loss function, and nuisance estimate. Suppose the buckets satisfy the mass lower bound of Lemma D.2 for every bucket, and that Assumption 6 holds. Then, with high probability over the randomness of the calibration split, we have, simultaneously for all buckets and prediction values,
where the quantity on the left denotes the empirical conditional mean over the calibration dataset.
Proof.
For convenience, introduce notation for the set of indices whose predictions fall in each bucket. Given the lower bound on the bucket probabilities, the multiplicative Chernoff bound (Lemma D.4) tells us that, with high probability,
(10)
for all buckets.
Note that the empirical conditional mean for a bucket is an average over the points falling in that bucket. Therefore, combining inequality (10) with Hoeffding’s inequality (Lemma D.5), we have, for any fixed prediction value, with high probability, simultaneously for all buckets,
Now, for some covering radius to be chosen later, we take a union bound over a finite covering of the prediction range: with high probability, we have, for all buckets and all grid points:
For any prediction value, taking its closest point in the grid yields
as the relevant derivative is Lipschitz by Assumption 4. Hence, with high probability, we have, for any bucket and prediction value,
Therefore, the claimed bound holds with the stated probability for all buckets and prediction values. Setting the covering radius appropriately yields the result. ∎
Lemma D.4 (Multiplicative Chernoff Bound).
Let $X_1, \dots, X_n$ be independent random variables such that $X_i \in [0, 1]$ for all $i$, and let $\mu = \mathbb{E}\left[\sum_{i=1}^n X_i\right]$. For all $\delta \in (0, 1)$,
$$\mathbb{P}\left(\sum_{i=1}^n X_i \le (1 - \delta)\mu\right) \le \exp\left(-\frac{\delta^2 \mu}{2}\right).$$
Lemma D.5 (Hoeffding’s Inequality).
Let $X_1, \dots, X_n$ be independent random variables such that $a_i \le X_i \le b_i$ almost surely. Consider the sum $S_n = \sum_{i=1}^n X_i$. For all $t > 0$,
$$\mathbb{P}\left(\left|S_n - \mathbb{E}[S_n]\right| \ge t\right) \le 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right).$$
We now use Lemma D.3 to prove the following proposition.
Proposition D.6.
Assume the same setup as Lemma D.3 and suppose Assumption 6 holds. Let the calibration sample be drawn i.i.d., let the nuisance pair be fixed, and define the binned estimator via the per-bucket empirical loss minimizers, where
Assume the per-bucket values are distinct almost surely. Then, with high probability,
where the constant bounds the partial derivative of the loss, as discussed in Assumption 6.
Proof.
As we have assumed that the bucket values are all distinct, we have
Since we have assumed the requisite sample-size condition, we have from Lemma D.3 that, with high probability, simultaneously for each bucket,
which follows from the definition of the per-bucket empirical minimizer. Thus, with the stated probability, we get
∎
We now have the requisite tools to prove Theorem D.1. Our argument proceeds in largely the same way as the proofs of Theorems 3.5 and 4.8: we start by defining appropriate “good” events, and then bound the overall probability of their failure.
Proof of Theorem D.1.
As before, we start by defining some “bad” events, given respectively by
where the relevant quantities are as defined in Algorithm 6. The probability of the first event is controlled by Assumption 6, since fixing the nuisance-estimation split fixes the learned estimate. Thus, the law of total expectation bounds its marginal probability.
Bounding the probability of the second event takes mildly more care. Define the event
which, by Lemma D.2 and the assumed lower bound on the sample size, occurs with high probability. We have the following bound:
where the second-to-last inequality follows from the fact established in Proposition D.6. We now apply Theorem 4.5 to see that, on the “good” event (which occurs with the stated probability), we have
which proves the desired result. ∎
Appendix E Additional Proofs
In this appendix, we present proofs of additional claims that do not constitute our primary results. We proceed section by section. See 3.4
Proof of Proposition 3.4.
Observe that we have,
(Assumption 1)
Now, we compute the Hessian that appears on the second line. In particular, further calculation yields that
Thus, we can write the error term down as
Thus, taking square roots, we see that we have
Dividing both sides by two yields the claimed result. The second claim is trivial and follows from an application of Jensen’s inequality.
∎
See 4.7
Proof of Lemma 4.7.
The first claim is straightforward, as we have
for all arguments, so we may simply plug in either nuisance. The equivalence of calibration functions, in particular, implies we have
for any direction and any choice of post-processing. That is, the second point of Definition 4.3 is satisfied when either nuisance is plugged in. Thus, without loss of generality, we can assume they are the same. ∎