
Second order ancillary: A differential view from continuity

Ailana M. Fraser (afraser@math.ubc.ca), Department of Mathematics, University of British Columbia, Vancouver, Canada V6T 1Z2.
D.A.S. Fraser (dfraser@utstat.toronto.edu), Department of Statistics, University of Toronto, Toronto, Canada M5S 3G3.
Ana-Maria Staicu (staicu@stat.ncsu.edu), Department of Statistics, North Carolina State University, Raleigh, NC 27695, USA.
(2010; received January 2009; revised December 2009)
Abstract

Second order approximate ancillaries have evolved as the primary ingredient for recent likelihood development in statistical inference. This uses quantile functions rather than the equivalent distribution functions, and the intrinsic ancillary contour is given explicitly as the plug-in estimate of the vector quantile function. The derivation uses a Taylor expansion of the full quantile function, and the linear term gives a tangent to the observed ancillary contour. For the scalar parameter case, there is a vector field that integrates to give the ancillary contours, but for the vector case, there are multiple vector fields and the Frobenius conditions for mutual consistency may not hold. We demonstrate, however, that the conditions hold in a restricted way and that this verifies the second order ancillary contours in moderate deviations. The methodology can generate an appropriate exact ancillary when such exists or an approximate ancillary for the numerical or Monte Carlo calculation of $p$-values and confidence quantiles. Examples are given, including nonlinear regression and several enigmatic examples from the literature.

Keywords: approximate ancillary, approximate location model, conditioning, confidence, $p$-value, quantile

doi: 10.3150/10-BEJ248

volume: 16, issue: 4

1 Introduction

Ancillaries are loved or hated, accepted or rejected, but typically ignored. Recent approximate ancillary methods (e.g., [28]) give a decomposition of the sample space rather than providing statistics on the sample space (e.g., [7, 26]). As a result, continuity gives the contour along which the variable directly measures the parameter and then gives the subcontour that provides measurement of a parameter of interest. This, in turn, enables the high accuracy of cumulant generating function approximations [9, 2] to extend to cover a wide generality of statistical models.

Ancillaries initially arose (see [10]) to examine the accuracy of the maximum likelihood estimate, then (see [11]) to calibrate the loss of information in the use of the maximum likelihood estimate and then (see [12]) to develop a key instance involving the configuration statistic. The configuration of a sample arises naturally in the context of sampling a location-scale model, where a standardized coordinate $z=(y-\mu)/\sigma$ has a fixed and known error distribution $g(z)$: the $i$th coordinate of the response thus has $f(y_{i};\mu,\sigma)=\sigma^{-1}g\{(y_{i}-\mu)/\sigma\}$. The configuration $a(y)$ of the sample is the plug-in estimate of the standardized residual,

a(y)=z^=(y1μ^σ^,,ynμ^σ^),a(y)=\hat{z}=\biggl{(}{y_{1}-\hat{\mu}\over\hat{\sigma}},\ldots,{y_{n}-\hat{\mu}\over\hat{\sigma}}\biggr{)}^{\prime}, (1)

where $(\hat{\mu},\hat{\sigma})$ is the maximum likelihood value for $(\mu,\sigma)$ or is some location-scale equivalent. Clearly, the distribution of $\hat{z}$ is free of $\mu$ and $\sigma$, as the substitution $y_{i}=\mu+\sigma z_{i}$ in (1) leads to the cancellation of dependence on $\mu$ and $\sigma$. This supports a common definition for an ancillary statistic $a(y)$: that it has a parameter-free distribution; other conditions are often added to seek sensible results.
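As a small numerical check (a minimal sketch, assuming a standard Normal error $g$ so that $\hat{\mu}$ is the sample mean and $\hat{\sigma}$ the root-mean-square deviation; the sample values are hypothetical), the configuration (1) is unchanged under the substitution $y_{i}=\mu+\sigma z_{i}$:

```python
import numpy as np

def configuration(y):
    """Plug-in standardized residual a(y) = (y - mu_hat)/sigma_hat, eq. (1).

    Assumes standard Normal error g, so the maximum likelihood values are
    mu_hat = sample mean and sigma_hat = root-mean-square deviation.
    """
    mu_hat = y.mean()
    sigma_hat = np.sqrt(np.mean((y - mu_hat) ** 2))
    return (y - mu_hat) / sigma_hat

rng = np.random.default_rng(0)
z = rng.standard_normal(10)
# The substitution y_i = mu + sigma * z_i cancels out of a(y):
assert np.allclose(configuration(3.0 + 2.0 * z), configuration(z))
```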

More generally, the observed value of an ancillary identifies a sample space contour along which parameter change modifies the model, thus yielding the conditional model on the observed contour as the appropriate model for the data. The ancillary method is to use directly this conditional model identified by the data.

One approach to statistical inference is to use only the observed likelihood function $L^{0}(\theta)=L(\theta;y^{0})$ from the model $f(y;\theta)$ with observed data $y^{0}$. Inference can then be based on some simple characteristic of that likelihood. Alternatively, a weight function $w(\theta)$ can be applied and the composite $w(\theta)L(\theta)$ treated as a distribution describing the unknown $\theta$; this leads to a rich methodology for exploring data, usually, but unfortunately, promoted solely within the Bayesian framework.

A more incisive approach derives from an enriched model which is often available and appropriate. While the commonly cited model is just a set of probability distributions on the sample space, an enriched model can specifically include continuity of the model density function and continuity of coordinate distribution functions. An approach that builds on these enrichments can then, for example, examine the observed data $y^{0}$ in relation to other data points that have a similar shape of likelihood and are thus comparable, and can do even more. For the location-scale model, such points are identified by the configuration statistic; then, accordingly, the model for inference would be $f\{y\mid a(y)=a^{0};\theta\}$, where $a(y)$ is the configuration ancillary.

Exact ancillaries as just described are rather rare and seem limited to location-type models and simple variants. However, extensions that use approximate ancillaries (e.g., [18, 22]) have recently been broadly fruitful, providing approximation in an asymptotic sense. Technical issues can arise with approximate values for an increasing number of coordinates, but these can be managed by using ancillary contours rather than statistics; thus, for a circle, we use explicitly a contour $A=\{(x,y)=(a^{1/2}\cos t,a^{1/2}\sin t)\colon t\in[0,2\pi)\}$ rather than using implicitly a statistic $x^{2}+y^{2}=a$.

We now assume independent coordinate distribution functions that are continuously differentiable with respect to the variable and the parameter; extensions will be discussed separately. Then, rather than working directly with a coordinate distribution function $u_{i}=F_{i}(y_{i};\theta)$, we will use the inverse, the quantile function $y_{i}=y_{i}(u_{i};\theta)$, which presents a data value $y_{i}$ in terms of a corresponding $p$-value $u_{i}$. For additional advantage, we could use a scoring variable $x$ in place of the $p$-value, for example, $x=\Phi^{-1}(u)$ or $x=F^{-1}(u;\theta_{0})$, where $\Phi(\cdot)$ is the standard Normal distribution function. We can then write $y=y(x;\theta)$, where a coordinate $y_{i}$ is presented in terms of the corresponding scoring variable $x_{i}$.

For the full response variable, let $y=y(x;\theta)=\{y_{1}(x_{1};\theta),\ldots,y_{n}(x_{n};\theta)\}'$ be the quantile vector expressing $y$ in terms of the reference or scoring variable $x$ with its given distribution: the quantile vector records how parameter change affects the response variable and its distribution, as prescribed by the continuity of the coordinate distribution functions.

For an observed data point $y^{0}$, a convenient reference value $\hat{x}^{0}$, or fitted $p$-value vector, is obtained by solving the equation $y^{0}=y(x;\hat{\theta}^{0})$ for $x$, where $\hat{\theta}^{0}$ is the observed maximum likelihood value; for this, we assume regularity and asymptotic properties for the statistical model. The contour of the second order ancillary through the observed data point as developed in this paper is then given as the trajectory of the reference value,

A^{0}=\{y(\hat{x}^{0};t)\colon t\in\mathbb{R}^{p}\}, \qquad (2)

to second order under parameter change, where $p$ here is the dimension of the parameter. A sample space point on this contour has, to second order, the same estimated $p$-value vector as the observed data point, and special properties for the contours are available to second order.
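As a concrete sketch of this construction, assume (hypothetically) an exponential model with rate $\theta$, coordinate quantile $y_{i}=-\log(1-u_{i})/\theta$ and Normal scoring $x=\Phi^{-1}(u)$; the code computes $\hat{\theta}^{0}$, solves $y^{0}=y(x;\hat{\theta}^{0})$ for $\hat{x}^{0}$ and traces the contour (2). Here $y(\hat{x}^{0};t)=(\hat{\theta}^{0}/t)y^{0}$, the exact ancillary ray of the scale model through the data point.

```python
import numpy as np
from scipy.stats import norm

def quantile(x, theta):
    """Coordinate quantile y = -log(1 - Phi(x))/theta of the exponential model."""
    return -np.log(1.0 - norm.cdf(x)) / theta

rng = np.random.default_rng(1)
y0 = rng.exponential(scale=2.0, size=8)     # observed data (hypothetical)
theta_hat0 = y0.size / y0.sum()             # exponential maximum likelihood value

# Fitted reference value: solve y0 = y(x; theta_hat0) coordinate by coordinate.
u_hat0 = 1.0 - np.exp(-theta_hat0 * y0)
x_hat0 = norm.ppf(u_hat0)
assert np.allclose(quantile(x_hat0, theta_hat0), y0)

# Ancillary contour (2): the trajectory of x_hat0 under parameter change;
# here y(x_hat0; t) = (theta_hat0/t) * y0, a ray through the data point.
A0 = [quantile(x_hat0, t) for t in (0.5, 1.0, 2.0)]
```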

The choice of the reference variable with given data has no effect on the contour: the reference variable could be Uniform, as with the $p$-value; or, it could be the response distribution itself for some choice of the parameter, say $\theta_{0}$.

For the location-scale example mentioned earlier, we have the coordinate quantile function $y_{i}=\mu+\sigma z_{i}$, where $z_{i}$ has the distribution $g(z)$. The vector quantile function is

y(z;\mu,\sigma)=\mu 1+\sigma z, \qquad (3)

where $1=(1,\ldots,1)'$ is the ‘one vector’. With the data point $y^{0}$, we then have the fitted $\hat{z}^{0}=(y^{0}-\hat{\mu}^{0}1)/\hat{\sigma}^{0}$. The observed ancillary contour to second order is then obtained from (2) by substituting $\hat{z}^{0}$ in the quantile (3):

A^{0}=\{y(\hat{z}^{0};t)\}=\{m1+s\hat{z}^{0}\colon(m,s)\in\mathbb{R}\times\mathbb{R}^{+}\}=\mathcal{L}^{+}(1;\hat{z}^{0}), \qquad (4)

with positive coefficient for the second vector. This is the familiar exact ancillary contour $a(y)=a^{0}$ from (1).
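A minimal sketch of the contour (4), again assuming Normal maximum likelihood values for illustration: the data point is recovered at $(m,s)=(\hat{\mu}^{0},\hat{\sigma}^{0})$, and varying $(m,s)$ with $s>0$ traces the half-plane $\mathcal{L}^{+}(1;\hat{z}^{0})$.

```python
import numpy as np

y0 = np.array([1.2, 0.7, 2.9, 1.6])          # hypothetical data point
mu_hat0 = y0.mean()
sigma_hat0 = np.sqrt(np.mean((y0 - mu_hat0) ** 2))
z_hat0 = (y0 - mu_hat0) / sigma_hat0         # fitted reference value

def contour_point(m, s):
    """A point m*1 + s*z_hat0 of the observed ancillary contour (4), s > 0."""
    return m * np.ones_like(y0) + s * z_hat0

# The data point itself sits on the contour at (m, s) = (mu_hat0, sigma_hat0):
assert np.allclose(contour_point(mu_hat0, sigma_hat0), y0)
```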

An advantage of the vector quantile function in the context of the enriched model mentioned above is that it allows us to examine how parameter change modifies the distribution and thus how it moves data points, as a direct expression of the explicit continuity. In this sense, we define the velocity vector or vectors as $v(x;\theta)=(\partial/\partial\theta)y(x;\theta)=\partial y/\partial\theta$. In the scalar $\theta$ case, this is a vector recording the direction of movement of a point $y$ under $\theta$ change; in the vector $\theta$ case, it is a $1\times p$ array of such vectors in $\mathbb{R}^{n}$, $V(x;\theta)=\{v_{1}(x;\theta),\ldots,v_{p}(x;\theta)\}$, recording the separate effects from the parameter coordinates $\theta_{1},\ldots,\theta_{p}$. For the location-scale example, the velocity array is $V(z;\mu,\sigma)=(1,z)$, which can be viewed as a $1\times 2$ array of vectors in $\mathbb{R}^{n}$.

The ancillary contour can then be presented using a Taylor series about $y^{0}$ with coefficients given by the velocity and acceleration arrays $V$ and $W$. For the location-scale example, the related acceleration vectors are equal to zero.

For more insight, consider the general scalar $\theta$ case and the velocity vector $v(x;\hat{\theta}^{0})$. For a typical coordinate, this gives the change $\mathrm{d}y=v(x;\hat{\theta}^{0})\,\mathrm{d}\theta$ in the variable as produced by a small change $\mathrm{d}\theta$ at $\hat{\theta}^{0}$. A re-expression of the coordinate variable can make these increments equal and produce a location model; the product of these location models is a full location model $g(y_{1}-\theta,\ldots,y_{n}-\theta)$ that precisely agrees with the initial model to first derivative at $\theta=\hat{\theta}^{0}$ (see [20, 1]). This location model then, in turn, determines a full location ancillary with configuration $a(y)=(y_{1}-\bar{y},\ldots,y_{n}-\bar{y})$. For the original model, this configuration statistic has first-derivative ancillarity at $\theta=\hat{\theta}^{0}$ and is thus a first order approximate ancillary; the tangent to the contour at the data point is just the vector $v(\hat{x}^{0};\hat{\theta}^{0})$. This contour can also be modified to give second order ancillarity.

In a somewhat different way, the velocity vector $v(y^{0};\theta)$ at the data point $y^{0}$ gives information as to how data change at $y^{0}$ relates to parameter change at various $\theta$ values of interest. This allows us to examine how a sample space direction at the data point relates to estimated $p$-value and local likelihood function shape at various $\theta$ values; this, in turn, leads to quite general default priors for Bayesian analysis (see [21]).

In the presence of a cumulant generating function, the saddle-point method has produced highly accurate third order approximations for density functions (see [9]) and for distribution functions (see [25]). Such approximations are available in the presence of exact ancillaries [2] and extend widely in the presence of approximate ancillaries (see [18]). For third order accuracy, only second order approximate ancillaries are needed, and for such ancillaries, only the tangents to the ancillary contour at the data point are needed (see [18, 19]). With this as our imperative, we develop the second order ancillary for statistical inference.

Tangent vectors to an ancillary at a data point give information, as mentioned above, concerning a location model approximation at the data point. For a scalar parameter, these provide a vector field and integrate quite generally to give a unique approximate ancillary to second order accuracy. The resulting conditional model then provides definitive $p$-values by available theory; see, for example, [22]. For a vector parameter, however, the multiple vector fields may not satisfy the Frobenius conditions for integrability and thus may not define a function.

Under mild conditions, however, we show that such tangent vectors do generate a surface to second order without the Frobenius conditions holding. We show this in several steps. First, we obtain the coordinate quantile functions $y_{i}=y_{i}(x_{i};\theta)$. Second, we Taylor series expand the full vector quantile $y=(y_{1},\ldots,y_{n})$ in terms of the full reference variable $x=(x_{1},\ldots,x_{n})$ and the parameter $\theta=(\theta_{1},\ldots,\theta_{p})$ about data-based values, appropriately re-expressing coordinates and working to second order. Third, we show that this generates a partition with second order ancillary properties and the usual tangent vectors. The seeming need for the full Frobenius conditions is bypassed by finding that two integration routes need not converge to each other, but do remain on the same contour, calculating, of course, to second order.

This construction of an approximate ancillary is illustrated in Section 2 using the familiar example, the Normal-on-the-circle from [13]; see also [8, 3, 20, 16]. The example, of course, does have an exact ancillary and the present procedure gives an approximation to that ancillary. In Section 3, we consider various examples that have exact and approximate ancillaries, and then in Sections 4 and 5, we present the supporting theory. In particular, in Section 4, we develop notation for a $p$-dimensional contour in $\mathbb{R}^{n}$, $A=\{y(x_{0};t)\colon t\in\mathbb{R}^{p}\}$, and use velocity and acceleration vectors to present a Taylor series with respect to $t$. Then, in Section 5, we consider a regular statistical model with asymptotic properties and use the notation from Section 4 to develop the second order ancillary contour through an observed data point $y^{0}$. The re-expression of individual coordinates, both of the variable and the parameter, plays an essential role in the development; an asymptotic analysis is used to establish the second order approximate ancillarity. Section 6 contains some discussion.

2 Normal-on-the-circle

We illustrate the second order approximate ancillary with a simple nonlinear regression model, the Normal-on-the-circle example (see [13]). The model has a well-known exact ancillary. Let $y=(y_{1},y_{2})'$ be Normal on the plane with mean $(\rho\cos\theta,\rho\sin\theta)'$ and variance matrix $I/n$, with $\rho$ known. The mean is on a circle of fixed radius $\rho$ and the distribution has rotationally symmetric error with variances $n^{-1}$, suggesting an antecedent sample size $n$ for an asymptotic approach. The full $n$-dimensional case is examined as Example 2 in Section 3 and the present case derives by routine conditioning.

The distribution is a unit probability mass centered at $(\rho\cos\theta,\rho\sin\theta)'$ on the circle with radius $\rho$. If rotations about the origin are applied to $(y_{1},y_{2})'$, then the probability mass rotates about the origin, the mean moves on the circle with radius $\rho$ and an element of probability at a distance $r$ from the origin moves on a circle of radius $r$. The fact that the rotations move probability along circles but not between circles of course implies that probability on any circle about the origin remains constant: probability flows on the ancillary contours. Accordingly, we have that the radial distance $r=(y_{1}^{2}+y_{2}^{2})^{1/2}$ has a fixed $\theta$-free distribution and is thus ancillary.

The statistic $r(y)$ is the Fisher exact ancillary for this problem and Fisher recommended that inference be based on the conditional model, given the observed ancillary contour. This conditional approach has a long but uneven history; [17] provides an overview and [23] offers links with asymptotic theory. We develop the approximate second order ancillary and examine how it relates to the Fisher exact ancillary.

The model for the Normal-on-the-circle has independent coordinates, so we can invert the coordinate distribution functions and obtain the vector quantile function,

\begin{pmatrix}y_{1}\\ y_{2}\end{pmatrix}=\rho\begin{pmatrix}\cos\theta\\ \sin\theta\end{pmatrix}+\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix},

where the $x_{i}=\Phi^{-1}(u_{i})/n^{1/2}$ are independent Normal variables with means 0 and variances $n^{-1}$, and $\Phi$ is the standard Normal distribution function. We now examine the second order ancillary contour $A^{0}$ given by (2).

Let $y^{0}=(y_{1}^{0},y_{2}^{0})'=(r^{0}\cos a^{0},r^{0}\sin a^{0})'$ be the observed data point, where $r^{0}$, $a^{0}$ are the corresponding polar coordinates; see Figure 1. For this simple nonlinear Normal regression model, $\hat{\theta}^{0}=a^{0}$ is the angular direction of the data point. The fitted reference value $\hat{x}^{0}$ is the solution of the equation $y^{0}=y(x;\hat{\theta}^{0})=\rho(\cos a^{0},\sin a^{0})'+(x_{1},x_{2})'$, giving $\hat{x}^{0}=(\hat{x}_{1}^{0},\hat{x}_{2}^{0})'=y^{0}-\rho(\cos a^{0},\sin a^{0})'=y^{0}-\hat{y}^{0}$, where $\hat{y}^{0}=\rho(\cos a^{0},\sin a^{0})'$ is the fitted value, which is the projection of the data point $y^{0}$ onto the circle. The observed ancillary contour is then

A^{0}=\left\{\rho\begin{pmatrix}\cos\theta\\ \sin\theta\end{pmatrix}+y^{0}-\hat{y}^{0}\colon \theta\mbox{ near }a^{0}\right\}=y^{0}-\hat{y}^{0}+\left\{\rho\begin{pmatrix}\cos(a^{0}+t)\\ \sin(a^{0}+t)\end{pmatrix}\colon t\mbox{ near }0\right\}.

Figure 1 shows that $A^{0}=\{y(\hat{x}^{0};t)\colon t\mbox{ near }a^{0}\}$ is a translation, as shown by the arrow, of a segment of the solution contour $S$ from the fitted point $\hat{y}^{0}$ to the data point $y^{0}$.

Figure 1: The regression surface $S$ is a circle of radius $R$; the local contour of the approximate ancillary $A^{0}$ is a circle segment of $S$ moved from $\hat{y}^{0}$ to $y^{0}$; the exact ancillary contour is a circle segment of radius $r^{0}$ through the data point $y^{0}$.

The second order ancillary segment at $y^{0}$ does not lie on the exact ancillary surface $r(y_{1},y_{2})=r^{0}$. The tangent vector at the data point $y^{0}$ is $v=(\partial y/\partial t)|_{t=a^{0}}=(-\rho\sin a^{0},\rho\cos a^{0})'$, which is the same as the tangent vector for the exact ancillary and which agrees with the usual tangent vector $v$ (see [22]). However, the acceleration vector is $w=(\partial^{2}y/\partial t^{2})|_{t=a^{0}}=(-\rho\cos a^{0},-\rho\sin a^{0})'$, which differs slightly from that for the exact ancillary: the approximation has radius of curvature $\rho$, as opposed to $r^{0}$ for the exact, but the difference in moderate deviations about $y^{0}$ can be seen to be small and is second order.

The second order ancillary contour through $y^{0}$ can also be expressed in a Taylor series as $A^{0}=\{y^{0}+tv+wt^{2}/2\colon t\mbox{ near }0\}$; here, the acceleration vector $w$ is orthogonal to the velocity vector $v$. Similar results hold in wide generality when $y$ has dimension $n$ and $\theta$ has dimension $p$; further examples are discussed in the next section and the general development follows in Sections 4 and 5.
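The following numerical sketch, with hypothetical values $\rho=2$ and a data point off the circle, reproduces the Section 2 quantities: the angular maximum likelihood value, the fitted reference value, the contour $A^{0}$ and the orthogonality of $w$ to $v$.

```python
import numpy as np

rho = 2.0
y0 = np.array([1.9, 1.1])                        # hypothetical data point
r0 = np.hypot(y0[0], y0[1])                      # radial distance
a0 = np.arctan2(y0[1], y0[0])                    # theta_hat0 = a0
y_fit0 = rho * np.array([np.cos(a0), np.sin(a0)])
x_hat0 = y0 - y_fit0                             # fitted reference value

def A0(t):
    """Point of the second order ancillary contour at parameter a0 + t."""
    return x_hat0 + rho * np.array([np.cos(a0 + t), np.sin(a0 + t)])

v = rho * np.array([-np.sin(a0), np.cos(a0)])    # tangent at the data point
w = -rho * np.array([np.cos(a0), np.sin(a0)])    # acceleration at the data point
assert np.allclose(A0(0.0), y0)                  # contour passes through y0
assert np.isclose(v @ w, 0.0)                    # w orthogonal to v
# The approximate contour has radius of curvature rho; the exact ancillary
# contour has radius r0: a second order difference in moderate deviations.
```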

3 Some examples

Example 1 (Nonlinear regression, $\sigma_{0}$ known).

Consider a nonlinear regression model $y=\eta(\theta)+x$ in $\mathbb{R}^{n}$, where the error $x$ is $\operatorname{Normal}(0;\sigma^{2}_{0}I)$ and the regression or solution surface $S=\{\eta(\theta)\}$ is smooth with parameter $\theta$ of dimension, say, $r$. For a given data point $y^{0}$, let $\hat{\theta}^{0}$ be the maximum likelihood value. The fitted value is then $\hat{y}^{0}=\eta(\hat{\theta}^{0})$ and the fitted reference value is $\hat{x}^{0}=y^{0}-\eta(\hat{\theta}^{0})=y^{0}-\hat{y}^{0}$. The model as presented is already in quantile form; accordingly, $V=(\partial\eta/\partial\theta)|_{\hat{\theta}^{0}}$ and $W=(\partial^{2}\eta/\partial\theta^{2})|_{\hat{\theta}^{0}}$ are the observed velocity and acceleration arrays, respectively, and the approximate ancillary contour at the data point $y^{0}$ is $A^{0}=\{y^{0}+Vt+t'Wt/2+\cdots\colon t\in\mathbb{R}^{r}\}$, which is just a $y^{0}-\hat{y}^{0}$ translation of the solution surface $S=\{\hat{y}^{0}+Vt+t'Wt/2+\cdots\colon t\in\mathbb{R}^{r}\}$. For this, we use matrix multiplication to linearly combine the elements in the arrays $V$ and $W$.
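A sketch of Example 1 for a hypothetical scalar-parameter surface $\eta_{i}(\theta)=\exp(-\theta c_{i})$ at design points $c_{i}$, with $V$ and $W$ computed by central differences; the grid-search maximum likelihood step is purely illustrative.

```python
import numpy as np

c = np.array([0.5, 1.0, 1.5, 2.0])               # hypothetical design points

def eta(t):
    """Solution surface eta(theta) with coordinates exp(-theta * c_i)."""
    return np.exp(-t * c)

y0 = np.array([0.55, 0.42, 0.20, 0.16])          # hypothetical data point

# Maximum likelihood value: least squares, here by a crude grid search.
grid = np.linspace(0.1, 3.0, 2901)
theta_hat0 = grid[np.argmin([np.sum((y0 - eta(t)) ** 2) for t in grid])]

h = 1e-5                                         # finite-difference step
V = (eta(theta_hat0 + h) - eta(theta_hat0 - h)) / (2 * h)              # velocity
W = (eta(theta_hat0 + h) - 2 * eta(theta_hat0) + eta(theta_hat0 - h)) / h**2

def A0(t):
    """Translated solution surface: y0 + V*t + W*t^2/2 + ..."""
    return y0 + V * t + 0.5 * W * t**2
```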

Example 2 (Nonlinear regression, circle case).

As a special case, consider the regression model where the solution surface $S=\{\eta(\theta)\}$ is a circle of radius $\rho$ about the origin; this is the full-dimension version of the example in Section 2. For notation, let $C=(c_{1},\ldots,c_{n})$ be an orthonormal basis with vectors $c_{1},c_{2}$ defining the plane that includes $S$. Then $\tilde{y}=C'y$ provides rotated coordinates and $\tilde{\eta}(\theta)=C'\eta(\theta)=(\rho\cos\theta,\rho\sin\theta,0,\ldots,0)$ gives the solution surface in the new coordinates.

There is an exact ancillary given by $r=(\tilde{y}_{1}^{2}+\tilde{y}_{2}^{2})^{1/2}$ and $(\tilde{y}_{3},\ldots,\tilde{y}_{n})$; the corresponding ancillary contour through $\tilde{y}^{0}$ is a circle of radius $r^{0}$ through the data point $y^{0}$ and lying in the plane $\tilde{y}_{3}=\tilde{y}_{3}^{0},\ldots,\tilde{y}_{n}=\tilde{y}_{n}^{0}$. The approximate ancillary contour is a segment of a circle of radius $\rho$ through the data point $y^{0}$ and lying in the same plane. This directly agrees with the simple Normal-on-the-circle example of Section 2.

For the nonlinear regression model, Severini ([29], page 216) proposes an approximate ancillary by using the obvious pivot $y-\eta(\theta)$ with the plug-in maximum likelihood value $\theta=\hat{\theta}$; we show that this gives a statistic $A(y)=y-\eta(\hat{\theta})$ that can be misleading. In the rotated coordinates, the statistic $A(y)$ becomes

\tilde{A}(y)=(r\cos\hat{\theta},r\sin\hat{\theta},\tilde{y}_{3},\ldots,\tilde{y}_{n})'-(\rho\cos\hat{\theta},\rho\sin\hat{\theta},0,\ldots,0)'=\{(r-\rho)\cos\hat{\theta},(r-\rho)\sin\hat{\theta},\tilde{y}_{3},\ldots,\tilde{y}_{n}\}',

which has observed value $\tilde{A}^{0}=\{(r^{0}-\rho)\cos\hat{\theta}^{0},(r^{0}-\rho)\sin\hat{\theta}^{0},\tilde{y}_{3}^{0},\ldots,\tilde{y}_{n}^{0}\}'$.

If we now set the proposed ancillary equal to its observed value, $\tilde{A}=\tilde{A}^{0}$, we obtain $\tilde{y}_{3}=\tilde{y}_{3}^{0},\ldots,\tilde{y}_{n}=\tilde{y}_{n}^{0}$ and also obtain $r=r^{0}$ and $\hat{\theta}=\hat{\theta}^{0}$. Together, these say that $y=y^{0}$, and thus that the proposed approximate ancillary is exactly equivalent to the original response variable, which is clearly not ancillary. Severini does note “…it does not necessarily follow that $a$ is a second-order ancillary statistic since the dimension of $a$ increases with $n$.” The consequences of using the plug-in $\hat{\theta}$ in the pivot are somewhat more serious: the plug-in pivotal approach for this example does not give an approximate ancillary.

Example 3 (Nonlinear regression, $\sigma$ unknown).

Consider a nonlinear regression model $y=\eta(\theta)+\sigma z$ in $\mathbb{R}^{n}$, where the error $z$ is $\operatorname{Normal}(0;I)$ and the solution surface $S=\{\eta(\theta)\}$ is smooth with surface dimension $r$ (see [24]). Let $y^{0}$ be the observed data point and $(\hat{\theta}^{0},\hat{\sigma}^{0})$ be the corresponding maximum likelihood value. We then have the fitted regression $\hat{y}^{0}$, the fitted residual $\hat{x}^{0}=y^{0}-\hat{y}^{0}$ and the fitted reference value $\hat{z}^{0}=\hat{x}^{0}/\hat{\sigma}^{0}$, which is just the standardized residual.

Simple calculation gives the velocity and acceleration arrays

\bar{V}=(V\ \hat{z}^{0}),\qquad\bar{W}=\begin{pmatrix}W&0\\ 0&0\end{pmatrix}

using $V$ and $W$ from Example 1. The approximate ancillary contour at the data point $y^{0}$ is then

\tilde{A}^{0}=\{y^{0}+Vt+t'Wt/2+\cdots+s\hat{z}^{0}\colon t\in\mathbb{R}^{r}, s\in\mathbb{R}^{+}\}=\{\eta(t)+s\hat{z}^{0}\colon t\in\mathbb{R}^{r}, s\in\mathbb{R}^{+}\}=A^{0}+\mathcal{L}^{+}(\hat{z}^{0}),

where $A^{0}$ is as in Example 1. This is the solution surface from Example 1, translated from $\hat{y}^{0}$ to $y^{0}$ and then positively radiated in the $\hat{z}^{0}$ direction.
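Continuing the hypothetical setup of the Example 1 sketch, the following forms the Example 3 contour $\{\eta(t)+s\hat{z}^{0}\}$ and checks that the data point sits on it at $(t,s)=(\hat{\theta}^{0},\hat{\sigma}^{0})$.

```python
import numpy as np

c = np.array([0.5, 1.0, 1.5, 2.0])               # hypothetical design points

def eta(t):
    return np.exp(-t * c)

y0 = np.array([0.55, 0.42, 0.20, 0.16])          # hypothetical data point
grid = np.linspace(0.1, 3.0, 2901)
theta_hat0 = grid[np.argmin([np.sum((y0 - eta(t)) ** 2) for t in grid])]
x_hat0 = y0 - eta(theta_hat0)                    # fitted residual
sigma_hat0 = np.sqrt(np.mean(x_hat0 ** 2))       # Normal MLE of sigma
z_hat0 = x_hat0 / sigma_hat0                     # standardized residual

def contour_point(t, s):
    """Point eta(t) + s*z_hat0 of the Example 3 contour, s > 0."""
    return eta(t) + s * z_hat0

# The data point lies on the contour at (t, s) = (theta_hat0, sigma_hat0):
assert np.allclose(contour_point(theta_hat0, sigma_hat0), y0)
```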

Example 4 (The transformation model).

The transformation model (see, e.g., [14]) provides a paradigm for exact ancillary conditioning. A typical continuous transformation model for a variable $y=\theta z$ has parameter $\theta$ in a smooth transformation group $G$ that operates on an $n$-dimensional sample space for $y$; for illustration, we assume here that the group acts coordinate by coordinate. The natural quantile function for the $i$th coordinate is $y_{i}=\theta z_{i}$, where $z_{i}$ is a coordinate reference variable with a fixed distribution; the linear regression model with known and unknown error scaling are simple examples. With observed data point $y^{0}$, let $\hat{\theta}^{0}$ be the maximum likelihood value and $\hat{z}^{0}$ the corresponding reference value satisfying $y^{0}=\hat{\theta}^{0}\hat{z}^{0}$. The second order approximate ancillary is then given as $\{\theta\hat{z}^{0}\}$, which is just the usual transformation model orbit $G\hat{z}^{0}$. If the group does not apply separately to independent coordinates, then the present quantile approach may not be immediately applicable; this raises issues for the construction of the trajectories and also for the construction of default priors (see, e.g., [4]). Some discussion of this in connection with curved parameters will be reported separately. A modification achieved by adding structure to the transformation model is given by the structural model [14]. This takes the reference distribution for $z$ as the primary probability space for the model and examines what events on that space are identifiable from an observed response; we do not address here this alternative modelling approach.

Example 5 (The inverted Cauchy).

Consider a location-scale model centered at $\mu$ and scaled by $\sigma$, with error given by the standard Cauchy; this gives the statistical model

f(y;\mu,\sigma)=\frac{1}{\pi\sigma\{1+(y-\mu)^{2}/\sigma^{2}\}}

on the real line. For the sampling version, this location-scale model is an example of the transformation model discussed in the preceding Example 4 and the long-accepted ancillary contour is the half-plane (4).

McCullagh [27] uses linear fractional transformation results that show that the inversion $\tilde{y}=1/y$ takes the Cauchy$(\mu,\sigma)$ model for $y$ into a Cauchy$(\tilde{\mu},\tilde{\sigma})$ model for $\tilde{y}$, where $\tilde{\mu}=\mu/(\mu^{2}+\sigma^{2})$ and $\tilde{\sigma}=\sigma/(\mu^{2}+\sigma^{2})$. He then notes that the usual location-scale ancillary for the derived model does not map back to give the usual location-scale ancillary on the initial space and would thus typically give different inference results for the parameters; he indicates “not that conditioning is a bad idea, but that the usual mathematical formulation is in some respects ad hoc and not completely satisfactory.”

We illustrate this for $n=2$ in Figure 2. For a data point in the upper-left portion of the plane in part (b) for the inverted Cauchy, the observed ancillary contour is shown as a shaded area; it is a half-plane subtended by $\mathcal{L}(1)$. When this contour is mapped back to the initial plane in part (a), the contour becomes three disconnected segments with lightly shaded edges indicating the boundaries; in particular, the line with marks 1, 2, 3, 4, 5, 6 becomes three distinct curves, again with corresponding marks 1, 2, 3, 4, 5, 6, but two points $(0,1)$ and $(-1,0)$ on the line have no back images. Indeed, the same type of singularity, where a point with a zero coordinate cannot be mapped back, happens for any sample size $n$. Thus the proposed sample space is not one-to-one continuously equivalent to the given sample space: points are left out and points are created. And the quantile function used on the proposed sample space for constructing the ancillary does not exist on the given sample space: indeed, it is not defined at certain points and is thus not continuous.

Figure 2: (a) The location-scale Cauchy model for the inverted $\tilde{y}_{1}=1/y_{1}$, $\tilde{y}_{2}=1/y_{2}$ has an ancillary contour given by the shaded area in (b). When interpreted back for the original $(y_{1},y_{2})$, the connected ancillary contour becomes three unconnected regions, shown in (a). A line $\tilde{y}_{2}=\tilde{y}_{1}+1$ on the contour in (b) is mapped back to three curved segments in (a) and numbered points in sequence on the line are mapped back to the numbered points on the unconnected ancillary contour.

The Cauchy inversion about 0 could equally be about an arbitrary point, say $a$, on the real line and would lead to a corresponding ancillary. We would thus have a wealth of competing ancillaries and a corresponding wealth of inference procedures, and all would have the same lack of one-to-one continuous equivalence to the initial sample space. While Fisher seems not to have explicitly specified continuity as a needed ingredient for typical ancillarity, it also seems unlikely that he would have envisaged ancillarity without continuity. If continuity is included in the prescription for developing the ancillary, then the proposed ancillary for the inverted Cauchy would not arise.
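A small numerical sketch of the inversion (the line and sample values follow Figure 2; the code itself is illustrative): the parameter map is an involution, and mapping the line $\tilde{y}_{2}=\tilde{y}_{1}+1$ back through $y=1/\tilde{y}$ breaks it at the zero coordinates into three separate curved segments.

```python
import numpy as np

def invert_params(mu, sigma):
    """Parameter map of y -> 1/y: Cauchy(mu, sigma) -> Cauchy(mu~, sigma~)."""
    d = mu ** 2 + sigma ** 2
    return mu / d, sigma / d

mu_t, sigma_t = invert_params(1.0, 2.0)
assert np.allclose(invert_params(mu_t, sigma_t), (1.0, 2.0))   # involution

# Map the line y2~ = y1~ + 1 back to (y1, y2) = (1/y1~, 1/y2~):
t = np.linspace(-3.0, 3.0, 13)
y1t, y2t = t, t + 1.0
with np.errstate(divide="ignore"):
    y1, y2 = 1.0 / y1t, 1.0 / y2t
# The points with y1~ = 0 or y2~ = 0 (here t = 0 and t = -1) have no back
# image, so the connected line returns as three disconnected segments.
```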

Bayesian statistics involves full conditioning on the observed data and familiar frequentist inference avoids, perhaps even evades, conditioning. Ancillarity, however, represents an intermediate or partial conditioning and, as such, offers a partial bridging of the two extreme approaches to inference.

4 An asymptotic statistic

For the Normal-on-the-circle example, the exact ancillary contour was given as the observed contour of the radial distance $r(y_{1},y_{2})$: the contour is described implicitly. By contrast, the approximate ancillary was given as the trajectory of a point $y(\hat{x}^{0};t)$ under change of an index or mathematical parameter $t$: the contour is described explicitly. For the general context, the first approach has serious difficulties, as found even with nonlinear regression, and these difficulties arise with an approximate statistic taking an approximate value; see Example 2. Accordingly, we now turn to the second, the explicit approach, and develop the needed notation and expansions.

Consider a smooth one-dimensional contour through some point $y_{0}$. To describe such a contour in the implicit manner requires $n-1$ complementary statistics. By contrast, for the explicit method, we write $y=y(t)$, which maps a scalar $t$ into the sample space $\mathbb{R}^{n}$. More generally, for a $p$-dimensional contour, we have $y=y(t)$ in $\mathbb{R}^{n}$, where $t$ has dimension $p$ and the mapping is again into $\mathbb{R}^{n}$.

For such a contour, we define the row array $V(t)=(\mathrm{d}/\mathrm{d}t')y(t)=\{v_{1}(t),\ldots,v_{p}(t)\}$ of tangent vectors, where the vector $v_{\alpha}(t)=(\mathrm{d}/\mathrm{d}t_{\alpha})y(t)$ gives the direction or gradient of $y(t)$ with respect to change in a coordinate $t_{\alpha}$. We are interested in such a contour near a particular point $y_{0}=y(t_{0})$; for convenience, we often choose $y_{0}$ to be the observed data point $y^{0}$ and the $t_{0}$ to be centered so that $t_{0}=0$. In particular, the array $V=V(t_{0})$ of tangent vectors at a particular data point $y_{0}$ will be of special interest. The vectors in $V$ generate a tangent plane $\mathcal{L}(V)$ at the point $y_{0}$ and this plane provides a linear approximation to the contour. Differential geometry gives length properties of such vectors as the first fundamental form:

V'V=\begin{pmatrix}v_{1}\cdot v_{1}&\cdots&v_{1}\cdot v_{p}\\ \vdots&&\vdots\\ v_{p}\cdot v_{1}&\cdots&v_{p}\cdot v_{p}\end{pmatrix}=\begin{pmatrix}v_{1}'v_{1}&\cdots&v_{1}'v_{p}\\ \vdots&&\vdots\\ v_{p}'v_{1}&\cdots&v_{p}'v_{p}\end{pmatrix};

this records the matrix of inner products for the vectors $V$ as inherited from the inner product on $\mathbb{R}^{n}$. A change in the parameterization $\tilde{t}=\tilde{t}(t)$ of the contour will give different tangent vectors $V$, the same tangent plane $\mathcal{L}(V)$ and a different, but corresponding, first fundamental form.

Now, consider the derivatives of the tangents $V(t)$ at $t_{0}$:

W=\frac{\mathrm{d}}{\mathrm{d}t'}V(t)\bigg|_{t=t_{0}}=\begin{pmatrix}w_{11}&\cdots&w_{1p}\\ \vdots&&\vdots\\ w_{p1}&\cdots&w_{pp}\end{pmatrix},

where $w_{\alpha\alpha'}=(\partial^{2}/\partial t_{\alpha}\,\partial t_{\alpha'})y(t)|_{t=t_{0}}$ is an acceleration or curvature vector relative to coordinates $t_{\alpha}$ and $t_{\alpha'}$ at $t_{0}$. We regard the array $W$ as a $p\times p$ array of vectors in $\mathbb{R}^{n}$. We could have used tensor notation, but the approach here has the advantage that we can write the second degree Taylor expansion of $y(t)$ at $t_{0}=0$ as

y(t)=y_{0}+Vt+t'Wt/2+\cdots, \qquad (5)

which uses matrix multiplication for linearly combining the vectors in the arrays $V$ and $W$. Some important characteristics of the quadratic term in (5) are obtained by orthogonalizing the elements of $W$ to the tangent plane $\mathcal{L}(V)$, to give residuals

\tilde{w}_{\alpha\alpha'}=\{I-V(V'V)^{-1}V'\}w_{\alpha\alpha'}=w_{\alpha\alpha'}-Pw_{\alpha\alpha'};

this uses the regression analysis projection matrix $P=V(V'V)^{-1}V'$. The full array $\tilde{W}$ of such vectors $\tilde{w}_{\alpha\alpha'}$ is then written $\tilde{W}=W-PW=W-VH$, where $H=(h_{\alpha\alpha'})$ is a $p\times p$ array of elements $h_{\alpha\alpha'}=(V'V)^{-1}V'w_{\alpha\alpha'}$; an element $h_{\alpha\alpha'}$ is a $p\times 1$ vector, which records the regression coefficients of $w_{\alpha\alpha'}$ on the vectors $V$.

The array $\tilde{W}$ of such orthogonalized curvature vectors $\tilde{w}$ is the second fundamental form for the contour at the expansion point. Consider the Taylor expansion (5) and substitute $W=\tilde{W}+VH$:

y(t)=y_{0}+Vt+t'(\tilde{W}+VH)t/2+\cdots=y_{0}+V(t+t'Ht/2)+t'\tilde{W}t/2+\cdots,

where we note that $t$ and $t'$ are being applied to the $p\times p$ arrays $H$ and $\tilde{W}$ by matrix multiplication, but the elements are $p\times 1$ vectors for $H$ and $n\times 1$ vectors for $\tilde{W}$, and these are being combined linearly. We can then write $y(t)=y_{0}+V\tilde{t}+\tilde{t}'\tilde{W}\tilde{t}/2+\cdots$ and thus have the contour expressed in terms of orthogonal curvature vectors $\tilde{w}$ with the reparameterization $\tilde{t}=t+t'Ht/2+\cdots$. When we use this in the asymptotic setting, we will have standardized coordinates and the reparameterization will take the form $\tilde{t}=t+t'Ht/2n^{1/2}+\cdots$.
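A linear-algebra sketch of these constructions with randomly generated $V$ and $W$ (the dimensions and values are hypothetical): the first fundamental form $V'V$, the projection $P$ onto $\mathcal{L}(V)$ and the orthogonalized curvature array $\tilde{W}=W-PW$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 2
V = rng.standard_normal((n, p))                  # tangent vectors, columns of V
W = rng.standard_normal((p, p, n))               # curvature vectors w_{aa'} in R^n
W = (W + W.transpose(1, 0, 2)) / 2               # symmetric in (alpha, alpha')

first_form = V.T @ V                             # matrix of inner products V'V
P = V @ np.linalg.solve(first_form, V.T)         # projection onto L(V)

# Second fundamental form: each w_{aa'} orthogonalized to the tangent plane.
W_tilde = np.einsum("ij,abj->abi", np.eye(n) - P, W)
assert np.allclose(np.einsum("abi,ik->abk", W_tilde, V), 0.0)
```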

5 Verifying second order ancillarity

We have used the Normal-on-the-circle example to illustrate the proposed second order ancillary contour $\{y(\hat{x}^{0};t)\}$. Now, generally, let $f(y;\theta)$ be a statistical model with regularity and asymptotic properties as the data dimension $n$ increases: we assume that the vector quantile $y(x;\theta)$ has independent scalar coordinates and is smooth in both the reference variable $x$ and the parameter $\theta$; more general conditions will be considered subsequently. For the verification, we use a Taylor expansion of the quantile function in terms of both $x$ and $\theta$, and work from theory developed in [5] and [1]. The first steps involve the re-expression of individual coordinates of $y$, $x$ and $\theta$, and show that the proposed contours establish a partition on the sample space; the subsequent steps establish the ancillarity of the contours.


  • (1a)

    Standardizing the coordinates. Consider the statistical model in moderate deviations about $(y^{0},\hat{\theta}^{0})$ to order $\mathrm{O}(n^{-1})$. For this, we work with coordinate departures in units scaled by $n^{-1/2}$. Thus, for the $i$th coordinate, we write $y_{i}=\hat{y}_{i}^{0}+\tilde{y}_{i}/n^{1/2}$, $x_{i}=\hat{x}_{i}^{0}+\tilde{x}_{i}/n^{1/2}$ and $\theta_{\alpha}=\hat{\theta}^{0}_{\alpha}+\tilde{\theta}_{\alpha}/n^{1/2}$; and for a modified $i$th quantile coordinate $\tilde{y}_{i}=\tilde{y}_{i}(\tilde{x}_{i},\tilde{\theta})$, we Taylor expand to the second order, omit the subscripts and tildes for temporary clarity, and obtain $y=x+V\theta+(ax^{2}+2xB\theta+\theta'W\theta)/2n^{1/2}$, where $V$ is the $1\times p$ gradient of $y$ with respect to $\theta$, $B$ is the $1\times p$ cross Hessian with respect to $x$ and $\theta$, $W$ is the $p\times p$ Hessian with respect to $\theta$ and vector–matrix multiplication is used for combining $\theta$ with the arrays.

  • (1b)

    Re-expressing coordinates for a nicer expansion. We next re-express an $x$ coordinate, writing $\tilde{x}=x+ax^{2}/2n^{1/2}$, and then again omit the tildes to obtain the simpler expansion

    y=x+V\theta+(2xB\theta+\theta'W\theta)/2n^{1/2}+\cdots, \qquad (6)

    to order $\mathrm{O}(n^{-1})$ for the modified $y$, $x$ and $\theta$, now in bounded regions about 0.

  • (1c)

    Full response vector expansion. For the vector response $y=(y_{1},\ldots,y_{n})$ in quantile form, we can compound the preceding coordinate expansions and write $y=x+V\theta+(2x{:}B\theta+\theta'W\theta)/2n^{1/2}+\cdots$, where $y$ and $x$ are now vectors in $\mathbb{R}^{n}$, $V=(v_{1},\ldots,v_{p})=(v_{\alpha})$ and $B=(b_{1},\ldots,b_{p})=(b_{\alpha})$ are $1\times p$ arrays of vectors in $\mathbb{R}^{n}$, $W=(w_{\alpha\alpha'})$ is a $p\times p$ array of vectors in $\mathbb{R}^{n}$ and $x{:}B$ is a $1\times p$ array of vectors $x{:}b$, where the $i$th element of the vector $x{:}b$ is the product $x_{i}b_{i}$ of the $i$th elements of the vectors $x$ and $b$.

  • (1d)

    Eliminate the cross Hessian: scalar parameter case. The form of a Taylor series depends heavily on how the function and the component variables are expressed. For a particular coordinate of (6) in (1b), if we re-express the coordinate $y=\tilde{y}+c\tilde{y}^{2}/2n^{1/2}$ in terms of a modified $\tilde{y}$, substitute it in (6) and then, for notational ease, omit the tildes, we obtain $y+c(x+v\theta)^{2}/2n^{1/2}=x+v\theta+(2xb\theta+\theta^{2}w)/2n^{1/2}$. To simplify this, we take the $x^{2}$ term over to the right-hand side and combine it with $x$ to give a re-expressed $x$, take the $\theta x$ term over to the right-hand side and choose $c$ so that $cv=b$ and, finally, combine the $\theta^{2}$ terms giving a new $w$. We then obtain $y(x;\theta)=x+v\theta+\theta^{2}w/2n^{1/2}$ with the cross Hessian removed; for this, if $v=0$, we ignore the coordinate as being ineffective for $\theta$. For the full response accordingly, we then have $y(x;\theta)=x+v\theta+w\theta^{2}/2n^{1/2}+\cdots$ to the second order in terms of re-expressed coordinates $x$ and $y$. The trajectory of a point $x$ is $A(x)=\{y(x;t)\}=\{x+vt+wt^{2}/2n^{1/2}+\cdots\}$ to the second order as $t$ varies.

  • (1e)

    Scalar case: trajectories form a partition. In the standardized coordinates, the initial data point is $y^{0}=0$ with corresponding maximum likelihood value $\hat{\theta}^{0}=0$; the corresponding trajectory is $A(0)=\{vt+wt^{2}/2n^{1/2}+\cdots\}$. For a general reference value $x$, but with $\hat{\theta}(x)=0$, the trajectory is $A(x)=\{x+vt+wt^{2}/2n^{1/2}+\cdots\}=x+A(0)$. The sets $\{A(x)\}$ with $\hat{\theta}(x)=0$ are all translates of $A(0)$ and thus form a partition.

    Consider an initial point $x_{0}$ with maximum likelihood value $\hat{\theta}(x_{0})=0$ and let $y_{1}=x_{0}+vt_{1}+wt_{1}^{2}/2n^{1/2}+\cdots$ be a point in the set $A(x_{0})=x_{0}+A(0)$. We calculate the trajectory $A(y_{1})$ of $y_{1}$ and show that it lies on $A(x_{0})$; the partition property then follows and the related Jacobian effect is constant. From the quantile function $y=x+v\theta+w\theta^{2}/2n^{1/2}$, we see that the $y$ distribution is a $\theta$-based translation of the reference distribution described by $x$. Thus the likelihood at $y_{1}$ is $l(y_{1}-v\theta-w\theta^{2}/2n^{1/2})$, in terms of the log density $l(x)$ near $x_{0}$. It follows that $y_{1}=x_{0}+vt_{1}+wt_{1}^{2}/2n^{1/2}$ has maximum likelihood value $\hat{\theta}(y_{1})=t_{1}$.

    Now, for the trajectory about $y_{1}$, we calculate derivatives

    \frac{\mathrm{d}y}{\mathrm{d}\theta}=v+w\theta/n^{1/2},\qquad\frac{\mathrm{d}^{2}y}{\mathrm{d}\theta^{2}}=w/n^{1/2},

    which, at the point $y_{1}=x_{0}+vt_{1}+wt^{2}_{1}/2n^{1/2}$ with $\theta=\hat{\theta}(y_{1})$, gives

    V(y_{1})=v+wt_{1}/n^{1/2},\qquad W(y_{1})=w/n^{1/2},

    to order $\mathrm{O}(n^{-1})$. We thus obtain the trajectory of the point $y_{1}$:

    A(y_{1})=\{x_{0}+vt_{1}+wt_{1}^{2}/2n^{1/2}+(v+wt_{1}/n^{1/2})t+wt^{2}/2n^{1/2}\}=\{x_{0}+vT+wT^{2}/2n^{1/2}\}

    under variation in $t$. However, with $T=t_{1}+t$, we have just an arbitrary point on the initial trajectory. Thus the mapping $y\to A(y)$ is well defined and the trajectories generate a partition, to second order in moderate deviations in $\mathbb{R}^{n}$; a numerical check of this reproduction property is sketched after this list. In the standardized coordinates, the Jacobian effect is constant.

  • (1f)

    Vector case: trajectories form a partition. For the vector parameter case, we again use standardized coordinates and choose a parameterization that gives orthogonal curvature vectors $w$ at the observed data point $y^{0}$. We then examine scalar parameter change on some line through $\hat{\theta}(y^{0})$. For this, the results above give a trajectory and any point on it reproduces the trajectory under that scalar parameter. Orthogonality ensures that the vector maximum likelihood value is on the same line just considered. These trajectories are, of course, part of the surface defined by $\{Vt+t'Wt/2n^{1/2}\}$. We then use the partition property of the individual trajectories as these apply perpendicular to the surface; the surfaces are thus part of a partition. We can then write the trajectory of a point $x$ as a set

    A(x)=\{x+Vt+t'Wt/2n^{1/2}+\cdots\colon t\}=x+A(0) \qquad (7)

    in a partition to the second order in moderate deviations.

  • (2a)

    Observed information standardization. With moderate regularity, and following [18] and [23], we have a limiting Normal distribution conditionally on $y^{0}+\mathcal{L}(V)$. We then rescale the parameter at $\hat{\theta}^{0}$ to give identity observed information and thus an identity variance matrix for the Normal distribution to second order. We also have a limiting Normal distribution conditionally on $y^{0}+\mathcal{L}(V,W)$; for this, we linearly modify the vectors in $W$ by rescaling and regressing on $\mathcal{L}(V)$ to give distributional orthogonality to $\hat{\theta}$ and identity conditional variance matrix to second order.

  • (2b)

    The trajectories are ancillary: first derivative parameter change. We saw in the preceding section that key local properties of a statistical model were summarized by the tangent vectors $V$ and the curvature vectors $W$, and that the latter can, to advantage, be taken to be orthogonal to the tangent vectors. These vectors give local coordinates for the model and can be replaced by an appropriate subset if linear dependencies are present.

    First, consider the conditional model given the directions corresponding to the span $y^{0}+\mathcal{L}\{V,W\}$. From the ancillary expansion (7), we have that change of $\theta$ to the second order moves points within the linear space $y^{0}+\mathcal{L}\{V,W\}$; accordingly, this conditioning is ancillary. Then, consider the further conditioning to an alleged ancillary contour, as described by (7). Also, let $y_{0}$ be a typical point having $\hat{\theta}(y_{0})=\hat{\theta}^{0}$ as the corresponding maximum likelihood value; $y_{0}$ is thus on the observed maximum likelihood contour.

    Now, consider a rotationally symmetric Normal distribution on the $(x,y)$ plane with mean $\theta$ on the $x$ axis and let $a=y+cx^{2}/2$ be linear in $y$ with a quadratic adjustment with respect to $x$. Then $a=a(x,y)$ is first-derivative ancillary at $\theta=0$. For this, we assume, without loss of generality, that the standard deviations are unity. The marginal density for $a$ is then

    f(a;\theta)=\int_{-\infty}^{\infty}\phi(x-\theta)\phi(a-cx^{2}/2)\,\mathrm{d}x,

    which is symmetric in $\theta$; thus $(\mathrm{d}/\mathrm{d}\theta)f(a;\theta)|_{\theta=0}=0$, showing that the distribution of $a$ is first-derivative ancillary at $\theta=0$ or, more intuitively, that the amount of probability on a contour of $a$ is first-derivative free of $\theta$ at $\theta=0$; a quadrature check of this appears after this list. Of course, for this, the $y$-spacing between contours of $a$ is constant.

    Now, more generally, consider an asymptotic distribution for $(x,y)$ that is first order rotationally symmetric Normal with mean $\theta$ on the $y=0$ plane; this allows $\mathrm{O}(n^{-1/2})$ cubic contributions. Also, consider an $s$-dimensional variable $a=y+Q(x)/2n^{1/2}$ which is a quadratic adjustment of $y$. The preceding argument extends to show that $a(y)$ is first-derivative ancillary: the two $\mathrm{O}(n^{-1/2})$ effects are zero and the combination is of the next order.

  • (2c)

    Trajectories are ancillary: parameter change in moderate deviations. Now, consider a statistical model $f(y;\theta)$ with data point $y^{0}$ and assume regularity, asymptotics and smoothness of the quantile functions. We examine the parameter trajectory $\{y(\hat{x}^{0};t)\}$ in moderate deviations under change in $t$. From the preceding paragraph, we then have first-derivative ancillarity at $\theta=\hat{\theta}=0$. But this holds for each expansion in moderate deviations and we thus have ancillarity in moderate deviations. The key here has been to use the expansion form about the point that has $\hat{\theta}$ equal to the parameter value being examined.
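Two numerical checks of the steps above, under hypothetical values. First, for step (1e): with the quantile form $y=x+v\theta+w\theta^{2}/2n^{1/2}$, the trajectory about a point $y_{1}$ of $A(x_{0})$ retraces $A(x_{0})$ at the shifted parameter $T=t_{1}+t$.

```python
import numpy as np

n = 25
rng = np.random.default_rng(4)
x0 = rng.standard_normal(5)                      # reference point, theta_hat = 0
v, w = rng.standard_normal(5), rng.standard_normal(5)

def traj(base, vel, t):
    """Trajectory point base + vel*t + w*t^2/(2*sqrt(n))."""
    return base + vel * t + w * t ** 2 / (2 * np.sqrt(n))

t1 = 0.7
y1 = traj(x0, v, t1)                             # a point of A(x0)
v1 = v + w * t1 / np.sqrt(n)                     # tangent of the trajectory at y1

# The point of A(y1) at parameter t is the point of A(x0) at T = t1 + t:
t = 0.4
assert np.allclose(traj(y1, v1, t), traj(x0, v, t1 + t))
```

Second, for step (2b): the marginal density $f(a;\theta)=\int\phi(x-\theta)\phi(a-cx^{2}/2)\,\mathrm{d}x$ is symmetric in $\theta$, so its $\theta$-derivative vanishes at $\theta=0$; the quadrature check below uses hypothetical values of $c$ and $a$.

```python
from scipy.integrate import quad
from scipy.stats import norm

c, a = 0.3, 1.0                      # hypothetical curvature and contour level

def f(theta):
    """Marginal density f(a; theta) of a = y + c*x^2/2, by quadrature."""
    integrand = lambda x: norm.pdf(x - theta) * norm.pdf(a - c * x ** 2 / 2)
    return quad(integrand, -10.0, 10.0)[0]

h = 1e-3
# Symmetry f(theta) = f(-theta) forces (d/dtheta) f(a; theta)|_{theta=0} = 0:
assert abs(f(h) - f(-h)) < 1e-7
```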

6 Discussion


  • (i)

    On ancillarity. The Introduction gave a brief background on ancillary statistics and noted that an ancillary is typically viewed as a statistic with a parameter-free distribution; for some recent discussion, see [17]. Much of the literature is concerned with difficulties that can arise using this third Fisher concept, third after sufficiency and likelihood: that maximizing power given size typically means not conditioning on an ancillary; that shorter on-average confidence intervals typically mean ignoring ancillary conditioning; that techniques that are conditional on an ancillary are often inadmissible; and more. Some of the difficulty may hinge on whether there is merit in the various optimality criteria themselves. However, little in the literature seems focused on the continued evolution and development of this Fisher concept, that is, on what modifications or evolution can continue the exploration initiated in Fisher’s original papers (see [10, 11, 12]).

  • (ii)

    On simulations for the conditional model. The second order ancillary in moderate deviations has contours that form a partition, as shown in the preceding section. In the modified or re-expressed coordinates, the contours are in a location relationship and, correspondingly, the Jacobian effect needed for the conditional distribution is constant. However, in the original coordinates, the Jacobian effect would typically not be constant and its effect would be needed for simulations. If the parameter is scalar, then the effect is available to the second order through the divergence function of a vector field; for some discussion and examples, see [15]. For a vector parameter, generalizations can be implemented, but we do not pursue these here.

  • (iii)

    Marginal or conditional. When sampling from a scalar distribution having variable $y$ and moderate regularity, the familiar central limit theorem gives a limiting Normal distribution for the sample average $\bar{y}$ or sample sum $\sum y_{i}$. From a geometric view, we have probability in $n$-space and contours determined by $\bar{y}$, contours that are planes perpendicular to the $1$-vector. If we then collect the probability on a contour, plus or minus a differential, and deposit it, say, on the intersection of the contour with the span $\mathcal{L}(1)$ of the $1$-vector, then we obtain a limiting Normal distribution on $\mathcal{L}(1)$, using $\bar{y}$ or $\sum y_{i}$ for location on that line.

    A far less familiar Normal limit result applies in the same general context, but with a totally different geometric decomposition. Consider lines parallel to the $1$-vector, the affine cosets of $\mathcal{L}(1)$. On these lines, plus or minus a differential, we then obtain a limiting Normal distribution for location, say $\bar{y}$ or $\sum y_{i}$. In many ways, this conditional, rather than marginal, analysis is much stronger and more useful. The geometry, however, is different, with planes perpendicular to $\mathcal{L}(1)$ being replaced by points on lines parallel to $\mathcal{L}(1)$.

    This generalizes, giving a limiting conditional Normal distribution on almost arbitrary smooth contours in a partition, and it has wide application in recent likelihood inference theory. It also provides third order accuracy rather than the first order accuracy associated with the usual geometry. In a simple sense, planes are replaced by lines or by generalized contours and much stronger, though less familiar, results are obtained. For some background based on Taylor expansions of log-statistical models, see [5, 6] and [1].

Acknowledgements

This research was supported by the Natural Sciences and Engineering Research Council of Canada. The authors wish to express deep appreciation to the referee for very incisive comments. We also offer special thanks to Kexin Ji for many contributions and support with the manuscript and the diagrams.

References

  • [1] Andrews, D.F., Fraser, D.A.S. and Wong, A. (2005). Computation of distribution functions from likelihood information near observed data. J. Statist. Plann. Inference 134 180–193. MR2146092
  • [2] Barndorff-Nielsen, O.E. (1986). Inference on full or partial parameters based on the standardized log likelihood ratio. Biometrika 73 307–322. MR0855891
  • [3] Barndorff-Nielsen, O.E. (1987). Discussion of “Parameter orthogonality and approximate conditional inference.” J. R. Stat. Soc. Ser. B Stat. Methodol. 49 18–20. MR0893334
  • [4] Berger, J.O. and Sun, D. (2008). Objective priors for the bivariate normal model. Ann. Statist. 36 963–982. MR2396821
  • [5] Cakmak, S., Fraser, D.A.S. and Reid, N. (1994). Multivariate asymptotic model: Exponential and location approximations. Util. Math. 46 21–31. MR1301292
  • [6] Cheah, P.K., Fraser, D.A.S. and Reid, N. (1995). Adjustment to likelihood and densities: Calculating significance. J. Statist. Res. 29 1–13. MR1345317
  • [7] Cox, D.R. (1980). Local ancillarity. Biometrika 67 279–286. MR0581725
  • [8] Cox, D.R. and Reid, N. (1987). Parameter orthogonality and approximate conditional inference. J. R. Stat. Soc. Ser. B Stat. Methodol. 49 1–39. MR0893334
  • [9] Daniels, H.E. (1954). Saddle point approximations in statistics. Ann. Math. Statist. 25 631–650. MR0066602
  • [10] Fisher, R.A. (1925). Theory of statistical estimation. Proc. Camb. Phil. Soc. 22 700–725.
  • [11] Fisher, R.A. (1934). Two new properties of mathematical likelihood. Proc. R. Soc. Lond. Ser. A 144 285–307.
  • [12] Fisher, R.A. (1935). The logic of inductive inference. J. R. Stat. Soc. Ser. B Stat. Methodol. 98 39–54.
  • [13] Fisher, R.A. (1956). Statistical Methods and Scientific Inference. Edinburgh: Oliver & Boyd.
  • [14] Fraser, D.A.S. (1979). Inference and Linear Models. New York: McGraw-Hill. MR0535612
  • [15] Fraser, D.A.S. (1993). Directional tests and statistical frames. Statist. Papers 34 213–236. MR1241598
  • [16] Fraser, D.A.S. (2003). Likelihood for component parameters. Biometrika 90 327–339. MR1986650
  • [17] Fraser, D.A.S. (2004). Ancillaries and conditional inference, with discussion. Statist. Sci. 19 333–369. MR2140544
  • [18] Fraser, D.A.S. and Reid, N. (1995). Ancillaries and third order significance. Util. Math. 47 33–53. MR1330888
  • [19] Fraser, D.A.S. and Reid, N. (2001). Ancillary information for statistical inference. In Empirical Bayes and Likelihood Inference (S.E. Ahmed and N. Reid, eds.) 185–207. New York: Springer. MR1855565
  • [20] Fraser, D.A.S. and Reid, N. (2002). Strong matching for frequentist and Bayesian inference. J. Statist. Plann. Inference 103 263–285. MR1896996
  • [21] Fraser, D.A.S., Reid, N., Marras, E. and Yi, G.Y. (2010). Default priors for Bayes and frequentist inference. J. R. Stat. Soc. Ser. B Stat. Methodol. To appear.
  • [22] Fraser, D.A.S., Reid, N. and Wu, J. (1999). A simple general formula for tail probabilities for Bayes and frequentist inference. Biometrika 86 249–264. MR1705367
  • [23] Fraser, D.A.S. and Rousseau, J. (2008). Studentization and deriving accurate $p$-values. Biometrika 95 1–16. MR2409711
  • [24] Fraser, D.A.S., Wong, A. and Wu, J. (1999). Regression analysis, nonlinear or nonnormal: Simple and accurate $p$-values from likelihood analysis. J. Amer. Statist. Assoc. 94 1286–1295. MR1731490
  • [25] Lugannani, R. and Rice, S. (1980). Saddlepoint approximation for the distribution of the sum of independent random variables. Adv. in Appl. Probab. 12 475–490. MR0569438
  • [26] McCullagh, P. (1984). Local sufficiency. Biometrika 71 233–244. MR0767151
  • [27] McCullagh, P. (1992). Conditional inference and Cauchy models. Biometrika 79 247–259. MR1185127
  • [28] Reid, N. and Fraser, D.A.S. (2010). Mean likelihood and higher order inference. Biometrika 97. To appear.
  • [29] Severini, T.A. (2001). Likelihood Methods in Statistics. Oxford: Oxford Univ. Press. MR1854870