Objective Priors: An Introduction for Frequentists
Abstract
Bayesian methods are increasingly applied these days in the theory and practice of statistics. Any Bayesian inference depends on a likelihood and a prior. Ideally, one would like to elicit a prior from related sources of information or past data. In the absence of such information, however, Bayesian methods need to rely on some "objective" or "default" priors, and the resulting posterior inference can still be quite valuable.
Not surprisingly, over the years the catalog of objective priors has also become prohibitively large, and one has to set some specific criteria for the selection of such priors. Our aim is to review some of these criteria, compare their performance, and illustrate them with some simple examples. While for very large sample sizes it possibly does not matter which objective prior one uses, the selection of such a prior does influence inference for small or moderate samples. For regular models where asymptotic normality holds, Jeffreys' general rule prior, the positive square root of the determinant of the Fisher information matrix, enjoys many optimality properties in the absence of nuisance parameters. In the presence of nuisance parameters, however, there are many other priors which emerge as optimal depending on the criterion selected. One new feature of this article is that a prior different from Jeffreys' is shown to be optimal under the chi-square divergence criterion even in the absence of nuisance parameters. The latter prior is also invariant under one-to-one reparameterization.
doi: 10.1214/10-STS338
Discussed in 10.1214/11-STS338A and 10.1214/11-STS338B; rejoinder at 10.1214/11-STS338REJ.
1 Introduction
Bayesian methods have been increasingly used in recent years in the theory and practice of statistics. Their implementation requires specification of both a likelihood and a prior. With enough historical data, it is possible to elicit a prior distribution fairly accurately. Even in the absence of such data, however, Bayesian methods, if judiciously used, can produce meaningful inferences based on the so-called "objective" or "default" priors.
The main focus of this article is to introduce certain objective priors which could be potentially useful even for frequentist inference. One such example where frequentists are yet to reach a consensus about an “optimal” approach is the construction of confidence intervals for the ratio of two normal means, the celebrated Fieller–Creasy problem. It is shown in Section 4 of this paper how an “objective” prior produces a credible interval in this case which meets the target coverage probability of a frequentist confidence interval even for small or moderate sample sizes. Another situation, which has often become a real challenge for frequentists, is to find a suitable method for elimination of nuisance parameters when the dimension of the parameter grows in direct proportion to the sample size. This is what is usually referred to as the Neyman–Scott phenomenon. We will illustrate in Section 3 with an example of how an objective prior can sometimes overcome this problem.
Before getting into the main theme of this paper, we recount briefly the early history of objective priors. One of the earliest uses is usually attributed to Bayes (1763) and Laplace (1812), who recommended using a uniform prior for the binomial proportion in the absence of any other information. While intuitively quite appealing, this prior has often been criticized for its lack of invariance under one-to-one reparameterization. For example, a uniform prior for the binomial proportion does not result in a uniform prior for a nonlinear one-to-one function of that proportion. A more compelling example is that a uniform prior for σ, the population standard deviation, does not result in a uniform prior for σ², and the converse is also true. In a situation like this, it is not at all clear whether there can be any preference in assigning a uniform prior to either σ or σ².
In contrast, Jeffreys' (1961) general rule prior, namely, the positive square root of the determinant of the Fisher information matrix, is invariant under one-to-one reparameterization of parameters. We will motivate this prior from several asymptotic considerations. In particular, for regular models where asymptotic normality holds, Jeffreys' prior enjoys many optimality properties in the absence of nuisance parameters. In the presence of nuisance parameters, this prior suffers from many problems, the marginalization paradox and the Neyman–Scott problem being just two of them. Indeed, for location–scale models, Jeffreys himself recommended alternative priors.
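To make the invariance property concrete, here is the standard one-parameter calculation, written as a generic sketch in notation of my own choosing rather than as a display taken from the references. Writing φ = g(θ) for a one-to-one differentiable reparameterization,

```latex
% Invariance of Jeffreys' prior under a one-to-one reparameterization (one-parameter sketch).
\[
  I(\phi) \;=\; E\!\left[\left(\frac{\partial \log f(X;\theta)}{\partial \phi}\right)^{\!2}\right]
          \;=\; I(\theta)\left(\frac{d\theta}{d\phi}\right)^{\!2},
  \qquad\text{so that}\qquad
  \pi_J(\phi) \;\propto\; I^{1/2}(\phi) \;=\; I^{1/2}(\theta)\left|\frac{d\theta}{d\phi}\right| .
\]
```

This is exactly the density obtained by transforming π_J(θ) ∝ I^{1/2}(θ) through the change-of-variables formula. By contrast, a uniform prior for θ transforms to a density proportional to |dθ/dφ|, which is uniform only when g is linear; this is the lack of invariance noted above.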
There are several criteria for the construction of objective priors. The present article primarily reviews two of these criteria in some detail, namely, "divergence priors" and "probability matching priors," and finds optimal priors under these criteria. The class of divergence priors includes the "reference priors" introduced by Bernardo (1979). The "probability matching priors" were introduced by Welch and Peers (1963), and many generalizations have appeared in the past two decades. The development of both these classes of priors relies on asymptotic considerations. Somewhat more briefly, I also discuss a few other priors, including the "right" and "left" Haar priors.
The paper does not claim the breadth of the thorough and comprehensive review of Kass and Wasserman (1996), nor does it aspire to the more narrowly focused, but very comprehensive, reviews of probability matching priors given in Ghosh and Mukerjee (1998), Datta and Mukerjee (2004) and Datta and Sweeting (2005). A very comprehensive review of reference priors is now available in Bernardo (2005), and a unified approach is given in the recent article of Berger, Bernardo and Sun (2009).
While primarily a review, the present article has been able to unify as well as generalize some of the previously considered criteria, for example, viewing the reference priors as members of a bigger class of divergence priors. Interestingly, with some of these criteria as presented here, it is possible to construct some alternatives to Jeffreys’ prior even in the absence of nuisance parameters.
The outline of the remaining sections is as follows. In Section 2 we introduce two basic tools to be used repeatedly in the subsequent sections. One such tool, involving an asymptotic expansion of the posterior density, is due to Johnson (1970) and Ghosh, Sinha and Joshi (1982), and is discussed quite extensively in Ghosh, Delampady and Samanta (2006) and Datta and Mukerjee (2004). The second tool involves a shrinkage argument suggested by Dawid and used extensively by J. K. Ghosh and his co-authors. It is shown in Section 3 that this shrinkage argument can also be used in deriving priors with the criterion of maximizing the distance between the prior and the posterior. The distance measure used includes, but is not limited to, the Kullback–Leibler (K–L) distance considered in Bernardo (1979) for constructing two-group "reference priors." Also, in this section we consider a new prior, different from Jeffreys', even in the one-parameter case; this prior is also invariant under one-to-one reparameterization. Section 4 addresses construction of priors under probability matching criteria. Certain other priors are introduced in Section 5, and it is pointed out that some of these priors can often provide exact and not just asymptotic matching. Some final remarks are made in Section 6.
Throughout this paper the results are presented more or less in a heuristic fashion, that is, without paying much attention to the regularity conditions needed to justify these results. More emphasis is placed on the application of these results in the construction of objective priors.
2 Two Basic Tools
Asymptotic expansions of the posterior density began with Johnson (1970), and were followed up later by Ghosh, Sinha and Joshi (1982) and many others. The result goes beyond the Bernstein–von Mises theorem, which provides asymptotic normality of the posterior density. Typically, such an expansion is centered around the MLE (or occasionally the posterior mode), and requires only derivatives of the log-likelihood with respect to the parameters, evaluated at their MLE's. These expansions are available even for heavy-tailed densities such as the Cauchy, because finiteness of moments of the distribution is not needed. The result goes a long way in finding asymptotic expansions for the posterior moments of parameters of interest, as well as in finding asymptotic posterior predictive distributions.
The asymptotic expansion of the posterior resembles that of an Edgeworth expansion, but, unlike the latter, this approach does not need use of cumulants of the distribution. Finding cumulants, though conceptually easy, can become quite formidable, especially in the presence of multiple parameters, demanding evaluation of mixed cumulants.
We have used this expansion as a first step in the derivation of objective priors under different criteria. Together with the shrinkage argument as mentioned earlier in the Introduction, and to be discussed later in this section, one can easily unify and extend many of the known results on prior selection. In particular, we will see later in this section how some of the reference priors of Bernardo (1979) can be found via application of these two tools. The approach also leads to a somewhat surprising result involving asymptotic expansion of the distribution function of the MLE in a fairly general setup, and is not restricted to any particular family of distributions, for example, the exponential family, or the location–scale family. A detailed exposition is available in Datta and Mukerjee (2004, pages 5–8).
For simplicity of exposition, we consider primarily the one-parameter case. Results needed for the multiparameter case will occasionally be mentioned, and, in most cases, these are straightforward, albeit often cumbersome, extensions of one-parameter results. Moreover, as stated in the Introduction, the results will be given without full rigor, that is, without any specific mention of the needed regularity conditions.
We begin with $X_1, \ldots, X_n$ i.i.d. with common p.d.f. $f(x; \theta)$, $\theta$ real-valued. Let $\hat{\theta}_n$ denote the MLE of $\theta$. The likelihood function is denoted by $L_n(\theta) = \prod_{i=1}^{n} f(X_i; \theta)$, and let $\ell_n(\theta) = \log L_n(\theta)$. Let $\hat{a}_j = n^{-1}\, d^j \ell_n(\theta)/d\theta^j \big|_{\theta = \hat{\theta}_n}$ for $j \geq 2$, and let $\hat{I} = -\hat{a}_2$, the observed per-unit Fisher information number. Consider a twice differentiable prior $\pi(\theta)$. Let $h = \sqrt{n}(\theta - \hat{\theta}_n)$, and let $\pi_n^*(h \mid X_1, \ldots, X_n)$ denote the posterior p.d.f. of $h$ given the data. Then, under certain regularity conditions, we have the following result.
Theorem 1
, where is the standard normal p.d.f., and
The proof is given in Ghosh, Delampady and Samanta (2006, pages 107–108). The statement there involves a few minor typos, which can be corrected easily. We outline here only a few key steps needed in the proof.
We begin with the posterior p.d.f.,
(1)
Substituting $h = \sqrt{n}(\theta - \hat{\theta}_n)$, the posterior p.d.f. of $h$ is given by
(2)
(3)
(4)
The rest of the proof involves Taylor expansions of the log-likelihood and the prior around $\hat{\theta}_n$ up to the desired order, collecting the coefficients of $n^{-1/2}$, $n^{-1}$, etc., and evaluating the resulting integrals via moments of the N(0, 1) distribution.
Remark 1.
The above result is useful in finding certain expansions for the posterior moments as well. In particular, noting , it follows that the asymptotic expansion of the posterior mean of is given by
(5)
Also, .
A multiparameter extension of Theorem 1 is as follows. Suppose that is the parameter vector and is the MLE of . Let
and . Then retaining only up to the term, the posterior of is given by
(6)
Next we present the basic shrinkage argument of J. K. Ghosh, discussed in detail in Datta and Mukerjee (2004). The prime objective here is evaluation of , say, where and can be real- or vector-valued. The idea is to find first through a sequence of priors defined on a compact set, and then shrinking the prior to degeneracy at some interior point, say, of the compact set. The interesting point is that one never needs explicit specification of in carrying out this evaluation. We will see several illustrations of this in this article.
First, we present the shrinkage argument in a nutshell. Consider a proper prior with a compact rectangle as its support in the parameter space, one which vanishes on the boundary of the support while remaining positive in the interior. The support of is the closure of the set. Consider the posterior of under and, hence, obtain . Then find for in the interior of the support of . Finally, integrate with respect to , and then allow to converge to the degenerate prior at the true value of , an interior point of the support of . This yields . The calculation assumes integrability of over the joint distribution of and . Such integrability allows a change in the order of integration.
When executed up to the desired order of approximation, under suitable assumptions, these steps can lead to significant reduction in the algebra underlying higher order frequentist asymptotics. The simplification arises from two counts. First, although the Bayesian approach to frequentist asymptotics requires Edgeworth type assumptions, it avoids an explicit Edgeworth expansion involving calculation of approximate cumulants. Second, as we will see, it helps establish the results in an easily interpretable compact form. The following two sections will demonstrate multiple usage of these two basic tools.
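In generic notation of my own choosing (λ(X, θ) for the quantity whose frequentist expectation is sought, π for the proper, compactly supported prior, and θ₀ for the interior point at which π is eventually made degenerate), the interchange that drives the argument can be sketched as follows.

```latex
% Why integrating the posterior expectation and then shrinking the prior recovers a
% frequentist expectation: Fubini, followed by shrinkage of pi to a point mass at theta_0.
\[
  \int E_{\theta}\bigl[\,E^{\pi}\{\lambda(X,\theta)\mid X\}\,\bigr]\,\pi(\theta)\,d\theta
  \;=\; \int\!\!\int \lambda(x,\theta)\,f(x;\theta)\,\pi(\theta)\,dx\,d\theta
  \;=\; \int E_{\theta}\{\lambda(X,\theta)\}\,\pi(\theta)\,d\theta ,
\]
```

and the right-hand side converges to the target $E_{\theta_0}\{\lambda(X,\theta_0)\}$ as π shrinks to the degenerate prior at θ₀. The payoff is that the inner posterior expectation, computed once to the desired order by the asymptotic expansion above, never requires an explicit form for π.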
3 Objective Priors Via Maximization of the Distance Between the Prior and the Posterior
3.1 Reference Priors
We begin with an alternate derivation of the reference prior of Bernardo. Following Lindley (1956), Bernardo (1979) suggested as a criterion the expected Kullback–Leibler (K–L) divergence between the prior and the posterior, where the expectation is taken over the joint distribution of the data and the parameter. The target is to find a prior which maximizes the above distance. It is shown in Berger and Bernardo (1989) that if one does this maximization for a fixed sample size, this may lead to a discrete prior with finitely many jumps, a far cry from a diffuse prior. Hence, one needs an asymptotic maximization.
First write as
(7)
where , , the likelihood function, and denotes the marginal of after integrating out . The integrations are carried out with respect to a prior having a compact support, and subsequently passing on to the limit as and when necessary.
Without any nuisance parameters, Bernardo (1979) showed somewhat heuristically that Jeffreys' prior achieves the necessary maximization. A more rigorous proof was supplied later by Clarke and Barron (1990, 1994). We demonstrate heuristically how the shrinkage argument can also lead to the reference priors derived in Bernardo (1979). To this end, we first consider the one-parameter case for a regular family of distributions. We rewrite
Next we write
From the asymptotic expansion of the posterior, one gets
Since converges a posteriori to a distribution as , irrespective of a prior , by the Bernstein–Von Mises and Slutsky’s theorems, one gets
(9)
Since the leading term in the right-hand side of (3.1) does not involve the prior , and converges almost surely () to , applying the shrinkage argument, one gets from (3.1)
In view of (3.1), considering only the leading terms in (3.1), one needs to find a prior which maximizes . The integral being nonpositive due to the property of the Kullback–Leibler information number, its maximum value is zero, which is attained for , leading once again to Jeffreys’ prior.
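In generic notation, with ψ denoting Jeffreys' prior normalized over a compact set K (labels of my own choosing, offered only as a sketch of this step), the nonpositivity and the maximizer can be seen as follows.

```latex
% Sketch of the final step: maximize the leading term over priors pi supported on a compact set K.
\[
  \psi(\theta) \;=\; \frac{I^{1/2}(\theta)}{\int_K I^{1/2}(t)\,dt}, \qquad
  \int_K \pi(\theta)\,\log\frac{\psi(\theta)}{\pi(\theta)}\,d\theta
  \;=\; -\,\mathrm{KL}\bigl(\pi \,\|\, \psi\bigr) \;\le\; 0 ,
\]
```

with equality if and only if π = ψ almost everywhere on K. Thus the maximizing prior on each compact set is the normalized Jeffreys' prior, and letting K increase to the whole parameter space yields π(θ) ∝ I^{1/2}(θ).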
The multiparameter generalization of the above result without any nuisance parameters is based on the asymptotic expansion
and maximization of the leading term yields once again Jeffreys’ general rule prior .
In the presence of nuisance parameters, however, Jeffreys’ general rule prior is no longer the distance maximizer. We will demonstrate this in the case when the parameter vector is split into two groups, one group consisting of the parameters of interest, and the other involving the nuisance parameters. In particular, Bernardo’s (1979) two-group reference prior will be included as a special case.
To this end, suppose , where () is the parameter of interest and () is the nuisance parameter. We partition the Fisher information matrix as
First begin with a general conditional prior (say). Bernardo (1979) considered . The marginal prior for is then obtained by maximizing the distance. We begin by writing
(11)
Writing and , where , the asymptotic expansion and the shrinkage argument together yield
(12)
and
From (11)–(3.1), retaining only the leading term,
(14)
Writing , once again by property of the Kullback–Leibler information number, it follows that the maximizing prior .
We have purposely not set limits for these integrals. An important point to note [as pointed out in Berger and Bernardo (1989)] is that evaluation of all these integrals is carried out over an increasing sequence of compact sets whose union is the entire parameter space. This is because most often we are working with improper priors, and direct evaluation of these integrals over the entire parameter space will simply give infinity, which does not help in finding any prior. As an illustration, if the parameter space is , as is typically the case for the location–scale family of distributions, then one can take the increasing sequence of compact sets as , . All the proofs are usually carried out by taking a sequence of priors with compact support , and eventually letting the compact sets grow to the whole space. This important point should be borne in mind in the actual derivation of reference priors. We will now illustrate this for the location–scale family of distributions when one of the two parameters is the parameter of interest, while the other one is the nuisance parameter.
Example 1 (Location–scale models).
Suppose are i.i.d. with common p.d.f. , where and . Consider the sequence of priors with support . We may note that , where the constants , and are functions of and do not involve either or . So, if is the parameter of interest, and is the nuisance parameter, following Bernardo's (1979) prescription, one begins with the sequence of priors where, solving , one gets . Next one finds the prior , which is a constant not depending on either or . Hence, the resulting joint prior is , which is the desired reference prior. Incidentally, this is Jeffreys' independence prior rather than Jeffreys' general rule prior, the latter being proportional to . Conversely, when is the parameter of interest and is the nuisance parameter, one begins with and then, following Bernardo (1979) again, one finds . Thus, once again one gets Jeffreys' independence prior. We will see in Section 5 that Jeffreys' independence prior is a right Haar prior, while Jeffreys' general rule prior is a left Haar prior for the location–scale family of distributions.
Example 2 (Noncentrality parameter).
Let be i.i.d. N(), where real and are both unknown. Suppose the parameter of interest is , the noncentrality parameter. With the reparameterization from , the likelihood is rewritten as . Then the per-observation Fisher information matrix is given by . Consider once again the sequence of priors with support . Again, following Bernardo, , where . Noting that , one gets . Hence, the reference prior in this example is given by . Due to its invariance property (Datta and Ghosh, 1996), in the original parameterization, the two-group reference prior turns out to be .
Example 3.
As an illustration of the above, consider the celebrated Neyman–Scott problem (Berger and Bernardo, 1992a, 1992b). Consider a fixed-effects one-way balanced normal ANOVA model where the number of observations per cell is fixed, but the number of cells grows to infinity. In symbols, let be mutually independent N(, , , all parameters being assumed unknown. Let . Then the MLE of is given by which converges in probability [as to ], and hence is inconsistent. Interestingly, Jeffreys' prior in this case also produces an inconsistent estimator of , but the Berger–Bernardo reference prior does not.
To see this, we begin with the Fisher information matrix . Hence, Jeffreys' prior , which leads to the marginal posterior of , denoting the entire data set. Then the posterior mean of is given by , while the posterior mode is given by . Both are inconsistent estimators of , as these converge in probability to as .
In contrast, by the result of Datta and Ghosh (1995c), the two-group reference prior . This leads to the marginal posterior of . Now the posterior mean is given by , while the posterior mode is given by . Both are consistent estimators of .
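The following small Python simulation illustrates the Neyman–Scott phenomenon numerically; the setup of two observations per cell and all names and constants are assumptions chosen for illustration only, not a reproduction of the displays above. The MLE of the error variance settles near half the true value, while the estimator based on the within-cell degrees of freedom, which is in the spirit of the reference-prior posterior mean described above, remains consistent.

```python
# Illustrative simulation of the Neyman-Scott phenomenon (assumed setup: two
# observations per cell; all names and constants are illustrative).
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0            # true error standard deviation
n_cells = 20000        # number of cells (this is what grows to infinity)
k = 2                  # observations per cell (kept fixed)

mu = rng.normal(0.0, 5.0, size=n_cells)               # arbitrary fixed cell means
x = mu[:, None] + rng.normal(0.0, sigma, size=(n_cells, k))

within_ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum()

sigma2_mle = within_ss / (n_cells * k)        # MLE: converges to (k-1)/k * sigma^2
sigma2_df = within_ss / (n_cells * (k - 1))   # degrees-of-freedom correction: converges to sigma^2

print("true sigma^2           :", sigma ** 2)
print("MLE of sigma^2         :", sigma2_mle)   # close to sigma^2 / 2 when k = 2
print("df-corrected estimator :", sigma2_df)    # close to sigma^2
```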
Example 4 (Ratio of normal means).
Let and be two independent N() random variables, where the parameter of interest is . This is the celebrated Fieller–Creasy problem. The Fisher information matrix in this case is . With the transformation , one obtains . Again, by Datta and Ghosh (1995c), the two-group reference prior .
Example 5 (Random effects model).
This example has been visited and revisited on several occasions. Berger and Bernardo (1992b) first found reference priors for variance components in this problem when the number of observations per cell is the same. Later, Ye (1994) and Datta and Ghosh (1995c, 1995d) also found reference priors for this problem. The case involving unequal number of observations per cell was considered by Chaloner (1987) and Datta, Ghosh and Kim (2002).
For simplicity, we consider here only the case with equal number of observations per cell. Let , . Here is an unknown parameter, while ’s and are mutually independent with ’s i.i.d. N() and i.i.d. N(). The parameters , and are all unknown. We write , , and . The minimal sufficient statistic is (, where and .
The different parameters of interest that we consider are , and . The common mean is of great relevance in meta analysis (cf. Morris and Normand, 1992). Ye (1994) pointed out that the variance ratio is of considerable interest in genetic studies. The parameter is also of importance to animal breeders, psychologists and others. Datta and Ghosh (1995d) have discussed the importance of , the error variance. In order to find reference priors for each one of these parameters, we first make the one-to-one transformation from to , where and . Thus, , and the likelihood can be expressed as
Then the Fisher information matrix simplifies to . From Theorem 1 of Datta and Ghosh (1995c), it follows now that when , and are the respective parameters of interest, while the other two are nuisance parameters, the reference priors are given respectively by , and .
3.2 General Divergence Priors
Next, back to the one-parameter case, we consider the more general distance (Amari, 1982; Cressie and Read, 1984)
(15)
which is to be interpreted as its limit when . This limit is the K–L distance as considered in Bernardo (1979). Also, gives the Bhattacharyya–Hellinger (Bhattacharyya, 1943; Hellinger, 1909) distance, and leads to the chi-square distance (Clarke and Sun, 1997, 1999). In order to maximize with respect to a prior , one re-expresses (3.2) as
Hence, from (3.2), maximization of amounts to minimization (maximization) of
(17)
for (). First consider the case . From Theorem 1, the posterior of is
Thus,
(19)
Following the shrinkage argument, and noting that conditional on , , while , it follows heuristically from (3.2)
(20)
Hence, from (3.2), considering only the leading term, for , minimization of (17) with respect to amounts to minimization of with respect to subject to . A simple application of Hölder's inequality shows that this minimization takes place when . Similarly, for , provides the desired maximization of the expected distance between the prior and the posterior. The K–L distance, that is, the case , has already been considered earlier.
Remark 2.
Equation (3.2) also holds for . However, in this case, it is shown in Ghosh, Mergel and Liu (2011) that the integral is uniquely minimized with respect to , and there exists no maximizer of this integral when . Thus, in this case, there does not exist any prior which maximizes the posterior-prior distance.
Remark 3.
Surprisingly, Jeffreys' prior is not necessarily the solution when (the chi-square divergence). In this case, the first-order asymptotics does not work since for all . However, retaining also the term as given in Theorem 1, Ghosh, Mergel and Liu (2011) have found in this case the solution , where . We shall refer to this prior as . We will show by examples that this prior may differ from Jeffreys' prior. But first we will establish a hitherto unknown invariance property of this prior under one-to-one reparameterization.
Theorem 2
Suppose that is a one-to-one twice differentiable function of . Then , where , the constant of proportionality, does not involve any parameters.
Without loss of generality, assume that is a nondecreasing function of . By the identity
reduces to
(21)
Next, from the relation , one gets the identities
(22)
(23)
From (3.2)–(3.2), one gets, after simplification,
(24)
Now, on integration, it follows from (3.2) , which proves the theorem.
Example 6.
Consider the one-parameter exponential family of distributions with . Then so that , which is different from Jeffreys' prior. Because of the invariance result proved in Theorem 2, in particular, for the binomial problem, noting that , one gets , a prior different from Jeffreys' prior, Laplace's prior and Haldane's improper prior. Similarly, for the Poisson case, one gets , again different from Jeffreys' prior. However, for the distribution, since and , is a constant, which is the same as Jeffreys' prior. It may be pointed out also that for the one-parameter exponential family, for the chi-square divergence, differs from Hartigan's (1998) maximum likelihood prior .
Example 7.
For the one-parameter location family of distributions with , where is a p.d.f., both and are constants implying . Hence, is of the form for some constant . However, for the special case of a symmetric , that is, for all , , and then reduces once again to , which is the same as Jeffreys’ prior.
Example 8.
For the general scale family of distributions with , where is a p.d.f., for some constant , where for some constant . Then for some constant . In particular, when , different from Jeffreys’ for the general scale family of distributions.
The multiparameter extension of the general divergence prior has been explored in the Ph.D. dissertation of Liu (2009). Among other things, he has shown that in the absence of any nuisance parameters, for , the divergence prior is Jeffreys’ prior. However, on the boundary, namely, , priors other than Jeffreys’ prior emerge.
4 Probability Matching Priors
4.1 Motivation and First-Order Matching
As mentioned in the Introduction, probability matching priors are intended to achieve a Bayes-frequentist synthesis. Specifically, these priors are required to provide asymptotically the same coverage probability for Bayesian credible intervals as for the corresponding frequentist confidence intervals. Over the years, there have been several versions of such priors: quantile matching priors, matching priors for distribution functions, HPD matching priors and matching priors associated with likelihood ratio statistics. Datta and Mukerjee (2004) provided a detailed account of all these priors. In this article I will be concerned only with quantile matching priors.
A general definition of quantile matching priors is as follows: Suppose i.i.d. with common p.d.f. , where is a real-valued parameter. Assume all the needed regularity conditions for the asymptotic expansion of the posterior around , the MLE of . We continue with the notation of the previous section. For , let denote the th asymptotic posterior quantile of based on the prior , that is,
(25)
for some . If now , then some order of probability matching is achieved. If , we call a first-order probability matching prior. If , we call a second-order probability matching prior.
We first provide an intuitive argument for why Jeffreys’ prior is a first-order probability matching prior in the absence of nuisance parameters. If i.i.d. and , , then the posterior is . Now writing as the quantile of the distribution, one gets
(26)
so that the one-sided credible interval for has exact frequentist coverage probability .
The above exact matching does not always hold. However, if are i.i.d., then is asymptotically . Then, by the delta method, . So if so that , is asymptotically . Hence, from (4.1), with the uniform prior for , coverage matching is asymptotically achieved for . This leads to the prior for .
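The heuristic just described can be written out in full as follows; this is the standard Welch–Peers argument in generic notation, offered only as a sketch rather than as the paper's own displays.

```latex
% Welch--Peers heuristic for first-order probability matching (generic notation).
% Step 1: exact matching for a normal mean with known variance under a flat prior.
\[
  X_1,\dots,X_n \stackrel{iid}{\sim} N(\theta,\sigma^2),\ \sigma \text{ known},\quad
  \pi(\theta)\propto 1 \;\Longrightarrow\;
  \theta \mid X \sim N(\bar X,\sigma^2/n),
\]
so the posterior $(1-\alpha)$ quantile $\bar X + z_{1-\alpha}\,\sigma/\sqrt n$ has exact frequentist
coverage $1-\alpha$.
% Step 2: reduce a general regular model to this case via the delta method.
\[
  \hat\theta_n \;\dot\sim\; N\bigl(\theta,\{nI(\theta)\}^{-1}\bigr), \qquad
  g(\hat\theta_n) \;\dot\sim\; N\bigl(g(\theta),\{g'(\theta)\}^2\{nI(\theta)\}^{-1}\bigr).
\]
```

Choosing g'(θ) = I^{1/2}(θ) makes the asymptotic variance 1/n, free of the parameter, so a flat prior on g(θ) gives approximate matching for g(θ); transforming back to θ gives π(θ) ∝ I^{1/2}(θ), which is Jeffreys' prior.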
Datta and Mukerjee (2004, pages 14–21) proved the result in a formal manner. They used the two basic tools of Section 3. In the absence of nuisance parameters, they showed that a first-order matching prior for is a solution of the differential equation
(27)
so that Jeffreys’ prior is the unique first-order matching prior. However, it does not always satisfy the second-order matching property.
4.2 Second-Order Matching
In order that the matching is accomplished up to (second-order matching), one needs an asymptotic expansion of the posterior distribution function up to the term, and to set up a second differential equation in addition to (27). This equation is given by (cf. Mukerjee and Dey, 1993; Mukerjee and Ghosh, 1997)
(28)
where, as before, . If Jeffreys' prior satisfies (4.2), then it is the unique second-order matching prior. While for the location and scale families of distributions this is indeed the case, it is not true in general. Of course, when Jeffreys' prior fails to satisfy (4.2), there does not exist any second-order matching prior.
To see this, for , (4.2) reduces to
which requires to be a constant free from . After some algebra, the above expression simplifies to . It is easy to check now that for the one-parameter location and scale family of distributions, the above expression does not depend on . However, for the one-parameter exponential family of distributions with canonical parameter , the same holds if and only if does not depend on , or, in other words, for some constant . Another interesting example is given below.
Example 9.
. One can verify that and so that is not a constant. Hence, is not a second-order matching prior, and there does not exist any second-order matching prior in this example.
4.3 First-Order Quantile Matching Priors in the Presence of Nuisance Parameters
The parameter of interest is still real-valued, but there may be one or more nuisance parameters. To fix ideas, suppose , where is the parameter of interest, while are the nuisance parameters. As shown by Welch and Peers (1963), and later more rigorously by Datta and Ghosh (1995a) and Datta (1996), writing , the probability matching equation is given by
(29)
Example 1 (Continued).
First consider as the parameter of interest, and the nuisance parameter. Since each element of the inverse of the Fisher information matrix is a constant multiple of , any prior , arbitrary, satisfies (29). Conversely, when is the parameter of interest, and is the nuisance parameter, any prior satisfies (29).
A special case considered in Tibshirani (1989) is of interest. Here is orthogonal to in the Fisherian sense, that is, for .
With orthogonality, (29) simplifies to
(since ). This leads to , where is arbitrary. Often a second-order matching prior removes the arbitrariness of . We will see an example later in this section. However, this need not always be the case, and, indeed, as seen earlier in the one-parameter case, second-order matching priors may not always exist. We will address this issue later in this section.
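In generic notation, with θ the interest parameter, λ the nuisance parameter and I_{θθ} the corresponding diagonal block of the Fisher information matrix (labels of my own choosing), the simplification just described amounts to the following restatement of Tibshirani's (1989) characterization.

```latex
% First-order matching under parameter orthogonality (Tibshirani, 1989), generic notation.
% With the off-diagonal information block equal to zero, the matching equation reduces to
\[
  \frac{\partial}{\partial\theta}\Bigl\{\pi(\theta,\lambda)\, I_{\theta\theta}^{-1/2}(\theta,\lambda)\Bigr\} \;=\; 0
  \quad\Longleftrightarrow\quad
  \pi(\theta,\lambda) \;\propto\; I_{\theta\theta}^{1/2}(\theta,\lambda)\, g(\lambda),
\]
```

where g(·) > 0 is arbitrary. Any prior of this form is first-order probability matching for θ; further conditions (second-order matching) may or may not pin down g.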
A special choice is . The resultant prior bears some intuitive appeal. Since under orthogonality, , one may expect to be a first-order probability matching prior. This prior is only a member within the class of priors , as found by Tibshirani (1989), and admittedly need not be second-order matching even when the latter exists. A recent article by Staicu and Reid (2008) has proved some interesting properties of the prior . This prior is also considered in Ghosh and Mukerjee (1992).
For a symmetric location–scale family of distributions, that is, when , , that is, and are orthogonal. Now, when is the parameter of interest and is the nuisance parameter, the class of first-order matching priors is characterized by , where is arbitrary. Similarly, when is the parameter of interest and is the nuisance parameter, the class of first-order matching priors is characterized by , where is arbitrary. The intersection of the two classes leads again to the unique prior .
Example 2 (Continued).
Let be i.i.d. N(), and is again the parameter of interest. In order to find a parameter which is orthogonal to , we rewrite the p.d.f. in the form
(30)
Then the Fisher information matrix
It turns out now if we reparameterize from to , where , then and are orthogonal with the corresponding Fisher information matrix given by . Hence, the class of first-order matching priors when is the parameter of interest is given by, where is arbitrary.
4.4 Second-Order Quantile Matching Priors in the Presence of Nuisance Parameters
When is the parameter of interest, and is the vector of nuisance parameters, the general class of second-order quantile matching priors is characterized in (2.4.11) and (2.4.12) of Datta and Mukerjee (2004, page 12). For simplicity, we consider only the case when is orthogonal to . In this case a first-order quantile matching prior is also second-order matching if and only if satisfies (cf. Datta and Mukerjee, 2004, page 27) the differential equation
(31)
We revisit Examples 1–5 and provide complete, or at least partial, characterization of second-order quantile matching priors.
Example 1 (Continued).
Let be symmetric so that and are orthogonal. First let be the parameter of interest and the nuisance parameter. Then, since both the terms in (4.4) are zero, every first-order quantile matching prior of the form , say, is also second-order matching. This means that an arbitrary prior of the form is second-order matching as long as it is only a function of . On the other hand, if is the parameter of interest and is the nuisance parameter, since the second term in (4.4) is zero, a first-order quantile matching prior of the form is also second-order matching if and only if is a constant. Thus, the unique second-order quantile matching prior in this case is proportional to , which is Jeffreys' independence prior.
Example 2 (Continued).
Recall that in this case writing , and , the Fisher information matrix . Also, and . Hence, (4.4) holds if and only if . This leads to the unique second-order quantile matching prior . Back to the original parameterization, this leads to the prior , Jeffreys’ independence prior.
Example 3 (Continued).
Consider once again the Neyman–Scott example. Since the Fisher information matrix , is orthogonal to . Now, the class of second-order matching priors is given by , where is arbitrary. Simple algebra shows that in this case both the first and second terms in (4.4) are zero, so that every first-order quantile matching prior is also second-order matching.
Example 4 (Continued).
Example 5 (Continued).
Again from Tibshirani (1989), the class of second-order matching priors when , and are the parameters of interest are given respectively by , and , where , and are arbitrary nonnegative functions. Also, the prior is second-order matching when is the parameter of interest. On the other hand, any first-order matching prior is also second-order matching when either or is the parameter of interest.
It may be of interest to find an example where a reference prior is not a second-order matching prior. Consider the gamma p.d.f. , where the mean is the parameter of interest. The Fisher information matrix is given by . Then the two-group reference prior of Bernardo (1979) is given by , while the unique second-order quantile matching prior is given by .
In some of these examples, especially for the location and location–scale families, one gets exact rather than asymptotic matching. This is especially so when the matching prior is a right-invariant Haar prior. We will see some examples in the next section.
5 Other Priors
5.1 Invariant Priors
Very often objective priors are derived via some invariance criterion. We illustrate with the location–scale family of distributions.
Let have p.d.f. , , , where is a p.d.f. Then, as found in Section 4, the Fisher information matrix is of the form . Hence, Jeffreys’ general rule prior . This prior, as we will see in this section, corresponds to a left-invariant Haar prior. In contrast, Jeffreys’ independence prior corresponds to a right-invariant Haar prior.
In order to demonstrate this, consider a group of linear transformations , where . The induced group of transformations on the parameter space will be denoted by , where , where . The general theory of locally compact groups states that there exist two measures and on such that is left-invariant and is right-invariant. What this means is that for all and a subset of , and , where and . The measures and are referred to respectively as left- and right-invariant Haar measures. For the location–scale family of distributions, the left- and right-invariant Haar priors turn out to be and , respectively (cf. Berger, 1985, pages 406–407; Ghosh, Delampady and Samanta, 2006, pages 136–138).
The right-Haar prior usually enjoys more optimality properties than the left-Haar prior. Some optimality properties of left-Haar priors are given in Datta and Ghosh (1995b). In Example 1, for the location–scale family of distributions, the right-Haar prior is Bernardo’s reference prior when either or is the parameter of interest, while the other parameter is the nuisance parameter. Also, it is shown in Datta, Ghosh and Mukerjee (2000) that for the location–scale family of distributions, the right-Haar prior yields exact matching of the coverage probabilities of Bayesian credible intervals and the corresponding frequentist confidence intervals when either or is the parameter of interest, while the other parameter is the nuisance parameter.
For simplicity, we demonstrate this only for the normal example. Let be i.i.d. N(, where . With the right-Haar prior , the marginal posterior distribution of is Student’s with location parameter , scale parameter , where , and degrees of freedom . Hence, if denotes the th percentile of this marginal posterior, then
so that , the th percentile of . Now
This provides the exact coverage matching probability for .
Next, with the same set up, when is the parameter of interest, its marginal posterior is Inverse ). Now, if denotes the th percentile of this marginal posterior, then , where is the th percentile of the distribution. Now
showing once again the exact coverage matching.
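As a quick numerical check of this exact matching, the following simulation is a sketch under assumed notation: n i.i.d. N(μ, σ²) observations and the right-Haar prior proportional to 1/σ, for which the one-sided posterior credible bound for μ coincides with the classical one-sided Student-t bound.

```python
# Monte Carlo check of exact frequentist coverage of the right-Haar-prior credible bound
# for a normal mean (assumed prior proportional to 1/sigma).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha = 10, 0.05
mu_true, sigma_true = 3.0, 2.0
n_rep = 200_000

x = rng.normal(mu_true, sigma_true, size=(n_rep, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)

# Upper 1-alpha posterior quantile of mu: the marginal posterior is Student-t with n-1 d.f.,
# location xbar and scale s/sqrt(n); the same bound is the classical one-sided t interval.
upper = xbar + stats.t.ppf(1 - alpha, df=n - 1) * s / np.sqrt(n)

coverage = np.mean(mu_true <= upper)
print("nominal:", 1 - alpha, "estimated coverage:", coverage)  # should be about 0.95
```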
The general definition of a right-invariant Haar density on , which we will denote by , requires that it satisfy , where . Similarly, a left-invariant Haar density on , which we will denote by , must satisfy , where . Alternate representations of the right- and left-Haar densities are given by and , respectively.
It is shown in Halmos (1950) and Nachbin (1965) that the right- and left-invariant Haar densities exist and are unique up to a multiplicative constant. Berger (1985) provides the calculation of and in a very general framework. He points out that if is isomorphic to the parameter space , then one can construct right- and left-invariant Haar priors on the parameter space . A very substantial account of invariant Haar densities is available in Datta and Ghosh (1995b). Severini, Mukerjee and Ghosh (2002) have demonstrated the exact matching property of right-invariant Haar densities in a prediction context under fairly general conditions.
5.2 Moment Matching Priors
Here we discuss a new matching criterion which we will refer to as the "moment matching criterion." For a regular family of distributions, the classical Bernstein–von Mises theorem (see, e.g., Ferguson, 1996, page 141; Ghosh, Delampady and Samanta, 2006, page 104) gives the asymptotic normality of the posterior of a parameter vector, centered around the maximum likelihood estimator or the posterior mode, with variance equal to the inverse of the observed Fisher information matrix evaluated at the maximum likelihood estimator or the posterior mode. We utilize the same asymptotic expansion to find priors which can provide high-order matching of the moments of the posterior mean and the maximum likelihood estimator. For simplicity of exposition, we shall primarily confine ourselves to priors which achieve matching of the first moment, although it is easy to see how higher-order moment matching is equally possible.
The motivation for moment matching priors stems from several considerations. First, these priors lead to posterior means which share the asymptotic optimality of the MLE’s up to a high order. In particular, if one is interested in asymptotic bias or MSE reduction of the MLE’s through some adjustment, the same adjustment applies directly to the posterior means. In this way, it is possible to achieve Bayes-frequentist synthesis of point estimates. The second important aspect of these priors is that they provide new viable alternatives to Jeffreys’ prior even for real-valued parameters in the absence of nuisance parameters motivated from the proposed criterion. A third motivation, which will be made clear later in this section, is that with moment matching priors, it is possible to construct credible regions for parameters of interest based only on the posterior mean and the posterior variance, which match the maximum likelihood based confidence intervals to a high order of approximation. We will confine ourselves primarily to regular families of distributions.
Let be independent and identically distributed with common density function, where , some interval in the real line. Consider a general class of priors for . Throughout, it is assumed that both and satisfy all the needed regularity conditions as given in Johnson (1970) and Bickel and Ghosh (1990).
Let denote the maximum likelihood estimator of . Under the prior , we denote the posterior mean of by . The formal asymptotic expansion given in Section 2 now leads to , where and are defined in Theorem 1. The law of large numbers and consistency of the MLE now give . With the choice , one gets . We will denote this prior as .
Ghosh and Liu (2011) have shown that if is a one-to-one function of , then the moment matching prior for is given by . We now see an application of this result.
Example 6 (Continued).
Consider the regular one-parameter exponential family of densities given by . For the canonical parameter , noting that and , which is Jeffreys’ prior. On the other hand, for the population mean which is a strictly increasing function of [since , the moment matching prior . In particular, for the binomial proportion , one gets the Haldane prior , which is the same as Hartigan’s (1964, 1998) maximum likelihood prior. However, for the canonical parameter , whereas we get Jeffreys’ prior, Hartigan (1964, 1998) gets the Laplace prior.
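To see concretely what first-moment matching delivers in the binomial case, the following standard calculation (assuming the usual form of the Haldane prior, proportional to p⁻¹(1−p)⁻¹) shows that the posterior mean equals the MLE exactly, not merely to a high asymptotic order.

```latex
% Posterior mean under the Haldane prior equals the MLE exactly (binomial case).
\[
  X \sim \mathrm{Bin}(n,p), \qquad \pi(p) \propto p^{-1}(1-p)^{-1}
  \;\Longrightarrow\;
  p \mid X = x \;\sim\; \mathrm{Beta}(x,\, n-x) \quad (0 < x < n),
\]
\[
  E(p \mid X = x) \;=\; \frac{x}{x + (n-x)} \;=\; \frac{x}{n} \;=\; \hat p_{\mathrm{MLE}} .
\]
```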
Remark 4.
It is now clear that a fundamental difference between priors obtained by matching probabilities and those obtained by matching moments is the lack of invariance of the latter under one-to-one reparameterization. It may be interesting to find conditions under which a moment matching prior agrees with Jeffreys’ prior or the uniform constant prior. The former holds if and only if , while the latter holds if and only if .
The "if" parts of the above results are immediate from the definition of . To prove the "only if" parts, note that if , first taking logarithms and then differentiating with respect to , one gets , so that . On the other hand, if , then taking logarithms and then differentiating with respect to , one gets .
The above approach can be extended to the matching of higher moments as well. Noting that , it follows immediately that under the moment matching prior , . This fact helps in the construction of credible intervals for , the parameter of interest, centered at the posterior mean and scaled by the posterior standard deviation, which enjoy the same asymptotic properties as the credible interval centered at the MLE and scaled by the square root of the reciprocal of the observed Fisher information number.
6 Summary and Conclusion
As mentioned in the Introduction, this article provides a selective review of objective priors reflecting my own interest and familiarity with the topics. I am well aware that many important contributions are left out. For instance, I have discussed only the two-group reference priors of Bernardo (1979). A more appealing later contribution by Berger and Bernardo (1992b) provided an algorithm for the construction of multi-group reference priors when these groups are arranged in accordance with their order of importance. In particular, the one-at-a-time reference priors, as advocated by these authors, have proved to be quite useful in practice. Ghosal (1997, 1999) provided the construction of reference priors in nonregular cases, while a formal definition of reference priors encompassing both regular and nonregular cases has recently been proposed by Berger, Bernardo and Sun (2009).
Regarding probability matching priors, we have discussed only the quantile matching criterion. There are several other, possibly equally important, probability matching criteria. Notable among these are the highest posterior density matching criterion, as well as matching via inversion of test statistics, such as the likelihood ratio statistic, the Rao score statistic or the Wald statistic. Extensive discussion of such matching priors is given in Datta and Mukerjee (2004). Datta et al. (2000) constructed matching priors via a prediction criterion, and related exact results in this context are available in Fraser and Reid (2002). The issue of matching priors in the context of conditional inference has been discussed quite extensively in Reid (1996).
A different class of priors, called "maximum likelihood priors," was developed by Hartigan (1964, 1998). Roughly speaking, these priors are found by maximizing the expected distance between the prior and the posterior under a truncated Kullback–Leibler distance. Like the proposed moment matching priors, the maximum likelihood prior densities, when they exist, result in posterior means that differ from the MLE's by asymptotically negligible amounts. I have alluded to some of these priors for comparison with the other priors considered in this paper.
With the exception of the right- and left-invariant Haar priors, the derivation of the remaining priors are based essentially on the asymptotic expansion of the posterior density as well as the shrinkage argument of J. K. Ghosh. This approach provides a nice unified tool for the development of objective priors. I believe very strongly that many new priors will be found in the future by either a direct application or slight modification of these tools.
The results of this article show that Jeffreys’ prior is a clear winner in the absence of nuisance parameters for most situations. The only exception is the chi-square divergence where different priors may emerge. But that corresponds only to one special case, namely, the boundary of the class of divergence priors, while Jeffreys’ prior continues its optimality in the interior. In the presence of nuisance parameters, my own recommendation is to find two- or multi-group reference priors following the algorithm of Berger and Bernardo (1992a), and then narrow down this class of priors by finding their intersection with the class of probability matching priors. This approach can even lead to a unique objective prior in some situations. Some simple illustrations are given in this article. I also want to point out the versatility of reference priors. For example, for nonregular models, Jeffreys’ general rule prior does not work. But as shown in Ghosal (1997) and Berger, Bernardo and Sun (2009), one can extend the definition of reference priors to cover these situations as well.
The examples given in this paper are purposely quite simplistic, mainly to aid the understanding of readers not at all familiar with the topic. Quite rightfully, they can be criticized as somewhat stylized. Both reference and probability matching priors, however, have been developed for more complex problems of practical importance. Among others, I may refer to Berger and Yang (1994), Berger, De Oliveira and Sanso (2001), Ghosh and Heo (2003), Ghosh, Carlin and Srivastava (1994) and Ghosh, Yin and Kim (2003). The topics of these papers include time series models, spatial models and inverse problems, such as linear calibration, and problems in bioassay, in particular, slope ratio and parallel line assays. One can easily extend this list. A very useful source for all these papers is Bernardo (2005).
Acknowledgments
This research was supported in part by NSF Grant Number SES-0631426 and NSA Grant Number MSPF-076-097. The comments of the Guest Editor and a reviewer led to substantial improvement of the manuscript.
References
- (1) Amari, S. (1982). Differential geometry of curved exponential families—Curvatures and information loss. Ann. Statist. 10 357–387. \MR0653513
- (2) Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 53 370–418.
- (3) Berger, J. O. (1985). Statistical Decision Theory and Related Topics, 2nd ed. Springer, New York. \MR0804611
- (4) Berger, J. O. and Bernardo, J. M. (1989). Estimating a product of means. Bayesian analysis with reference priors. J. Amer. Statist. Assoc. 84 200–207. \MR0999679
- (5) Berger, J. O. and Bernardo, J. M. (1992a). On the development of reference priors (with discussion). In Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 35–60. Oxford Univ. Press, New York. \MR1380269
- (6) Berger, J. O. and Bernardo, J. M. (1992b). Reference priors in a variance components problem. In Bayesian Analysis in Statistics and Econometrics (P. K. Goel and N. S. Iyengar, eds.) 177–194. Springer, New York. \MR1194392
- (7) Berger, J. O., Bernardo, J. M. and Sun, D. (2009). The formal definition of reference priors. Ann. Statist. 37 905–938. \MR2502655
- (8) Berger, J. O., de Oliveira, V. and Sanso, B. (2001). Objective Bayesian analysis of spatially correlated data. J. Amer. Statist. Assoc. 96 1361–1374. \MR1946582
- (9) Berger, J. O. and Yang, R. (1994). Noninformative priors and Bayesian testing for the AR(1) model. Econometric Theory 10 461–482. \MR1309107
- (10) Bernardo, J. M. (1979). Reference posterior distributions for Bayesian inference (with discussion). J. R. Stat. Soc. Ser. B Stat. Methodol. 41 113–147. \MR0547240
- (11) Bernardo, J. M. (2005). Reference analysis. In Bayesian Thinking, Modeling and Computation. Handbook of Statistics 25 (D. K. Dey and C. R. Rao, eds.). North-Holland, Amsterdam. \MR2490522
- (12) Bhattacharyya, A. K. (1943). On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc. 35 99–109. \MR0010358
- (13) Bickel, P. J. and Ghosh, J. K. (1990). A decomposition for the likelihood ratio statistic and the Bartlett correction—A Bayesian argument. Ann. Statist. 18 1070–1090. \MR1062699
- (14) Clarke, B. and Barron, A. (1990). Information-theoretic asymptotics of Bayes methods. IEEE Trans. Inform. Theory 36 453–471. \MR1053841
- (15) Clarke, B. and Barron, A. (1994). Jeffreys’ prior is asymptotically least favorable under entropy risk. J. Statist. Plann. Inference 41 37–60. \MR1292146
- (16) Clarke, B. and Sun, D. (1997). Reference priors under the chi-square distance. Sankhyā A 59 215–231. \MR1665703
- (17) Clarke, B. and Sun, D. (1999). Asymptotics of the expected posterior. Ann. Inst. Statist. Math. 51 163–185. \MR1704652
- (18) Cressie, N. and Read, T. R. C. (1984). Multinomial goodness-of-fit tests. J. Roy. Statist. Soc. Ser. B 46 440–464. \MR0790631
- (19) Cox, D. R. and Reid, N. (1987). Parameter orthogonality and approximate conditional inference (with discussion). J. Roy. Statist. Soc. Ser. B 49 1–39. \MR0893334
- (20) Datta, G. S. (1996). On priors providing frequentist validity of Bayesian inference for multiple parametric functions. Biometrika 83 287–298. \MR1439784
- (21) Datta, G. S. and Ghosh, J. K. (1995a). On priors providing frequentist validity for Bayesian inference. Biometrika 82 37–45. \MR1332838
- (22) Datta, G. S. and Ghosh, J. K. (1995b). Noninformative priors for maximal invariant in group models. Test 4 95–114. \MR1365042
- (23) Datta, G. S. and Ghosh, M. (1995c). Some remarks on noninformative priors. J. Amer. Statist. Assoc. 90 1357–1363. \MR1379478
- (24) Datta, G. S. and Ghosh, M. (1995d). Hierarchical Bayes estimators of the error variance in one-way ANOVA models. J. Statist. Plann. Inference 45 399–411. \MR1341333
- (25) Datta, G. S. and Ghosh, M. (1996). On the invariance of noninformative priors. Ann. Statist. 24 141–159. \MR1389884
- (26) Datta, G. S., Ghosh, M. and Mukerjee, R. (2000). Some new results on probability matching priors. Calcutta Statist. Assoc. Bull. 50 179–192. \MR1843620
- (27) Datta, G. S., Ghosh, M. and Kim, Y. (2002). Probability matching priors for one-way unbalanced random effects models. Statist. Decisions 20 29–51. \MR1904422
- (28) Datta, G. S. and Mukerjee, R. (2004). Probability Matching Priors: Higher Order Asymptotics. Springer, New York. \MR2053794
- (29) Datta, G. S., Mukerjee, R., Ghosh, M. and Sweeting, T. J. (2000). Bayesian prediction with approximate frequentist validity. Ann. Statist. 28 1414–1426. \MR1805790
- (30) Datta, G. S. and Sweeting, T. J. (2005). Probability matching priors. In Bayesian Thinking, Modeling and Computation. Handbook of Statistics 25 (D. K. Dey and C. R. Rao, eds.). North-Holland, Amsterdam. \MR2490523
- (31) Ferguson, T. (1996). A Course in Large Sample Theory. Chapman & Hall/CRC Press, Boca Raton, FL. \MR1699953
- (32) Fraser, D. A. S. and Reid, N. (2002). Strong matching of frequentist and Bayesian parametric inference. J. Statist. Plann. Inference 103 263–285. \MR1896996
- (33) Ghosal, S. (1997). Reference priors in multiparameter nonregular cases. Test 6 159–186. \MR1466439
- (34) Ghosal, S. (1999). Probability matching priors for nonregular cases. Biometrika 86 956–964. \MR1741992
- (35) Ghosh, J. K., Delampady, M. and Samanta, T. (2006). An Introduction to Bayesian Analysis. Springer, New York. \MR2247439
- (36) Ghosh, M. and Liu, R. (2011). Moment matching priors. Sankhyā A. To appear.
- (37) Ghosh, J. K. and Mukerjee, R. (1992). Non-informative priors (with discussion). In Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 195–210. Oxford Univ. Press, New York. \MR1380277
- (38) Ghosh, J. K., Sinha, B. K. and Joshi, S. N. (1982). Expansion for posterior probability and integrated Bayes risk. In Statistical Decision Theory and Related Topics III 1 403–456. Academic Press, New York. \MR0705299
- (39) Ghosh, M., Carlin, B. P. and Srivastava, M. S. (1994). Probability matching priors for linear calibration. Test 4 333–357. \MR1379796
- (40) Ghosh, M. and Heo, J. (2003). Default Bayesian priors for regression models with second-order autogressive residuals. J. Time Ser. Anal. 24 269–282. \MR1984597
- (41) Ghosh, M., Mergel, V. and Liu, R. (2011). A general divergence criterion for prior selection. Ann. Inst. Statist. Math. 63 43–58.
- (42) Ghosh, M. and Mukerjee, R. (1998). Recent developments on probability matching priors. In Applied Statistical Science III (S. E. Ahmed, M. Ahsanullah and B. K. Sinha, eds.) 227–252. Nova Science Publishers, New York. \MR1673669
- (43) Ghosh, M., Yin, M. and Kim, Y.-H. (2003). Objective Bayesian inference for ratios of regression coefficients in linear models. Statist. Sinica 13 409–422. \MR1977734
- (44) Halmos, P. (1950). Measure Theory. Van Nostrand, New York. \MR0033869
- (45) Hartigan, J. A. (1964). Invariant prior densities. Ann. Math. Statist. 35 836–845. \MR0161406
- (46) Hartigan, J. A. (1998). The maximum likelihood prior. Ann. Statist. 26 2083–2103. \MR1700222
- (47) Hellinger, E. (1909). Neue Begründung der Theorie quadratischer Formen von unendlichvielen Veränderlichen. J. Reine Angew. Math. 136 210–271.
- (48) Huzurbazar, V. S. (1950). Probability distributions and orthogonal parameters. Proc. Camb. Phil. Soc. 46 281–284. \MR0034567
- (49) Jeffreys, H. (1961). Theory of Probability, 3rd ed. Oxford Univ. Press, Oxford.
- (50) Johnson, R. A. (1970). Asymptotic expansions associated with posterior distribution. Ann. Math. Statist. 41 851–864. \MR0263198
- (51) Kass, R. E. and Wasserman, L. (1996). The selection of prior distributions by formal rules. J. Amer. Statist. Assoc. 91 1343–1370. \MR1478684
- (52) Laplace, P. S. (1812). Théorie Analytique des Probabilités. Courcier, Paris.
- (53) Lindley, D. V. (1956). On the measure of the information provided by an experiment. Ann. Math. Statist. 27 986–1005. \MR0083936
- (54) Liu, R. (2009). On some new contributions towards objective priors. Unpublished Ph.D. dissertation. Dept. Statistics, Univ. Florida, Gainesville, FL. \MR2714091
- (55) Morris, C. N. and Normand, S. L. (1992). Hierarchical models for combining information and for meta-analysis. In Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 321–344. Oxford Univ. Press, New York. \MR1380284
- (56) Mukerjee, R. and Dey, D. K. (1993). Frequentist validity of posterior quantiles in the presence of a nuisance parameter: Higher-order asymptotics. Biometrika 80 499–505. \MR1248017
- (57) Mukerjee, R. and Ghosh, M. (1997). Second-order probability matching priors. Biometrika 84 970–975. \MR1625016
- (58) Nachbin, L. (1965). The Haar Integral. Van Nostrand, New York. \MR0175995
- (59) Reid, N. (1996). Likelihood and Bayesian approximation methods (with discussion). In Bayesian Statistics 5 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.) 351–368. Oxford Univ. Press, New York. \MR1425414
- (60) Severini, T. A., Mukerjee, R. and Ghosh, M. (2002). On an exact probability matching property of right-invariant priors. Biometrika 89 952–957. \MR1946524
- (61) Staicu, A.-M. and Reid, N. (2008). On probability matching priors. Canad. J. Statist. 36 613–622. \MR2532255
- (62) Tibshirani, R. J. (1989). Noninformative priors for one parameter of many. Biometrika 76 604–608. \MR1040654
- (63) Welch, B. L. and Peers, H. W. (1963). On formulae for confidence points based on integrals of weighted likelihoods. J. Roy. Statist. Soc. Ser. B 25 318–329. \MR0173309
- (64) Ye, K. (1994). Bayesian reference prior analysis on the ratio of variances for the balanced one-way random effect model. J. Statist. Plann. Inference 41 267–280. \MR1309613