A Numerical Approach to Sequential Multi-Hypothesis Testing for Bernoulli Model
Abstract
In this paper we deal with the problem of sequential testing of multiple hypotheses. The main goal is to minimize the expected sample size (ESS) under restrictions on the error probabilities.
As the criterion of minimization, we take a weighted sum of the ESSs evaluated at some points of interest in the parameter space, aiming at its minimization under restrictions on the error probabilities.
We use a variant of the method of Lagrange multipliers, based on the minimization of an auxiliary objective function (called the Lagrangian) combining the objective function with the restrictions, taken with some constants called multipliers. Subsequently, the multipliers are chosen to make the solution comply with the restrictions.
We develop a computer-oriented method of minimization of the Lagrangian function that provides, depending on the specific choice of the parameter points, optimal tests in different concrete settings, such as the Bayesian and Kiefer-Weiss settings, among others.
To exemplify the proposed methods for the particular case of sampling from a Bernoulli population, we develop a set of computer algorithms for designing sequential tests that minimize the Lagrangian function and for the numerical evaluation of test characteristics like the error probabilities, the ESS, and other related quantities. We implement the algorithms in the R programming language. The program code is available in a public GitHub repository.
For the Bernoulli model, we made a series of computer evaluations related to the optimality of sequential multi-hypothesis tests, in a particular case of three hypotheses. A numerical comparison with the matrix sequential probability ratio test is carried out.
A method of solution of the multi-hypothesis Kiefer-Weiss problem is proposed and applied to a particular case of three hypotheses in the Bernoulli model.
keywords: sequential analysis; hypothesis testing; optimal stopping; optimal sequential tests; multiple hypotheses; SPRT; MSPRT
MSC: 62L10; 62L15; 62F03; 60G40; 62M02
1 Introduction
The problem of testing multiple hypotheses is one of the oldest problems in sequential analysis.
A traditional approach to this problem is Bayesian. It is based on the assumption that the hypotheses come up with some probabilities, called a priori probabilities (see Blackwell and Girshick 1954; Baum and Veeravalli 1994; Tartakovsky, Nikiforov, and Basseville 2015, among many others).
Although the optimal Bayesian solution can, at least theoretically, be characterized on the basis of general principles like dynamic programming or the theory of optimal stopping (Shiryaev 1978; Chow, Robbins, and Siegmund 1971), there seems to exist a strong belief that the theoretical solution is too complex to be useful for practical purposes (see, for example, Baum and Veeravalli 1994; Tartakovsky, Nikiforov, and Basseville 2015). An exception is the case of two simple hypotheses, where the solution is given by the classical sequential probability ratio test (Wald's SPRT; see Wald and Wolfowitz 1948).
For these reasons, approximate solutions of the problem have been proposed. One of the widely used tests, due to its simplicity, is the matrix sequential probability ratio test (MSPRT) by Armitage (1950). Tartakovsky, Nikiforov, and Basseville (2015) showed that the MSPRT is asymptotically optimal, as error probabilities go to 0.
Another approach that has received considerable attention through the decades is the so-called Kiefer-Weiss problem, consisting in the minimization of the maximum value of the expected sample size (ESS) over all possible parameter points (Kiefer and Weiss 1957). Lorden (1980) showed that the Kiefer-Weiss problem can be reduced to the minimization of the ESS evaluated at a specific parameter point, different from the hypothesized parameter values (the so-called modified Kiefer-Weiss problem), and (in essence) used the method of Lagrange multipliers to characterize the solutions to the modified Kiefer-Weiss problem.
A generalization of the Kiefer-Weiss problem to the case of multiple hypotheses has been formulated in Tartakovsky, Nikiforov, and Basseville (2015) (Section 5.3) and received an asymptotic treatment in Section 5.3.1, ibid.
In this paper, we propose an approach to optimal multi-hypothesis testing based on the minimization of the weighted ESS evaluated at parameter points not necessarily coinciding with the hypothesized values, and then use the method of Lagrange multipliers to reduce the problem to the minimization of the Lagrangian function. Depending on the choice of the points for evaluating the ESS in the Lagrangian function, we obtain, in particular, the Bayesian and the Kiefer-Weiss settings, and more.
We apply the method of Novikov (2009b) and characterize the sequential tests minimizing the Lagrangian function, for any choice of multipliers. For practical applications, we propose the use of numerical methods for the Lagrange minimization, the evaluation of the characteristics (the error probabilities, the ESS, etc.), and for finding the multiplier values to comply with the restrictions on the error probabilities.
We illustrate the proposed methods in the particular case of sampling from a Bernoulli population, where we develop a complete set of computer algorithms for all the numerical tasks described above and implement them in the R programming language. The program code is available in a public GitHub repository in Novikov (2023).
Using the developed software, we run a series of numerical comparisons related to optimal properties of sequential multi-hypothesis tests in the Bernoulli model.
First, we evaluate the performance characteristics of the MSPRT for a particular case of three hypotheses. The MSPRT is known to be asymptotically optimal, as the error probabilities go to 0, so the evaluations we carry out give an idea of how small the error probabilities should be in order that the asymptotic formulas for the ESS give a reasonably good approximation to the calculated values. We use a truncation level which, apparently, is sufficient for good approximations of the characteristics of non-truncated MSPRTs.
Another comparison we carry out is also related to the MSPRT. For a number of error probability levels, we numerically find both the MSPRT and the optimal Bayesian test (for the uniform a priori distribution) matching the given error probabilities (up to some precision). The results show a very high efficiency of the MSPRT.
We also propose a method for solving a multi-hypothesis version of the Kiefer-Weiss problem and give a numerical example.
In Section 2, we adapt the results of Novikov (2009b) to the problem of minimization of weighted ESS calculated at arbitrary parameter points. In Section 3, we derive computational formulas for the Bernoulli model. Numerical results are presented in Section 4. Section 5 is a brief list of the results and suggestions for further work.
2 Optimal sequential multi-hypothesis tests
In this section, we formulate some settings for the problem of optimal multi-hypothesis testing and use the general results of Novikov (2009b) for the characterization of the respective optimal solutions.
We assume that independent and identically distributed (i.i.d.) observations $x_1, x_2, \dots, x_n, \dots$ are potentially available to the statistician on a one-by-one basis, providing us with information about the unknown distribution of the data. Let us denote it $P_\theta$, where $\theta$ is some parameter identifying the distribution in a unique manner. We are concerned with the problem of distinguishing between a finite number of simple hypotheses $H_i:\ \theta = \theta_i$, $i = 1, 2, \dots, k$, $k \ge 2$.
We follow Novikov (2009b) in the notation and general assumptions.
In particular, we consider a sequential multi-hypothesis test as a pair $\langle\psi, \phi\rangle$ of a stopping rule $\psi = (\psi_1, \psi_2, \dots, \psi_n, \dots)$ and a (terminal) decision rule $\phi = (\phi_1, \phi_2, \dots, \phi_n, \dots)$.
The elements $\psi_n = \psi_n(x_1, \dots, x_n)$ of the stopping rule are measurable functions taking values in $[0, 1]$, where the value of $\psi_n$ is interpreted as the conditional probability, given the observations, to stop at stage $n$ (randomization at the stopping time).
The elements $\phi_n = (\phi_n^1, \dots, \phi_n^k)$ of the decision rule are measurable functions of the observations such that $\phi_n^j = \phi_n^j(x_1, \dots, x_n) \ge 0$ for all $j = 1, \dots, k$, and $\sum_{j=1}^k \phi_n^j = 1$. Given the data observed, $\phi_n^j$ is interpreted as the conditional probability to accept hypothesis $H_j$, $j = 1, \dots, k$ (randomization at the decision time).
The sequential test starts with observing $x_1$ (stage $n = 1$). At each stage $n$, the test procedure stops with probability $\psi_n(x_1, \dots, x_n)$, given that $x_1, \dots, x_n$ are observed, and proceeds to taking a terminal decision. If it does not stop, the test proceeds to taking one additional observation $x_{n+1}$ and going to stage $n + 1$, etc., until the process eventually stops. When the test stops at some stage $n$ (this $n$ is called the stopping time), a terminal decision is taken accepting hypothesis $H_j$ with probability $\phi_n^j$, conditionally on $x_1, \dots, x_n$. Let us denote $\tau$ the stopping time (as a random variable) generated by the described process.
Let
$$s_n^\psi = (1 - \psi_1)(1 - \psi_2)\cdots(1 - \psi_{n-1})\psi_n, \quad n = 1, 2, \dots$$
($s_1^\psi = \psi_1$ by definition).
Then the expected sample size (ESS) of the test procedure is defined as
$$E_\theta \tau = \sum_{n=1}^\infty n\, E_\theta\, s_n^\psi,$$
provided that $P_\theta(\tau < \infty) = 1$; otherwise it is infinite by definition. Here and throughout the paper, $E_\theta$ is the symbol of mathematical expectation with respect to $P_\theta$. Also, we write $\psi_n$ (without arguments) for $\psi_n(x_1, \dots, x_n)$ whenever convenient, depending on the context; the same convention applies to other functions like $\phi_n^j$, $s_n^\psi$, etc.
Other characteristics of a sequential test are the error probabilities, defined as
$$\alpha_{ij}(\psi, \phi) = P_{\theta_i}(\text{accept } H_j), \quad i \ne j.$$
Another natural, but less detailed, way to define error probabilities is
$$\alpha_i(\psi, \phi) = P_{\theta_i}(\text{reject } H_i) = \sum_{j \ne i} \alpha_{ij}(\psi, \phi), \quad i = 1, \dots, k.$$
In the case of two hypotheses the two definitions are equivalent.
For $k = 2$, the classical result of Wald and Wolfowitz (1948) states that the sequential probability ratio test (SPRT) minimizes both $E_{\theta_1}\tau$ and $E_{\theta_2}\tau$ in the class of sequential tests $\langle\psi, \phi\rangle$ such that
$$\alpha_1(\psi, \phi) \le \alpha_1 \quad\text{and}\quad \alpha_2(\psi, \phi) \le \alpha_2,$$
where $\alpha_1$ and $\alpha_2$ are the error probabilities of the SPRT.
To the best of our knowledge, no direct generalizations of this result exist for $k > 2$. For this reason, we propose weaker settings.
Let us choose some parameter points $\vartheta_1, \dots, \vartheta_m$ and weights $\gamma_1, \dots, \gamma_m$, the latter being non-negative numbers such that $\sum_{l=1}^m \gamma_l = 1$. Formally, we propose to minimize the weighted ESS
$$N(\psi) = \sum_{l=1}^m \gamma_l\, E_{\vartheta_l}\tau \qquad (1)$$
over all sequential multi-hypothesis tests $\langle\psi, \phi\rangle$ subject to
$$\alpha_{ij}(\psi, \phi) \le \alpha_{ij} \quad\text{for all } i \ne j, \qquad (2)$$
or to
$$\alpha_i(\psi, \phi) \le \alpha_i, \quad i = 1, \dots, k, \qquad (3)$$
where $\alpha_{ij}$ and $\alpha_i$ are some positive numbers.
To support this formulation, let us refer to a very practical context of optimal group-sequential testing in the case of two hypotheses. For testing the mean of a normal distribution with known variance, Eales and Jennison (1992) considered five settings for the ESS minimization under restrictions on the error probabilities. Four of them (see Eales and Jennison 1992) are of type (1), with different choices of the number of points $m$, the points $\vartheta_l$ and the weights $\gamma_l$. The remaining one is also a kind of weighted ESS, but of a continuous type, which is quite possible to treat by our method, but for the time being it stays beyond our scope. Generalizations of these settings to the case of more than two hypotheses and infinite horizons are straightforward.
Given that the formulated problem is a problem of minimization under restrictions, we want to use the method of Lagrange multipliers. By the principle of the Lagrange method, to minimize $N(\psi)$ under restrictions (2) one should be able to minimize the Lagrangian function
$$L(\psi, \phi) = N(\psi) + \sum_{i=1}^k \sum_{j \ne i} \lambda_{ij}\, \alpha_{ij}(\psi, \phi) \qquad (4)$$
with any constant multipliers $\lambda_{ij} \ge 0$, and then to find the values of the multipliers for which the equalities in (2) hold. Respectively, the problem of minimization under conditions (3) reduces to the minimization of
$$L(\psi, \phi) = N(\psi) + \sum_{i=1}^k \lambda_i\, \alpha_i(\psi, \phi) \qquad (5)$$
with multipliers $\lambda_i \ge 0$, $i = 1, \dots, k$, and finding the values of $\lambda_i$ for which the equalities in (3) hold. It is easy to see that (5) is a particular case of (4) with $\lambda_{ij} = \lambda_i$ for all $j \ne i$, so in what follows we focus on the minimization of (4).
It is not difficult to see that in the particular case when $m = k$ and $\vartheta_i = \theta_i$, $i = 1, \dots, k$, the Lagrangian function (4) can be considered a Bayesian risk (see, for example, Baum and Veeravalli 1994, among many others) corresponding to the a priori distribution $(\gamma_1, \dots, \gamma_k)$ on the set of parameter points $\theta_1, \dots, \theta_k$, where $\lambda_{ij}/\gamma_i$ can be interpreted as the conditional loss from accepting $H_j$ when $H_i$ is true. Thus, the minimization of (4) readily solves the problem of optimal Bayesian tests for $k$ hypotheses.
The well-known modified Kiefer-Weiss problem (see, for example, Lorden 1980) also easily embeds into this scheme by taking $k = 2$, $m = 1$, and $\vartheta_1$ between the hypothesized values $\theta_1$ and $\theta_2$. And this gives rise to a multi-hypothesis version of the Kiefer-Weiss problem, starting from a modified version of it, with points $\vartheta_1, \dots, \vartheta_m$ lying between the hypothesized values and with some weights $\gamma_1, \dots, \gamma_m$, adding up to 1, as additional parameters. To our knowledge, there are no known non-asymptotic solutions of the multi-hypothesis Kiefer-Weiss problem, and this scheme could be a basis for one.
Now, let us characterize the tests which minimize the Lagrangian function (4) for a given set of multipliers. It is worth noting that $L(\psi, \phi)$ implicitly depends on the Lagrange multipliers; therefore, all the constructions below will also (implicitly) depend on $\lambda_{ij}$, as well as on other elements of the problem setting, like $\vartheta_l$, $\gamma_l$, etc.
First of all, in a very standard way it can be shown that there is a universal decision rule $\phi$ that minimizes $L(\psi, \phi)$ whatever the fixed stopping rule $\psi$ (see Novikov 2009b).
Let us assume that $P_\theta$ is absolutely continuous with respect to a $\sigma$-finite measure $\mu$ and denote $f_\theta = dP_\theta/d\mu$ its Radon-Nikodym derivative. Also denote $f_\theta^n = f_\theta^n(x_1, \dots, x_n) = f_\theta(x_1)\cdots f_\theta(x_n)$, and let $g_n = \sum_{l=1}^m \gamma_l f_{\vartheta_l}^n$. Define
$$l_n = \min_{1 \le j \le k} \sum_{i \ne j} \lambda_{ij}\, f_{\theta_i}^n. \qquad (6)$$
Let a decision rule $\phi$ be such that
$$\phi_n^j = \mathbb{1}\Big\{ \sum_{i \ne j} \lambda_{ij}\, f_{\theta_i}^n = l_n \Big\} \qquad (7)$$
(in the case of equality in (7) for various $j$, $\phi_n$ can be arbitrarily randomized between those $j$ sharing this equality, with the only requirement that $\sum_{j=1}^k \phi_n^j = 1$). It follows from Theorem 3 in Novikov (2009b) that
$$L(\psi, \phi) \ge \sum_{n=1}^\infty \int s_n^\psi\, \big( n\, g_n + l_n \big)\, d\mu^n =: L(\psi), \qquad (8)$$
with equality for the decision rule defined by (7) (here $\mu^n$ denotes the $n$-fold product of $\mu$), and we have an optimal stopping problem of minimizing (8) over the stopping rules $\psi$.
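For instance, for $k = 2$, each sum in (6) consists of a single term, and the rule (7) becomes a likelihood-ratio comparison:
$$\phi_n^1 = \mathbb{1}\left\{ \lambda_{21} f_{\theta_2}^n \le \lambda_{12} f_{\theta_1}^n \right\} = \mathbb{1}\left\{ \frac{f_{\theta_2}^n}{f_{\theta_1}^n} \le \frac{\lambda_{12}}{\lambda_{21}} \right\}, \qquad \phi_n^2 = 1 - \phi_n^1$$
(up to randomization on ties), i.e. the terminal decision has the same form as that of an SPRT, with the threshold determined by the ratio of the multipliers.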
The problem is first solved in the class of truncated tests, i.e. those not taking more than a fixed finite number $N$ of observations. Let $\Delta^N$ be the set of all stopping rules $\psi$ such that $\psi_N \equiv 1$.
Let us define an operator $\mathcal{I}$ in the following way. For any measurable non-negative function $v = v(x_1, \dots, x_{n+1})$, let
$$(\mathcal{I}v)(x_1, \dots, x_n) = \min\left\{ l_n,\; g_n + \int v(x_1, \dots, x_n, x)\, d\mu(x) \right\}.$$
Now, starting from
$$V_N^N = l_N,$$
define recursively over $n = N-1, N-2, \dots, 1$
$$V_n^N = \mathcal{I} V_{n+1}^N.$$
Then for any $\psi \in \Delta^N$
$$L(\psi) \ge 1 + \int V_1^N\, d\mu, \qquad (9)$$
and there is an equality in (9) if for all $n = 1, \dots, N-1$
$$\psi_n = \mathbb{1}\left\{ l_n \le g_n + \int V_{n+1}^N(x_1, \dots, x_n, x)\, d\mu(x) \right\}, \qquad (10)$$
where $\mathbb{1}\{A\}$ denotes the indicator function of the event $A$. In this way, the stopping rule $\psi$ in (10) minimizes $L(\psi)$ in the class of truncated stopping rules $\Delta^N$. Any $\psi_n$ may be arbitrarily randomized on the samples for which there is an equality in the inequality under the indicator function in (10); this gives the same value of $L(\psi)$. The details can be found in Novikov (2009b).
The optimal non-truncated tests can be found by passing to the limit, as $N \to \infty$, provided that
$$\int l_n\, d\mu^n \to 0, \quad\text{as } n \to \infty \qquad (11)$$
(see Remark 7 in Novikov 2009b). In the case of i.i.d. observations we are considering in this paper, (11) holds without any additional conditions. The formal proof of this fact can be found in the Appendix.
The construction of the optimal non-truncated test is as follows. First of all, it is easy to see that $V_n^{N+1} \le V_n^N$ for all $n \le N$, so there exists $V_n = \lim_{N \to \infty} V_n^N$. Then it follows from (9) that
$$\inf_\psi L(\psi) \ge 1 + \int V_1\, d\mu, \qquad (12)$$
and the right-hand side in (12) is attained if
$$\psi_n = \mathbb{1}\left\{ l_n \le g_n + \int V_{n+1}(x_1, \dots, x_n, x)\, d\mu(x) \right\} \qquad (13)$$
for all $n = 1, 2, \dots$. In this way, we obtain tests with $\psi$ satisfying (13) and $\phi$ satisfying (7) which minimize the Lagrangian function $L(\psi, \phi)$.
We propose using numerical methods for the construction of the truncated tests minimizing the Lagrangian function. For the Bernoulli model, we develop numerical algorithms for this and implement them in the form of a computer program in the R programming language. Having the means for minimizing the Lagrangian function, to obtain optimal sequential tests in the conditional setting (i.e. those minimizing $N(\psi)$ under conditions (2)) we need to find Lagrange multipliers $\lambda_{ij}$, $i \ne j$, providing a test (7)-(10) for which the equalities in (2) hold. Respectively, the minimization of $N(\psi)$ under conditions (3) reduces to finding $\lambda_i$, $i = 1, \dots, k$, such that for the test in (7)-(10), with $\lambda_{ij} = \lambda_i$ for $j \ne i$, there are equalities in all the inequalities in (3).
In no way can one be sure that such multipliers exist for every combination of the levels $\alpha_{ij}$ (not even in the classical case of two hypotheses). On the other hand, every combination of $\lambda_{ij}$ employed in (7)-(10) produces an optimal test in the conditional setting, if one takes the error probabilities of the obtained test as the levels in (2) (i.e. $\alpha_{ij} = \alpha_{ij}(\psi, \phi)$) (or, respectively, as the levels in (3), that is, $\alpha_i = \alpha_i(\psi, \phi)$).
Having at hand a computer program for the Lagrangian minimization, finding the multipliers providing a tolerable level of the error probabilities is a question of some trial-and-error look-ups, because, grosso modo, larger values of $\lambda_{ij}$ make $\alpha_{ij}(\psi, \phi)$ smaller. As an alternative, general-purpose computer algorithms of numerical optimization, for example, the method of Nelder and Mead (1965), can be used to get as close as possible to the desired values of $\alpha_{ij}$ by moving the input values of $\lambda_{ij}$.
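For illustration, a minimal sketch of such a fitting loop in R follows; the routine lagr.test.errors() is a hypothetical stand-in (not a function of any published package) for whatever implementation constructs the test (7)-(10) for given multipliers and returns the vector of its error probabilities.

```r
# Sketch: fit the Lagrange multipliers so that the attained error
# probabilities approach the prescribed levels alpha.star.
# lagr.test.errors() is a hypothetical stand-in: it should construct the
# test (7)-(10) for the given multipliers and return its error probabilities.
fit.multipliers <- function(alpha.star, lambda0) {
  objective <- function(log.lambda) {
    alpha <- lagr.test.errors(exp(log.lambda))  # exp() keeps multipliers positive
    sum((alpha - alpha.star)^2)                 # squared distance to the targets
  }
  res <- optim(log(lambda0), objective, method = "Nelder-Mead")
  exp(res$par)                                  # the fitted multipliers
}
```

Working on the logarithmic scale is a convenience: it keeps the multipliers positive without imposing explicit constraints on the Nelder-Mead search.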
For the non-truncated tests, we propose using approximations by truncated tests. We illustrate all these techniques in the particular case of the Bernoulli distribution in the subsequent sections.
3 Optimal sequential tests for sampling from a Bernoulli population
In this section, we apply the general results of Section 2 to the model of Bernoulli observations. In this way we obtain a complete set of computer algorithms for computing the tests that minimize the Lagrangian function, and their numerical characteristics, in the Bernoulli model. For the determination of the values of the Lagrange multipliers, general-purpose computer algorithms will be used.
3.1 Construction of optimal tests
We apply the results of Section 2 to the model of sampling from a Bernoulli population, in which case the observations take values in $\{0, 1\}$, $P_\theta(x_n = 1) = \theta$ with $\theta \in (0, 1)$, and $f_\theta(x) = \theta^x (1 - \theta)^{1 - x}$, $x \in \{0, 1\}$, with $\mu$ being the counting measure on $\{0, 1\}$.
Let
$$b_\theta^n(s) = \binom{n}{s} \theta^s (1 - \theta)^{n - s}, \quad s = 0, 1, \dots, n,$$
be the probability mass function corresponding to the sufficient statistic $S_n = x_1 + \dots + x_n$ (binomial distribution with parameters $n$ and $\theta$). Define
$$l_n(s) = \min_{1 \le j \le k} \sum_{i \ne j} \lambda_{ij}\, b_{\theta_i}^n(s) \qquad (14)$$
and let
$$g_n(s) = \sum_{l=1}^m \gamma_l\, b_{\vartheta_l}^n(s).$$
Let us define the operator $\mathcal{I}$, defined for any function $v = v(s)$, $s = 0, 1, \dots, n + 1$, as
$$(\mathcal{I}v)(s) = \min\left\{ l_n(s),\; g_n(s) + \frac{n + 1 - s}{n + 1}\, v(s) + \frac{s + 1}{n + 1}\, v(s + 1) \right\} \qquad (15)$$
for $s = 0, 1, \dots, n$. Starting from
$$V_N^N(s) = l_N(s), \quad s = 0, 1, \dots, N,$$
define recursively for $n = N - 1, N - 2, \dots, 1$
$$V_n^N = \mathcal{I} V_{n+1}^N. \qquad (16)$$
Proposition 3.1.
For any $n = 1, \dots, N$,
$$V_n^N(x_1, \dots, x_n) = V_n^N(s) \Big/ \binom{n}{s}, \qquad (17)$$
where $s = x_1 + \dots + x_n$, the function on the left-hand side being the one constructed in Section 2, and the one on the right-hand side being defined by (16).
It is easy to see that the optimal decision rule (7) can be expressed in terms of the sufficient statistic $S_n$:
$$\phi_n^j = \mathbb{1}\Big\{ \sum_{i \ne j} \lambda_{ij}\, b_{\theta_i}^n(S_n) = l_n(S_n) \Big\}, \qquad (18)$$
and it follows from Proposition 3.1 that the optimal truncated stopping rule (10) can be as well:
$$\psi_n = \mathbb{1}\left\{ l_n(S_n) \le g_n(S_n) + \frac{n + 1 - S_n}{n + 1}\, V_{n+1}^N(S_n) + \frac{S_n + 1}{n + 1}\, V_{n+1}^N(S_n + 1) \right\} \qquad (19)$$
for $n = 1, \dots, N - 1$, and the optimal non-truncated one as
$$\psi_n = \mathbb{1}\left\{ l_n(S_n) \le g_n(S_n) + \frac{n + 1 - S_n}{n + 1}\, V_{n+1}(S_n) + \frac{S_n + 1}{n + 1}\, V_{n+1}(S_n + 1) \right\}, \qquad (20)$$
with $V_{n+1} = \lim_{N \to \infty} V_{n+1}^N$, for all $n = 1, 2, \dots$
Formulas (18)-(19) provide a truncated test which has an exact optimality property (neither asymptotic nor approximate), whatever be $N$, the hypothesized values $\theta_i$, the points $\vartheta_l$ with their weights $\gamma_l$, and the Lagrange multipliers $\lambda_{ij}$.
Furthermore, they suggest a computational algorithm for evaluating the elements of the optimal sequential test: start from step $N$, calculating $l_N(s)$ for all $s = 0, 1, \dots, N$ (which is based on the weighted sums of binomial probabilities with parameters $N$ and $\theta_i$, $i = 1, \dots, k$, appearing in (18)), and recurrently use (16) for steps $n = N - 1, N - 2, \dots, 1$ to calculate $V_n^N(s)$ for all $s = 0, 1, \dots, n$, marking those $(n, s)$ for which
$$l_n(s) > g_n(s) + \frac{n + 1 - s}{n + 1}\, V_{n+1}^N(s) + \frac{s + 1}{n + 1}\, V_{n+1}^N(s + 1)$$
as belonging to the continuation region (by virtue of (19)); for all other $(n, s)$, the terminal decision based on (18) is stored as the one corresponding to $(n, s)$.
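To make the scheme concrete, here is a self-contained R sketch of this backward induction; the interface (argument names and the returned structure) is ours for illustration and does not reproduce that of the repository code.

```r
# Backward induction (14)-(16) for the Bernoulli model.
# theta  -- hypothesized values; vtheta, gamma -- points and weights in (1);
# lambda -- k x k matrix of multipliers (lambda[i,j], zero on the diagonal);
# N      -- truncation level (N >= 2).
design.test <- function(theta, vtheta, gamma, lambda, N) {
  k <- length(theta)
  l.n <- function(n) {  # l_n(s) and the minimizing j, s = 0..n, see (14), (18)
    B <- sapply(theta, function(t) dbinom(0:n, n, t))  # (n+1) x k matrix
    W <- B %*% lambda   # column j contains sum over i of lambda[i,j]*b_i(s)
    list(value = apply(W, 1, min), decision = apply(W, 1, which.min))
  }
  g.n <- function(n)    # g_n(s), s = 0..n
    as.vector(sapply(vtheta, function(t) dbinom(0:n, n, t)) %*% gamma)
  ln <- l.n(N)
  V <- ln$value                       # V_N^N = l_N, start of the recursion (16)
  cont <- dec <- vector("list", N)
  cont[[N]] <- rep(FALSE, N + 1)      # the test always stops at step N
  dec[[N]] <- ln$decision
  for (n in (N - 1):1) {
    s <- 0:n
    ln <- l.n(n)
    # continuation value in (15): g_n(s) plus the weighted average of V_{n+1}
    cv <- g.n(n) + ((n + 1 - s) * V[s + 1] + (s + 1) * V[s + 2]) / (n + 1)
    cont[[n]] <- ln$value > cv        # continuation region, by virtue of (19)
    dec[[n]] <- ln$decision           # terminal decisions per (18)
    V <- pmin(ln$value, cv)           # V_n^N, see (15)-(16)
  }
  list(cont = cont, dec = dec, lagrangian = 1 + V[1] + V[2])
}
```

For example, design.test(theta = c(0.4, 0.5, 0.6), vtheta = c(0.4, 0.5, 0.6), gamma = rep(1, 3)/3, lambda = 100 * (matrix(1, 3, 3) - diag(3)), N = 100) would produce a Bayesian-type test for three arbitrarily chosen, purely illustrative hypothesized values.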
We implemented this algorithm in the form of a function in the R programming language (R Core Team 2013); the source code is available in a public GitHub repository in Novikov (2023). The documentation can be found in the repository.
Making $N$ large enough we can approximate the optimal non-truncated test corresponding to (20). In particular, this can be helpful when the optimal infinite-horizon test is in fact truncated. This happens, for example, in the case of the modified Kiefer-Weiss problem, corresponding (in our notation) to the case of two hypotheses with $k = 2$, $m = 1$, and $\theta_1 < \vartheta_1 < \theta_2$ (see Lorden 1980). Below, in Section 4, we give another example of this possibility, in a multi-hypothesis context.
Although the test obtained in this subsection does not have a closed form (instead, all the values of the optimal rules (18)-(19) are stored in computer memory), we believe it can be quite practical for many applications that do not require more than some thousands of steps. If an application does, one could try the algorithm with the maximum number of steps one's computer will withstand, to see whether the performance requirements can be met with that reduced number of steps. If not, more computing power might be needed.
3.2 Evaluation of performance characteristics
We derive in this part computational formulas for performance characteristics of sequential multi-hypothesis tests for the Bernoulli model.
Let $\langle\psi, \phi\rangle$ be any sequential multi-hypothesis test based on the sufficient statistics: $\psi_n = \psi_n(S_n)$, $\phi_n^j = \phi_n^j(S_n)$, with $S_n = x_1 + \dots + x_n$. The test is arbitrary but will be held fixed throughout this subsection, so it will be suppressed in the notation.
Proposition 3.2.
Suppose the test is truncated at a level $N$ (that is, $\psi_N \equiv 1$). For a fixed $j$ and a parameter value $\theta$, define
$$u_N(s) = \phi_N^j(s), \quad s = 0, 1, \dots, N, \qquad (21)$$
and, recursively over $n = N - 1, N - 2, \dots, 1$,
$$u_n(s) = \psi_n(s)\, \phi_n^j(s) + (1 - \psi_n(s))\left[ (1 - \theta)\, u_{n+1}(s) + \theta\, u_{n+1}(s + 1) \right], \quad s = 0, 1, \dots, n.$$
Then the probability to accept hypothesis $H_j$, given that the true parameter value is $\theta$, can be calculated as
$$P_\theta(\text{accept } H_j) = (1 - \theta)\, u_1(0) + \theta\, u_1(1).$$
In particular, $\alpha_{ij}(\psi, \phi)$ is obtained by taking $\theta = \theta_i$, $i \ne j$.
Proof. Let us denote $A_n$ the event meaning that hypothesis $H_j$ is accepted at or after step $n$ (following the rules of the test $\langle\psi, \phi\rangle$), $n = 1, \dots, N$.
Let us first prove, by induction over $n = N, N - 1, \dots, 1$, that
$$P_\theta(A_n \mid x_1, \dots, x_n) = u_n(S_n). \qquad (22)$$
For $n = N$, (22) follows from (21) and the definition of the decision rule $\phi$.
Let us suppose now that (22) holds for some $n + 1 \le N$. Then
$$P_\theta(A_n \mid x_1, \dots, x_n) = \psi_n(S_n)\, \phi_n^j(S_n) + (1 - \psi_n(S_n))\, E_\theta\big( P_\theta(A_{n+1} \mid x_1, \dots, x_{n+1}) \mid x_1, \dots, x_n \big). \qquad (23)$$
But, by the supposition,
$$P_\theta(A_{n+1} \mid x_1, \dots, x_{n+1}) = u_{n+1}(S_{n+1}).$$
Therefore, (23) equals
$$\psi_n(S_n)\, \phi_n^j(S_n) + (1 - \psi_n(S_n))\left[ (1 - \theta)\, u_{n+1}(S_n) + \theta\, u_{n+1}(S_n + 1) \right] = u_n(S_n).$$
Now that (22) is proved, we apply it for $n = 1$ and have
$$P_\theta(A_1 \mid x_1) = u_1(S_1),$$
thus
$$P_\theta(\text{accept } H_j) = P_\theta(A_1) = E_\theta\, u_1(S_1) = (1 - \theta)\, u_1(0) + \theta\, u_1(1). \qquad \square$$
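A minimal R sketch of this recursion follows, under an assumed representation (ours, for illustration only) in which psi[[n]][s+1] is the stopping probability and phi[[n]][s+1] the probability of accepting the fixed hypothesis $H_j$ at the state $(n, s)$.

```r
# Proposition 3.2: probability to accept H_j under the parameter value theta,
# for a test truncated at N (the test stops at step N with probability one).
accept.prob <- function(psi, phi, theta) {
  N <- length(psi)
  u <- phi[[N]]                                  # u_N = phi_N, see (21)
  for (n in (N - 1):1) {
    s <- 0:n
    u <- psi[[n]] * phi[[n]] +                   # stop and accept at step n
      (1 - psi[[n]]) * ((1 - theta) * u[s + 1] + theta * u[s + 2])
  }
  (1 - theta) * u[1] + theta * u[2]              # E_theta u_1(S_1)
}
```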
In an analogous way, the characteristics of the sample number can be treated.
Proposition 3.3.
For any stopping rule $\psi$ and any parameter value $\theta$, define
$$w_N(s) = 1 - \psi_N(s), \quad s = 0, 1, \dots, N, \qquad (24)$$
and, recursively over $n = N - 1, N - 2, \dots, 1$,
$$w_n(s) = (1 - \psi_n(s))\left[ (1 - \theta)\, w_{n+1}(s) + \theta\, w_{n+1}(s + 1) \right], \qquad (25)$$
$s = 0, 1, \dots, n$. Then $P_\theta(\tau > N) = (1 - \theta)\, w_1(0) + \theta\, w_1(1)$.
Proof. Let us denote $C_n$, $n = 1, \dots, N$, the event meaning that the test following the stopping rule $\psi$ does not stop at any step between $n$ and $N$, inclusively.
Let us first prove, by induction over $n = N, N - 1, \dots, 1$, that
$$P_\theta(C_n \mid x_1, \dots, x_n) = w_n(S_n). \qquad (26)$$
For $n = N$, (26) follows from (24). Let us suppose now that (26) holds for some $n + 1 \le N$. Then
$$P_\theta(C_n \mid x_1, \dots, x_n) = (1 - \psi_n(S_n))\, E_\theta\big( P_\theta(C_{n+1} \mid x_1, \dots, x_{n+1}) \mid x_1, \dots, x_n \big) = (1 - \psi_n(S_n))\left[ (1 - \theta)\, w_{n+1}(S_n) + \theta\, w_{n+1}(S_n + 1) \right] = w_n(S_n).$$
Now that (26) is proved, we apply it for $n = 1$ and have
$$P_\theta(C_1 \mid x_1) = w_1(S_1),$$
thus
$$P_\theta(\tau > N) = P_\theta(C_1) = E_\theta\, w_1(S_1) = (1 - \theta)\, w_1(0) + \theta\, w_1(1). \qquad \square$$
It follows from Proposition 3.3 that if $\psi$ is truncated at $N$ (that is, $\psi_N \equiv 1$), then
$$E_\theta \tau = 1 + \sum_{n=1}^{N-1} P_\theta(\tau > n), \qquad (27)$$
each $P_\theta(\tau > n)$ being computed by Proposition 3.3 with $N$ replaced by $n$. If a stopping rule $\psi$ is not truncated, we can use (27) to approximate $E_\theta\tau$, noting that $E_\theta \min\{\tau, N\} \to E_\theta \tau$, as $N \to \infty$, by the theorem of monotone convergence, and that $\min\{\tau, N\}$ corresponds to the truncated rule $\psi^N = (\psi_1, \dots, \psi_{N-1}, 1, \dots)$. Applying (27) to $\psi^N$ we see that $E_\theta \min\{\tau, N\} = 1 + \sum_{n=1}^{N-1} P_\theta(\tau > n)$, thus
$$E_\theta \tau = 1 + \sum_{n=1}^{\infty} P_\theta(\tau > n).$$
Dealing with expectations, a more direct way to evaluate (27) is to incorporate the summation in (27) into the inductive evaluations in (25). This is done in the following proposition.
Proposition 3.4.
For a stopping rule $\psi$ and a parameter value $\theta$, define
$$e_N(s) = N, \quad s = 0, 1, \dots, N,$$
and, recursively over $n = N - 1, N - 2, \dots, 1$,
$$e_n(s) = \psi_n(s)\, n + (1 - \psi_n(s))\left[ (1 - \theta)\, e_{n+1}(s) + \theta\, e_{n+1}(s + 1) \right],$$
$s = 0, 1, \dots, n$. Then
$$E_\theta \min\{\tau, N\} = (1 - \theta)\, e_1(0) + \theta\, e_1(1). \qquad (28)$$
Again, passing to the limit in (28), as $N \to \infty$, we obtain
$$E_\theta \tau = \lim_{N \to \infty}\left[ (1 - \theta)\, e_1(0) + \theta\, e_1(1) \right].$$
We implemented the algorithms presented in this subsection in the R programming language; the source code is available in Novikov (2023).
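For illustration, the recursion of Proposition 3.4 admits the following minimal R sketch (under the same assumed test representation as in the sketch above, not the repository code):

```r
# Proposition 3.4: E_theta of min(tau, N) for a stopping rule psi given
# as a list of vectors psi[[n]][s+1], s = 0..n, n = 1..N.
ess <- function(psi, theta) {
  N <- length(psi)
  e <- rep(N, N + 1)                             # e_N(s) = N: forced stop at N
  for (n in (N - 1):1) {
    s <- 0:n
    e <- psi[[n]] * n +                          # stop now: tau = n
      (1 - psi[[n]]) * ((1 - theta) * e[s + 1] + theta * e[s + 2])
  }
  (1 - theta) * e[1] + theta * e[2]              # the right-hand side of (28)
}
```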
It should be noted that the algorithms for performance evaluation in this subsection are applicable to any truncated test based on the sufficient statistics, and not only to the optimal tests of Subsection 3.1. In particular, we included in the program implementation a function producing the structure of the (truncated version of the) matrix sequential probability ratio test (MSPRT), enabling in this way all the performance evaluations of this subsection for the truncated MSPRT as well. Because an MSPRT for two hypotheses is an SPRT, this also covers the performance evaluation of truncated SPRTs. Also, an implementation of Monte Carlo simulation for performance evaluation is provided as a part of the program code.
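A Monte Carlo evaluation in the spirit of the one mentioned above can be sketched as follows (again under our illustrative representation; phi.dec[[n]][s+1] stands for the index of the hypothesis accepted at the state $(n, s)$):

```r
# Simulate one run of a truncated test under Bernoulli(theta) observations.
simulate.one <- function(psi, phi.dec, theta) {
  N <- length(psi)
  s <- 0
  for (n in 1:N) {
    s <- s + rbinom(1, 1, theta)                  # next observation
    if (n == N || runif(1) <= psi[[n]][s + 1])    # (randomized) stopping
      return(c(tau = n, dec = phi.dec[[n]][s + 1]))
  }
}

# Monte Carlo estimates of the ESS and of the acceptance frequencies.
monte.carlo <- function(psi, phi.dec, theta, nrep = 10000) {
  sims <- replicate(nrep, simulate.one(psi, phi.dec, theta))
  list(ESS = mean(sims["tau", ]), accept.freq = table(sims["dec", ]) / nrep)
}
```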
4 Applications. Numerical results
In this section we apply the theoretical results of the preceding sections to construction and performance evaluation of sequential tests in the Bernoulli model.
4.1 Efficiency of the MSPRT
In this subsection, we evaluate the performance of the widely used matrix sequential probability ratio test (MSPRT) for multiple hypotheses and numerically compare its expected sample size characteristics with their asymptotic approximations, in a particular case of testing three hypotheses about the parameter of the Bernoulli distribution.
The idea of the MSPRT is to simultaneously run SPRTs for each pair of the hypothesized parameter values, stopping only when all the SPRTs decide in favour of one certain hypothesis. Let $A_{ij}$, $i \ne j$, be some positive constants. Then the stopping time of the MSPRT (let us denote it $\tau_M$) is defined as
$$\tau_M = \min\left\{ n \ge 1:\ \text{there is a } j \text{ such that } f_{\theta_j}^n \ge A_{ij}\, f_{\theta_i}^n \text{ for all } i \ne j \right\}, \qquad (29)$$
in which case hypothesis $H_j$ is accepted. Armitage (1950) showed that the MSPRT stops with probability one under each of the hypotheses, and that
$$\alpha_{ij} \le 1/A_{ij}, \quad i \ne j, \qquad (30)$$
where $\alpha_{ij}$ is the respective error probability of the MSPRT (29).
For $k = 2$, the MSPRT is an ordinary SPRT, and (30) gives the very well-known Wald inequalities for its error probabilities.
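In the Bernoulli case the likelihood ratios in (29) depend on the observations only through $(n, S_n)$, so the MSPRT rules can be tabulated; a self-contained R sketch (with our illustrative interface, not that of the repository) follows.

```r
# Tabulate the MSPRT rules (29) for the Bernoulli model up to step N:
# rules[[n]][s+1] = 0 means "continue"; j > 0 means "stop and accept H_j".
# A is the matrix of thresholds: A[i,j] is used when comparing H_j against H_i.
msprt.rules <- function(theta, A, N) {
  k <- length(theta)
  rules <- vector("list", N)
  for (n in 1:N) {
    s <- 0:n
    logf <- sapply(theta, function(t) s * log(t) + (n - s) * log(1 - t))
    acc <- rep(0L, n + 1)
    for (j in 1:k) {
      ok <- rep(TRUE, n + 1)
      for (i in (1:k)[-j])        # require f_j^n >= A[i,j] * f_i^n for all i != j
        ok <- ok & (logf[, j] - logf[, i] >= log(A[i, j]))
      acc[ok & acc == 0L] <- j    # accept H_j where all the inequalities hold
    }
    rules[[n]] <- acc
  }
  rules
}
```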
To get numerical results, we consider a particular case of $k = 3$ hypotheses about the success probability $\theta$ of the Bernoulli distribution, with hypothesized values $\theta_1 < \theta_2 < \theta_3$.
First of all, we will be interested in calculating the performance characteristics of the MSPRT in this particular case. It is easy to see that the rules of the MSPRT are based, in the Bernoulli case, on the sufficient statistics $S_n$, $n = 1, 2, \dots$, so the formulas of Subsection 3.2 apply to the truncated version of the MSPRT. Strictly speaking, the terminal decision at the last step, when the MSPRT is truncated at time $N$, is not defined. But we calculate the exact probability that the MSPRT does not come to a decision at any earlier stage, and make this probability so small (choosing $N$ large enough) that any concrete decision taken at the last step will affect neither the numerical values of the error probabilities nor those of the ESS under any one of the hypotheses.
In Tartakovsky, Nikiforov, and Basseville (2015), asymptotic formulas are obtained for the ESS of the MSPRT, so we consider this example a good opportunity to juxtapose the actually obtained and the asymptotic values of the corresponding numerical characteristics, calculated in various practical scenarios. We use the thresholds which make the MSPRT in (29) asymptotically optimal, as the error probabilities go to zero (see Tartakovsky, Nikiforov, and Basseville 2015, Section 4.3.1).
The results of the evaluations are presented in Table 1, where $\alpha_1$, $\alpha_2$, $\alpha_3$ are the evaluated error probabilities and $E_{\theta_1}\tau_M$, $E_{\theta_2}\tau_M$, $E_{\theta_3}\tau_M$ the evaluated ESSs of the MSPRT, and $r_1$, $r_2$, $r_3$ are the respective ratios between $E_{\theta_i}\tau_M$ and the asymptotic expression for it (according to Tartakovsky, Nikiforov, and Basseville 2015, p. 196), $i = 1, 2, 3$; the first column contains the nominal error probability level $\alpha$ used for setting the thresholds.
Table 1: Operating characteristics of the MSPRT and their ratios to the asymptotic approximations.

| $\alpha$ | $\alpha_1$ | $\alpha_2$ | $\alpha_3$ | $E_{\theta_1}\tau_M$ | $E_{\theta_2}\tau_M$ | $E_{\theta_3}\tau_M$ | $r_1$ | $r_2$ | $r_3$ |
|---|---|---|---|---|---|---|---|---|---|
| 0.1 | 0.026091 | 0.089375 | 0.029442 | 134.5 | 211.8 | 142.5 | 1.26 | 1.85 | 1.26 |
| 0.05 | 0.013039 | 0.045384 | 0.014829 | 169.4 | 264.9 | 180 | 1.22 | 1.78 | 1.23 |
| 0.025 | 0.006498 | 0.022826 | 0.007467 | 203.5 | 313.2 | 216.2 | 1.19 | 1.71 | 1.2 |
| 0.01 | 0.002575 | 0.009172 | 0.002981 | 247.4 | 372.4 | 262.7 | 1.16 | 1.63 | 1.16 |
| 0.005 | 0.001291 | 0.004596 | 0.001504 | 280 | 414.1 | 297.4 | 1.14 | 1.57 | 1.15 |
| 0.002 | 0.0005 | 0.00184 | 0.000594 | 322.8 | 468.9 | 342.8 | 1.12 | 1.52 | 1.13 |
| 0.001 | 0.000248 | 0.00092 | 0.000296 | 355.1 | 508.8 | 376.9 | 1.11 | 1.48 | 1.11 |
| 0.0005 | 0.000123 | 0.00046 | 0.000147 | 387.2 | 548.5 | 411 | 1.1 | 1.45 | 1.1 |
| 5E-07 | 1.14E-07 | 4.6E-07 | 1.47E-07 | 707.1 | 928.5 | 749.5 | 1.05 | 1.29 | 1.05 |
| 5E-09 | 1.1E-09 | 4.6E-09 | 1.46E-09 | 920.3 | 1175.5 | 975.2 | 1.04 | 1.24 | 1.04 |
4.2 Bayes vs. MSPRT
Now, let us numerically compare the optimal multi-hypothesis test with the MSPRT, provided both have the same levels of error probabilities $\alpha_i(\psi, \phi)$, $i = 1, 2, 3$. To this end, we numerically find the Lagrange multipliers providing the best approximation of the error probabilities of the test (7)-(10) to the given levels, with respect to a distance between the vector of the attained error probabilities and the vector of the desired ones.
The gradient-free optimization method of Nelder and Mead (1965) works well for this fitting. We use $\vartheta_i = \theta_i$ and $\gamma_i = 1/3$, $i = 1, 2, 3$, as the criterion of minimization in (1), i.e. we evaluate the Bayesian tests with the "least informative" (uniform) prior distribution. The results of the fitting are presented in Table 2 (upper block).
As a competing MSPRT we take the test (29), with the thresholds not depending on the first index (i.e. $A_{ij} = A_j$ for all $i \ne j$), and carry out the same fitting procedure as above, now over the thresholds. The results are presented in the middle block of Table 2.
In the lower block of Table 2 we place the ratios between the ESS of the MSPRT ($E_{\theta_i}\tau_M$) and that of the respective Bayesian test ($E_{\theta_i}\tau$), under each one of the hypotheses.
The results show an astonishingly high efficiency of the MSPRT, especially for small $\alpha$. This would not be so surprising for two hypotheses, because in that case any MSPRT is in fact an SPRT, and any Bayesian test is an SPRT too (see Wald and Wolfowitz 1948), so fitting both tests numerically to given error probabilities should give a relative efficiency of about 100%. But we see that largely the same happens for three hypotheses, at least in the case of equal error probabilities we are examining.
The question arises whether there exist Bayesian tests “essentially” outperforming MSPRTs, in the case of three hypotheses. The answer is “yes”, as the following numerical example suggests.
In a rather straightforward way, we found a Bayesian test corresponding to very "unbalanced" weights $\gamma_i$, and an MSPRT having the same error probabilities (the former obtained by a choice of the Lagrange multipliers, the latter by a choice of its thresholds). Comparing the two tests, the weighted ESS (1) evaluated nearly 29% larger for the MSPRT in comparison with the Bayesian test.
The most desirable property an optimal test could have is minimizing the ESS under each one of the hypotheses, in the class of tests subject to restrictions on the error probabilities. Nevertheless, we think this property is too strong to be fulfilled by any sequential test when there are three (or more) hypotheses. We base this opinion on the following simple observation. Suppose there is a "uniformly optimal" test $\langle\psi^*, \phi^*\rangle$, in the sense that, for any test $\langle\psi, \phi\rangle$ such that $\alpha_{ij}(\psi, \phi) \le \alpha_{ij}(\psi^*, \phi^*)$ for all $i \ne j$, it holds that $E_{\theta_i}\tau^* \le E_{\theta_i}\tau$ for all $i$. Then it is obvious that, whatever be the weights $\gamma_i \ge 0$, $\sum_{i=1}^k \gamma_i = 1$, it holds that $\sum_{i=1}^k \gamma_i E_{\theta_i}\tau^* \le \sum_{i=1}^k \gamma_i E_{\theta_i}\tau$. Thus, for any set of weights $\gamma_i$, we have a test minimizing the weighted ESS under the restrictions on the error probabilities, i.e. one test solves all the problems of minimization of weighted ESS we formulated in Section 2 (all those with $\vartheta_i = \theta_i$ but arbitrary $\gamma_i$). It seems that this is "too much" for one test when there are more than two hypotheses (it is fine for two hypotheses, because it is well known that any Bayesian test is an SPRT). Unfortunately, the discrete nature of the error probabilities in the Bernoulli model seems to be a serious obstacle for constructing a formal counterexample in this case. We hope to be able to provide one in our future publications concerning continuous distribution families.
Table 2: The Bayesian tests and the MSPRTs fitted to common error probability levels (columns).

| | 0.1 | 0.05 | 0.025 | 0.01 | 0.005 | 0.002 | 0.001 | 0.0005 |
|---|---|---|---|---|---|---|---|---|
| Bayes: fitted parameter, $i=1$ | 5.09 | 5.61 | 6.15 | 6.91 | 7.52 | 8.36 | 9.04 | 9.71 |
| Bayes: fitted parameter, $i=2$ | 5.88 | 6.55 | 7.21 | 8.10 | 8.78 | 9.68 | 10.37 | 11.06 |
| Bayes: fitted parameter, $i=3$ | 5.23 | 5.77 | 6.34 | 7.13 | 7.76 | 8.63 | 9.31 | 9.99 |
| Bayes: $E_{\theta_1}\tau$ | 113.4 | 160.7 | 194.4 | 242.0 | 276.1 | 320.0 | 352.6 | 385.0 |
| Bayes: $E_{\theta_2}\tau$ | 136.0 | 189.4 | 238.4 | 298.3 | 340.9 | 395.5 | 435.7 | 475.3 |
| Bayes: $E_{\theta_3}\tau$ | 115.9 | 156.6 | 202.4 | 253.1 | 289.2 | 335.8 | 370.4 | 404.7 |
| MSPRT: fitted parameter, $j=1$ | 1.67 | 2.37 | 3.07 | 3.96 | 4.63 | 5.52 | 6.20 | 6.88 |
| MSPRT: fitted parameter, $j=2$ | 2.81 | 3.56 | 4.27 | 5.21 | 5.90 | 6.81 | 7.52 | 8.21 |
| MSPRT: fitted parameter, $j=3$ | 1.81 | 2.50 | 3.21 | 4.12 | 4.81 | 5.72 | 6.41 | 7.09 |
| MSPRT: $E_{\theta_1}\tau_M$ | 110.0 | 153.4 | 192.3 | 240.1 | 273.9 | 317.5 | 350.7 | 383.1 |
| MSPRT: $E_{\theta_2}\tau_M$ | 136.0 | 189.4 | 238.4 | 298.3 | 341.0 | 395.0 | 435.7 | 475.3 |
| MSPRT: $E_{\theta_3}\tau_M$ | 118.3 | 163.4 | 204.7 | 255.1 | 291.2 | 337.2 | 372.3 | 406.7 |
| ratio $E_{\theta_1}\tau_M / E_{\theta_1}\tau$ | 0.970 | 0.955 | 0.989 | 0.992 | 0.993 | 0.992 | 0.995 | 0.995 |
| ratio $E_{\theta_2}\tau_M / E_{\theta_2}\tau$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.999 | 1.000 | 1.000 |
| ratio $E_{\theta_3}\tau_M / E_{\theta_3}\tau$ | 1.010 | 1.043 | 1.011 | 1.007 | 1.008 | 1.004 | 1.005 | 1.005 |
4.3 The Kiefer-Weiss problem for multi-hypothesis testing
In this subsection we propose a construction of a test which might be helpful for the solution of the Kiefer-Weiss problem for multiple hypotheses, and present a numerical example where the proposed test provides an approximate solution to the Kiefer-Weiss problem in the case of three hypotheses about the parameter of the Bernoulli model.
Let $\theta_1, \dots, \theta_k$ be the hypothesized parameter values, $k \ge 2$. Generalizing the Kiefer-Weiss problem from the case of $k = 2$ hypotheses (see Kiefer and Weiss 1957), let us say that the Kiefer-Weiss problem for $k$ hypotheses is to find a sequential test which minimizes $\sup_\theta E_\theta\tau$ in the class of tests subject to the restrictions (2) on the error probabilities.
Kiefer and Weiss (1957) and Weiss (1962) noted that in some symmetric cases the solution can be obtained as a solution to a much simpler problem (called the modified Kiefer-Weiss problem nowadays). This latter problem is to find a test minimizing $E_\vartheta\tau$ among the tests satisfying the restrictions on the error probabilities, where $\vartheta$ is some point in the parameter space.
For the general multi-hypothesis case we propose the following generalization of this construction. Let $\vartheta_l$, for $l = 1, \dots, m$, be some parameter points. And let $\gamma_l$, $l = 1, \dots, m$, be some weights (such that $\sum_{l=1}^m \gamma_l = 1$). Recall that
$$N(\psi) = \sum_{l=1}^m \gamma_l\, E_{\vartheta_l}\tau. \qquad (31)$$
Proposition 4.1.
Let us suppose there is a test $\langle\psi^*, \phi^*\rangle$, with some multipliers $\lambda_{ij} \ge 0$, points $\vartheta_l$ and weights $\gamma_l$, $l = 1, \dots, m$, such that
$$L(\psi^*, \phi^*) \le L(\psi, \phi) \qquad (32)$$
for all sequential tests $\langle\psi, \phi\rangle$, and that
$$\alpha_{ij}(\psi^*, \phi^*) = \alpha_{ij} \quad\text{for all } i \ne j. \qquad (33)$$
Additionally, let us suppose that
$$\sup_\theta E_\theta \tau^* = \sum_{l=1}^m \gamma_l\, E_{\vartheta_l}\tau^* = N(\psi^*). \qquad (34)$$
Then for any sequential test $\langle\psi, \phi\rangle$ satisfying
$$\alpha_{ij}(\psi, \phi) \le \alpha_{ij} \quad\text{for all } i \ne j, \qquad (35)$$
it holds that
$$\sup_\theta E_\theta \tau^* \le \sup_\theta E_\theta \tau, \qquad (36)$$
i.e. $\langle\psi^*, \phi^*\rangle$ solves the Kiefer-Weiss problem.
Proof. It follows from (32), (33) and (35) that
$$N(\psi^*) \le N(\psi) + \sum_{i \ne j} \lambda_{ij}\left( \alpha_{ij}(\psi, \phi) - \alpha_{ij} \right) \le N(\psi)$$
for any test $\langle\psi, \phi\rangle$ satisfying (35), so
$$N(\psi^*) \le N(\psi) \le \sup_\theta E_\theta \tau$$
(the last inequality because the weights $\gamma_l$ sum to 1). But, due to (34),
$$N(\psi^*) = \sup_\theta E_\theta \tau^*,$$
thus (36) follows. $\square$
Remark 1.
The modification of Proposition 4.1 to be used with restrictions on $\alpha_i(\psi, \phi)$ rather than on $\alpha_{ij}(\psi, \phi)$ is straightforward: just use $\alpha_i$ and $\lambda_i$ instead of $\alpha_{ij}$ and $\lambda_{ij}$, respectively.
Remark 2.
We conjecture that, when sampling from exponential families of distributions, the tests constructed in Proposition 4.1 for multiple hypotheses (even without condition (34)) are always truncated, just like those in the modified Kiefer-Weiss problem for two hypotheses are when $\theta_1 < \vartheta_1 < \theta_2$. Using our program in Novikov (2023), it is easy to observe this for any number of hypotheses in the Bernoulli case.
Remark 3.
Proposition 4.1 is valid for any number of hypotheses and for any parametric family of distributions.
Let us now consider an example of a numerical solution to the Kiefer-Weiss problem for the Bernoulli model, in the case of three hypotheses.
Let $\theta_1 < \theta_2 < \theta_3$ be the hypothesized values. We took intermediate points $\vartheta_1 \in (\theta_1, \theta_2)$ and $\vartheta_2 \in (\theta_2, \theta_3)$ with respective weights, and used the function OptTest from the program code in Novikov (2023) to produce tests satisfying condition (32) (minimizing the Lagrangian function). To comply with (34), a simple numerical optimization over the points and the weights was carried out, equalizing the weighted ESS with its maximum over the parameter space.
To calculate the error probabilities we used the function PAccept in Novikov (2023). Thus, we have a numerical solution of the Kiefer-Weiss problem under the restrictions given by the error probabilities attained. The optimal test turned out to be truncated; the function maxNumber can be used to see the maximum number of steps a test requires.
To compare the Kiefer-Weiss solution with a Bayesian test, we used the same function OptTest, now with $\vartheta_i = \theta_i$ and equal weights $\gamma_i = 1/3$, applying the Nelder-Mead optimization to fit the error probabilities of the Bayesian test as closely as possible to those of the Kiefer-Weiss solution. The fitted Bayesian test has a maximum ESS of 60.2. Thus, the Kiefer-Weiss solution saves about 10% of observations, on average, in comparison with the optimal Bayesian test.
5 Conclusions and further work
In this paper, we proposed a computer-oriented method of construction of sequential multi-hypothesis tests minimizing a weighted expected sample size (ESS).
For the particular case of sampling from a Bernoulli population, we developed a computational scheme for evaluating the optimal tests and calculating the numerical characteristics of sequential tests based on sufficient statistics. An implementation of the algorithms in the R programming language has been published in a GitHub repository Novikov (2023).
A numerical evaluation of the widely used matrix sequential probability ratio test (MSPRT) is carried out for the case of three simple hypotheses about the parameter of the Bernoulli distribution, and a numerical comparison is made with the asymptotic expressions for the ESS of the asymptotically optimal MSPRT.
For a series of error probabilities we evaluated the ESS of the Bayesian test and compared it with that of the MSPRT having the same error probabilities, in which case the MSPRT exhibited a very high efficiency. On the other hand, we found a numerical example where the MSPRT is substantially less efficient than the optimal Bayesian test.
We proposed a method of numerical solution of the multi-hypothesis Kiefer-Weiss problem. The proposed method is applied to the three-hypothesis Kiefer-Weiss problem for the Bernoulli model. Numerical results are given.
A very immediate extension of this work could be developing computational algorithms for construction and performance evaluation of optimal sequential multi-hypothesis tests for other parametric families, first of all for one-parameter exponential families (cf. Novikov and Farkhshatov 2022).
The method we applied in this paper for i.i.d. observations can in fact be used for much more general models. For example, it can be applied to the models considered in Liu, Gao, and Li (2016), where numerical methods of performance evaluation of the MSPRT for non-i.i.d. observations are developed. It would be interesting to carry out a comparison study between the MSPRT and our optimal tests. Extensions to models with dependent observations are also possible.
The proposed method for solution of the Kiefer-Weiss problem can be extended to other parametric families.
Acknowledgements
The author gratefully acknowledges the partial support of the National Researchers System (SNI), CONACyT, Mexico, for this work.
The author thanks the anonymous Reviewers and the Associate Editor for very substantial comments and suggestions on improving earlier versions of this work.
Appendix. Proof of (11)
Let us define $\alpha_{ij}(\phi_n)$ as the error probabilities of the fixed-sample-size test based on $n$ observations and using the decision rule $\phi_n$ from (7). It follows from Theorem 3 in Novikov (2009b) that
$$\int l_n\, d\mu^n = \sum_{i \ne j} \lambda_{ij}\, \alpha_{ij}(\phi_n).$$
Let us prove that for any $i, j$ such that $i \ne j$ and $\lambda_{ij} > 0$, $\alpha_{ij}(\phi_n) \to 0$ as $n \to \infty$ (the terms with $\lambda_{ij} = 0$ vanish, so (11) then follows).
We have
$$\alpha_{ij}(\phi_n) \le P_{\theta_i}\Big( \lambda_{ij}\, f_{\theta_i}^n \le \sum_{i' \ne i} \lambda_{i'i}\, f_{\theta_{i'}}^n \Big) \to 0, \quad\text{as } n \to \infty.$$
This latter holds because
$$\frac{f_{\theta_{i'}}^n}{f_{\theta_i}^n} \to 0$$
in $P_{\theta_i}$-probability for any $i' \ne i$. Indeed, by the Markov inequality, for any $\epsilon > 0$,
$$P_{\theta_i}\left( \frac{f_{\theta_{i'}}^n}{f_{\theta_i}^n} \ge \epsilon \right) \le \epsilon^{-1/2}\, E_{\theta_i}\left( \frac{f_{\theta_{i'}}^n}{f_{\theta_i}^n} \right)^{1/2} = \epsilon^{-1/2}\, \rho_{i'}^{\,n} \to 0,$$
because $\rho_{i'} = \int \sqrt{f_{\theta_{i'}}\, f_{\theta_i}}\, d\mu < 1$ for $\theta_{i'} \ne \theta_i$, due to the Cauchy-Schwarz inequality.
References
- Armitage (1950) Armitage, P. 1950. “Sequential analysis with more than two alternative hypotheses, and its relation to discriminant function analysis.” Journal of the Royal Statistical Society B 12: 137–144.
- Baum and Veeravalli (1994) Baum, C. W., and V. V. Veeravalli. 1994. “A Sequential Procedure for Multihypothesis Testing.” IEEE Transactions on Information Theory 40 (6): 1994–2007.
- Blackwell and Girshick (1954) Blackwell, D., and M. A. Girshick. 1954. Theory of games and statistical decisions. John Wiley and Sons, Inc.
- Chow, Robbins, and Siegmund (1971) Chow, Y. S., H. Robbins, and D. Siegmund. 1971. Great Expectations: The Theory of Optimal Stopping. Houghton Mifflin.
- Eales and Jennison (1992) Eales, J. D., and C. Jennison. 1992. “An improved method for deriving optimal one-sided group sequential tests.” Biometrika 79 (1): 13–24. https://doi.org/10.1093/biomet/79.1.13.
- Kiefer and Weiss (1957) Kiefer, J., and L. Weiss. 1957. “Some properties of generalized sequential probability ratio tests.” Annals of Mathematical Statistics 28: 57–75.
- Liu, Gao, and Li (2016) Liu, Y., Y. Gao, and X. Rong Li. 2016. “Operating Characteristic and Average Sample Number of Binary and Multi-Hypothesis Sequential Probability Ratio Test.” IEEE Transactions on Signal Processing 64 (12): 3167–3179.
- Lorden (1980) Lorden, G. 1980. “Structure of sequential tests minimizing an expected sample size.” Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 51 (3): 291–302.
- Nelder and Mead (1965) Nelder, J. A., and R. Mead. 1965. “A simplex method for function minimization.” Computer Journal 7 (4): 308–313.
- Novikov (2009a) Novikov, A. 2009a. “Optimal Sequential Multiple Hypothesis Testing in Presence of Control Variables.” Kybernetika 45 (3): 507–528.
- Novikov (2009b) Novikov, A. 2009b. “Optimal Sequential Multiple Hypothesis Tests.” Kybernetika 45 (2): 309–330.
- Novikov (2022) Novikov, A. 2022. “Optimal design and performance evaluation of sequentially planned hypothesis tests.” https://arxiv.org/abs/2210.07203.
- Novikov (2023) Novikov, A. 2023. “An R Project for Construction and Performance Evaluation of Sequential Multi-Hypothesis Tests.” https://github.com/HOBuKOB-MEX/multihypothesis.
- Novikov and Farkhshatov (2022) Novikov, A., and F. Farkhshatov. 2022. “Design and performance evaluation in Kiefer-Weiss problems when sampling from discrete exponential families.” Sequential Analysis 41 (4): 417–434.
- R Core Team (2013) R Core Team. 2013. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. http://www.R-project.org/.
- Shiryaev (1978) Shiryaev, A. N. 1978. Optimal stopping rules. Berlin: Springer.
- Tartakovsky, Nikiforov, and Basseville (2015) Tartakovsky, A. G., I. V. Nikiforov, and M. Basseville. 2015. Sequential analysis: hypothesis testing and changepoint detection. Boca Raton, Florida: Chapman & Hall/CRC Press.
- Wald and Wolfowitz (1948) Wald, A., and J. Wolfowitz. 1948. “Optimum character of the sequential probability ratio test.” Annals of Mathematical Statistics 19 (3): 326–339.
- Weiss (1962) Weiss, L. 1962. “On sequential tests which minimize the maximum expected sample size.” Journal of the American Statistical Association 57: 551–566.