Locally Optimal Fixed-Budget Best Arm Identification
in Two-Armed Gaussian Bandits
with Unknown Variances
Abstract
We address the problem of best arm identification (BAI) with a fixed budget for two-armed Gaussian bandits. In BAI, given multiple arms, we aim to find the best arm, the arm with the highest expected reward, through an adaptive experiment. Kaufmann et al. (2016) develops a lower bound for the probability of misidentifying the best arm. They also propose a strategy, assuming that the variances of rewards are known, and show that it is asymptotically optimal in the sense that its probability of misidentification matches the lower bound as the budget approaches infinity. However, an asymptotically optimal strategy is unknown when the variances are unknown. To address this open issue, we propose a strategy that estimates the variances during an adaptive experiment and draws arms with a ratio of the estimated standard deviations. We refer to this strategy as the Neyman Allocation (NA)-Augmented Inverse Probability weighting (AIPW) strategy. We then demonstrate that this strategy is asymptotically optimal by showing that its probability of misidentification matches the lower bound when the budget approaches infinity and the gap between the expected rewards of the two arms approaches zero (the small-gap regime). Our results suggest that under the worst-case scenario characterized by the small-gap regime, our strategy, which employs estimated variances, is asymptotically optimal even when the variances are unknown.
1 Introduction
This study investigates the problem of best arm identification (BAI) with a fixed budget in stochastic two-armed Gaussian bandits. In this problem, we consider an adaptive experiment with a fixed number of rounds, called a budget. At each round, we can draw an arm and observe the reward. The goal of the problem is to identify the best arm with the highest expected reward at the end of the experiment (Bubeck et al., 2009; Audibert et al., 2010).
Formally, we consider the following adaptive experiment with two arms and Gaussian rewards. There are two arms 1 and 2, and each arm $a \in \{1, 2\}$ has an $\mathbb{R}$-valued Gaussian reward $X_a$ with mean $\mu_a \in [-\overline{C}, \overline{C}]$ and variance $\sigma_a^2 \in [\underline{C}, \overline{C}]$ for some universal constants $0 < \underline{C} < \overline{C} < \infty$. We assume that $\underline{C}$ and $\overline{C}$ are known to us for a technical purpose, and it is enough to set $\overline{C}$ as a sufficiently large value and $\underline{C}$ as a sufficiently small value. Given fixed $\underline{C}$ and $\overline{C}$, let
$$\mathcal{P} := \left\{ P = (P_1, P_2) : P_a = \mathcal{N}(\mu_a, \sigma_a^2),\ \mu_a \in [-\overline{C}, \overline{C}],\ \sigma_a^2 \in [\underline{C}, \overline{C}],\ a \in \{1, 2\} \right\}$$
be a set of distributions generating the data, which is referred to as the Gaussian bandit models, where $P = (P_1, P_2)$ is a pair of distributions that generate $(X_1, X_2)$, and $\mathcal{N}(\mu, \sigma^2)$ is a Gaussian distribution with mean $\mu$ and variance $\sigma^2$. For an instance $P \in \mathcal{P}$, the best arm is defined as $a^*(P) := \arg\max_{a \in \{1, 2\}} \mu_a$, which is assumed to exist uniquely.
In the adaptive experiment, we consider a strategy to identify the best arm. A fixed budget $T$ is given. For each round $t = 1, 2, \dots, T$, let $(X_{1,t}, X_{2,t})$ be an independent and identically distributed (i.i.d.) copy of $(X_1, X_2)$. At each round $t$, we draw an arm $A_t \in \{1, 2\}$ and observe a reward $X_t := X_{A_t, t}$. At the end of the experiment (after round $T$), we recommend an estimated best arm $\hat{a}_T \in \{1, 2\}$. During an experiment, we follow a strategy that determines which arm to draw and which arm to recommend as the best arm. The performance of strategies is evaluated by the probability of misidentification $\mathbb{P}_P\left(\hat{a}_T \neq a^*(P)\right)$, where $\mathbb{P}_P$ is the probability law under $P$.
Background.
In fixed-budget BAI, it has been an important question to characterize the probability of misidentification in the limit $T \to \infty$. To address this question, a typical approach is to derive upper and lower bounds of the probability separately and identify its asymptotic value.
For a lower bound of the probability of misidentification, Kaufmann et al. (2016) develops a general theory for deriving lower bounds of the probability. Their theory applies the change-of-measure argument, which has been employed in various problems (van der Vaart, 1998), including studies for regret minimization (Lai & Robbins, 1985). Their lower bound is general and can be applied to a wide range of settings, such as the fixed confidence setting (Garivier & Kaufmann, 2016) as well as the fixed budget setting.
In contrast, an upper bound of the misidentification probability has not been fully clarified. A typical way to derive upper bounds is to construct a specific strategy and evaluate its misidentification probability. Kaufmann et al. (2016) develops a strategy under a setting in which the variances of the rewards are known and shows that its misidentification probability matches the lower bound. However, this strategy is not available under the usual setting with unknown variances. Given this situation, the current results are insufficient to establish an upper bound for the misidentification probability when the variances are unknown.
Based on the situation above, our interest is in a strategy whose misidentification probability can be characterized exactly in the adaptive experimental setting described above. Specifically, we need a strategy such that an upper bound on its misidentification probability aligns with the lower bound proposed in Kaufmann et al. (2016). Furthermore, this strategy must be valid when the variances are unknown.
Our approach and contribution.
In this study, we develop a strategy whose probability of misidentification aligns with the lower bound under an additional setting. To accomplish this, we develop the Neyman allocation-augmented inverse probability weight (NA-AIPW) strategy. Then, we show that the probability of misidentification aligns with the lower bound under a small-gap regime. The details of each are described below.
The NA-AIPW strategy consists of a sampling rule using the Neyman allocation (NA) and a recommendation rule using the augmented inverse probability weighting (AIPW) estimator. NA is a method of sampling arms in proportion to the standard deviations (square roots of the variances) of the rewards, as utilized in Neyman (1934) and Kaufmann et al. (2016). The NA-AIPW strategy samples the arms by estimating these variances during the adaptive experiment. At the end of the experiment, the NA-AIPW strategy recommends the arm with the highest expected reward estimated by using the AIPW estimator, which is an unbiased estimator with a small asymptotic variance.
The small-gap regime considers the situation where the gap $\Delta := |\mu_1 - \mu_2|$ between the expected rewards approaches zero. Although this additional setting slightly simplifies the BAI problem, the problem is still sufficiently complicated since the small gap makes it difficult to identify the best arm. This setting has been utilized in BAI with fixed confidence, such as in the analysis of lil'UCB (Jamieson et al., 2014). In the realm of statistical testing, such an evaluation framework is known as local Bahadur efficiency (Bahadur, 1960; Wieand, 1976; Akritas & Kourouklis, 1988; He & Shao, 1996). From a technical perspective, the small-gap regime is a situation in which we can ignore the estimation error of the variances compared to the difficulty of identifying the best arm. Since the estimation error of the variances is relatively negligible in the small-gap setting, we can show that the misidentification probability of the NA-AIPW strategy matches the lower bound.
We summarize the background and our contributions. In BAI with two-armed Gaussian rewards and a fixed budget, a strategy whose misidentification probability achieves the lower bound derived by Kaufmann et al. (2016) has been sought. Although Kaufmann et al. (2016) demonstrates an asymptotically optimal strategy that satisfies the requirement with known variances, it remains an unresolved issue to find a strategy whose upper bound matches their lower bound when the variances are unknown. To address this issue, this study proposes the NA-AIPW strategy, whose probability of misidentification matches the lower bound under the small-gap regime.
Organization.
This study is organized as follows. First, in Section 2, we review the lower bound of Kaufmann et al. (2016). Then, in Section 3, we propose our NA-AIPW strategy. In Section 4, we show that the misidentification probability of the strategy asymptotically corresponds to the lower bound by Kaufmann et al. (2016) under the small-gap setting. We show the proof in Section 5, where we also provide a novel concentration inequality based on the Chernoff bound. In Section 6, we discuss the difficulty in this problem. In Section 7, we introduce related work and remaining problems, which includes an extension of our small-gap setting to a setting with multi-armed bandits and non-Gaussian rewards.
Notation. Let $\mathcal{F}_t := \sigma(A_1, X_1, \dots, A_t, X_t)$ be the sigma-algebra generated by all observations up to round $t$. We define a truncation operator: for a variable $y$ and a constant $C > 0$, $[y]_C := \min\{\max\{y, -C\}, C\}$.
2 Lower Bound of Probability of Misidentification
As a preparation, we introduce a lower bound for the probability of misidentification in BAI with a fixed budget. We call a strategy consistent if, for any $P \in \mathcal{P}$, $\mathbb{P}_P(\hat{a}_T \neq a^*(P)) \to 0$ as $T \to \infty$. To evaluate the performance of strategies, for any $P \in \mathcal{P}$, we focus on the following metric used in many studies, such as Kaufmann et al. (2016):
$$-\frac{1}{T}\log \mathbb{P}_P\left(\hat{a}_T \neq a^*(P)\right).$$
Note that an upper bound (resp. lower bound) of this term works as a lower bound (resp. upper bound) of the probability of misidentification, since $\exp(-Tx)$ is a strictly decreasing function of $x$.
For two-armed Gaussian bandits, Kaufmann et al. (2016) presents the following lower bounds.
Proposition 2.1 (Theorem 12 in Kaufmann et al. (2016)).
For any $P \in \mathcal{P}$ with $\mu_1 \neq \mu_2$ and with known constants $\underline{C}$ and $\overline{C}$ independent of $T$, if the data is generated from $P$, any consistent strategy satisfies
$$\limsup_{T \to \infty}\ -\frac{1}{T}\log \mathbb{P}_P\left(\hat{a}_T \neq a^*(P)\right) \leq \frac{\left(\mu_1 - \mu_2\right)^2}{2\left(\sigma_1 + \sigma_2\right)^2}.$$
Note that, for deriving the lower bound, we can remove the condition that we know constants $\underline{C}$ and $\overline{C}$ independent of $T$ such that $\mu_a \in [-\overline{C}, \overline{C}]$ and $\sigma_a^2 \in [\underline{C}, \overline{C}]$ hold. However, this knowledge is required to implement our strategy and to derive an upper bound. To align the conditions of the lower bound with those of the upper bound, we add these conditions to this proposition.
From the statement, there are some important aspects of this lower bound: (i) the term $\Delta := |\mu_1 - \mu_2|$, which is referred to as the gap, appears in the numerator, and the magnitude of the error is characterized by the gap; (ii) the variances appear in the denominator through $(\sigma_1 + \sigma_2)^2$, which plays an important role in designing the sampling rule.
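As an illustrative calculation (the numbers below are chosen only for illustration and do not appear in the original analysis): for a gap $\Delta = \mu_1 - \mu_2 = 0.2$ and standard deviations $\sigma_1 = \sigma_2 = 1$, the right-hand side equals $\frac{0.2^2}{2(1+1)^2} = 0.005$, so any consistent strategy asymptotically satisfies $\mathbb{P}_P(\hat{a}_T \neq a^*(P)) \gtrsim \exp(-0.005\,T)$; even a budget of $T = 1{,}000$ only pushes this benchmark down to about $\exp(-5) \approx 0.7\%$.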
It has been an open question whether there exists a strategy whose upper bound on the probability of misidentification coincides with the lower bound in Proposition 2.1. Although Kaufmann et al. (2016) develops a strategy that satisfies the requirement, it needs to sample arms with probabilities depending on the known variances $\sigma_1^2$ and $\sigma_2^2$. To the best of our knowledge, when the variances are unknown and need to be estimated during the adaptive experiment, no such strategy has been found.
3 The NA-AIPW Strategy
In this section, we define our strategy. Formally, a strategy gives a pair $((A_t)_{t=1}^T, \hat{a}_T)$, where (i) $(A_t)_{t=1}^T$ is a sequence of arms generated by a sampling rule that determines which arm is chosen in each round $t$ based on $\mathcal{F}_{t-1}$, and (ii) $\hat{a}_T$ is an arm recommended by a recommendation rule based on $\mathcal{F}_T$. Our proposed NA-AIPW strategy consists of (i) a sampling rule with the Neyman Allocation (NA) (Neyman, 1923) and (ii) a recommendation rule using the Augmented Inverse Probability Weighting (AIPW) estimator (Robins et al., 1994; Bang & Robins, 2005). Based on these rules, we refer to this strategy as the NA-AIPW strategy. Similar strategies are often used in the context of average treatment effect estimation by an adaptive experiment (van der Laan, 2008; Kato et al., 2020).
3.1 Target Allocation Ratio
As preparation, we introduce the notion of a target allocation ratio, which will be used for the sampling rule. We define the target allocation ratios as
$$w^*_1 := \frac{\sigma_1}{\sigma_1 + \sigma_2}, \qquad w^*_2 := \frac{\sigma_2}{\sigma_1 + \sigma_2}.$$
A sampling rule following this target allocation ratio is known as the Neyman allocation rule (Neyman, 1934); Glynn & Juneja (2004) and Kaufmann et al. (2016) also propose this allocation. The target allocation ratio is characterized by the variances (standard deviations); hence it is unknown when the variances are unknown, and to use this ratio, we need to estimate it from observations.
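For a concrete (purely illustrative) instance: if $\sigma_1 = 1$ and $\sigma_2 = 3$, then $w^*_1 = 1/4$ and $w^*_2 = 3/4$, so the noisier arm 2 should be drawn three times as often as arm 1; with equal variances, the Neyman allocation reduces to uniform sampling.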
3.2 Sampling Rule with Neyman Allocation (NA)
We present the sampling rule with the NA. At each round $t$, our sampling rule randomly draws an arm with a probability identical to an estimated version $(\widehat{w}_{1,t}, \widehat{w}_{2,t})$ of the target allocation ratio $(w^*_1, w^*_2)$. To estimate the target allocation ratio, we estimate the variances during the adaptive experiment. For $a \in \{1, 2\}$, let $(\widehat{\sigma}_{a,t})_{t}$ and $(\widehat{w}_{a,t})_{t}$ be sequences of estimators of $\sigma_a$ and $w^*_a$, which will be defined below.

We use rounds $t = 1$ and $t = 2$ for initialization. Specifically, we draw arm 1 at round $t = 1$ and arm 2 at round $t = 2$, and also set $\widehat{w}_{1,t} = \widehat{w}_{2,t} = 1/2$ for $t \in \{1, 2\}$.

At each round $t \geq 3$, we estimate the target allocation ratio (variances) using the past observations in $\mathcal{F}_{t-1}$. For each $a \in \{1, 2\}$, we first define an estimator of the expected reward $\mu_a$ as
$$\widetilde{\mu}_{a,t} := \frac{1}{\sum_{s=1}^{t-1}\mathbb{1}[A_s = a]}\sum_{s=1}^{t-1}\mathbb{1}[A_s = a]\,X_s.$$
Also, we define a second-moment estimator $\widetilde{\nu}_{a,t} := \frac{1}{\sum_{s=1}^{t-1}\mathbb{1}[A_s = a]}\sum_{s=1}^{t-1}\mathbb{1}[A_s = a]\,X_s^2$ and a root of a variance estimator $\widetilde{\sigma}_{a,t} := \sqrt{\widetilde{\nu}_{a,t} - (\widetilde{\mu}_{a,t})^2}$. Then, we define the truncated estimator $\widehat{\sigma}_{a,t} := \max\{\widetilde{\sigma}_{a,t}, \sqrt{\underline{C}}\}$ with the constant $\underline{C}$ defined in Section 1. Note that this truncation is introduced for a technical purpose to draw each arm infinitely many times as $T \to \infty$ and to avoid the estimators of $\mu_1$ and $\mu_2$, defined below, diverging to infinity. We just use a sufficiently small value for $\underline{C}$. Also, we define the estimators $\widehat{w}_{1,t}$ and $\widehat{w}_{2,t}$ as

$$\widehat{w}_{1,t} := \frac{\widehat{\sigma}_{1,t}}{\widehat{\sigma}_{1,t} + \widehat{\sigma}_{2,t}}, \qquad \widehat{w}_{2,t} := 1 - \widehat{w}_{1,t} = \frac{\widehat{\sigma}_{2,t}}{\widehat{\sigma}_{1,t} + \widehat{\sigma}_{2,t}}. \qquad (1)$$

In each round $t \geq 3$, we draw arm 1 with probability $\widehat{w}_{1,t}$ and arm 2 with probability $\widehat{w}_{2,t} = 1 - \widehat{w}_{1,t}$.
We note the possibility of increasing the number of initialization rounds, although our strategy utilizes only the first two rounds for this purpose. Additional rounds of initialization serve to stabilize the sampling rule in practical applications, akin to the concept of forced sampling (Garivier & Kaufmann, 2016). We can change the number of initialization rounds as long as the arm-drawing probabilities remain strictly positive for every arm and every round, which is crucial for our theoretical analysis. For instance, instead of using $\widehat{w}_{a,t}$ directly, an alternative arm-drawing probability could be defined by mixing $\widehat{w}_{a,t}$ with the uniform probability $1/2$, with a mixing weight that converges to zero as $t$ approaches infinity. Moreover, the number of initialization rounds can be made dependent upon the number of arms without impacting the theoretical outcomes.
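To make the estimation step concrete, the following is a minimal Python sketch (numpy assumed) of how the allocation probability $\widehat{w}_{1,t}$ can be computed from past observations; the function name neyman_allocation_probability and the keyword var_floor (playing the role of the small constant $\underline{C}$) are illustrative choices, not the authors' implementation.

import numpy as np

def neyman_allocation_probability(arms, rewards, var_floor=1e-3):
    # arms: past draws in {1, 2}; rewards: past observed rewards (same length).
    # Returns the estimated probability of drawing arm 1 at the next round,
    # i.e., sigma_hat_1 / (sigma_hat_1 + sigma_hat_2) with truncated variances.
    arms = np.asarray(arms)
    rewards = np.asarray(rewards)
    sigma_hat = np.empty(2)
    for a in (1, 2):
        x = rewards[arms == a]
        if x.size == 0:
            # arm not drawn yet: fall back to the truncation floor
            sigma_hat[a - 1] = np.sqrt(var_floor)
            continue
        second_moment = np.mean(x ** 2)
        mean = np.mean(x)
        variance = max(second_moment - mean ** 2, var_floor)  # truncated from below
        sigma_hat[a - 1] = np.sqrt(variance)
    return sigma_hat[0] / (sigma_hat[0] + sigma_hat[1])

For example, neyman_allocation_probability([1, 2, 1], [0.4, 1.7, -0.2]) returns the probability with which arm 1 would be drawn at the next round under this sketch.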
3.3 Recommendation Rule with the Augmented Inverse Probability Weighting (AIPW) Estimator
We present our recommendation rule. In the recommendation phase after round $T$, we estimate $\mu_a$ for each $a \in \{1, 2\}$ and recommend the arm with the larger estimated expected reward. With a truncated version $\widehat{\mu}_{a,t} := [\widetilde{\mu}_{a,t}]_{\overline{C}}$ of the estimated expected reward, where the predetermined constant $\overline{C}$ is defined in Section 1, we define the augmented inverse probability weighting (AIPW) estimator of $\mu_a$ for each $a \in \{1, 2\}$ as

$$\widehat{\mu}^{\mathrm{AIPW}}_{a,T} := \frac{1}{T}\sum_{t=1}^{T}\left(\frac{\mathbb{1}[A_t = a]\left(X_t - \widehat{\mu}_{a,t}\right)}{\widehat{w}_{a,t}} + \widehat{\mu}_{a,t}\right). \qquad (2)$$

At the end of the experiment (after the round $T$), we recommend $\hat{a}_T$ as

$$\hat{a}_T := \arg\max_{a \in \{1, 2\}}\ \widehat{\mu}^{\mathrm{AIPW}}_{a,T}. \qquad (3)$$
We adopt the AIPW estimator for our strategy because it has several advantages. First, the AIPW estimator has the property of semiparametric efficiency, which indicates that it has the smallest asymptotic variance within a certain class of estimators (Hahn, 1998). This property is necessary to prove that the strategy using the AIPW estimator is optimal, that is, that its misidentification probability is small enough to achieve the lower bound. The second reason is more technical: the AIPW estimator simplifies the theoretical analysis (see Section 6.3). Specifically, we can decompose the error of the AIPW estimator into a sum of random variables with martingale properties, making it suitable for analysis using the central limit theorem. This property holds for the AIPW estimator but not for naive estimators such as the empirical average. Details are given in Section 5.
We provide the pseudo-code for our proposed strategy in Algorithm 1. Note that we introduce $\underline{C}$ and $\overline{C}$ for technical purposes to bound the estimators; any sufficiently large positive value can be used for $\overline{C}$ and any sufficiently small positive value for $\underline{C}$.
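For readers who prefer executable pseudo-code, here is a compact Python sketch of one run of the NA-AIPW strategy under the setup above; it is a sketch rather than the authors' Algorithm 1, and the 0/1 arm indexing, the initialization convention (drawing probability 1/2 and plug-in mean 0 in the first two rounds), and the constants mean_cap and var_floor (standing in for the large and small constants of Section 1) are illustrative assumptions.

import numpy as np

def run_na_aipw(mu, sigma, T, mean_cap=10.0, var_floor=1e-3, rng=None):
    # mu, sigma: length-2 arrays of the true means and standard deviations
    # (unknown to the strategy; used only to simulate rewards).
    # Returns the recommended arm index in {0, 1} (arm 1 or arm 2 in the paper).
    rng = np.random.default_rng(rng)
    arms, rewards, probs, mu_hats = [], [], [], []
    for t in range(T):
        if t < 2:
            # initialization: draw arm 1 then arm 2; use probability 1/2 and
            # plug-in mean 0 as a convention for the AIPW terms of these rounds
            w1, a, m = 0.5, t, np.zeros(2)
        else:
            past_a, past_x = np.array(arms), np.array(rewards)
            sig, m = np.empty(2), np.empty(2)
            for k in range(2):
                x = past_x[past_a == k]
                m[k] = np.clip(x.mean(), -mean_cap, mean_cap)            # truncated mean
                sig[k] = np.sqrt(max(np.mean(x**2) - x.mean()**2, var_floor))
            w1 = sig[0] / (sig[0] + sig[1])                              # estimated Neyman ratio
            a = 0 if rng.random() < w1 else 1
        x_obs = rng.normal(mu[a], sigma[a])
        arms.append(a); rewards.append(x_obs)
        probs.append((w1, 1.0 - w1)); mu_hats.append(m)
    # recommendation rule: AIPW estimate of each arm's mean, then argmax
    arms, rewards = np.array(arms), np.array(rewards)
    probs, mu_hats = np.array(probs), np.array(mu_hats)
    aipw = np.zeros(2)
    for k in range(2):
        ind = (arms == k).astype(float)
        aipw[k] = np.mean(ind * (rewards - mu_hats[:, k]) / probs[:, k] + mu_hats[:, k])
    return int(np.argmax(aipw))

A single call such as run_na_aipw(np.array([0.4, 0.2]), np.array([1.0, 2.0]), 1000) returns 0 (arm 1) with high probability under this illustrative instance.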
4 Misidentification Probability and Asymptotic Optimality
In this section, we show the following upper bound of the misidentification probability of the NA-AIPW strategy, which also implies that the strategy is asymptotically optimal.
Theorem 4.1 (Upper bound of the NA-AIPW strategy).
For any $P \in \mathcal{P}$ with known constants $\underline{C}$ and $\overline{C}$ independent of $T$, if the data is generated from $P$, the NA-AIPW strategy satisfies the following as $T \to \infty$:
$$-\frac{1}{T}\log \mathbb{P}_P\left(\hat{a}_T \neq a^*(P)\right) \geq \frac{\Delta^2}{2\left(\sigma_1 + \sigma_2\right)^2} - o(\Delta^2) - o(1),$$
where $\Delta := |\mu_1 - \mu_2|$, the $o(1)$ term converges to zero as $T \to \infty$, and the $o(\Delta^2)$ term satisfies $o(\Delta^2)/\Delta^2 \to 0$ as $\Delta \to 0$.
We note again that $\overline{C}$ and $\underline{C}$ are introduced for technical purposes. The constant $\overline{C}$ is introduced to guarantee the boundedness of the estimators, and it is sufficient to use a sufficiently large value for it. The constant $\underline{C}$ is used to draw each arm infinitely many times as $T \to \infty$ and to keep the estimators of the means $\mu_1$ and $\mu_2$ from diverging to infinity, and it is sufficient to use a sufficiently small value for it.
Note that a lower bound on $-\frac{1}{T}\log \mathbb{P}_P(\hat{a}_T \neq a^*(P))$ implies an upper bound on $\mathbb{P}_P(\hat{a}_T \neq a^*(P))$. This theorem allows us to evaluate the probability of misidentification up to the constant term in the exponent, even when it is exponentially small, as $T \to \infty$.
This result directly implies the asymptotic optimality of the NA-AIPW strategy: as $\Delta \to 0$, the upper bound matches the lower bound in Proposition 2.1. This asymptotic optimality result suggests that the estimation error of the target allocation ratio (variances) is negligible when $\Delta$ is small, because the estimation error is insignificant compared to the difficulty of identifying the best arm due to the small gap.
Although studies such as Ariu et al. (2021), Qin (2022), and Degenne (2023) point out the non-existence of optimal strategies in fixed-budget BAI against the lower bound shown by Kaufmann et al. (2016), our result does not yield a contradiction. The existing impossibility results concern whether a single strategy can attain the lower bound simultaneously over all instances. Note that the lower bounds in Kaufmann et al. (2016) are applicable to any instance in the bandit models (with some regularity conditions). In other words, if we consider the lower bound in Kaufmann et al. (2016) for all instances, there exists an instance for which any strategy admits a lower bound larger than the lower bound derived by Kaufmann et al. (2016). In contrast, we only consider bandit models in the regime where $\Delta \to 0$. Our result implies that if we restrict the bandit models in this way, the upper bound of our strategy within the restricted bandit models matches the lower bound. Because our optimality is limited to the case where $\Delta \to 0$, we refer to our optimality as asymptotic optimality under the small-gap regime or local asymptotic optimality.
We conjecture that even if we replace the AIPW estimator with the sample average estimator defined in Section 3.2, the upper bound of the strategy still matches the lower bound; however, the proof remains an open issue. Hirano et al. (2003) and Hahn et al. (2011) show that the sample average estimator and the AIPW estimator have the same asymptotic variance (and asymptotic distribution). To show this result, we need to employ empirical process arguments. One of the problems in extending the result to the analysis of BAI is that their results focus on the asymptotic distribution, not the tail probability. Therefore, to show the asymptotic optimality of the strategy with the sample average in the sense of the probability of misidentification, we need to modify the results in Hirano et al. (2003) and Hahn et al. (2011) to analyze the tail probability.
5 Proof of Theorem 4.1
To show Theorem 4.1, we derive the upper bound of $\mathbb{P}_P(\hat{a}_T \neq a^*(P))$, which is equivalent to $\mathbb{P}_P\left(\widehat{\mu}^{\mathrm{AIPW}}_{1,T} - \widehat{\mu}^{\mathrm{AIPW}}_{2,T} \leq 0\right)$. Without loss of generality, we assume that $a^*(P) = 1$ and $\Delta = \mu_1 - \mu_2 > 0$. Let us define $\widehat{\Delta}_T := \widehat{\mu}^{\mathrm{AIPW}}_{1,T} - \widehat{\mu}^{\mathrm{AIPW}}_{2,T}$ and
$$\psi_t := \left(\frac{\mathbb{1}[A_t = 1]\left(X_t - \widehat{\mu}_{1,t}\right)}{\widehat{w}_{1,t}} + \widehat{\mu}_{1,t}\right) - \left(\frac{\mathbb{1}[A_t = 2]\left(X_t - \widehat{\mu}_{2,t}\right)}{\widehat{w}_{2,t}} + \widehat{\mu}_{2,t}\right) - \Delta,$$
so that $\widehat{\Delta}_T - \Delta = \frac{1}{T}\sum_{t=1}^{T}\psi_t$.

Therefore, in the following parts, we aim to derive the upper bound of $\mathbb{P}_P\left(\widehat{\Delta}_T \leq 0\right) = \mathbb{P}_P\left(\frac{1}{T}\sum_{t=1}^{T}\psi_t \leq -\Delta\right)$. Let $\mathbb{E}_P$ be the expectation under $P$. We derive the upper bound using the Chernoff bound. This proof is partially inspired by techniques in Hadad et al. (2021) and Kato et al. (2020).
First, because there exists a constant $\underline{w} > 0$ independent of $T$ such that $\widehat{w}_{a,t} \geq \underline{w}$ holds by construction (owing to the truncation by $\underline{C}$), each arm is drawn infinitely many times as $T \to \infty$, and the following lemma holds.
Lemma 5.1.
For any $P \in \mathcal{P}$ and all $a \in \{1, 2\}$, $\widehat{\mu}_{a,t} \xrightarrow{\mathrm{a.s.}} \mu_a$ and $\widehat{\sigma}_{a,t} \xrightarrow{\mathrm{a.s.}} \sigma_a$ as $t \to \infty$.
Furthermore, from $\widehat{w}_{a,t} = \widehat{\sigma}_{a,t}/(\widehat{\sigma}_{1,t} + \widehat{\sigma}_{2,t})$ and the continuous mapping theorem, for any $P \in \mathcal{P}$ and all $a \in \{1, 2\}$, $\widehat{w}_{a,t} \xrightarrow{\mathrm{a.s.}} w^*_a$ as $t \to \infty$.
Step 1: Sequence $(\psi_t)_{t=1}^{T}$ is a martingale difference sequence (MDS)
We prove that $(\psi_t)_{t=1}^{T}$ is an MDS; that is, $\mathbb{E}_P[\psi_t \mid \mathcal{F}_{t-1}] = 0$ for all $t$. Although this fact is well-known in the literature of causal inference (van der Laan, 2008; Hadad et al., 2021; Kato et al., 2020), we show the proof for the sake of completeness.
Lemma 5.2.
For any $P \in \mathcal{P}$, $(\psi_t)_{t=1}^{T}$ is an MDS.
Proof.
For each $t = 1, \dots, T$ and each $a \in \{1, 2\}$, it holds that
$$\mathbb{E}_P\left[\frac{\mathbb{1}[A_t = a]\left(X_t - \widehat{\mu}_{a,t}\right)}{\widehat{w}_{a,t}} + \widehat{\mu}_{a,t}\ \middle|\ \mathcal{F}_{t-1}\right] = \frac{\widehat{w}_{a,t}\left(\mu_a - \widehat{\mu}_{a,t}\right)}{\widehat{w}_{a,t}} + \widehat{\mu}_{a,t} = \mu_a,$$
since $\widehat{\mu}_{a,t}$ and $\widehat{w}_{a,t}$ are $\mathcal{F}_{t-1}$-measurable, $\mathbb{E}_P[\mathbb{1}[A_t = a] \mid \mathcal{F}_{t-1}] = \widehat{w}_{a,t}$, and $X_{a,t}$ is independent of $A_t$ conditionally on $\mathcal{F}_{t-1}$. Therefore, $\mathbb{E}_P[\psi_t \mid \mathcal{F}_{t-1}] = \mu_1 - \mu_2 - \Delta = 0$.
∎
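As an informal sanity check of this conditional-mean calculation, the following Monte Carlo sketch (numpy assumed; all numeric values are arbitrary illustrations) fixes $\mathcal{F}_{t-1}$-measurable quantities $\widehat{w}_{a,t}$ and $\widehat{\mu}_{a,t}$ and verifies numerically that the AIPW difference has conditional mean $\mu_1 - \mu_2$, so the centered term $\psi_t$ has conditional mean zero.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = np.array([0.6, 0.4]), np.array([1.0, 2.0])
w_hat = np.array([0.3, 0.7])    # any F_{t-1}-measurable probabilities in (0, 1)
mu_hat = np.array([0.1, -0.2])  # any F_{t-1}-measurable plug-in means

n = 2_000_000
a = (rng.random(n) >= w_hat[0]).astype(int)   # P(a = 0) = w_hat[0], P(a = 1) = w_hat[1]
x = rng.normal(mu[a], sigma[a])
aipw_diff = ((a == 0) * (x - mu_hat[0]) / w_hat[0] + mu_hat[0]
             - ((a == 1) * (x - mu_hat[1]) / w_hat[1] + mu_hat[1]))
print(aipw_diff.mean(), mu[0] - mu[1])   # the two values agree up to Monte Carlo error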
Step 2: Evaluation by using the Chernoff Bound with Martingales
By applying the Chernoff bound, for any and any , it holds that
From the Chernoff bound and a property of an MDS, we have
Then, the Taylor expansion around yields
(4) |
as . This is given as follows. Since are finite everywhere in an open interval for , the Taylor expansion yields the following (for the details, see the textbook such as page 75 in Bulmer (1967) and Theorem 5.19 in Apostol (1974)):
as . Note that the finiteness of comes from the following in : (i) is a Gaussian random variable, (ii) and are bounded random variables by our truncation, and (iii) the lower bound of is given by . By using the Taylor expansion again, we approximate around as . Therefore, we have
as . Here, we used . Thus, (4) holds.
Step 3: Convergence of the Second Moment
We next show . This result is a direct consequence of Lemma 5.1.
Lemma 5.3.
For any , we have
Proof.
We have
Here, for , the following hold:
and
where we used . Therefore, the following holds:
Because and , we have
with probability one. Therefore, from Lebesgue’s dominated convergence theorem, we obtain
∎
This lemma immediately yields the following lemma.
Lemma 5.4.
For any and any , there exists such that for all , we have
with probability one.
This result is a variant of the Cesàro lemma for a case with almost sure convergence. For completeness, we show the proof, which is based on the proof of Lemma 10 in Hadad et al. (2021).
Proof.
Let be . Note that .
From the proof of Lemma 5.3, we can find that is a bounded random variable. Recall that
We assumed that are all bounded random variables. Let be a constant independent of such that for all .
Almost-sure convergence of to zero as implies that for any , there exists such that for all with probability one. Let denote the event in which this happens; that is, . Under this event, for , the following holds:
where as .
Therefore, for any , there exists such that for all , holds with probability one. ∎
Step 4: Tail Bound with the Approximated Second Moment
Let . Then, we have
From Lemma 5.4, for any , there exists such that for all , we have
Step 5: Final Step of the Proof of Theorem 4.1
For any , there exists such that for all , we obtain
as .
Let . Then, we have
as . By letting and , and then letting independently of and , we have
Thus, the proof is complete.
6 Discussion
In this section, we discuss related topics.
6.1 Neyman Allocation with Unknown Variances
For two-armed Gaussian bandits with known variances, Chen et al. (2000), Glynn & Juneja (2004), and Kaufmann et al. (2016) conclude that sampling each arm in proportion to its standard deviation is optimal, which corresponds to the Neyman allocation (Neyman, 1934).
The Neyman allocation with unknown variances has long been studied in various fields. van der Laan (2008) and Hahn et al. (2011) develop algorithms for estimating the gap parameter itself in an adaptive experiment with the Neyman allocation. They estimate the variances and show their algorithms' optimality under the framework of semiparametric efficiency, which is closely connected to the Gaussian approximation of estimators using the central limit theorem. Although they show optimality under that framework, they do not investigate asymptotic optimality in the large-deviation framework. Tabord-Meehan (2022), Kato et al. (2020), and Zhao (2023) also attempt to address related problems.
Jourdan et al. (2023) examines BAI with unknown variances in the fixed-confidence setting. Beyond the difference in settings (we focus on fixed-budget BAI), the methods of deriving lower bounds differ between our approach and theirs. They determine the lower bound while incorporating the assumption that the variances are unknown. Moreover, under a large-gap regime (where $\Delta$ is fixed), they confirm a discrepancy between the lower bounds when variances are known versus unknown. Specifically, they consider alternative hypotheses related to both variances and means. In contrast, the lower bounds presented by Kaufmann et al. (2016) and ourselves are based on alternative hypotheses with fixed variances. While Jourdan et al. (2023) suggests that the upper bounds of strategies with unknown variances cannot align with the lower bound when variances are known, our findings indicate a match under the small-gap regime.
6.2 Necessity of the Small-Gap Regime
First, we discuss the necessity of the small-gap regime.
Estimation error of the variances.
The most critical reason we employ the small-gap regime is that the estimation error of the variances cannot be ignored in evaluating the probability of misidentification. To clarify this point, we review the probability of misidentification when we know the variances.
Probability of misidentification of Kaufmann et al. (2016)'s strategy with known variances.
Kaufmann et al. (2016) proposes drawing arm $a$ in $w^*_a T = \frac{\sigma_a}{\sigma_1 + \sigma_2}T$ rounds (for simplicity, we treat $w^*_a T$ as an integer). Without loss of generality, we consider drawing arm 1 in the first $w^*_1 T$ rounds and drawing arm 2 in the following $w^*_2 T$ rounds. Then, they estimate the best arm as $\hat{a}^{\dagger}_T := \arg\max_{a \in \{1,2\}} \widehat{\mu}^{\dagger}_{a,T}$, where $\widehat{\mu}^{\dagger}_{a,T}$ is the sample average defined as

$$\widehat{\mu}^{\dagger}_{a,T} := \frac{1}{w^*_a T}\sum_{t=1}^{T}\mathbb{1}\left[A^{\dagger}_t = a\right]X_t, \qquad (5)$$

where $A^{\dagger}_t$ denotes an arm drawn by Kaufmann et al. (2016)'s strategy. For the strategy, they show that its probability of misidentification is bounded as

$$\mathbb{P}_P\left(\hat{a}^{\dagger}_T \neq a^*(P)\right) \leq \exp\left(-\frac{T\Delta^2}{2\left(\sigma_1 + \sigma_2\right)^2}\right). \qquad (6)$$

Note that this upper bound comes from the upper bound of

$$\mathbb{P}_P\left(\widehat{\mu}^{\dagger}_{1,T} - \widehat{\mu}^{\dagger}_{2,T} \leq 0\right) \qquad (7)$$

in the case where arm 1 is the best arm ($\mu_1 > \mu_2$). In contrast, in Theorem 4.1, we show that our strategy's upper bound is $\exp\left(-\frac{T\Delta^2}{2(\sigma_1 + \sigma_2)^2} + T\,o(\Delta^2)\right)$, up to terms vanishing as $T \to \infty$. The difference between our upper bound and theirs is the existence of the $o(\Delta^2)$ term, which vanishes relative to $\Delta^2$ as $\Delta \to 0$. This difference comes from the estimation error of the variances.
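To see the bound (6) numerically, the following sketch (numpy assumed; parameter values chosen only for illustration) simulates the known-variance allocation by drawing the two sample means directly from their exact Gaussian distributions and compares the empirical misidentification rate with the right-hand side of (6); the empirical rate sits below the bound, as expected.

import numpy as np

def oracle_error_rate(mu, sigma, T, n_trials=20_000, seed=0):
    # Known-variance Neyman allocation with the sample-average recommendation.
    # The sample means are exactly Gaussian, so we simulate them directly.
    rng = np.random.default_rng(seed)
    n1 = int(round(T * sigma[0] / (sigma[0] + sigma[1])))   # draws of arm 1
    n2 = T - n1                                             # draws of arm 2
    m1 = rng.normal(mu[0], sigma[0] / np.sqrt(n1), size=n_trials)
    m2 = rng.normal(mu[1], sigma[1] / np.sqrt(n2), size=n_trials)
    return np.mean(m1 <= m2)   # misidentification: arm 1 is the best arm here

mu, sigma, T = np.array([0.3, 0.0]), np.array([1.0, 2.0]), 500
delta = mu[0] - mu[1]
bound = np.exp(-T * delta**2 / (2 * (sigma[0] + sigma[1])**2))
print(oracle_error_rate(mu, sigma, T), bound)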
Intuitive explanation of the influence of the variance estimation.
To understand the variance estimation, we rewrite the sample average as
(8) |
where we used . Here, we consider a strategy that estimates by estimating the variances. Let be some estimator of . Then, we design a strategy that draws arm in rounds in some way. In that case, the sample average roughly becomes
(9) |
where denotes an arm drawn by some strategy that draws arm in rounds. Then, if we recommend arm as the best arm, we evaluate
(10) |
From Markov’s inequality, for any , we have
(11) |
To obtain the same upper bound as that of Kaufmann et al. (2016)'s strategy, we consider the following decomposition:
(12) |
Suppose that the following holds in some way:
Note that this decomposition does not generally hold, but we assume it since it makes it easy to understand the variance estimation problem. Under the assumption, it holds that
(13) | |||
(14) | |||
(15) |
This inequality implies that to obtain the same upper bound as that of Kaufmann et al. (2016)’s strategy, we need to bound
(16) |
with an arbitrary rate of convergence; more exactly, we need to show that for any ,
holds. However, it is impossible to achieve that convergence rate with commonly known theorems about convergence. Therefore, we introduced the small-gap regime, which evaluates the term as
Note that this argument is not rigorous and is simplified for explanation.
6.3 The AIPW, IPW, and Sample Average Estimators
A key component of our analysis is the AIPW estimator, which comprises an MDS and boasts minimum asymptotic variance. By using the properties of an MDS, we tackle the dependence among observations. The upper bound can also be applied to the Inverse Probability Weighting (IPW) estimator, but in this case, the upper bound may not coincide with the lower bound. This discrepancy occurs because the AIPW estimator’s asymptotic variance is smaller than the IPW estimator’s. The minimum variance property of the AIPW estimator stems from the efficient influence function (Hahn, 1998; Tsiatis, 2007).
We conjecture that the asymptotic optimality of strategies employing the naive sample average estimator in the recommendation rule can be demonstrated, although we do not prove it in this study. This is because Hahn et al. (2011) shows that, using the CLT, the AIPW and sample average estimators have the same asymptotic distribution. However, due to the inability to utilize MDS properties and the presence of sample dependency, the analysis becomes challenging when we derive a corresponding result for a large deviation (exponential rate of the probability of misidentification).
For the reader’s reference, we detail the problems related to the IPW estimator and the sample average estimator.
The NA-IPW strategy.
We consider the following strategy. In the NA-AIPW strategy, instead of the AIPW estimator, we use the following IPW estimator to estimate the means:

$$\widehat{\mu}^{\mathrm{IPW}}_{a,T} := \frac{1}{T}\sum_{t=1}^{T}\frac{\mathbb{1}[A_t = a]\,X_t}{\widehat{w}_{a,t}}. \qquad (17)$$

At the end of the experiment (after the round $T$), we recommend $\hat{a}^{\mathrm{IPW}}_T$ as

$$\hat{a}^{\mathrm{IPW}}_T := \arg\max_{a \in \{1, 2\}}\ \widehat{\mu}^{\mathrm{IPW}}_{a,T}. \qquad (18)$$
We refer to this strategy as the NA-IPW strategy. The probability of misidentification of this strategy is bounded as follows.
Theorem 6.1 (Upper bound of the NA-IPW strategy).
For any $P \in \mathcal{P}$, the following holds as $T \to \infty$:
$$-\frac{1}{T}\log \mathbb{P}_P\left(\hat{a}^{\mathrm{IPW}}_T \neq a^*(P)\right) \geq \frac{\Delta^2}{2\,V^{\mathrm{IPW}}} - o(\Delta^2) - o(1),$$
where $V^{\mathrm{IPW}} := \frac{\mathbb{E}_P[X_1^2]}{w^*_1} + \frac{\mathbb{E}_P[X_2^2]}{w^*_2} - \Delta^2$.
Proof.
Let us define and . Then, we have
as . By replacing and in the proof of Theorem 4.1 with and , we obtain
where the RHS is equal to . The proof is complete. ∎
Note that $V^{\mathrm{IPW}} \geq (\sigma_1 + \sigma_2)^2$ since $\frac{\mu_1^2}{w^*_1} + \frac{\mu_2^2}{w^*_2} \geq \Delta^2$. Therefore, the upper bound of the probability of misidentification of the NA-IPW strategy is larger than that of the NA-AIPW strategy (note that the inequality is flipped due to the negative sign in the exponent; that is, $\frac{\Delta^2}{2V^{\mathrm{IPW}}} \leq \frac{\Delta^2}{2(\sigma_1 + \sigma_2)^2}$ implies that the upper bound of the probability of misidentification of the NA-AIPW strategy is smaller than that of the NA-IPW strategy). In the case of the evaluation using the CLT, similar results have been known in existing studies, such as Hirano et al. (2003) and Kato et al. (2020).
The NA-SA strategy.
Next, we consider the following strategy. In the NA-AIPW strategy, instead of the AIPW estimator, we use the following sample average estimator:

$$\widehat{\mu}^{\mathrm{SA}}_{a,T} := \frac{1}{\sum_{t=1}^{T}\mathbb{1}[A_t = a]}\sum_{t=1}^{T}\mathbb{1}[A_t = a]\,X_t. \qquad (19)$$

At the end of the experiment (after the round $T$), we recommend $\hat{a}^{\mathrm{SA}}_T$ as

$$\hat{a}^{\mathrm{SA}}_T := \arg\max_{a \in \{1, 2\}}\ \widehat{\mu}^{\mathrm{SA}}_{a,T}. \qquad (20)$$
We refer to this strategy as the NA-SA strategy. Evaluation of the probability of misidentification of this strategy is not easy since we cannot employ a martingale property, which has been used in the analysis of the NA-AIPW strategy and the NA-IPW strategy. In order to derive its upper bound, we need to evaluate . Here, note that
(21) | ||||
(22) |
holds, and the variance of is in the proof of Theorem 4.1, and consists of an MDS. Therefore, if , we can directly apply the proof of Theorem 4.1 to obtain the same upper bound as in Theorem 4.1. However, is not zero and remains as a bias term, and it is known that its evaluation requires several techniques. For example, to show , Hahn et al. (2011) bounds using the property of stochastic equicontinuity, which is based on the arguments in Hirano et al. (2003). This problem is related to the use of the Donsker condition in semiparametric analysis, as explained in Kennedy (2016). We may show the upper bound of the NA-SA strategy by using an approach similar to that of Hahn et al. (2011), but there are two issues. First, it is unknown what condition corresponds to stochastic equicontinuity in the setting of BAI, where the samples are dependent. Second, it is unclear whether we can directly apply stochastic equicontinuity or similar properties to show the large-deviation upper bound, since such conditions have been used for central limit evaluations. Therefore, although the findings of Hirano et al. (2003) and Hahn et al. (2011) may aid in resolving this issue, how to use them remains an open issue. Note that this issue is caused by the bias of the sample average estimator, which is non-zero. Also note that, in contrast, the bias of the AIPW estimator is zero due to the properties of an MDS.
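To make the contrast between the three recommendation estimators concrete, the following sketch (numpy assumed; the function and argument names are illustrative) computes the AIPW, IPW, and SA estimates of both arm means from the same logged data produced by any sampling rule that records its drawing probabilities and plug-in means.

import numpy as np

def recommendation_estimates(arms, rewards, probs, mu_hats):
    # arms: draws in {0, 1}; rewards: observed rewards;
    # probs[t]: the F_{t-1}-measurable drawing probabilities (w_hat_1t, w_hat_2t);
    # mu_hats[t]: the F_{t-1}-measurable plug-in means (mu_hat_1t, mu_hat_2t).
    arms, rewards = np.asarray(arms), np.asarray(rewards)
    probs, mu_hats = np.asarray(probs), np.asarray(mu_hats)
    aipw, ipw, sa = np.zeros(2), np.zeros(2), np.zeros(2)
    for k in range(2):
        ind = (arms == k).astype(float)
        aipw[k] = np.mean(ind * (rewards - mu_hats[:, k]) / probs[:, k] + mu_hats[:, k])
        ipw[k] = np.mean(ind * rewards / probs[:, k])
        sa[k] = rewards[arms == k].mean() if (arms == k).any() else float("nan")
    return aipw, ipw, sa

The AIPW and IPW terms are weighted by the logged probabilities, so their errors form martingale difference sequences, whereas the SA estimate ignores the probabilities and therefore carries the bias term discussed above.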
6.4 The Tracking Strategy
In fixed-confidence BAI, the tracking strategy is popular, as used in Garivier & Kaufmann (2016). In existing studies of the Neyman allocation, such a strategy has also been used. For example, Hahn et al. (2011) splits the whole sample into two stages. In the first stage, we draw each arm uniformly at random and estimate the variances. In the second stage, given the variance estimators, we draw each arm so that the overall allocation tracks the estimated Neyman allocation ratio. Then, Hahn et al. (2011) estimates the means using the sample average estimator. This strategy is quite similar to that in Garivier & Kaufmann (2016), since it draws arms to track the ratio of the estimated standard deviations.
However, the strategy of Hahn et al. (2011) makes analyzing upper bounds difficult. As we explained in Section 6.3, in our analysis, the unbiasedness of the AIPW estimator plays an important role. In contrast, if we use the tracking strategy, we cannot employ this martingale property. Note that the NA-AIPW strategy draws arm $a$ with probability $\widehat{w}_{a,t}$, while the tracking strategy draws arms in a more complicated manner, under which we cannot use the martingale property.
As explained in Section 6.3, the bias term makes the analysis significantly more difficult. According to existing studies, we need to use some techniques for the analysis, such as the Donsker condition (Hirano et al., 2003; Hahn et al., 2011). Existing studies have proposed using the AIPW estimator to avoid this issue, as shown in van der Laan (2008) and Kato et al. (2020).
Thus, although we acknowledge the possibility of using the tracking strategy, its analysis requires some sophisticated techniques. We expect that existing studies such as Hirano et al. (2003) and Hahn et al. (2011) will help the analysis, but it is still an open issue. Note that even for the tracking strategy, existing studies such as Hirano et al. (2003) and Hahn et al. (2011) pass through the evaluation of AIPW-type estimators in their analysis. This proof procedure is also related to the semiparametric efficiency bound (Hahn, 1998), under which the semiparametric efficient score is given as the AIPW-type score $\frac{\mathbb{1}[A = 1](X - \mu_1)}{w_1} - \frac{\mathbb{1}[A = 2](X - \mu_2)}{w_2}$.
7 Related Work
This section presents related works.
7.1 On the Asymptotic Optimality in Fixed-Budget BAI
There is a long-standing debate on optimal strategies for fixed-budget BAI. Glynn & Juneja (2004) develops strategies by using large deviation principles. However, while they justify their strategies via the large deviation analysis, they do not provide lower bounds over strategies. Therefore, there remains a question about whether their strategies are truly asymptotically optimal.
Kaufmann et al. (2016) establishes distribution-dependent lower bounds for BAI with fixed confidence and budget, utilizing change-of-measure arguments. According to their results, we can confirm that for two-armed Gaussian bandits, the strategy of Glynn & Juneja (2004) is optimal.
However, Kaufmann et al. (2016) leaves lower bounds for multi-armed fixed-budget BAI as an open issue. Based on the arguments of Glynn & Juneja (2004) and Russo (2020), Kasy & Sautmann (2021) attempts to derive an asymptotically optimal strategy, but their attempt does not succeed. As pointed out by Ariu et al. (2021), without additional assumptions, there exists an instance whose lower bound is larger than that of Kaufmann et al. (2016). This result is based on another lower bound discovered by Carpentier & Locatelli (2016). These arguments are summarized by Qin (2022).
To address this issue, Kato et al. (2023b) and Degenne (2023) consider a restriction such that sampling rules do not depend on the instance $P$. Under this restriction, we can show the asymptotic optimality of the strategy provided by Glynn & Juneja (2004), which requires full knowledge of the instance and is therefore practically infeasible.
Komiyama et al. (2022) and Atsidakou et al. (2023) discuss asymptotically optimal strategies from minimax and Bayesian perspectives, respectively, where the leading factors of the lower and upper bounds match only up to constant terms, unlike our optimality, which holds up to the constant term in the exponent. This open issue is further explored by Komiyama et al. (2022), Wang et al. (2023a), Wang et al. (2023b), and Kato (2023).
Note that in the fixed confidence BAI setting, Garivier & Kaufmann (2016) proposes a strategy with an upper bound matching the derived lower bound. However, in the fixed-budget BAI, it remains unclear whether a strategy with an upper bound matching Kaufmann et al. (2016)’s lower bound exists.
Alternative lower bounds have been proposed by Audibert et al. (2010), Bubeck et al. (2011), Komiyama et al. (2023) and Kato et al. (2023a) for the expected simple regret minimization, which is another performance measure different from the probability of misidentification.
7.2 Extension to BAI in Multi-Armed Bandit (MAB) Problems
In contrast to two-armed bandit problems and BAI with fixed confidence, tight lower bounds for fixed-budget BAI in MAB problems remain unknown. One primary reason is the reversal of the KL divergence. Kato et al. (2023b), Degenne (2023), and Kato (2023) consider strategies that use sampling rules that are (asymptotically) invariant across instances $P \in \mathcal{P}$. Such a class of strategies is sometimes called static in the sense that it cannot estimate parameters during an adaptive experiment, so as to avoid the dependency on $P$. However, if we consider Gaussian bandit models, sampling rules that are invariant across instances do not imply non-adaptive (static) strategies, because we can still adaptively estimate the variances during an adaptive experiment (the variances are assumed to be the same for any instance in the restricted class).
8 Simulation Studies
This section provides simulation studies to investigate the empirical performance of the NA-AIPW strategy. For comparison, we also investigate the performance of the NA-IPW and NA-SA strategies defined in Section 6.3. Furthermore, we also conduct simulation studies of the "oracle" strategy with known variances, denoted by Oracle, and the uniform strategy that draws each arm an equal number of times, denoted by Uniform. We recommend the arm with the highest sample average in the Oracle and Uniform strategies. The Oracle strategy is the one proposed by Glynn & Juneja (2004) and Kaufmann et al. (2016), and the Uniform strategy with this recommendation rule is referred to as the Uniform-Empirical Best Arm (EBA) strategy by Bubeck et al. (2011).
Throughout the experiments, we set arm 1 as the best arm. We conduct experiments with three settings.
In the first experiment, we set and choose from the set . The variances are selected with a probability of from either or , where is chosen from . We continue the strategies until the largest budget and report the empirical probability of misidentification at each evaluated budget. We conduct independent trials for each choice of parameters and plot the results in Figures 1 and 2.
In the second experiment, the variances are selected with a probability of from either or , where is chosen from . The other settings are the same as the first experiment. The results are shown in Figures 3 and 4.
In the third experiment, we set and choose from the set . The other settings are the same as the first experiment. The results are shown in Figures 5 and 6.
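The following driver (numpy assumed) sketches the experimental protocol on a small scale; it reuses the run_na_aipw function from the sketch in Section 3.3, and the instance parameters, budget, and number of trials below are placeholders rather than the values used in the experiments above.

import numpy as np

def run_uniform(mu, sigma, T, rng=None):
    # Uniform sampling with the sample-average recommendation (the Uniform/EBA baseline).
    rng = np.random.default_rng(rng)
    a = rng.integers(0, 2, size=T)
    x = rng.normal(mu[a], sigma[a])
    means = [x[a == k].mean() if (a == k).any() else -np.inf for k in range(2)]
    return int(np.argmax(means))

def empirical_misidentification(strategy, mu, sigma, T, n_trials=200, seed=0):
    # Fraction of runs that fail to recommend the best arm (arm index 0 here).
    best = int(np.argmax(mu))
    errors = sum(strategy(mu, sigma, T, rng=seed + i) != best for i in range(n_trials))
    return errors / n_trials

mu, sigma, T = np.array([0.5, 0.3]), np.array([1.0, np.sqrt(5.0)]), 500
print("NA-AIPW:", empirical_misidentification(run_na_aipw, mu, sigma, T))
print("Uniform:", empirical_misidentification(run_uniform, mu, sigma, T))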
Our theoretical results imply that the misidentification probabilities of the NA-AIPW and Oracle strategies approach each other as $\Delta \to 0$, and we can confirm this phenomenon in the results. The Oracle strategy is slightly better than the NA-AIPW strategy when $\Delta$ is large; however, the performance gap approaches zero as $\Delta \to 0$. Note that when $\Delta$ is large, the probability of misidentification converges very fast, so the performance gap is still not so large even for large $\Delta$, because the misidentification probabilities of both the NA-AIPW and Oracle strategies converge to zero very quickly.
We can also find that the performance improvement of the NA-AIPW strategy over the Uniform strategy becomes larger as the difference between the variances becomes larger. For example, in the second experiment, when the two variances are equal, there is no improvement of the NA-AIPW strategy over the Uniform strategy because the Neyman allocation also draws each arm with an equal ratio.
The difference between the NA-AIPW and NA-IPW strategies becomes larger as the mean reward of each arm becomes larger. We can find that in the third setting, the NA-IPW strategy behaves badly since the means are chosen to be larger than those in the first and second settings.
9 Conclusion
This study investigated fixed-budget BAI for two-armed Gaussian bandits with unknown variances. We first reviewed the lower bound shown by Kaufmann et al. (2016). Then, we proposed the NA-AIPW strategy and found that its probability of misidentification matches the lower bound when the budget approaches infinity and the gap between the expected rewards of the two arms approaches zero. We referred to this setting as the small-gap regime and the optimality as the local asymptotic optimality. Although there are several remaining open questions, our result provides insight into long-standing open problems in BAI.
References
- Adusumilli (2022) Karun Adusumilli. Neyman allocation is minimax optimal for best arm identification with two arms, 2022. arXiv:2204.05527.
- Ahn et al. (2021) Dohyun Ahn, Dongwook Shin, and Assaf Zeevi. Online ordinal optimization under model misspecification, 2021. URL https://api.semanticscholar.org/CorpusID:235389954. SSRN.
- Akritas & Kourouklis (1988) Michael G. Akritas and Stavros Kourouklis. Local Bahadur efficiency of score tests. Journal of Statistical Planning and Inference, 19(2):187–199, 1988.
- Apostol (1974) Tom M Apostol. Mathematical analysis; 2nd ed. Addison-Wesley series in mathematics. Addison-Wesley, 1974.
- Ariu et al. (2021) Kaito Ariu, Masahiro Kato, Junpei Komiyama, Kenichiro McAlinn, and Chao Qin. Policy choice and best arm identification: Asymptotic analysis of exploration sampling, 2021. arXiv:2109.08229.
- Armstrong (2022) Timothy B. Armstrong. Asymptotic efficiency bounds for a class of experimental designs, 2022. arXiv:2205.02726.
- Atsidakou et al. (2023) Alexia Atsidakou, Sumeet Katariya, Sujay Sanghavi, and Branislav Kveton. Bayesian fixed-budget best-arm identification, 2023. arXiv:2211.08572.
- Audibert et al. (2010) Jean-Yves Audibert, Sébastien Bubeck, and Remi Munos. Best arm identification in multi-armed bandits. In Conference on Learning Theory, pp. 41–53, 2010.
- Bahadur (1960) R. R. Bahadur. Stochastic Comparison of Tests. The Annals of Mathematical Statistics, 31(2):276 – 295, 1960.
- Bang & Robins (2005) Heejung Bang and James M. Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973, 2005.
- Bubeck et al. (2009) Sébastien Bubeck, Rémi Munos, and Gilles Stoltz. Pure exploration in multi-armed bandits problems. In Algorithmic Learning Theory, pp. 23–37. Springer Berlin Heidelberg, 2009.
- Bubeck et al. (2011) Sébastien Bubeck, Rémi Munos, and Gilles Stoltz. Pure exploration in finitely-armed and continuous-armed bandits. Theoretical Computer Science, 2011.
- Bulmer (1967) Michael George Bulmer. Principles of statistics. M.I.T. Press, 2. ed. edition, 1967.
- Carpentier & Locatelli (2016) Alexandra Carpentier and Andrea Locatelli. Tight (lower) bounds for the fixed budget best arm identification bandit problem. In COLT, 2016.
- Chen et al. (2000) Chun-Hung Chen, Jianwu Lin, Enver Yücesan, and Stephen E. Chick. Simulation budget allocation for further enhancing the efficiency of ordinal optimization. Discrete Event Dynamic Systems, 10(3):251–270, 2000.
- Degenne (2023) Rémy Degenne. On the existence of a complexity in fixed budget bandit identification. In Conference on Learning Theory, volume 195, pp. 1131–1154. PMLR, 2023.
- Garivier & Kaufmann (2016) Aurélien Garivier and Emilie Kaufmann. Optimal best arm identification with fixed confidence. In Conference on Learning Theory, 2016.
- Glynn & Juneja (2004) Peter Glynn and Sandeep Juneja. A large deviations perspective on ordinal optimization. In Proceedings of the 2004 Winter Simulation Conference, volume 1. IEEE, 2004.
- Hadad et al. (2021) Vitor Hadad, David A. Hirshberg, Ruohan Zhan, Stefan Wager, and Susan Athey. Confidence intervals for policy evaluation in adaptive experiments. Proceedings of the National Academy of Sciences, 118(15), 2021.
- Hahn (1998) Jinyong Hahn. On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica, 66(2):315–331, 1998.
- Hahn et al. (2011) Jinyong Hahn, Keisuke Hirano, and Dean Karlan. Adaptive experimental design using the propensity score. Journal of Business and Economic Statistics, 2011.
- He & Shao (1996) Xuming He and Qi-man Shao. Bahadur efficiency and robustness of studentized score tests. Annals of the Institute of Statistical Mathematics, 48(2):295–314, 1996.
- Hirano et al. (2003) Keisuke Hirano, Guido Imbens, and Geert Ridder. Efficient estimation of average treatment effects using the estimated propensity score. Econometrica, 2003.
- Jamieson et al. (2014) Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sébastien Bubeck. lil' UCB: An optimal exploration algorithm for multi-armed bandits. In Conference on Learning Theory, 2014.
- Jourdan et al. (2023) Marc Jourdan, Degenne Rémy, and Kaufmann Emilie. Dealing with unknown variances in best-arm identification. In Proceedings of The 34th International Conference on Algorithmic Learning Theory, volume 201, pp. 776–849, 2023.
- Kasy & Sautmann (2021) Maximilian Kasy and Anja Sautmann. Adaptive treatment assignment in experiments for policy choice. Econometrica, 89(1):113–132, 2021.
- Kato (2023) Masahiro Kato. Worst-case optimal multi-armed gaussian best arm identification with a fixed budget, 2023. arXiv:2310.19788.
- Kato et al. (2020) Masahiro Kato, Takuya Ishihara, Junya Honda, and Yusuke Narita. Efficient adaptive experimental design for average treatment effect estimation, 2020. arXiv:2002.05308.
- Kato et al. (2023a) Masahiro Kato, Masaaki Imaizumi, Takuya Ishihara, and Toru Kitagawa. Asymptotically minimax optimal fixed-budget best arm identification for expected simple regret minimization, 2023a. arXiv:2302.02988.
- Kato et al. (2023b) Masahiro Kato, Masaaki Imaizumi, Takuya Ishihara, and Toru Kitagawa. Fixed-budget hypothesis best arm identification: On the information loss in experimental design. In ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, 2023b.
- Kaufmann et al. (2016) Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of best-arm identification in multi-armed bandit models. Journal of Machine Learning Research, 17(1):1–42, 2016.
- Kennedy (2016) Edward H. Kennedy. Semiparametric theory and empirical processes in causal inference, 2016. arXiv:1510.04740.
- Komiyama et al. (2022) Junpei Komiyama, Taira Tsuchiya, and Junya Honda. Minimax optimal algorithms for fixed-budget best arm identification. In Advances in Neural Information Processing Systems, 2022.
- Komiyama et al. (2023) Junpei Komiyama, Kaito Ariu, Masahiro Kato, and Chao Qin. Rate-optimal bayesian simple regret in best arm identification. Mathematics of Operations Research, 2023.
- Lai & Robbins (1985) T. L. Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 1985.
- Neyman (1923) Jerzy Neyman. Sur les applications de la theorie des probabilites aux experiences agricoles: Essai des principes. Statistical Science, 5:463–472, 1923.
- Neyman (1934) Jerzy Neyman. On the two different aspects of the representative method: the method of stratified sampling and the method of purposive selection. Journal of the Royal Statistical Society, 97:123–150, 1934.
- Qin (2022) Chao Qin. Open problem: Optimal best arm identification with fixed-budget. In Conference on Learning Theory, 2022.
- Robins et al. (1994) James M. Robins, Andrea Rotnitzky, and Lue Ping Zhao. Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427):846–866, 1994.
- Russo (2020) Daniel Russo. Simple bayesian algorithms for best-arm identification. Operations Research, 68(6):1625–1647, 2020.
- Tabord-Meehan (2022) Max Tabord-Meehan. Stratification Trees for Adaptive Randomisation in Randomised Controlled Trials. The Review of Economic Studies, 90(5):2646–2673, 2022.
- Tsiatis (2007) Anastasios Tsiatis. Semiparametric Theory and Missing Data. Springer Series in Statistics. Springer New York, 2007.
- van der Laan (2008) Mark J. van der Laan. The construction and analysis of adaptive group sequential designs, 2008. URL https://biostats.bepress.com/ucbbiostat/paper232.
- van der Vaart (1998) A.W. van der Vaart. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998.
- Wang et al. (2023a) Po-An Wang, Kaito Ariu, and Alexandre Proutiere. On uniformly optimal algorithms for best arm identification in two-armed bandits with fixed budget, 2023a. arXiv:2308.12000.
- Wang et al. (2023b) Po-An Wang, Ruo-Chun Tzeng, and Alexandre Proutiere. Best arm identification with fixed budget: A large deviation perspective. In Advances in Neural Information Processing Systems, 2023b.
- Wieand (1976) Harry S. Wieand. A Condition Under Which the Pitman and Bahadur Approaches to Efficiency Coincide. The Annals of Statistics, 4(5):1003 – 1011, 1976.
- Zhao (2023) Jinglong Zhao. Adaptive neyman allocation, 2023. arXiv:2309.08808.