
Recursively-Constrained Partially Observable Markov Decision Processes

Qi Heng Ho Department of Aerospace Engineering Sciences
University of Colorado Boulder
Boulder, Colorado, USA
Tyler Becker Department of Aerospace Engineering Sciences
University of Colorado Boulder
Boulder, Colorado, USA
Benjamin Kraske Department of Aerospace Engineering Sciences
University of Colorado Boulder
Boulder, Colorado, USA
Zakariya Laouar Department of Aerospace Engineering Sciences
University of Colorado Boulder
Boulder, Colorado, USA
Martin S. Feather Jet Propulsion Laboratory
California Institute of Technology
Pasadena, California, USA
Federico Rossi Jet Propulsion Laboratory
California Institute of Technology
Pasadena, California, USA
Morteza Lahijanian Department of Aerospace Engineering Sciences
University of Colorado Boulder
Boulder, Colorado, USA
Zachary Sunberg Department of Aerospace Engineering Sciences
University of Colorado Boulder
Boulder, Colorado, USA
Abstract

Many sequential decision problems involve optimizing one objective function while imposing constraints on other objectives. Constrained Partially Observable Markov Decision Processes (C-POMDP) model this case with transition uncertainty and partial observability. In this work, we first show that C-POMDPs violate the optimal substructure property over successive decision steps and thus may exhibit behaviors that are undesirable for some (e.g., safety critical) applications. Additionally, online re-planning in C-POMDPs is often ineffective due to the inconsistency resulting from this violation. To address these drawbacks, we introduce the Recursively-Constrained POMDP (RC-POMDP), which imposes additional history-dependent cost constraints on the C-POMDP. We show that, unlike C-POMDPs, RC-POMDPs always have deterministic optimal policies and that optimal policies obey Bellman’s principle of optimality. We also present a point-based dynamic programming algorithm for RC-POMDPs. Evaluations on benchmark problems demonstrate the efficacy of our algorithm and show that policies for RC-POMDPs produce more desirable behaviors than policies for C-POMDPs.

1 Introduction

Partially Observable Markov Decision Processes (POMDPs) are powerful models for sequential decision making due to their ability to account for transition uncertainty and partial observability. Their applications range from autonomous driving Pendleton et al. [2017] and robotics Lauri et al. [2023] to geology Wang et al. [2022], asset maintenance Papakonstantinou and Shinozuka [2014], and human-computer interaction Chen et al. [2020]. Constrained POMDPs (C-POMDPs) are extensions of POMDPs that impose a bound on expected cumulative costs while seeking policies that maximize expected total reward. C-POMDPs address the need to consider multiple objectives in applications such as an autonomous rover that has a navigation task as well as an energy usage budget, or a human-AI dialogue system with constraints on the length of dialogues. However, we observe that optimal policies computed for C-POMDPs exhibit pathological behavior in some problems, which can be opposed to the C-POMDP's intended purpose.

Example 1 (Cave Navigation).

Consider a rover agent in a cave with two tunnels, A and B, which may have rocky terrains. Traversing tunnel A has a higher expected reward than traversing tunnel B. To model wheel damage, a cost of 10 is given for traversing through rocky terrain, and 0 otherwise. The agent has noisy observations (correct with a probability of 0.8) of a tunnel's terrain type, and hence, has to maintain a belief (probability distribution) over the terrain type in each tunnel. The task is to navigate to the end of a tunnel while ensuring that the expected total cost is below a threshold of 5. The agent has the initial belief of 0.5 probability of rocks and 0.5 probability of no rocks in tunnel A, and 0 probability of rocks and 1.0 probability of no rocks in tunnel B.

In this example, suppose the agent receives an observation that leads to an updated belief of 0.8 probability that tunnel A is rocky. Intuitively, the agent should avoid tunnel A since the expected cost of navigating it is 8, which violates the cost constraint of 5. However, an optimal policy computed from a C-POMDP decides to go through the rocky region, violating the constraint and damaging the wheels. Such behavior is justified in the C-POMDP framework by declaring that, due to the low probability of observing that tunnel A is rocky in the first place, the expected cost from the initial time step is still within the threshold, and so this policy is admissible. However, this pathological behavior is clearly unsuitable for some (e.g., safety-critical) applications.

In this paper, we first provide the key insight that the pathological behavior is caused by the violation of the optimal substructure property over successive decision steps, and hence violation of the standard form of Bellman's Principle of Optimality (BPO). To mitigate the pathological behavior and preserve the optimal substructure property, we propose an extension of C-POMDPs through the addition of history-dependent cost constraints at each reachable belief, which we call Recursively-Constrained POMDPs (RC-POMDPs). We prove that deterministic policies are sufficient for optimality in RC-POMDPs and that RC-POMDPs satisfy BPO. These results suggest that RC-POMDPs are highly amenable to standard dynamic programming techniques, which is not true for C-POMDPs. RC-POMDPs provide a good balance between the BPO-violating expectation constraints of C-POMDPs and constraints on the worst-case outcome, which are overly conservative for POMDPs with inherent state uncertainty. Then, we present a point-based dynamic programming algorithm to approximately solve RC-POMDPs. Experimental evaluation shows that the pathological behavior is a prevalent phenomenon in C-POMDP policies, and that our algorithm for RC-POMDPs computes policies that obtain expected cumulative rewards competitive with C-POMDPs without exhibiting such behaviors.

In summary, this paper contributes (i) an analysis that C-POMDPs do not exhibit the optimal substructure property over successive decision steps and its consequences, (ii) the introduction of RC-POMDPs, a novel extension of C-POMDPs through the addition of history-dependent cost constraints, (iii) proofs that all RC-POMDPs have at least one deterministic optimal policy, satisfy BPO, and the Bellman operator has a unique fixed point under suitable initializations, (iv) a dynamic programming algorithm for RC-POMDPs, and (v) a series of illustrative benchmarks to demonstrate the advantages of RC-POMDPs.

Related Work

Several solution approaches exist for C-POMDPs with expectation constraints de Nijs et al. [2021]. These include offline Isom et al. [2008], Kim et al. [2011], Poupart et al. [2015], Walraven and Spaan [2018], Kalagarla et al. [2022], Wray and Czuprynski [2022] and online methods Lee et al. [2018], Jamgochian et al. [2023]. These works suffer from the unintuitive behavior discussed above. This paper shows that this behavior is rooted in the violation of optimal substructure by C-POMDPs and proposes a new problem formulation that obeys BPO.

BPO violation has also been discussed in fully-observable Constrained MDPs (C-MDPs) with state-action frequency and long-run average cost constraints Haviv [1996], Chong et al. [2012]. To overcome it, Haviv [1996] proposes an MDP formulation with sample path constraints. In C-POMDPs with expected cumulative costs, this BPO-violation problem remains unexplored. Additionally, adoption of the MDP solution of worst-case sample path constraints would be overly conservative for POMDPs, which are inherently characterized by state uncertainty. This paper fills that gap by studying the BPO of C-POMDPs and addressing it by imposing recursive expected cost constraints.

From the algorithmic perspective, the closest work to ours is the C-POMDP point-based value iteration (CPBVI) algorithm Kim et al. [2011]. Samples of admissible costs, defined by Piunovskiy and Mao [2000] for C-MDPs, are used with belief points as a heuristic to improve computational tractability of point-based value iteration for C-POMDPs. However, since CPBVI is designed for C-POMDPs, the synthesized policies by CPBVI may still exhibit pathological behavior. In this paper, we formalize the use of history-dependent expected cost constraints and provide a thorough analysis of it. We show that this problem formulation eliminates the pathological behavior of C-POMDPs.

2 Constrained POMDPs

POMDPs model sequential decision making problems under transition uncertainty and partial observability.

Definition 1 (POMDP).

A Partially Observable Markov Decision Process (POMDP) is a tuple $\mathcal{P}=(S,A,O,T,R,Z,\gamma,b_{0})$, where: $S$, $A$, and $O$ are finite sets of states, actions and observations, respectively, $T:S\times A\times S\rightarrow[0,1]$ is the transition probability function, $R:S\times A\rightarrow[R_{min},R_{max}]$, for $R_{min},R_{max}\in\mathbb{R}$, is the immediate reward function, $Z:S\times A\times O\rightarrow[0,1]$ is the probabilistic observation function, $\gamma\in[0,1)$ is the discount factor, and $b_{0}\in\Delta(S)$ is the initial belief, where $\Delta(S)$ is the probability simplex (the set of all probability distributions) over $S$.

We denote the probability distribution over states in $S$ at time $t$ by $b_{t}\in\Delta(S)$ and the probability of being in state $s$ at time $t$ by $b_{t}(s)$.

The evolution of an agent according to a POMDP model is as follows. At each $t\in\mathbb{N}_{0}$, the agent has a belief $b_{t}$ of its state $s_{t}$ as a probability distribution over $S$ and takes action $a_{t}\in A$. Its state evolves from $s_{t}\in S$ to $s_{t+1}\in S$ according to $T(s_{t},a_{t},s_{t+1})$, and it receives an immediate reward $R(s_{t},a_{t})$ and observation $o_{t}\in O$ according to observation probability $Z(s_{t+1},a_{t},o_{t})$. The agent then updates its belief using Bayes' theorem; that is, for $s_{t+1}=s^{\prime}$,

$b_{t+1}(s^{\prime})\propto Z(s^{\prime},a_{t},o_{t})\sum_{s\in S}T(s,a_{t},s^{\prime})b_{t}(s).$ (1)

Then, the process repeats. Let $h_{t}=\{a_{0},o_{0},\cdots,a_{t-1},o_{t-1}\}$ denote the history of the actions and observations up to but not including time step $t$; thus, $h_{0}=\emptyset$. The belief at time step $t$ is therefore $b_{t}=P(s_{t}\mid b_{0},h_{t})$. For readability, we do not explicitly include $b_{0}$, as all variables are conditioned on $b_{0}$.
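As a concrete reference, the following is a minimal Julia sketch of the update in Eq. (1); the tabular array layout T[s, a, sp] and Z[sp, a, o] and the function name are illustrative assumptions, not part of the model definition.

```julia
# Bayes belief update of Eq. (1), assuming tabular transition and observation
# arrays indexed as T[s, a, sp] and Z[sp, a, o], and a belief vector b over S.
function update_belief(b::Vector{Float64}, a::Int, o::Int,
                       T::Array{Float64,3}, Z::Array{Float64,3})
    nS = length(b)
    bp = zeros(nS)
    for sp in 1:nS
        # predict with T, then weight by the likelihood of the received observation
        bp[sp] = Z[sp, a, o] * sum(T[s, a, sp] * b[s] for s in 1:nS)
    end
    return bp ./ sum(bp)  # normalization supplies the proportionality constant in Eq. (1)
end
```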

The agent chooses actions according to a policy $\pi:\Delta(S)\to\Delta(A)$, which maps a belief $b$ to a probability distribution over actions. $\pi$ is called deterministic if $\pi(b)$ is a unitary distribution for every $b\in\Delta(S)$. A policy is typically evaluated according to the expected rewards it accumulates over time. Let $R(b,a)=\mathbb{E}_{s\sim b}[R(s,a)]$ be the expected reward for the belief-action pair $(b,a)$. The expected discounted sum of rewards that the agent receives under policy $\pi$ starting from belief $b_{t}$ is

$V_{R}^{\pi}(b_{t})=\mathbb{E}_{\pi,T,Z}\Big[\sum_{\tau=t}^{\infty}\gamma^{\tau-t}R\left(b_{\tau},\pi(b_{\tau})\right)\mid b_{t}\Big].$ (2)

Additionally, the $Q$ reward-value is defined as

$Q^{\pi}_{R}(b_{t},a)=R(b_{t},a)+\gamma\,\mathbb{E}_{T,Z}[V^{\pi}_{R}(b_{t+1})].$ (3)

The objective of POMDP problems is often to find a policy that maximizes $V_{R}^{\pi}(b_{0})$.

As an extension of POMDPs, Constrained POMDPs add a constraint on the expected cumulative costs.

Definition 2 (C-POMDP).

A Constrained POMDP (C-POMDP) is a tuple $\mathcal{M}=(\mathcal{P},C,\hat{c})$, where $\mathcal{P}$ is a POMDP as in Def. 1, $C:S\times A\rightarrow\mathbb{R}^{n}_{\geq 0}$ is a cost function that maps each state-action pair to an $n$-dimensional vector of non-negative costs, and $\hat{c}\in\mathbb{R}^{n}_{\geq 0}$ is an $n$-dimensional vector of expected cost thresholds from the initial belief state $b_{0}$.

In C-POMDPs, by executing action $a\in A$ at state $s\in S$, the agent receives a cost vector $C(s,a)$ in addition to the reward $R(s,a)$. Let $C(b,a)=\mathbb{E}_{s\sim b}[C(s,a)]$. The expected sum of costs incurred by the agent under $\pi$ from belief $b_{t}$ is:

$V^{\pi}_{C}(b_{t})=\mathbb{E}_{\pi,T,Z}\Big[\sum_{\tau=t}^{\infty}\gamma^{\tau-t}C(b_{\tau},\pi(b_{\tau}))\mid b_{t}\Big].$ (4)

Additionally, the $Q$ cost-value is defined as

$Q^{\pi}_{C}(b_{t},a)=C(b_{t},a)+\gamma\,\mathbb{E}_{T,Z}[V^{\pi}_{C}(b_{t+1})].$ (5)
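To make the backups in Eqs. (3) and (5) concrete, here is a minimal Julia sketch of a one-step Q computation at a belief; the tabular arrays R[s, a], C[s, a], T[s, a, sp], Z[sp, a, o] and the successor-value callbacks VR and VC are illustrative assumptions rather than part of the formal model.

```julia
using LinearAlgebra

# One-step Q backup of Eqs. (3) and (5) at belief b for action a, assuming tabular
# R, C, T, Z and successor value estimates VR(b') and VC(b') supplied as functions.
function q_backup(b::Vector{Float64}, a::Int, R, C, T, Z, γ, VR, VC)
    nS, nO = size(R, 1), size(Z, 3)
    qr, qc = dot(b, R[:, a]), dot(b, C[:, a])          # R(b,a) and C(b,a)
    for o in 1:nO
        # unnormalized successor belief; its total mass equals P(o | b, a)
        bp = [Z[sp, a, o] * sum(T[s, a, sp] * b[s] for s in 1:nS) for sp in 1:nS]
        po = sum(bp)
        po > 0 || continue
        bp ./= po
        qr += γ * po * VR(bp)
        qc += γ * po * VC(bp)
    end
    return qr, qc
end
```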

In C-POMDPs, the constraint $V^{\pi}_{C}(b_{0})\leq\hat{c}$, where $\leq$ refers to the component-wise inequality, is imposed on the POMDP optimization problem as formalized below.

Problem 1 (C-POMDP Planning Problem).

Given a C-POMDP, compute a policy $\pi^{*}$ that maximizes the total expected reward in Eq. (2) from initial belief $b_{0}$ while the total expected cost vector in Eq. (4) is bounded by $\hat{c}$, i.e.,

$\pi^{*}=\arg\max_{\pi}V_{R}^{\pi}(b_{0})\quad\text{ s.t. }\quad V_{C}^{\pi}\left(b_{0}\right)\leq\hat{c}.$ (6)

Unlike POMDPs, which have at least one deterministic optimal policy Sondik [1978], optimal policies of C-POMDPs may require randomization, and hence there may not exist an optimal deterministic policy Kim et al. [2011].

Next, we discuss why the solutions to Problem 1 may not be desirable and why an alternate formulation is necessary.

2.1 Optimal Substructure Property

A problem has the optimal substructure property if an optimal solution to the problem contains optimal solutions to its subproblems Cormen et al. [2009]. Additionally, Cormen et al. note that these subproblems must be independent of each other. If this holds for Problem 1, then the optimal policy $\pi^{*}(b_{0})$ at $b_{0}$ can be computed recursively by finding the optimal policy $\pi^{*}(h_{t})$ for each successive history for the same planning problem. Thus, a natural subproblem to Eq. (6) is the history-based subproblem $(\mathcal{M},h_{t})$, with $\pi^{*}(h_{t})=\arg\max_{\pi}V_{R}^{\pi}(h_{t})\text{ s.t. }V_{C}^{\pi}\left(b_{0}\right)\leq\hat{c}$ (constraining $V_{C}^{\pi}(h_{t})\leq\hat{c}$ also violates the property, as the constraint is defined only at $b_{0}$). We show that this subproblem violates the optimal substructure property, which makes the employment of standard dynamic programming techniques difficult (some approaches use dynamic programming, e.g., Isom et al. [2008], Kim et al. [2011], but they do not find optimal policies).

Since the constraint of Eq. (6) is defined only at $b_{0}$, the subproblem at $h_{t}$ must consider the expected cumulative cost of the policy from $b_{0}$. It is not enough to compute the expected total cost obtained from $b_{0}$ to $h_{t}$, as an optimal cost-value from $h_{t}$ depends on cost-values of other subproblems. We illustrate this with an example. Consider the POMDP (depicted as a belief MDP) in Figure 1, which is a simplified version of Example 1. W.l.o.g., let $\gamma=1$. The agent starts at $b_{0}$ with constraint $\hat{c}=5$. Actions $a_{A}$ and $a_{B}$ represent going through tunnels A and B, and $r$ and $nr$ are the observations that tunnel A is rocky and not rocky, respectively.

[Figure 1 diagram: belief MDP of the counter-example. From $b_{0}$, action $a_{A}$ leads to $b_{1}$ (observation $r$, probability 0.5) or $b_{2}$ (observation $nr$, probability 0.5), while $a_{B}$ leads to $b_{3}$ with probability 1; from $b_{1}$ and $b_{2}$, both actions lead to $b_{3}$ with probability 1.]

$R$        $a_{A}$   $a_{B}$
$b_{0}$    0         10
$b_{1}$    12        0
$b_{2}$    12        0

$C$        $a_{A}$   $a_{B}$
$b_{0}$    0         5
$b_{1}$    8         5
$b_{2}$    2         5

Figure 1: Counter-example POMDP with associated reward and cost functions. The action at $b_{3}$ has 0 reward and cost.

By examining the reward function, we see that action $a_{A}$ returns the highest reward everywhere except $b_{0}$. Action $a_{B}$ returns a higher reward at $b_{0}$. Let $\pi_{A}$ be the policy that chooses $a_{A}$ at every belief, and $\pi_{B}$ the one that chooses $a_{B}$ at $b_{0}$. The cost-values for these policies are $V^{\pi_{A}}_{C}(b_{0})=V^{\pi_{B}}_{C}(b_{0})=5\leq\hat{c}$, and the reward-values are $V^{\pi_{A}}_{R}(b_{0})=12$, $V^{\pi_{B}}_{R}(b_{0})=10$. Note that both policies satisfy the constraint, and any policy that chooses $a_{B}$ at $b_{1}$ or $b_{2}$, or that randomizes between $\pi_{A}$ and $\pi_{B}$, has value less than $V^{\pi_{A}}_{R}(b_{0})$; hence, $\pi_{A}$ is the optimal policy. However, when planning at $b_{1}$, i.e., $h_{1}$, it is impossible to decide that $a_{A}$ is optimal without first knowing that action $a_{A}$ at $h_{2}$ incurs a cost of 2 and is optimal. The decisions at $b_{1}$ and $b_{2}$ cannot be computed separately as subproblems.
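For concreteness, the quoted values follow from a one-step expansion of Eqs. (2) and (4) on the belief MDP of Figure 1 with $\gamma=1$:

$V^{\pi_{A}}_{C}(b_{0}) = C(b_{0},a_{A}) + 0.5\,C(b_{1},a_{A}) + 0.5\,C(b_{2},a_{A}) = 0 + 0.5(8) + 0.5(2) = 5, \qquad V^{\pi_{B}}_{C}(b_{0}) = C(b_{0},a_{B}) = 5,$
$V^{\pi_{A}}_{R}(b_{0}) = 0 + 0.5(12) + 0.5(12) = 12, \qquad V^{\pi_{B}}_{R}(b_{0}) = R(b_{0},a_{B}) = 10.$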

To get around this dependence, we can include information about how much cost the policy incurs at other subproblems and how much cost policies can incur from $h_{t}$, obtaining a policy-dependent subproblem $(\mathcal{M},h_{t},\pi)$. This subproblem definition exhibits the optimal substructure property only if we relax the restriction of subproblems being independent. Nonetheless, the optimal solution to a subproblem $(\mathcal{M},h_{t},\pi)$ is only guaranteed to be optimal for the full problem if an optimal policy $\pi^{*}$ is already provided.

2.1.1 Pathological Behavior: Stochastic Self-Destruction

A main consequence of history-dependent subproblems violating the optimal substructure property and instead requiring policy-dependent subproblems is that optimal policies may exhibit unintuitive behaviors during execution.

In the above example, the optimal policy from $b_{0}$ first chooses action $a_{A}$. Suppose that $h_{1}$ is reached. The cost constraint at $b_{1}$ remains at 5 since no cost has been incurred. However, the optimal C-POMDP policy chooses action $a_{A}$ and incurs a cost of 8, which violates the constraint, even though there is another action, $a_{B}$, that incurs a lower expected cost and satisfies the constraint. Therefore, in 50% of executions, when $h_{1}$ is reached, the agent intentionally violates the cost constraint to get higher expected rewards, even though a policy that satisfies the cost constraint exists. We term this pathological behavior stochastic self-destruction.

This unintuitive behavior is mathematically correct in the C-POMDP framework because the policy still satisfies the constraint at the initial belief state on expectation. An optimal C-POMDP policy exploits the nature of the constraint in Eq. (6) to intentionally violate the cost constraint for some belief trajectories. A concrete manifestation of this phenomenon is in the stochasticity of the optimal policies for C-POMDPs. These policies randomize between deterministic policies that violate the expected cost threshold but obtain higher expected reward, and those that satisfy the cost threshold but obtain lower expected reward.

Another consequence is a mismatch between optimal policies planned from a current time step and optimal policies planned at future time steps. In the example in Figure 1, if re-planning is conducted at $b_{1}$, the re-planned optimal policy selects $a_{B}$ instead of $a_{A}$. In fact, the policy that initially takes $a_{B}$ at $b_{0}$ achieves a higher expected reward than the original policy that takes $a_{A}$ at $b_{0}$ and re-plans at future time steps. This phenomenon can therefore lead to poor performance of the closed-loop system during execution.

Remark 1.

We remark that the pathological behavior arises due to the C-POMDP problem formulation, and not the algorithms designed to solve C-POMDPs. Further, this issue cannot be addressed by simply restricting solutions to deterministic policies since they also exhibit the pathological behavior, as seen in the example in Figure 1.

3 Recursively-Constrained POMDPs

To mitigate the pathological behaviors and obtain a (policy-independent) optimal substructure property, we aim to align optimal policies computed at a current belief with optimal policies computed at future (successor) beliefs. We propose a new problem formulation called Recursively-Constrained POMDP (RC-POMDP), which imposes additional recursively defined constraints on a policy.

An RC-POMDP has the same tuple as a C-POMDP, but with recursive constraints on beliefs at future time steps. These constraints enforce that a policy must satisfy a history dependent cumulative expected cost constraint at every future belief state. Intuitively, we bound the cost value at every belief such that the constraint in the initial node is respected.

The expected cumulative cost of the trajectories associated with history $h_{t}$ is given as:

$W(h_{t})=\sum_{\tau=0}^{t-1}\gamma^{\tau}\mathbb{E}_{s_{\tau}\sim b_{\tau}}\left[C(s_{\tau},a_{\tau})\mid h_{\tau}\right].$ (7)

We can direct the optimal policy at each time step $t$ by imposing that the total expected cumulative cost satisfies the initial cost constraint $\hat{c}$. For a given $h_{t}$ and its corresponding $b_{t}$, the expected cumulative cost at $b_{0}$ is given by:

$V^{\pi}_{C\mid h_{t}}(b_{0})=W(h_{t})+\gamma^{t}V_{C}^{\pi}(b_{t}).$ (8)

Therefore, the following constraint should be satisfied by a policy $\pi$ at each future belief state:

$W(h_{t})+\gamma^{t}V_{C}^{\pi}\left(b_{t}\right)\leq\hat{c}.$ (9)

We define the admissibility of a policy $\pi$ accordingly.

Definition 3 (Admissible Policy).

A policy $\pi$ is $k$-admissible for a $k\in\mathbb{N}_{0}\cup\{\infty\}$ if $\pi$ satisfies Eq. (9) for all $t\in\{0,\ldots,k-1\}$ and all histories $h_{t}$ of length $t$ induced by $\pi$ from $b_{0}$. A policy is called admissible if it is $\infty$-admissible.

Since RC-POMDP policies are constrained based on history, it is not sufficient to directly use belief-based policies. Thus, we consider history-based policies in this work. A history-based policy maps a history $h_{t}$ to a probability distribution over actions $\Delta(A)$.

The RC-POMDP optimization problem is formalized below.

Problem 2 (RC-POMDP Planning Problem).

Given a C-POMDP and an admissibility constraint $k\in\mathbb{N}\cup\{\infty\}$, compute an optimal policy $\pi^{*}$ that is $k$-admissible, i.e., $\forall h_{t}$,

$\pi^{*}(h_{t})=\arg\max_{\pi}V_{R}^{\pi}(h_{t})$ (10)
$\text{s.t.}\;\;W(h_{t})+\gamma^{t}V_{C}^{\pi}\left(b_{t}\right)\leq\hat{c}\;\;\forall t\in\{0,\dots,k-1\}.$ (11)

Note that Problem 2 is an infinite-horizon problem since the optimization objective (10) is infinite horizon. The admissibility constraint $k$ is a user-defined parameter. In this work, we focus on $k=\infty$, i.e., admissible policies.

Remark 2.

In POMDPs, reasoning about cost is done in expectation due to state uncertainty. C-POMDPs bound the expected total cost of state trajectories, enabling belief trajectories with low expected costs to compensate for those with high expected costs. Conversely, a worst-case constraint formulation of the problem, which never allows any violations during execution, may be overly conservative. RC-POMDPs strike a balance between the two: they bound the expected total cost for all belief trajectories, only allowing cost violations during execution due to state uncertainty.

4 Theoretical Analysis of RC-POMDPs

We first transform Eq. (11) into an equivalent recursive form that is better suited for policy computation, e.g., tree search and dynamic programming. By rearranging Eq. (11), $V^{\pi}_{C}(b_{t})\leq\gamma^{-t}\cdot(\hat{c}-W(h_{t}))$. Based on this, we define the history-dependent admissible cost bound as:

$d(h_{t})=\gamma^{-t}\cdot(\hat{c}-W(h_{t})),$ (12)

which can be computed recursively:

$d(h_{0})=\hat{c},\quad d(h_{t+1})=\gamma^{-1}\cdot\big(d(h_{t})-C(b_{t},a_{t})\big).$ (13)
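As an implementation note, the recursion in Eq. (13) is a one-line update. The Julia sketch below (scalar-cost case, illustrative names) shows how a planner can propagate the admissible cost bound along a sampled history, where cost_b stands for $C(b_{t},a_{t})$:

```julia
# Recursive admissible cost bound of Eq. (13), scalar-cost case.
# d is d(h_t); cost_b is the expected immediate cost C(b_t, a_t) at the current belief.
next_bound(d::Float64, cost_b::Float64, γ::Float64) = (d - cost_b) / γ

# Example with ĉ = 5 and γ = 0.95: after incurring an expected cost of 1.0,
# the remaining admissible cost bound is (5 - 1) / 0.95 ≈ 4.21.
d1 = next_bound(5.0, 1.0, 0.95)
```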

Then, Problem 2 can be reformulated with recursive bounds.

Proposition 1.

Problem 2 can be rewritten as:

$\pi^{*}=\arg\max_{\pi}V_{R}^{\pi}(b_{0})\quad\text{ s.t. }\quad V_{C}^{\pi}(b_{t})\leq d(h_{t})\quad\forall t\in\{0,1,\dots,k-1\},$ (14)

where $d(h_{t})$ is defined recursively in Eq. (13).

Optimality of Deterministic Policies

Here, we show that deterministic policies suffice for optimality in RC-POMDPs.

Theorem 1.

An RC-POMDP with admissibility constraint $k=\infty$ has at least one deterministic optimal policy if an admissible policy exists.

A proof is provided in the Appendix. The main intuition is that we can always construct an optimal deterministic policy from an optimal stochastic policy. That is, at every history in which the policy has stochasticity, we can construct a new admissible policy that achieves the same reward-value while remaining admissible by deterministically choosing one of the stochastic actions at that history. We obtain a deterministic optimal policy by inductively performing this determinization at all reachable histories.

Satisfaction of Bellman’s Principle of Optimality   Here, we show that RC-POMDPs satisfy BPO with a policy-independent optimal substructure.

Proposition 2 (Belief-Admissible Cost Formulation).

An RC-POMDP belief $b_{t}$ with history-dependent admissible cost bound $d(h_{t})$ can be rewritten as an augmented belief-admissible cost state $\bar{b}_{t}=(b_{t},d(h_{t}))$. Further, the augmented $Q$-values for a policy can be written as:

$Q_{R}^{\pi}((b_{t},d(h_{t})),a)=R(b_{t},a)+\gamma\,\mathbb{E}[V^{\pi}_{R}((b_{t+1},d(h_{t+1})))],$
$Q_{C}^{\pi}((b_{t},d(h_{t})),a)=C(b_{t},a)+\gamma\,\mathbb{E}[V^{\pi}_{C}((b_{t+1},d(h_{t+1})))].$

We first see that the evolution of $\bar{b}_{t}$ is Markovian, i.e.,

$P(\bar{b}_{t+1}\mid\bar{b}_{t},a_{t},o_{t},h_{t})=\begin{cases}P(b_{t+1}\mid b_{t},a_{t},o_{t},h_{t})&\text{if }d(h_{t+1})=\frac{d(h_{t})-C(b_{t},a_{t})}{\gamma}\\ 0&\text{otherwise,}\end{cases}$

thus, $P(\bar{b}_{t+1}\mid\bar{b}_{t},a_{t},o_{t},h_{t})=P(\bar{b}_{t+1}\mid\bar{b}_{t},a_{t},o_{t})$.

Here, we use the policy iteration version of the Bellman equation, but a similar argument can be made for value iteration.

Theorem 2.

Fix $\pi$. Let $V^{\pi}=(V_{R}^{\pi},V_{C}^{\pi})$ be the reward- and cost-value functions for $\pi$. The Bellman operator $\mathbb{B}$ for policy $\pi$ for an RC-POMDP is given by, $\forall\bar{b}_{t}$,

$\mathbf{a}=\operatorname*{arg\,max}_{a\in A}\Big[Q_{R}^{\pi}(\bar{b}_{t},a)\mid Q^{\pi}_{C}(\bar{b}_{t},a)\leq d(h_{t})\Big]$ (15)
$\mathbb{B}[V^{\pi}](\bar{b}_{t})\triangleq\begin{cases}\big(Q_{R}^{\pi}(\bar{b}_{t},a),Q_{C}^{\pi}(\bar{b}_{t},a)\big),\,a\in\mathbf{a}&\text{ if }\mathbf{a}\neq\emptyset,\\ \big(V_{R}^{\pi}(\bar{b}_{t}),(\infty,\ldots,\infty)\big)&\text{ if }\mathbf{a}=\emptyset.\end{cases}$

Assume an admissible policy exists for the RC-POMDP with admissibility constraint $k=\infty$. Let $V^{\pi^{*}}=(V_{R}^{\pi^{*}},V_{C}^{\pi^{*}})$ be the values for an optimal admissible policy $\pi^{*}$, and let $\pi^{\prime}$ be the new policy obtained with $(V_{R}^{\pi^{\prime}},V_{C}^{\pi^{\prime}})=\mathbb{B}[V^{\pi^{*}}]$. Then $\pi^{*}$ satisfies the BPO criterion of an admissible optimal policy:

$V_{R}^{\pi^{\prime}}(\bar{b}_{t})=V_{R}^{\pi^{*}}(\bar{b}_{t})\qquad\forall\bar{b}_{t},$ (16)
$V_{C}^{\pi^{\prime}}(b_{t})\leq d(h_{t})\qquad\forall\bar{b}_{t}\in\textsc{Reach}^{\pi^{\prime}}(\bar{b}_{0}),$ (17)

where $\textsc{Reach}^{\pi}(\bar{b}_{0})$ is the set of augmented belief states reachable from $b_{0}$ under policy $\pi$.

This theorem shows that an optimal policy remains admissible and optimal w.r.t. rewards after applying $\mathbb{B}$ to a policy-independent value function $V$. Note that $V_{C}^{*}$ is not unique, as there may be multiple optimal cost-value functions for an optimal $V_{R}^{*}$. Next, we show that $\mathbb{B}$ is a contraction over reward-values for a suitably initialized value function, which is one that defines the space of admissible policies.
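To illustrate the constrained greedy step of Eq. (15), the following Julia sketch selects an action given already-computed Q-values at an augmented belief (e.g., from a backup like the one sketched after Eq. (5)); the function name and the convention of returning nothing for an infeasible node are our own choices.

```julia
# Constrained action selection of Eq. (15): maximize the Q reward-value over the
# actions whose Q cost-value respects the admissible bound d. If no action is
# feasible, return nothing; the caller then keeps the previous reward-value and
# sets the cost-value to ∞ (second case of the Bellman operator).
function constrained_backup(QR::Vector{Float64}, QC::Vector{Float64}, d::Float64)
    feasible = findall(<=(d), QC)
    isempty(feasible) && return nothing
    a = feasible[argmax(view(QR, feasible))]
    return (a, QR[a], QC[a])
end
```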

Theorem 3.

For each $\bar{b}_{t}$, define $\Phi(\bar{b}_{t})$ as the set of admissible policies from $\bar{b}_{t}$:

$\Phi(\bar{b}_{t})=\{\pi\;|\;V^{\pi}_{C}(b_{\tau})\leq d(h_{\tau})\;\;\forall\tau\geq t\}.$ (18)

$V^{\pi^{0}}$ is a well-behaved initial value function if the following holds for all $\bar{b}_{t}$: if $\Phi(\bar{b}_{t})=\emptyset$, then $V^{\pi^{0}}_{C}(\bar{b}_{t})=(\infty,\ldots,\infty)$; if $\Phi(\bar{b}_{t})\neq\emptyset$, then $V_{C}^{\pi^{0}}(\bar{b}_{t})\leq d(h_{t})$.

Suppose that $V^{\pi^{0}}$ is well behaved; then $\mathbb{B}^{n}[(V_{R}^{\pi^{0}},V_{C}^{\pi^{0}})]\rightarrow(V_{R}^{\pi^{*}},V_{C}^{\pi^{n}})$ as $n\rightarrow\infty$. That is, starting from $\pi^{0}$, $\mathbb{B}$ is a contraction on $V_{R}$ and $V_{R}^{\pi^{*}}$ is a unique fixed point.

Proofs of all results are provided in the Appendix. Theorems 1-3 show that it is sufficient to search in the space of deterministic policies for an optimal one, and the policy-independent optimal substructure of RC-POMDPs can be exploited to employ dynamic programming for an effective and computationally efficient algorithm for RC-POMDPs. Further, Theorem 3 shows that determining policy admissibility is essential for effective dynamic programming. These results also indicate that optimal policies for RC-POMDPs do not exhibit the same pathological behaviors as C-POMDPs.

5 Dynamic Programming for RC-POMDPs

With the theoretical foundation above, we devise a first attempt at an algorithm that approximately solves Problem 2 with scalar cost and admissibility constraint $k=\infty$. We leave the multi-dimensional and finite-$k$ cases for future work. The algorithm is called Admissibility Recursively Constrained Search (ARCS). ARCS takes advantage of the Markovian property of the belief-admissible cost formulation in Proposition 2, and Theorems 1-3, to utilize point-based dynamic programming in the space of deterministic and admissible policies, building on unconstrained POMDP methods [Shani et al., 2013].

ARCS is outlined in Algorithm 1. It takes as input the RC-POMDP $\mathcal{M}$ and $\epsilon>0$, a target error between the computed policy and an optimal policy at $b_{0}$. ARCS explores the search space by incrementally sampling points in the history space. These points form nodes in a policy tree $T$. At each iteration, a SAMPLE step expands a sequence of points starting from the root. Then, a Bellman BACKUP step is performed for each sampled node. Finally, a PRUNE step removes sub-optimal nodes. These three steps are repeated until an admissible $\epsilon$-optimal policy is found. Pseudocode for SAMPLE, BACKUP and PRUNE is provided in the appendix.

Algorithm 1 Anytime Recursively Constrained Search

ARCS($\mathcal{M},\epsilon$)

1:  Initialize cost-minimizing policy $\hat{\pi}_{c}^{min}=\Gamma_{c}^{min}$
2:  $(\alpha_{r},\alpha_{c})\leftarrow\operatorname*{arg\,min}_{(\alpha_{r},\alpha_{c})\in\Gamma_{c_{min}}}\alpha_{r}^{T}b_{0}$
3:  $\underline{V}_{R}\leftarrow\alpha_{r}^{T}b_{0},\;\overline{V}_{C}\leftarrow\alpha_{c}^{T}b_{0}$
4:  Initialize $\overline{V}_{R}$ and $\underline{V}_{C}$ for $b_{0}$ with FIB
5:  Initialize $k_{0}$ with Eq. (19)-(20)
6:  $T\leftarrow v_{0}=(b_{0},\hat{c},k_{0},\overline{V}_{R},\underline{V}_{R},\overline{V}_{C},\underline{V}_{C},\emptyset,\emptyset,\emptyset,\emptyset)$
7:  repeat
8:     $B_{sam}\leftarrow$ SAMPLE($\epsilon$)
9:     for all $v\in B_{sam}$ do
10:        BACKUP($v$)
11:     end for
12:     PRUNE($B_{sam}$)
13:  until termination conditions are satisfied
14:  return $T,\Gamma_{c_{min}}$

Policy Tree Representation   We represent the policy with a policy tree $T$. A node in $T$ is a tuple $v=(b,d,k,\overline{V}_{R},\underline{V}_{R},\overline{V}_{C},\underline{V}_{C},\overline{Q}_{R},\underline{Q}_{R},\overline{Q}_{C},\underline{Q}_{C})$, where $b$ is a belief, $d$ is a history-dependent admissible cost bound, $k$ is a lower bound on the admissible horizon, $\overline{V}_{R}$ and $\underline{V}_{R}$ are the two-sided bounds on reward-values, $\overline{V}_{C}$ and $\underline{V}_{C}$ are the two-sided cost-value bounds, $\overline{Q}_{R},\underline{Q}_{R}$ are the two-sided bounds on the $Q$ reward-value, and $\overline{Q}_{C},\underline{Q}_{C}$ are the two-sided bounds on the $Q$ cost-value. The root of $T$ is the node $v_{0}$ with $b=b_{0}$, $d=\hat{c}$, and admissible horizon lower bound $k_{0}$.

From Theorem 3, a key aspect of effective dynamic programming for RC-POMDPs is computing admissible policies. This can be approximated by minimizing $V_{C}(\hat{b}_{t})$. As a pre-processing step, we first approximate a minimum cost-value policy $\pi_{c}^{min}=\arg\inf_{\pi}V^{\pi}_{C}$. An arbitrarily tight under-approximation (upper bound) $\hat{\pi}_{c}^{min}$, represented as a set of $|S|$-dimensional hyperplanes called $\alpha$-vectors, can be computed efficiently with a POMDP algorithm Hauskrecht [2000]. The reward-values obtained by $\hat{\pi}_{c}^{min}$ are also a lower bound on the optimal reward-value. Thus, $\hat{\pi}_{c}^{min}$ is represented by a set of $\alpha$-vector pairs $(\alpha_{r},\alpha_{c})\in\Gamma_{C}^{min}$. $\hat{\pi}_{c}^{min}$ is used to initialize our policy, and is used from leaf nodes of $T$.

To initialize a new node, belief $b^{\prime}$ is computed with Eq. (1), and $d^{\prime}$ is computed recursively with Eq. (13). We initialize $\underline{V}_{R}$ and $\overline{V}_{C}$ with $\hat{\pi}_{c}^{min}$, initialize $\underline{V}_{C}$ and $\overline{V}_{R}$ independently using the Fast Informed Bound (FIB) Hauskrecht [2000], and set $k^{\prime}$ to a lower bound on the admissible horizon.

Admissible Horizon Lower Bound   It is computationally intractable to exhaustively search the possibly infinite policy space. Thus, we maintain a lower bound on the admissible horizon of the policy for every node. It is used to compute admissibility beyond the current search depth of the tree, and to improve search efficiency via pruning. To initialize the admissible horizon guarantee of a leaf node, we compute a lower bound on the admissible horizon $k$ when using $\hat{\pi}_{c}^{min}$.

Lemma 1.

Let $C_{max}=\max_{b\in B}C(b,\hat{\pi}_{c}^{min}(b))$ be the maximum 1-step cost that $\hat{\pi}_{c}^{min}$ incurs at each time step across the entire belief space. Then, for a node $v$, if $v.d<0$, then $k=0$. For a leaf node $v$ with $v.d\geq 0$, $\hat{\pi}_{c}^{min}$ is at least $k$-admissible with

$k=\big\lfloor\log\big(1-({v.d}/{C_{max}})\cdot(1-\gamma)\big)/\log(\gamma)\big\rfloor,$ (19)

and $\hat{\pi}_{c}^{min}$ is admissible from history $h$ if

$C_{max}/(1-\gamma)\leq v.d\quad\text{ or }\quad v.\overline{V}^{\hat{\pi}_{c}^{min}}_{C}=0\leq v.d.$ (20)

A proof is provided in the appendix. This lemma provides sufficient conditions for admissibility of computed policies. We compute an upper bound on the parameter $C_{max}$,

$C_{max}\leq V_{C,max}^{\hat{\pi}_{c}^{min}}=\max_{b\in\Delta(S)}\min_{(\alpha_{r},\alpha_{c})\in\Gamma_{c_{min}}}\alpha_{c}^{T}b,$ (21)

where $\alpha_{c}$ refers to a cost $\alpha$-vector. This can be solved efficiently with the maximin LP Williams [1990]:

$\max_{z,b}\,z\;\;\text{ s.t. }\;\;\alpha_{c}^{T}b\geq z,\;\;(\alpha_{r},\alpha_{c})\in\Gamma,\;\;b\in\Delta(S).$ (22)
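A minimal Julia sketch of these two pre-processing computations is given below, assuming the JuMP modeling package with the HiGHS solver for the LP of Eq. (22) and a vector Γc of cost α-vectors; these package and naming choices are ours, not prescribed by the paper.

```julia
using JuMP, HiGHS

# Upper bound on Cmax via the maximin LP of Eqs. (21)-(22):
# maximize z subject to αc' b ≥ z for every cost α-vector and b ∈ Δ(S).
function cmax_upper_bound(Γc::Vector{Vector{Float64}}, nS::Int)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, b[1:nS] >= 0)
    @variable(model, z)
    @constraint(model, sum(b) == 1)          # b lies on the probability simplex
    for αc in Γc
        @constraint(model, αc' * b >= z)     # z lower-bounds every αc' b, so z = min at optimum
    end
    @objective(model, Max, z)
    optimize!(model)
    return value(z)
end

# Admissible-horizon lower bound of Eq. (19), with the first sufficient
# condition of Eq. (20) for ∞-admissibility; d is the node's bound v.d.
function admissible_horizon(d::Float64, Cmax::Float64, γ::Float64)
    d < 0 && return 0
    Cmax / (1 - γ) <= d && return typemax(Int)
    return floor(Int, log(1 - (d / Cmax) * (1 - γ)) / log(γ))
end
```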

Sampling   ARCS uses a mixture of random sampling Spaan and Vlassis [2005] and heuristic search (SARSOP) Kurniawati et al. [2008]. Our empirical evaluations suggest that this approach strikes an effective balance between finding policies with high cumulative reward and policies that are admissible. At each SAMPLE step, ARCS expands the search space from the root of $T$, with either heuristic sampling or random sampling. For heuristic sampling, we use the same sampling strategy and sampling termination condition as SARSOP. It works by choosing actions with the highest $\overline{Q}_{R}$, and observations that have the largest contribution to the gap at the root of $T$; sampling terminates based on a combination of selective deep sampling and a gap termination criterion. With random sampling, actions and observations are chosen randomly while traversing the tree until a new node is reached and added to the tree. Sampled points are chosen for BACKUP.

Backup   The BACKUP operation at node $v$ updates the values in the node by back-propagating the information of the children of $v$ back to $v$. First, the values of $\overline{Q}_{R},\underline{Q}_{R}$ and $\overline{Q}_{C}$ are computed for each action using Eq. (3) and Eq. (5) for rewards and costs, respectively. Then, an RC-POMDP backup, Eq. (15), is used to update $\overline{V}_{R},\underline{V}_{R},\underline{V}_{C},\overline{V}_{C}$. The action selected to update $\underline{V}_{R}$ is used to update $k$ by back-propagating the minimum $k$ of all children. If no actions are feasible, all current policies from that node are inadmissible, and we update the reward- and cost-values using the action with the minimum $Q$ cost-value and set $k=0$.

Pruning   To keep the size of $T$ small and improve tractability, we prune nodes and node-actions that are suboptimal, using the following criteria. First, for each node $v\in B_{sam}$, if $v.\underline{V}_{C}>v.d$, no admissible policies exist from $v$, so $v$ and its subtree are pruned. Next, we prune actions as follows. Let $k(v,a)$ be the admissible horizon guarantee of the successor nodes from taking action $a$ at node $v$. Between two actions $a$ and $a^{\prime}$, if $k(v,a^{\prime})=\infty$ and $v.\overline{Q}_{R}(a)<v.\underline{Q}_{R}(a^{\prime})$, we prune the node-action $(v,a)$ (disallow taking action $a$ at node $v$), since action $a$ can never be taken by the optimal admissible policy. Next, if all node-actions $(v,a)$ are pruned, $v$ is also pruned. Finally, the node-action $(v,a)$ is pruned if any successor node from taking $a$ at $v$ is pruned. Nodes and node-actions that are pruned are not chosen during action and observation selection in SAMPLE and BACKUP.
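The pruning tests above reduce to simple comparisons on the stored bounds; the following Julia fragment sketches them for the scalar-cost case, with node and per-action fields (VC_lower, QR_upper, QR_lower, k_action) as illustrative stand-ins for the tree data structure.

```julia
# A node is prunable if even the optimistic (lower-bound) cost-value exceeds its
# admissible cost bound d: no admissible policy exists in its subtree.
node_prunable(v) = v.VC_lower > v.d

# A node-action (v, a) is prunable if some alternative a2 is admissible at every
# horizon (k = ∞) and its reward lower bound already beats a's reward upper bound.
function action_prunable(v, a, actions)
    return any(a2 -> a2 != a &&
                     v.k_action[a2] == typemax(Int) &&
                     v.QR_upper[a] < v.QR_lower[a2], actions)
end
```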

Proposition 3.

PRUNE only removes sub-optimal policies.

Termination Condition

ARCS terminates when two conditions are met: (i) it finds an admissible policy, i.e., $v_{0}.k=\infty$, which is when all leaf nodes $v_{leaf}$ reachable under the policy satisfy Eq. (20), and (ii) it finds an $\epsilon$-optimal policy, i.e., the gap criterion at the root is satisfied: $v_{0}.\overline{V}_{R}-v_{0}.\underline{V}_{R}\leq\epsilon$.
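In code, the two termination tests at the root node reduce to a single boolean check; a short Julia sketch (field names mirror the node tuple above and are chosen by us):

```julia
# Terminate when the root policy is provably admissible (k = ∞) and the
# reward-value gap at the root is within the target error ϵ.
terminated(v0, ϵ) = v0.k == typemax(Int) && (v0.VR_upper - v0.VR_lower) <= ϵ
```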

Remark 3.

ARCS can be modified to work in an anytime fashion given a time limit, and to output the best computed policy and its admissible horizon guarantee $v_{0}.k$.

5.1 Algorithm Analysis

Here, we analyze the theoretical properties of ARCS.

Lemma 2 (Bound Validity).

Given an RC-POMDP with admissibility constraint $k=\infty$, let $T$ be the policy tree after some iterations of ARCS. Let $V_{R}^{*}$ be the reward-value of an optimal admissible policy. At every node $v$ with $\bar{b}=(b,d)$ and admissible horizon guarantee $v.k=\infty$, it holds that:
$v.\underline{V}_{R}\leq V_{R}^{*}(\bar{b})\leq v.\overline{V}_{R}$    and    $v.\overline{V}_{C}\leq d$.

Theorem 4 (Soundness).

Given an RC-POMDP with admissibility constraint $k=\infty$ and a target error $\epsilon$, if ARCS terminates with a solution, the returned policy is admissible and $\epsilon$-optimal.

ARCS is not complete: it may not terminate, due to its conservative computation of the admissible horizon and the need to search infinitely deep to find admissible policies for some problems. However, ARCS can find $\epsilon$-optimal admissible policies for many problems, such as the ones in our evaluation. These are problems where a finite depth is sufficient to establish admissibility even with this conservatism. We leave the analysis of such classes of RC-POMDPs to future work.

6 Experimental Evaluation

To the best of our knowledge, this work is the first to propose and solve RC-POMDPs, and thus there are no existing algorithms to compare to directly. The purpose of our evaluation is to (i) empirically compare the behavior of policies computed for RC-POMDPs with those computed for C-POMDPs, and (ii) evaluate the performance of our proposed algorithm for RC-POMDPs. To this end, we consider the following offline algorithms to compare against ARCS (our code is open sourced at https://github.com/CU-ADCL/RC-PBVI.jl):

  • CGCP Walraven and Spaan [2018]: Algorithm that computes near-optimal policies for C-POMDPs using a primal-dual approach.

  • CGCP-CL: Closed-loop CGCP with updates on belief and admissible cost at each time step.

  • Exp-Gradient Kalagarla et al. [2022]: Algorithm that computes mixed policies for C-POMDPs using a no-regret, primal-dual learning approach based on the Exponentiated Gradient method.

  • CPBVI Kim et al. [2011]: Approximate dynamic programming that uses admissible cost as a heuristic.

  • CPBVI-D: We modify CPBVI to compute deterministic policies to evaluate its efficacy for RC-POMDPs.

Since the purpose of our comparison between RC-POMDPs and C-POMDPs is mainly with regard to constraints, we do not compare to online C-POMDP algorithms such as CC-POMCP Lee et al. [2018] which can handle larger problems but do not have anytime guarantees on constraint satisfaction.

We consider the following environments: (i) CE: the counterexample in Figure 1, (ii) C-Tiger: a constrained version of the Tiger POMDP Kaelbling et al. [1998], (iii) CRS: Constrained RockSample Lee et al. [2018], and (iv) Tunnels: a scaled version of Example 1, shown in Figure 2. Details on each problem, the experimental setup, and algorithm implementations are in the Appendix. For all algorithms except CGCP-CL, solve time is limited to 300 seconds and online action selection to 0.05 seconds. For CGCP-CL, 300 seconds was given to re-compute each action. We report the mean discounted cumulative reward and cost, and the constraint violation rate, in Table 1. The constraint violation rate is the fraction of trials in which $d(h_{t})$ becomes negative, which means Eq. (11) is violated.

Table 1: Results for benchmarks. We report the mean for each metric. We bold the best violation rates, the highest reward among methods with violation rate greater than 0, and the highest reward among methods with 0 violation rate. Standard error of the mean and problem parameters can be found in the appendix.

Env.                     Algorithm      Violation Rate     Reward              Cost
CE ($\hat{c}=5$)         CGCP           0.51               $\mathbf{12.00}$    5.19
                         CGCP-CL        $\mathbf{0.00}$    6.12                3.25
                         Exp-Gradient   0.49               11.87               4.98
                         CPBVI          $\mathbf{0.00}$    8.39                4.38
                         CPBVI-D        $\mathbf{0.00}$    6.10                3.54
                         Ours           $\mathbf{0.00}$    $\mathbf{10.00}$    5.00
C-Tiger ($\hat{c}=3$)    CGCP           0.75               -1.69               3.00
                         CGCP-CL        0.14               -2.98               2.93
                         Exp-Gradient   1.00               $\mathbf{1.81}$     3.22
                         CPBVI          0.15               -11.11              2.58
                         CPBVI-D        0.09               -9.49               2.76
                         Ours           $\mathbf{0.00}$    $\mathbf{-5.75}$    2.98
CRS(4,4) ($\hat{c}=1$)   CGCP           0.51               $\mathbf{10.43}$    0.51
                         CGCP-CL        0.78               1.68                0.72
                         Exp-Gradient   0.30               10.38               0.92
                         CPBVI          $\mathbf{0.00}$    -0.40               0.52
                         CPBVI-D        $\mathbf{0.00}$    0.64                0.47
                         Ours           $\mathbf{0.00}$    $\mathbf{6.52}$     0.52
CRS(5,7) ($\hat{c}=1$)   CGCP           0.41               $\mathbf{11.98}$    1.00
                         CGCP-CL        0.18               9.64                0.99
                         Exp-Gradient   0.30               11.90               1.31
                         CPBVI          $\mathbf{0.00}$    0.00                0.00
                         CPBVI-D        $\mathbf{0.00}$    0.00                0.00
                         Ours           $\mathbf{0.00}$    $\mathbf{11.77}$    0.95
CRS(7,8) ($\hat{c}=1$)   CGCP           0.36               10.78               0.945
                         CGCP-CL        0.20               $\mathbf{11.17}$    0.931
                         Exp-Gradient   0.32               10.03               1.15
                         CPBVI          $\mathbf{0.00}$    0.00                0.00
                         CPBVI-D        $\mathbf{0.00}$    0.00                0.00
                         Ours           $\mathbf{0.00}$    $\mathbf{6.61}$     0.960
Tunnels ($\hat{c}=1$)    CGCP           0.50               1.61                1.01
                         CGCP-CL        0.31               1.22                0.68
                         Exp-Gradient   0.48               1.35                0.82
                         CPBVI          0.90               $\mathbf{1.92}$     1.62
                         CPBVI-D        0.89               $\mathbf{1.92}$     1.57
                         Ours           $\mathbf{0.00}$    $\mathbf{1.03}$     0.44
Figure 2: Tunnels. There is a cost of 1 for rock traversal (red regions) and 0.5 for backtracking. Trajectories from CGCP (blue) and ARCS (green) are displayed, with opacity approximately proportional to the frequency of trajectories.

In all environments, ARCS found admissible policies ($k=\infty$). In contrast, CGCP, Exp-Gradient, CPBVI and CPBVI-D only guarantee an admissible horizon of $k=1$, since the C-POMDP constraint is only imposed at the initial belief. CGCP-CL may have a closed-loop admissible horizon greater than 1, but does not provide guarantees, as indicated by its violation rate.

The benchmarking results show that the policies computed by ARCS generally achieve cumulative reward competitive with policies computed for C-POMDPs, without any constraint violations and thus without the pathological behavior. ARCS also generally performs better in all metrics than CPBVI and CPBVI-D, both of which could not search the problem space sufficiently to find good solutions in the larger RC-POMDPs.

Although the C-POMDP policies generally satisfy the C-POMDP expected cost constraints, the prevalence of high violation rates of C-POMDP policies across the environments strongly suggests that stochastic self-destruction in C-POMDPs is not an exceptional phenomenon, but intrinsic to the C-POMDP problem formulation. This behavior is illustrated in the Tunnels problem, shown in Figure 2. CGCP (in blue) decides to traverse tunnel A 51% of the time even when it observes that A is rocky, and traverses tunnel B 49% of the time. In contrast, ARCS never traverses tunnel A, since such a policy is inadmissible. Instead, it traverses B or C depending on the observation of rocks in tunnel B, to maximize rewards while remaining admissible.

Finally, the closed-loop inconsistency of C-POMDP policies is evident when comparing open-loop CGCP with closed-loop CGCP-CL. In most cases (all except CRS(7,8)), the cumulative reward decreases, sometimes drastically, when going from CGCP to CGCP-CL. The violation rate also decreases, but not to 0, suggesting that planning with C-POMDPs instead of RC-POMDPs can lead to myopic behavior that cannot be addressed by re-planning. As seen in CE, CRS(4,4) and CRS(5,7), CGCP-CL attains lower reward than ARCS while still violating constraints. Therefore, even for closed-loop planning, RC-POMDPs can be more advantageous than C-POMDPs.

Unconstrained POMDP problems

Table 2: Results for the computed policy under-approximations (lower bounds for reward values and upper bounds for cost values); best reward values in bold. SARSOP, as an unconstrained POMDP algorithm, only considers the reward value.

Env. (time limit)    Algorithm          Reward             Cost
CE (300s)            SARSOP (POMDP)     $\mathbf{12.0}$    -
                     CGCP (C-POMDP)     $\mathbf{12.0}$    5.0
                     Ours (RC-POMDP)    $\mathbf{12.0}$    5.0
C-Tiger (300s)       SARSOP (POMDP)     $\mathbf{1.93}$    -
                     CGCP (C-POMDP)     1.90               3.2
                     Ours (RC-POMDP)    -1.4               3.2
Tunnels (300s)       SARSOP (POMDP)     $\mathbf{1.92}$    -
                     CGCP (C-POMDP)     $\mathbf{1.92}$    1.6
                     Ours (RC-POMDP)    $\mathbf{1.92}$    1.6
CRS(4,4) (300s)      SARSOP (POMDP)     $\mathbf{16.9}$    -
                     CGCP (C-POMDP)     $\mathbf{16.9}$    2.4
                     Ours (RC-POMDP)    $\mathbf{16.9}$    2.2
CRS(5,7) (300s)      SARSOP (POMDP)     $\mathbf{23.9}$    -
                     CGCP (C-POMDP)     14.8               3.6
                     Ours (RC-POMDP)    14.9               2.1
CRS(5,7) (1000s)     SARSOP (POMDP)     $\mathbf{24.0}$    -
                     CGCP (C-POMDP)     24.0               4.5
                     Ours (RC-POMDP)    15.3               2.2

Next, we evaluate how well the RC-POMDP framework and our proposed algorithm perform on problems whose constraints are relaxed so much that they trivially become unconstrained POMDPs. We evaluate ARCS (RC-POMDP), CGCP (C-POMDP algorithm) and SARSOP (unconstrained POMDP algorithm) on the same benchmark problems with a very high constraint threshold $\hat{c}=1000$.

For these problems, all policies are admissible, and ARCS is guaranteed to asymptotically converge to the optimal solution. However, since ARCS needs to keep track of admissible cost values, we utilize a policy tree representation. This representation is less efficient than the $\alpha$-vector policy representation used in SARSOP and CGCP, which allows value improvements at a belief state to directly improve values at other belief states.

Table 2 reports the lower bound on reward and the upper bound on cost computed by each algorithm with a time limit of 300s. As seen in Table 2, our algorithm performs similarly to CGCP and the unconstrained POMDP algorithm SARSOP for most smaller problems. The C-Tiger problem benefits greatly from the $\alpha$-vector representation, since the optimal policy repeatedly cycles among a small set of belief states (which our algorithm considers different augmented belief-admissible cost states). For slightly larger problems (CRS(5,7)), the efficient $\alpha$-vector representation and other heuristics of SARSOP (which CGCP takes advantage of, since it repeatedly calls SARSOP) enable much faster convergence than the policy tree-based method of our approach. Nonetheless, as time is increased, our algorithm slowly improves its values.

Overall, ARCS exhibits competitive performance in problems with reduced or no constraints, albeit with less scalability. However, the main advantage of the RC-POMDP formulation is in the careful treatment of constraints to mitigate the pathological behaviors of the C-POMDP formulation, making RC-POMDPs particularly valuable for problems involving nontrivial constraints.

7 Conclusion and Future Work

We introduce and analyze the stochastic self-destruction behavior of C-POMDP policies, and show that C-POMDPs may not exhibit optimal substructure. We propose a new formulation, RC-POMDPs, and present an algorithm for RC-POMDPs. Results show that C-POMDP policies exhibit unintuitive behavior not present in RC-POMDP policies, and that our algorithm effectively computes policies for RC-POMDPs. We believe RC-POMDPs are an alternate formulation that can be more desirable for some applications.

A future direction is to explore models that exhibit strong cases of stochastic self-destruction, and develop more metrics that signal stochastic self-destruction. Another future direction is to analyze classes (or conditions) of RC-POMDPs that are approximable and to design algorithms that converge for such cases.

Further, our offline policy tree search algorithm can benefit from better policy search heuristics and more efficient policy representations (e.g. finite state controllers). We plan to extend this work by exploring other approaches, such as searching for finite state controllers directly [Wray and Czuprynski, 2022] and online tree search approximations [Lee et al., 2018].

Finally, this work shows that RC-POMDPs can provide more desirable policies than C-POMDPs, but the cost constraints remain on expectation. For some applications, probabilistic or risk measure constraints may be more desirable than expectation constraints. These formulations also benefit from the recursive constraints that we propose for RC-POMDPs.

Acknowledgements.
This work was supported by Strategic University Research Partnership (SURP) grants from the NASA Jet Propulsion Laboratory (JPL) (RSA 1688009 and 1704147). Part of this research was carried out at JPL, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).

References

  • Chen et al. [2020] Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. Trust-aware decision making for human-robot collaboration: Model learning and planning. ACM Transactions on Human-Robot Interaction (THRI), 9(2):1–23, 2020.
  • Chong et al. [2012] Edwin K.P. Chong, Scott A. Miller, and Jason Adaska. On bellman’s principle with inequality constraints. Operations Research Letters, 40(2):108–113, 2012.
  • Cormen et al. [2009] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, 3rd Edition. MIT Press, 2009.
  • de Nijs et al. [2021] Frits de Nijs, Erwin Walraven, Mathijs De Weerdt, and Matthijs Spaan. Constrained multiagent markov decision processes: A taxonomy of problems and algorithms. Journal of Artificial Intelligence Research, 70:955–1001, 2021.
  • Egorov et al. [2017] Maxim Egorov, Zachary N Sunberg, Edward Balaban, Tim A Wheeler, Jayesh K Gupta, and Mykel J Kochenderfer. Pomdps. jl: A framework for sequential decision making under uncertainty. The Journal of Machine Learning Research, 18(1):831–835, 2017.
  • Hauskrecht [1997] Milos Hauskrecht. Planning and control in stochastic domains with imperfect information. PhD thesis, Massachusetts Institute of Technology, 1997.
  • Hauskrecht [2000] Milos Hauskrecht. Value-function approximations for partially observable markov decision processes. Journal of artificial intelligence research, 13:33–94, 2000.
  • Haviv [1996] Moshe Haviv. On constrained markov decision processes. Operations Research Letters, 19(1):25–28, 1996.
  • Isom et al. [2008] Joshua D. Isom, Sean P. Meyn, and Richard D. Braatz. Piecewise linear dynamic programming for constrained POMDPs. In AAAI Conference on Artificial Intelligence, pages 291–296, 2008.
  • Jamgochian et al. [2023] Arec Jamgochian, Anthony Corso, and Mykel J. Kochenderfer. Online planning for constrained pomdps with continuous spaces through dual ascent. Proceedings of the International Conference on Automated Planning and Scheduling, 33(1):198–202, Jul. 2023.
  • Kaelbling et al. [1998] Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1):99–134, 1998.
  • Kalagarla et al. [2022] Krishna C. Kalagarla, Kartik Dhruva, Dongming Shen, Rahul Jain, Ashutosh Nayyar, and Pierluigi Nuzzo. Optimal control of partially observable markov decision processes with finite linear temporal logic constraints. In Conference on Uncertainty in Artificial Intelligence, volume 180, pages 949–958. PMLR, 2022.
  • Kim et al. [2011] Dongho Kim, Jaesong Lee, Kee-Eung Kim, and Pascal Poupart. Point-based value iteration for constrained POMDPs. In International Joint Conference on Artificial Intelligence, page 1968–1974. AAAI Press, 2011.
  • Kurniawati et al. [2008] Hanna Kurniawati, David Hsu, and Wee Sun Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proceedings of Robotics: Science and Systems IV, Zurich, Switzerland, June 2008.
  • Lauri et al. [2023] Mikko Lauri, David Hsu, and Joni Pajarinen. Partially observable Markov decision processes in robotics: A survey. IEEE Transactions on Robotics, 39(1):21–40, 2023.
  • Lee et al. [2018] Jongmin Lee, Geon-hyeong Kim, Pascal Poupart, and Kee-Eung Kim. Monte-Carlo tree search for constrained POMDPs. In Advances in Neural Information Processing Systems, pages 7934–7943. Curran Associates, Inc., 2018.
  • Papakonstantinou and Shinozuka [2014] K.G. Papakonstantinou and M. Shinozuka. Planning structural inspection and maintenance policies via dynamic programming and Markov processes. Part II: POMDP implementation. Reliability Engineering & System Safety, 130:214–224, 2014.
  • Pendleton et al. [2017] Scott Drew Pendleton, Hans Andersen, Xinxin Du, Xiaotong Shen, Malika Meghjani, You Hong Eng, Daniela Rus, and Marcelo H. Ang. Perception, planning, control, and coordination for autonomous vehicles. Machines, 5(1), 2017.
  • Pineau et al. [2006] Joelle Pineau, Geoffrey Gordon, and Sebastian Thrun. Anytime point-based approximations for large POMDPs. Journal of Artificial Intelligence Research, 27:335–380, 2006.
  • Piunovskiy and Mao [2000] A.B. Piunovskiy and X. Mao. Constrained Markovian decision processes: the dynamic programming approach. Operations Research Letters, 27(3):119–126, 2000.
  • Poupart et al. [2015] Pascal Poupart, Aarti Malhotra, Pei Pei, Kee-Eung Kim, Bongseok Goh, and Michael Bowling. Approximate linear programming for constrained partially observable Markov decision processes. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1), March 2015.
  • Shani et al. [2013] Guy Shani, Joelle Pineau, and Robert Kaplow. A survey of point-based POMDP solvers. Autonomous Agents and Multi-Agent Systems, 27:1–51, 2013.
  • Smith and Simmons [2004] Trey Smith and Reid Simmons. Heuristic search value iteration for POMDPs. In Conference on Uncertainty in Artificial Intelligence, pages 520–527. AUAI Press, 2004.
  • Sondik [1978] Edward J. Sondik. The optimal control of partially observable Markov processes over the infinite horizon: Discounted costs. Operations Research, 26(2):282–304, 1978.
  • Spaan and Vlassis [2005] Matthijs T. J. Spaan and Nikos Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24(1):195–220, August 2005.
  • Walraven and Spaan [2018] Erwin Walraven and Matthijs T. J. Spaan. Column generation algorithms for constrained POMDPs. Journal of Artificial Intelligence Research, 62(1):489–533, 2018.
  • Wang et al. [2022] Yizheng Wang, Markus Zechner, John Michael Mern, Mykel J. Kochenderfer, and Jef Karel Caers. A sequential decision-making framework with uncertainty quantification for groundwater management. Advances in Water Resources, 166:104266, 2022.
  • Williams [1990] H.P. Williams. Model Building in Mathematical Programming, chapter 3.2.3. Wiley, 5th edition, 1990.
  • Wray and Czuprynski [2022] Kyle Hollins Wray and Kenneth Czuprynski. Scalable gradient ascent for controllers in constrained POMDPs. In 2022 International Conference on Robotics and Automation (ICRA), pages 9085–9091, 2022.

Appendix A Proofs of Theorem 1

Proof.

Consider an optimal (stochastic) policy $\pi^{*}$ and a history $h_{t}$ reachable from $b_{0}$ by $\pi^{*}$ at which $\pi^{*}$ is randomized (at least two actions have probabilities in $(0,1)$). Then, $\pi^{*}$ can be equivalently represented as a mixture of $n\leq|A|$ policies, each of which deterministically selects a unique action at $h_{t}$ but is identical to the (stochastic) policy $\pi^{*}$ everywhere else. That is, $\pi^{*}$ is a mixed policy over the set $\vec{\pi}^{*}=\{\pi_{1},\cdots,\pi_{n}\}$ of $n$ policies that take a deterministic action at $h_{t}$ and satisfy $\pi_{i}=\pi_{j}=\pi^{*}$ at every other history. Let $w_{i}$ denote the non-zero probability of choosing $\pi_{i}$ at $h_{t}$. Then, $V^{\vec{\pi}^{*}}_{R}(h_{t})=\sum_{i=1}^{n}w_{i}V^{\pi_{i}}_{R}(b_{t})$ and $V^{\vec{\pi}^{*}}_{C}(h_{t})=\sum_{i=1}^{n}w_{i}V^{\pi_{i}}_{C}(b_{t})$.

We show that all policies $\pi_{i}\in\vec{\pi}^{*}$ must be admissible in order for $\vec{\pi}^{*}$ to be admissible. Suppose there exists an inadmissible policy $\pi_{j}\in\vec{\pi}^{*}$, i.e., there exists a history $h_{f}$, $f\geq t$, such that Eq. (11) or (14) is violated by $\pi_{j}$.

For $f>t$, $h_{f}$ is only reachable by taking action $\pi_{j}(h_{t})$ at $h_{t}$, since each policy in $\vec{\pi}^{*}$ takes a different action at $h_{t}$, and so their reachable history spaces are disjoint. Only the inadmissible $\pi_{j}$ is executed from $h_{f}$ when it is reached probabilistically, and $W(h_{t})+\gamma^{t}Q^{\pi_{j}}_{C}(b_{f},\pi_{j}(h_{f}))\not\leq\hat{c}$. This means that Eq. (11) is violated at depth $f$, so $\vec{\pi}^{*}$ is inadmissible, which is a contradiction.

Similarly, for $f=t$, if $V_{C}^{\pi_{j}}(b_{t})\not\leq d(h_{t})$ (Eq. (14) is violated), then

\exists h_{t+1}\text{ s.t. }V_{C}^{\pi_{j}}(b_{t+1})\not\leq d(h_{t+1}),

since

[\forall h_{t+1},\;V_{C}^{\pi_{j}}(b_{t+1})\leq d(h_{t+1})]\implies V_{C}^{\pi_{j}}(b_{t})\leq d(h_{t}).

This can be seen by rearranging Eq. (13) and

V_{C}^{\pi_{j}}(b_{t})=C(b_{t},\pi_{j}(h_{t}))+\gamma\mathbb{E}[V_{C}^{\pi_{j}}(b_{t+1})].

That is, Eq. (14) is violated at time step $t+1$, so $\vec{\pi}^{*}$ is inadmissible, which is again a contradiction. Hence, each $\pi_{i}\in\vec{\pi}^{*}$ is admissible.

Since $\max_{i}V^{\pi_{i}}_{R}(b_{t})\geq\sum_{i=1}^{n}w_{i}V^{\pi_{i}}_{R}(b_{t})$, determinism at $h_{t}$ is sufficient: we obtain a new $\pi^{*}=\operatorname*{arg\,max}_{\pi_{i}}V^{\pi_{i}}_{R}(b_{t})$ with one less randomization. Repeating the same process for all histories reachable from $b_{0}$ at which $\pi^{*}$ is randomized yields a deterministic optimal policy. ∎

Appendix B Proof of Theorem 2

Proof.

Let $\mathbf{a}^{\prime}$ denote the action set computed during the Bellman backup operation on $V^{\pi^{*}}(\bar{b}_{t})$ to obtain $V^{\pi^{\prime}}(\bar{b}_{t})$:

\mathbf{a}^{\prime}=\operatorname*{arg\,max}_{a\in A}\Big[Q_{R}^{\pi^{*}}(\bar{b}_{t},a)\mid Q^{\pi^{*}}_{C}(\bar{b}_{t},a)\leq d(h_{t})\Big].

We first show that $V_{R}^{\pi^{*}}(\bar{b}_{t})=V_{R}^{\pi^{\prime}}(\bar{b}_{t})$ for all $\bar{b}_{t}$, i.e., optimality is preserved after a Bellman backup operation.

Consider any $\bar{b}_{t}$ and suppose $\mathbf{a}^{\prime}\neq\emptyset$. Then

V^{\pi^{\prime}}(\bar{b}_{t})=\max_{a\in A}\Big[Q_{R}^{\pi^{*}}(\bar{b}_{t},a)\mid Q^{\pi^{*}}_{C}(\bar{b}_{t},a)\leq d(h_{t})\Big].

By optimality of $\pi^{*}$, if $V^{\pi^{\prime}}(\bar{b}_{t})>V^{\pi^{*}}(\bar{b}_{t})$, then $\pi^{*}$ would not be optimal, which is a contradiction; hence $V^{\pi^{\prime}}(\bar{b}_{t})\leq V^{\pi^{*}}(\bar{b}_{t})$. However, since the action taken by the admissible policy $\pi^{*}$ at $\bar{b}_{t}$ is itself in the feasible set, we also have

V^{\pi^{\prime}}(\bar{b}_{t})\geq Q_{R}^{\pi^{*}}(\bar{b}_{t},\pi^{*}(h_{t}))=V_{R}^{\pi^{*}}(\bar{b}_{t})\implies V^{\pi^{\prime}}(\bar{b}_{t})=V_{R}^{\pi^{*}}(\bar{b}_{t}).

Next, if $\mathbf{a}^{\prime}=\emptyset$, then from Eq. (15),

V_{R}^{\pi^{\prime}}(\bar{b}_{t})=V_{R}^{\pi^{*}}(\bar{b}_{t}).

Therefore, $V_{R}^{\pi^{*}}(\bar{b}_{t})=V_{R}^{\pi^{\prime}}(\bar{b}_{t})$ for all $\bar{b}_{t}$.

Now, we show that $V_{C}^{\pi^{\prime}}(\bar{b}_{t})\leq d(h_{t})$ for all $\bar{b}_{t}\in\textsc{Reach}^{\pi^{\prime}}(\bar{b}_{0})$, i.e., the admissibility of the optimal policy is preserved after a Bellman backup operation.

Let $\bar{b}_{t}\in\textsc{Reach}^{\pi^{\prime}}(\bar{b}_{0})$, $t\in\mathbb{N}$, be the first augmented belief state from $\bar{b}_{0}$ in a belief trajectory such that $V_{C}^{\pi^{\prime}}(\bar{b}_{t})$ does not satisfy Eq. (14), i.e., $\pi^{\prime}$ satisfies Eq. (14) for all $\tau<t$ and $V_{C}^{\pi^{\prime}}(\bar{b}_{t})\not\leq d(h_{t})$. Then

V_{C}^{\pi^{\prime}}(\bar{b}_{t})\not\leq d(h_{t})\implies\mathbf{a}^{\prime}=\emptyset.

Therefore,

V_{R}^{\pi^{*}}(\bar{b}_{t})=V_{R}^{\pi^{\prime}}(\bar{b}_{t}),
V_{C}^{\pi^{\prime}}(\bar{b}_{t})=V_{C}^{\pi^{*}}(\bar{b}_{t})=(\infty,\ldots,\infty).

Consider the augmented belief state $\bar{b}_{t-1}$ that transitions to $\bar{b}_{t}$ under some action $a^{\prime}_{t-1}=\pi^{\prime}(\bar{b}_{t-1})$ and observation $o_{t-1}$. Then

Q_{C}^{\pi^{*}}(\bar{b}_{t-1},a^{\prime}_{t-1})=C(b_{t-1},a^{\prime}_{t-1})+\gamma\mathbb{E}[V_{C}^{\pi^{*}}(\bar{b}^{\prime})]=C(b_{t-1},a^{\prime}_{t-1})+(\infty,\ldots,\infty)=(\infty,\ldots,\infty)>d(h_{t-1}).

Note that $d(h_{t-1})\leq\frac{\hat{c}}{\gamma^{t-1}}$ is finite for finite $t-1$. So $a^{\prime}_{t-1}\notin\operatorname*{arg\,max}_{a\in A}\big[Q_{R}^{\pi^{*}}(\bar{b}_{t-1},a)\mid Q^{\pi^{*}}_{C}(\bar{b}_{t-1},a)\leq d(h_{t-1})\big]$, and therefore $\bar{b}_{t}$ cannot be reachable under $\pi^{\prime}$, which is a contradiction. By induction, for all $\bar{b}_{t}\in\textsc{Reach}^{\pi^{\prime}}(\bar{b}_{0})$,

V_{C}^{\pi^{\prime}}(\bar{b}_{t})\leq d(h_{t}). ∎

B.1 Proof of Theorem 3

Proof.

Suppose that $\pi^{0}$ is well behaved.

Denote $B_{inadmiss}=\{\bar{b}\;|\;\Phi(\bar{b})=\emptyset\}$. Then, for all $\bar{b}=(b,d)\in B_{inadmiss}$, $V_{C}^{\pi^{0}}(\bar{b})\not\leq d$. The lack of an admissible policy implies that there is no action from $\bar{b}$ that leads to admissibility, i.e., for every $a\in A$ there exists at least one successor augmented belief state $\bar{b}^{\prime}$ such that $\Phi(\bar{b}^{\prime})=\emptyset$. Therefore,

\mathbf{a}=\operatorname*{arg\,max}_{a\in A}\Big[Q_{R}^{\pi}(\bar{b},a)\mid Q^{\pi}_{C}(\bar{b},a)\leq d\Big]=\emptyset,
\mathbb{B}[V^{\pi^{0}}](\bar{b})=(V_{R}^{\pi^{0}}(\bar{b}),(\infty,\ldots,\infty)).

Therefore, we have that for all $\bar{b}\in B_{inadmiss}$ and all $n$,

\mathbb{B}^{n}[V^{\pi^{0}}](\bar{b})=(V_{R}^{\pi^{0}}(\bar{b}),(\infty,\ldots,\infty)).

Next, denote $B_{admiss}=\{\bar{b}\;|\;\Phi(\bar{b})\neq\emptyset\}$. For any $\bar{b}=(b,d)\in B_{admiss}$, $V_{R}^{\pi^{0}}(\bar{b})\in\mathbb{R}$ and $V_{C}^{\pi^{0}}(\bar{b})\leq d$. There must exist at least one action that is part of an admissible policy, i.e., $\mathbf{a}\neq\emptyset$. Therefore, at $\bar{b}$,

\mathbb{B}[V^{\pi}](\bar{b})=\big(Q_{R}^{\pi}(\bar{b},a),Q_{C}^{\pi}(\bar{b},a)\big),\quad a\in\mathbf{a}.

Since any $a\notin\mathbf{a}$ is inadmissible and will not be selected during the Bellman backup operation, we can exclude such actions from the action set without loss of generality. Denote by $A^{\prime}(\bar{b})=\mathbf{a}$ the set of actions that may be selected at $\bar{b}$. Then, for all $\bar{b}=(b,d)\in B_{admiss}$,

\mathbf{a}^{\prime}=\operatorname*{arg\,max}_{a\in A^{\prime}(\bar{b})}\Big[Q_{R}^{\pi}(\bar{b},a)\Big],
\mathbb{B}[V^{\pi^{0}}](\bar{b})=(Q_{R}^{\pi^{0}}(\bar{b},a),Q_{C}^{\pi^{0}}(\bar{b},a)),\quad a\in\mathbf{a}^{\prime}.

Consider $V^{\pi^{1}}=\mathbb{B}[V^{\pi^{0}}](\bar{b})$. We have that

V_{R}^{\pi^{1}}(\bar{b})\geq V_{R}^{\pi^{0}}(\bar{b})\text{ and }V_{C}^{\pi^{1}}(\bar{b})\leq d.

By induction, $V_{C}^{\pi^{n}}(\bar{b})\leq d$ for all $\bar{b}\in B_{admiss}$, so the policy $\pi^{n}$ remains admissible after applying $\mathbb{B}$. Therefore, we can write

\mathbb{B}_{R}[V_{R}^{\pi}](\bar{b})=\max_{a\in A^{\prime}(\bar{b})}\Big[Q_{R}^{\pi}(\bar{b},a)\Big],\quad\forall\bar{b}\in B_{admiss}.

Note that this is the standard Bellman operator for an unconstrained discounted-sum POMDP over the set of admissible augmented belief states. From the results on the Bellman operator for a POMDP [Hauskrecht, 1997], $\mathbb{B}_{R}$ is a contraction mapping with a single, unique fixed point, i.e., for an optimal $\pi^{*}$, $\mathbb{B}_{R}[V_{R}^{\pi^{*}}](\bar{b})=V_{R}^{\pi^{*}}(\bar{b})$ for all $\bar{b}\in B_{admiss}$. Since $V_{R}^{\pi^{n}}(\bar{b})=V_{R}^{\pi^{0}}(\bar{b})$ for all $\bar{b}\in B_{inadmiss}$ and all $n$, we have that

\mathbb{B}^{n}[(V_{R}^{\pi^{0}},V_{C}^{\pi^{0}})]\rightarrow(V_{R}^{\pi^{*}},V_{C}^{\pi^{n}})\text{ as }n\rightarrow\infty. ∎

Appendix C ARCS Pseudocode

Algorithm 2 Sampling of nodes for backup.

Global variables: ,T,Γcmin\mathcal{M},T,\Gamma_{c_{min}}
Let γ=.P.γ\gamma=\mathcal{M}.P.\gamma
SAMPLE(ϵ\epsilon)

1:  LT.v0.V¯RL\leftarrow T.v_{0}.\underline{V}_{R}
2:  UL+ϵU\leftarrow L+\epsilon
3:  if rand()<0.5rand()<0.5 then
4:     SampleHeu(T.v0,L,U,ϵt,γ,1)T.v_{0},L,U,\epsilon_{t},\gamma,1)
5:  else
6:     SampleRandom(T.v0,γ)T.v_{0},\gamma)
7:  end if
8:  return sampled nodes.

SampleHeu(v,L,U,ϵ,γ,tv,L,U,\epsilon,\gamma,t).

1:  Let V^\hat{V} be the predicted value of v.VRv.V^{*}_{R}
2:  if V^L\hat{V}\leq L and v.V¯Rmax{U,V¯R(v.b)+ϵγt}v.\overline{V}_{R}\leq max\{U,\underline{V}_{R}(v.b)+\epsilon\gamma^{-t}\} then
3:     return
4:  else
5:     Q¯maxaQ¯R(v.b,a)\underline{Q}\leftarrow\max_{a}\underline{Q}_{R}(v.b,a)
6:     Lmax{L,Q¯}L^{\prime}\leftarrow\max\{L,\underline{Q}\}.
7:     Umax{U,Q¯+γtϵ}U^{\prime}\leftarrow\max\{U,\underline{Q}+\gamma^{-t}\epsilon\}
8:     aargmaxa{v.Q¯R(a)v.Q¯C(a)v.d}a^{\prime}\leftarrow\operatorname*{arg\,max}_{a}\{v.\overline{Q}_{R}(a)\mid v.\underline{Q}_{C}(a)\leq v.d\}
9:     if a=a^{\prime}=\emptyset then
10:        return.
11:     end if
12:     oargmaxo[p(o|b,a)(v.V¯Rv.V¯Rϵγt)]o^{\prime}\leftarrow\operatorname*{arg\,max}_{o}[p(o|b,a^{\prime})(v^{\prime}.\overline{V}_{R}-v^{\prime}.\underline{V}_{R}-\epsilon\gamma^{-t})]
13:     Compute LtL_{t} such that L=R(v.b,a)+L^{\prime}=R(v.b,a^{\prime})+γ(p(o|b,a)Lt+oop(o|v.b,a)v.V¯R)\gamma(p(o^{\prime}|b,a^{\prime})L_{t}+\sum_{o\neq o^{\prime}}p(o|v.b,a^{\prime})v^{\prime}.\underline{V}_{R})
14:     Compute UtU_{t} such that U=R(v.b,a)+U^{\prime}=R(v.b,a^{\prime})+γ(p(o|b,a)Ut+oop(o|v.b,a)v.V¯R\gamma(p(o^{\prime}|b,a^{\prime})U_{t}+\sum_{o\neq o^{\prime}}p(o|v.b,a^{\prime})v^{\prime}.\overline{V}_{R}
15:     vSuccessorNode(v,a,o)v^{\prime}\leftarrow\texttt{SuccessorNode}(v,a^{\prime},o^{\prime}).
16:     TvT\leftarrow v^{\prime}
17:     SampleHeu(v,Lt,Ut,ϵ,t+1v^{\prime},L_{t},U_{t},\epsilon,t+1)
18:  end if

SampleRandom(v,γv,\gamma)

1:  aranda{aA}a\leftarrow rand_{a}\{a\in A\}
2:  orando{oO}o\leftarrow rand_{o}\{o\in O\}
3:  vSuccessorNode(v,a,o)v^{\prime}\leftarrow\texttt{SuccessorNode}(v,a,o).
4:  TvT\leftarrow v^{\prime}
5:  if new node is added to TT then
6:     return
7:  else
8:     SampleRandom(v,γv^{\prime},\gamma)
9:  end if
Algorithm 3 Compute Successor Node.

Global variables: ,T,Γcmin\mathcal{M},T,\Gamma_{c_{min}}
Let γ=.P.γ\gamma=\mathcal{M}.P.\gamma.
SuccessorNode(v,a,o)v,a,o)

1:  if T.child(v,a,o)TT.child(v,a,o)\notin T then
2:     bBeliefUpdate(v.b,a,o)b^{\prime}\leftarrow BeliefUpdate(v.b,a,o) using Eq. (1)
3:     d1γ(v.dC(v.b,a))d^{\prime}\leftarrow\frac{1}{\gamma}(v.d-C(v.b,a))
4:     Initialize lower bound on kk^{\prime} using Eq. (22)
5:     (αr,αc)argmin(αr,αc)ΓcminαrTb(\alpha_{r},\alpha_{c})\leftarrow\operatorname*{arg\,min}_{(\alpha_{r},\alpha_{c})\in\Gamma_{c_{min}}}\alpha_{r}^{T}b^{\prime}
6:     V¯RαrTb\underline{V}_{R}\leftarrow\alpha_{r}^{T}b^{\prime}
7:     V¯CαcTb\overline{V}_{C}\leftarrow\alpha_{c}^{T}b^{\prime}
8:     V¯RV¯R(b)\overline{V}_{R}\leftarrow\overline{V}_{R}(b^{\prime}) with Fast Informed Bound (maximizing rewards)
9:     V¯CV¯C(b)\underline{V}_{C}\leftarrow\underline{V}_{C}(b^{\prime}) with Fast Informed Bound (minimizing costs)
10:     Q¯R\overline{Q}_{R}\leftarrow\emptyset
11:     Q¯R\underline{Q}_{R}\leftarrow\emptyset
12:     Q¯C\overline{Q}_{C}\leftarrow\emptyset
13:     Q¯C\underline{Q}_{C}\leftarrow\emptyset
14:     $v^{\prime}\leftarrow(b^{\prime},d^{\prime},k^{\prime},\overline{V}_{R},\underline{V}_{R},\overline{V}_{C},\underline{V}_{C},\overline{Q}_{R},\underline{Q}_{R},\overline{Q}_{C},\underline{Q}_{C})$
15:  else
16:     vT.child(v,a,o)v^{\prime}\leftarrow T.child(v,a,o)
17:  end if
18:  return $v^{\prime}$
Algorithm 4 Perform backup at a node

Global variables: ,T,Γcmin\mathcal{M},T,\Gamma_{c_{min}}
Let γ=.P.γ\gamma=\mathcal{M}.P.\gamma
BACKUP(vv)

1:  Initialize k\vec{k} of size |A||A|
2:  for all aAa\in A do
3:     v.Q¯R(a)R(v.b,a)+γ𝔼[v.V¯R]v.\overline{Q}_{R}(a)\leftarrow R(v.b,a)+\gamma\mathbb{E}[v^{\prime}.\overline{V}_{R}]
4:     v.Q¯C(a)C(v.b,a)+γ𝔼[v.V¯C]v.\overline{Q}_{C}(a)\leftarrow C(v.b,a)+\gamma\mathbb{E}[v^{\prime}.\overline{V}_{C}]
5:     v.Q¯R(a)R(v.b,a)+γ𝔼[v.V¯R]v.\underline{Q}_{R}(a)\leftarrow R(v.b,a)+\gamma\mathbb{E}[v^{\prime}.\underline{V}_{R}]
6:     v.Q¯C(a)C(v.b,a)+γ𝔼[v.V¯C]v.\underline{Q}_{C}(a)\leftarrow C(v.b,a)+\gamma\mathbb{E}[v^{\prime}.\underline{V}_{C}]
7:     k[a]minvv.k\vec{k}[a]\leftarrow\min_{v^{\prime}}v^{\prime}.k
8:  end for
9:  aargmaxa{v.Q¯R(a)Q¯C(a)v.d}a\leftarrow\operatorname*{arg\,max}_{a}\{v.\underline{Q}_{R}(a)\mid\overline{Q}_{C}(a)\leq v.d\}
10:  if aa\neq\emptyset then
11:     v.V¯Rv.Q¯R(a)v.\underline{V}_{R}\leftarrow v.\underline{Q}_{R}(a)
12:     v.V¯Cv.Q¯C(a)v.\overline{V}_{C}\leftarrow v.\overline{Q}_{C}(a)
13:     v.kk[a]+1v.k\leftarrow\vec{k}[a]+1
14:  else
15:     aargmina{v.Q¯C(a)}a\leftarrow\operatorname*{arg\,min}_{a}\{v.\overline{Q}_{C}(a)\}
16:     v.V¯Rv.Q¯R(a)v.\underline{V}_{R}\leftarrow v.\underline{Q}_{R}(a)
17:     v.V¯Cv.Q¯C(a)v.\overline{V}_{C}\leftarrow v.\overline{Q}_{C}(a)
18:     v.k0v.k\leftarrow 0
19:  end if
20:  aargmaxa{v.Q¯R(a)Q¯C(a)v.d}a\leftarrow\operatorname*{arg\,max}_{a}\{v.\overline{Q}_{R}(a)\mid\underline{Q}_{C}(a)\leq v.d\}
21:  if aa\neq\emptyset then
22:     v.V¯Rv.Q¯R(a)v.\overline{V}_{R}\leftarrow v.\overline{Q}_{R}(a)
23:     v.V¯Cv.Q¯C(a)v.\underline{V}_{C}\leftarrow v.\underline{Q}_{C}(a)
24:  else
25:     v.V¯Rv.\overline{V}_{R}\leftarrow-\infty
26:     v.V¯Cv.\underline{V}_{C}\leftarrow\infty
27:     v.V¯Rv.\underline{V}_{R}\leftarrow-\infty
28:     v.V¯Cv.\overline{V}_{C}\leftarrow\infty
29:  end if
Algorithm 5 Prune nodes and node-actions from vv

Global variables: ,T,Γcmin\mathcal{M},T,\Gamma_{c_{min}}
PRUNE(BsamB_{sam})

1:  for all vBsamv\in B_{sam} do
2:     if v.V¯C>v.dv.\underline{V}_{C}>v.d then
3:        Prune vv
4:     end if
5:     if all node-actions (v,a)(v,a) are pruned then
6:        Prune vv
7:     end if
8:     Initialize k\vec{k} of size |A||A|.
9:     for all aAa\in A do
10:        $\vec{k}[a]\leftarrow\min_{v^{\prime}}v^{\prime}.k$
11:     end for
12:     for all a,aAa,a^{\prime}\in A do
13:        if any child of (v,a)(v,a) is pruned then
14:           Prune node-action (v,a)(v,a)
15:        end if
16:        if $\vec{k}[a^{\prime}]=\infty$ and $v.\overline{Q}_{R}(a)<v.\underline{Q}_{R}(a^{\prime})$ then
17:           Prune node-action (v,a)(v,a)
18:        end if
19:     end for
20:  end for

Appendix D Proof of Lemma 1

Proof.

Given a maximum one-step cost $C_{max}$ for $\pi_{c}^{min}$ and a non-negative admissible cost $v.d$, a lower bound on the admissible horizon can be obtained as follows. The $k$-step admissible horizon from each leaf node $v$ is the largest $k$ such that

\sum_{\tau=0}^{k-1}\gamma^{\tau}C_{max}\leq v.d.

The LHS is a finite geometric series:

\sum_{\tau=0}^{k-1}\gamma^{\tau}C_{max}=C_{max}\left(\frac{1-\gamma^{k}}{1-\gamma}\right)\leq v.d. (23)

Rearranging Eq. (23) gives $\gamma^{k}\geq 1-\frac{v.d}{C_{max}}(1-\gamma)$, and taking logarithms (noting that $\log\gamma<0$ flips the inequality) shows that the largest integer $k$ satisfying Eq. (23) is

k=\Big\lfloor\log\Big(1-\Big(\frac{v.d}{C_{max}}\Big)(1-\gamma)\Big)/\log(\gamma)\Big\rfloor.

Also, we obtain the $\infty$-admissibility condition on $\pi_{c}^{min}$ by setting $k=\infty$ in Eq. (23):

\frac{C_{max}}{1-\gamma}\leq v.d.

Next, when $v.\overline{V}_{C}^{\pi_{c}^{min}}=0\leq v.d$, since costs are non-negative, the minimum cost policy incurs 0 cost at every future belief, and hence $k=\infty$.

Finally, when the admissible cost is negative, the constraints are trivially violated, and hence $k=0$. ∎
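For illustration, a minimal sketch in Julia (variable names are ours, not from the ARCS implementation) of the admissible-horizon lower bound derived above:

    # Lower bound on the admissible horizon given the maximum one-step cost c_max of the
    # minimum-cost policy, discount γ ∈ (0, 1), and a node's admissible cost budget d.
    function admissible_horizon(c_max::Float64, γ::Float64, d::Float64)
        d < 0 && return 0                                 # negative budget: already violated
        c_max / (1 - γ) <= d && return typemax(Int)       # ∞-admissibility condition (also covers c_max = 0)
        # Largest k with c_max * (1 - γ^k) / (1 - γ) <= d
        return floor(Int, log(1 - (d / c_max) * (1 - γ)) / log(γ))
    end

    admissible_horizon(1.0, 0.95, 5.0)   # illustrative call: returns 5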

Appendix E Proof of Proposition 3

Proof.

During search, the pruning criteria prune policies according to four cases. We show that these four cases prune only sub-optimal or inadmissible policies.

  1. $v.\underline{V}_{C}>v.d$.
     It is easy to see that if $v.\underline{V}_{C}>v.d$, then no admissible policy exists from $v$, so we can prune $v$ and its subtree.

  2. At node $v$, actions $a$ and $a^{\prime}$ are compared. Specifically, $v.\overline{Q}_{R}(a)$ is compared with $v.\underline{Q}_{R}(a^{\prime})$ if $k(v,a^{\prime})=\infty$ (the policy from taking $a^{\prime}$ is admissible). The node-action $(v,a)$ is pruned if $v.\overline{Q}_{R}(a)<v.\underline{Q}_{R}(a^{\prime})$.
     There are two cases to consider: (i) $k(v,a)=\infty$ and (ii) $k(v,a)<\infty$. In case (i), $v.\overline{Q}_{R}(a)$ is a valid upper bound on the Q reward-value of taking action $a$ as a consequence of Lemma 2. In case (ii), $v.\overline{Q}_{R}(a)$ is also a valid upper bound on the Q reward-value of taking action $a$. To see this, for a node $v$ with $\bar{b}=(v.b,v.d)$, we show that $v.\overline{V}_{R}$ is an upper bound on $V_{R}^{*}(\bar{b})$. That is, even if $v.k<\infty$, we have that for an optimal policy starting from $\bar{b}=(v.b,v.d)$ with optimal reward-value $V_{R}^{*}(\bar{b})$ and cost-value $V_{C}^{*}(\bar{b})$,

     V_{R}^{*}(\bar{b})\leq v.\overline{V}_{R}\text{ and }v.\underline{V}_{C}\leq V_{C}^{*}(\bar{b}). (24)

     This can be seen by noting that the BACKUP step performs an RC-POMDP Bellman backup. If the policy is in fact admissible (since $v.k$ is an underestimate), then the results from Lemma 2 hold. If the policy is not admissible, the optimal reward-value of an admissible policy from that node cannot be higher than $v.\overline{V}_{R}$ (and the optimal cost-value cannot be lower than $v.\underline{V}_{C}$), since an admissible policy satisfies more constraints than a finite $k$-admissible one. In both cases, $a^{\prime}$ is strictly a better action than $a$, so taking action $a$ at node $v$ cannot be part of an optimal policy.

  3. If all node-actions $(v,a)$ are pruned, $v$ is also pruned.
     No action is admissible from $v$, so $v$ is inadmissible.

  4. $(v,a)$ is pruned if any successor node from taking action $a$ is pruned.
     A pruned successor node falls under case 1 or 3, so this policy is not admissible. ∎

Appendix F Proof of Lemma 2

Proof.

The proof relies on the result of Theorem 1 that, for admissibility constraint $k=\infty$, deterministic policies are sufficient for optimality. That is, it is sufficient to provide upper and lower bounds over deterministic policies.

We first show that the initial bounds $\overline{V}_{R},\underline{V}_{R},\overline{V}_{C},\underline{V}_{C},k$ for a new (leaf) node $v$ (Algorithm 3) are valid bounds with respect to the optimal policy.

$\overline{V}_{R}$ and $\underline{V}_{C}$ are initialized with the Fast Informed Bound on the unconstrained POMDP problem, separately for reward maximization and cost minimization. The Fast Informed Bound provides valid upper bounds on reward-value (and lower bounds on the cost-value of a cost-minimization policy, which in turn is a lower bound on the cost-value of an optimal RC-POMDP policy) Hauskrecht [2000]. The upper bound on the optimal reward-value for the reward-maximization unconstrained POMDP is also an upper bound on the optimal reward-value for an RC-POMDP, which has additional constraints. Similarly, the lower bound on the optimal cost-value for the cost-minimization unconstrained POMDP is also a lower bound on the cost-value of an optimal RC-POMDP policy, which has additional constraints.

$\underline{V}_{R}$ and $\overline{V}_{C}$ are computed using the minimum cost policy $\Gamma_{c_{min}}$. This minimum cost policy is an alpha-vector policy, whose value is an upper bound on the cost-value function Hauskrecht [2000], so $\overline{V}_{C}$ is an upper bound on the cost-value when following the minimum cost policy. The value $\underline{V}_{R}$ obtained from following the same policy is a valid lower bound on the optimal reward-value function from that node. Finally, the admissible horizon guarantee $k$ is initialized using the result of Lemma 1, which is shown to be a lower bound on the true admissible horizon when following the minimum cost policy.

Next, we show that performing the BACKUP step (Algorithm 4) maintains the validity of the bounds. Recall the condition of this lemma that the admissible horizon guarantee is $v.k=\infty$ for the node $v$. Thus, after the backup step, the admissible horizon guarantee remains $v.k=\infty$. From the proof of Theorem 2, the RC-POMDP Bellman backup $\mathbb{B}$ satisfies Bellman's principle of optimality and is a contraction mapping within the space of admissible value functions (and hence policies).

Let $\overline{V}_{R}^{\prime},\underline{V}_{R}^{\prime},\overline{V}_{C}^{\prime},\underline{V}_{C}^{\prime}$ be the values after the BACKUP step, which performs a Bellman backup $\mathbb{B}$ on $\overline{V}_{R},\underline{V}_{R},\overline{V}_{C},\underline{V}_{C}$:

v.\overline{V}_{R}^{\prime}=v.\mathbb{B}(\overline{V}_{R}),
v.\underline{V}_{R}^{\prime}=v.\mathbb{B}(\underline{V}_{R}),
v.\overline{V}_{C}^{\prime}=v.\mathbb{B}(\overline{V}_{C}),
v.\underline{V}_{C}^{\prime}=v.\mathbb{B}(\underline{V}_{C}).

Since $\mathbb{B}$ is a contraction mapping within the space of admissible policies, we see that:

v.\mathbb{B}(\overline{V}_{R})\leq v.\overline{V}_{R},
v.\mathbb{B}(\overline{V}_{C})\leq v.\overline{V}_{C},
v.\mathbb{B}(\underline{V}_{R})\geq v.\underline{V}_{R},
v.\mathbb{B}(\underline{V}_{C})\geq v.\underline{V}_{C}.

Therefore, for $v.k=\infty$, we have that for an optimal policy starting from $\bar{b}=(v.b,v.d)$ with optimal reward-value $V_{R}^{*}(\bar{b})$ and cost-value $V_{C}^{*}(\bar{b})$,

v.\underline{V}_{R}\leq V_{R}^{*}(\bar{b})\leq v.\overline{V}_{R},
V_{C}^{*}(\bar{b})\leq v.\overline{V}_{C}. ∎

Appendix G Proof of Theorem 4

Proof.

There are two termination criteria for ARCS, both of which must hold before termination. ARCS terminates when (1) it has found an admissible policy, and (2) the policy is $\epsilon$-optimal, that is, when $v_{0}.\overline{V}_{R}-v_{0}.\underline{V}_{R}\leq\epsilon$. We first discuss admissibility, then $\epsilon$-optimality.

(1) ARCS can terminate when it finds an admissible policy, i.e., $v_{0}.k=\infty$. ARCS finds an admissible policy when every leaf node $v_{leaf}$ under the policy satisfies (i) Eq. (20), or (ii) $V_{C}^{\pi}(b^{\prime}_{t})=0$.

We prove that this is a sound condition, i.e., if (i) or (ii) holds for every leaf node $v_{leaf}$, the computed policy is indeed admissible. As proven in Lemma 1, the admissible horizon guarantee $v_{leaf}.k$ for a leaf node is a conservative under-approximation. Therefore, a leaf node with $v_{leaf}.k=\infty$ indeed means that an admissible policy exists from $v_{leaf}$ (namely, following $\Gamma_{c_{min}}$). Suppose all leaf nodes have $v_{leaf}.k=\infty$. The worst-case back-propagation of the admissible horizon guarantee up the tree is sound, since a non-leaf node $v$ only has $v.k=\infty$ if all its leaf nodes have $k=\infty$ and Eq. (9) is satisfied at that node (Lines 9-18 in Algorithm 4). Therefore, if $v_{0}.k=\infty$, the policy is admissible and ARCS can terminate.

(2) ARCS can terminate when the gap criterion at the root is satisfied, that is, when $v_{0}.\overline{V}_{R}-v_{0}.\underline{V}_{R}\leq\epsilon$.

If $v_{0}.k=\infty$, the policy at $v_{0}$ is admissible, which implies every history-belief reachable under the policy tree is admissible. From Lemma 2, this implies that for all nodes, $\overline{V}_{R},\underline{V}_{R},\overline{V}_{C},\underline{V}_{C}$ are valid bounds on the optimal value function. Thus, $v_{0}.\overline{V}_{R}$ and $v_{0}.\underline{V}_{R}$ are valid bounds on the optimal value function from $b_{0}$, and so an $\epsilon$-optimal policy is indeed found.

Therefore, if ARCS terminates, the computed solution is an admissible ϵ\epsilon-optimal policy. ∎

Appendix H Experimental Evaluation

H.1 Implementation details

The code for each algorithm implementation can be found in the attached supplementary material. Here, we detail the parameters and implementation of the algorithms. For hyper-parameter tuning, we used the default parameters for ARCS. For the remaining algorithms, the hyper-parameter values were chosen based on empirical evaluations and fixed for the experiments. For each environment, we used a maximum of 20 time steps during evaluation. Except for the Tiger problem, all algorithms either reached terminal states within 20 time steps or produced a policy that stayed still by 20 time steps.

H.1.1 ARCS

We implemented ARCS as described. We used the Fast Informed Bound for the initialization of the upper bound on reward value and the lower bound on cost value, and SARSOP for the computation of the minimum cost policy. We set the SARSOP hyperparameter $\kappa=0.5$ for our experiments, the same value as Kurniawati et al. [2008]. We used a uniform randomization (0.5 probability) between heuristic sampling and random sampling during planning, and we leave an analysis of how the randomization weight affects planning efficiency to future work.

H.1.2 CGCP

We implemented CGCP and adapted it for discounted infinite horizon problems, using Alg. 5 in Walraven and Spaan [2018] as a basis. However, we use the discounted infinite horizon POMDP solver SARSOP Kurniawati et al. [2008] in place of a finite horizon PBVI. Our method of constructing policy graphs also differs, as the approach described there is for finite horizon problems: we check for a return to beliefs previously visited under the policy in order to reduce the size of the graph, as sketched below. A maximum time of 300 seconds was used for CGCP. For each SARSOP iteration within CGCP, $\tau=20$ seconds was given initially, and the solve time was incremented by $\tau^{+}=100$ seconds every time the dual price $\lambda$ remained the same. Additionally, CGCP was limited to 100 iterations. To reduce computation time, policy graph evaluation and SARSOP search were limited to depth 20 (the same as the Monte Carlo evaluation depth) in all domains except the RockSample domains, which were allowed unlimited depth.
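The following is a minimal Julia sketch (not the CGCP implementation) of a policy-graph construction that merges edges returning to previously visited beliefs; `policy_action`, `belief_update`, and `obs_probs` are assumed helper functions, and in practice beliefs may need rounding before being used as dictionary keys.

    # Build a policy graph: nodes are beliefs visited under the policy; an edge that leads
    # back to an already-visited belief is merged instead of spawning a new node.
    function build_policy_graph(b0, policy_action, belief_update, obs_probs; max_depth=20)
        nodes = Dict(b0 => 1)                  # belief => node id
        edges = Vector{Tuple{Int,Any,Int}}()   # (from id, observation, to id)
        frontier = [(b0, 0)]
        while !isempty(frontier)
            b, depth = pop!(frontier)
            depth >= max_depth && continue
            a = policy_action(b)
            for (o, p) in obs_probs(b, a)      # Dict of observation => probability
                p > 0 || continue
                bp = belief_update(b, a, o)
                if haskey(nodes, bp)           # return to a previously visited belief
                    push!(edges, (nodes[b], o, nodes[bp]))
                else
                    nodes[bp] = length(nodes) + 1
                    push!(edges, (nodes[b], o, nodes[bp]))
                    push!(frontier, (bp, depth + 1))
                end
            end
        end
        return nodes, edges
    end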

For the Tunnels benchmark, 1000 Monte Carlo simulations to depth 20 were used in place of a policy graph to estimate the value of policies. This was due to the inability of the policy graphs to estimate the value of some infinite horizon POMDP solutions that do not lead to terminal states or to beliefs that have already appeared in the tree.

H.1.3 CGCP-CL

CGCP-CL uses the same parameters as CGCP, but re-plans at every time step.

H.1.4 No-regret Learning Algorithm

We implemented the no-regret learning algorithm from Kalagarla et al. [2022]. We used SARSOP as the unconstrained POMDP solver and Monte Carlo simulations to estimate the value of policies.

H.1.5 CPBVI

We implemented CPBVI based on Kim et al. [2011]. The algorithm generates a set of reachable beliefs $\mathcal{B}$ before performing iterations of approximate dynamic programming on the belief set. However, the paper did not include full details on belief set $\mathcal{B}$ generation and alpha-vector set $\Gamma$ initialization.

The paper cited Pineau et al. [2006] for their belief set description, and so we followed Pineau et al. [2006] by expanding $\mathcal{B}$ greedily towards achieving uniform density in the set of reachable beliefs. This is done by randomly simulating a step forward from a node in the tree, thereby generating candidate beliefs, and keeping the belief that is farthest from any belief already in the tree. We repeat this expansion until the desired number of beliefs has been added to the tree, as sketched below.
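A minimal Julia sketch of this greedy expansion, assuming beliefs are stored as probability vectors, `B` already contains at least the initial belief, and `random_successor(b)` simulates one random action-observation step from `b`:

    using LinearAlgebra

    # Greedily grow the belief set toward uniform density: for each new slot, sample one
    # candidate successor per existing belief and keep the candidate farthest (L1 distance)
    # from every belief already in the set.
    function expand_beliefs!(B::Vector{Vector{Float64}}, random_successor, n_new::Int)
        for _ in 1:n_new
            best_b, best_dist = B[1], -Inf
            for b in B
                cand = random_successor(b)
                dist = minimum(norm(cand - b2, 1) for b2 in B)
                if dist > best_dist
                    best_b, best_dist = cand, dist
                end
            end
            push!(B, best_b)
        end
        return B
    end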

To address $\Gamma$ initialization, we adopted the blind lower bound approach. This approach represents the lower bound $\Gamma$ with a set of alpha-vectors, one for each action in $A$. Each alpha-vector is generated under the assumption that the same action is taken forever. To compute an alpha-vector corresponding to a given action, we first compute the best-action worst-state (BAWS) lower bound, i.e., the discounted reward obtained by taking the best action in the worst state forever. We then update the BAWS alpha-vectors by performing value backups until convergence.
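A minimal Julia sketch of this blind lower bound initialization, assuming a reward matrix `R[s, a]`, per-action transition matrices `T[a]` with `T[a][s, sp] = P(sp | s, a)`, and discount `γ`; a fixed number of backups stands in for iteration until convergence:

    using LinearAlgebra

    function blind_lower_bound(R::Matrix{Float64}, T::Vector{Matrix{Float64}}, γ::Float64;
                               iters::Int=500)
        nS, nA = size(R)
        # BAWS bound: discounted reward of taking the best action in the worst state forever.
        baws = maximum(minimum(R, dims=1)) / (1 - γ)
        Γ = [fill(baws, nS) for _ in 1:nA]          # one alpha-vector per action
        for _ in 1:iters
            # Backup assuming action a is repeated forever (blind policy).
            Γ = [[R[s, a] + γ * dot(T[a][s, :], Γ[a]) for s in 1:nS] for a in 1:nA]
        end
        return Γ
    end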

The CPBVI algorithm involves the computation of a linear program (LP) to obtain the best action at a given belief. One of the constraints asserts that the convex combination of cost alpha-vectors evaluated at a given belief $b$ must be less than or equal to the admissible cost $d$ associated with $b$, which is used in CPBVI's heuristic approach. However, if $d<0$, the LP becomes infeasible. The case of $d<0$ is possible since no pruning of beliefs is conducted, and the paper did not provide details for this situation. To address it, if the LP is infeasible, we output the action with the lowest cost, akin to ARCS' minimum cost policy method when no policy is admissible.

H.1.6 CPBVI-D

CPBVI computes stochastic policies. We modified CPBVI to compute only deterministic policies as follows: instead of solving the LP to generate a stochastic action, we solve for the single highest-value action subject to the cost-value constraint, as sketched below.
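For illustration, a minimal Julia sketch (assumed variable names, not the authors' implementation) of this deterministic selection rule, where `QR[a]` and `QC[a]` denote the reward and cost Q-values at the current belief and `d` is the admissible cost:

    # Pick the highest reward-value action whose cost-value satisfies the budget d;
    # if no action is feasible, fall back to the minimum-cost action.
    function select_action_deterministic(QR::Vector{Float64}, QC::Vector{Float64}, d::Float64)
        feasible = [a for a in eachindex(QR) if QC[a] <= d]
        isempty(feasible) && return argmin(QC)
        return feasible[argmax(QR[feasible])]
    end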

Although both CPBVI and CPBVI-D should theoretically be insensitive to random seed initialization, both algorithms are sensitive to the number-of-beliefs parameter used during planning. With too few beliefs selected for a problem, both algorithms cannot search the problem space sufficiently. With too many beliefs selected, the time taken for belief selection is too high for the moderately sized CRS and Tunnels problems. Therefore, we tuned and chose a belief parameter of 30, which allows solutions to be found within the planning time of 300 s. Note that even with a small number of beliefs (30), CPBVI routinely overruns the planning time limit during its update step.

H.2 Environment details

  • CE: Simplified counterexample in Figure 1.

  • C-Tiger: A constrained version of the Tiger POMDP problem Kaelbling et al. [1998], with a cost of 1 for the “listen” action.

  • CRS: A constrained version of the RockSample problem Smith and Simmons [2004] as defined in Lee et al. [2018] with varying sizes and number of rocks.

  • Tunnels: A scaled version of Example 1, shown in Fig. 2.

Except for the RockSample environments, our environments do not depend on randomness. For the RockSample environments, in which rock locations depend on randomness, we used the MersenneTwister RNG with a fixed seed to generate the environments.
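A minimal sketch of this fixed-seed setup in Julia; the seed value and the position-sampling helper are illustrative assumptions, not the benchmark code:

    using Random

    rng = MersenneTwister(1)      # fixed seed for reproducible rock placement
    # Illustrative helper: sample n rock positions on a grid × grid map.
    sample_rock_positions(rng, n, grid) = [(rand(rng, 1:grid), rand(rng, 1:grid)) for _ in 1:n]
    sample_rock_positions(rng, 4, 4)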

H.2.1 Counterexample Problem

The counterexample POMDP in Figure 1 uses a discount of $\gamma=1$. In the experiments, we used a discount factor of $\gamma=1-e^{-14}$ to approximate a discount of $\gamma=1$. It is modeled as an RC-POMDP as follows.

States are enumerated as $\{s_{1},s_{2},s_{3},s_{4},s_{5}\}$, with actions $\{a_{A},a_{B}\}$ and observations being noisy indicators of whether or not a state is rocky.

States $s_{1}$ and $s_{2}$ indicate whether cave 1 or cave 2 contains rocky terrain, respectively. Taking action $a_{B}$ circumvents the caves, always incurring a cost of 5.0 and transitioning to terminal state $s_{5}$. Taking action $a_{A}$ moves closer to the caves, where $s_{i}$ deterministically transitions to $s_{i+2}$. In this transition, the agent is given an 85% accurate observation of the true state.

At this new observation position, the agent is given a choice to commit to one of the two caves, where $s_{3}$ indicates that cave 1 contains rocks and $s_{4}$ indicates that cave 2 contains rocks. Action $a_{A}$ moves through cave 1 and $a_{B}$ moves through cave 2. Moving through rocks incurs a cost of 10, while avoiding them incurs no cost. Taking action $a_{A}$ at this point, regardless of the true state, gives a reward of 12. States $s_{3}$ and $s_{4}$ then transition to terminal state $s_{5}$.

H.2.2 Tunnels Problem

The Tunnels problem is modeled as an RC-POMDP as follows. As depicted in Figure 2, it consists of a central starting hall that funnels into 3 separate tunnels. At the ends of tunnels 1, 2, and 3 lie rewards of 2.0, 1.5, and 0.5, respectively. However, with high reward also comes high cost: tunnel 1 has an 80% probability of containing rocks and tunnel 2 has a 40% probability of containing rocks, while tunnel 3 is always free of rocks. If present, the rocks occupy the two steps before the reward location at the end of a tunnel, and a cost of 1 is incurred if the agent traverses over these rocks. Furthermore, a cost of 1 is incurred if the agent chooses to move backwards.

The only partial observability in the state is over whether or not rocks are present in tunnels 1 and 2. As the agent gets closer to the rocks, the accuracy of the observations indicating the presence of rocks increases.

H.3 Experiment Evaluation Setup

We implemented each algorithm in Julia using tools from the POMDPs.jl framework Egorov et al. [2017], and all experiments were conducted single-threaded on a computer with two 2.2 GHz Intel Xeon CPUs with 48 cores and 128 GB RAM. All experiments were conducted on Ubuntu 18.04.6 LTS. For all algorithms except CGCP-CL, solve time is limited to 300 seconds and online action selection to 0.05 seconds. For CGCP-CL, 300 seconds was given for each action (recomputed from scratch). We simulate each policy 1000 times, except for CGCP-CL, which is simulated 100 times due to the time taken for re-computation of the policy at each time step. The full results with the mean and standard error of the mean for each metric are shown in Table 3.
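For reference, a minimal Julia sketch (assumed helper `simulate_once`, not the actual evaluation harness) of how the reported statistics can be computed from repeated rollouts; `simulate_once` is a hypothetical function returning the cumulative reward, cumulative cost, and whether the cost budget was violated in one 20-step rollout:

    using Statistics

    function evaluate(simulate_once, n_sims::Int)
        results = [simulate_once() for _ in 1:n_sims]      # e.g., n_sims = 1000
        rewards = [r[1] for r in results]
        costs = [r[2] for r in results]
        violations = [r[3] for r in results]
        sem(x) = std(x) / sqrt(length(x))                  # standard error of the mean
        return (violation_rate = (mean(violations), sem(violations)),
                reward = (mean(rewards), sem(rewards)),
                cost = (mean(costs), sem(costs)))
    end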

Table 3: Comparison of our RC-POMDP algorithm to state-of-the-art offline C-POMDP algorithms. Each environment header shows the cost budget ĉ and the sizes of the state, action, and observation spaces. We report the mean and 1 standard error of the mean for each metric. A memory-out is indicated by a dash (-). Note that for CRS(11,11), due to the 300 s time limit, CGCP, EXP-Gradient, and ours all compute a policy that goes directly to the exit area without interacting with any rocks, and achieve the same reward and 0 cost.

Algorithm | Violation Rate | Cumulative Reward | Cumulative Cost

CE (ĉ = 5), 5 / 2 / 2
  CGCP          | 0.514 ± 0.016 | 12.0 ± 0.0      | 5.19 ± 0.158
  CGCP-CL       | 0.0 ± 0.0     | 6.12 ± 0.603    | 3.25 ± 0.313
  CPBVI         | 0.0 ± 0.0     | 8.354 ± 0.135   | 4.505 ± 0.067
  CPBVI-D       | 0.0 ± 0.0     | 6.192 ± 0.19    | 3.61 ± 0.105
  EXP-Gradient  | 0.485 ± 0.016 | 11.868 ± 0.04   | 4.975 ± 0.157
  Ours          | 0.0 ± 0.0     | 10 ± 0          | 5 ± 0

C-Tiger (ĉ = 1.5), 2 / 3 / 2
  CGCP          | 0.674 ± 0.015 | -62.096 ± 3.148 | 1.536 ± 0.034
  CGCP-CL       | 0.76 ± 0.043  | -72.424 ± 5.283 | 1.535 ± 0.005
  CPBVI         | 0.482 ± 0.016 | -74.456 ± 1.79  | 1.489 ± 0.011
  CPBVI-D       | 0.0 ± 0.0     | -75.414 ± 1.617 | 1.497 ± 0.0
  EXP-Gradient  | 1.0 ± 0.0     | -3.713 ± 0.92   | 2.294 ± 0.004
  Ours          | 0.0 ± 0.0     | -75.075 ± 1.511 | 1.422 ± 0.0

C-Tiger (ĉ = 3), 2 / 3 / 2
  CGCP          | 0.753 ± 0.014 | -1.690 ± 0.647  | 2.996 ± 0.014
  CGCP-CL       | 0.140 ± 0.035 | -2.983 ± 2.045  | 2.930 ± 0.035
  CPBVI         | 0.153 ± 0.011 | -11.11 ± 1.05   | 2.58 ± 0.010
  CPBVI-D       | 0.0 ± 0.0     | -178 ± 2.62     | 0.0 ± 0.0
  EXP-Gradient  | 1.0 ± 0.0     | 1.813 ± 0.323   | 3.222 ± 0.007
  Ours          | 0.0 ± 0.0     | -5.75 ± 0.522   | 2.982 ± 0.001

CRS(4,4) (ĉ = 1), 201 / 8 / 3
  CGCP          | 0.512 ± 0.024 | 10.434 ± 0.125  | 0.512 ± 0.016
  CGCP-CL       | 0.78 ± 0.004  | 1.657 ± 0.315   | 0.724 ± 0.040
  CPBVI         | 0.0 ± 0.0     | -0.4 ± 0.316    | 0.522 ± 0.016
  CPBVI-D       | 0.0 ± 0.0     | -0.321 ± 0.434  | 3.082 ± 0.005
  EXP-Gradient  | 0.295 ± 0.014 | 10.383 ± 0.156  | 0.918 ± 0.058
  Ours          | 0.0 ± 0.0     | 6.52 ± 0.316    | 0.523 ± 0.016

CRS(5,7) (ĉ = 1), 3201 / 12 / 3
  CGCP          | 0.412 ± 0.022 | 11.984 ± 0.193  | 1.00 ± 0.038
  CGCP-CL       | 0.18 ± 0.009  | 9.641 ± 0.477   | 0.991 ± 0.034
  CPBVI         | 0.0 ± 0.0     | 0.0 ± 0.0       | 0.0 ± 0.0
  CPBVI-D       | 0.0 ± 0.0     | 0.0 ± 0.0       | 0.0 ± 0.0
  EXP-Gradient  | 0.30 ± 0.014  | 11.90 ± 0.22    | 1.31 ± 0.06
  Ours          | 0.0 ± 0.0     | 11.766 ± 0.137  | 0.950 ± 0.0

CRS(7,8) (ĉ = 1), 12545 / 13 / 3
  CGCP          | 0.357 ± 0.015 | 10.78 ± 0.19    | 0.945 ± 0.04
  CGCP-CL       | 0.20 ± 0.13   | 11.17 ± 1.53    | 0.931 ± 0.078
  CPBVI         | 0.0 ± 0.0     | 0.0 ± 0.0       | 0.0 ± 0.0
  CPBVI-D       | 0.0 ± 0.0     | 0.0 ± 0.0       | 0.0 ± 0.0
  EXP-Gradient  | 0.322 ± 0.015 | 10.03 ± 0.16    | 1.154 ± 0.054
  Ours          | 0.0 ± 0.0     | 6.61 ± 0.22     | 0.960 ± 0.003

CRS(11,11) (ĉ = 1), 247809 / 16 / 3
  CGCP          | 0.0 ± 0.0     | 5.987 ± 0.0     | 0.0 ± 0.0
  CGCP-CL       | -             | -               | -
  CPBVI         | 0.0 ± 0.0     | 0.0 ± 0.0       | 0.0 ± 0.0
  CPBVI-D       | 0.0 ± 0.0     | 0.0 ± 0.0       | 0.0 ± 0.0
  EXP-Gradient  | 0.0 ± 0.0     | 5.987 ± 0.0     | 0.0 ± 0.0
  Ours          | 0.0 ± 0.0     | 5.987 ± 0.0     | 0.0 ± 0.0

Tunnels, P(correct obs) = 0.8 (ĉ = 1), 53 / 3 / 5
  CGCP          | 0.50 ± 0.015  | 1.612 ± 0.011   | 1.011 ± 0.016
  CGCP-CL       | 0.31 ± 0.046  | 1.22 ± 0.058    | 0.683 ± 0.082
  CPBVI         | 0.90 ± 0.009  | 1.921 ± 0.0     | 1.621 ± 0.02
  CPBVI-D       | 0.89 ± 0.010  | 1.921 ± 0.0     | 1.568 ± 0.024
  EXP-Gradient  | 0.48 ± 0.016  | 1.35 ± 0.02     | 0.815 ± 0.03
  Ours          | 0.0 ± 0.0     | 1.028 ± 0.018   | 0.440 ± 0.020

Tunnels, P(correct obs) = 0.95 (ĉ = 1), 53 / 3 / 5
  CGCP          | 0.492 ± 0.016 | 1.679 ± 0.008   | 1.056 ± 0.033
  CGCP-CL       | 0.27 ± 0.045  | 1.17 ± 0.061    | 0.812 ± 0.069
  CPBVI         | 0.783 ± 0.013 | 1.921 ± 0.0     | 1.517 ± 0.026
  CPBVI-D       | 0.812 ± 0.012 | 1.921 ± 0.0     | 1.57 ± 0.024
  EXP-Gradient  | 0.44 ± 0.016  | 1.42 ± 0.02     | 0.86 ± 0.03
  Ours          | 0.0 ± 0.0     | 1.010 ± 0.017   | 0.273 ± 0.013