
Reinforcement Learning with Continuous Actions Under Unmeasured Confounding

Yuhan Li, Eugene Han, Yifan Hu, Wenzhuo Zhou, Zhengling Qi, Yifan Cui, Ruoqing Zhu

Department of Statistics, University of Illinois at Urbana-Champaign, Champaign, IL; Department of Human Development and Family Studies, University of Illinois at Urbana-Champaign, Champaign, IL; Department of Statistics, University of California Irvine, Irvine, CA; Department of Decision Sciences, The George Washington University, Washington, DC; School of Management & Center for Data Science, Zhejiang University, Hangzhou, China. Correspondence to Ruoqing Zhu (rqzhu@illinois.edu).
Abstract

This paper addresses the challenge of offline policy learning in reinforcement learning with continuous action spaces when unmeasured confounders are present. While most existing research focuses on policy evaluation within partially observable Markov decision processes (POMDPs) and assumes discrete action spaces, we advance this field by establishing a novel identification result to enable the nonparametric estimation of policy value for a given target policy under an infinite-horizon framework. Leveraging this identification, we develop a minimax estimator and introduce a policy-gradient-based algorithm to identify the in-class optimal policy that maximizes the estimated policy value. Furthermore, we provide theoretical results regarding the consistency, finite-sample error bound, and regret bound of the resulting optimal policy. Extensive simulations and a real-world application using the German Family Panel data demonstrate the effectiveness of our proposed methodology.

Keywords: Reinforcement Learning, Policy Optimization, Policy Evaluation, Causal Inference, Confounded POMDP

1 Introduction

In practical applications of reinforcement learning (RL) (Sutton and Barto, 2018), the evaluation and optimization of policies using only pre-collected datasets has become essential. This need arises from the potential costs and safety risks associated with frequent interactions with the environment, such as real-world testing in autonomous driving (Zhu et al., 2020) and treatment selection in precision medicine (Luckett et al., 2019; Zhou et al., 2024a). As a result, there has been a growing interest in offline RL (Precup, 2000; Levine et al., 2020), which focuses on policy evaluation and optimization without requiring additional environment interactions.

Recent years have seen significant advancements in both off-policy evaluation (OPE) and off-policy learning (OPL). In OPE, popular methodologies include value-based approaches (Le et al., 2019; Liao et al., 2021), importance sampling techniques (Liu et al., 2018; Xie et al., 2019), and doubly robust methods that integrate value-based and importance sampling estimators (Uehara et al., 2020; Kallus et al., 2022). Additionally, confidence intervals for a target policy have been developed using bootstrapping (Hao et al., 2021), asymptotic properties (Shi et al., 2022b), and finite-sample error bounds (Zhou et al., 2023). In OPL, various algorithms have been proposed, demonstrating notable success across both discrete (Mnih et al., 2015; Luckett et al., 2019; Zhou et al., 2024b) and continuous action spaces (Kumar et al., 2020; Fujimoto and Gu, 2021; Li et al., 2023; Zhou, 2024).

A common assumption in the works mentioned above is the absence of unmeasured confounders. Specifically, these studies assume that all state variables are fully observed, with no unmeasured variables that might confound the observed actions. However, this assumption is unverifiable with offline data and is often violated in practical settings. Real-world instances include the influence of genetic factors in personalized medicine (Fröhlich et al., 2018) and the complexities of path planning in robotics (Zhang et al., 2020). To address these challenges, a recent line of research has focused on OPE within the framework of a confounded partially observable Markov Decision Process (POMDP), where the behavior policy generating the batch data may depend on unobserved state variables (Zhang and Bareinboim, 2016).

Various methods have been developed to identify the true policy value with the presence of unmeasured confounders. Within the contextual bandit setting (i.e., one decision point), causal inference frameworks have been established to identify the treatment effect by sensitivity analysis (Bonvini and Kennedy, 2022), and through the use of instrumental variables (Cui and Tchetgen Tchetgen, 2021) and negative controls (Miao et al., 2018; Cui et al., 2023). Existing methods for extensions to multiple decision points or the infinite horizon setting fall into two main categories. The first type considers i.i.d. confounders in the dynamic system and thus preserves the Markovian property (Zhang and Bareinboim, 2016). Under this framework, OPE methods have been formulated under various identification conditions, such as partial identification via sensitivity analysis (Kallus and Zhou, 2020; Bruns-Smith, 2021) and approaches that utilize instrumental variables or mediators (Li et al., 2021; Shi et al., 2022c). The second category explores the estimation of policy values in more general confounded POMDP models, where the Markovian assumption does not hold. These methods span a wide array of strategies, including the use of proxy variables (Shi et al., 2022a; Miao et al., 2022; Uehara et al., 2024) or instrumental variables (Fu et al., 2022; Xu et al., 2023), spectral methods for undercomplete POMDPs (Jin et al., 2020), and techniques focusing on predictive state representation (Cai et al., 2022; Guo et al., 2022).

However, there are two less investigated issues in confounded POMDP settings. First, existing methods mainly focus on discrete action spaces (Miao et al., 2022; Shi et al., 2022a; Bennett and Kallus, 2023), despite many real-world scenarios requiring decision making over a continuous action space (Lillicrap et al., 2015). A straightforward workaround for adapting existing methods to continuous domains is to discretize the continuous action space. However, this approach either introduces significant bias when using coarse discretization (Lee et al., 2018; Cai et al., 2021) or encounters the curse of dimensionality when applied to fine grids (Chou et al., 2017). Second, the challenge of learning an optimal policy from a batch dataset, particularly in the presence of unmeasured confounders, is paramount in various fields, including personalized medicine (Lu et al., 2022) and robotics (Brunke et al., 2022). A majority of existing methods, however, focus on policy evaluation rather than optimization. There are some recent efforts to address this gap. For instance, Qi et al. (2023) explore policy learning under the contextual bandit setting, and Hong et al. (2023) investigate policy gradient methods under finite-horizon confounded POMDPs. In the infinite-horizon setting, Kallus and Zhou (2021) and Fu et al. (2022) consider a restrictive memoryless-confounder setting where the Markovian property is preserved, while Guo et al. (2022) and Lu et al. (2022) focus on theoretical properties of the induced estimator under more general confounded POMDP settings, yet practical computational algorithms remain elusive. Thus, the development of computationally viable policy learning algorithms for infinite-horizon confounded POMDPs, where the Markovian property is violated, remains a significant challenge.

Motivated by these challenges, in this paper we study policy learning with continuous actions for confounded POMDPs over an infinite horizon. Our main contribution to the literature is threefold. First, relying on some time-dependent proxy variables, we extend the proximal causal inference framework (Miao et al., 2018; Tchetgen Tchetgen et al., 2020) to the infinite-horizon setting, and establish a nonparametric identification result for OPE using V-bridge functions with continuous actions for confounded POMDPs. Leveraging the identification result, we propose an unbiased minimax estimator for the V-bridge function and introduce a computationally efficient Fitted-Q Evaluation (FQE)-type algorithm for estimating the policy value. Second, we develop a novel policy gradient algorithm that searches for the optimal policy within a specified policy class by maximizing the estimated policy value. The proposed algorithm is tailored for continuous action spaces and provides enhanced interpretability of the optimal policy. Third, we thoroughly investigate the theoretical properties of the proposed methods for both policy evaluation and policy learning, including estimator consistency, a finite-sample bound on the performance error, and the sub-optimality of the induced optimal policy. We validate our proposed method through extensive numerical experiments and apply it to the German Family Panel (Pairfam) dataset (Brüderl et al., 2023), where we aim to identify optimal strategies to enhance long-term relationship satisfaction.

2 Preliminaries

We consider an infinite-horizon confounded partially observable Markov decision process (POMDP) defined as \mathcal{M}=\{\mathcal{S},\mathcal{O},\mathcal{A},\mathbf{P},R,\gamma\}, where \mathcal{S} and \mathcal{O} denote the unobserved and observed continuous state spaces respectively, \mathcal{A} is the action space, \mathbf{P}:\mathcal{S}\times\mathcal{O}\times\mathcal{A}\to\Delta(\mathcal{S}\times\mathcal{O}) is the unknown transition kernel, R:\mathcal{S}\times\mathcal{O}\times\mathcal{A}\to\mathbb{R} is a bounded reward function, and \gamma\in[0,1) is the discount factor that balances immediate and future rewards. \mathcal{O} can also be treated as the observation space in the classical POMDP; the process \mathcal{M} can then be summarized as \{S_{t},O_{t},A_{t},R_{t}\}^{T}_{t=1}, with S_{t} and O_{t} as the unobserved and observed state variables, A_{t} as the action, and R_{t} as the reward.

The objective of policy learning is to search for an optimal policy, \pi^{*}, which maximizes the expected discounted sum of rewards, using batch data obtained from a behavior policy \pi^{b}. We assume the batch data consist of n i.i.d. trajectories, i.e.,

\mathcal{D}_{n}=\{\mathcal{D}^{i}\}^{n}_{i=1}=\{O_{0}^{i},A_{0}^{i},R_{0}^{i},O_{1}^{i},\ldots,O_{T}^{i},A_{T}^{i},R_{T}^{i},O_{T+1}^{i}\}^{n}_{i=1},

where the length of each trajectory T is assumed to be fixed for simplicity, and the unobserved state S^{i}_{t} is not available. In this paper, we focus on scenarios where the behavior policy \pi^{b} maps both the unobserved and observed state spaces to the action space, that is, \pi^{b}:\mathcal{S}\times\mathcal{O}\to\mathcal{A}. Meanwhile, the target optimal policy depends only on the observed state: \pi^{*}:\mathcal{O}\to\mathcal{A}. For a given target policy \pi:\mathcal{O}\to\mathcal{A}, its state-value function is defined as

V^{\pi}(s,o)=\mathbb{E}_{\pi}\Big[\sum^{\infty}_{k=0}\gamma^{k}R_{t+k}\mid S_{t}=s,O_{t}=o\Big],    (2.1)

where \mathbb{E}_{\pi} denotes the expectation with respect to the distribution under which actions at each decision point t follow policy \pi. Our goal is to utilize the batch data \mathcal{D}_{n} to find the optimal policy \pi^{*} that maximizes the target policy value, defined as

J(\pi)=\mathbb{E}[V^{\pi}(S_{0},O_{0})], \qquad \pi^{*}=\operatorname*{arg\,max}_{\pi}J(\pi),    (2.2)

with \mathbb{E} representing the expectation under the behavior policy. Due to the unobserved state S_{t}, standard policy learning approaches based on the Bellman optimality equation yield biased estimates. Thus, we first introduce an identification result to estimate the policy value for any target policy \pi with the help of some proxy variables. Subsequently, we employ policy gradient techniques to find the optimal policy \pi^{*}.

3 Methodology

Inspired by the proximal causal inference framework introduced by Tchetgen Tchetgen et al. (2020), we initially present the identification result for policy evaluation in Section 3.1. Following this, its associated minimax estimator for any given target policy π\pi is discussed in Section 3.2. Building on these policy evaluation findings, we detail a policy-gradient based approach in Section 3.3 to identify the optimal policy within a policy class by maximizing the estimated policy value.

3.1 Identification Results

In this section, we present the identification result for the confounded POMDP setting. The derived identification equation serves as a foundation for off-policy evaluation (OPE) in the presence of unmeasured confounding, while also ensuring the existence of bridge functions.

Building on the proximal causal inference framework proposed by Tchetgen Tchetgen et al. (2020), we further assume the observation of reward-inducing proxy variables, WtW_{t}, that relate to the action AtA_{t} solely through {St,Ot}\{S_{t},O_{t}\}. In practical scenarios, WtW_{t} could represent environmental factors correlated with the outcome RtR_{t}, but remain unaffected by AtA_{t}. For example, in healthcare applications, WtW_{t} might include the choice of doctors or hospitals administering the treatment, or it could consist of variables that are either not accessible for decision-making due to privacy concerns or that become available only after the treatment. As for family panel studies (Brüderl et al., 2023), WtW_{t} can be selected as the variables related to housing conditions, working environments and educational backgrounds of family members, while the action can be defined as the time spent with family. A representative Directed Acyclic Graph (DAG) of this can be seen in the left panel Figure 1. The observed data for the confounded POMDP then have the form of

\mathcal{D}_{n}=\{\mathcal{D}^{i}\}^{n}_{i=1}=\{O_{0}^{i},W_{0}^{i},A_{0}^{i},R_{0}^{i},O_{1}^{i},\ldots,O_{T}^{i},W_{T}^{i},A_{T}^{i},R_{T}^{i},O_{T+1}^{i},W_{T+1}^{i}\}^{n}_{i=1}.
Figure 1: DAG of the proposed confounded POMDP (left) and classical POMDP (right).

The left panel of Figure 1 can be viewed as a specific example of the proximal causal inference framework proposed by Tchetgen Tchetgen et al. (2020); Cui et al. (2023). In our representation, we consider the previously observed state-action pair (Ot1,At1)(O_{t-1},A_{t-1}) as the action-inducing proxy ZtZ_{t}. Therefore, all arrows related to ZtZ_{t} from the original framework can be removed.

In contrast to the classical POMDP setting discussed in (Shi et al., 2022a; Bennett and Kallus, 2023) and illustrated in the right panel of Figure 1, where the behavior policy depends solely on unobserved states, our proposed causal framework allows the behavior policy to be influenced by both observed and unobserved states with the help of additional reward-proxy variables WtW_{t}. Such modification more accurately captures real-world scenarios in which observable state variables can significantly affect decision-making during batch data collection process. A notable example is in precision medicine, where observable variables, such as laboratory results and the patient’s current health status, often influence treatment choices.

Before presenting the identification results, we first formally introduce the basic assumptions of the confounded POMDP. Assumption 1 indicates that the future states are independent of the past given current full state and action (St,Ot,At)(S_{t},O_{t},A_{t}). Assumption 2 requires that the reward proxy WtW_{t} relates to the unobserved state StS_{t} when conditioned on the observed state OtO_{t}, but not with the action AtA_{t} when conditioned on the current full state (St,Ot)(S_{t},O_{t}). Notably, Assumption 2 does not assume the causal relationship between WtW_{t} and RtR_{t}, thus the dashed line between WtW_{t} and RtR_{t} in Figure 1. Assumption 3 requires that the previous observed state, (Ot1,At1)(O_{t-1},A_{t-1}) does not influence the reward proxy WtW_{t} and the reward RtR_{t} given current full state and action. It can be easily verified that Assumptions 1-3 are automatically satisfied by the DAG in Figure 1.

Assumption 1 (Markovian).

(S_{t+1},O_{t+1})\perp\!\!\!\perp\{S_{j},O_{j},A_{j}\}_{1\leq j<t}\mid(S_{t},O_{t},A_{t}), for 0\leq t\leq T.

Assumption 2 (Reward Proxy).

W_{t}\perp\!\!\!\perp(A_{t},S_{t-1},O_{t-1})\mid(S_{t},O_{t}), and W_{t}\not\!\perp\!\!\!\perp S_{t}\mid O_{t}, for 1\leq t\leq T.

Assumption 3 (Action Proxy).

(O_{t-1},A_{t-1})\perp\!\!\!\perp(W_{t},R_{t})\mid(S_{t},O_{t},A_{t}), for 1\leq t\leq T.

However, we still cannot directly identify the value of the target policy with Assumptions 1-3 alone by adjusting for (S_{t},O_{t}), as S_{t} is not observable. Thus, we also need the completeness condition stated in Assumption 4 to get around the unobserved state S_{t}.

Assumption 4 (Completeness).

(a) For any square-integrable function h, \mathbb{E}[h(S_{t})\mid O_{t-1},A_{t-1},O_{t},A_{t}]=0 a.s. if and only if h=0 a.s.
(b) For any square-integrable function g, \mathbb{E}[g(O_{t-1},A_{t-1})\mid O_{t},W_{t},A_{t}]=0 a.s. if and only if g=0 a.s.

Completeness is a commonly made assumption in identification problems, such as instrumental variable identification (Newey and Powell, 2003; D’Haultfoeuille, 2011), and proximal causal inference (Tchetgen Tchetgen et al., 2020; Cui et al., 2023). Assumption 4 (a) rules out conditional independence between (Ot1,At1)(O_{t-1},A_{t-1}) and StS_{t} given OtO_{t} and AtA_{t}, and indicates that the previous state-action pair (Ot1,At1)(O_{t-1},A_{t-1}) should contain sufficient information from the unobserved state StS_{t}. Assumption 4 (b) ensures the injectivity of the conditional expectation operator. Leveraging Picard’s Theorem (Kress et al., 1989), the existence of bridge functions within a contextual bandit setting can be established (Miao et al., 2018).

Based on Assumptions 1-4, we generalize the original identification result to the infinite-horizon setting. Define the Q-bridge function and V-bridge function of the target policy \pi as follows:

\mathbb{E}[Q^{\pi}(O_{t},W_{t},A_{t})\mid O_{t},S_{t},A_{t}]=\mathbb{E}_{\pi}\Big[\sum^{\infty}_{k=0}\gamma^{k}R_{t+k}\,\Big|\,O_{t},S_{t},A_{t}\Big],    (3.1)
\mathbb{E}[V^{\pi}(O_{t},W_{t})\mid O_{t},S_{t}]=\mathbb{E}_{\pi}\Big[\sum^{\infty}_{k=0}\gamma^{k}R_{t+k}\,\Big|\,O_{t},S_{t}\Big].    (3.2)

Therefore, it is immediate that V^{\pi}(O_{t},W_{t})=\int_{a\in\mathcal{A}}\pi(a|O_{t})Q^{\pi}(O_{t},W_{t},a)da. If there exists a V^{\pi} satisfying (3.2), then the value of the target policy \pi can be identified by

J(\pi)=\mathbb{E}[V^{\pi}(O_{0},W_{0})].

Notice that the bridge functions defined in (3.1) and (3.2) are not necessarily unique, but the policy value J(\pi) can be uniquely identified based on any of them. We formally present the identification result in Theorem 3.1.

Theorem 3.1.

(Identification) For a confounded POMDP model whose variables satisfy Assumptions 1-4 and some regularity conditions, there always exist Q-bridges and V-bridges satisfying (3.1) and (3.2), respectively. Additionally, one particular Q-bridge and V-bridge can be obtained by solving the following equation

\mathbb{E}[Q^{\pi}(O_{t},W_{t},A_{t})-R_{t}-\gamma V^{\pi}(O_{t+1},W_{t+1})\mid O_{t-1},A_{t-1},O_{t},A_{t}]=0.    (3.3)

Theorem 3.1 guarantees the existence of both V-bridges and Q-bridges. Additionally, the identification equation (3.3) addresses the issue of the unobserved state S_{t} by conditioning on the previous state-action pair (O_{t-1},A_{t-1}), which forms the basis for estimating the bridge functions and eventually the policy value J(\pi) of the target policy.

3.2 Minimax Estimation

In this section, we discuss how to estimate the bridge functions using the pre-collected dataset \mathcal{D}_{n}=\{O_{0}^{i},W_{0}^{i},A_{0}^{i},R_{0}^{i},O_{1}^{i},\ldots,O_{T}^{i},W_{T}^{i},A_{T}^{i},R_{T}^{i},O_{T+1}^{i},W_{T+1}^{i}\}^{n}_{i=1}, which consists of n i.i.d. copies of the observable trajectory (O_{t},W_{t},A_{t},R_{t})^{T}_{t=1}. Based on the identification equation (3.3), for the target policy \pi and any function f we have

\mathbb{E}\Big[\Big(Q^{\pi}(O_{t},W_{t},A_{t})-R_{t}-\gamma V^{\pi}(O_{t+1},W_{t+1})\Big)f(O_{t-1},A_{t-1},O_{t},A_{t})\Big]=0.

We denote

L_{\pi}(q,f)=\Big[q(O_{t},W_{t},A_{t})-R_{t}-\gamma\int_{a\in\mathcal{A}}\pi(a|O_{t+1})q(O_{t+1},W_{t+1},a)da\Big]f(O_{t-1},A_{t-1},O_{t},A_{t}),

so that \mathbb{E}[L_{\pi}(Q^{\pi},f)]=0 for any f\in\mathcal{F}. This observation directly leads to the minimax estimator

\tilde{Q}^{\pi}=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\max_{f\in\mathcal{F}}\mathbb{E}[L_{\pi}(q,f)],    (3.4)

where the function class \mathcal{Q} models the Q-bridge function and the function class \mathcal{F} models the critic function f. The corresponding finite-sample estimator is then

\hat{Q}^{\pi}=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\max_{f\in\mathcal{F}}\mathbb{E}_{\mathcal{D}}[L_{\pi}(q,f)]+\lambda_{n}h_{1}^{2}(q)-\frac{1}{2}\mathbb{E}_{\mathcal{D}}[f^{2}]-\mu_{n}h^{2}_{2}(f),    (3.5)

where \mathbb{E}_{\mathcal{D}} denotes the sample average over all observed tuples \{O_{t-1},A_{t-1},O_{t},W_{t},A_{t},R_{t},O_{t+1},W_{t+1}\}, h_{1}:\mathcal{Q}\to\mathbb{R}^{+} and h_{2}:\mathcal{F}\to\mathbb{R}^{+} are two regularizers, and \mu_{n},\lambda_{n} are tuning parameters.

We observe that Q^π\hat{Q}^{\pi} acts as a penalized estimator for Q~π\tilde{Q}^{\pi}. The term 12𝔼𝒟[f2]\frac{1}{2}\mathbb{E}_{\mathcal{D}}[f^{2}] serves as an L2L_{2} regularizer for the function class \mathcal{F}, which has been previously explored in the context of reinforcement learning literature (Antos et al., 2008; Hoffman et al., 2011), as well as in the broader domain of minimax estimation problems (Dikkala et al., 2020). The component λnh12(q)\lambda_{n}h_{1}^{2}(q) aims to strike a balance between the model’s fit regarding the estimated Bellman error and the complexity of the estimated QQ-function. Similarly, μnh22(f)\mu_{n}h_{2}^{2}(f) is deployed to mitigate overfitting, especially when the function class \mathcal{F} exhibits complexity. The estimated policy value can subsequently be calculated using the following equation,

\hat{J}(\pi)=\mathbb{E}_{\mathcal{D}}\left[\int_{a\in\mathcal{A}}\pi(a|O_{0})\hat{Q}^{\pi}(O_{0},W_{0},a)da\right].    (3.6)
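To make (3.6) concrete, the following is a minimal sketch (not the authors' implementation) of the plug-in value estimate: the inner integral over actions is approximated by Monte Carlo draws from the target policy, and the names `q_hat` and `pi_sample` are hypothetical stand-ins for an estimated Q-bridge and a sampler from \pi(\cdot\mid o).

```python
import numpy as np

def estimate_policy_value(q_hat, pi_sample, O0, W0, n_draws=200, seed=0):
    """Approximate J_hat(pi) = E_D[ int pi(a|O0) q_hat(O0, W0, a) da ] by Monte Carlo."""
    rng = np.random.default_rng(seed)
    values = []
    for o0, w0 in zip(O0, W0):                      # loop over initial observations in the batch
        a = pi_sample(o0, n_draws, rng)             # draw a ~ pi(. | o0)
        values.append(np.mean(q_hat(o0, w0, a)))    # Monte Carlo estimate of the inner integral
    return float(np.mean(values))                   # outer sample average over the dataset

# Toy usage with illustrative stand-ins for q_hat and pi_sample:
q_toy = lambda o, w, a: -(a - 0.3 * o) ** 2 + 0.1 * w
pi_toy = lambda o, size, rng: np.clip(rng.normal(0.3 * o, 0.2, size), -1.0, 1.0)
print(estimate_policy_value(q_toy, pi_toy, O0=np.array([0.1, -0.4]), W0=np.array([0.0, 0.2])))
```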

The minimax optimization of (3.5) provides a clear direction to estimate the QQ-bridge. In practice, we can use linear basis functions, neural networks, random forests and reproducing kernel Hilbert spaces (RKHSs), etc., to parameterize qq and ff, and get the estimated Q^π\hat{Q}^{\pi}. However, directly solving the minimax optimization problem can be unstable due to its inherent complexities. Furthermore, representing ff within an arbitrary function class \mathcal{F} poses additional intractability. Fortunately, we identify the continuity invariance between the reward function and the optimal critic function f()f^{*}(\cdot) in Theorem 3.2.

Theorem 3.2.

Suppose \mathcal{F}\in L^{2}(C_{0}), and define the optimal critic function as f^{*}=\operatorname*{arg\,max}_{f\in\mathcal{F}}\mathbb{E}[L_{\pi}(q,f)]. Let \mathbb{C}(\mathcal{O}\times\mathcal{A}\times\mathcal{O}\times\mathcal{A}) be the set of all continuous functions on \mathcal{O}\times\mathcal{A}\times\mathcal{O}\times\mathcal{A}. For any (o^{-},a^{-},o,a)\in\mathcal{O}\times\mathcal{A}\times\mathcal{O}\times\mathcal{A} and s\in\mathcal{S}, the optimal critic function f^{*}(O_{t-1},A_{t-1},O_{t},A_{t})\in\mathcal{F}\cap\mathbb{C}(\mathcal{O}\times\mathcal{A}\times\mathcal{O}\times\mathcal{A}) is unique if the reward function R(s,o,a) and the transition kernel \mathbf{P}(s^{+},o^{+}|s,o,a) are continuous over (s,o,a), and the density of the target policy \pi is continuous over \mathcal{O}\times\mathcal{A}.

Theorem 3.2 demonstrates that, provided the reward function and the density of the target policy are continuous, a condition that holds widely in real-world scenarios, the optimal critic f^{*}(\cdot) is itself continuous. Meanwhile, for a positive definite kernel K, a bounded RKHS H_{\text{RKHS}}(C_{0}):=\{f\in H_{\text{RKHS}}:\|f\|^{2}_{\mathcal{H}_{K}}\leq C_{0}\} enjoys a diminishing approximation error to any continuous function class as C_{0}\rightarrow\infty (Bach, 2017). Given this observation and the continuity invariance above, we propose to represent the critic function within a bounded RKHS. We further show in Theorem 3.3 that, under this kernel representation, the original minimax optimization problem in (3.4) can be decoupled into a single-stage minimization problem.

Theorem 3.3.

Suppose \mathcal{F} belongs to a bounded reproducing kernel Hilbert space (RKHS), i.e., \mathcal{F}=\{f(o^{-},a^{-},o,a);f\in\mathcal{H}_{K},\|f\|^{2}_{\mathcal{H}_{K}}\leq C_{0}\}. Then the original minimax problem defined in (3.4) can be decoupled into the following minimization problem

\tilde{Q}^{\pi}=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\mathbb{E}\Big[\Big(q(O_{t},W_{t},A_{t})-R_{t}-\gamma\int_{a\in\mathcal{A}}\pi(a|O_{t+1})q(O_{t+1},W_{t+1},a)da\Big)K(\{O_{t-1},A_{t-1},O_{t},A_{t}\};\{\bar{O}_{t-1},\bar{A}_{t-1},\bar{O}_{t},\bar{A}_{t}\})\Big(q(\bar{O}_{t},\bar{W}_{t},\bar{A}_{t})-\bar{R}_{t}-\gamma\int_{a\in\mathcal{A}}\pi(a|\bar{O}_{t+1})q(\bar{O}_{t+1},\bar{W}_{t+1},a)da\Big)\Big],    (3.7)

where (\bar{O}_{t-1},\bar{A}_{t-1},\bar{O}_{t},\bar{W}_{t},\bar{A}_{t},\bar{O}_{t+1},\bar{W}_{t+1}) is an independent copy of the transition tuple (O_{t-1},A_{t-1},O_{t},W_{t},A_{t},O_{t+1},W_{t+1}).

Theorem 3.3 essentially transforms the minimax problem presented in (3.4) into a single-stage minimization problem through its kernel representation, offering a direct approach to optimization. We note that methods have been developed to directly solve the sample version of (3.7) in both the MDP (Uehara et al., 2020) and POMDP (Shi et al., 2022a) settings. However, in the context of batch reinforcement learning, especially with a continuous action space, the optimization procedure may be unstable when data are limited and no appropriate regularization is applied. To address this, we additionally show that the finite-sample estimator formulated in (3.5) also enjoys a closed-form solution for its inner maximization.

Theorem 3.4.

Suppose \mathcal{F}=\{f(o^{-},a^{-},o,a);f\in\mathcal{H}_{K}\}, and let h_{2}^{2}(f):=\|f\|^{2}_{\mathcal{H}_{K}} denote the squared kernel norm of f. Then the finite-sample minimax problem in (3.5) can be decoupled into the following minimization problem

\hat{Q}^{\pi}:=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\mathcal{L}(q)=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\Phi^{\top}(q)K_{nT}^{1/2}\Big[\frac{1}{2nT\mu_{n}}K_{nT}+I\Big]^{-1}K_{nT}^{1/2}\Phi(q)+\lambda_{n}h_{1}^{2}(q),    (3.8)

where K_{nT}=K\big(\{O^{i}_{t-1},A^{i}_{t-1},O^{i}_{t},A^{i}_{t}\};\{O^{i^{\prime}}_{t^{\prime}-1},A^{i^{\prime}}_{t^{\prime}-1},O^{i^{\prime}}_{t^{\prime}},A^{i^{\prime}}_{t^{\prime}}\}\big) is the sample kernel matrix, \Phi(q)=[\delta^{1}_{1,\pi}(q),\delta^{1}_{2,\pi}(q),\ldots,\delta^{1}_{T,\pi}(q),\delta^{2}_{1,\pi}(q),\ldots,\delta^{n}_{T-1,\pi}(q),\delta^{n}_{T,\pi}(q)]^{\top}\in\mathbb{R}^{nT}, and \delta^{i}_{t,\pi}(q)=q(O^{i}_{t},W^{i}_{t},A^{i}_{t})-R^{i}_{t}-\gamma\int_{a\in\mathcal{A}}\pi(a|O^{i}_{t+1})q(O^{i}_{t+1},W^{i}_{t+1},a)da.

Notice that by including the squared kernel norm h_{2}^{2}(f):=\|f\|^{2}_{\mathcal{H}_{K}} as the penalty term, we drop the boundedness constraint on the function class \mathcal{F} for the finite-sample estimator. Additionally, when the tuning parameters are chosen such that \mu_{n},\lambda_{n},\frac{1}{n\mu_{n}}\to 0, the estimating equation (3.8) converges to the form in (3.7). Thus, (3.8) can be considered a regularized variant of (3.7) and provides a clear path for estimating the Q-bridge function. By parameterizing the Q-bridge function with a parameter \theta using tools such as linear basis functions, RKHSs, or neural networks, we introduce an SGD-based algorithm, outlined in Algorithm 1, to determine the estimated \hat{Q}^{\pi}.

Algorithm 1 Off-Policy Evaluation
1:Input observed transition pairs data {Ot1,At1,Ot,Wt,At,Rt,Ot+1,Wt+1}t=1n\{O_{t-1},A_{t-1},O_{t},W_{t},A_{t},R_{t},O_{t+1},W_{t+1}\}^{n}_{t=1}, target policy π\pi.
2:Initialize the parameters of interests θ=θ(0)\theta=\theta^{(0)}, the mini-batch size n0n_{0}, the learning rate α0\alpha_{0}, the kernel bandwidth bw0\textit{bw}_{0}, and the stopping criterion ε\varepsilon.
3:For iterations j=1j=1 to kk
4:    Randomly sample a mini-batch {Ot1,At1,Ot,Wt,At,Rt,Ot+1,Wt+1}t=1n0\{O_{t-1},A_{t-1},O_{t},W_{t},A_{t},R_{t},O_{t+1},W_{t+1}\}^{n_{0}}_{t=1}.
5:    Decay the learning rate αj=𝒪(j1/2)\alpha_{j}=\mathcal{O}(j^{-1/2}).
6:    Compute stochastic gradients with respect to θ\theta, θ(q)\nabla_{\theta}{\mathcal{L}(q)}, in (3.8).
7:    Update the parameters of interest as θ(j)θ(j1)αjθ(q).\theta^{(j)}\leftarrow\theta^{(j-1)}-\alpha_{j}{\nabla}_{\theta}{\mathcal{L}(q)}.
8:    Stop if θ(j)θ(j1)ε\|\theta^{(j)}-\theta^{(j-1)}\|\leq\varepsilon.
9:Return θ^θ(j)\widehat{\theta}\leftarrow\theta^{(j)}.
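The optimization in Algorithm 1 takes a particularly simple form when the Q-bridge is linearly parameterized. Below is a minimal sketch under assumptions that are not in the paper (scalar observations, a single Monte Carlo draw in place of the action integral, and h_{1}^{2}(q)=\|\theta\|^{2}): with q(o,w,a)=\theta^{\top}\phi(o,w,a), the loss in (3.8) is quadratic in \theta, so the sketch minimizes it in closed form rather than by SGD; the feature map `phi` and the sampler `pi_sample` are hypothetical inputs.

```python
import numpy as np

def gaussian_kernel(X, bandwidth):
    """Gaussian kernel matrix on the rows of X (the (o_{t-1}, a_{t-1}, o_t, a_t) tuples)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def fit_q_bridge(phi, pi_sample, data, gamma=0.9, lam=1e-2, mu=1e-2, bw=1.0, seed=0):
    """data: list of tuples (o_prev, a_prev, o, w, a, r, o_next, w_next), scalars for brevity."""
    rng = np.random.default_rng(seed)
    Z = np.array([[d[0], d[1], d[2], d[4]] for d in data])        # kernel inputs (o-, a-, o, a)
    r = np.array([d[5] for d in data], dtype=float)
    # Design rows: phi(o_t, w_t, a_t) - gamma * phi(o_{t+1}, w_{t+1}, a'), with a' ~ pi(.|o_{t+1});
    # one Monte Carlo draw stands in for the integral over actions.
    D = np.array([phi(d[2], d[3], d[4]) - gamma * phi(d[6], d[7], pi_sample(d[6], rng))
                  for d in data])
    K = gaussian_kernel(Z, bw)
    evals, evecs = np.linalg.eigh(K)                              # K^{1/2} via eigendecomposition
    K_half = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
    m = len(data)                                                 # plays the role of nT in (3.8)
    M = K_half @ np.linalg.solve(K / (2.0 * m * mu) + np.eye(m), K_half)
    # Loss (3.8) becomes (D theta - r)^T M (D theta - r) + lam * ||theta||^2; minimize in closed form.
    theta = np.linalg.solve(D.T @ M @ D + lam * np.eye(D.shape[1]), D.T @ M @ r)
    return theta
```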

3.3 Policy Learning

Section 3.2 outlines the methodology for estimating the Q-bridge. Following this, the value of a specific target policy \pi can be determined by \mathbb{E}[V^{\pi}(O_{0},W_{0})]=\mathbb{E}\left[\int_{a\in\mathcal{A}}\pi(a|O_{0})Q^{\pi}(O_{0},W_{0},a)da\right]. It is important to note that the induced estimator \hat{Q}^{\pi} is unbiased and exhibits a convergence rate of O(n^{-1/2}), as demonstrated in Section 4. Consequently, a natural approach to identifying the optimal policy \pi^{*} is to search for the policy \pi that maximizes the estimated policy value, i.e.

\pi^{*}=\operatorname*{arg\,max}_{\pi\in\Pi}J(\pi)=\operatorname*{arg\,max}_{\pi\in\Pi}\mathbb{E}\left[\int_{a\in\mathcal{A}}\pi(a|O_{0})Q^{\pi}(O_{0},W_{0},a)da\right].    (3.9)

However, this is a challenging problem due to the intractability of optimizing directly over the policy \pi. We therefore parameterize the policy distribution as \pi(\zeta) to capture the learning process leading to the induced optimal policy, and the parameters corresponding to the optimal policy can then be obtained by solving the following optimization problem,

\hat{\pi}^{*}(o,a;\zeta)=\operatorname*{arg\,max}_{\pi\in\Pi}\hat{J}(\pi)=\operatorname*{arg\,max}_{\pi\in\Pi}\mathbb{E}_{\mathcal{D}}\Big[\int_{a\in\mathcal{A}}\pi(a|O_{0};\zeta)\hat{Q}^{\pi}(O_{0},W_{0},a;\theta)da\Big].    (3.10)

As with the Q-bridge, practical parameterization of the distribution parameters, such as the mean and variance of a normal distribution or the parameters (\alpha,\beta) of a beta distribution, can be achieved using linear basis functions, neural networks, or RKHSs.

Algorithm 2 Off-Policy Learning
1:Input observed transition pairs data {Ot1,At1,Ot,Wt,At,Rt,Ot+1,Wt+1}t=1n\{O_{t-1},A_{t-1},O_{t},W_{t},A_{t},R_{t},O_{t+1},W_{t+1}\}^{n}_{t=1}.
2:Initialize the parameters of interests ζ=ζ(0)\zeta=\zeta^{(0)}, θ=θ(0)\theta=\theta^{(0)}, the mini-batch size n0n_{0}, the learning rate α0,β0\alpha_{0},\beta_{0}, the kernel bandwidth bw0\textit{bw}_{0}, and the stopping criterion ε\varepsilon.
3:For iterations j=1j=1 to kk
4:    Implement Algorithm 1 to get θ(j)\theta^{(j)}.
5:    Compute the gradient with respect to ζ\zeta, ζLπ(ζ(j1),θ(j))\nabla_{\zeta}L_{\pi}(\zeta^{(j-1)},\theta^{(j)}) in (3.11).
6:    Decay the learning rate βj=𝒪(j1/2)\beta_{j}=\mathcal{O}(j^{-1/2}).
7:    Update the parameter ζ(j)ζ(j1)+βjζLπ^\zeta^{(j)}\leftarrow\zeta^{(j-1)}+\beta_{j}{\nabla}_{\zeta}\widehat{L_{\pi}}.
8:Return ζ^ζ(j)\widehat{\zeta}\leftarrow\zeta^{(j)}.

To effectively tackle the optimization problem defined in (3.10), we propose an algorithm that iteratively updates the parameters (\zeta,\theta). Notice that for a predetermined \zeta, the corresponding Q-bridge Q^{\pi(\zeta)}(o,w,a;\theta) is fully determined by \zeta, so \theta can be considered a function of \zeta. Consequently, to solve (3.10), we start with an initial policy parameter \zeta^{(0)}, thereby defining the policy \pi(\zeta^{(0)}). Following this initialization, we apply Algorithm 1 to estimate \theta^{(0)} and the associated Q-bridge function \hat{Q}^{\pi(\zeta^{(0)})}(O,W,A;\theta(\zeta^{(0)})). Subsequently, the policy gradient with respect to \zeta in (3.10) is calculated as:

\nabla_{\zeta}L_{\pi}(\theta,\zeta)=\mathbb{E}_{\mathcal{D}}\Big[\nabla_{\zeta}\int_{a\in\mathcal{A}}\pi(a|O_{0};\zeta)Q^{\pi}(O_{0},W_{0},a;\theta(\zeta))da\Big]    (3.11)
=\mathbb{E}_{\mathcal{D}}\Big[\int_{a\in\mathcal{A}}\Big\{\nabla_{\zeta}\pi(a|O_{0};\zeta)Q^{\pi}(O_{0},W_{0},a;\theta(\zeta))+\pi(a|O_{0};\zeta)\nabla_{\zeta}Q^{\pi}(O_{0},W_{0},a;\theta(\zeta))\Big\}da\Big].

Gradient ascent is employed to update \zeta^{(0)}, with the objective of obtaining a policy with a larger estimated value. This procedure is repeated to update the parameters (\zeta,\theta) iteratively until convergence is attained, at which point (\zeta,\theta) are considered to approximate the optimal policy and its corresponding Q-bridge function. This iterative update mechanism is outlined in Algorithm 2. We note that this policy learning mechanism is broadly applicable to identification results based on different causal graphs; this can be achieved by parameterizing the policy class and replacing the policy evaluation step in Algorithm 2 with methods designed for alternative settings. The approach offers significant computational advantages when dealing with continuous action spaces, as traditional policy iteration algorithms typically require solving an additional optimization problem \max_{a\in\mathcal{A}}Q^{\pi}(o,w,a) at each iteration.
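A minimal sketch of the outer loop of Algorithm 2 follows (not the authors' implementation). Here `estimate_J` is a hypothetical black box that, for a given \zeta, runs Algorithm 1 to fit the Q-bridge for \pi(\cdot;\zeta) and returns the plug-in value (3.6); the gradient (3.11) is replaced by a central finite-difference approximation, which also folds in the dependence of \theta on \zeta without differentiating through the inner fit.

```python
import numpy as np

def policy_gradient_ascent(estimate_J, zeta0, n_iters=50, beta0=0.1, fd_eps=1e-2, tol=1e-4):
    """Outer loop of Algorithm 2 with a finite-difference surrogate for the gradient (3.11)."""
    zeta = np.asarray(zeta0, dtype=float).copy()
    for j in range(1, n_iters + 1):
        grad = np.zeros_like(zeta)
        for k in range(zeta.size):
            e = np.zeros_like(zeta)
            e[k] = fd_eps
            # Central difference of the estimated value; each call re-runs Algorithm 1 internally.
            grad[k] = (estimate_J(zeta + e) - estimate_J(zeta - e)) / (2.0 * fd_eps)
        step = (beta0 / np.sqrt(j)) * grad          # decaying learning rate beta_j = O(j^{-1/2})
        zeta = zeta + step                          # gradient ascent on the estimated policy value
        if np.linalg.norm(step) <= tol:
            break
    return zeta

# Toy usage with a quadratic stand-in for the estimated value J_hat(pi_zeta):
J_toy = lambda z: -float(np.sum((z - np.array([0.5, -0.2])) ** 2))
print(policy_gradient_ascent(J_toy, zeta0=np.zeros(2)))
```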

4 Theoretical Results

In this section, we first derive the global rate of convergence for \hat{Q}^{\pi} in (3.5) and the finite-sample error bound for the estimated policy value \hat{J}(\pi) in (3.6). We then extend the results to the estimated optimal policy, deriving the regret bound for \hat{\pi}^{*} defined in (3.10). We note that, although we use the kernel representation to obtain the closed-form solution for the inner maximization problem as demonstrated in (3.7) and (3.8), our theoretical results are based on the general form of (3.5). To simplify notation, we denote the L_{2} norm with respect to the average state-action distribution in the trajectory \mathcal{D} as \|f\|^{2}=\mathbb{E}\big[\frac{1}{T}\sum^{T}_{t=1}f^{2}(\cdot)\big], and the Bellman operator \mathcal{T}_{\pi} with respect to the target policy \pi as \mathcal{T}_{\pi}(s,o,a;q)=\mathbb{E}\big[R_{t+1}+\gamma\int_{a^{\prime}\in\mathcal{A}}\pi(a^{\prime}|O_{t+1})q(O_{t+1},W_{t+1},a^{\prime})da^{\prime}\mid S_{t}=s,O_{t}=o,A_{t}=a\big]. Before presenting our main results, we first state several standard assumptions on the policy class \Pi, the reward function R_{t}, and the function classes \mathcal{Q} and \mathcal{F}.

Assumption 5.

The target policy class \Pi=\{\pi_{\zeta}:\zeta\in Z\subset\mathbb{R}^{p}\} satisfies:
(i) Z\subset\mathbb{R}^{p} is compact, and \text{diam}(Z)=\sup\{\|\zeta_{1}-\zeta_{2}\|_{2}:\zeta_{1},\zeta_{2}\in Z\}.
(ii) There exists L_{Z}>0 such that for all \zeta_{1},\zeta_{2}\in Z and all (o,a)\in\mathcal{O}\times\mathcal{A},

|\pi_{\zeta_{1}}(a|o)-\pi_{\zeta_{2}}(a|o)|\leq L_{Z}\|\zeta_{1}-\zeta_{2}\|_{2}.
Assumption 6.

(i) The reward is uniformly bounded: |R_{t}|\leq R_{\max} for all t\geq 0.
(ii) The function class \mathcal{Q} satisfies \|q\|_{\infty}\leq q_{\max} for all q\in\mathcal{Q}, and Q^{\pi}\in\mathcal{Q} for all \pi\in\Pi.

Assumption 7.

The function class \mathcal{F} satisfies (i) 0\in\mathcal{F}; (ii) \|f\|_{\infty}\leq f_{\max} for all f\in\mathcal{F}; (iii) \kappa=\inf\{\|\tilde{f}\|:\|\mathcal{T}_{\pi}(s,o,a;q)-q(o,w,a)\|=1,q\in\mathcal{Q},\pi\in\Pi\}>0, where \tilde{f}=\operatorname*{arg\,max}_{f}\mathbb{E}[L_{\pi}(q,f)-\frac{1}{2}f^{2}].

Assumption 5 controls the complexity of the policy class and is commonly assumed in policy learning problems (Liao et al., 2022; Wang and Zou, 2022). Assumption 6 is a standard assumption imposing boundedness conditions on the reward and the function class \mathcal{Q} (Antos et al., 2008; Bennett et al., 2021). Assumption 7 similarly ensures the boundedness of the function class \mathcal{F}. Additionally, the value of \kappa measures how well the function class \mathcal{F} approximates the Bellman error for all q\in\mathcal{Q},\pi\in\Pi. This concept has been explored as the well-posedness condition in the off-policy evaluation literature (Chen and Qi, 2022; Miao et al., 2022). A strictly positive \kappa indicates a substantial overlap between the behavior and target policies in terms of the state-action distribution, ensuring the identification of the true policy value J(\pi).

Assumption 8.

(i) The regularization functionals h_{1} and h_{2} are pseudo norms induced by the inner products h_{1}(\cdot,\cdot) and h_{2}(\cdot,\cdot), respectively. There exist constants C_{1} and C_{2} such that h_{2}(\tilde{f}(q))\leq C_{1}+C_{2}h_{1}(q) holds for all q\in\mathcal{Q}.
(ii) Let \mathcal{Q}_{M}=\{c+q:|c|\leq R_{\max},q\in\mathcal{Q},h_{1}(q)\leq M\} and \mathcal{F}_{M}=\{f:f\in\mathcal{F},h_{2}(f)\leq M\}. There exist a constant C_{3} and \alpha\in(0,1) such that for any \varepsilon,M>0,

\max\left\{\log N(\varepsilon,\mathcal{F}_{M},\|\cdot\|_{\infty}),\log N(\varepsilon,\mathcal{Q}_{M},\|\cdot\|_{\infty})\right\}\leq C_{3}\left(\frac{M}{\varepsilon}\right)^{2\alpha}.

Assumption 8 characterizes the complexity of the function classes \mathcal{Q} and \mathcal{F}. The condition that the regularizers be pseudo-norms is satisfied by common function classes such as RKHSs and Sobolev spaces (Geer, 2000; Thomas and Brunskill, 2016). The upper bound on h_{2} is realistic when the transition kernel is sufficiently smooth (Farahmand et al., 2016). We use a common \alpha\in(0,1) for both \mathcal{Q} and \mathcal{F} to simplify the proof.

Theorem 4.1.

Suppose the target policy \pi\in\Pi, and let \hat{Q}^{\pi} be the estimator defined in (3.5). Suppose Assumptions 1-8 hold, and the tuning parameters satisfy \frac{1}{\tau}n^{-\frac{1}{1+\alpha}}\leq\mu_{n}\leq\tau\lambda_{n} for some constant \tau>0. Then the following bound holds with probability at least 1-\delta, where \delta\in(0,1),

\|\mathcal{T}_{\pi}(\cdot,\cdot,\cdot;\hat{Q}^{\pi})-\hat{Q}^{\pi}\|^{2}\lesssim\frac{1}{\kappa^{2}}\lambda_{n}\left[1+h_{1}^{2}(Q^{\pi})\right]\left\{1+\log(1/\delta)\right\},

where the leading constant depends only on (\tau,R_{\max},q_{\max},f_{\max},C_{1},C_{2},C_{3},\alpha,\gamma).

In the supplementary material, we prove that \|\hat{Q}^{\pi}-Q^{\pi}\|^{2}\lesssim\|\mathcal{T}_{\pi}(\cdot,\cdot,\cdot;\hat{Q}^{\pi})-\hat{Q}^{\pi}\|^{2}. Consequently, Theorem 4.1 demonstrates that \hat{Q}^{\pi} is a consistent estimator of Q^{\pi}, provided that \lambda_{n}=o_{P}(1). When the tuning parameters are selected such that \lambda_{n}\asymp\mu_{n} and \lambda_{n}\asymp n^{-1/(1+\alpha)}, we achieve the optimal convergence rate of the Bellman error at \hat{Q}^{\pi}, which is O_{P}(n^{-1/(1+\alpha)}). Before delving into the finite-sample error bound for \hat{J}(\pi), we first introduce some additional notation.

We define the discounted state-action visitation of the target policy \pi as d^{\pi}(o,s,a)=(1-\gamma)\sum^{\infty}_{t=0}\gamma^{t}d^{\pi}_{t}(o,s,a), where d^{\pi}_{t} is the density of the state-action pair at the t^{\text{th}} time point under the target policy \pi, and denote the average density over T decision times under the behavior policy as \bar{d}_{T}(o,s,a). We further define the direction function e^{\pi}(o^{-},a^{-},o,a) by

\mathbb{E}[e^{\pi}(o^{-},a^{-},o,a)|s,o,a]=\frac{d^{\pi}(o,s,a)}{\bar{d}_{T}(o,s,a)}.    (4.1)

The direction function e^{\pi} is used to control the bias |\hat{J}(\pi)-J(\pi)| caused by the penalization of the Q-bridge in (3.5). As demonstrated in Uehara et al. (2020) and Shi et al. (2022a), the direction function e^{\pi} enjoys the following property in our setting:

\mathbb{E}\Big[\frac{1}{T}\sum^{T}_{t=1}e^{\pi}(O_{t-1},A_{t-1},O_{t},A_{t})\Big\{q(O_{t},W_{t},A_{t})-\gamma\int_{a^{\prime}\in\mathcal{A}}\pi(a^{\prime}|O_{t+1})q(O_{t+1},W_{t+1},a^{\prime})da^{\prime}\Big\}\mid S_{t},O_{t},A_{t}\Big]
=-(1-\gamma)\mathbb{E}\Big[\frac{1}{T}\sum^{T}_{t=1}d^{\pi}_{0}(O_{t},S_{t},A_{t})q(O_{t},W_{t},A_{t})\Big].    (4.2)

Next, we define \xi^{\pi} via \mathbb{E}[\xi^{\pi}(O_{t},W_{t},A_{t})|S_{t},O_{t},A_{t}]=\mathbb{E}[\sum^{\infty}_{t=0}\gamma^{t}\frac{d^{\pi}(O_{t},S_{t},A_{t})}{\bar{d}_{T}(O_{t},S_{t},A_{t})}]. Note that \xi^{\pi} has a structure similar to that of the Q-bridge function, where the “reward” at time t is defined as \frac{d^{\pi}(O_{t},S_{t},A_{t})}{\bar{d}_{T}(O_{t},S_{t},A_{t})}. Similar to the Q-bridge, \xi^{\pi} satisfies the following Bellman-like equation, as demonstrated in Theorem 3.1:

\mathbb{E}\Big[\xi^{\pi}(O_{t},W_{t},A_{t})-e^{\pi}(O_{t-1},A_{t-1},O_{t},A_{t})+\gamma\int_{a^{\prime}\in\mathcal{A}}\pi(a^{\prime}|O_{t+1})\xi^{\pi}(O_{t+1},W_{t+1},a^{\prime})da^{\prime}\mid O_{t-1},A_{t-1},O_{t},A_{t}\Big]=0.

We make the following smoothness assumption about eπe^{\pi} and ξπ\xi^{\pi}.

Assumption 9.

The direction function e^{\pi}\in\mathcal{F}, and \xi^{\pi}\in\mathcal{Q} for all \pi\in\Pi.

The condition on the direction function e^{\pi} ensures that e^{\pi} is sufficiently smooth, which will be used to show that the bias of the estimator in (3.5) decreases to 0 sufficiently fast. The assumption on \xi^{\pi} is analogous to assumptions used in the partially linear regression literature (Geer, 2000): the counterpart of \xi^{\pi}\in\mathcal{Q} in the partially linear regression problem Y=g(Z)+X^{\top}\beta+\epsilon is the standard assumption that \mathbb{E}[X|Z=\cdot]\in\mathcal{G}, where \mathcal{G} is the function class used to model g(Z). The last assumption concerns the coverage of the collected dataset.

Assumption 10.

(a) There exist positive constants p_{1,\min},p_{2,\min} and p_{1,\max} such that the visitation densities \bar{d}_{T} and d^{\pi}_{0}(o,s,a) satisfy p_{1,\min}\leq\bar{d}_{T},d^{\pi}_{0}\leq p_{1,\max} for all \pi\in\Pi, and the behavior policy satisfies \pi^{b}\geq p_{2,\min} for every (o,s,a)\in\mathcal{O}\times\mathcal{S}\times\mathcal{A}.
(b) The target policy \pi is absolutely continuous with respect to the behavior policy \pi^{b} for all \pi\in\Pi, and \mathcal{P}^{\pi}(o^{\prime},s^{\prime},w^{\prime},a^{\prime}|o,s,w,a)\leq p_{2,\max} for some positive constant p_{2,\max}, where \mathcal{P}^{\pi} denotes the 1-step visitation density induced by the target policy \pi.

We define p_{\min}=\min\{p_{1,\min},p_{2,\min}\} and p_{\max}=\max\{p_{1,\max},p_{2,\max}\}. Assumption 10 (a) is frequently referred to as the coverage assumption in the RL literature (Precup, 2000; Kallus and Uehara, 2020), which guarantees that the collected offline data sufficiently cover the entire state-action space. Assumption 10 (b) imposes a mild condition on the target policy, which essentially states that the collected batch data are able to identify the true value of the target policy. We now analyze the performance error between the finite-sample estimator and the true policy value.

Theorem 4.2.

Suppose the assumptions in Theorem 4.1 hold. Additionally, suppose Assumptions 9 and 10 hold, and \lambda_{n}=o(n^{-1/(1+\alpha)}). Then, with probability at least 1-\delta, where \delta\in(0,1), the estimation error from (3.5) is upper bounded by

|\hat{J}(\pi)-J(\pi)|\leq\frac{2[(1+\gamma)q_{\max}+R_{\max}]f_{\max}}{1-\gamma}\sqrt{\frac{\log(2/\delta)}{2n}}+o_{P}(n^{-1/2}).

From Theorem 4.2, it is evident that the proposed estimator, as defined in (3.5), achieves a convergence rate of O(n1/2)O(n^{-1/2}). This finite-sample error bound effectively extends the result from Shi et al. (2022a) to encompass a wider range of function classes, while still preserving the optimal convergence rate of O(n1/2)O(n^{-1/2}). Moreover, Theorem 4.2 not only details the convergence rate for a specified target policy π\pi, but also sets the groundwork for extending this outcome to the finite-sample regret bound applicable to policy learning.

Proposition 4.1.

Let \hat{Q}^{\pi} be the estimator defined in (3.5). Suppose Assumptions 1-8 hold, and the tuning parameters satisfy \frac{1}{\tau}n^{-\frac{1}{1+\alpha}}\leq\mu_{n}\leq\tau\lambda_{n} for some constant \tau>0. Then the following bound holds with probability at least 1-\delta, where \delta\in(0,1), for all \pi\in\Pi,

\|\mathcal{T}_{\pi}(\cdot,\cdot,\cdot;\hat{Q}^{\pi})-\hat{Q}^{\pi}\|^{2}\lesssim\frac{p}{\kappa^{2}}\lambda_{n}\left[1+h_{1}^{2}(Q^{\pi})\right]\left\{1+\log(1/\delta)\right\},

where the leading constant depends only on (\tau,R_{\max},q_{\max},f_{\max},C_{1},C_{2},C_{3},\text{diam}(Z),L_{Z},\alpha,\gamma).

Proposition 4.1 extends the error bound in Theorem 4.1 to all πΠ\pi\in\Pi with additional efforts in controlling the complexity of policy class Π\Pi. This demonstrates that Q^π\hat{Q}^{\pi} is a consistent estimator for any π\pi in Π\Pi. Building on this, we analyze the suboptimality of the estimated optimal policy π^\hat{\pi}^{*}.

Theorem 4.3.

Suppose \pi^{*} and \hat{\pi}^{*} are defined in (3.9) and (3.10), respectively, and the assumptions in Theorem 4.1 hold. Then with probability at least 1-\delta, where \delta\in(0,1), we have

J(\pi^{*})-J(\hat{\pi}^{*})\leq C(\delta)\,p^{1/2}n^{-1/2}+o_{P}(p^{1/2}n^{-1/2}),

where C(\delta) is a function of (\tau,R_{\max},q_{\max},f_{\max},C_{1},C_{2},C_{3},\text{diam}(Z),L_{Z},\alpha,\gamma).

Theorem 4.3 is based on the decomposition J(\pi^{*})-J(\hat{\pi}^{*})\leq(J(\pi^{*})-\hat{J}(\pi^{*}))+(\hat{J}(\pi^{*})-\hat{J}(\hat{\pi}^{*}))+(\hat{J}(\hat{\pi}^{*})-J(\hat{\pi}^{*})), where the error bounds for the first and last terms follow from the uniform error bound on |\hat{J}(\pi)-J(\pi)| over all \pi\in\Pi. Recall that p is the number of parameters in the policy and n is the number of i.i.d. trajectories in the collected dataset, so the regret of the estimated optimal policy is O(p^{1/2}n^{-1/2}). The first term represents the regret of an estimated policy as if the Q-bridge were known beforehand, and the second term arises from the estimation error of the Q-bridge function. Theorem 4.3 thus characterizes the regret of the estimated optimal in-class policy without double robustness, achieving the optimal minimax convergence rate of O(n^{-1/2}).

5 Simulation Studies

In this section, we evaluate our proposed method through extensive simulation studies. Since our framework introduces the reward-inducing proxy variable and is designed for both continuous state and action spaces, existing methods for POMDPs are not directly applicable (Shi et al., 2022a; Miao et al., 2022). Therefore, we compare our proposed method to other methods designed for MDP scenarios. For policy evaluation, we compare our method to the MDP version of our proposed approach, as well as an augmented MDP, hereafter referred to as MDPW, with (O_{t},W_{t}) as the new observed state variables. Specifically, we adopt the method of Uehara et al. (2020), treating O_{t} (MDP) and (O_{t},W_{t}) (MDPW) as the state variables, as the two baselines. For policy learning, we benchmark our method against state-of-the-art baselines, including implicit Q-learning (IQL) (Kostrikov et al., 2021), soft actor-critic (SAC) (Haarnoja et al., 2018) and conservative Q-learning (CQL) (Kumar et al., 2020).

For practical implementation, both Q^{\pi}(\cdot;\theta) and \pi(\cdot;\zeta) need to be parameterized. We parameterize Q^{\pi} in an RKHS with a second-degree polynomial kernel. The critic function class \mathcal{F} is specified as a Gaussian RKHS with bandwidths selected by the median heuristic (Fukumizu et al., 2009). The penalty terms h_{1}(q) and h_{2}(f) are taken to be the kernel norms of q and f, respectively. It can then be verified that Assumptions 5-8 are automatically satisfied by these choices.
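As an illustration, the following minimal sketch (not the authors' code) shows the two kernel choices just described: a second-degree polynomial kernel for the Q-bridge class and a Gaussian kernel for the critic class \mathcal{F}, with the bandwidth set by the median heuristic, i.e., the median pairwise distance among the (O_{t-1},A_{t-1},O_{t},A_{t}) inputs.

```python
import numpy as np

def polynomial_kernel(X, Y, degree=2, c=1.0):
    """Second-degree polynomial kernel used to model the Q-bridge class."""
    return (X @ Y.T + c) ** degree

def median_heuristic_bandwidth(X):
    """Median pairwise Euclidean distance among the rows of X."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return np.median(d[np.triu_indices_from(d, k=1)])

def gaussian_kernel(X, Y, bandwidth):
    """Gaussian kernel used for the critic class F."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

# Toy usage on random (o_prev, a_prev, o, a) rows:
Z = np.random.default_rng(0).normal(size=(100, 4))
bw = median_heuristic_bandwidth(Z)
K_critic = gaussian_kernel(Z, Z, bw)          # sample kernel matrix K_nT used in (3.8)
K_q = polynomial_kernel(Z, Z)                 # kernel for the RKHS modelling Q^pi
```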

To determine the values of the tuning parameters (\lambda_{n},\mu_{n}), we employ a k-fold cross-validation approach. For each candidate pair of tuning parameters, we implement Algorithm 2 to search for the optimal policy on the training set and then evaluate the learned optimal policy on the validation set. We select the (\lambda_{n},\mu_{n}) that maximizes the estimated optimal policy value on the validation sets, given by (\hat{\lambda}_{n},\hat{\mu}_{n})=\arg\max_{\lambda_{n},\mu_{n}}\frac{1}{k}\sum_{r=1}^{k}\hat{J}_{\lambda_{n},\mu_{n}}^{(r)}(\hat{\pi}^{*}), where \hat{J}_{\lambda_{n},\mu_{n}}^{(r)}(\hat{\pi}^{*}) denotes the estimated optimal policy value on the r-th validation set. We conduct numerical experiments on a synthetic environment to evaluate the finite-sample performance of our proposed method. Both the state and action spaces of the synthetic environment are continuous, and the discount factor is set to \gamma=0.9 for all experiments.
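A minimal sketch of this selection rule is given below (the helpers are assumptions, not the paper's code): `learn_policy` and `evaluate_policy` are hypothetical stand-ins for Algorithm 2 and for Algorithm 1 combined with the plug-in value (3.6), and each candidate (\lambda_{n},\mu_{n}) is scored by its average estimated optimal policy value across the held-out folds.

```python
import numpy as np
from itertools import product

def select_tuning_parameters(trajectories, lambdas, mus, learn_policy, evaluate_policy, k=5, seed=0):
    """Return the (lambda_n, mu_n) pair maximizing the k-fold validation value."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(trajectories)), k)
    best, best_score = None, -np.inf
    for lam, mu in product(lambdas, mus):
        scores = []
        for r in range(k):
            val_idx = folds[r]
            train_idx = np.concatenate([folds[s] for s in range(k) if s != r])
            # Learn the optimal policy on the training folds (Algorithm 2) ...
            pi_hat = learn_policy([trajectories[i] for i in train_idx], lam, mu)
            # ... then estimate its value on the held-out fold (Algorithm 1 + (3.6)).
            scores.append(evaluate_policy([trajectories[i] for i in val_idx], pi_hat, lam, mu))
        score = float(np.mean(scores))
        if score > best_score:
            best, best_score = (lam, mu), score
    return best
```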

We consider a one-dimensional continuous state space and a continuous action space for the synthetic environment, where the unobserved initial state follows S_{0}\sim\text{Unif}(-0.5,0.5). The observed state O_{t} is generated according to an additive noise model, i.e., O_{t}=S_{t}+\mathcal{N}(0,1). The reward proxy, reward function, and state transition are given by

R_{t}=A_{t}(O_{t}-0.2W_{t}-0.8S_{t})-0.8A_{t}^{2},\qquad W_{t}=S_{t}-0.5O_{t}+N(0,0.3^{2}),\qquad S_{t+1}=0.8O_{t}-0.3A_{t}+N(0,0.1^{2}).

To ensure sufficient coverage of the observed dataset, we set the behavior policy to A_{t}\mid S_{t},O_{t}\sim\text{clip}(N(-S_{t}/3,0.4^{2}),-1,1) when generating the batch dataset.
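The following is a minimal sketch (not the authors' code) that reproduces this data-generating process, yielding batch trajectories of (S_t, O_t, W_t, A_t, R_t) under the clipped Gaussian behavior policy; the function names are illustrative.

```python
import numpy as np

def generate_trajectory(T, rng):
    """One trajectory from the synthetic confounded environment described above."""
    s = rng.uniform(-0.5, 0.5)                                  # S_0 ~ Unif(-0.5, 0.5)
    rows = []
    for _ in range(T + 1):
        o = s + rng.normal(0.0, 1.0)                            # O_t = S_t + N(0, 1)
        w = s - 0.5 * o + rng.normal(0.0, 0.3)                  # W_t = S_t - 0.5 O_t + N(0, 0.3^2)
        a = np.clip(rng.normal(-s / 3.0, 0.4), -1.0, 1.0)       # clipped Gaussian behavior policy
        r = a * (o - 0.2 * w - 0.8 * s) - 0.8 * a ** 2          # reward R_t
        rows.append((s, o, w, a, r))
        s = 0.8 * o - 0.3 * a + rng.normal(0.0, 0.1)            # S_{t+1} transition
    return rows

def generate_batch(n=50, T=25, seed=0):
    """n i.i.d. trajectories of length T (S_t is kept here only for diagnostics)."""
    rng = np.random.default_rng(seed)
    return [generate_trajectory(T, rng) for _ in range(n)]
```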

We first conduct policy evaluation for two different target policies. The first is designed to be similar to the behavior policy, while the second is near-optimal and therefore substantially different from the behavior policy. We then apply Algorithm 1 to estimate the target policy value, and consider different sample sizes for the observed dataset with (n,T)\in\{(25,25),(50,25),(75,50),(100,50)\}. To evaluate model performance, we generate 1,000 independent trajectories under the target policy and define the true policy value as the simulated mean cumulative reward. We compare the MSE of our proposed estimator with its MDP and MDPW counterparts. The simulation results are shown in Figure 2.

Figure 2: Logarithms of relative MSEs of the proposed (blue squares), MDPW (orange circles), and MDP (green triangles) estimators and their associated 95% confidence intervals, based on 50 simulations, with different choices of the sample size and trajectory length (n,T).

Figure 2 illustrates that our proposed method consistently achieves lower bias in policy evaluation across all settings. This improvement stems from leveraging the identification result in Theorem 3.1, which enables the recovery of lost information from unobserved variables by using reward-inducing proxies. In contrast, MDP-based methods typically exhibit higher bias and variance because they treat the observed state as the true state, even when the observed state is at best a noisy representation of the true state. Thus, these methods could easily lead to a biased estimate by ignoring the impact of unmeasured confounders.

For policy learning, we consider both Gaussian and Beta policy classes, and parameterize the distribution parameters using either a linear basis or a neural network. For the Beta policy class with a linear basis, we define \pi^{\text{Linear}}_{\zeta}(a\mid o)\sim 2\,\text{Beta}(\log(1+\exp(\zeta_{1}^{T}o)),\log(1+\exp(\zeta_{2}^{T}o)))-1, where o is the state vector including an intercept, and \zeta_{1},\zeta_{2}\in\mathbb{R}^{d} have the same dimension as o. For the neural network parameterization, we use a one-hidden-layer MLP to approximate the (\alpha,\beta) parameters of the Beta distribution. For the Gaussian policy class, the policy is similarly parameterized as \pi(a\mid o)\sim\tanh\left(\mathcal{N}(\mu(o),\sigma^{2}(o))\right), where both the mean and standard deviation are approximated by either the same linear basis or a neural network, as sketched below. While the neural network parameterization is capable of learning more complex optimal policies, the linear basis provides better interpretability and is often preferable in real-world applications.
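A minimal sketch of the linear-basis Beta policy class (illustrative, not the authors' code): both shape parameters are softplus transforms of linear functions of the state with an intercept, and a draw from Beta(\alpha,\beta) on [0,1] is rescaled to the action range [-1,1].

```python
import numpy as np

def softplus(x):
    """log(1 + exp(x)), keeping the Beta shape parameters positive."""
    return np.log1p(np.exp(x))

def beta_policy_sample(o, zeta1, zeta2, rng):
    """Sample a ~ 2 * Beta(softplus(zeta1^T o_tilde), softplus(zeta2^T o_tilde)) - 1."""
    o_tilde = np.r_[1.0, np.atleast_1d(o)]                # state vector with intercept
    alpha = softplus(zeta1 @ o_tilde)
    beta = softplus(zeta2 @ o_tilde)
    return 2.0 * rng.beta(alpha, beta) - 1.0              # rescale [0, 1] draw to [-1, 1]

# Toy usage:
rng = np.random.default_rng(0)
zeta1, zeta2 = np.array([0.2, 0.5]), np.array([0.1, -0.3])
print(beta_policy_sample(o=0.4, zeta1=zeta1, zeta2=zeta2, rng=rng))
```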

To evaluate the performance of our proposed method, we apply Algorithm 2 and compare its results against the MDP and MDPW variants of the same approach, as well as against state-of-the-art policy learning algorithms, including IQL, SAC, and CQL. To ensure a fair comparison, we use identical policy classes for the MDP and MDPW counterparts of our proposed method. Since IQL, SAC, and CQL are designed for deep reinforcement learning, we restrict their policy class to a Gaussian policy parameterized by a neural network. All data are generated using the synthetic environment and behavior policy described in the policy evaluation section, and Table 1 summarizes the results averaged over 50 simulation runs.

Table 1 indicates that our proposed method consistently outperforms competing approaches across sample sizes while maintaining comparable variance. This advantage is primarily due to the unbiased estimation of the policy value at each iteration, which yields accurate policy gradients and ensures that the policy is updated in the correct direction. In contrast, the MDP and MDPW variants of our method may suffer from biased estimation of the target policy value, causing suboptimal policy updates and therefore poorer performance. Furthermore, methods originally designed for MDP settings (i.e., IQL, SAC, and CQL) directly solve the Bellman optimality equation, which does not hold in the presence of unmeasured confounders; as a result, these methods perform worse when unmeasured confounding is significant. Lastly, our proposed method achieves stable performance even with relatively small sample sizes, which is particularly desirable in real-world applications where data can be limited.

Method/(n, T) (25, 25) (50, 25) (75, 50) (100, 50)
Proposed-NN-Gaussian 1.71 (0.43) 1.80 (0.19) 1.80 (0.16) 1.77 (0.13)
Proposed-NN-Beta 1.62 (0.36) 1.68 (0.24) 1.71 (0.13) 1.72 (0.12)
Proposed-Linear-Gaussian 1.66 (0.13) 1.65 (0.09) 1.68 (0.07) 1.71 (0.07)
Proposed-Linear-Beta 1.53 (0.11) 1.55 (0.10) 1.55 (0.09) 1.58 (0.10)
MDPW-NN-Gaussian 1.44 (0.29) 1.41 (0.23) 1.41 (0.20) 1.33 (0.36)
MDPW-NN-Beta 1.27 (0.38) 1.34 (0.31) 0.93 (0.43) 0.95 (0.33)
MDPW-Linear-Gaussian 1.41 (0.19) 1.42 (0.12) 1.41 (0.10) 1.37 (0.10)
MDPW-Linear-Beta 1.27 (0.14) 1.28 (0.13) 1.02 (0.58) 0.98 (0.51)
MDPW-SAC 0.72 (1.13) 1.04 (0.34) 1.23 (0.17) 1.26 (0.15)
MDPW-CQL 1.41 (0.39) 1.44 (0.39) 1.30 (0.32) 1.29 (0.30)
MDPW-IQL 0.61 (0.33) 0.90 (0.21) 0.94 (0.16) 0.98 (0.13)
MDP-NN-Gaussian 0.55 (0.39) 0.10 (0.28) -0.28 (0.10) -0.18 (0.24)
MDP-NN-Beta -0.21 (0.19) -0.32 (0.10) -0.24 (0.11) -0.22 (0.17)
MDP-Linear-Gaussian 0.86 (0.53) 0.91 (0.48) 1.08 (0.14) 1.04 (0.12)
MDP-Linear-Beta 0.46 (0.19) 0.38 (0.09) 0.11 (0.15) 0.10 (0.21)
MDP-SAC 0.22 (0.64) 0.31 (0.34) 0.28 (0.12) 0.32 (0.10)
MDP-CQL 0.44 (0.35) 0.50 (0.27) 0.41 (0.17) 0.36 (0.18)
MDP-IQL 0.29 (0.35) 0.57 (0.25) 0.66 (0.19) 0.73 (0.14)
Table 1: The mean and standard deviation (in parentheses) of the learned optimal policy value over 50 simulation runs. The true policy values are obtained from Monte Carlo simulation with $T = 100$ and $n = 1000$.

Comparing the neural network and linear policy classes, the neural networks perform better owing to the richer function class, but the linear basis achieves comparable performance with fewer and more interpretable parameters, and may therefore be of greater interest in real-world applications, as demonstrated in Section 6.

6 Application to Pairfam Dataset

We applied our proposed method to the Panel Analysis of Intimate Relationships and Family Dynamics (Pairfam) dataset (Brüderl et al., 2023). Initiated in 2008, Pairfam is a comprehensive longitudinal study designed to explore the evolution of romantic relationships and family structures in Germany. For this study, we used the most recent release of the dataset, which encompasses survey results from 14 waves (Brüderl et al., 2023). Given that participants' true feelings and thoughts regarding their relationships are unobserved and survey responses serve only as proxies for these sentiments, we treated the evolution of relationships as a confounded POMDP. Our objective was to use the Pairfam dataset to estimate the optimal policy for maximizing long-term satisfaction in romantic relationships.

We define the immediate reward, $R_{t}$, as the mean relationship satisfaction reported by the couple during each survey wave. The continuous action variable, $A_{t}$, is the frequency of sharing private feelings and communicating with one's partner, referred to as intimacy frequency. The observed state variables at each time point, $O_{t}$, form a 6-dimensional vector comprising frequency of conflicts, frequency of appreciation, health status, future orientation of the couple, sexual satisfaction, and satisfaction with the couple's friendships. We define the reward proxy, $W_{t}$, as a 4-dimensional vector that includes housing condition, household income, division of labor within the couple, and presence of relatives during the interview. Based on the romantic relationship literature, all variables in $W_{t}$ can be considered independent of $A_{t}$ conditional on the observed states. To ensure data quality, we exclude samples with more than three missing values for any variable and impute the remaining missing data using multiple imputation by chained equations (Van Buuren and Groothuis-Oudshoorn, 2011). Consequently, our refined dataset includes 809 trajectories ($n = 809$), each with 14 time points ($T = 14$).

To conduct policy learning on this dataset, we consider both the neural network and linear policy classes discussed in Section 5. We choose the same network structure as in Section 5, and the linear policy class is defined as $\pi_{\zeta}(a \mid o) \sim \text{Beta}\left(1 + \zeta_{1}\,\text{expit}(\zeta_{2}^{\top} o),\, 1 + \zeta_{3}\,\text{expit}(\zeta_{4}^{\top} o)\right)$, where $\zeta = [\zeta_{1}, \zeta_{2}, \zeta_{3}, \zeta_{4}]^{\top}$ and $o$ is a 7-dimensional vector consisting of an intercept and the 6 observed state variables discussed previously. For numerical stability, we constrain $\zeta_{1}, \zeta_{3} \in (0, 20)$. The forms $1 + \zeta_{i}\,\text{expit}(\zeta_{j}^{\top} o)$ for $i = 1, 3$ and $j = 2, 4$ are chosen for normalization purposes, constraining the Beta distribution parameters to lie in $(1, 21)$ and thereby avoiding extremely large or small values that may result in abnormal policies.
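The sketch below illustrates this constrained parameterization (helper names are ours): each Beta parameter is mapped into $(1, 21)$ through the expit transform before an action is sampled.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def beta_parameters(o, zeta1, zeta2, zeta3, zeta4):
    """Map a 7-dim state (intercept + 6 observed variables) to Beta parameters in (1, 21)."""
    alpha = 1.0 + zeta1 * expit(zeta2 @ o)        # zeta1 in (0, 20)  =>  alpha in (1, 21)
    beta_param = 1.0 + zeta3 * expit(zeta4 @ o)   # zeta3 in (0, 20)  =>  beta in (1, 21)
    return alpha, beta_param

def sample_action(o, zeta1, zeta2, zeta3, zeta4, rng):
    alpha, beta_param = beta_parameters(o, zeta1, zeta2, zeta3, zeta4)
    return rng.beta(alpha, beta_param)            # intimacy-frequency action on the Beta scale
```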

We set the discount factor to $\gamma = 0.9$ and employ our proposed method to identify the optimal policy that maximizes long-term relationship satisfaction. To ensure robustness, each simulation randomly selects 400 trajectories as training data and uses the remaining trajectories as testing data. Following Luckett et al. (2019), we use a Monte Carlo approximation of the policy value to evaluate the performance of each method. Specifically, we train the policy on the training data and then apply our proposed policy evaluation method (Algorithm 1) to the testing set to evaluate the learned optimal policy, obtaining an estimate of the learned policy's value. As shown in Figure 2, our proposed OPE method exhibits the smallest bias in the presence of unmeasured confounders compared to alternative methods.
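Schematically, this evaluation protocol can be written as follows; here `learn_policy` and `evaluate_policy` are placeholders standing in for Algorithm 2 and Algorithm 1, respectively, not implementations provided in this paper.

```python
import numpy as np

def split_train_evaluate(trajectories, learn_policy, evaluate_policy, n_train=400, seed=0):
    """Randomly split trajectories, learn a policy on the training split, and run OPE on the test split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(trajectories))
    train = [trajectories[i] for i in idx[:n_train]]
    test = [trajectories[i] for i in idx[n_train:]]
    policy = learn_policy(train, gamma=0.9)           # Algorithm 2 (placeholder)
    return evaluate_policy(policy, test, gamma=0.9)   # Algorithm 1 (placeholder)
```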

Figure 3 presents boxplots of the estimated policy values on both the training and testing sets for each method based on 100 simulation runs, with the baseline representing the observed discounted return. As shown in Figure 3, the proposed method achieves the best performance in terms of policy value improvement by taking unobserved confounders into account, which is consistent with the results presented in Section 5. In contrast, the MDP-based methods tend to overestimate the policy value, which may be undesirable in safety-critical scenarios. Additionally, the linear policy class performs similarly to the neural network policy class. Since the linear basis is more interpretable in this setting, we further examine the learned optimal policy obtained with the linear policy class.

Intercept Conflicts Appreciation Health Commitment Sex Friendship
$\zeta_{2}$ -0.0626 -1.1602 1.4199 1.3774 1.1408 0.9960 -0.2873
$\zeta_{4}$ -0.8386 1.4781 0.1855 -0.0315 -0.7316 0.3191 -0.6859
Table 2: Estimated Coefficients of the Optimal Policy.
Figure 3: Boxplots of discounted reward improvements over 100 simulation runs with $\gamma = 0.9$. The first two boxes represent the proposed method using the neural network and linear policy classes. The remaining boxes show the MDP and MDPW versions of the competing methods.
Figure 4: The estimated optimal policy distribution under typical states. The corresponding states are defined in Table 3.

For the linear policy class, the estimated values of $\zeta = [\zeta_{1}, \zeta_{2}, \zeta_{3}, \zeta_{4}]^{\top}$ are shown in Table 2, with $\zeta_{1} = 12.3463$ and $\zeta_{3} = 7.4619$. To better illustrate the learned policy, we examine it under four different observed states. In romantic relationship studies, researchers have classified couples into categories based on their commitment levels and couple dynamics (e.g., negative interactions) (Beckmeyer and Jamison, 2021). We therefore chose commitment and conflict frequency as proxies for couple dynamics and divided all samples in the dataset into four categories according to whether their commitment and conflict levels exceed the sample means, as shown in Table 3. Note that all variables have been normalized to have mean 0 and variance 1. The values of the observed state variables other than conflicts and commitment are set to the sample means of those variables within each category. Figure 4 shows the induced optimal policy for each observed state; a numerical illustration follows Table 3.

Scenarios Conflicts Appreciation Health Commitment Sex Friendship
Scenario 1 0.67 -0.02 0.03 0.51 0.01 0.02
Scenario 2 -0.65 -0.21 -0.12 -0.85 -0.19 -0.14
Scenario 3 -0.78 0.43 0.09 0.52 0.38 0.19
Scenario 4 0.93 -0.66 -0.14 -1.16 -0.61 -0.30
Table 3: Observed state variables for each scenario.
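As a numerical illustration of how the policies in Figure 4 arise (our own computation from the estimates in Tables 2 and 3, with the intercept set to 1), the learned Beta parameters for Scenario 3 can be recovered as follows.

```python
import numpy as np

expit = lambda x: 1.0 / (1.0 + np.exp(-x))

zeta1, zeta3 = 12.3463, 7.4619
zeta2 = np.array([-0.0626, -1.1602, 1.4199, 1.3774, 1.1408, 0.9960, -0.2873])
zeta4 = np.array([-0.8386, 1.4781, 0.1855, -0.0315, -0.7316, 0.3191, -0.6859])

# Scenario 3 state: intercept, then (conflicts, appreciation, health, commitment, sex, friendship).
o = np.array([1.0, -0.78, 0.43, 0.09, 0.52, 0.38, 0.19])

alpha = 1.0 + zeta1 * expit(zeta2 @ o)       # roughly 12.4
beta_param = 1.0 + zeta3 * expit(zeta4 @ o)  # roughly 1.7
print(alpha, beta_param, alpha / (alpha + beta_param))  # Beta mean near 0.9: frequent sharing favored
```

The resulting Beta distribution places most of its mass near the upper end of the action range, consistent with the recommendation of more frequent sharing for low-conflict, high-commitment couples discussed below.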

Figure 4 suggests that for couples experiencing frequent conflicts and low commitment (Scenario 4), sharing feelings and thoughts less often is recommended to improve satisfaction. In contrast, more frequent sharing is preferred for couples with low conflict and high commitment (Scenario 3). Additionally, moderate to high levels of sharing are preferred for couples with moderately high commitment and conflict levels (Scenario 1) and for couples with moderately low commitment and conflict levels (Scenario 2). These results are consistent with the literature on the effects of sharing feelings and thoughts in romantic relationships. Although relationship researchers generally acknowledge the positive effects of sharing feelings (e.g., openness) (Ogolsky and Bowers, 2013), some studies have found that withholding feelings can benefit relationship quality for individuals with high levels of social anxiety (Kashdan et al., 2007) and for couples with a more communal orientation (Le and Impett, 2013). Additionally, partner appreciation has been found to foster positive interactive dynamics through validation, thereby improving relationship satisfaction (Algoe, 2012).

7 Discussion

In this paper, we proposed a novel framework for offline policy evaluation and policy learning with continuous action spaces in the presence of unmeasured confounders. We extended the proximal causal inference framework (Cui and Tchetgen Tchetgen, 2021) to the infinite-horizon setting to identify the policy value of a fixed target policy, and developed the corresponding minimax estimator. Building on this estimator, we further developed a policy-gradient-based method to search for the in-class optimal policy, and provided a PAC bound for the proposed algorithm to analyze its sample complexity.

Several improvements and extensions are worth exploring in the future. First, we assumed that the batch dataset has no coverage issue. This assumption may not hold in real-world scenarios, such as medical applications where sample sizes are generally small and data are expensive to obtain; properly addressing the data coverage issue under the POMDP setting for both policy evaluation (Zhang and Jiang, 2024) and policy learning would therefore be an interesting topic. Second, as demonstrated in the real-data application, different observed state variables play different roles in shaping the policy. However, variable selection in the offline RL setting remains challenging, as no ground truth is available for performance comparison; systematically studying this problem would greatly improve the generalizability of RL techniques. Finally, the proposed algorithm requires relatively large computation and memory resources because a new policy must be evaluated at each iteration, so developing a more efficient algorithm is desirable. One possible approach is to directly identify the policy gradient under the POMDP (Hong et al., 2023); however, such identification with continuous actions remains a challenge.

References

  • Algoe (2012) Algoe, S. B. (2012), “Find, remind, and bind: The functions of gratitude in everyday relationships,” Social and personality psychology compass, 6, 455–469.
  • Antos et al. (2008) Antos, A., Szepesvári, C., and Munos, R. (2008), “Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path,” Machine Learning, 71, 89–129.
  • Bach (2017) Bach, F. (2017), “Breaking the curse of dimensionality with convex neural networks,” The Journal of Machine Learning Research, 18, 629–681.
  • Beckmeyer and Jamison (2021) Beckmeyer, J. J. and Jamison, T. B. (2021), “Identifying a typology of emerging adult romantic relationships: Implications for relationship education,” Family Relations, 70, 305–318.
  • Bennett and Kallus (2023) Bennett, A. and Kallus, N. (2023), “Proximal reinforcement learning: Efficient off-policy evaluation in partially observed markov decision processes,” Operations Research.
  • Bennett et al. (2021) Bennett, A., Kallus, N., Li, L., and Mousavi, A. (2021), “Off-policy evaluation in infinite-horizon reinforcement learning with latent confounders,” in International Conference on Artificial Intelligence and Statistics, PMLR, pp. 1999–2007.
  • Bonvini and Kennedy (2022) Bonvini, M. and Kennedy, E. H. (2022), “Sensitivity analysis via the proportion of unmeasured confounding,” Journal of the American Statistical Association, 117, 1540–1550.
  • Brüderl et al. (2023) Brüderl, J., Schmiedeberg, C., Castiglioni, L., Arránz Becker, O., Buhr, P., Fuß, D., Ludwig, V., Schröder, J., and Schumann, N. (2023), “The German Family Panel: Study Design and Cumulated Field Report (Waves 1 to 14),” .
  • Brunke et al. (2022) Brunke, L., Greeff, M., Hall, A. W., Yuan, Z., Zhou, S., Panerati, J., and Schoellig, A. P. (2022), “Safe learning in robotics: From learning-based control to safe reinforcement learning,” Annual Review of Control, Robotics, and Autonomous Systems, 5, 411–444.
  • Bruns-Smith (2021) Bruns-Smith, D. A. (2021), “Model-free and model-based policy evaluation when causality is uncertain,” in International Conference on Machine Learning, PMLR, pp. 1116–1126.
  • Cai et al. (2021) Cai, H., Shi, C., Song, R., and Lu, W. (2021), “Jump interval-learning for individualized decision making,” arXiv preprint arXiv:2111.08885.
  • Cai et al. (2022) Cai, Q., Yang, Z., and Wang, Z. (2022), “Reinforcement learning from partial observation: Linear function approximation with provable sample efficiency,” in International Conference on Machine Learning, PMLR, pp. 2485–2522.
  • Chen and Qi (2022) Chen, X. and Qi, Z. (2022), “On well-posedness and minimax optimal rates of nonparametric q-function estimation in off-policy evaluation,” in International Conference on Machine Learning, PMLR, pp. 3558–3582.
  • Chou et al. (2017) Chou, P.-W., Maturana, D., and Scherer, S. (2017), “Improving stochastic policy gradients in continuous control with deep reinforcement learning using the beta distribution,” in International conference on machine learning, PMLR, pp. 834–843.
  • Cui et al. (2023) Cui, Y., Pu, H., Shi, X., Miao, W., and Tchetgen Tchetgen, E. (2023), “Semiparametric proximal causal inference,” Journal of the American Statistical Association, 1–12.
  • Cui and Tchetgen Tchetgen (2021) Cui, Y. and Tchetgen Tchetgen, E. (2021), “A semiparametric instrumental variable approach to optimal treatment regimes under endogeneity,” Journal of the American Statistical Association, 116, 162–173.
  • Dikkala et al. (2020) Dikkala, N., Lewis, G., Mackey, L., and Syrgkanis, V. (2020), “Minimax estimation of conditional moment models,” Advances in Neural Information Processing Systems, 33, 12248–12262.
  • D’Haultfoeuille (2011) D’Haultfoeuille, X. (2011), “On the completeness condition in nonparametric instrumental problems,” Econometric Theory, 27, 460–471.
  • Farahmand et al. (2016) Farahmand, A.-m., Ghavamzadeh, M., Szepesvári, C., and Mannor, S. (2016), “Regularized policy iteration with nonparametric function spaces,” Journal of Machine Learning Research, 17, 1–66.
  • Fröhlich et al. (2018) Fröhlich, H., Balling, R., Beerenwinkel, N., Kohlbacher, O., Kumar, S., Lengauer, T., Maathuis, M. H., Moreau, Y., Murphy, S. A., Przytycka, T. M., et al. (2018), “From hype to reality: data science enabling personalized medicine,” BMC medicine, 16, 1–15.
  • Fu et al. (2022) Fu, Z., Qi, Z., Wang, Z., Yang, Z., Xu, Y., and Kosorok, M. R. (2022), “Offline reinforcement learning with instrumental variables in confounded markov decision processes,” arXiv preprint arXiv:2209.08666.
  • Fujimoto and Gu (2021) Fujimoto, S. and Gu, S. S. (2021), “A minimalist approach to offline reinforcement learning,” Advances in neural information processing systems, 34, 20132–20145.
  • Fukumizu et al. (2009) Fukumizu, K., Gretton, A., Lanckriet, G., Schölkopf, B., and Sriperumbudur, B. K. (2009), “Kernel choice and classifiability for RKHS embeddings of probability distributions,” Advances in neural information processing systems, 22.
  • Geer (2000) Geer, S. A. (2000), Empirical Processes in M-estimation, vol. 6, Cambridge university press.
  • Guo et al. (2022) Guo, H., Cai, Q., Zhang, Y., Yang, Z., and Wang, Z. (2022), “Provably efficient offline reinforcement learning for partially observable markov decision processes,” in International Conference on Machine Learning, PMLR, pp. 8016–8038.
  • Haarnoja et al. (2018) Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., Kumar, V., Zhu, H., Gupta, A., Abbeel, P., et al. (2018), “Soft actor-critic algorithms and applications,” arXiv preprint arXiv:1812.05905.
  • Hao et al. (2021) Hao, B., Ji, X., Duan, Y., Lu, H., Szepesvari, C., and Wang, M. (2021), “Bootstrapping fitted q-evaluation for off-policy inference,” in International Conference on Machine Learning, PMLR, pp. 4074–4084.
  • Hoffman et al. (2011) Hoffman, M. W., Lazaric, A., Ghavamzadeh, M., and Munos, R. (2011), “Regularized least squares temporal difference learning with nested L2 and L1 penalization,” in European Workshop on Reinforcement Learning, Springer, pp. 102–114.
  • Hong et al. (2023) Hong, M., Qi, Z., and Xu, Y. (2023), “A Policy Gradient Method for Confounded POMDPs,” arXiv preprint arXiv:2305.17083.
  • Jin et al. (2020) Jin, C., Kakade, S., Krishnamurthy, A., and Liu, Q. (2020), “Sample-efficient reinforcement learning of undercomplete pomdps,” Advances in Neural Information Processing Systems, 33, 18530–18539.
  • Kallus et al. (2022) Kallus, N., Mao, X., Wang, K., and Zhou, Z. (2022), “Doubly robust distributionally robust off-policy evaluation and learning,” in International Conference on Machine Learning, PMLR, pp. 10598–10632.
  • Kallus and Uehara (2020) Kallus, N. and Uehara, M. (2020), “Statistically efficient off-policy policy gradients,” in International Conference on Machine Learning, PMLR, pp. 5089–5100.
  • Kallus and Zhou (2020) Kallus, N. and Zhou, A. (2020), “Confounding-robust policy evaluation in infinite-horizon reinforcement learning,” Advances in neural information processing systems, 33, 22293–22304.
  • Kallus and Zhou (2021) — (2021), “Minimax-Optimal Policy Learning Under Unobserved Confounding,” Management Science, 67, 2870–2890.
  • Kashdan et al. (2007) Kashdan, T. B., Volkmann, J. R., Breen, W. E., and Han, S. (2007), “Social anxiety and romantic relationships: The costs and benefits of negative emotion expression are context-dependent,” Journal of Anxiety Disorders, 21, 475–492.
  • Kostrikov et al. (2021) Kostrikov, I., Nair, A., and Levine, S. (2021), “Offline reinforcement learning with implicit q-learning,” arXiv preprint arXiv:2110.06169.
  • Kress et al. (1989) Kress, R., Maz’ya, V., and Kozlov, V. (1989), Linear integral equations, vol. 82, Springer.
  • Kumar et al. (2020) Kumar, A., Zhou, A., Tucker, G., and Levine, S. (2020), “Conservative q-learning for offline reinforcement learning,” Advances in Neural Information Processing Systems, 33, 1179–1191.
  • Le and Impett (2013) Le, B. M. and Impett, E. A. (2013), “When Holding Back Helps: Suppressing Negative Emotions During Sacrifice Feels Authentic and Is Beneficial for Highly Interdependent People,” Psychological Science, 24, 1809–1815.
  • Le et al. (2019) Le, H., Voloshin, C., and Yue, Y. (2019), “Batch policy learning under constraints,” in International Conference on Machine Learning, PMLR, pp. 3703–3712.
  • Lee et al. (2018) Lee, K., Kim, S.-A., Choi, J., and Lee, S.-W. (2018), “Deep reinforcement learning in continuous action spaces: a case study in the game of simulated curling,” in International conference on machine learning, PMLR, pp. 2937–2946.
  • Levine et al. (2020) Levine, S., Kumar, A., Tucker, G., and Fu, J. (2020), “Offline reinforcement learning: Tutorial, review, and perspectives on open problems,” arXiv preprint arXiv:2005.01643.
  • Li et al. (2021) Li, J., Luo, Y., and Zhang, X. (2021), “Causal reinforcement learning: An instrumental variable approach,” arXiv preprint arXiv:2103.04021.
  • Li et al. (2023) Li, Y., Zhou, W., and Zhu, R. (2023), “Quasi-optimal Reinforcement Learning with Continuous Actions,” in The Eleventh International Conference on Learning Representations.
  • Liao et al. (2021) Liao, P., Klasnja, P., and Murphy, S. (2021), “Off-policy estimation of long-term average outcomes with applications to mobile health,” Journal of the American Statistical Association, 116, 382–391.
  • Liao et al. (2022) Liao, P., Qi, Z., Wan, R., Klasnja, P., and Murphy, S. A. (2022), “Batch policy learning in average reward markov decision processes,” Annals of statistics, 50, 3364.
  • Lillicrap et al. (2015) Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2015), “Continuous control with deep reinforcement learning,” arXiv preprint arXiv:1509.02971.
  • Liu et al. (2018) Liu, Q., Li, L., Tang, Z., and Zhou, D. (2018), “Breaking the curse of horizon: Infinite-horizon off-policy estimation,” Advances in neural information processing systems, 31.
  • Lu et al. (2022) Lu, M., Min, Y., Wang, Z., and Yang, Z. (2022), “Pessimism in the face of confounders: Provably efficient offline reinforcement learning in partially observable markov decision processes,” arXiv preprint arXiv:2205.13589.
  • Luckett et al. (2019) Luckett, D. J., Laber, E. B., Kahkoska, A. R., Maahs, D. M., Mayer-Davis, E., and Kosorok, M. R. (2019), “Estimating dynamic treatment regimes in mobile health using v-learning,” Journal of the American Statistical Association.
  • Miao et al. (2022) Miao, R., Qi, Z., and Zhang, X. (2022), “Off-policy evaluation for episodic partially observable markov decision processes under non-parametric models,” Advances in Neural Information Processing Systems, 35, 593–606.
  • Miao et al. (2018) Miao, W., Geng, Z., and Tchetgen Tchetgen, E. J. (2018), “Identifying causal effects with proxy variables of an unmeasured confounder,” Biometrika, 105, 987–993.
  • Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015), “Human-level control through deep reinforcement learning,” nature, 518, 529–533.
  • Newey and Powell (2003) Newey, W. K. and Powell, J. L. (2003), “Instrumental variable estimation of nonparametric models,” Econometrica, 71, 1565–1578.
  • Ogolsky and Bowers (2013) Ogolsky, B. G. and Bowers, J. R. (2013), “A meta-analytic review of relationship maintenance and its correlates,” Journal of Social and Personal Relationships, 30, 343–367.
  • Precup (2000) Precup, D. (2000), “Eligibility traces for off-policy policy evaluation,” Computer Science Department Faculty Publication Series, 80.
  • Qi et al. (2023) Qi, Z., Miao, R., and Zhang, X. (2023), “Proximal learning for individualized treatment regimes under unmeasured confounding,” Journal of the American Statistical Association, 1–14.
  • Shi et al. (2022a) Shi, C., Uehara, M., Huang, J., and Jiang, N. (2022a), “A minimax learning approach to off-policy evaluation in confounded partially observable markov decision processes,” in International Conference on Machine Learning, PMLR, pp. 20057–20094.
  • Shi et al. (2022b) Shi, C., Zhang, S., Lu, W., and Song, R. (2022b), “Statistical inference of the value function for reinforcement learning in infinite-horizon settings,” Journal of the Royal Statistical Society Series B: Statistical Methodology, 84, 765–793.
  • Shi et al. (2022c) Shi, C., Zhu, J., Ye, S., Luo, S., Zhu, H., and Song, R. (2022c), “Off-policy confidence interval estimation with confounded markov decision process,” Journal of the American Statistical Association, 1–12.
  • Sutton and Barto (2018) Sutton, R. S. and Barto, A. G. (2018), Reinforcement learning: An introduction, MIT press.
  • Tchetgen Tchetgen et al. (2020) Tchetgen Tchetgen, E., Ying, A., Cui, Y., Shi, X., and Miao, W. (2020), “An introduction to proximal causal learning,” arXiv preprint arXiv:2009.10982.
  • Thomas and Brunskill (2016) Thomas, P. and Brunskill, E. (2016), “Data-efficient off-policy policy evaluation for reinforcement learning,” in International Conference on Machine Learning, PMLR, pp. 2139–2148.
  • Uehara et al. (2020) Uehara, M., Huang, J., and Jiang, N. (2020), “Minimax weight and q-function learning for off-policy evaluation,” in International Conference on Machine Learning, PMLR, pp. 9659–9668.
  • Uehara et al. (2024) Uehara, M., Kiyohara, H., Bennett, A., Chernozhukov, V., Jiang, N., Kallus, N., Shi, C., and Sun, W. (2024), “Future-dependent value-based off-policy evaluation in pomdps,” Advances in Neural Information Processing Systems, 36.
  • Van Buuren and Groothuis-Oudshoorn (2011) Van Buuren, S. and Groothuis-Oudshoorn, K. (2011), “mice: Multivariate imputation by chained equations in R,” Journal of statistical software, 45, 1–67.
  • Wang and Zou (2022) Wang, Y. and Zou, S. (2022), “Policy gradient method for robust reinforcement learning,” in International conference on machine learning, PMLR, pp. 23484–23526.
  • Xie et al. (2019) Xie, T., Ma, Y., and Wang, Y.-X. (2019), “Towards optimal off-policy evaluation for reinforcement learning with marginalized importance sampling,” Advances in neural information processing systems, 32.
  • Xu et al. (2023) Xu, Y., Zhu, J., Shi, C., Luo, S., and Song, R. (2023), “An instrumental variable approach to confounded off-policy evaluation,” in International Conference on Machine Learning, PMLR, pp. 38848–38880.
  • Zhang and Bareinboim (2016) Zhang, J. and Bareinboim, E. (2016), “Markov decision processes with unobserved confounders: A causal approach,” Purdue AI Lab, West Lafayette, IN, USA, Tech. Rep.
  • Zhang et al. (2020) Zhang, J., Kumor, D., and Bareinboim, E. (2020), “Causal imitation learning with unobserved confounders,” Advances in neural information processing systems, 33, 12263–12274.
  • Zhang and Jiang (2024) Zhang, Y. and Jiang, N. (2024), “On the Curses of Future and History in Future-dependent Value Functions for Off-policy Evaluation,” arXiv preprint arXiv:2402.14703.
  • Zhou (2024) Zhou, W. (2024), “Bi-Level Offline Policy Optimization with Limited Exploration,” Advances in Neural Information Processing Systems, 36.
  • Zhou et al. (2024a) Zhou, W., Li, Y., and Zhu, R. (2024a), “Policy learning for individualized treatment regimes on infinite time horizon,” in Statistics in Precision Health: Theory, Methods and Applications, Springer, pp. 65–100.
  • Zhou et al. (2023) Zhou, W., Li, Y., Zhu, R., and Qu, A. (2023), “Distributional shift-aware off-policy interval estimation: A unified error quantification framework,” arXiv preprint arXiv:2309.13278.
  • Zhou et al. (2024b) Zhou, W., Zhu, R., and Qu, A. (2024b), “Estimating optimal infinite horizon dynamic treatment regimes via pt-learning,” Journal of the American Statistical Association, 119, 625–638.
  • Zhu et al. (2020) Zhu, M., Wang, Y., Pu, Z., Hu, J., Wang, X., and Ke, R. (2020), “Safe, efficient, and comfortable velocity control based on reinforcement learning for autonomous driving,” Transportation Research Part C: Emerging Technologies, 117, 102662.