
Off-Policy Meta-Reinforcement Learning Based on Feature Embedding Spaces

Takahisa Imagawa, Takuya Hiraoka, Yoshimasa Tsuruoka
Abstract

Meta-reinforcement learning (RL) addresses the problem of sample inefficiency in deep RL by using experience obtained in past tasks for a new task to be solved. However, most meta-RL methods require partially or fully on-policy data, i.e., they cannot reuse the data collected by past policies, which hinders the improvement of sample efficiency. To alleviate this problem, we propose a novel off-policy meta-RL method, embedding learning and evaluation of uncertainty (ELUE). An ELUE agent is characterized by the learning of a feature embedding space shared among tasks. It learns a belief model over the embedding space and a belief-conditional policy and Q-function. Then, for a new task, it collects data with the pretrained policy and updates its belief based on the belief model. Thanks to the belief update, the performance can be improved with a small amount of data. In addition, it updates the parameters of the neural networks to adjust the pretrained relationships when enough data are available. We demonstrate that ELUE outperforms state-of-the-art meta-RL methods through experiments on meta-RL benchmarks.

Introduction

Deep reinforcement learning (DRL) has shown superhuman performance in several domains, such as computer games and board games (Silver et al. 2017; Berner et al. 2019). However, conventional DRL considers only learning for a single task and does not reuse experience from past tasks. This is a major reason for the sample inefficiency in conventional DRL.

To overcome this problem, the concept of meta-learning has been proposed (Schmidhuber, Zhao, and Wiering 1996). Meta-learning is a class of methods for learning how to efficiently learn with a small amount of data on a new task by utilizing previous experience. An instance of meta-learning consists of two phases: meta-training and meta-testing. In meta-training, the agent prepares itself for learning in meta-testing by using some training tasks. In meta-testing, the agent is evaluated on the basis of its performance on the task to be solved. Although meta-learning aims to improve sample efficiency in meta-testing, the sample efficiency in meta-training is also important from the perspective of computational cost (Mendonca et al. 2019; Rakelly et al. 2019).

One class of meta-learning methods is gradient-based meta-learning, which includes MAML (Finn, Abbeel, and Levine 2017) and Reptile (Nichol, Achiam, and Schulman 2018). These methods learn to reduce the loss of a model whose parameters (e.g., weights of neural networks) are updated over several steps. Finn, Abbeel, and Levine (2017) have shown that the sample efficiency of MAML in meta-testing is better than that of naive pretraining methods. However, most reinforcement learning (RL) applications of these methods work with on-policy methods (Mendonca et al. 2019), which are less sample efficient than off-policy methods because on-policy methods cannot reuse the data collected by old policies. In addition, the performance of the learned initial parameters in meta-testing can be very poor in some cases until the parameters are updated. For example, consider tasks in which an agent aims to reach a goal as fast as possible and the tasks differ only in their goal positions. If two tasks have goals in opposite directions from the agent's initial position, the well-trained policies for the two tasks require contradicting actions, so the performance is very poor before the parameters are updated.

Another class of meta-learning methods is context-based meta-learning. PEARL (Rakelly et al. 2019) is a state-of-the-art meta-RL method of this type. PEARL learns how to infer task information in meta-training and uses this inference in meta-testing. Because of the inference, when the tasks in meta-training and meta-testing are similar, PEARL generally needs less data to improve its performance in meta-testing than methods that update the parameters of neural networks. In addition, the policy and Q-function in PEARL are trained in an off-policy manner, which further improves its sample efficiency. Indeed, PEARL was shown to be more sample efficient than MAML (Rakelly et al. 2019).

In this paper, we extend the idea of PEARL and propose a novel meta-RL method, embedding learning and evaluation of uncertainty (ELUE), which has the following features:

Off-policy embedding training

In PEARL, policy training is based on an off-policy method, but the training for task embedding, which is used for calculating distributions over tasks, depends on what the current policy is, i.e., it is on-policy. By removing the dependency on the current policy from the task embedding learning, we propose a fully off-policy method. Thanks to policy-independent embedding, the training objective is expected to be stable, and the data collected by past policies can be reused.

Policy and Q-function conditioned by beliefs over tasks

PEARL introduces distributions over tasks, i.e., beliefs, but both its policy and Q-function depend on a task variable sampled from the distribution. After the task variable is sampled, the variable contains no information on the uncertainty over the tasks. By contrast, ELUE conditions the policy and Q-function on the belief over tasks, which can be used to evaluate uncertainty. The use of such a policy and Q-function has been shown to lead to more precise exploration (Humplik et al. 2019; Zintgraf et al. 2020). In addition, for learning these functions, we apply the information bottleneck objective (Tishby, Pereira, and Bialek 1999) for generalization, instead of naively maximizing the cumulative reward.

Combination of belief and parameter update

PEARL does not update the parameters of the policy and Q-function in meta-testing. When there are large differences between the tasks in meta-training and in meta-testing, the pretrained belief model, policy, and Q-function may no longer be useful, and PEARL can fail to improve the performance. To alleviate this drawback, our method performs not only inference but also parameter update.

We compare the sample efficiency of state-of-the-art meta-RL methods and ELUE through experiments in the Meta-World (Yu et al. 2019) environment and show that ELUE performs better than those methods.

Preliminaries

Markov decision processes (MDPs) are models for reinforcement learning (RL) tasks. An MDP is defined as a tuple $(\mathcal{S},\mathcal{A},T,R,\rho)$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, respectively, $R:\mathcal{S}\times\mathcal{A}\times\mathbb{R}\rightarrow[0,1]$ is a reward function that determines the probability of the reward amount, $T:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]$ is a transition function that determines the probability of the next state, and $\rho$ is the initial state distribution. Let us denote the policy as $\pi$, which determines the probability of choosing an action at each state. The objective of RL is to maximize the expected cumulative reward by changing the policy.

We assume that each task in meta-training and meta-testing can be represented by an MDP, where $\mathcal{S}$ and $\mathcal{A}$ are the same among tasks. In addition, we consider tasks to be the same when they differ only in $\rho$ because the difference in $\rho$ does not change the optimal policy. Thus, a different task means a different $T$ or $R$.

In our problem, it is assumed that the reward and the transition function are not directly observable. We treat this problem as a partially observable MDP (POMDP) and introduce a probability over $R$ and $T$, which is called a belief (Humplik et al. 2019; Zintgraf et al. 2020). For clarity, let us assume that $R$ and $T$ are parameterized by $\varphi$ and denote them as $R_{\varphi}$ and $T_{\varphi}$. It is known that a POMDP can be transformed into a belief MDP whose states are based on beliefs and that the optimal policy of the belief MDP is also optimal in the original POMDP (Kaelbling, Littman, and Cassandra 1998). We denote a history as $h_{t}:=(s_{0},a_{0},r_{0},s_{1},\dots,s_{t})$, where $s_{\tau}\in\mathcal{S}$, $a_{\tau}\in\mathcal{A}$, and $r_{\tau}\in\mathbb{R}$ are the state, the action, and the reward at time $\tau$, respectively. In our problem, a belief at time $t$ is $\tilde{b}_{t}(\varphi):=P(\varphi|h_{t})$, and the state of the belief MDP at time $t$ is $s^{+}_{t}=(s_{t},\tilde{b}_{t})$, which is often called a hyper-state. The objective of our problem is to maximize $\mathbb{E}_{h_{\infty}}[\sum_{t=0}^{\infty}\gamma^{t}r_{t}]$ by changing a policy that is conditioned on a hyper-state, where $\gamma$ is a discount factor. In our problem, the belief is updated by observations:

$\tilde{b}_{t+1}(\varphi) = P(\varphi|h_{t+1})$
$\propto P(\varphi)\prod_{\tau=0}^{t}R_{\varphi}(s_{\tau},a_{\tau},r_{\tau})\,T_{\varphi}(s_{\tau},a_{\tau},s_{\tau+1})$ (1)
$\propto P(\varphi|h_{t})\,R_{\varphi}(s_{t},a_{t},r_{t})\,T_{\varphi}(s_{t},a_{t},s_{t+1}).$ (2)

However, in general, the exact calculation of this belief update is intractable, so the existing methods approximate beliefs and avoid the direct calculation (Humplik et al. 2019; Zintgraf et al. 2020; Igl et al. 2018; Kapturowski et al. 2019). In the next section, we introduce the approximated belief model, its update, and other parts of our method.
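To make the update in Equation (2) concrete, the following minimal sketch performs the exact recursive update for a hypothetical finite set of candidate task parameters $\varphi$ whose reward and transition likelihoods are assumed to be known; the function and variable names are ours, not part of the method, and the numbers are arbitrary.

```python
import numpy as np

def update_belief(belief, r_prob, t_prob):
    """One exact step of the belief update in Eq. (2) over a finite set of
    candidate task parameters phi.

    belief : shape (K,), current P(phi | h_t)
    r_prob : shape (K,), R_phi(s_t, a_t, r_t) for each candidate
    t_prob : shape (K,), T_phi(s_t, a_t, s_{t+1}) for each candidate
    """
    posterior = belief * r_prob * t_prob   # unnormalized posterior
    return posterior / posterior.sum()     # renormalize

# Toy usage: three candidate tasks, uniform prior, one observed transition.
belief = np.ones(3) / 3
belief = update_belief(belief,
                       r_prob=np.array([0.9, 0.1, 0.5]),
                       t_prob=np.array([0.8, 0.8, 0.2]))
```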

Method

In this section, we introduce our method, embedding learning and evaluation of uncertainty (ELUE), which learns how to infer tasks and how to use beliefs based on embeddings of task features. In addition, to alleviate the gap between meta-training and meta-testing, which may prevent improvement when only the belief is updated, ELUE also adapts the learned policy and Q-function to the meta-test task by updating their parameters.

Figure 1: A sketch of the architecture of the networks in ELUE. The networks in the blue box are for the embedding, consisting of an encoder (left) and decoders (right). The input of the encoder is a set of tuples $c_{t}=(s_{t},a_{t},r_{t},s_{t+1})$, and it outputs the parameters of a Gaussian distribution, as introduced in Equation (6). A part of the input of the decoders, $z$, is sampled from the Gaussian distribution, and the decoders predict rewards and next states. In the red box, there are networks for the policy, V-function, and Q-function, which are trained in a similar way to soft actor-critic (Haarnoja et al. 2018). These networks are conditioned on a belief and use the outputs of the encoder as the belief. Only the networks in the red box are adapted in meta-testing, while all of the networks are pretrained in meta-training.

A sketch of the architecture of our networks is shown in Figure 1.

Learning Embedding

In meta-training, ELUE learns embeddings for task features. In this section, we introduce its theoretical background.

We formulate the embedding learning problem as follows. There is a latent task variable $z$ with density $p(z)$, and we assume that the reward $r_{t}$ and next state $s_{t+1}$ are sampled from a parameterized model $p_{\phi}(r_{t},s_{t+1}|s_{t},a_{t},z)$, which is shared across tasks. If $h_{t}$ is observed frequently, a reasonable model is expected to give $h_{t}$ a high density. Thus, maximizing the density $\log\int p(s_{0})\prod_{\tau=0}^{t-1}p_{\phi}(r_{\tau},s_{\tau+1}|s_{\tau},a_{\tau},z)\,\pi(a_{\tau}|h_{\tau})\,p(z)\,dz$, in proportion to the frequency of $h_{t}$, leads to a reasonable model. However, this objective depends on the initial state distribution and the current policy, which, as shown in proportional expression (1), do not contribute to the belief. Thus, instead of this density, we consider the maximization of the ELBO of the following value.

$\log\int\prod_{\tau=0}^{t-1}p_{\phi}(r_{\tau},s_{\tau+1}|s_{\tau},a_{\tau},z)\,p(z)\,dz$ (3)

We introduce a parameterized variational distribution $q_{\phi}$, and the ELBO is:

$\log\int\prod_{\tau=0}^{t-1}p_{\phi}(r_{\tau},s_{\tau+1}|s_{\tau},a_{\tau},z)\,p(z)\,dz$ (4)
$\geq\mathbb{E}_{q_{\phi}(z|h_{t})}\left[\sum_{\tau}\log p_{\phi}(r_{\tau},s_{\tau+1}|s_{\tau},a_{\tau},z)\right]-D_{KL}(q_{\phi}(z|h_{t})\,||\,p(z)).$ (5)

We maximize this ELBO in a similar way to a conditional variational autoencoder (Sohn, Lee, and Yan 2015), i.e., by optimizing the parameters of the encoder $q$ and decoder $p$. The sum of log-likelihoods in the ELBO is permutation-invariant with respect to the time indices $t$ of the tuples $c_{t}:=(s_{t},a_{t},r_{t},s_{t+1})$. We introduce the following structure so that $q_{\phi}(z|h_{t})$ is also permutation-invariant, because the order of tuples can be ignored when estimating the belief, as shown in expression (1).

As shown in Zaheer et al. (2017), a function $q(X)$ is invariant to the permutation of instances in $X$ iff it can be decomposed into the form $g(\sum_{x\in X}f(x))$. We follow this fact and, instead of the history-conditional posterior $q_{\phi}(z|h_{t})$, we use a posterior conditioned on a set of tuples,

$q_{\phi}(z|c_{0:t-1}):=\mathcal{N}\left(z;\,g_{\phi}\left(\sum_{\tau=0}^{t-1}f_{\phi}(c_{\tau})\right)\right),$ (6)

where $\mathcal{N}(\cdot)$ is a Gaussian distribution, and $g_{\phi}(\sum_{\tau=0}^{t-1}f_{\phi}(c_{\tau}))$ outputs the parameters of the distribution. Note that $q_{\phi}(z|c_{0:t-1})$ can be used as an approximated belief over $z$, i.e., $b_{t}(z)$, and that it can be updated with low computational cost.
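As an illustration, a minimal PyTorch sketch of the permutation-invariant encoder in Equation (6) could look as follows; the network sizes, the latent dimension, and the class name SetEncoder are placeholder choices of ours, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """q_phi(z | c_{0:t-1}) = N(z; g_phi(sum_tau f_phi(c_tau))), Eq. (6)."""

    def __init__(self, tuple_dim, hidden_dim=64, latent_dim=5):
        super().__init__()
        # f_phi: applied to each tuple c_tau independently
        self.f = nn.Sequential(nn.Linear(tuple_dim, hidden_dim), nn.ReLU(),
                               nn.Linear(hidden_dim, hidden_dim))
        # g_phi: maps the pooled features to Gaussian parameters
        self.g = nn.Linear(hidden_dim, 2 * latent_dim)

    def forward(self, tuples):                 # tuples: (num_tuples, tuple_dim)
        pooled = self.f(tuples).sum(dim=0)     # order-independent sum pooling
        mean, log_std = self.g(pooled).chunk(2, dim=-1)
        return mean, log_std.exp()             # parameters of the belief over z

# Toy usage: 10 transition tuples (s, a, r, s') flattened into 9-dim vectors.
encoder = SetEncoder(tuple_dim=9)
mean, std = encoder(torch.randn(10, 9))
```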

Let us denote a replay buffer of tuples of task $i$ as $D_{i}$ and a set of tuples $c_{t_{1}}^{i},c_{t_{2}}^{i},\dots,c_{t_{k}}^{i}$ sampled from $D_{i}$ as $c^{i}_{t_{1:k}}$. We define the loss of embedding, $\mathcal{L}_{embed}(\phi)$, as

$\mathbb{E}_{i,c^{i}_{t_{1:k}}}\left[-\mathbb{E}_{q_{\phi}(z|c^{i}_{t_{1:k}})}\left[\sum_{c_{\tau}\in c^{i}_{t_{1:k}}}\log p_{\phi}(r_{\tau},s_{\tau+1}|s_{\tau},a_{\tau},z)\right]+D_{KL}(q_{\phi}(z|c^{i}_{t_{1:k}})\,||\,p(z))\right].$ (7)

Note that this loss function does not depend on the policy. Thus, for the embedding training, we can reuse the data in the replay buffer collected by past policies, the amount of which is generally large.

This can contribute to the stability of the training objective. In our implementation, we use two decoders whose outputs are the probability of the reward, $p_{\phi}(r_{t}|s_{t},a_{t},s_{t+1},z)$, and that of the next state, $p_{\phi}(s_{t+1}|s_{t},a_{t},z)$.
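A rough sketch of how the embedding loss in Equation (7) could be estimated for one task is given below, assuming the SetEncoder sketched above and hypothetical decoder callables reward_dec and state_dec; treating the decoders as fixed-variance Gaussians, whose negative log-likelihoods reduce to squared errors up to constants, is one possible choice and not necessarily the paper's.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def embedding_loss(encoder, reward_dec, state_dec, tuples, s, a, r, s_next):
    """Single-task Monte-Carlo estimate of the embedding loss in Eq. (7).

    tuples           : (k, tuple_dim) sampled transition tuples of one task
    s, a, r, s_next  : the same tuples split into components for the decoders
    """
    mean, std = encoder(tuples)
    posterior = Normal(mean, std)                     # approximate belief over z
    prior = Normal(torch.zeros_like(mean), torch.ones_like(std))
    z = posterior.rsample()                           # reparameterized sample
    z_rep = z.unsqueeze(0).expand(s.shape[0], -1)     # one copy per tuple

    # Fixed-variance Gaussian decoders: negative log-likelihood reduces to a
    # squared error up to additive constants.
    recon = F.mse_loss(reward_dec(s, a, s_next, z_rep), r) \
          + F.mse_loss(state_dec(s, a, z_rep), s_next)

    kl = kl_divergence(posterior, prior).sum()        # KL(q(z|c) || p(z))
    return recon + kl
```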

Learning Belief-Conditional Policy and Q-Function

The ELUE agent learns a belief-conditional policy and Q-function in meta-training. Instead of simply maximizing the cumulative reward by changing its belief-conditional policy, the agent maximizes it with an additional information bottleneck (IB) objective for generalization (Tishby, Pereira, and Bialek 1999). IB is a kind of regularization based on mutual information. While conventional deep RL methods face the problem of generalization (Zhang et al. 2018; Zhao et al. 2019; Cobbe et al. 2019), it has been shown that this problem can be alleviated by applying IB (Goyal et al. 2018; Igl et al. 2019).

We apply an IB loss based on InfoBot (Goyal et al. 2018), which is an application of IB to reinforcement learning. In their method, it is assumed that goal information is given explicitly for each task. In our case, goal information is not given, so we use beliefs instead. We introduce a policy $\pi$ that can be decomposed into the form $\pi(a_{t}|s_{t}^{+})=\int\pi^{1}(w_{t}|s_{t}^{+})\,\pi^{2}(a_{t}|w_{t},s_{t})\,dw_{t}$, where $w_{t}$ is an additional variable for the IB objective. In addition, we add the mutual information $I(w_{t};\tilde{b}_{t}|s_{t})$ as a penalty to the cumulative reward, and the objective is

$\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\right]-\beta\sum_{t=0}^{\infty}\gamma^{t}I(w_{t};\tilde{b}_{t}|s_{t})$ (8)
$=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}\left\{r_{t}-\beta\,\mathbb{E}_{w_{t}}\left[\log\frac{p(w_{t}|s_{t}^{+})}{p(w_{t}|s_{t})}\right]\right\}\right].$ (9)

The equation is derived from $I(w_{t};\tilde{b}_{t}|s_{t})=\mathbb{E}\left[\log\frac{p(w_{t},\tilde{b}_{t}|s_{t})}{p(w_{t}|s_{t})p(\tilde{b}_{t}|s_{t})}\right]=\mathbb{E}\left[\log\frac{p(w_{t}|s_{t}^{+})}{p(w_{t}|s_{t})}\right]$. This mutual information penalty is expected to help obtain a representation that is as task-independent as possible while remaining useful.

However, $p(w_{t}|s_{t})$ in Equation (9) is difficult to compute because of the marginalization over $\tilde{b}_{t}$ (i.e., over $h_{t}$). We thus approximate it by a variational distribution $q$ as in previous work (Alemi et al. 2017; Goyal et al. 2018; Igl et al. 2019). For any random variables $X,Y,Z$, $I(X;Y|Z)=\mathbb{E}\left[\log\frac{p(X|Y,Z)}{p(X|Z)}\right]$ and the KL divergence satisfies $D_{KL}(p(X|Z)\,||\,q(X|Z))\geq 0$; thus, $I(X;Y|Z)\leq\mathbb{E}\left[\log\frac{p(X|Y,Z)}{q(X|Z)}\right]$. Using this fact, a lower bound of (9) is derived:

$\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}\left\{r_{t}-\beta\,\mathbb{E}_{w_{t}}\left[\log\frac{p(w_{t}|s_{t}^{+})}{q(w_{t}|s_{t})}\right]\right\}\right].$ (10)

Although any $q$ is allowed and Alemi et al. (2017) used a Gaussian as $q$, for simplicity, we fix $q(w_{t}|s_{t})$ to be a uniform distribution. By removing the constant part, the objective is

$\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}\left\{r_{t}-\beta\,\mathbb{E}_{w_{t}}[\log p(w_{t}|s_{t}^{+})]\right\}\right],$ (11)

and ELUE maximizes this value.
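For illustration, the per-step term inside Equation (11) can be estimated with a single Monte-Carlo sample of the bottleneck variable, as in the following sketch; here w_dist stands for the Gaussian $\pi^{1}(w_{t}|s_{t}^{+})$ produced by the policy head, and the value of beta is arbitrary.

```python
import torch
from torch.distributions import Normal

def ib_shaped_reward(reward, w_dist, beta=0.1):
    """One-sample estimate of the per-step term in Eq. (11):
    r_t - beta * E_w[log p(w_t | s_t^+)]."""
    w = w_dist.rsample()
    return reward - beta * w_dist.log_prob(w).sum()

# Toy usage with a 2-dimensional bottleneck variable w.
shaped = ib_shaped_reward(1.0, Normal(torch.zeros(2), torch.ones(2)))
```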

This objective is a variant of the objective of soft actor-critic (SAC) (Haarnoja et al. 2018), which is one of the most sample efficient off-policy RL methods. Following SAC, we update the policy, Q-function, and V-function. Let us denote the belief conditioned on the tuple set $c^{i}_{t_{1:k-1}}$ as $b^{i}$ and the belief updated from $b^{i}$ by an additional tuple $c^{i}_{t_{k}}$ as $b^{\prime i}$. For simplicity, we abbreviate the subscript $t_{k}$ in $c^{i}_{t_{k}}$ and denote the tuple as $(s^{i},a^{i},r^{i},s^{\prime i})$. ELUE minimizes $\mathcal{L}_{actor}(\theta_{\pi})$, $\mathcal{L}_{critic}^{Q}(\theta_{Q})$, and $\mathcal{L}_{critic}^{V}(\theta_{V})$, which are, respectively,

$\mathbb{E}_{i,c^{i}_{t_{1:k}}}\left[\mathbb{E}_{w,a}\left[\beta\log\pi_{\theta_{\pi}}(w|s^{i},b^{i})-Q_{\theta_{Q}}(s^{i},b^{i},a)\right]\right],$ (12)
$\mathbb{E}_{i,c^{i}_{t_{1:k}}}\left[\left(Q_{\theta_{Q}}(s^{i},b^{i},a^{i})-\hat{Q}(s^{i},b^{i},a^{i})\right)^{2}\right],$ and (13)
$\mathbb{E}_{i,c^{i}_{t_{1:k}}}\left[\left(V_{\theta_{V}}(s^{i})-\hat{V}(s^{i})\right)^{2}\right],$ where (14)
$\hat{Q}(s^{i},b^{i},a^{i})=r^{i}+\gamma V_{\bar{\theta}_{V}}(s^{\prime i},b^{\prime i}),$ (15)
$\hat{V}(s^{i})=\mathbb{E}_{w,a}\left[Q_{\theta_{Q}}(s^{i},b^{i},a)-\beta\log\pi_{\theta_{\pi}}(w|s^{i},b^{i})\right],$ (16)

and $\bar{\theta}_{V}$ is a parameter vector that is updated by $\bar{\theta}_{V}\leftarrow(1-\lambda)\bar{\theta}_{V}+\lambda\theta_{V}$. We show the procedure of our method in meta-training in Algorithm 1. Our belief is based on randomly sampled tuples. Because of that, this training depends less on actual trajectories than training with naive trajectory-based beliefs. This can be an advantage, as PEARL with a naive trajectory-based encoder was not as effective as PEARL with an encoder based on sampled tuples (Rakelly et al. 2019).
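The following sketch illustrates how the losses (12)-(16) could be assembled; policy, q_net, v_net, and v_target are placeholder callables for the belief-conditional networks, policy.sample is assumed to return the bottleneck variable, a squashed action, and the log-probability $\log\pi^{1}(w|s,b)$, and details such as the use of IB mean values for targets are omitted.

```python
import torch
import torch.nn.functional as F

def elue_losses(policy, q_net, v_net, v_target, batch, beta, gamma):
    """Sketch of the SAC-style losses in Eqs. (12)-(16).

    batch holds tensors s, a, r, s_next together with the beliefs b (before)
    and b_next (after appending the current tuple), e.g. as the Gaussian
    parameters produced by the encoder.  policy.sample(s, b) is assumed to
    return (w, a_new, log_pi) where log_pi = log pi^1(w | s, b).
    """
    s, a, r, s_next, b, b_next = batch

    # Targets (15) and (16); v_target is the Polyak-averaged V-network.
    with torch.no_grad():
        q_hat = r + gamma * v_target(s_next, b_next)          # Eq. (15)
        w, a_new, log_pi = policy.sample(s, b)
        v_hat = q_net(s, b, a_new) - beta * log_pi            # Eq. (16)

    critic_q = F.mse_loss(q_net(s, b, a), q_hat)              # Eq. (13)
    critic_v = F.mse_loss(v_net(s, b), v_hat)                 # Eq. (14)

    # Actor loss (12): prefer actions with high Q and a high-entropy w.
    w, a_new, log_pi = policy.sample(s, b)
    actor = (beta * log_pi - q_net(s, b, a_new)).mean()
    return actor, critic_q, critic_v
```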

In addition to the policy, we also apply IB to the Q-function and V-function. To stabilize the target values, i.e., $\pi$ and $Q$ in (16), $V$ in (15), and $Q$ in (12), we use the mean values of the additional IB variables for the V-function, Q-function, and policy, instead of sampled variables, in the same way as Igl et al. (2019). This is expected to reduce the fluctuations caused by introducing IB.

Here, we describe more details of the implementation of our method. First, to reduce the computational cost, instead of naively allocating one randomly sampled tuple set $c^{i}_{t_{1:k-1}}$ and the belief $b^{i}$ conditioned on it to one additional tuple $c^{i}_{t_{k}}$, we allocate the same tuple set to several additional tuples. Second, to train the agent in a variety of situations in terms of the amount of data available to infer a task, we randomly sample $k$, the number of tuples in $c^{i}_{t_{1:k-1}}$. Third, for $\pi^{1}(w_{t}|s_{t}^{+})$ and $\pi^{2}(a_{t}|w_{t},s_{t})$, we use Gaussian distributions, and for bounding actions, we use tanh as in SAC (Haarnoja et al. 2018).
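A minimal sketch of the two-stage, tanh-squashed sampling described above is shown below; pi1 and pi2 are assumed to be callables returning the mean and standard deviation of their Gaussian heads, which is our own naming rather than the paper's.

```python
import torch
from torch.distributions import Normal

def sample_action(pi1, pi2, s, b):
    """Two-stage sampling for pi(a | s^+), i.e. sample w from pi1(w | s, b)
    and then a from pi2(a | w, s), with tanh squashing as in SAC."""
    w_mean, w_std = pi1(s, b)                  # Gaussian over the IB variable w
    w = Normal(w_mean, w_std).rsample()
    a_mean, a_std = pi2(w, s)                  # Gaussian over pre-squash actions
    pre_tanh = Normal(a_mean, a_std).rsample()
    return torch.tanh(pre_tanh)                # bounded action in (-1, 1)
```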

Algorithm 1 Meta-training
1:  A set of meta-training tasks, $\mathcal{T}$, is given
2:  while not done do
3:     Sample tasks from $\mathcal{T}$
4:     Initialize beliefs
5:     for $i\in$ the sampled tasks do
6:        for step in data collection steps do
7:           Gather data from task $i$ by policy $\pi(\cdot|s^{i},b^{i})$
8:           Update belief $b^{i}$ and replay buffer $D^{i}$
9:        end for
10:     end for
11:     for step in training steps do
12:        Make a set of tasks $\mathcal{T}^{\prime}$ randomly from $\mathcal{T}$
13:        Calculate $\mathcal{L}_{embed}$ for $\mathcal{T}$, as in formula (7), and update parameters to minimize $\mathcal{L}_{embed}$
14:        Calculate $\mathcal{L}_{actor}$ and $\mathcal{L}_{critic}$ for $\mathcal{T}^{\prime}$, as in formulae (12), (13), and (14), and update parameters to minimize $\mathcal{L}_{actor}$ and $\mathcal{L}_{critic}$
15:     end for
16:  end while

Adaptation in Meta-Test

Algorithm 2 Meta-testing
1:  A meta-test task is given
2:  while not done do
3:     for step in data collection steps do
4:        Gather data from the meta-test task by policy $\pi(a|s,b)$
5:        Update belief $b$ and replay buffer $D$
6:     end for
7:     for step in training steps do
8:        Calculate $\mathcal{L}_{actor}$ and $\mathcal{L}_{critic}$ for the task, as in formulae (12), (13), and (14), and update parameters to minimize the modified $\mathcal{L}_{actor}$ and $\mathcal{L}_{critic}$
9:     end for
10:  end while

In meta-testing, the ELUE agent collects data based on the policy conditioned on a belief. The belief is updated at every time step. After updating the belief enough times, our method updates the parameters of the neural networks to alleviate the gap between the tasks in meta-training and meta-testing. Pseudo code is shown in Algorithm 2. The parameter update in meta-testing differs from that in meta-training in the following respects:

1) The parameters for embedding, $\phi$, are fixed to avoid catastrophic forgetting (French 1999) of what was learned in meta-training. Naive updating of the embedding parameters in meta-testing leads to catastrophic forgetting because there is only one task in meta-testing, which means that the decoder can reconstruct the reward and next state without the latent task variable if it is trained sufficiently in meta-testing. If the output of the decoder is independent of the latent task variable, only the second term of (7) is relevant to the learning of the encoder, which means that the encoder loss is minimized when its output is the same as that of the prior, $p(z)$. On the basis of these considerations, we fix $\phi$ in meta-testing.

2) After updating the belief enough times, we fix it, copy it for each neural network, and train the copies together with the parameters of each network by gradient descent. This is expected to help adaptation to the meta-test task by giving additional flexibility to task information processing. This update method is inspired by the semi-amortized variational autoencoder (Kim et al. 2018), whose update method combines inference and gradient descent and showed better asymptotic performance by incorporating gradient descent.
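One way to realize this update, sketched under our own naming and not taken from the paper's code, is to copy the encoder's belief output into trainable tensors and optimize them jointly with the network parameters while the encoder stays frozen; the paper keeps a separate copy per network, whereas the sketch shows a single copy.

```python
import torch

def make_adaptable_belief(mean, std):
    """Copy the frozen encoder's belief output into trainable tensors so that
    it can be refined by gradient descent together with the network
    parameters during meta-testing (the encoder parameters phi stay fixed)."""
    b_mean = mean.detach().clone().requires_grad_(True)
    b_log_std = std.detach().clone().log().requires_grad_(True)
    return b_mean, b_log_std

# Usage sketch: add the copied belief to the optimizer of, e.g., the actor.
mean, std = torch.zeros(5), torch.ones(5)   # stand-in for the encoder output
b_mean, b_log_std = make_adaptable_belief(mean, std)
# optimizer = torch.optim.Adam(list(policy.parameters()) + [b_mean, b_log_std])
```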

Figure 2: Comparison of the sample efficiency of each method on ML1 benchmarks, with panels (a) basket-ball, (b) dial-turn, (c) pick-place, (d) reach, (e) sweep-into, and (f) window-open. The vertical axis is the average reward in the first episode in meta-testing and the horizontal axis is the number of time steps in meta-training.

Related Work

In this section, we review existing methods related to our method and discuss the differences between them.

Our method is inspired by PEARL, but there are essential differences. First, PEARL has no decoder, and the encoder is trained to minimize the critic loss. It is a simple approach, but its embedding can change depending on the current policy. The performance of PEARL was shown to degrade when used with off-policy (i.e., not recent) data (Rakelly et al. 2019). Therefore, PEARL uses an additional buffer for recent data to avoid the degradation; in contrast, our method can train the embedding using old data and does not need an additional buffer. Second, PEARL uses an encoder that can be represented as $q_{\phi}(z|h_{t})\propto\prod_{\tau=0}^{t-1}\mathcal{N}(z;\mu_{\tau},\sigma_{\tau}^{2})\propto\mathcal{N}\left(z;\frac{\sum_{\tau=0}^{t-1}\mu_{\tau}/\sigma_{\tau}^{2}}{\sum_{\tau=0}^{t-1}1/\sigma_{\tau}^{2}},\frac{1}{\sum_{\tau=0}^{t-1}1/\sigma_{\tau}^{2}}\right)$, where $\mu_{\tau}$ and $\sigma_{\tau}^{2}$, the mean and variance of a Gaussian distribution, are the outputs of the neural network $f_{\phi}(\cdot)$. As discussed around Equation (6), this is not a general form for encoder representation in terms of permutation invariance among the $c_{\tau}$, whereas our encoder is represented in a general form. Third, PEARL's policy and Q-function, $\pi(s,z)$ and $Q(s,a,z)$, where $z$ is sampled from $q(z|h_{t})$, are $z$-conditional, and $z$ itself carries no uncertainty information; ours are belief-conditional, which does carry uncertainty information. Fourth, PEARL only performs inference in meta-testing, while our method also updates the parameters of the neural networks.
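For concreteness, the closed form of this product-of-Gaussians posterior can be computed as in the following small NumPy sketch (our own illustration, not PEARL's code): the product of Gaussian factors is, up to normalization, a Gaussian with precision-weighted mean and summed precision.

```python
import numpy as np

def product_of_gaussians(means, variances):
    """Combine per-tuple factors N(z; mu_tau, sigma_tau^2) into a single
    Gaussian whose precision is the sum of the factor precisions and whose
    mean is the precision-weighted mean of the factor means."""
    precisions = 1.0 / np.asarray(variances)
    var = 1.0 / precisions.sum(axis=0)
    mean = var * (np.asarray(means) * precisions).sum(axis=0)
    return mean, var

# Two factors over a one-dimensional latent: a confident one and a vague one.
mean, var = product_of_gaussians(means=[[0.0], [2.0]],
                                 variances=[[0.1], [10.0]])
```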

There are other meta-learning methods based on inference; however, these methods are on-policy (Zintgraf et al. 2020; Humplik et al. 2019; Vuorio et al. 2019; Duan et al. 2016). Among these methods, VariBAD (Zintgraf et al. 2020) is the most closely related to ours. It considers an embedding similar to ours and beliefs over the embedding space. However, its sample efficiency is not as high as that of PEARL, as shown in Zintgraf et al. (2020). In addition, its encoder is based on recurrent neural networks whose input is simply a history. Moreover, it does not consider updating the parameters of neural networks in meta-testing.

As for off-policy approaches, guided meta-policy search (Mendonca et al. 2019) is an off-policy meta-learning method based on MAML. In meta-training, it updates the parameters to imitate expert trajectories (outer updates) after updates by policy gradients (inner updates), which means that the inner updates and the updates in meta-testing need on-policy data. In addition, it is not based on inference. Lin et al. (2020) proposed an off-policy method that uses a decoder and a policy conditioned on the parameters of the decoder. These parameters correspond to a belief in our method and are adapted to the meta-test task by gradient descent. Their method is also not inference-based; it relies on MAML-like parameter updates.

Experiments

In this section, we compare the sample efficiency of existing "context-based" methods (PEARL, variBAD, and RL2 (Duan et al. 2016)) and our method. In the conventional setting of meta-learning experiments, the tasks in meta-testing and meta-training are in the same category and differ only in goal positions or the weight of the agent's torso (Finn, Abbeel, and Levine 2017; Rakelly et al. 2019; Zintgraf et al. 2020; Vuorio et al. 2019). Our experimental settings include not only the conventional ones but also settings where the types of tasks in meta-testing and meta-training are different. The environment of the experiments is Meta-World with MuJoCo 2.0. Meta-World is a collection of Sawyer robot arm tasks, with 50 types of tasks and several benchmarks. In each Meta-World task, the episode length is 150.

First, to examine the sample efficiency of our method, we conducted experiments with the conventional settings. We followed the ML1 benchmark scheme of Meta-World, where the differences between tasks are differences in goals, and the goals are sampled from the same distribution in meta-training and meta-testing. We chose six types of tasks, basket-ball, dial-turn, pick-place, reach, sweep-into, and window-open, from the types of tasks in the ML10 benchmark (descriptions of the types of tasks we used are shown in Table 1 in the appendices). Note that the names of the types of tasks in the Meta-World paper differ from those in the Meta-World implementation; we refer to the names in the implementation. For each run, 20 meta-training tasks were generated. We executed five runs with different seeds and evaluated the average episode reward in the first episode in meta-testing. The results, shown in Figure 2, show that ELUE achieved higher performance than the other methods.

Second, to see the merits of our belief updates in meta-testing, we examined the performance of ELUE and PEARL in meta-testing with only inference (i.e., belief updates in ELUE, and posterior updates and sampling of the task variable in PEARL), and, for comparison, that of ELUE without any belief update, i.e., keeping the belief equal to the prior ("NoBelUpdate"). In this experiment, networks meta-trained for about three million time steps in the first pick-place experiment were used. The results, shown in Figure 3, show that both methods need only small amounts of data to improve performance. In particular, the performance of ELUE was relatively high from the first episode, while that of PEARL was low at first and improved in the next episode.

Figure 3: Comparison of performance by inference in meta-testing of pick-place on the ML1 benchmark. The vertical axis is the average episode reward and the horizontal axis is the number of episodes.
Figure 4: Comparison of the sample efficiency of each method on the ML10 benchmark: (a) first episode rewards, (b) third episode rewards, (c) learning curves. The vertical axis is the average episode reward in meta-testing. The horizontal axis is the number of time steps in meta-training for (a) and (b), and in meta-testing for (c).

Third, to examine the sample efficiency on a set of diverse tasks, we conducted experiments following the ML10 benchmark scheme, where the types of tasks in meta-training and meta-testing are different. In this benchmark, there are 15 types of tasks, ten for meta-training and five for meta-testing. We evaluated the sample efficiency from two perspectives: the performance in meta-testing in the early stages, as in Figure 2, and the improvement as the amount of data increases. For the latter, we modified PEARL and variBAD so that they update the parameters of neural networks in meta-testing, although their original versions do not. Meta-training was executed five times, and 40 tasks were generated for each run. For each meta-training run, meta-testing was executed six times, for about two million steps. In the meta-tests, we used networks meta-trained for 600 iterations (25 million time steps) for ELUE and PEARL, and for 4000 iterations (32 million time steps) for variBAD. To clarify the amount of improvement due to our method, we also executed a variant of ELUE that learns policies from scratch in meta-testing without using beliefs. The results are shown in Figure 4. The performance of PEARL and ELUE slightly improved as the number of time steps in meta-training increased. As for the learning curves in meta-testing, although the variance of the performance of ELUE was large, ELUE outperformed the other methods on average.

Fourth, to see the effectiveness of applying the information bottleneck in our method, we compared the performance of ELUE with variants that naively combine it with soft actor-critic. We varied the entropy coefficient of soft actor-critic; the other hyperparameters were the same among the variants.

Figure 5: The comparison of variants of ELUE. The vertical axis is the average episode rewards in meta-testing and the horizontal axis is the number of time steps in meta-training.

The results (Figure 5) show that ELUE was slightly better than the other variants. This implies that the information bottleneck objective helped the generalization of the learned policies, as shown in Igl et al. (2019).

Fifth, to examine the effectiveness of our updating method in meta-testing, we compared the performance of different update methods ("BelGrad", "NoBelGrad", and "Inference") on the ML10 benchmark. BelGrad means belief updates by gradient descent after inference, which is used in the meta-testing of ELUE. NoBelGrad means that gradient descent updates are applied not to the belief but to the parameters of the neural networks. Inference means no gradient updates. NoBelGrad and Inference were executed three times for each meta-training run. We used the same settings and the same meta-trained networks as in the experiment of Figure 4. The results (Figure 6) show that the performance of Inference did not improve as the amount of data increased, and that BelGrad was better than NoBelGrad.

Figure 6: Comparison of updating methods in meta-testing. The vertical axis is the average episode reward in meta-testing and the horizontal axis is the number of time steps in meta-testing. "BelGrad" is the same as "ELUE" in Figure 4 (c).

Conclusion

We have proposed a novel off-policy meta-learning method, ELUE, which learns embeddings of task features, beliefs over the embedding space, and belief-conditional policies and Q-functions. Because of the belief-conditional policies, the performance can be improved by updating the beliefs, especially when the meta-test task is similar to the meta-training tasks. ELUE also updates the parameters of neural networks in meta-testing, which can alleviate the gap between tasks in meta-testing and those in meta-training. In experiments on Meta-World benchmarks, we examined the sample efficiency of ELUE and existing methods in two cases, where the tasks in meta-training and meta-testing were similar and where they were diverse. The results show that ELUE outperforms the other methods in both cases. We have also shown the merits of the belief-conditional policies and of the parameter updates in meta-testing.

References

  • Alemi et al. (2017) Alemi, A.; Fischer, I.; Dillon, J.; and Murphy, K. 2017. Deep Variational Information Bottleneck. In Proceedings of International Conference on Learning Representations.
  • Berner et al. (2019) Berner, C.; Brockman, G.; Chan, B.; Cheung, V.; Dębiak, P.; Dennison, C.; Farhi, D.; Fischer, Q.; Hashme, S.; Hesse, C.; et al. 2019. Dota 2 with Large Scale Deep Reinforcement Learning. arXiv preprint arXiv:1912.06680 .
  • Cobbe et al. (2019) Cobbe, K.; Klimov, O.; Hesse, C.; Kim, T.; and Schulman, J. 2019. Quantifying generalization in reinforcement learning. In Proceedings of International Conference on Machine Learning, 1282–1289.
  • Duan et al. (2016) Duan, Y.; Schulman, J.; Chen, X.; Bartlett, P. L.; Sutskever, I.; and Abbeel, P. 2016. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779 .
  • Finn, Abbeel, and Levine (2017) Finn, C.; Abbeel, P.; and Levine, S. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of International Conference on Machine Learning, 1126–1135.
  • French (1999) French, R. M. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences 3(4): 128–135.
  • Goyal et al. (2018) Goyal, A.; Islam, R.; Strouse, D.; Ahmed, Z.; Larochelle, H.; Botvinick, M.; Bengio, Y.; and Levine, S. 2018. InfoBot: Transfer and Exploration via the Information Bottleneck. In Proceedings of International Conference on Learning Representations.
  • Ha and Schmidhuber (2018) Ha, D.; and Schmidhuber, J. 2018. World models. arXiv preprint arXiv:1803.10122 .
  • Haarnoja et al. (2018) Haarnoja, T.; Zhou, A.; Abbeel, P.; and Levine, S. 2018. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. In Proceedings of International Conference on Machine Learning, 1856–1865.
  • Humplik et al. (2019) Humplik, J.; Galashov, A.; Hasenclever, L.; Ortega, P. A.; Teh, Y. W.; and Heess, N. 2019. Meta reinforcement learning as task inference. arXiv preprint arXiv:1905.06424 .
  • Igl et al. (2019) Igl, M.; Ciosek, K.; Li, Y.; Tschiatschek, S.; Zhang, C.; Devlin, S.; and Hofmann, K. 2019. Generalization in reinforcement learning with selective noise injection and information bottleneck. In Proceedings of Advances in Neural Information Processing Systems, 13978–13990.
  • Igl et al. (2018) Igl, M.; Zintgraf, L.; Le, T. A.; Wood, F.; and Whiteson, S. 2018. Deep Variational Reinforcement Learning for POMDPs. arXiv preprint arXiv:1806.02426 .
  • Kaelbling, Littman, and Cassandra (1998) Kaelbling, L. P.; Littman, M. L.; and Cassandra, A. R. 1998. Planning and acting in partially observable stochastic domains. Artificial intelligence 101(1-2): 99–134.
  • Kapturowski et al. (2019) Kapturowski, S.; Ostrovski, G.; Dabney, W.; Quan, J.; and Munos, R. 2019. Recurrent Experience Replay in Distributed Reinforcement Learning. In Proceedings of International Conference on Learning Representations. URL https://openreview.net/forum?id=r1lyTjAqYX.
  • Kim et al. (2018) Kim, Y.; Wiseman, S.; Miller, A.; Sontag, D.; and Rush, A. 2018. Semi-Amortized Variational Autoencoders. In Proceedings of International Conference on Machine Learning, 2683–2692.
  • Lin et al. (2020) Lin, Z.; Thomas, G.; Yang, G.; and Ma, T. 2020. Model-based Adversarial Meta-Reinforcement Learning. arXiv preprint arXiv:2006.08875 .
  • Mendonca et al. (2019) Mendonca, R.; Gupta, A.; Kralev, R.; Abbeel, P.; Levine, S.; and Finn, C. 2019. Guided meta-policy search. In Proceedings of Advances in Neural Information Processing Systems, 9653–9664.
  • Nichol, Achiam, and Schulman (2018) Nichol, A.; Achiam, J.; and Schulman, J. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999 .
  • Rakelly et al. (2019) Rakelly, K.; Zhou, A.; Quillen, D.; Finn, C.; and Levine, S. 2019. Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables. In Proceedings of International Conference on Machine Learning, 5331–5340.
  • Schmidhuber, Zhao, and Wiering (1996) Schmidhuber, J.; Zhao, J.; and Wiering, M. 1996. Simple principles of metalearning. Technical report IDSIA 69: 1–23.
  • Silver et al. (2017) Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; et al. 2017. Mastering the game of Go without human knowledge. Nature 550(7676): 354.
  • Sohn, Lee, and Yan (2015) Sohn, K.; Lee, H.; and Yan, X. 2015. Learning structured output representation using deep conditional generative models. In Proceedings of Advances in neural information processing systems, 3483–3491.
  • Tishby, Pereira, and Bialek (1999) Tishby, N.; Pereira, F. C.; and Bialek, W. 1999. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control, and Computing, 368–377.
  • Vuorio et al. (2019) Vuorio, R.; Sun, S.-H.; Hu, H.; and Lim, J. J. 2019. Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation. In Proceedings of Advances in Neural Information Processing Systems, 1–12.
  • Yu et al. (2019) Yu, T.; Quillen, D.; He, Z.; Julian, R.; Hausman, K.; Finn, C.; and Levine, S. 2019. Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning. In Proceedings of Conference on Robot Learning.
  • Zaheer et al. (2017) Zaheer, M.; Kottur, S.; Ravanbakhsh, S.; Poczos, B.; Salakhutdinov, R. R.; and Smola, A. J. 2017. Deep sets. In Proceedings of Advances in neural information processing systems, 3391–3401.
  • Zhang et al. (2018) Zhang, C.; Vinyals, O.; Munos, R.; and Bengio, S. 2018. A study on overfitting in deep reinforcement learning. arXiv preprint arXiv:1804.06893 .
  • Zhao et al. (2019) Zhao, C.; Sigaud, O.; Stulp, F.; and Hospedales, T. M. 2019. Investigating generalisation in continuous deep reinforcement learning. arXiv preprint arXiv:1902.07015 .
  • Zintgraf et al. (2020) Zintgraf, L.; Shiarlis, K.; Igl, M.; Schulze, S.; Gal, Y.; Hofmann, K.; and Whiteson, S. 2020. VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning. In Proceedings of International Conference on Learning Representations.

Appendices

In this section, we provide supplementary information about Meta-World, ELUE implementation, additional experimental results, and theoretical derivations.

Meta-World

For convenience, we briefly explain Meta-World. In Meta-World, there are 50 types of tasks using a Sawyer robot arm. Descriptions of the 15 types of tasks we use are shown in Table 1. The dimensions of the observation and action of each task are six and four, respectively. The amount of reward is determined based on the center position of the fingers of the arm, an object, and a goal, where the goal position cannot be observed by the agent. In the reach tasks, only the center position of the fingers and the goal are related to the reward. In the other 14 types of tasks, the position of the object is also related to the reward, and the agent can obtain a high reward by moving the object to the goal.

Type of task Description
basket-ball Dunk the basketball into the basket. Randomize basketball and basket positions
button-press-topdown Press a button from the top. Randomize button positions
dial-turn Rotate a dial 180 degrees. Randomize dial positions
drawer-close Push and close a drawer. Randomize the drawer positions
pick-place Pick and place a puck to a goal. Randomize puck and goal positions
reach Reach a goal position. Randomize the goal positions
sweep Sweep a puck off the table. Randomize puck positions
window-open Push and open a window. Randomize window positions
push Push the puck to a goal. Randomize puck and goal positions
door Open a door with a revolving joint. Randomize door positions
door-close Close a door with a revolving joint. Randomize door positions
drawer-open Open a drawer. Randomize drawer positions
lever-pull Pull a lever down 90 degrees. Randomize lever positions
shelf-place Pick and place a puck onto a shelf. Randomize puck and shelf positions
sweep-into Sweep a puck into a hole. Randomize puck positions
Table 1: The description of each type of task in Meta-World, extracted from Table 1 in (Yu et al. 2019). The first ten types of tasks are used for meta-training and the last five for meta-testing in the ML10 benchmark.

Implementation Details

We provide additional explanations about our implementation. In our implementation, we execute initial sampling at the beginning of meta-training and meta-testing. In this sampling phase, the agent collects data (lines 6–9 in Algorithm 1) for all tasks. After this phase, we execute initial embedding training, in a similar way to existing methods for training variational autoencoders (Ha and Schmidhuber 2018; Zintgraf et al. 2020). In meta-testing, we fix the belief after the initial sampling phase. This is because we found that the improvement by belief updates was very fast and even one episode was enough for the performance to converge, as shown in Figure 3.

ML1 Meta-Testing

In addition to the ML10 benchmark (Figure 6), we compare learning methods in meta-testing of pick-place on the ML1 benchmark. The experimental settings were the same as in the experiments of Figure 3 except for the number of time steps in meta-testing. The result is shown in Figure 7.

Figure 7: Comparison of learning curves in meta-testing. "PEARLOrg" is the original PEARL update, i.e., updating the posterior and sampling a task variable from it, and "PEARLGrad" is PEARL with updates of the parameters of neural networks, as "PEARL" in Figure 4 (c). The other methods are the same as those in Figure 6.

The results show that the performances of Inference and PEARLOrg were almost the same from beginning to end, while those of BelGrad, NoBelGrad, and PEARLGrad improved. For both PEARL and ELUE, the performance of the methods with parameter updates was worse than that of the methods without parameter updates at first, but gradually improved and eventually became better. In Figure 7, the difference between BelGrad and NoBelGrad was not as clear as in Figure 6.

ML1 Supplemental Results

We show supplementary results for Figure 5 in Figure 8. These results are not included in Figure 5 to keep that figure legible.

Figure 8: The comparison of variants of ELUE. The vertical axis is the average episode rewards in meta-testing and the horizontal axis is the number of episodes.

The results of Figures 5 and 8 suggest that the information bottleneck objective helps the generalization of policies.

We also show supplementary results about Figure 2.

Figure 9: Comparison of the sample efficiency of each method on ML1 benchmarks, with panels (a) basket-ball, (b) dial-turn, (c) pick-place, (d) reach, (e) sweep-into, and (f) window-open. The vertical axis is the average reward in the third episode in meta-testing and the horizontal axis is the number of time steps in meta-training.

Figure 9 shows the third episode rewards of the same experiments as Figure 2. The results show that the performance of PEARL was better than in the first episode. However, even in the third episode, ELUE outperformed PEARL in some tasks.

To see what happened in meta-testing in pick-place, we analyzed the trajectories of the center of the robot arm's fingers and of the objects in the tasks. The results are shown in Figure 10. The experimental settings were the same as those of Figure 3. Figure 10 shows that PEARL did not move the object in the first episode in some runs, while ELUE moved the object in all runs. Note that transferring learned policies is difficult when there are only a small number of meta-training tasks, because in such a situation the probability that the meta-test task is not similar to any of the meta-training tasks is high. In our ML1 experiment, there were only 20 meta-training tasks, which may be one of the reasons for the difficulty of transferring policies to the meta-test task.

Figure 10: The center positions of the robot arm's fingers and the positions of objects in pick-place, shown for the first, third, and fifth episodes of PEARL and of ELUE. The former and the latter are depicted as lines and circles. The initial positions are marked as "x", and the goals are marked as "y".

Learning Policies Without Embedding

For an ablation study, we examine the performance of ELUE without the embedding learning (i.e., SAC with the information bottleneck objective), which is referred to as "NoEmb". NoEmb has no ability to identify the current task. This can be a drawback, e.g., when tasks require contradicting actions at the same state. After meta-training on pick-place and ML10, we executed meta-testing for each benchmark. The experimental settings were the same as those of the earlier experiments on ML1 and ML10. The results are shown in Figure 11, together with the other results for comparison. To see the performance in the early stage, we show the third episode reward instead of the first because the performance of PEARL was better in the third episode, as shown in Figures 2 and 9. The results show that the performance of ELUE improved faster than that of NoEmb. In ML10, the third episode reward of ELUE was better than that of NoEmb, while there was no clear difference in ML1. The results imply that a method without task information, like NoEmb, is a competitive baseline in some benchmarks, although such a baseline was not analyzed in some existing work (Rakelly et al. 2019; Zintgraf et al. 2020).

Figure 11: Comparison to learning without task embedding in ML1 pick-place and ML10: (a) third episode reward in pick-place, (b) the corresponding learning curves, (c) third episode reward in ML10, (d) the corresponding learning curves. The vertical axis is the average episode reward in meta-testing and the horizontal axis is the number of time steps in meta-training for (a) and (c), and in meta-testing for (b) and (d).

Conventional Benchmarks

We examine the sample efficiency of our method on the conventional benchmarks ant-fwd-back and humanoid-dir, which are extensions of the MuJoCo robot control tasks in OpenAI Gym. In these benchmarks, tasks differ in the goal direction of the ant robot or the humanoid robot. The episode reward is determined by the agent's speed in the goal direction, an alive bonus, and other costs such as the control cost. The alive bonus is a bonus given while the agent is "alive" (e.g., not falling), and the control cost is the cost of executing actions. In ant-fwd-back, there are only two tasks, with forward and backward goal directions. In these benchmarks, unlike Meta-World, an episode ends when the agent falls, even before 150 steps. The results are shown in Figure 12.

Figure 12: Comparison of the sample efficiency of each method on the conventional benchmarks: (a) first and (b) third episode reward in ant-fwd-back, (c) first and (d) third episode reward in humanoid-dir. The vertical axis is the average episode reward in meta-testing and the horizontal axis is the number of time steps in meta-training.

As for the ant benchmark, the performance of NoEmb did not improve beyond about 150, while that of PEARL and ELUE gradually improved. Although the learning speed of ELUE was slightly worse than that of PEARL in the third episode reward, the final performance of ELUE was slightly better. In the first episode, ELUE achieved better results, as in the ML1 benchmarks.

As for the humanoid benchmark, the first and third episode rewards were almost the same among the methods, at about 750. It seems that the inference was not helpful for improving performance here. In this benchmark, the alive bonus is five; thus, if the agent is alive for a whole episode, its episode reward is $5\times 150$ $+$ goal direction bonus $-$ control cost. These results imply that most of the reward came from the alive bonus and that the goal direction bonus may be too small for learning different behaviors from task to task.

Theoretical Supplement

For any random variables $X,Y,Z$ (and their realizations $x,y,z$) and variational distribution $q$, we show $I(X;Y|Z)\leq\mathbb{E}\left[\log\frac{p(X|Y,Z)}{q(X|Z)}\right]$, as with inequality (14) in Alemi et al. (2017).

$I(X;Y|Z)=\mathbb{E}\left[\log\frac{p(X|Y,Z)}{p(X|Z)}\right]$ (17)

and the KL divergence satisfies $D_{KL}(p(\cdot|z)\,||\,q(\cdot|z))\geq 0$. Thus,

$\mathbb{E}[-\log p(X|Z)]\leq\mathbb{E}[-\log q(X|Z)],$ (18)
$\iint p(z)p(x|z)\log\frac{1}{p(x|z)}\,dx\,dz \leq \iint p(z)p(x|z)\log\frac{1}{q(x|z)}\,dx\,dz.$ (19)

Therefore, I(X;Y|Z)I(X;Y|Z) is

$\mathbb{E}\left[\log\frac{p(X|Y,Z)}{p(X|Z)}\right]$ (20)
$=\iiint p(z)p(x,y|z)\log\frac{p(x|y,z)}{p(x|z)}\,dx\,dy\,dz$ (21)
$=\iiint p(z)p(x|z)p(y|x,z)\log\frac{p(x|y,z)}{p(x|z)}\,dx\,dy\,dz$ (22)
$\leq\iiint p(z)p(x|z)p(y|x,z)\log\frac{p(x|y,z)}{q(x|z)}\,dx\,dy\,dz$ (23)
$=\mathbb{E}\left[\log\frac{p(X|Y,Z)}{q(X|Z)}\right].$ (24)
=𝔼[logp(X|Y,Z)q(X|Z)].\displaystyle=\mathbb{E}\left[\log\frac{p(X|Y,Z)}{q(X|Z)}\right]. (24)