Provably Efficient Multi-Task Reinforcement Learning with Model Transfer
Abstract
We study multi-task reinforcement learning (RL) in tabular episodic Markov decision processes (MDPs). We formulate a heterogeneous multi-player RL problem, in which a group of players concurrently face similar but not necessarily identical MDPs, with a goal of improving their collective performance through inter-player information sharing. We design and analyze an algorithm based on the idea of model transfer, and provide gap-dependent and gap-independent upper and lower bounds that characterize the intrinsic complexity of the problem.
1 Introduction
In many real-world applications, reinforcement learning (RL) agents can be deployed as a group to complete similar tasks at the same time. For example, in healthcare robotics, robots are paired with people with dementia to perform personalized cognitive training activities by learning their preferences [42, 21]; in autonomous driving, a set of autonomous vehicles learn how to navigate and avoid obstacles in various environments [27]. In these settings, each learning agent alone may only be able to acquire a limited amount of data, while the agents as a group have the potential to collectively learn faster through sharing knowledge among themselves. Multi-task learning [7] is a practical framework that can be used to model such settings, where a set of learning agents share/transfer knowledge to improve their collective performance.
Despite many empirical successes of multi-task RL (see, e.g., [51, 28, 27]) and transfer learning for RL (see, e.g., [26, 39]), a theoretical understanding of when and how information sharing or knowledge transfer can provide benefits remains limited. Exceptions include [16, 6, 11, 17, 32, 25], which study multi-task learning from parameter- or representation-transfer perspectives. However, these works still do not provide a completely satisfying answer: for example, in many application scenarios, the reward structures and the environment dynamics are only slightly different for each task—this is, however, not captured by representation transfer [11, 17] or existing works on clustering-based parameter transfer [16, 6]. In such settings, is it possible to design provably efficient multi-task RL algorithms that have guarantees never worse than agents learning individually, while outperforming the individual agents in favorable situations?
In this work, we formulate an online multi-task RL problem that is applicable to the aforementioned settings. Specifically, inspired by a recent study on multi-task multi-armed bandits [43], we formulate the $\epsilon$-Multi-Player Episodic Reinforcement Learning (abbreviated as $\epsilon$-MPERL) problem, in which all tasks share the same state and action spaces, and the tasks are assumed to be similar—i.e., the dissimilarities between the environments of different tasks (specifically, the reward distributions and transition dynamics associated with the players/tasks) are bounded in terms of a dissimilarity parameter $\epsilon \ge 0$. This problem not only models concurrent RL [34, 16] as a special case by taking $\epsilon = 0$, but also captures richer multi-task RL settings when $\epsilon$ is nonzero. We study regret minimization for the $\epsilon$-MPERL problem, specifically:
1. We identify a problem complexity notion named subpar state-action pairs, which captures the amenability to information sharing among tasks in $\epsilon$-MPERL problem instances. As shown in the multi-task bandits literature (e.g., [43]), inter-task information sharing is not always helpful in reducing the players' collective regret. Subpar state-action pairs, intuitively speaking, are clearly suboptimal for all tasks; for such pairs, we can robustly take advantage of (possibly biased) data collected for other tasks to achieve lower regret on a given task.
2. In the setting where the dissimilarity parameter $\epsilon$ is known, we design a model-based algorithm, Multi-task-Euler (Algorithm 1), which builds upon state-of-the-art algorithms for learning single-task Markov decision processes (MDPs) [3, 46, 36], as well as algorithmic ideas of model transfer in RL [39]. Multi-task-Euler crucially utilizes the dissimilarity assumption to robustly take advantage of information sharing among tasks, and achieves regret upper bounds in terms of subpar state-action pairs, in both (value function suboptimality) gap-dependent and gap-independent forms. Specifically, compared with a baseline algorithm that does not utilize information sharing, Multi-task-Euler has a regret guarantee that: (1) is never worse, i.e., it avoids negative transfer [33]; (2) can be much superior when there is a large number of subpar state-action pairs.
3. We also present gap-dependent and gap-independent regret lower bounds for the $\epsilon$-MPERL problem in terms of subpar state-action pairs. These lower bounds nearly match the upper bounds when the episode length of the MDP is a constant. Together, the upper and lower bounds characterize the intrinsic complexity of the $\epsilon$-MPERL problem.
2 Preliminaries
Throughout this paper, we denote $[n] := \{1, \ldots, n\}$. For a set $W$ in a universe $U$, we use $\overline{W} := U \setminus W$ to denote its complement. Denote by $\Delta(\mathcal{X})$ the set of probability distributions over $\mathcal{X}$. For functions $f, g$, we use $f \lesssim g$ or $f = O(g)$ (resp. $f \gtrsim g$ or $f = \Omega(g)$) to denote that there exists some constant $C > 0$ such that $f \le C g$ (resp. $f \ge C g$), and use $f \eqsim g$ to denote that $f \lesssim g$ and $f \gtrsim g$ hold simultaneously. Define $a \wedge b := \min(a, b)$ and $a \vee b := \max(a, b)$. We use $\mathbb{E}$ to denote the expectation operator, and use $\mathbb{V}$ to denote the variance operator. Throughout, we use $\tilde{O}$ and $\tilde{\Omega}$ notation to hide polylogarithmic factors.
Multi-task RL in episodic MDPs.
We have a set of $M$ MDPs $\mathcal{M} = \{ M^{(p)} \}_{p=1}^{M}$, each associated with a player $p \in [M]$. Each MDP is regarded as a task. The MDPs share the same episode length $H$, finite state space $\mathcal{S}$, finite action space $\mathcal{A}$, and initial state distribution $\mu$. Let $s_\perp$ be a default terminal state that is not contained in $\mathcal{S}$. The transition probabilities $\{P^{(p)}\}_{p=1}^{M}$ and reward distributions of the players are not necessarily identical. We assume that the MDPs are layered¹, in that the state space can be partitioned into disjoint subsets $\mathcal{S} = \mathcal{S}_1 \cup \cdots \cup \mathcal{S}_H$, where $\mu$ is supported on $\mathcal{S}_1$, and for every $h \in [H]$, $(s,a) \in \mathcal{S}_h \times \mathcal{A}$, and every $p \in [M]$, $P^{(p)}(\cdot \mid s, a)$ is supported on $\mathcal{S}_{h+1}$; here, we define $\mathcal{S}_{H+1} := \{ s_\perp \}$. We denote by $S := |\mathcal{S}|$ the size of the state space, and by $A := |\mathcal{A}|$ the size of the action space.
¹ This is a standard assumption (see, e.g., [44]). It is worth noting that any episodic MDP (with possibly nonstationary transition and reward) can be converted to a layered MDP with stationary transition and reward, with the state space size being $H$ times the size of the original state space.
Interaction process.
The interaction process between the players and the environment is as follows: at the beginning, both the transition probabilities and the reward distributions are unknown to the players. For each episode $k \in [K]$, conditioned on the interaction history up to episode $k-1$, each player $p \in [M]$ independently interacts with its respective MDP $M^{(p)}$; specifically, player $p$ starts at state $s_1 \sim \mu$, and at every step (layer) $h \in [H]$, it chooses action $a_h \in \mathcal{A}$, transitions to next state $s_{h+1} \sim P^{(p)}(\cdot \mid s_h, a_h)$ and receives a stochastic immediate reward with mean $r^{(p)}(s_h, a_h)$; after all players have finished their $k$-th episode, they can communicate and share their interaction history. The goal of the players is to maximize their expected collective reward, i.e., the expected sum of rewards accumulated by all players over the $K$ episodes.
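To make the protocol concrete, the sketch below (a minimal illustration, not the paper's code) simulates one round of interaction: each player runs one episode in its own layered MDP, and the trajectories are then pooled. The container `LayeredMDP`, the function `run_round`, and the Gaussian reward noise are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code) of the interaction protocol: each
# player runs one episode in its own layered MDP, then the trajectories are
# pooled. LayeredMDP, run_round, and the Gaussian reward noise are assumptions.
import numpy as np
from dataclasses import dataclass

@dataclass
class LayeredMDP:
    """One player's task.

    P[h][s, a, s'] : probability of moving from state s in layer h to s' in layer h+1
    R[h][s, a]     : mean of the stochastic immediate reward at layer h
    mu             : initial state distribution over layer-1 states
    """
    P: list
    R: list
    mu: np.ndarray

def run_round(mdps, policies, rng):
    """One episode per player, run independently; histories are shared afterwards."""
    shared_histories = []
    for p, (mdp, pi) in enumerate(zip(mdps, policies)):
        s = int(rng.choice(len(mdp.mu), p=mdp.mu))                  # s_1 ~ mu
        traj = []
        for h in range(len(mdp.P)):
            a = int(pi[h][s])                                       # deterministic action at layer h
            r = mdp.R[h][s, a] + 0.1 * rng.standard_normal()        # assumed reward noise model
            s_next = int(rng.choice(mdp.P[h].shape[2], p=mdp.P[h][s, a]))
            traj.append((h, s, a, r, s_next))
            s = s_next
        shared_histories.append((p, traj))                          # communicated after the round
    return shared_histories
```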
Policy and value functions.
A deterministic, history-independent policy $\pi$ is a mapping from $\mathcal{S}$ to $\mathcal{A}$, which can be used by a player to make decisions in its respective MDP. For player $p$ and step $h \in [H]$, we use $V^{(p),\pi}_h$ and $Q^{(p),\pi}_h$ to denote its value and action-value functions, respectively. They satisfy the following recurrence, known as the Bellman equation:
$$Q^{(p),\pi}_h(s,a) = r^{(p)}(s,a) + \big\langle P^{(p)}(\cdot \mid s,a), V^{(p),\pi}_{h+1} \big\rangle, \qquad V^{(p),\pi}_h(s) = Q^{(p),\pi}_h(s, \pi(s)),$$
where we use the convention that $V^{(p),\pi}_{H+1}(s_\perp) = 0$, and for $h \in [H]$, $(s,a) \in \mathcal{S}_h \times \mathcal{A}$, $r^{(p)}(s,a)$ is the expected immediate reward of player $p$. For player $p$ and policy $\pi$, denote by $v^{(p),\pi} := \mathbb{E}_{s_1 \sim \mu}\big[ V^{(p),\pi}_1(s_1) \big]$ its expected reward.
For player $p$, we also define its optimal value function $V^{(p),*}_h$ and optimal action-value function $Q^{(p),*}_h$ using the Bellman optimality equation:
$$Q^{(p),*}_h(s,a) = r^{(p)}(s,a) + \big\langle P^{(p)}(\cdot \mid s,a), V^{(p),*}_{h+1} \big\rangle, \qquad V^{(p),*}_h(s) = \max_{a \in \mathcal{A}} Q^{(p),*}_h(s,a), \qquad (1)$$
where we again use the convention that $V^{(p),*}_{H+1}(s_\perp) = 0$. For player $p$, denote by $v^{(p),*} := \mathbb{E}_{s_1 \sim \mu}\big[ V^{(p),*}_1(s_1) \big]$ its optimal expected reward.
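As an illustration of the Bellman optimality equation (1), the following sketch computes the optimal value functions of a single player's MDP by backward induction over layers; it reuses the hypothetical `LayeredMDP` container from the sketch above.

```python
# Sketch: solving the Bellman optimality equation (1) by backward induction over
# layers, for one player's LayeredMDP as sketched above.
import numpy as np

def optimal_values(mdp):
    """Return per-layer optimal action-value and value functions (Q_star, V_star)."""
    H = len(mdp.P)
    Q_star, V_star = [None] * H, [None] * (H + 1)
    V_star[H] = np.zeros(mdp.P[H - 1].shape[2])           # value at the terminal layer is 0
    for h in reversed(range(H)):
        Q_star[h] = mdp.R[h] + mdp.P[h] @ V_star[h + 1]   # r(s,a) + E_{s'}[V*_{h+1}(s')]
        V_star[h] = Q_star[h].max(axis=1)                 # V*_h(s) = max_a Q*_h(s,a)
    return Q_star, V_star[:H]
```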
Given a policy $\pi$, as the functions $V^{(p),\pi}_h$ for different $h$'s are only defined on the respective layers $\mathcal{S}_h$, we "collate" the value functions and obtain a single value function $V^{(p),\pi} : \mathcal{S} \to \mathbb{R}$. Formally, for every $h \in [H]$ and $s \in \mathcal{S}_h$, $V^{(p),\pi}(s) := V^{(p),\pi}_h(s)$.
We define $Q^{(p),\pi}$, $V^{(p),*}$, and $Q^{(p),*}$ similarly. For player $p$, given its optimal action-value function $Q^{(p),*}$, any of its greedy policies $\pi(s) \in \operatorname{argmax}_{a \in \mathcal{A}} Q^{(p),*}(s,a)$ is optimal with respect to $M^{(p)}$.
Suboptimality gap.
For player $p$, we define the suboptimality gap of state-action pair $(s,a)$ as $\mathrm{gap}^{(p)}(s,a) := V^{(p),*}(s) - Q^{(p),*}(s,a)$. We define the minimum suboptimality gap of player $p$ as $\mathrm{gap}^{(p)}_{\min} := \min\{ \mathrm{gap}^{(p)}(s,a) : \mathrm{gap}^{(p)}(s,a) > 0 \}$, and the minimum suboptimality gap over all players as $\mathrm{gap}_{\min} := \min_{p \in [M]} \mathrm{gap}^{(p)}_{\min}$. For player $p$, define $Z^{(p),*} := \{ (s,a) : \mathrm{gap}^{(p)}(s,a) = 0 \}$ as the set of optimal state-action pairs with respect to $M^{(p)}$.
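A minimal sketch of these quantities, computed from the per-layer optimal action values produced by the backward induction above (function names are illustrative):

```python
# Sketch: suboptimality gaps and the set of optimal state-action pairs, computed
# from the per-layer optimal action values returned by optimal_values above.
import numpy as np

def gaps(Q_star):
    """gap_h(s, a) = V*_h(s) - Q*_h(s, a); one array of shape (S_h, A) per layer."""
    return [Q.max(axis=1, keepdims=True) - Q for Q in Q_star]

def min_positive_gap(gap_list, tol=1e-12):
    """Player's minimum nonzero gap (assumes at least one suboptimal pair exists)."""
    return float(np.concatenate([g[g > tol] for g in gap_list]).min())

def optimal_pairs(gap_list, tol=1e-12):
    """Boolean masks of the optimal state-action pairs Z*, one per layer."""
    return [g <= tol for g in gap_list]
```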
Performance metric.
We measure the performance of the players using their collective regret, i.e., over a total of $K$ episodes, how much extra reward they would have collected in expectation if they were executing their respective optimal policies from the beginning. Formally, suppose that in each episode $k \in [K]$, player $p$ executes policy $\pi^{(p)}_k$; then the collective regret of the players is defined as:
$$\mathrm{Reg}(K) := \sum_{k=1}^{K} \sum_{p=1}^{M} \Big( v^{(p),*} - v^{(p),\pi^{(p)}_k} \Big).$$
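For simulation or analysis purposes, the collective regret can be computed by exact policy evaluation on the (known) MDPs; the sketch below does so using the hypothetical `LayeredMDP` and `optimal_values` helpers introduced earlier. The learner itself never observes these quantities.

```python
# Sketch: collective regret via exact policy evaluation on the known MDPs
# (for simulation and analysis only; the learner never observes these values).
import numpy as np

def policy_value(mdp, pi):
    """Expected return of a deterministic layered policy pi (pi[h][s] is an action)."""
    H = len(mdp.P)
    V = np.zeros(mdp.P[H - 1].shape[2])
    for h in reversed(range(H)):
        Q = mdp.R[h] + mdp.P[h] @ V
        V = Q[np.arange(Q.shape[0]), pi[h]]
    return float(mdp.mu @ V)

def collective_regret(mdps, executed_policies):
    """executed_policies[k][p] is the policy player p executed in episode k."""
    v_star = [float(m.mu @ optimal_values(m)[1][0]) for m in mdps]
    return sum(v_star[p] - policy_value(mdps[p], executed_policies[k][p])
               for k in range(len(executed_policies)) for p in range(len(mdps)))
```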
Baseline: individual Strong-Euler.
A naive baseline for multi-task RL is to let each player run a separate RL algorithm without communication. For concreteness, we choose to let each player run the state-of-the-art Strong-Euler algorithm [36] (see also its precursor Euler [46]), which enjoys minimax gap-independent [3, 8] and gap-dependent regret guarantees, and we refer to this strategy as individual Strong-Euler. Specifically, as it is known that Strong-Euler has a single-task regret of $\tilde{O}(\sqrt{H^2 S A K})$, individual Strong-Euler has a collective regret of $\tilde{O}(M \sqrt{H^2 S A K})$. In addition, by a union bound and summing up the gap-dependent regret guarantees of Strong-Euler for the $M$ MDPs altogether, it can be checked that with probability at least $1-\delta$, individual Strong-Euler has a collective regret of order²
² The originally-stated gap-dependent regret bound of Strong-Euler ([36], Corollary 2.1) uses a slightly different notion of suboptimality gap, which takes an extra minimum over all steps. A close examination of their proof shows that Strong-Euler has regret bound (2) in layered MDPs. See also the remark on clipping in the appendix.
$$\tilde{O}\left( \sum_{p \in [M]} \left( \sum_{(s,a) \notin Z^{(p),*}} \frac{H^2}{\mathrm{gap}^{(p)}(s,a)} + \frac{|Z^{(p),*}|\, H^2}{\mathrm{gap}^{(p)}_{\min}} \right) \right). \qquad (2)$$
Our goal is to design multi-task RL algorithms that can achieve collective regret strictly lower than this baseline in both gap-dependent and gap-independent fashions when the tasks are similar.
Notion of similarity.
Throughout this paper, we will consider the following notion of similarity between MDPs in the multi-task episodic RL setting.
Definition 1.
A collection of MDPs $\{ M^{(p)} \}_{p=1}^{M}$ is said to be $\epsilon$-dissimilar for some $\epsilon \ge 0$, if for all $p, q \in [M]$, $h \in [H]$, and $(s,a) \in \mathcal{S}_h \times \mathcal{A}$,
$$\big| r^{(p)}(s,a) - r^{(q)}(s,a) \big| \le \epsilon \qquad \text{and} \qquad \big\| P^{(p)}(\cdot \mid s,a) - P^{(q)}(\cdot \mid s,a) \big\|_1 \le \frac{\epsilon}{H}.$$
If this happens, we call $\{ M^{(p)} \}_{p=1}^{M}$ an $\epsilon$-Multi-Player Episodic Reinforcement Learning (abbrev. $\epsilon$-MPERL) problem instance.
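The sketch below computes the smallest $\epsilon$ for which a collection of `LayeredMDP` instances is $\epsilon$-dissimilar, assuming the specific metrics in the reconstruction of Definition 1 above (reward differences in absolute value, transition rows in $\ell_1$ distance rescaled by $H$); both the function name and this exact metric choice are illustrative assumptions.

```python
# Sketch: smallest epsilon making a collection of LayeredMDPs epsilon-dissimilar,
# under the reconstructed Definition 1 (an assumption): reward gaps at most
# epsilon, and transition rows within epsilon / H in l1 distance.
import numpy as np
from itertools import combinations

def dissimilarity(mdps):
    H = len(mdps[0].P)
    eps = 0.0
    for mdp_p, mdp_q in combinations(mdps, 2):
        for h in range(H):
            eps = max(eps, float(np.abs(mdp_p.R[h] - mdp_q.R[h]).max()))
            # l1 distance between transition rows, rescaled by H per the assumed definition
            eps = max(eps, H * float(np.abs(mdp_p.P[h] - mdp_q.P[h]).sum(axis=2).max()))
    return eps
```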
If the MDPs in $\mathcal{M}$ are $0$-dissimilar, then they are identical by definition, and our interaction protocol degenerates to the concurrent RL protocol [34]. Our dissimilarity notion is complementary to those of [6, 16]: they require the MDPs to be either identical, or have well-separated parameters for at least one state-action pair; in contrast, our dissimilarity notion allows the MDPs to be nonidentical and arbitrarily close.
We have the following intuitive lemma that shows the closeness of the optimal value functions of different MDPs, in terms of the dissimilarity parameter $\epsilon$:
Lemma 2.
If $M^{(1)}, \ldots, M^{(M)}$ are $\epsilon$-dissimilar, then for every $p, q \in [M]$, $h \in [H]$, and $(s,a) \in \mathcal{S}_h \times \mathcal{A}$, $\big| Q^{(p),*}(s,a) - Q^{(q),*}(s,a) \big| \le O(H\epsilon)$; consequently, $\big| \mathrm{gap}^{(p)}(s,a) - \mathrm{gap}^{(q)}(s,a) \big| \le O(H\epsilon)$.
3 Algorithm
We now describe our main algorithm, Multi-task-Euler (Algorithm 1). Our model-based algorithm is built upon recent works on episodic RL that provide algorithms with sharp instance-dependent guarantees in the single-task setting [46, 36]. In a nutshell, for each episode $k$ and each player $p$, the algorithm performs optimistic value iteration to construct high-probability upper and lower bounds for the optimal value and action-value functions $V^{(p),*}$ and $Q^{(p),*}$, and uses them to guide its exploration and decision-making process.
Empirical estimates of model parameters.
For each player $p$, the construction of its value function bound estimates relies on empirical estimates of its transition probabilities and expected reward function. For both estimands, we use two estimators with complementary roles, which sit at two different points of the bias-variance tradeoff spectrum: one estimator uses only the player's own data (termed the individual estimate), which has large variance; the other estimator uses the data collected by all players (termed the aggregate estimate), which has lower variance but can easily be biased, as the transition probabilities and reward distributions are heterogeneous. Such an algorithmic idea of "model transfer", where one estimates the model of one task using data collected from other tasks, has appeared in prior works (e.g., [39]). Specifically, at the beginning of episode $k$, for every $h \in [H]$ and $(s,a) \in \mathcal{S}_h \times \mathcal{A}$, the algorithm has the empirical count of player $p$ encountering $(s,a)$, along with the total empirical count across all players, respectively:
$$n^{(p),k}(s,a) := \sum_{k' < k} \mathbb{1}\big\{ \text{player } p \text{ visits } (s,a) \text{ in episode } k' \big\}, \qquad n^{k}(s,a) := \sum_{p=1}^{M} n^{(p),k}(s,a). \qquad (3)$$
The individual and aggregate estimates of the immediate reward are defined as the corresponding empirical averages:
$$\hat{r}^{(p),k}(s,a) := \frac{\text{sum of rewards received by player } p \text{ at } (s,a) \text{ before episode } k}{n^{(p),k}(s,a)}, \qquad \hat{r}^{k}(s,a) := \frac{\text{sum of rewards received by all players at } (s,a) \text{ before episode } k}{n^{k}(s,a)}. \qquad (4)$$
Similarly, for every $h \in [H]$ and $(s,a,s') \in \mathcal{S}_h \times \mathcal{A} \times \mathcal{S}_{h+1}$, we also define the individual and aggregate estimates of the transition probability as:
$$\hat{P}^{(p),k}(s' \mid s,a) := \frac{n^{(p),k}(s,a,s')}{n^{(p),k}(s,a)}, \qquad \hat{P}^{k}(s' \mid s,a) := \frac{n^{k}(s,a,s')}{n^{k}(s,a)}, \qquad (5)$$
where $n^{(p),k}(s,a,s')$ and $n^{k}(s,a,s')$ denote the individual and total empirical counts of the transition $(s,a,s')$, defined analogously to (3).
If $n^{(p),k}(s,a) = 0$, we define $\hat{r}^{(p),k}(s,a) := 0$ and let $\hat{P}^{(p),k}(\cdot \mid s,a)$ be an arbitrary distribution over $\mathcal{S}_{h+1}$; and if $n^{k}(s,a) = 0$, we define $\hat{r}^{k}(s,a)$ and $\hat{P}^{k}(\cdot \mid s,a)$ analogously. The counts and reward estimates can be maintained by Multi-task-Euler efficiently in an incremental manner.
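A sketch of maintaining these counts and estimates incrementally from the shared histories is given below. It builds on the hypothetical structures from the earlier sketches; the class name `MultiTaskStats` is illustrative, and the zero-count convention here (placeholder zeros) differs slightly from the arbitrary-distribution convention in the text.

```python
# Sketch: incremental maintenance of the counts in (3) and the individual /
# aggregate estimates in (4)-(5), from the shared histories. The class name
# MultiTaskStats is hypothetical; zero-count entries keep placeholder zeros.
import numpy as np

class MultiTaskStats:
    def __init__(self, M, layer_sizes, A):
        # layer_sizes has length H + 1 (the last entry is the terminal layer).
        H = len(layer_sizes) - 1
        self.n_sa  = [np.zeros((M, layer_sizes[h], A)) for h in range(H)]
        self.r_sum = [np.zeros((M, layer_sizes[h], A)) for h in range(H)]
        self.n_sas = [np.zeros((M, layer_sizes[h], A, layer_sizes[h + 1])) for h in range(H)]

    def update(self, shared_histories):
        for p, traj in shared_histories:
            for h, s, a, r, s_next in traj:
                self.n_sa[h][p, s, a] += 1
                self.r_sum[h][p, s, a] += r
                self.n_sas[h][p, s, a, s_next] += 1

    def estimates(self, h):
        """Individual (per-player) and aggregate reward / transition estimates at layer h."""
        n_ind, n_agg = self.n_sa[h], self.n_sa[h].sum(axis=0)
        sas_agg = self.n_sas[h].sum(axis=0)
        r_ind = np.divide(self.r_sum[h], n_ind,
                          out=np.zeros_like(self.r_sum[h]), where=n_ind > 0)
        r_agg = np.divide(self.r_sum[h].sum(axis=0), n_agg,
                          out=np.zeros_like(n_agg), where=n_agg > 0)
        P_ind = np.divide(self.n_sas[h], n_ind[..., None],
                          out=np.zeros_like(self.n_sas[h]), where=n_ind[..., None] > 0)
        P_agg = np.divide(sas_agg, n_agg[..., None],
                          out=np.zeros_like(sas_agg), where=n_agg[..., None] > 0)
        return (n_ind, r_ind, P_ind), (n_agg, r_agg, P_agg)
```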
Constructing value function estimates via optimistic value iteration.
For each player $p$, based on these model parameter estimates, Multi-task-Euler performs optimistic value iteration to compute value function estimates for states at all layers (Algorithm 1). For the terminal layer $H+1$, trivially $V^{(p),*}(s_\perp) = 0$, so nothing needs to be done. For earlier layers $h = H, H-1, \ldots, 1$, Multi-task-Euler iteratively builds its value function estimates in a backward fashion. At the time of estimating values for layer $h$, the algorithm has already obtained optimal value estimates for layer $h+1$. Based on the Bellman optimality equation (1), Multi-task-Euler estimates $Q^{(p),*}$ on $\mathcal{S}_h \times \mathcal{A}$ using the model parameter estimates and its estimates of $V^{(p),*}$ on $\mathcal{S}_{h+1}$, namely the upper and lower bound estimates $\overline{V}^{(p)}$ and $\underline{V}^{(p)}$.
Specifically, Multi-task-Euler constructs estimates of $Q^{(p),*}(s,a)$ for all $(s,a) \in \mathcal{S}_h \times \mathcal{A}$ in two different ways. First, it uses the individual model estimates of player $p$ to construct $\overline{Q}^{(p)}_{\mathrm{ind}}$ and $\underline{Q}^{(p)}_{\mathrm{ind}}$, upper and lower bound estimates of $Q^{(p),*}$; this construction is reminiscent of Euler and Strong-Euler [46, 36], in that if we were only to use $\overline{Q}^{(p)}_{\mathrm{ind}}$ and $\underline{Q}^{(p)}_{\mathrm{ind}}$ as our optimal action-value function estimates, our algorithm would become individual Strong-Euler. The individual value function estimates are key to establishing Multi-task-Euler's fall-back guarantee, ensuring that it never performs worse than the individual Strong-Euler baseline. Second, it uses the aggregate model estimate to construct $\overline{Q}^{(p)}_{\mathrm{agg}}$ and $\underline{Q}^{(p)}_{\mathrm{agg}}$, also upper and lower bound estimates of $Q^{(p),*}$; this construction is unique to the multi-task learning setting, and is our new algorithmic contribution.
To ensure that $\overline{Q}^{(p)}_{\mathrm{ind}}$ and $\overline{Q}^{(p)}_{\mathrm{agg}}$ (resp. $\underline{Q}^{(p)}_{\mathrm{ind}}$ and $\underline{Q}^{(p)}_{\mathrm{agg}}$) are valid upper bounds (resp. lower bounds) of $Q^{(p),*}$, Multi-task-Euler adds bonus terms $b^{(p)}_{\mathrm{ind}}(s,a)$ and $b^{(p)}_{\mathrm{agg}}(s,a)$, respectively, in the optimistic value iteration process, to account for the estimation error of the model estimates against the true models; the aggregate bonus additionally accounts for the bias, controlled by the dissimilarity parameter $\epsilon$, incurred by pooling the other players' data. Specifically, both bonus terms comprise three parts,
$$b(s,a) = b_{\mathrm{rw}}(s,a) + b_{\mathrm{prob}}(s,a) + b_{\mathrm{str}}(s,a),$$
whose roles are described below.
The bonus terms altogether ensure strong optimism [36], i.e., the upper (resp. lower) bound estimates remain optimistic (resp. pessimistic) even with respect to the bootstrapped next-layer estimates:
$$\overline{Q}^{(p)}(s,a) \ge r^{(p)}(s,a) + \big\langle P^{(p)}(\cdot \mid s,a), \overline{V}^{(p)} \big\rangle, \qquad \underline{Q}^{(p)}(s,a) \le r^{(p)}(s,a) + \big\langle P^{(p)}(\cdot \mid s,a), \underline{V}^{(p)} \big\rangle. \qquad (6)$$
In short, strong optimism is a stronger form of optimism (the weaker requirement being that, for any $h \in [H]$ and $(s,a) \in \mathcal{S}_h \times \mathcal{A}$, $\overline{Q}^{(p)}(s,a) \ge Q^{(p),*}(s,a)$ and $\underline{Q}^{(p)}(s,a) \le Q^{(p),*}(s,a)$), which allows us to use the clipping lemma (Lemma B.6 of [36]; see also the clipping lemma in the appendix) to obtain sharp gap-dependent regret guarantees. The three parts of the bonus terms serve different purposes towards establishing (6):
1. The first component, $b_{\mathrm{rw}}$, accounts for the uncertainty in reward estimation: with probability at least $1-\delta$, it dominates the deviations of the individual and aggregate reward estimates from the true mean reward $r^{(p)}(s,a)$.
2. The second component, $b_{\mathrm{prob}}$, accounts for the uncertainty in estimating the expected next-layer optimal value $\langle P^{(p)}(\cdot \mid s,a), V^{(p),*} \rangle$: with probability at least $1-\delta$, it dominates the corresponding deviations of the estimates based on the individual and aggregate transition estimates.
3. The third component, $b_{\mathrm{str}}$, accounts for the lower-order terms needed for strong optimism: with probability at least $1-\delta$, it dominates the additional error incurred by bootstrapping with $\overline{V}^{(p)}$ and $\underline{V}^{(p)}$ in place of $V^{(p),*}$.
Based on the above concentration inequalities and the definitions of the bonus terms, it can be shown inductively that, with probability at least $1-\delta$, both $\overline{Q}^{(p)}_{\mathrm{ind}}$ and $\overline{Q}^{(p)}_{\mathrm{agg}}$ (resp. $\underline{Q}^{(p)}_{\mathrm{ind}}$ and $\underline{Q}^{(p)}_{\mathrm{agg}}$) are valid upper bounds (resp. lower bounds) of $Q^{(p),*}$.
Finally, observe that for any $h \in [H]$ and $(s,a) \in \mathcal{S}_h \times \mathcal{A}$, $Q^{(p),*}(s,a)$ lies in a known bounded range. By taking the intersection of all the confidence bounds it has obtained, together with this range constraint, Multi-task-Euler constructs its final upper and lower bound estimates $\overline{Q}^{(p)}$ and $\underline{Q}^{(p)}$ for $Q^{(p),*}$. Similar ideas of using data from multiple sources to construct confidence intervals and guide exploration have been used by [37, 43] for multi-task noncontextual and contextual bandits. Using the relationship between the optimal values and the optimal action values, namely $V^{(p),*}(s) = \max_{a \in \mathcal{A}} Q^{(p),*}(s,a)$, Multi-task-Euler also constructs upper and lower bound estimates $\overline{V}^{(p)}$ and $\underline{V}^{(p)}$ for $V^{(p),*}$.
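The sketch below illustrates this optimistic backup for one player, combining the individual and aggregate estimates, adding bonuses, and intersecting and clipping the resulting confidence bounds. The bonuses shown are simplified Hoeffding-style placeholders rather than the paper's exact Bernstein-style bonuses, and the additive $\epsilon$-dependent term in the aggregate bonus is likewise only schematic.

```python
# Schematic sketch of the optimistic backup for one player p at episode k,
# combining individual and aggregate estimates, adding bonuses, and intersecting
# the resulting confidence bounds. The bonuses are simplified Hoeffding-style
# placeholders (not the paper's Bernstein-style bonuses); the eps * horizon term
# in the aggregate bonus schematically accounts for the bias of pooled data.
import numpy as np

def optimistic_backup(stats, p, H, eps, delta, K):
    log_term = np.log(max(2.0, K / delta))
    V_up = np.zeros(stats.n_sas[H - 1].shape[3])         # bounds at the terminal layer are 0
    V_lo = np.zeros_like(V_up)
    Q_up_all, V_up_all = [None] * H, [None] * H
    for h in reversed(range(H)):
        (n_ind, r_ind, P_ind), (n_agg, r_agg, P_agg) = stats.estimates(h)
        horizon = H - h                                   # crude range bound (rewards assumed in [0, 1])
        b_ind = horizon * np.sqrt(log_term / np.maximum(n_ind[p], 1.0))
        b_agg = horizon * np.sqrt(log_term / np.maximum(n_agg, 1.0)) + eps * horizon
        q_up_ind = r_ind[p] + P_ind[p] @ V_up + b_ind     # individual upper / lower bounds
        q_lo_ind = r_ind[p] + P_ind[p] @ V_lo - b_ind
        q_up_agg = r_agg + P_agg @ V_up + b_agg           # aggregate upper / lower bounds
        q_lo_agg = r_agg + P_agg @ V_lo - b_agg
        # Intersect all confidence bounds and clip to the known value range.
        Q_up = np.clip(np.minimum(q_up_ind, q_up_agg), 0.0, horizon)
        Q_lo = np.clip(np.maximum(q_lo_ind, q_lo_agg), 0.0, horizon)
        V_up, V_lo = Q_up.max(axis=1), Q_lo.max(axis=1)
        Q_up_all[h], V_up_all[h] = Q_up, V_up
    return Q_up_all, V_up_all
```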
Executing optimistic policies.
At each episode $k$, for each player $p$, its optimal action-value function upper bound estimate $\overline{Q}^{(p)}$ induces a greedy policy $\pi^{(p)}_k(s) \in \operatorname{argmax}_{a \in \mathcal{A}} \overline{Q}^{(p)}(s,a)$; the player then executes this policy in this episode to collect a new trajectory and uses it to update its individual model parameter estimates. After all players finish their episode $k$, the algorithm also updates its aggregate model parameter estimates using Equations (3), (4) and (5), and continues to the next episode.
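Putting the pieces together, a schematic outer loop in the spirit of Multi-task-Euler (a simplification, not a faithful rendering of Algorithm 1) might look as follows, reusing the hypothetical helpers sketched earlier.

```python
# Putting the pieces together: a schematic outer loop in the spirit of
# Multi-task-Euler (a simplification, not a faithful rendering of Algorithm 1),
# reusing the hypothetical helpers sketched earlier.
import numpy as np

def run_multitask(mdps, K, eps, delta, rng):
    M, H = len(mdps), len(mdps[0].P)
    A = mdps[0].P[0].shape[1]
    layer_sizes = [mdps[0].P[h].shape[0] for h in range(H)] + [mdps[0].P[H - 1].shape[2]]
    stats = MultiTaskStats(M, layer_sizes, A)
    for _ in range(K):
        policies = []
        for p in range(M):
            Q_up, _ = optimistic_backup(stats, p, H, eps, delta, K)
            policies.append([Q_up[h].argmax(axis=1) for h in range(H)])  # greedy w.r.t. upper bounds
        stats.update(run_round(mdps, policies, rng))                     # share histories, re-estimate
    return stats
```

Under these assumptions, calling `run_multitask(mdps, K=1000, eps=0.05, delta=0.01, rng=np.random.default_rng(0))` on a hand-built list of `LayeredMDP` instances simulates the full protocol.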
4 Performance guarantees
Before stating the guarantees of Algorithm 1, we define an instance-dependent complexity measure that characterizes the amenability to information sharing.
Definition 3.
The set of subpar state-action pairs is defined as:
$$\mathcal{I}_\epsilon := \Big\{ (s,a) \in \mathcal{S} \times \mathcal{A} : \exists p \in [M], \ \mathrm{gap}^{(p)}(s,a) \ge C_0 H \epsilon \Big\},$$
where $C_0$ is a sufficiently large absolute constant, and we recall that $\mathrm{gap}^{(p)}(s,a) = V^{(p),*}(s) - Q^{(p),*}(s,a)$.
Definition 3 generalizes the notion of subpar arms defined for multi-task multi-armed bandit learning [43] in two ways: first, it is defined with respect to state-action pairs as opposed to actions only; second, in RL, suboptimality gaps depend on the optimal value functions, which in turn depend on both the immediate rewards and the subsequent long-term returns.
To ease our later presentation, we also present the following lemma.
Lemma 4.
For any $\epsilon \ge 0$, we have that: (1) for all $p \in [M]$, $\mathcal{I}_\epsilon \cap Z^{(p),*} = \emptyset$, where we recall that $Z^{(p),*}$ is the set of optimal state-action pairs with respect to $M^{(p)}$; (2) for all $(s,a) \in \mathcal{I}_\epsilon$ and $p, q \in [M]$, $\mathrm{gap}^{(p)}(s,a)$ and $\mathrm{gap}^{(q)}(s,a)$ are within a constant factor of each other.
The lemma follows directly from Lemma 2; its proof can be found in the Appendix along with the proofs of the following theorems. Item 1 implies that any subpar state-action pair is suboptimal for all players. In other words, for every player $p$, the state-action space can be partitioned into three disjoint sets: the optimal pairs $Z^{(p),*}$, the subpar pairs $\mathcal{I}_\epsilon$, and the remaining suboptimal but non-subpar pairs $\overline{Z^{(p),*} \cup \mathcal{I}_\epsilon}$. Item 2 implies that for any subpar $(s,a)$, its suboptimality gaps with respect to all players are within a constant factor of each other.
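For illustration, the sketch below computes a subpar set from the true gaps, assuming the threshold form $C_0 H \epsilon$ reconstructed in Definition 3 with an arbitrary illustrative constant; by Lemma 4, every pair flagged this way is suboptimal for every player.

```python
# Sketch: the subpar set of Definition 3 computed from the true gaps, assuming
# the reconstructed threshold C0 * H * eps (C0 = 4 here is an arbitrary
# illustrative constant, not the paper's).
import numpy as np

def subpar_pairs(gap_lists, H, eps, C0=4.0):
    """gap_lists[p][h] has shape (S_h, A); returns one boolean mask per layer."""
    masks = []
    for h in range(len(gap_lists[0])):
        per_player = np.stack([g[h] for g in gap_lists])        # shape (M, S_h, A)
        masks.append((per_player >= C0 * H * eps).any(axis=0))  # large gap for some player
    return masks
```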
4.1 Upper bounds
With the above definitions, we are now ready to present the performance guarantees of Algorithm 1. We first present a gap-independent collective regret bound of Multi-task-Euler.
Theorem 5 (Gap-independent bound).
If $M^{(1)}, \ldots, M^{(M)}$ are $\epsilon$-dissimilar, then Multi-task-Euler satisfies that with probability at least $1-\delta$,
$$\mathrm{Reg}(K) \le \tilde{O}\Big( \sqrt{M H^2 |\mathcal{I}_\epsilon| K} + M \sqrt{H^2 |\overline{\mathcal{I}_\epsilon}| K} + \text{lower-order terms} \Big).$$
We again compare this regret upper bound with individual Strong-Euler's gap-independent regret bound. Recall that individual Strong-Euler guarantees that with probability at least $1-\delta$,
$$\mathrm{Reg}(K) \le \tilde{O}\Big( M \sqrt{H^2 S A K} + \text{lower-order terms} \Big).$$
We focus on the comparison of the leading terms, i.e., the $\tilde{O}(\sqrt{K})$ terms. As $SA = |\mathcal{I}_\epsilon| + |\overline{\mathcal{I}_\epsilon}|$, we see that an improvement in the collective regret bound comes from the contributions of subpar state-action pairs: the term $M\sqrt{H^2 |\mathcal{I}_\epsilon| K}$ is reduced to $\sqrt{M H^2 |\mathcal{I}_\epsilon| K}$, a factor of $\sqrt{M}$ improvement. Moreover, if $|\overline{\mathcal{I}_\epsilon}| = o(SA)$ and $M = \omega(1)$, Multi-task-Euler provides a regret bound of lower order than individual Strong-Euler.
We next present a gap-dependent upper bound on its collective regret.
Theorem 6 (Gap-dependent upper bound).
If $M^{(1)}, \ldots, M^{(M)}$ are $\epsilon$-dissimilar, then Multi-task-Euler satisfies, with probability at least $1-\delta$,
$$\mathrm{Reg}(K) \le \tilde{O}\left( \sum_{(s,a) \in \mathcal{I}_\epsilon} \frac{H^2}{\min_{p \in [M]} \mathrm{gap}^{(p)}(s,a)} + \sum_{p \in [M]} \sum_{(s,a) \in \overline{\mathcal{I}_\epsilon} \setminus Z^{(p),*}} \frac{H^2}{\mathrm{gap}^{(p)}(s,a)} + \sum_{p \in [M]} \frac{|Z^{(p),*}|\, H^2}{\mathrm{gap}_{\min}} \right),$$
where we recall that $\mathrm{gap}_{\min} = \min_{p \in [M]} \mathrm{gap}^{(p)}_{\min}$, and $Z^{(p),*}$ is the set of optimal state-action pairs with respect to $M^{(p)}$.
Comparing this regret bound with the one obtained by the individual Strong-Euler baseline, recall that by summing up the gap-dependent regret guarantees of Strong-Euler for all players $p \in [M]$, and taking a union bound over all $p$, individual Strong-Euler guarantees a collective regret bound of the form (2), which holds with probability at least $1-\delta$. We again focus on comparing the leading terms, i.e., the terms that have polynomial dependence on the suboptimality gaps in the above two bounds. It can be seen that an improvement in the regret bound of Multi-task-Euler comes from the contributions of the subpar state-action pairs: for each $(s,a) \in \mathcal{I}_\epsilon$, the corresponding contribution is reduced from $\sum_{p \in [M]} \frac{H^2}{\mathrm{gap}^{(p)}(s,a)}$ to $\frac{H^2}{\min_{p \in [M]} \mathrm{gap}^{(p)}(s,a)}$, a factor of $\Omega(M)$ improvement (recall from Lemma 4 that the gaps of a subpar pair are within a constant factor of each other across players). Recent work of [44] has shown that in the single-task setting, it is possible to replace the additive dependence on $|Z^{(p),*}| / \mathrm{gap}_{\min}$ with a sharper problem-dependent complexity term that depends on the multiplicity of optimal state-action pairs. We leave improving the guarantee of Theorem 6 in a similar manner as an interesting open problem.
Key to the proofs of Theorems 5 and 6 is a new bound on the surplus [36] of the value function estimates. Our new surplus bound is a minimum of two terms: one depends on the usual state-action visitation counts of player $p$, while the other depends on the task dissimilarity parameter $\epsilon$ and the state-action visitation counts of all players. Detailed proofs can be found in the appendix.
4.2 Lower bounds
To complement the above upper bounds, we now present gap-dependent and gap-independent regret lower bounds that also depend on our subpar state-action pair notion. Our lower bounds are inspired by regret lower bounds for episodic RL [36, 8] and multi-task bandits [43].
Theorem 7 (Gap-independent lower bound).
For any , , , , , and with and , there exists some that satisfies: for any algorithm, there exists an $\epsilon$-MPERL problem instance with $S$ states, $A$ actions, $M$ players and an episode length of $H$ such that , and
We also present a gap-dependent lower bound. Before that, we first formally define the notion of sublinear regret algorithms: for any fixed $\epsilon \ge 0$, we say that an algorithm is a sublinear regret algorithm for the $\epsilon$-MPERL problem if there exist some $C > 0$ (that possibly depends on the state-action space, the number of players, and $H$) and some $\alpha < 1$ such that for all $K$ and all $\epsilon$-MPERL environments, $\mathbb{E}[\mathrm{Reg}(K)] \le C K^{\alpha}$.
Theorem 8 (Gap-dependent lower bound).
Fix $\epsilon \ge 0$. For any , , , , with , let ; and let be any set of values that satisfies: (1) each , (2) for every , there exists at least one action such that , and (3) for every and , . There exists an $\epsilon$-MPERL problem instance with $S$ states, $A$ actions, $M$ players and an episode length of $H$, such that , for all , and
for this problem instance, any sublinear regret algorithm for the $\epsilon$-MPERL problem must satisfy:
Comparing the lower bounds with Multi-task-Euler's regret upper bounds in Theorems 5 and 6, we see that the upper and lower bounds nearly match for any constant $H$. When $H$ is large, a key difference between the upper and lower bounds is that they involve subpar sets defined with different $H$-dependent thresholds. We conjecture that our upper bounds can be improved to match the subpar set appearing in the lower bounds—our analysis uses a clipping trick similar to [36], which may be the reason for the suboptimal dependence on $H$. We leave closing this gap as an open question.
5 Related Work
Regret minimization for MDPs.
Our work belongs to the literature of regret minimization for MDPs, e.g., [5, 18, 8, 3, 9, 19, 10, 46, 36, 49, 45, 44]. In the episodic setting, [3, 10, 46, 36, 49] achieve minimax regret bounds for general stationary MDPs. Furthermore, the Euler algorithm [46] achieves adaptive problem-dependent regret guarantees when the total reward within an episode is small or when the environmental norm of the MDP is small. [36] refines Euler, proposing Strong-Euler that provides more fine-grained gap-dependent regret guarantees. [45, 44] show that the optimistic Q-learning algorithm [19] and its variants can also achieve gap-dependent logarithmic regret guarantees. Remarkably, [44] achieves a regret bound that improves over that of [36], in that it replaces the dependence on the number of optimal state-action pairs with the number of non-unique state-action pairs.
Transfer and lifelong learning for RL.
A considerable portion of related works concerns transfer learning for RL tasks (see [40, 24, 50] for surveys from different angles), and many studies investigate a batch setting: given some source tasks and target tasks, transfer learning agents have access to batch data collected for the source tasks (and sometimes for the target tasks as well). In this setting, model-based approaches have been explored in e.g., [39]; theoretical guarantees for transfer of samples across tasks have been established in e.g., [25, 41]. Similarly, sequential transfer has been studied under the framework of lifelong RL in e.g., [38, 1, 15, 22]—in this setting, an agent faces a sequence of RL tasks and aims to take advantage of knowledge gained from previous tasks for better performance in future tasks; in particular, analyses on the sample complexity of transfer learning algorithms are presented in [6, 29] under the assumption that an upper bound on the total number of unique (and well-separated) RL tasks is known. We note that, in contrast, we study an online setting in which no prior data are available and multiple RL tasks are learned concurrently by RL agents.
Concurrent RL.
Data sharing between multiple RL agents that learn concurrently has also been investigated in the literature. For example, in [20, 35, 16, 12], a group of agents interact in parallel with identical environments. Another setting is studied in [16], in which agents solve different RL tasks (MDPs); however, similar to [6, 29], it is assumed that there is a finite number of unique tasks, and different tasks are well-separated, i.e., there is a minimum gap. In this work, we assume that players face similar but not necessarily identical MDPs, and we do not assume a minimum gap. [17] study multi-task RL with linear function approximation with representation transfer, where it is assumed that the optimal value functions of all tasks are from a low dimensional linear subspace. Our setting and results are most similar to [32] and [13]. [32] study concurrent exploration in similar MDPs with continuous states in the PAC setting; however, their PAC guarantee does not hold for target error rate arbitrarily close to zero; in contrast, our algorithm has a fall-back guarantee, in that it always has a sublinear regret. Concurrent RL from similar linear MDPs has also been recently studied in [13]: under the assumption of small heterogeneity between different MDPs (a setting very similar to ours), the provided regret guarantee involves a term that is linear in the number of episodes, whereas our algorithm in this paper always has a sublinear regret; concurrent RL under the assumption of large heterogeneity is also studied in that work, but additional contextual information is assumed to be available for the players to ensure a sublinear regret.
Other related topics and models.
In many multi-agent RL models [47, 31], a set of learning agents interact with a common environment and have shared global states; in particular, [48] study the setting with heterogeneous reward distributions, and provide convergence guarantees for two policy gradient-based algorithms. In contrast, the learning agents in our setting interact with separate environments. Multi-agent bandits with similar, heterogeneous reward distributions are investigated in [37, 43]; here, we generalize their multi-task bandit problem setting to the episodic MDP setting.
6 Conclusion and Future Directions
In this paper, we generalize the multi-task bandit learning framework in [43] and formulate a multi-task concurrent RL problem, in which tasks are similar but not necessarily identical. We provide a provably efficient model-based algorithm that takes advantage of knowledge transfer between different tasks. Our instance-dependent regret upper and lower bounds formalize the intuition that subpar state-action pairs are amenable to information sharing among tasks.
There still remain gaps between our upper and lower bounds, which can be closed by either a finer analysis or a better algorithm: first, the subpar set appearing in the upper bounds does not match the one appearing in the lower bounds when $H$ is large; second, the gap-dependent upper bound has a higher-order dependence on $H$ than the gap-dependent lower bound; third, the additive dependence on the number of optimal state-action pairs can potentially be removed by new algorithmic ideas [44].
Furthermore, one major obstacle to deploying our algorithm in practice is its requirement of knowledge of $\epsilon$; an interesting avenue is to apply model selection strategies from bandits and RL to achieve adaptivity to unknown $\epsilon$. Another interesting future direction is to consider more general parameter transfer for online RL, for example, in the context of function approximation.
7 Acknowledgements
We thank Kamalika Chaudhuri for helpful initial discussions, and thank Akshay Krishnamurthy and Tongyi Cao for discussing the applicability of adaptive RL in metric spaces to the multitask RL problem studied in this paper. CZ acknowledges startup funding support from the University of Arizona. ZW thanks the National Science Foundation under IIS 1915734 and CCF 1719133 for research support.
References
- [1] David Abel, Yuu Jinnai, Sophie Yue Guo, George Konidaris, and Michael Littman. Policy and value transfer in lifelong reinforcement learning. In International Conference on Machine Learning, pages 20–29. PMLR, 2018.
- [2] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Tuning bandit algorithms in stochastic environments. In International conference on algorithmic learning theory, pages 150–165. Springer, 2007.
- [3] Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In International Conference on Machine Learning, pages 263–272. PMLR, 2017.
- [4] Peter Bartlett, Varsha Dani, Thomas Hayes, Sham Kakade, Alexander Rakhlin, and Ambuj Tewari. High-probability regret bounds for bandit online linear optimization. In Proceedings of the 21st Annual Conference on Learning Theory-COLT 2008, pages 335–342. Omnipress, 2008.
- [5] Peter L Bartlett and Ambuj Tewari. Regal: a regularization based algorithm for reinforcement learning in weakly communicating mdps. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35–42, 2009.
- [6] Emma Brunskill and Lihong Li. Sample complexity of multi-task reinforcement learning. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, pages 122–131, 2013.
- [7] Rich Caruana. Multitask learning. Machine learning, 28(1):41–75, 1997.
- [8] Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 2, pages 2818–2826, 2015.
- [9] Christoph Dann, Tor Lattimore, and Emma Brunskill. Unifying pac and regret: uniform pac bounds for episodic reinforcement learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 5717–5727, 2017.
- [10] Christoph Dann, Lihong Li, Wei Wei, and Emma Brunskill. Policy certificates: Towards accountable reinforcement learning. In International Conference on Machine Learning, pages 1507–1516. PMLR, 2019.
- [11] Carlo D’Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, and Jan Peters. Sharing knowledge in multi-task deep reinforcement learning. In International Conference on Learning Representations, 2020.
- [12] Maria Dimakopoulou and Benjamin Van Roy. Coordinated exploration in concurrent reinforcement learning. In International Conference on Machine Learning, pages 1271–1279. PMLR, 2018.
- [13] Abhimanyu Dubey and Alex Pentland. Provably efficient cooperative multi-agent reinforcement learning with function approximation. arXiv preprint arXiv:2103.04972, 2021.
- [14] David A Freedman. On tail probabilities for martingales. the Annals of Probability, pages 100–118, 1975.
- [15] Francisco M Garcia and Philip S Thomas. A meta-mdp approach to exploration for lifelong reinforcement learning. arXiv preprint arXiv:1902.00843, 2019.
- [16] Zhaohan Guo and Emma Brunskill. Concurrent pac rl. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015.
- [17] Jiachen Hu, Xiaoyu Chen, Chi Jin, Lihong Li, and Liwei Wang. Near-optimal representation learning for linear bandits and linear rl. arXiv preprint arXiv:2102.04132, 2021.
- [18] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(4), 2010.
- [19] Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, and Michael I Jordan. Is q-learning provably efficient? In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 4868–4878, 2018.
- [20] R Matthew Kretchmar. Parallel reinforcement learning. In The 6th World Conference on Systemics, Cybernetics, and Informatics. Citeseer, 2002.
- [21] Alyssa Kubota, Emma IC Peterson, Vaishali Rajendren, Hadas Kress-Gazit, and Laurel D Riek. Jessie: Synthesizing social robot behaviors for personalized neurorehabilitation and beyond. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pages 121–130, 2020.
- [22] Nicholas C Landolfi, Garrett Thomas, and Tengyu Ma. A model-based approach for sample-efficient multi-task reinforcement learning. arXiv preprint arXiv:1907.04964, 2019.
- [23] Tor Lattimore and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020.
- [24] Alessandro Lazaric. Transfer in reinforcement learning: a framework and a survey. In Reinforcement Learning, pages 143–173. Springer, 2012.
- [25] Alessandro Lazaric and Marcello Restelli. Transfer from multiple mdps. In Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc., 2011.
- [26] Alessandro Lazaric, Marcello Restelli, and Andrea Bonarini. Transfer of samples in batch reinforcement learning. In Proceedings of the 25th international conference on Machine learning, pages 544–551, 2008.
- [27] Xinle Liang, Yang Liu, Tianjian Chen, Ming Liu, and Qiang Yang. Federated transfer reinforcement learning for autonomous driving. arXiv preprint arXiv:1910.06001, 2019.
- [28] Boyi Liu, Lujia Wang, and Ming Liu. Lifelong federated reinforcement learning: a learning architecture for navigation in cloud robotic systems. IEEE Robotics and Automation Letters, 4(4):4555–4562, 2019.
- [29] Yao Liu, Zhaohan Guo, and Emma Brunskill. Pac continuous state online multitask reinforcement learning with identification. In Proceedings of the 2016 International Conference on Autonomous Agents and Multiagent Systems, AAMAS ’16, page 438–446, 2016.
- [30] Andreas Maurer and Massimiliano Pontil. Empirical bernstein bounds and sample variance penalization. COLT, 2009.
- [31] Afshin OroojlooyJadid and Davood Hajinezhad. A review of cooperative multi-agent deep reinforcement learning. arXiv preprint arXiv:1908.03963, 2019.
- [32] Jason Pazis and Ronald Parr. Efficient pac-optimal exploration in concurrent, continuous state mdps with delayed updates. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
- [33] Michael T. Rosenstein, Zvika Marx, Leslie Pack Kaelbling, and Thomas G. Dietterich. To transfer or not to transfer. In NIPS 2005 workshop on transfer learning, 2005.
- [34] David Silver, Leonard Newnham, David Barker, Suzanne Weller, and Jason McFall. Concurrent reinforcement learning from customer interactions. In International conference on machine learning, pages 924–932. PMLR, 2013.
- [35] David Silver, Leonard Newnham, David Barker, Suzanne Weller, and Jason McFall. Concurrent reinforcement learning from customer interactions. In International conference on machine learning, pages 924–932. PMLR, 2013.
- [36] Max Simchowitz and Kevin Jamieson. Non-asymptotic gap-dependent regret bounds for tabular mdps. arXiv preprint arXiv:1905.03814, 2019.
- [37] Marta Soare, Ouais Alsharif, Alessandro Lazaric, and Joelle Pineau. Multi-task linear bandits. NIPS2014 Workshop on Transfer and Multi-task Learning : Theory meets Practice, 2014.
- [38] Fumihide Tanaka and Masayuki Yamamura. Multitask reinforcement learning on the distribution of mdps. In Proceedings 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, volume 3, pages 1108–1113. IEEE, 2003.
- [39] Matthew E Taylor, Nicholas K Jong, and Peter Stone. Transferring instances for model-based reinforcement learning. In Joint European conference on machine learning and knowledge discovery in databases, pages 488–505. Springer, 2008.
- [40] Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(7), 2009.
- [41] Andrea Tirinzoni, Mattia Salvini, and Marcello Restelli. Transfer of samples in policy search via multiple importance sampling. In International Conference on Machine Learning, pages 6264–6274. PMLR, 2019.
- [42] Konstantinos Tsiakas, Cheryl Abellanoza, and Fillia Makedon. Interactive learning and adaptation for robot assisted therapy for people with dementia. In Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments, pages 1–4, 2016.
- [43] Zhi Wang, Chicheng Zhang, Manish Kumar Singh, Laurel Riek, and Kamalika Chaudhuri. Multitask bandit learning through heterogeneous feedback aggregation. In International Conference on Artificial Intelligence and Statistics, pages 1531–1539. PMLR, 2021.
- [44] Haike Xu, Tengyu Ma, and Simon S Du. Fine-grained gap-dependent bounds for tabular mdps via adaptive multi-step bootstrap. arXiv preprint arXiv:2102.04692, 2021.
- [45] Kunhe Yang, Lin Yang, and Simon Du. Q-learning with logarithmic regret. In International Conference on Artificial Intelligence and Statistics, pages 1576–1584. PMLR, 2021.
- [46] Andrea Zanette and Emma Brunskill. Tighter problem-dependent regret bounds in reinforcement learning without domain knowledge using value function bounds. In International Conference on Machine Learning, pages 7304–7312. PMLR, 2019.
- [47] Kaiqing Zhang, Zhuoran Yang, and Tamer Başar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. arXiv preprint arXiv:1911.10635, 2019.
- [48] Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, and Tamer Basar. Fully decentralized multi-agent reinforcement learning with networked agents. In International Conference on Machine Learning, pages 5872–5881. PMLR, 2018.
- [49] Zihan Zhang, Yuan Zhou, and Xiangyang Ji. Almost optimal model-free reinforcement learning via reference-advantage decomposition. Advances in Neural Information Processing Systems, 33, 2020.
- [50] Zhuangdi Zhu, Kaixiang Lin, and Jiayu Zhou. Transfer learning in deep reinforcement learning: A survey. arXiv preprint arXiv:2009.07888, 2020.
- [51] Hankz Hankui Zhuo, Wenfeng Feng, Qian Xu, Qiang Yang, and Yufeng Lin. Federated reinforcement learning. arXiv preprint arXiv:1901.08277, 2019.