Reward-Poisoning Attacks on Offline Multi-Agent Reinforcement Learning
Abstract
In offline multi-agent reinforcement learning (MARL), agents estimate policies from a given dataset. We study reward-poisoning attacks in this setting where an exogenous attacker modifies the rewards in the dataset before the agents see the dataset. The attacker wants to guide each agent into a nefarious target policy while minimizing the norm of the reward modification. Unlike attacks on single-agent RL, we show that the attacker can install the target policy as a Markov Perfect Dominant Strategy Equilibrium (MPDSE), which rational agents are guaranteed to follow. This attack can be significantly cheaper than separate single-agent attacks. We show that the attack works on various MARL agents including uncertainty-aware learners, and we exhibit linear programs to efficiently solve the attack problem. We also study the relationship between the structure of the datasets and the minimal attack cost. Our work paves the way for studying defense in offline MARL.
1 Introduction
Multi-agent reinforcement learning (MARL) has achieved tremendous empirical success across a variety of tasks such as autonomous driving, cooperative robotics, economic policy-making, and video games. In MARL, several agents interact with each other and the underlying environment, and each aims to optimize its individual long-term reward (Zhang, Yang, and Başar 2021). Such problems are often formulated under the framework of Markov Games (Shapley 1953), which generalizes the Markov Decision Process model from single-agent RL. In offline MARL, the agents aim to learn a good policy by exploiting a pre-collected dataset without further interactions with the environment or other agents (Pan et al. 2022; Jiang and Lu 2021; Cui and Du 2022; Zhong et al. 2022). The optimal solution in MARL typically involves equilibrium concepts.
While the above empirical success is encouraging, MARL algorithms are susceptible to data-poisoning attacks: the agents can reach the wrong equilibria if an exogenous attacker manipulates the feedback they receive. For example, a third-party attacker may want to interfere with traffic to cause autonomous vehicles to behave abnormally; teach robots an incorrect procedure so that they fail at certain tasks; misinform economic agents about the state of the economy and guide them to make irrational investment or saving decisions; or cause the non-player characters in a video game to behave improperly to benefit certain human players. In this paper, we study the security threat posed by reward-poisoning attacks on offline MARL. Here, the attacker wants the agents to learn a target policy $\pi^\dagger$ of the attacker's choosing ($\pi^\dagger$ does not need to be an equilibrium in the original Markov Game). Meanwhile, the attacker wants to minimize the amount of dataset manipulation to avoid detection and to avoid accruing a high cost. This paper studies optimal offline MARL reward-poisoning attacks. Our work serves as a first step toward eventual defense against reward-poisoning attacks.
Our Contributions
We introduce reward-poisoning attacks in offline MARL. We show that any attack that reduces to attacking single-agent RL separately must be suboptimal. Consequently, new innovations are necessary to attack effectively. We present a reward-poisoning framework that guarantees the target policy $\pi^\dagger$ becomes a Markov Perfect Dominant Strategy Equilibrium (MPDSE) for the underlying Markov Game. Since any rational agent will follow an MPDSE if it exists, this ensures the agents adopt the target policy $\pi^\dagger$. We also show the attack can be efficiently constructed using a linear program.
The attack framework has several important features. First, it is effective against a large class of offline MARL learners rather than a specific learning algorithm. Second, the framework allows partially decentralized agents who can only access their own individual rewards rather than the joint reward vectors of all agents. Lastly, the framework makes only a minimal rationality assumption on the learners: they will not take dominated actions.
We also give interpretable bounds on the minimal cost to poison an arbitrary dataset. These bounds relate the minimal attack cost to the structure of the underlying Markov Game. Using these bounds, we derive classes of extremal games that are especially cheap or expensive for the attacker to poison. These results show which games may be more susceptible to an attacker, while also giving insight to the structure of multi-agent attacks.
In the right hands, our framework could be used by a benevolent entity to coordinate agents in a way that improves social welfare. However, a malicious attacker could exploit the framework to harm learners and only benefit themselves. Consequently, our work paves the way for future study of MARL defense algorithms.
Related Work
Online Reward-Poisoning:
The reward-poisoning problem has been studied in various settings, including online single-agent reinforcement learners (Banihashem et al. 2022; Huang and Zhu 2019; Liu and Lai 2021; Rakhsha et al. 2021a, b, 2020; Sun, Huo, and Huang 2020; Zhang et al. 2020), as well as online bandits (Bogunovic et al. 2021; Garcelon et al. 2020; Guan et al. 2020; Jun et al. 2018; Liu and Shroff 2019; Lu, Wang, and Zhang 2021; Ma et al. 2018; Yang et al. 2021; Zuo 2020). Online reward poisoning for multiple learners has recently been studied as a game redesign problem in (Ma, Wu, and Zhu 2021).
Offline Reward Poisoning:
Ma et al. (2019); Rakhsha et al. (2020, 2021a); Rangi et al. (2022b); Zhang and Parkes (2008); Zhang, Parkes, and Chen (2009) focus on adversarial attacks on offline single-agent reinforcement learners. Gleave et al. (2019); Guo et al. (2021) study poisoning attacks on multi-agent reinforcement learners, assuming that the attacker controls one of the learners. Our model instead assumes that the attacker is not one of the learners, and that the attacker wants to, and is able to, poison the rewards of all learners at the same time. Our model pertains to many applications, such as autonomous driving, robotics, traffic control, and economic analysis, in which a central controller whose interests are not aligned with those of any agent can modify the rewards and thereby manipulate all agents at the same time.
Constrained Mechanism Design:
Our paper is also related to the mechanism design literature, in particular the K-implementation problem in Monderer and Tennenholtz (2004); Anderson, Shoham, and Altman (2010). Our model differs mainly in that the attacker, unlike a mechanism designer, does not alter the game/environment directly, but instead modifies the training data, from which the learners infer the underlying game and compute their policies accordingly. In practical applications, rewards are often stochastic due to imprecise measurement and state observation, hence the mechanism design approach is not directly applicable to MARL reward poisoning. Conversely, constrained mechanism design can be viewed as a special case in which the rewards are deterministic and the training data uniformly covers all period-state-action tuples.
Defense against Attacks on Reinforcement Learning:
There is also recent work on defending against reward poisoning or adversarial attacks on reinforcement learning; examples include Banihashem, Singla, and Radanovic (2021); Lykouris et al. (2021); Rangi et al. (2022a); Wei, Dann, and Zimmert (2022); Wu et al. (2022); Zhang et al. (2021a, b). These works focus on the single-agent setting where attackers have limited ability to modify the training data. We are not aware of defenses against reward poisoning in our offline multi-agent setting. Given the numerous real-world applications of offline MARL, we believe it is important to study the multi-agent version of the problem.
2 Preliminaries
Markov Games.
A finite-horizon general-sum $n$-player Markov Game is given by a tuple $G = (n, H, \mathcal{S}, \mathcal{A}, P, R, \mu_1)$ (Littman 1994). Here $\mathcal{S}$ is the finite state space, and $\mathcal{A} = \mathcal{A}_1 \times \cdots \times \mathcal{A}_n$ is the finite joint action space. We use $a = (a_1, \dots, a_n) \in \mathcal{A}$ to represent a joint action of the learners; we sometimes write $a = (a_i, a_{-i})$ to emphasize that learner $i$ takes action $a_i$ and the other learners take joint action $a_{-i}$. For each period $h \in [H]$, $P_h : \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is the transition function, where $\Delta(\mathcal{S})$ denotes the probability simplex on $\mathcal{S}$, and $P_h(s' \mid s, a)$ is the probability that the state is $s'$ in period $h+1$ given that the state is $s$ and the joint action is $a$ in period $h$. $R_h : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^n$ is the mean reward function for the $n$ players, where $R_{h,i}(s, a)$ denotes the scalar mean reward for player $i$ in state $s$ and period $h$ when the joint action $a$ is taken. The initial state distribution is $\mu_1 \in \Delta(\mathcal{S})$.
Policies and value functions.
We use $\pi = (\pi_h)_{h \in [H]}$ to denote a deterministic Markovian policy for the players, where $\pi_h : \mathcal{S} \to \mathcal{A}$ is the policy in period $h$ and $\pi_h(s)$ specifies the joint action in state $s$ and period $h$. We write $\pi_h(s) = (\pi_{h,i}(s), \pi_{h,-i}(s))$, where $\pi_{h,i}(s)$ is the action taken by learner $i$ and $\pi_{h,-i}(s)$ is the joint action taken by the learners other than $i$ in state $s$, period $h$. The value of a policy represents the expected cumulative rewards of the game assuming the learners take actions according to $\pi$. Formally, the value of learner $i$ in state $s$ in period $h$ under a joint action $a$ is given recursively by
$$Q^{\pi}_{h,i}(s, a) = R_{h,i}(s, a) + \sum_{s' \in \mathcal{S}} P_h(s' \mid s, a)\, Q^{\pi}_{h+1,i}\big(s', \pi_{h+1}(s')\big), \qquad Q^{\pi}_{H+1,i}(\cdot, \cdot) \equiv 0.$$
The value of learner $i$ in state $s$ in period $h$ under policy $\pi$ is given by $V^{\pi}_{h,i}(s) = Q^{\pi}_{h,i}(s, \pi_h(s))$, and we use $V^{\pi}_h(s) = (V^{\pi}_{h,1}(s), \dots, V^{\pi}_{h,n}(s))$ to denote the vector of values for all learners in state $s$ in period $h$ under policy $\pi$.
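To make the recursion concrete, the following sketch evaluates $Q^\pi$ and $V^\pi$ by backward induction for a small tabular Markov Game. The array layout (joint actions flattened into a single index) and the function name are our own illustration, not part of the paper.

```python
import numpy as np

def evaluate_policy(P, R, pi):
    """Backward-induction evaluation of a deterministic joint policy.

    P  : array (H, S, A, S)  -- P[h, s, a, s2] = transition probability to s2
    R  : array (H, S, A, n)  -- R[h, s, a, i]  = mean reward of learner i
    pi : array (H, S) of int -- pi[h, s]       = index of the joint action taken
    Joint actions are flattened into a single index a in {0, ..., A-1}.
    Returns Q with shape (H, S, A, n) and V with shape (H, S, n).
    """
    H, S, A, n = R.shape
    Q = np.zeros((H, S, A, n))
    V = np.zeros((H + 1, S, n))            # V[H] = 0 terminates the recursion
    for h in reversed(range(H)):
        # Q_h(s, a) = R_h(s, a) + sum_{s'} P_h(s' | s, a) * V_{h+1}(s')
        Q[h] = R[h] + np.einsum('sat,ti->sai', P[h], V[h + 1])
        # V_h(s) = Q_h(s, pi_h(s))
        V[h] = Q[h, np.arange(S), pi[h]]
    return Q, V[:H]
```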
Offline MARL.
In offline MARL, the learners are given a fixed batch dataset $D$ that records historical plays of agents under some behavior policies, and no further sampling is allowed. We assume that $D$ contains $K$ episodes of length $H$. The data tuple in period $h$ of episode $k$ consists of the state $s^k_h$, the joint action profile $a^k_h$, and the reward vector $r^{0,k}_h = (r^{0,k}_{h,1}, \dots, r^{0,k}_{h,n})$, where the superscript $0$ denotes the original rewards before any attack. The next state $s^k_{h+1}$ can be found in the next tuple. Given the shared data $D$, each learner independently constructs a policy to maximize its own cumulative reward. The learners then behave according to the resulting joint policy in future deployment. Note that in a multi-agent setting, the learners' optimal solution concept is typically an approximate Nash equilibrium or Dominant Strategy Equilibrium (Cui and Du 2022; Zhong et al. 2022).
An agent's access to $D$ may be limited, for example due to privacy reasons. There are multiple levels of accessibility. In the first level, agent $i$ can only access data that directly involves itself: instead of the tuple $(s^k_h, a^k_h, r^{0,k}_h)$, agent $i$ would only be able to see $(s^k_h, a^k_{h,i}, r^{0,k}_{h,i})$. In the second level, agent $i$ can see the joint action but only its own reward: $(s^k_h, a^k_h, r^{0,k}_{h,i})$. In the third level, agent $i$ can see the whole tuple $(s^k_h, a^k_h, r^{0,k}_h)$. We focus on the second level in this paper.
Let $N_h(s, a)$ be the total number of episodes containing the state-action pair $(s, a)$ in period $h$. We consider a dataset $D$ that satisfies the following coverage assumption.
Assumption 1.
(Full Coverage) For each $h \in [H]$ and $(s, a) \in \mathcal{S} \times \mathcal{A}$, $N_h(s, a) \geq 1$.
While this assumption might appear strong, we later show that it is necessary to effectively poison the dataset.
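As a concrete illustration, the sketch below computes the visitation counts $N_h(s, a)$ and checks Assumption 1 for a dataset stored as a list of episodes; this encoding is our own and is not prescribed by the paper.

```python
import numpy as np

def visit_counts(episodes, H, S, A):
    """N[h, s, a] = number of episodes whose period-h tuple visits (s, a).

    `episodes` is a list of length-H lists of (state, joint_action, reward_vector)
    tuples, with states and joint actions encoded as integers.
    """
    N = np.zeros((H, S, A), dtype=int)
    for episode in episodes:
        for h, (s, a, _r) in enumerate(episode):
            N[h, s, a] += 1
    return N

def has_full_coverage(N):
    """Assumption 1: every (h, s, a) triple appears at least once."""
    return bool((N >= 1).all())
```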
Attack Model
We assume that the attacker has access to the original dataset $D$. The attacker has a pre-specified target policy $\pi^\dagger$ and attempts to poison the rewards in $D$, turning it into a dataset $\tilde{D}$, with the goal of forcing the learners to learn $\pi^\dagger$ from the poisoned dataset. The attacker also desires that the attack has minimal cost. We let $C(r^0, r)$ denote the cost of a specific poisoning, where $r^0$ denotes the original rewards and $r$ the poisoned rewards. We focus on the $L_1$-norm cost $C(r^0, r) = \sum_{k, h, i} \lvert r^{k}_{h,i} - r^{0,k}_{h,i} \rvert$.
Rationality.
For generality, the attacker makes minimal assumptions about the learners' rationality. Namely, the attacker only assumes that the learners never take dominated actions (Monderer and Tennenholtz 2004). For technical reasons, we strengthen this assumption slightly by introducing an arbitrarily small margin $\iota > 0$ (e.g., representing the learners' numerical resolution).
Definition 1.
An $\iota$-strict Markov perfect dominant strategy equilibrium ($\iota$-MPDSE) of a Markov Game $G$ is a policy $\pi$ satisfying, for all learners $i$, periods $h$, states $s$, actions $a_i \neq \pi_{h,i}(s)$, and opponent joint actions $a_{-i}$,
$$Q^{\pi}_{h,i}\big(s, (\pi_{h,i}(s), a_{-i})\big) \geq Q^{\pi}_{h,i}\big(s, (a_i, a_{-i})\big) + \iota.$$
Note that a strict MPDSE, if it exists, must be unique.
Assumption 2.
(Rationality) The learners will play an $\iota$-MPDSE should one exist.
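For intuition, the following sketch checks whether a given joint action profile is an $\iota$-strict dominant strategy equilibrium of a single-stage game described by per-learner payoff tensors; the tensor encoding is our own, and in the Markov setting the payoffs would be the $Q^\pi$-values at a given state and period.

```python
import numpy as np

def is_iota_strict_dse(Q, target, iota):
    """Check whether `target` is an iota-strict dominant strategy equilibrium.

    Q      : list of length n; Q[i] is an array of shape (A_1, ..., A_n) holding
             learner i's payoff (or Q-value) for every joint action
    target : tuple (a_1, ..., a_n) of target action indices
    iota   : strictness margin
    """
    n = len(Q)
    for i in range(n):
        Qi = np.moveaxis(Q[i], i, 0)          # put learner i's own action first
        own = Qi[target[i]]                   # payoffs when i plays its target action
        rivals = np.delete(Qi, target[i], axis=0)
        # Against every opponent profile a_{-i}, the target action must beat
        # every alternative action of learner i by at least iota.
        if not (own - rivals.max(axis=0) >= iota).all():
            return False
    return True
```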
Uncertainty-aware attack.
State-of-the-art MARL algorithms are typically uncertainty-aware (Cui and Du 2022; Zhong et al. 2022), meaning that the learners are cognizant of the model uncertainty due to finite, random data and calibrate their learning procedure accordingly. The attacker accounts for such uncertainty-aware learners, but does not know the learners' specific algorithm or internal parameters. It only assumes that the policies computed by the learners are solutions to some game that is plausible given the dataset. Accordingly, the attacker aims to poison the dataset in such a way that the target policy $\pi^\dagger$ is an $\iota$-MPDSE for every game that is plausible given the poisoned dataset.
To formally define the set of plausible Markov Games for a given dataset $D$, we first need a few definitions.
Definition 2.
(Confidence Game Set) The confidence set on the transition function has the form
$$\mathcal{P}(D) = \Big\{ P : \big\| P_h(\cdot \mid s, a) - \hat{P}_h(\cdot \mid s, a) \big\|_1 \leq \rho^P_h(s, a) \ \ \forall h, s, a \Big\},$$
where
$$\hat{P}_h(s' \mid s, a) = \frac{N_h(s, a, s')}{N_h(s, a)}$$
is the maximum likelihood estimate (MLE) of the true transition probability, with $N_h(s, a, s')$ the number of episodes in which $(s, a)$ in period $h$ is followed by $s'$. Similarly, the confidence set on the reward function has the form
$$\mathcal{R}(D) = \Big\{ R : \big| R_{h,i}(s, a) - \hat{R}_{h,i}(s, a) \big| \leq \rho^R_h(s, a) \ \ \forall h, s, a, i \Big\},$$
where $\hat{R}_{h,i}(s, a) = \frac{1}{N_h(s, a)} \sum_{k :\, (s^k_h, a^k_h) = (s, a)} r^{0,k}_{h,i}$ is the MLE of the reward. Then, the set of all plausible Markov Games consistent with $D$, denoted by $\mathcal{G}(D)$, is defined to be
$$\mathcal{G}(D) = \big\{ G = (n, H, \mathcal{S}, \mathcal{A}, P, R, \mu_1) : P \in \mathcal{P}(D),\ R \in \mathcal{R}(D) \big\}.$$
Note that both the attacker and the learners know that all of the rewards are bounded within $[-b, b]$ (we allow $b = \infty$). The values of $\rho^P_h(s, a)$ and $\rho^R_h(s, a)$ are typically given by concentration inequalities. One standard choice takes the Hoeffding-type form, shrinking at the rate $1/\sqrt{N_h(s, a)}$, where we recall that $N_h(s, a)$ is the visitation count of the state-action pair (Xie et al. 2020; Cui and Du 2022; Zhong et al. 2022). We remark that with proper choices of $\rho^P$ and $\rho^R$, $\mathcal{G}(D)$ contains the game constructed by optimistic MARL algorithms with upper confidence bounds (Xie et al. 2020), as well as that constructed by pessimistic algorithms with lower confidence bounds (Cui and Du 2022; Zhong et al. 2022). See the appendix for details.
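The sketch below computes the reward MLEs and a Hoeffding-type half-width from the dataset. The particular constant inside the square root is a standard union-bound choice of our own, not necessarily the one used in the paper.

```python
import numpy as np

def reward_mle(episodes, H, S, A, n):
    """MLE of the mean reward: the average observed reward per (h, s, a, i)."""
    total = np.zeros((H, S, A, n))
    count = np.zeros((H, S, A, 1))
    for episode in episodes:
        for h, (s, a, r) in enumerate(episode):
            total[h, s, a] += r
            count[h, s, a] += 1
    return total / np.maximum(count, 1), count[..., 0]

def hoeffding_half_width(N, b, delta, num_cells):
    """A Hoeffding-type half-width shrinking as 1 / sqrt(N_h(s, a)).

    N         : visit counts N_h(s, a)
    b         : known bound on |rewards|
    delta     : overall failure probability
    num_cells : number of (h, s, a) cells, for a union bound
    The constant is a standard choice, not necessarily the paper's.
    """
    return b * np.sqrt(2.0 * np.log(2.0 * num_cells / delta) / np.maximum(N, 1))
```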
With the above definition, we consider an attacker that attempts to modify the original dataset $D$ into $\tilde{D}$ so that $\pi^\dagger$ is an $\iota$-MPDSE for every plausible game in the confidence game set $\mathcal{G}(\tilde{D})$ induced by the poisoned $\tilde{D}$. This would guarantee that the learners adopt $\pi^\dagger$.
The full coverage Assumption 1 is necessary for the above attack goal, as shown in the following proposition. We defer the proof to the appendix.
Proposition 1.
If $N_h(s, a) = 0$ for some $(h, s, a)$, then there exist MARL learners for which the attacker's problem is infeasible.
3 Poisoning Framework
In this section, we first argue that naively applying a single-agent poisoning attack separately to each agent results in a suboptimal attack cost. We then present a new optimal poisoning framework that accounts for multiple agents and thereby allows the attack problem to be solved efficiently.
Suboptimality of single-agent attack reduction.
As a first attempt, the attacker could try to use existing single-agent RL reward-poisoning methods. However, this approach is doomed to be suboptimal. Consider a game with two learners, one period, and one state, in which each learner's target action (say action 1) already yields a strictly higher reward than its other action against every action of the opponent.
Suppose that the original dataset has full coverage. For simplicity, we assume that each joint action appears sufficiently many times so that the confidence half-width is small. In this case, the target policy is already an MPDSE, so no reward modification is needed. However, under a single-agent approach, each learner observes only its own action and reward, i.e., a bandit dataset in which the opponent's action is hidden.
In this case, it is not immediately clear to learner $i$ which of its two actions is strictly better; for example, the dominated action can appear to earn a higher average reward when the joint actions that pair it with a favorable opponent action appear relatively more often than the others. To ensure that both players take action 1, the attacker then needs to modify at least one of the rewards for each player, thus incurring a nonzero (and thus suboptimal) attack cost.
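The following numerical sketch illustrates the phenomenon. The concrete payoffs and visitation counts are our own (the paper's tables are not reproduced here); they are chosen so that the joint-game MLE already has the target profile as a strict DSE while the single-agent marginals reverse the ranking.

```python
import numpy as np

# Player 1's mean reward for each joint action (rows: own action, columns: the
# opponent's action). Row 0 strictly dominates row 1 entrywise, so the target
# joint action (0, 0) needs no poisoning in the joint view.
R1 = np.array([[1.0, 0.2],
               [0.9, 0.1]])

# Visitation counts of each joint action in the dataset.
N = np.array([[1, 10],
              [10, 1]])

# Joint-game MLE: the target action (row 0) already dominates.
print("joint-view gaps per opponent action:", R1[0] - R1[1])   # all positive

# Single-agent reduction: player 1 only sees its own action and reward, so it
# averages over the opponent's actions with the empirical frequencies.
marginal = (R1 * N).sum(axis=1) / N.sum(axis=1)
print("marginal reward estimates:", marginal)   # the dominated row looks better
```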
The example above shows that a new approach is needed to construct an optimal poisoning framework tailored to the multi-agent setting. Below we develop such a framework, first for the simple Bandit Game setting, which is then generalized to Markov Games.
Bandit Game Setting
As a stepping stone, we start with the subclass of Markov Games with $H = 1$ and $|\mathcal{S}| = 1$, which are sometimes called bandit games. A bandit game consists of a single-stage normal-form game. For now, we also pretend that the learners simply use the data to compute an MLE point estimate $\hat{G}$ of the game and then solve the estimated game $\hat{G}$. This is unrealistic, but it highlights the attacker's strategy: enforce that the target joint action $a^\dagger$ is an $\iota$-strict DSE in $\hat{G}$.
Suppose the original dataset is $D = \{(a^k, r^{0,k})\}_{k \in [K]}$ (recall that we no longer have a state or period index). Also, let $N(a)$ denote the action counts. The attacker's problem can be formulated as the convex optimization problem given in (1).
$$
\begin{aligned}
\min_{r} \quad & C(r^0, r) & (1)\\
\text{s.t.} \quad & \hat{R}_i(a) = \frac{1}{N(a)} \sum_{k :\, a^k = a} r^k_i \quad && \forall i \in [n],\ a \in \mathcal{A},\\
& \hat{R}_i\big(a^\dagger_i, a_{-i}\big) \geq \hat{R}_i\big(a_i, a_{-i}\big) + \iota \quad && \forall i \in [n],\ a_i \neq a^\dagger_i,\ a_{-i} \in \mathcal{A}_{-i}.
\end{aligned}
$$
The first constraint in (1) models the learners' MLE after poisoning. The second constraint enforces that $a^\dagger$ is an $\iota$-strict DSE of $\hat{G}$ by definition. We observe that:
1. The problem is feasible whenever $\iota \leq 2b$, since the attacker can always set, for each agent, the reward to be $b$ for the target action and $-b$ for all other actions;

2. If the cost function is the $L_1$-norm, the problem is a linear program (LP) whose numbers of variables and inequality constraints are polynomial in the number of learners $n$ and the number of episodes $K$ (assuming each learner has $A$ actions, so that $|\mathcal{A}| = A^n \leq K$ under full coverage);

3. After the attack, learner $i$ only needs to see its own rewards to be convinced that $a^\dagger_i$ is a dominant strategy; learner $i$ does not need to observe other learners' rewards.
This simple formulation serves as an asymptotic approximation to the attack problem for confidence-bound-based learners. In particular, when $N(a)$ is large for all $a$, the confidence intervals on the estimated rewards are usually small.
With the above idea in place, we can consider more realistic learners that are uncertainty-aware. For these learners, the attacker attempts to enforce an $\iota$-separation between the lower confidence bound of the target action's reward and the upper confidence bounds of all other actions' rewards (similar to arm elimination in bandits). With such a separation, all plausible games in $\mathcal{G}(\tilde{D})$ have the target action profile $a^\dagger$ as their dominant strategy equilibrium. This approach can be formulated as the slightly more complex optimization problem (2), whose separation constraints enforce the desired $\iota$-gap. The formulation (2) can be solved using standard optimization solvers, hence the optimal attack can be computed efficiently.
$$
\begin{aligned}
\min_{r} \quad & C(r^0, r) & (2)\\
\text{s.t.} \quad & \hat{R}_i(a) = \frac{1}{N(a)} \sum_{k :\, a^k = a} r^k_i \quad && \forall i,\ a,\\
& \hat{R}_i\big(a^\dagger_i, a_{-i}\big) - \rho\big(a^\dagger_i, a_{-i}\big) \geq \hat{R}_i\big(a_i, a_{-i}\big) + \rho\big(a_i, a_{-i}\big) + \iota \quad && \forall i,\ a_i \neq a^\dagger_i,\ a_{-i},\\
& r^k_i \in [-b, b] \quad && \forall i, k.
\end{aligned}
$$
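As an illustration, the sketch below solves a two-player instance of (2) with `scipy.optimize.linprog`, working at the level of the poisoned reward means: shifting all $N(a)$ samples of a cell equally realizes the cost $N(a)\,\lvert\hat{R}_i(a) - \hat{R}^0_i(a)\rvert$, ignoring clipping at $\pm b$. The encoding and variable names are our own.

```python
import numpy as np
from scipy.optimize import linprog

def bandit_attack_lp(R1_hat, R2_hat, N, rho, target, iota, b):
    """Minimum-cost poisoning of a two-player bandit game at the level of the
    poisoned reward means (a sketch of problem (2) under our own encoding).

    R1_hat, R2_hat : (A1, A2) MLE mean rewards of learners 1 and 2
    N, rho         : (A1, A2) visit counts and confidence half-widths
    target         : (t1, t2) target action profile
    iota, b        : strictness margin and known reward bound
    Returns the poisoned mean rewards and the attack cost.
    """
    A1, A2 = R1_hat.shape
    m = A1 * A2                       # cells per learner
    # Variables: [R1_tilde (m), R2_tilde (m), t1_abs (m), t2_abs (m)],
    # where the t-variables model |R_tilde - R_hat| for the L1 objective.
    c = np.concatenate([np.zeros(2 * m), N.ravel(), N.ravel()])
    A_ub, b_ub = [], []

    def cell(i, a1, a2):              # flat index of learner i's cell (a1, a2)
        return i * m + a1 * A2 + a2

    # Linearize the absolute values: +-(R - R_hat) <= t.
    for i, R_hat in enumerate([R1_hat, R2_hat]):
        for a1 in range(A1):
            for a2 in range(A2):
                for sign in (+1.0, -1.0):
                    row = np.zeros(4 * m)
                    row[cell(i, a1, a2)] = sign
                    row[2 * m + cell(i, a1, a2)] = -1.0
                    A_ub.append(row)
                    b_ub.append(sign * R_hat[a1, a2])

    t1, t2 = target
    # Learner 1: R1(t1, a2) - rho(t1, a2) >= R1(a1, a2) + rho(a1, a2) + iota.
    for a2 in range(A2):
        for a1 in range(A1):
            if a1 == t1:
                continue
            row = np.zeros(4 * m)
            row[cell(0, a1, a2)] = 1.0
            row[cell(0, t1, a2)] = -1.0
            A_ub.append(row)
            b_ub.append(-(iota + rho[t1, a2] + rho[a1, a2]))
    # Learner 2: symmetric separation constraints.
    for a1 in range(A1):
        for a2 in range(A2):
            if a2 == t2:
                continue
            row = np.zeros(4 * m)
            row[cell(1, a1, a2)] = 1.0
            row[cell(1, a1, t2)] = -1.0
            A_ub.append(row)
            b_ub.append(-(iota + rho[a1, t2] + rho[a1, a2]))

    bounds = [(-b, b)] * (2 * m) + [(0, None)] * (2 * m)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    if not res.success:
        raise ValueError("attack LP infeasible for this margin / half-widths")
    R1_new = res.x[:m].reshape(A1, A2)
    R2_new = res.x[m:2 * m].reshape(A1, A2)
    return R1_new, R2_new, res.fun
```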
We next consider whether this formulation has a feasible solution. Below we characterize the feasibility of the attack in terms of the margin parameter $\iota$ and the confidence bounds.
Proposition 2.
The attacker's problem (2) is feasible if $\iota + 2\max_{a \in \mathcal{A}} \rho(a) \leq 2b$.
Proposition 2 is a special case of the general Theorem 5 with $H = 1$. We note that the condition in Proposition 2 has an equivalent form that relates to the structure of the dataset. We later present this form for the more general case.
When an $L_\infty$-norm cost function is used, we show in the appendix that the formulation (2) can also be solved efficiently.
Proposition 3.
With the $L_\infty$-norm cost function $C(r^0, r) = \max_{k, i} \lvert r^k_i - r^{0,k}_i \rvert$, the problem (2) can be formulated as a linear program.
Markov Game Setting
We now generalize the ideas from the bandit setting to derive a poisoning framework for arbitrary Markov Games. With multiple states and periods, there are two main complications:
1. In each period $h$, the learners' decisions depend on the action values $Q_{h,i}$, which involve both the immediate reward and the future return;

2. The uncertainty in the estimates amplifies as it propagates backward through the periods.
Accordingly, the attacker needs to design the poisoning attack recursively.
Our main technical innovation is an attack formulation based on confidence-bound backward induction. The attacker maintains confidence upper and lower bounds on the learners' $Q$-functions, $\overline{Q}$ and $\underline{Q}$, computed by backward induction. To ensure that $\pi^\dagger$ becomes an $\iota$-MPDSE, the attacker again attempts to $\iota$-separate the lower bound of the target action from the upper bound of all other actions, at all states and periods.
Recall Definition 2: given the training dataset $D$, one can compute the MLEs and the corresponding confidence sets for the rewards and transitions. The attacker aims to poison $D$ into $\tilde{D}$ so that the MLEs and confidence sets become those induced by $\tilde{D}$, under which $\pi^\dagger$ is the unique $\iota$-MPDSE for all plausible games in the corresponding confidence game set. The attacker finds the minimum-cost way of doing so by solving a confidence-bound backward induction optimization problem, given in (3)–(7).
$$
\begin{aligned}
\min_{r,\ \tilde{R},\ \underline{Q},\ \overline{Q}} \quad & C(r^0, r) & (3)\\
\text{s.t.} \quad
& \underline{Q}_{h,i}(s, a) = \max\Big\{ \tilde{R}_{h,i}(s, a) - \rho^R_h(s, a) + \min_{P \in \mathcal{P}_h(s, a)} \textstyle\sum_{s'} P(s')\, \underline{Q}_{h+1,i}\big(s', \pi^\dagger_{h+1}(s')\big),\ -b(H - h + 1) \Big\} & (4)\\
& \overline{Q}_{h,i}(s, a) = \min\Big\{ \tilde{R}_{h,i}(s, a) + \rho^R_h(s, a) + \max_{P \in \mathcal{P}_h(s, a)} \textstyle\sum_{s'} P(s')\, \overline{Q}_{h+1,i}\big(s', \pi^\dagger_{h+1}(s')\big),\ b(H - h + 1) \Big\} & (5)\\
& \underline{Q}_{h,i}\big(s, (\pi^\dagger_{h,i}(s), a_{-i})\big) \geq \overline{Q}_{h,i}\big(s, (a_i, a_{-i})\big) + \iota \quad \forall i, h, s,\ a_i \neq \pi^\dagger_{h,i}(s),\ a_{-i} & (6)\\
& \tilde{R}_{h,i}(s, a) = \frac{1}{N_h(s, a)} \sum_{k :\, (s^k_h, a^k_h) = (s, a)} r^k_{h,i}, \qquad r^k_{h,i} \in [-b, b] \quad \forall i, h, k, s, a & (7)
\end{aligned}
$$
where (4)–(5) hold for all $i, h, s, a$, with the convention $\underline{Q}_{H+1,i} \equiv \overline{Q}_{H+1,i} \equiv 0$ and with $\mathcal{P}_h(s, a)$ the transition confidence set of Definition 2 (computed from $D$, since the attacker does not modify states or actions).
The backward induction steps (4) and (5) ensure that $\underline{Q}$ and $\overline{Q}$ are valid lower and upper bounds on the $Q$-function for all plausible Markov Games in $\mathcal{G}(\tilde{D})$, for all periods. The margin constraints (6) enforce an $\iota$-separation between the target action and all other actions at all states and periods. We emphasize that the agents need not consider $\iota$ at all in their learning algorithms; $\iota$ only appears in the optimization due to its presence in the definition of an MPDSE.
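To illustrate constraints (4)–(6), here is a sketch that, given candidate poisoned reward MLEs, runs the confidence-bound backward induction for a two-player game and checks whether the $\iota$-margins hold. The bound $\langle P - \hat{P}, v\rangle \leq \tfrac{\rho}{2}(\max v - \min v)$ used for the $L_1$ transition ball and the range clipping are our own simplifications rather than the paper's exact formulation.

```python
import numpy as np

def check_mpdse_margins(R_tilde, rho_R, P_hat, rho_P, pi_dag, iota, b):
    """Confidence-bound backward induction for a two-player Markov Game.

    R_tilde : (H, S, A1, A2, 2) poisoned MLE rewards
    rho_R   : (H, S, A1, A2)    reward confidence half-widths
    P_hat   : (H, S, A1, A2, S) MLE transition probabilities
    rho_P   : (H, S, A1, A2)    L1 radii of the transition confidence sets
    pi_dag  : (H, S, 2) int     target joint action per period and state
    Returns True iff the target action is iota-separated (constraints (6))
    under the induced upper/lower Q bounds (constraints (4)-(5)).
    """
    H, S, A1, A2, n = R_tilde.shape
    V_up = np.zeros((S, n))
    V_lo = np.zeros((S, n))
    for h in reversed(range(H)):
        Q_up = np.zeros((S, A1, A2, n))
        Q_lo = np.zeros((S, A1, A2, n))
        for s in range(S):
            for a1 in range(A1):
                for a2 in range(A2):
                    p, rad = P_hat[h, s, a1, a2], rho_P[h, s, a1, a2]
                    rr = rho_R[h, s, a1, a2]
                    for i in range(n):
                        # For any P with ||P - P_hat||_1 <= rad on the simplex,
                        # <P - P_hat, v> <= rad / 2 * (max v - min v).
                        slack_up = 0.5 * rad * (V_up[:, i].max() - V_up[:, i].min())
                        slack_lo = 0.5 * rad * (V_lo[:, i].max() - V_lo[:, i].min())
                        r = R_tilde[h, s, a1, a2, i]
                        Q_up[s, a1, a2, i] = min(r + rr + p @ V_up[:, i] + slack_up,
                                                 b * (H - h))
                        Q_lo[s, a1, a2, i] = max(r - rr + p @ V_lo[:, i] - slack_lo,
                                                 -b * (H - h))
        for s in range(S):
            t1, t2 = pi_dag[h, s]
            # Margin constraints: the target's lower bound must beat every
            # unilateral deviation's upper bound by at least iota.
            for a2 in range(A2):
                for a1 in range(A1):
                    if a1 != t1 and Q_lo[s, t1, a2, 0] < Q_up[s, a1, a2, 0] + iota:
                        return False
            for a1 in range(A1):
                for a2 in range(A2):
                    if a2 != t2 and Q_lo[s, a1, t2, 1] < Q_up[s, a1, a2, 1] + iota:
                        return False
            # Continuation values assume future play follows the target policy.
            V_up[s] = Q_up[s, t1, t2]
            V_lo[s] = Q_lo[s, t1, t2]
    return True
```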
Again, pairing an efficient optimization solver with the above formulation gives an efficient algorithm for constructing the poisoning. We now answer the important questions of whether this formulation admits a feasible solution and whether such solutions yield successful attacks. The second question has a positive answer: any feasible solution makes $\pi^\dagger$ an $\iota$-MPDSE of every plausible game in $\mathcal{G}(\tilde{D})$, so by Assumption 2 the learners adopt $\pi^\dagger$.
Moreover, the attack formulation admits feasible solutions under mild conditions on the dataset (Theorem 5).
We remark that the learners know the upper bound $b$ and may use it to exclude implausible games. The accumulation of confidence intervals over the periods results in an extra horizon-dependent factor in the feasibility condition. Theorem 5 implies that the problem is feasible so long as the dataset is sufficiently populated; that is, each pair $(s, a)$ should appear frequently enough in each period to have a small confidence interval half-width $\rho^R_h(s, a)$. The following corollary provides a precise condition on the visit counts that guarantees feasibility.
Corollary 6.
Given a confidence probability $\delta$ and confidence interval half-widths of the form $\rho^R_h(s, a) = f(1/N_h(s, a))$ for some strictly increasing function $f$, the condition in Theorem 5 holds once every visit count $N_h(s, a)$ exceeds a threshold determined by $f$, $b$, $\iota$, and $H$. In particular, for the natural choice of a Hoeffding-type $f$, it suffices that every $N_h(s, a)$ grows polynomially in $H$ and logarithmically in $1/\delta$.
Despite the inner min and max in the problem (3)–(7), the problem can be formulated as an LP, thanks to LP duality.
The proofs of the above results can be found in the appendix.
4 Cost Analysis
Now that we know how the attacker can poison the dataset in the multi-agent setting, we can study the structure of such attacks. The structure is most easily seen by analyzing the minimal attack cost. To this end, we give general bounds that relate the minimal attack cost to the structure of the underlying Markov Game. The attack cost upper bounds show which games are particularly susceptible to poisoning, and the attack cost lower bounds demonstrate that some games are expensive to poison.
Overview of results: Specifically, we shall present two types of upper/lower bounds on the attack cost: (i) universal bounds that hold for all attack problem instances simultaneously; (ii) instance-dependent bounds that are stated in terms of certain properties of the instance. We also discuss problem instances under which these two types of bounds are tight and coincide with each other.
We note that all bounds presented here are with respect to the $L_1$-cost, but many of them generalize to other cost functions, especially the $L_\infty$-cost. The proofs of the results presented in this section are provided in the appendix.
Setup: Let $I = (D, \pi^\dagger, \iota, b)$ denote an instance of the attack problem, and let $\hat{G}$ denote the corresponding MLE of the Markov Game derived from $D$. We denote by $I_h$ the restriction of the instance to period $h$. In particular, the game derived from $I_h$ at state $s$ is exactly the normal-form game at state $s$ and period $h$ of $\hat{G}$. We define $C^*(I)$ to be the optimal $L_1$-poisoning cost for the instance $I$; that is, $C^*(I)$ is the optimal value of the optimization problem (3)–(7) evaluated on $I$. We say the attack instance is feasible if this optimization problem is feasible. If $I$ is infeasible, we define $C^*(I) = \infty$. WLOG (by relabeling actions), we assume that the target policy plays each learner's first action in every state and period. In addition, we define the minimum visit count for each period $h$ in $D$ as $N^{\min}_h = \min_{s, a} N_h(s, a)$, and the minimum over all periods as $N^{\min} = \min_h N^{\min}_h$. We similarly define the maximum visit counts $N^{\max}_h$ and $N^{\max}$. Lastly, we define $\rho^{\min}$ and $\rho^{\max}$, the minimum and maximum confidence half-widths.
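A small sketch of these summary quantities, assuming the visit counts and half-widths are stored as dense arrays (our own encoding):

```python
import numpy as np

def instance_summary(N, rho):
    """Summary quantities used in the cost bounds.

    N, rho : arrays of shape (H, S, A) holding visit counts and half-widths.
    Returns per-period and overall minimum / maximum visit counts and the
    smallest and largest confidence half-widths.
    """
    flat_N = N.reshape(N.shape[0], -1)
    return {
        "N_min_h": flat_N.min(axis=1), "N_min": int(flat_N.min()),
        "N_max_h": flat_N.max(axis=1), "N_max": int(flat_N.max()),
        "rho_min": float(rho.min()),   "rho_max": float(rho.max()),
    }
```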
Universal Cost Bounds
With the above definitions, we present universal attack cost bounds that hold simultaneously for all attack instances.
Theorem 8.
For any feasible attack instance $I$, we have $0 \leq C^*(I) \leq O\big(nKH(b + \iota)\big)$; that is, the optimal cost is never more than the order of rewriting every reward entry in the dataset.
As these upper and lower bounds hold for all instances, they are typically loose. However, they are nearly tight. If $\pi^\dagger$ is already an $\iota$-MPDSE for all plausible games, then no change to the rewards is needed and the attack cost is $0$, hence the lower bound is tight for such instances. We can also construct a high-cost instance to show near-tightness of the upper bound.
Specifically, consider a dataset $D$ for a bandit game in which each joint action appears exactly $N$ times, i.e., $N(a) = N$ for all $a \in \mathcal{A}$ and $K = N|\mathcal{A}|$. The target policy is $a^\dagger = (1, \dots, 1)$. The dataset is constructed so that learner $i$'s reward is $-b$ whenever $a_i = 1$ and $+b$ otherwise. These rewards are essentially the extreme opposite of what the attacker needs to ensure that $a^\dagger$ is an $\iota$-DSE. Note that the dataset induces the MLE of the game shown in Table 1 for the special case with $n = 2$ players.
For simplicity, suppose that the same confidence half-width $\rho$ is used for all $a$. Let $\iota > 0$ be arbitrary. For this instance, to install $a^\dagger$ as the $\iota$-DSE, the attacker can flip all rewards in the way illustrated in Table 2, inducing a cost of the same order as the upper bound in Theorem 8. The situation is the same for $n > 2$ learners.
Our instance-dependent lower bound, presented later in Theorem 12, implies a lower bound on the cost of any attack on this instance. This lower bound matches the refined upper bound in the proof of Theorem 9, implying that the refined bounds are tight for this instance. Noticing that the universal bound in Theorem 8 differs from these only by a small multiplicative factor implies that it is nearly tight.
Instance-Dependent Cost Bounds
Next, we derive general bounds on the attack cost that depend on the structure of the underlying instance. Our strategy is to reduce the problem of bounding Markov Game costs to the easier problem of bounding Bandit Game costs. We begin by showing that the cost of poisoning a Markov Game dataset can be bounded in terms of the cost of poisoning the datasets corresponding to its individual period games.
Theorem 9.
For any feasible attack instance $I$, every period restriction $I_h$ is also feasible, and $C^*(I)$ is at least the last-period cost $C^*(I_H)$ and at most the total cost of poisoning each period game independently, plus a term that vanishes as the confidence half-widths shrink.
Here we see the effect of the learners' uncertainty. If $\rho^{\max}$ is small, then poisoning $I$ costs only slightly more than poisoning each bandit instance independently. This is desirable since it allows the attacker to solve the much easier bandit instances instead of the full problem.
The lower bound is valid for all Markov Games, but it is weak in that it only uses the last period cost. However, this is the most general lower bound one can obtain without additional assumptions on the structure of the game. If we assume additional structure on the dataset, then the above lower bound can be extended beyond the last period, forcing a higher attack cost.
Lemma 10.
Let $I$ be any feasible attack instance containing at least one uniform transition in $\hat{G}$ for each period $h$, i.e., for each $h$ there is some $(s, a)$ with $\hat{P}_h(\cdot \mid s, a)$ uniform over $\mathcal{S}$. Then, the optimal cost $C^*(I)$ is lower bounded by the sum over periods of the optimal costs of the period games, up to terms depending on the confidence half-widths.
In words, for these instances the optimal cost of poisoning $I$ is not too far from the optimal cost of poisoning each period game independently. We note that this is where the effects of the transition uncertainty show themselves. If the dataset is highly uncertain about the transitions, it becomes likely that a uniform transition is plausible. Thus, higher transition uncertainty leads to a higher cost and effectively decomposes the set of plausible games into a series of independent games.
Now that we have the above relationships, we can focus on bounding the attack cost for bandit games. To be precise, we bound the cost of poisoning a period game instance $I_h$. To this end, we define $\iota$-dominance gaps.
Definition 3.
(Dominance Gaps) For every $h, s, i$ and opponent joint action $a_{-i}$, the $\iota$-dominance gap, $d_{h,i}(s, a_{-i})$, is defined as
$$d_{h,i}(s, a_{-i}) = \Big[ \max_{a_i \neq \pi^\dagger_{h,i}(s)} \hat{R}_{h,i}\big(s, (a_i, a_{-i})\big) + \iota - \hat{R}_{h,i}\big(s, (\pi^\dagger_{h,i}(s), a_{-i})\big) \Big]^{+},$$
where $\hat{R}$ is the MLE w.r.t. the original dataset $D$ and $[x]^{+} = \max\{x, 0\}$.
The dominance gaps measure the minimum amount by which the attacker would have to increase the reward of learner $i$ while the others are playing $a_{-i}$, so that the action $\pi^\dagger_{h,i}(s)$ becomes $\iota$-dominant for learner $i$. We then consolidate all the dominance gaps for period $h$ into a single variable $\Delta_h$, which aggregates them over learners, states, and opponent joint actions, up to a minor overflow term defined in the appendix. With all this machinery set up, we can give precise bounds on the minimal cost needed to attack a single period game.
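As an illustration, the sketch below computes the dominance gaps of Definition 3 for a two-player period game directly from the reward MLE; the encoding is our own, and extending it with confidence half-widths and the consolidation into $\Delta_h$ is straightforward but omitted.

```python
import numpy as np

def dominance_gaps(R_hat, target, iota):
    """Iota-dominance gaps of a two-player period game, from the reward MLE.

    R_hat  : pair (R1, R2) of (A1, A2) arrays, the learners' MLE mean rewards
    target : (t1, t2) target action profile
    Returns d1[a2], learner 1's gap against each opponent action a2, and
    d2[a1], learner 2's gap against each a1.
    """
    (R1, R2), (t1, t2) = R_hat, target
    A1, A2 = R1.shape
    # Gap for learner 1 against a fixed opponent action a2: how much the reward
    # at (t1, a2) must be raised so that t1 beats every other action by iota.
    d1 = np.array([max(0.0, max(R1[a1, a2] for a1 in range(A1) if a1 != t1)
                         + iota - R1[t1, a2])
                   for a2 in range(A2)])
    d2 = np.array([max(0.0, max(R2[a1, a2] for a2 in range(A2) if a2 != t2)
                         + iota - R2[a1, t2])
                   for a1 in range(A1)])
    return d1, d2
```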
Lemma 11.
The optimal attack cost for the period game $I_h$ admits upper and lower bounds that scale with the consolidated dominance gap $\Delta_h$, weighted by the visit counts.
Combining these bounds with Theorem 9 gives complete attack cost bounds for general Markov game instances.
The lower bounds in both Lemma 10 and Lemma 11 expose an exponential dependency on $n$, the number of players, for some datasets $D$. These instances essentially require the attacker to modify the rewards at every joint action $a \in \mathcal{A}$, of which there are exponentially many in $n$. A concrete instance can be constructed by taking the high-cost dataset from the tight example above and extending it into a general Markov Game. We simply do this by giving the game several identical states and uniform transitions. In terms of the dataset, each episode consists of $H$ independent plays of the same normal-form game, possibly with a different state observed. For this dataset, each $\iota$-dominance gap can be shown to be $2b + \iota$. A direct application of Lemma 10 gives the following explicit lower bound.
Theorem 12.
There exists a feasible attack instance whose optimal attack cost grows exponentially in the number of players $n$.
Recall that the attacker wants to assume little about the learners, and therefore chooses to install an $\iota$-MPDSE (instead of making stronger assumptions on the learners and installing a Nash equilibrium or a non-Markov-perfect equilibrium). On some datasets $D$, the exponential poisoning cost is the price the attacker pays for this flexibility.
5 Conclusion
We studied a security threat to offline MARL where an attacker can force the learners into executing an arbitrary Dominant Strategy Equilibrium by minimally poisoning historical data. We showed that the attack problem can be formulated as a linear program, and provided an analysis of the attack's feasibility and cost. This paper thus helps raise awareness of the trustworthiness of multi-agent learning. We encourage the community to study defenses against such attacks, e.g., via robust statistics and reinforcement learning.
Acknowledgements
McMahan is supported in part by NSF grant 2023239. Zhu is supported in part by NSF grants 1545481, 1704117, 1836978, 2023239, 2041428, 2202457, ARO MURI W911NF2110317, and AF CoE FA9550-18-1-0166. Xie is partially supported by NSF grant 1955997 and JP Morgan Faculty Research Awards. We also thank Yudong Chen for his useful comments and discussions.
References
- Anderson, Shoham, and Altman (2010) Anderson, A.; Shoham, Y.; and Altman, A. 2010. Internal implementation. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-Volume 1, 191–198. Citeseer.
- Banihashem et al. (2022) Banihashem, K.; Singla, A.; Gan, J.; and Radanovic, G. 2022. Admissible Policy Teaching through Reward Design. arXiv preprint arXiv:2201.02185.
- Banihashem, Singla, and Radanovic (2021) Banihashem, K.; Singla, A.; and Radanovic, G. 2021. Defense against reward poisoning attacks in reinforcement learning. arXiv preprint arXiv:2102.05776.
- Bogunovic et al. (2021) Bogunovic, I.; Losalka, A.; Krause, A.; and Scarlett, J. 2021. Stochastic linear bandits robust to adversarial attacks. In International Conference on Artificial Intelligence and Statistics, 991–999. PMLR.
- Cui and Du (2022) Cui, Q.; and Du, S. S. 2022. When is Offline Two-Player Zero-Sum Markov Game Solvable? arXiv preprint arXiv:2201.03522.
- Garcelon et al. (2020) Garcelon, E.; Roziere, B.; Meunier, L.; Teytaud, O.; Lazaric, A.; and Pirotta, M. 2020. Adversarial Attacks on Linear Contextual Bandits. arXiv preprint arXiv:2002.03839.
- Gleave et al. (2019) Gleave, A.; Dennis, M.; Wild, C.; Kant, N.; Levine, S.; and Russell, S. 2019. Adversarial policies: Attacking deep reinforcement learning. arXiv preprint arXiv:1905.10615.
- Guan et al. (2020) Guan, Z.; Ji, K.; Bucci Jr, D. J.; Hu, T. Y.; Palombo, J.; Liston, M.; and Liang, Y. 2020. Robust stochastic bandit algorithms under probabilistic unbounded adversarial attack. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 4036–4043.
- Guo et al. (2021) Guo, W.; Wu, X.; Huang, S.; and Xing, X. 2021. Adversarial policy learning in two-player competitive games. In International Conference on Machine Learning, 3910–3919. PMLR.
- Huang and Zhu (2019) Huang, Y.; and Zhu, Q. 2019. Deceptive reinforcement learning under adversarial manipulations on cost signals. In International Conference on Decision and Game Theory for Security, 217–237. Springer.
- Jiang and Lu (2021) Jiang, J.; and Lu, Z. 2021. Offline decentralized multi-agent reinforcement learning. arXiv preprint arXiv:2108.01832.
- Jun et al. (2018) Jun, K.-S.; Li, L.; Ma, Y.; and Zhu, J. 2018. Adversarial attacks on stochastic bandits. Advances in Neural Information Processing Systems, 31: 3640–3649.
- Littman (1994) Littman, M. L. 1994. Markov games as a framework for multi-agent reinforcement learning. In Machine learning proceedings 1994, 157–163. Elsevier.
- Liu and Shroff (2019) Liu, F.; and Shroff, N. 2019. Data poisoning attacks on stochastic bandits. In International Conference on Machine Learning, 4042–4050. PMLR.
- Liu and Lai (2021) Liu, G.; and Lai, L. 2021. Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning. Advances in Neural Information Processing Systems, 34.
- Lu, Wang, and Zhang (2021) Lu, S.; Wang, G.; and Zhang, L. 2021. Stochastic Graphical Bandits with Adversarial Corruptions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 8749–8757.
- Lykouris et al. (2021) Lykouris, T.; Simchowitz, M.; Slivkins, A.; and Sun, W. 2021. Corruption-robust exploration in episodic reinforcement learning. In Conference on Learning Theory, 3242–3245. PMLR.
- Ma et al. (2018) Ma, Y.; Jun, K.-S.; Li, L.; and Zhu, X. 2018. Data poisoning attacks in contextual bandits. In International Conference on Decision and Game Theory for Security, 186–204. Springer.
- Ma, Wu, and Zhu (2021) Ma, Y.; Wu, Y.; and Zhu, X. 2021. Game Redesign in No-regret Game Playing. arXiv preprint arXiv:2110.11763.
- Ma et al. (2019) Ma, Y.; Zhang, X.; Sun, W.; and Zhu, J. 2019. Policy poisoning in batch reinforcement learning and control. Advances in Neural Information Processing Systems, 32: 14570–14580.
- Monderer and Tennenholtz (2004) Monderer, D.; and Tennenholtz, M. 2004. k-Implementation. Journal of Artificial Intelligence Research, 21: 37–62.
- Pan et al. (2022) Pan, L.; Huang, L.; Ma, T.; and Xu, H. 2022. Plan better amid conservatism: Offline multi-agent reinforcement learning with actor rectification. In International Conference on Machine Learning, 17221–17237. PMLR.
- Rakhsha et al. (2020) Rakhsha, A.; Radanovic, G.; Devidze, R.; Zhu, X.; and Singla, A. 2020. Policy teaching via environment poisoning: Training-time adversarial attacks against reinforcement learning. In International Conference on Machine Learning, 7974–7984. PMLR.
- Rakhsha et al. (2021a) Rakhsha, A.; Radanovic, G.; Devidze, R.; Zhu, X.; and Singla, A. 2021a. Policy teaching in reinforcement learning via environment poisoning attacks. Journal of Machine Learning Research, 22(210): 1–45.
- Rakhsha et al. (2021b) Rakhsha, A.; Zhang, X.; Zhu, X.; and Singla, A. 2021b. Reward poisoning in reinforcement learning: Attacks against unknown learners in unknown environments. arXiv preprint arXiv:2102.08492.
- Rangi et al. (2022a) Rangi, A.; Tran-Thanh, L.; Xu, H.; and Franceschetti, M. 2022a. Saving stochastic bandits from poisoning attacks via limited data verification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 8054–8061.
- Rangi et al. (2022b) Rangi, A.; Xu, H.; Tran-Thanh, L.; and Franceschetti, M. 2022b. Understanding the Limits of Poisoning Attacks in Episodic Reinforcement Learning. In Raedt, L. D., ed., Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, 3394–3400. International Joint Conferences on Artificial Intelligence Organization. Main Track.
- Shapley (1953) Shapley, L. S. 1953. Stochastic games. Proceedings of the National Academy of Sciences, 39(10): 1095–1100.
- Sun, Huo, and Huang (2020) Sun, Y.; Huo, D.; and Huang, F. 2020. Vulnerability-aware poisoning mechanism for online rl with unknown dynamics. arXiv preprint arXiv:2009.00774.
- Wei, Dann, and Zimmert (2022) Wei, C.-Y.; Dann, C.; and Zimmert, J. 2022. A model selection approach for corruption robust reinforcement learning. In International Conference on Algorithmic Learning Theory, 1043–1096. PMLR.
- Wu et al. (2022) Wu, F.; Li, L.; Xu, C.; Zhang, H.; Kailkhura, B.; Kenthapadi, K.; Zhao, D.; and Li, B. 2022. COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks. arXiv preprint arXiv:2203.08398.
- Xie et al. (2020) Xie, Q.; Chen, Y.; Wang, Z.; and Yang, Z. 2020. Learning zero-sum simultaneous-move markov games using function approximation and correlated equilibrium. In Conference on learning theory, 3674–3682. PMLR.
- Yang et al. (2021) Yang, L.; Hajiesmaili, M.; Talebi, M. S.; Lui, J.; and Wong, W. S. 2021. Adversarial Bandits with Corruptions: Regret Lower Bound and No-regret Algorithm. In Advances in Neural Information Processing Systems (NeurIPS).
- Zhang and Parkes (2008) Zhang, H.; and Parkes, D. C. 2008. Value-Based Policy Teaching with Active Indirect Elicitation. In AAAI, volume 8, 208–214.
- Zhang, Parkes, and Chen (2009) Zhang, H.; Parkes, D. C.; and Chen, Y. 2009. Policy teaching through reward function learning. In Proceedings of the 10th ACM conference on Electronic commerce, 295–304.
- Zhang, Yang, and Başar (2021) Zhang, K.; Yang, Z.; and Başar, T. 2021. Multi-agent reinforcement learning: A selective overview of theories and algorithms. Handbook of Reinforcement Learning and Control, 321–384.
- Zhang et al. (2021a) Zhang, X.; Chen, Y.; Zhu, J.; and Sun, W. 2021a. Corruption-robust offline reinforcement learning. arXiv preprint arXiv:2106.06630.
- Zhang et al. (2021b) Zhang, X.; Chen, Y.; Zhu, X.; and Sun, W. 2021b. Robust policy gradient against strong data corruption. In International Conference on Machine Learning, 12391–12401. PMLR.
- Zhang et al. (2020) Zhang, X.; Ma, Y.; Singla, A.; and Zhu, X. 2020. Adaptive reward-poisoning attacks against reinforcement learning. In International Conference on Machine Learning, 11225–11234. PMLR.
- Zhong et al. (2022) Zhong, H.; Xiong, W.; Tan, J.; Wang, L.; Zhang, T.; Wang, Z.; and Yang, Z. 2022. Pessimistic minimax value iteration: Provably efficient equilibrium learning from offline datasets. arXiv preprint arXiv:2202.07511.
- Zuo (2020) Zuo, S. 2020. Near Optimal Adversarial Attack on UCB Bandits. arXiv preprint arXiv:2008.09312.