Distributed Multi-Agent Reinforcement Learning Based on Graph-Induced Local Value Functions
Abstract
Achieving distributed reinforcement learning (RL) for large-scale cooperative multi-agent systems (MASs) is challenging because: (i) each agent has access to only limited information; (ii) scalability and sample-efficiency issues arise due to the curse of dimensionality. In this paper, we propose a general distributed framework for sample-efficient cooperative multi-agent reinforcement learning (MARL) by exploiting the structures of the graphs involved in this problem. We introduce three coupling graphs describing three types of inter-agent couplings in MARL, namely, the state graph, the observation graph and the reward graph. By further considering a communication graph, we propose two distributed RL approaches based on local value functions derived from the coupling graphs. The first approach reduces sample complexity significantly under specific conditions on the aforementioned four graphs. The second approach provides an approximate solution and can be efficient even for problems with dense coupling graphs; here there is a trade-off between minimizing the approximation error and reducing the computational complexity. Simulations show that our RL algorithms have significantly improved scalability to large-scale MASs compared with centralized and consensus-based distributed RL algorithms.
Index Terms:
Reinforcement learning, distributed learning, optimal control, Markov decision process, multi-agent systems
I Introduction
Reinforcement learning (RL) [1] aims to find an optimal policy for an agent to accomplish a specific task by making this agent interact with the environment. Although RL has found wide practical applications such as board games [2], robotics [3], and power systems [4], the problem is much more complex for multi-agent reinforcement learning (MARL) due to the non-stationary environment faced by each agent and the curse of dimensionality. MARL has therefore attracted increasing attention recently and has been studied extensively; see the survey papers [5, 6, 7]. Nonetheless, many challenges remain. In this paper, we focus on two difficulties in developing distributed cooperative RL algorithms for large-scale networked multi-agent systems (MASs): (i) how to deal with inter-agent couplings across the network when each agent observes only local information; and (ii) how to guarantee scalability of the designed RL algorithm to large-scale network systems.
The first challenge refers to the fact that an agent cannot make its decision independently, since it affects and is affected by other agents in different ways. There are mainly three types of structure constraints causing inter-agent couplings in MARL that have been considered in the literature, e.g., coupled dynamics (transition probabilities) [8, 9], partial observability [10, 11, 12], and coupled reward functions [13, 9, 14, 15]. (In the literature, there have been many different settings for the MARL problem. Our work aims to find a distributed control policy for a dynamic MAS that cooperatively maximizes an accumulated global joint reward function; in our literature review, a reference is regarded as an MARL reference as long as it employs RL to seek a policy for a group of agents to cooperatively optimize an accumulated global joint objective in a dynamic environment. Partial observability generally means that only incomplete information about the environment is observed by the learner; in many numerical experiments for MARL, e.g., [10, 11], partial observation refers to observing a subset of the agents and the environment. In this work, we consider the specific scenario where each agent only observes a subset of the agents, which is consistent with [12].) The aforementioned references either consider only one type of structure constraint, or employ only one graph to characterize different types of structure constraints. To give a specific example, if the MAS is a network of linear systems with coupled dynamics and the global objective is an accumulated quadratic function of the state and control input, then learning an optimal distributed policy becomes a data-driven distributed linear quadratic regulator (LQR) design problem [16, 17], which involves all the aforementioned structure constraints. Distributed RL algorithms, e.g., [18, 17, 19], have been proposed to deal with this problem. However, the three types of structure constraints are yet to be efficiently utilized. Moreover, the most popular formulation of MARL has long been the Markov game [20], of which the multi-agent LQR problem [16] is only a specific application scenario.
The scalability issue, as the second challenge, results from the high dimensions of the state and action spaces of MASs. Although each agent may represent an independent individual, the inter-agent couplings in the MARL problem make it difficult for each agent to learn its optimal policy w.r.t. the global objective with only local information. In the distributed RL literature [21, 13, 22, 12], the most common way to deal with these couplings is to make agents exchange information with each other via a consensus algorithm, so that each agent can estimate the value of the global reward function although it only has access to local information from neighbors. However, the performance of such distributed RL algorithms can be similar to or even worse than that of the centralized RL algorithm because (i) consensus may take a long time to converge when the network is of large scale, and (ii) the learning process is essentially conducted via estimated global reward information. As a result, consensus-based distributed RL algorithms still suffer from significant scalability issues in terms of convergence rate and learning variance.
In this paper, we develop distributed scalable algorithms for a class of cooperative MARL problems where each agent has its individual state space and action space (similar to [8, 9]), and all three of the aforementioned types of couplings exist. We consider a general case where the inter-agent state transition couplings, state observation couplings, and reward couplings are characterized by three different graphs, namely, the state graph, the observation graph, and the reward graph, respectively. Based on these graphs, we derive a learning graph, which describes the required information flow during the RL process. The learning graph also provides guidance for constructing a local value function (LVF) for each agent, which is able to play the role of the global value function (GVF) in learning but involves only a subset of agents, and therefore enhances the scalability of the learning algorithm. (Note that the abbreviation “GVF” has been used to denote “general value function” in the literature, e.g., [23]; in this paper, “GVF” always means global value function.)
When each agent has access to all the information involved in its LVF, distributed RL can be achieved by policy gradient algorithms immediately. However, this approach usually relies on interactions among many agents, which requires a dense communication graph. To further reduce the number of communication links, we design a distributed RL algorithm based on local consensus, whose computational complexity depends on the aforementioned three graphs (see Theorem 1). Compared with global consensus-based RL algorithms, local consensus algorithms usually have an improved convergence rate because the network scale is reduced. (Although the convergence rate of a consensus algorithm depends not only on the network scale but also on the communication weights, the convergence rate can typically be improved significantly if the network scale is largely reduced; in [24], the relationship between the consensus convergence time and the number of nodes in the network is analyzed under a specific setting for the communication weights.) This implies that the scalability of this RL algorithm requires specific conditions on the graphs embedded in the MARL problem. To relax the graphical conditions, we further introduce a truncation index and design a truncated LVF (TLVF), which involves fewer agents than the LVF. While being applicable to MARL with arbitrary graphs, the distributed RL algorithm based on TLVFs only generates an approximate solution, and the approximation error depends on the truncation index to be chosen by the designer (see Theorem 2). We will show that there is a trade-off between minimizing the approximation error and reducing the computational complexity (enhancing the convergence rate).
In [25], we have considered the case when no couplings exist between the rewards of different individual agents. In contrast, this paper considers coupled individual rewards, which further induces a reward graph. Moreover, this paper provides more interesting graphical results, a distributed RL algorithm via local consensus, and a TLVF-based distributed RL framework.
The main novel contributions of our work that add to the existing literature are summarized as follows.
(i). We consider a general formulation for distributed RL of networked dynamic agents, where the aforementioned three types of inter-agent couplings exist in the problem simultaneously. Similar settings have been considered in [8, 9]. The main novelty here is that the three coupling graphs in this paper are fully independent and inherently exist in the problem. Based on the three graphs corresponding to the three types of couplings, we derive a learning graph describing the information flow required in learning. By discussing the relationship between the learning graph and the three coupling graphs (see Lemma 13), one can clearly observe how different types of couplings affect the required information exchange in learning.
(ii). By employing the learning graph, we construct an LVF for each agent such that the partial gradient of the LVF w.r.t. each individual's policy parameter is exactly the same as that of the GVF (see Lemma 2), which can be directly employed in policy gradient algorithms. MARL algorithms based on LVFs have also been proposed via network-based LVF approximation [8, 26, 9] and value function decomposition [14, 15, 27, 28]. However, the network-based LVF approximation only provides an approximate solution. Moreover, the aforementioned value function decomposition references assume that all the agents share a common environment state and therefore never involve the state graph (which describes dynamics couplings between different agents).
(iii). To show the benefits of employing the constructed LVFs in policy gradient algorithms, we focus on zeroth-order optimization (ZOO). (The ZOO method can be implemented with very limited information, namely only objective function evaluations, and therefore has a wide range of applications. In recent years, ZOO-based RL algorithms have been shown to be efficient in solving model-free optimal control problems, e.g., [29, 30, 17]. Inspired by these facts, we employ the ZOO-based method to deal with the model-free optimal distributed control problem of MASs under a very general formulation.) Due to the removal of redundant information (fewer agents are involved) in gradient estimation, our learning framework based on LVFs always exhibits a reduced gradient estimation variance (see Remark 3) compared with GVF-based policy evaluation. Note that most of the existing distributed ZOO algorithms [31, 32, 33, 34] essentially evaluate policies via the global value.
(iv). To deal with the scenario where the learning graph is dense, we construct TLVFs by further neglecting the couplings between agents that are far away from each other in the coupling graph. The underlying idea is motivated by [8, 26, 9]. Our design, however, is different from theirs, as they construct a TLVF for each agent, whereas we design a TLVF for each cluster.
The rest of this paper is organized as follows. Section II describes the MARL formulation and the main goal of this paper. Section III introduces the LVF design and the learning graph derivation. Section IV presents the distributed RL algorithm based on LVFs and local consensus, and provides convergence analysis. Section V introduces the RL algorithm based on TLVFs as well as its convergence analysis. Section VI analyzes the variance reduction achieved by two-point zeroth-order oracles. Section VII presents several simulation examples to illustrate the advantages of our proposed algorithms. Section VIII concludes the paper. Sections IX and X (Appendices A and B) provide theoretical proofs and the relationships among different cluster-wise graphs, respectively.
Notation: Throughout the paper, unless otherwise stated, always denotes an unweighted directed graph (in this paper, we introduce multiple graphs; the generic notation may represent any of them), where is the set of vertices, is the set of edges, and means that there is a directional edge in from to . The in-neighbor set and out-neighbor set of agent are denoted by and , respectively. A path from to is a sequence of distinct edges of the form , , …, where and . We use to denote that there is a path from to in edge set . A subgraph with and is said to be a strongly connected component (SCC) if there is a path between any two vertices in . A single vertex is a special SCC. Given a directed graph , we define the transpose graph of as , where . Given two edge sets and , it can be verified that if and only if . Moreover, if , then . Given a set and a vector , with . Given a set , is the set of probability distributions over . The identity matrix is denoted by , and the zero matrix is denoted by . is the -dimensional Euclidean space. is the set of non-negative integers.
II Multi-Agent Reinforcement Learning
Consider the optimal control problem of a MAS modeled by a Markov decision process (MDP), which is described as a tuple with describing different interaction graphs and specifying the evolution process of agent . (The evolution process of each agent depends on other agents and therefore does not itself possess the Markov property; however, the whole MAS has the Markov property, as its full state only depends on the state and action at the last step and is independent of earlier states and actions.) Detailed notation explanations are listed below:
• is the set of agent indices;
• is the edge set of the state graph , which specifies the dynamics couplings among the agents' states; an edge implies that the state evolution of agent involves the state of agent ;
• is the edge set of the observation graph , which determines the partial observation of each agent; more specifically, agent observes the state of agent if ;
• is the edge set of the reward graph , which describes inter-agent couplings in the reward of each individual agent; the reward of agent involves the state and the action of agent if ;
• is the edge set of the communication graph ; an edge implies that agent is able to receive information from agent ;
• and are the state space and the action space of agent , respectively, and can be either continuous or finite;
• is the transition probability function specifying the state probability distribution of agent at the next time step under current states and actions ; here includes agent and its neighbors in the state graph ;
• is the immediate reward returned to agent when each agent takes action at the current state ; here ;
• is the observation space of agent , which includes the states of all the agents in ; here ;
• is the discount factor that trades off the instantaneous and future rewards.
Let , , and denote the joint state space, action space and transition probability function of the whole MAS. Each agent has a state and an action . The global state and action at time step are denoted by and , respectively. Let and be a global policy function of the MAS and a local policy function of agent , respectively. Here and are the sets of probability distributions over and , respectively. The global policy is the policy of the whole MAS from the centralized perspective and is thus based on the global state . The local policy of agent is based on the local observation , which consists of the states of a subset of agents. Note that a global policy always corresponds uniquely to a collection of local policies.
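To make the roles of the three coupling graphs concrete, the following is a minimal sketch of how the agent set, the edge sets, and the per-agent observations could be represented; the class name `CoupledMAS` and all field names are illustrative assumptions, not part of the formulation above.

```python
# A minimal sketch (not the paper's implementation) of the MARL tuple with the
# three coupling graphs; every name here is illustrative.
from dataclasses import dataclass

@dataclass
class CoupledMAS:
    n_agents: int
    E_S: set  # state graph edges (j, i): state of j enters the transition of i
    E_O: set  # observation graph edges (j, i): agent i observes the state of j
    E_R: set  # reward graph edges (j, i): reward of i depends on state/action of j

    def in_neighbors(self, i, edges):
        """In-neighbors of agent i in the given directed edge set."""
        return {j for (j, k) in edges if k == i}

    def observation(self, i, global_state):
        """Local observation o_i: states of agent i and of its in-neighbors in E_O."""
        idx = self.in_neighbors(i, self.E_O) | {i}
        return {j: global_state[j] for j in idx}

# Example: 3 agents where agent 1's transition and observation involve agent 0.
mas = CoupledMAS(n_agents=3, E_S={(0, 1)}, E_O={(0, 1)}, E_R={(1, 1)})
print(mas.observation(1, {0: 1.2, 1: 0.4, 2: -0.3}))  # -> {0: 1.2, 1: 0.4}
```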
At each time step of the MDP, each agent executes an action according to its policy and the local observation , then obtains a reward . Note that such a formulation is different from that in [13, 35], where the transition and reward of each agent are associated with the global state . Moreover, different from many MARL references where the reward of each agent only depends on its own state and action, in our work, we consider a more general formulation for cooperative MARL where the reward of each agent may be influenced by other agents, determined by the reward graph .
The long-term accumulated discounted global reward is defined as
(1)
where is the global reward for the MAS at time , is the local reward for agent at time . Note that maximizing is equivalent to maximizing its average , which has been commonly adopted as the learning objective in many MARL references, e.g. [21, 13, 12]. Based on this long term global reward, with a given policy , we are able to define the global state value function and state-action value function , which describe the expected long term global reward when agents have initial state and initial state-action pair , respectively. Similarly, the local state value function with initial state for each agent can be defined as .
The goal of this paper is to design a distributed RL algorithm for the MAS to find a control policy maximizing , whose expression is
(2)
where denotes the distribution that the initial state follows. For convenience of analysis, we also define the expected value to be maximized corresponding to individual reward for each agent as
(3)
Note that here may be determined by the policies of only a subset of agents, instead of the global policy . However, the global policy is always able to determine , and therefore can be employed as the argument of .
We parameterize the global policy using parameters with . The global policy and agent ’s local policy are then rewritten as and , respectively. Note that given any global state , a global policy and a collection of local policies can always be transformed to each other. Now we turn to solve the following optimization problem:
(4)
Next we present a distributed multi-warehouse resource transfer problem to demonstrate our formulation. This example is representative of many practical applications; e.g., it can also be read as a problem of energy transfer among different rooms in a smart building.
Example 1
Consider a network of 9 warehouses that consume resources while transferring resources among each other. The goal is to guarantee adequate supplies for each warehouse. Each warehouse is denoted by a vertex in the graph. The state graph, which describes the transfer relationship, is shown in Fig. 1. The observation graph only contains 3 edges involving 3 leaf nodes of graph , as shown in Fig. 1, which implies that only warehouses 2, 3, and 5 observe the current resource stock of warehouses other than themselves. The motivation behind this setting is that warehouses 1, 4 and 6 do not send out resources at all, hence their neighbors need to keep monitoring their states so that the resources sent to them are neither insufficient nor redundant. The reward graph is shown in Fig. 2, which contains as a subgraph. This ensures that the observation of each warehouse always influences its own reward, implying that a warehouse is responsible for the resource shortage of the warehouses it can observe. At time step , warehouse stores resources of the amount , receives a local demand , and sends part of its resources to and receives resources from its neighbors in the state graph ; besides its neighbors, warehouse also receives a resource supply of the amount from outside. Let ; then agent has the following dynamics
(5)
where denotes the fraction of resources agent sends to its neighbor at time ; determines whether the -th warehouse has resources to send out, i.e., if , and otherwise; is a constant; is a bounded random quantity; is a positive scalar; and and are the in-neighbor set and the out-neighbor set of agent , respectively.
From the MARL perspective, besides the three graphs and transition dynamics introduced above, the remaining entries in for each agent at time step can be recognized as follows. Individual state: . Individual action: . Individual policy function: . Partial observation: . Individual reward: , where if , and otherwise.
The goal of the resource transfer problem is to maximize the accumulated global reward (1) under the dynamics constraint (5). In other words, we aim to find the optimal transfer policy such that each warehouse keeps having enough resources for use.
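As a concrete illustration, the following is a minimal simulation sketch of one plausible reading of the transfer dynamics (5); the state-graph edges, the constants, and the demand/noise model below are illustrative assumptions rather than the exact specification of Example 1.

```python
# A minimal simulation sketch of one plausible reading of the resource-transfer
# dynamics in Example 1; the state graph, constants, and noise are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 9
# Illustrative state graph: edge (j, i) means warehouse j can send resources to i.
E_S = {(2, 1), (3, 2), (5, 4), (2, 5), (5, 6), (8, 7), (7, 8), (9, 8), (2, 9)}
out_nb = {i: [j for (k, j) in E_S if k == i] for i in range(1, N + 1)}
in_nb = {i: [j for (j, k) in E_S if k == i] for i in range(1, N + 1)}

x = {i: 10.0 for i in range(1, N + 1)}        # current resource stock of each warehouse
supply = {i: 1.0 for i in range(1, N + 1)}    # external supply (assumed constant)

def step(x, a):
    """One transition: a[i][j] is the fraction of x[i] that warehouse i sends to j."""
    x_new = {}
    for i in range(1, N + 1):
        demand = max(0.0, rng.normal(1.0, 0.2))                 # local demand at time t
        sent = sum(a[i].get(j, 0.0) * x[i] for j in out_nb[i])  # resources sent out
        recv = sum(a[j].get(i, 0.0) * x[j] for j in in_nb[i])   # resources received
        x_new[i] = max(0.0, x[i] - demand - sent + recv + supply[i])
    return x_new

# One step under a policy that sends 10% of stock along every outgoing edge.
a = {i: {j: 0.1 for j in out_nb[i]} for i in range(1, N + 1)}
x = step(x, a)
```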
Remark 1
Note that many settings in this example can be adjusted while maintaining the applicability of the proposed approach in this paper. For example, the partial observation of each agent can be replaced by or . Depending on different observation settings, the optimal policy may change.
[Figure 1: the state graph and observation graph of the 9-warehouse network.]
[Figure 2: the reward graph of the 9-warehouse network.]
Existing distributed policy gradient methods such as actor-critic [13] and zeroth-order optimization [12] can be employed to solve the problem when there is a connected undirected communication graph among the agents. However, these approaches are based on estimation of the GVF, which requires a large amount of communication during each learning episode. Moreover, policy evaluation based on the GVF suffers from a significant scalability issue due to the high dimension of the state and action spaces of large-scale networks.
III Local Value Function and Learning Graph
In this section, we introduce how to design an appropriate LVF for each agent, which involves only a subset of agents while its gradient w.r.t. the local policy parameter is the same as that of the GVF.
III-A Local Value Function Design
Although the state graph , the observation graph , and the reward graph can be defined independently, all of them induce couplings between different agents in the optimization objective. In this subsection, we build a connection between these graphs and the inter-agent couplings, based on which the LVFs will be designed.
Define a new graph where , and define
(6)
which includes the vertices in graph that are reachable from vertex and vertex itself. In fact, the states of the agents in will be affected by agent ’s state and action as time goes on.
To design the LVF for each agent , we need to specify the agents whose individual rewards will be affected by the action of agent . To this end, we define the following composite reward for agent :
(7)
where
(8)
here consists of the out-neighbors of vertex in graph and itself.
To demonstrate the definitions of and , let us look at Example 1. One can observe from Fig. 1 and Fig. 2 that . In fact, we have since for all , .
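As a sketch of how the sets in (6) and (8) could be computed from the edge sets, the following uses breadth-first search; the name `E_SO` stands for the composite edge set used in (6) and is a placeholder, as is `reward_index_set`.

```python
# A minimal sketch of computing the reachable set in (6) and the index set in (8).
from collections import deque

def reachable_from(i, edges):
    """Vertices reachable from i in the directed edge set `edges`, plus i itself."""
    out = {}
    for (u, v) in edges:
        out.setdefault(u, []).append(v)
    seen, queue = {i}, deque([i])
    while queue:
        u = queue.popleft()
        for v in out.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def reward_index_set(i, E_SO, E_R):
    """Agents whose individual rewards can be affected by agent i (cf. (8)):
    vertices reachable from i plus their out-neighbors in the reward graph."""
    R_i = reachable_from(i, E_SO)
    I_i = set(R_i)
    for j in R_i:
        I_i |= {v for (u, v) in E_R if u == j}
    return I_i
```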
Accordingly, we define the LVF for agent as
(9)
When the GVF is replaced by the LVF, the agent is expected to maximize the following objective:
(10)
Different from the global objective function , the local objective only involves agents in a subset . We make the following assumption on the graphs so that for at least one agent .
Assumption 1
There exists a vertex such that .
Define graph . The following lemma shows a sufficient graphical condition and a necessary graphical condition for Assumption 1.
Lemma 1
The following statements are true:
(i). Assumption 1 holds if graph has SCCs.
(ii). Assumption 1 holds only if graph has SCCs.
One may question if the converses of the statements in Lemma 1 are true. Both answers are no. This is because graph may contain some edges that connect different SCCs in , but the paths involving more than two vertices in cannot be used in expanding . For statement (i), may be strongly connected even when there exists a vertex . Fig. 3 shows a counter-example where is only a subset of but is strongly connected. For statement (ii), a simple counter-example can be obtained by setting . Note that Lemma 1 induces a necessary and sufficient condition for Assumption 1 when , which happens when .
[Figure 3: a counterexample showing that the converses of the statements in Lemma 1 do not hold.]
Given a function and a positive , we define
(11)
The following lemma shows the equivalence between the gradients of the smoothed local objective and the smoothed global objective w.r.t. the local policy parameter of each individual agent.
Lemma 2
The following statements are true:
(i) for any , .
(ii) If , are differentiable, then , .
Lemma 2 reveals the implicit connection between the graphs and the agents' couplings in the optimization objective, and provides the theoretical justification for the designed LVFs. Two further notes are important. (a) Although the RL algorithm in this paper is based on ZOO, Lemma 2 is independent of ZOO; therefore, Lemma 2 is also compatible with other policy gradient algorithms. (b) Statement (i) in Lemma 2 does not require , to be differentiable because is always differentiable [36, Lemma 2].
In order to adapt our approach to the scenario when is not differentiable, we choose to find the stationary point of . The gap between and can be bounded if is Lipschitz continuous and is sufficiently small.
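For intuition, the following small numerical sketch illustrates the smoothing in (11) and the gap just mentioned, assuming the standard Gaussian-smoothing form in which the objective is averaged over a Gaussian perturbation of radius delta; the nonsmooth test objective is purely illustrative.

```python
# A small numerical sketch of Gaussian smoothing, assuming the standard form
# J_delta(theta) = E_{u ~ N(0, I)}[J(theta + delta * u)]; f is a toy objective.
import numpy as np

rng = np.random.default_rng(1)

def smoothed(f, theta, delta, n_samples=20000):
    """Monte-Carlo estimate of the smoothed objective at theta."""
    u = rng.standard_normal((n_samples, theta.size))
    return np.mean([f(theta + delta * ui) for ui in u])

f = lambda th: -np.abs(th).sum()          # nonsmooth, Lipschitz test objective
theta = np.array([0.3, -0.2])
for delta in (0.5, 0.1, 0.01):
    gap = abs(smoothed(f, theta, delta) - f(theta))
    print(f"delta={delta:5.2f}  |J_delta - J| ~ {gap:.3f}")   # gap shrinks with delta
```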
To guarantee the Lipschitz continuity of (the Lipschitz continuity of a value function implies that similar policy parameters yield similar performance, which is reasonable in practice, especially for problems with continuous state and action spaces; in [37], it has been shown that the value function is Lipschitz continuous w.r.t. the policy parameters as long as both the MDP and the policy function have Lipschitz continuity properties), we make the following assumption on the functions for :
Assumption 2
, are -Lipschitz continuous w.r.t. in for any . That is, for any , .
Assumption 2 directly implies that is -Lipschitz continuous. Moreover, is -Lipschitz continuous in , where , due to the following fact:
(12)
III-B Learning Graph
Lemma 2 has shown that having the local gradient of a specific local objective function is sufficient for each agent to optimize its policy according to the following gradient ascent:
(13)
where is the policy parameter at step , and can be estimated by evaluating the value of .
Then we are able to define the learning graph based on the set (8) in the LVF design, which interprets the required reward information flow during the learning process. The edge set is defined as:
(14)
The definition of implies the following result.
Lemma 3
If , then .
The converse of Lemma 3 is not true; see Fig. 3 for a counterexample. More specifically, , as shown in graph (c); however, , see the learning graph .
To better understand the learning graph , we find a clustering for the graph , where is the vertex set of the -th maximal strongly connected component (SCC) in , and for any distinct . According to Lemma 1, such a clustering with can always be found under Assumption 1.
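A minimal sketch of how such a clustering into maximal SCCs could be computed is given below (a Kosaraju-style two-pass procedure); the vertex and edge sets in the example are illustrative.

```python
# A minimal sketch of the SCC-based clustering used to define the learning graph.
def scc_clusters(vertices, edges):
    out = {v: [] for v in vertices}
    rev = {v: [] for v in vertices}
    for (u, v) in edges:
        out[u].append(v)
        rev[v].append(u)

    order, seen = [], set()
    def dfs1(u):                      # first pass: record finish order
        seen.add(u)
        for v in out[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for v in vertices:
        if v not in seen:
            dfs1(v)

    clusters, assigned = [], set()
    def dfs2(u, comp):                # second pass: collect SCCs on the transpose graph
        assigned.add(u)
        comp.add(u)
        for v in rev[u]:
            if v not in assigned:
                dfs2(v, comp)
    for v in reversed(order):
        if v not in assigned:
            comp = set()
            dfs2(v, comp)
            clusters.append(comp)     # each cluster is a maximal SCC; it shares one LVF
    return clusters

# Illustrative composite graph with two SCCs: {1, 2, 3} and {4}.
print(scc_clusters([1, 2, 3, 4], [(1, 2), (2, 3), (3, 1), (3, 4)]))
```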
According to the definitions (8) and (14), we have the following observations:
• The agents in each cluster form a clique in .
• The agents in the same cluster share the same LVF.
The first observation holds because any pair of agents in each cluster are reachable from each other, and as long as is reachable from in graph . The second observation holds because for any and belonging to the same cluster, where is defined in (6).
To demonstrate the edge set definition (14), the learning graph corresponding to the state graph and the observation graph in Fig. 1, and the reward graph in Fig. 2, is shown in Fig. 4. In fact, it is interesting to see some connections between different graphs from the cluster-wise perspective. Please refer to Appendix B for more details.
[Figure 4: the learning graph induced by the state and observation graphs in Fig. 1 and the reward graph in Fig. 2.]
The learning graph interprets the required reward information flow in our distributed MARL algorithm. If the agents are able to exchange information via communications following , then each agent can acquire the information of its LVF via local communications with others. The zeroth-order oracle in [36] can then be employed to estimate in (13). However, is usually dense, inducing high communication costs, and having such a dense communication graph may be unrealistic in practice. To further relax the condition on the communication graph, in the next section, we will design a distributed RL algorithm based on local consensus algorithms.
IV Distributed RL Based on Local Consensus
In this section, we propose a distributed RL algorithm based on local consensus algorithms and ZOO with policy search in the parameter space. ZOO-based RL with policy search in the action space has been proposed in [38]. Compared to the action space, the parameter space usually has a higher dimension. However, the work in [38] requires the action space to be continuous and leverages the Jacobian of the policy w.r.t. . Our RL algorithm is applicable to both continuous and discrete action spaces and does not even require to be differentiable. In addition, our distributed learning framework based on LVFs is compatible with policy search in the action space.
IV-A Communication Weights and Distributed RL Design
We have shown that agents in the same strongly connected component share the same LVF. Therefore, there are LVFs to be estimated, where is the number of maximal SCCs in . Moreover, it is unnecessary for an agent to estimate an LVF that is independent of this agent. For notational simplicity, we use to denote the index set of agents involved in the LVF for the -th cluster, . As a result, if . Moreover, we denote by the number of agents involved in the LVF of cluster . Note that different LVFs for different clusters may involve overlapping agents; that is, it may hold that for different clusters , .
Suppose that the communication graph is available. To make each agent obtain all the individual rewards involved in its LVF, we design a local consensus algorithm based on which the agents involved in each LVF cooperatively estimate the average of their rewards by achieving average consensus. Define communication weight matrices , as follows:
(15)
where is the set of indices for clusters.
We assume that given an initial state, by implementing the global joint policy , each agent is able to obtain reward , at each time step , where is the number of evolution steps for policy evaluation, accounts for the random effects of both the initial states and the state transition of agents involved in agent ’s reward, , and , which is bounded, . Then we rewrite the obtained individual value of agent as . The quantity can follow any distribution as long as it has a zero mean and a bounded variance. The zero mean assumption is to ensure that agent is able to evaluate its individual value and thereby estimate the gradient accurately, if sufficiently many noisy observations are collected. The boundedness of is to guarantee the boundedness of each noisy observation. Similar assumptions have been made in other RL references, e.g., [39].
We further define as the observed LVF value of agent , and define as the observed GVF value.
The distributed RL algorithm is shown in Algorithm 1. (In Algorithm 1, the transition probability of the MAS is never used; this is consistent with most model-free RL algorithms in the literature.) The consensus algorithm (16) makes each agent in cluster estimate , which is the average of the reward sum over the agents involved in the corresponding LVF.
Input: Step-size , initial state distribution , number of learning epochs , number of evolution steps (for policy evaluation), iteration number for consensus seeking , initial policy parameter , smoothing radius .
Output: .
1. for do
2.  Sample .
3.  for all do (simultaneous implementation)
4.   Agent samples , implements policy for , observes . For , sets if , and sets otherwise.
5.   for do
6.    Agent sends , to its neighbors in , and computes according to the following updating law:
(16)
where , denotes the neighbor set of agent in the communication graph .
7.   end for
8.   Agent estimates its local gradient
(17)
where denotes the cluster including . Then agent updates its policy according to
(18)
9.  end for
10. end for
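For readability, the following is a condensed sketch of one learning epoch of Algorithm 1, under the simplifying assumption that each agent is involved in exactly one LVF group; `evaluate_rewards`, the weight matrices `W`, and `groups` are placeholders for the quantities defined above, and the scaling of the one-point gradient estimate is an assumption rather than the paper's exact (17).

```python
# A condensed sketch of one epoch of Algorithm 1 (local consensus + one-point ZOO).
import numpy as np

def algorithm1_epoch(theta, delta, eta, T_c, W, groups, evaluate_rewards, rng):
    """theta: list of per-agent parameter vectors; W[l]: doubly stochastic weight
    matrix restricted to groups[l]; evaluate_rewards: noisy discounted individual values."""
    n = len(theta)
    u = [rng.standard_normal(th.shape) for th in theta]            # perturbation directions
    mu = np.array(evaluate_rewards([th + delta * ui for th, ui in zip(theta, u)]), float)

    est = mu.copy()
    for l, members in enumerate(groups):                           # local consensus, cf. (16)
        z = est[members]
        for _ in range(T_c):
            z = W[l] @ z                                           # -> average LVF reward
        est[members] = z

    group_of = {i: l for l, members in enumerate(groups) for i in members}
    new_theta = []
    for i in range(n):                                             # ZOO step, cf. (17)-(18)
        n_l = len(groups[group_of[i]])
        grad_i = (n_l * est[i] / delta) * u[i]                     # one-point gradient estimate
        new_theta.append(theta[i] + eta * grad_i)
    return new_theta
```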
To ensure that Algorithm 1 works efficiently, we make the following assumption on graph .
Assumption 3
The communication graph is undirected, and the agents specified by form a connected component of for all .
The following lemma gives a sufficient condition for Assumption 3.
Lemma 4
Given that graph is undirected, Assumption 3 holds if .
Proof: Note that for each cluster , there must exist a path in from cluster to any agent in . Since is undirected, the agents in must be connected in .
Once a communication graph satisfying Assumption 3 is available, we design the communication weights such that the following assumption holds.
Assumption 4
is doubly stochastic, i.e., and , for all .
Assumption 4 guarantees that average consensus can be achieved among the agents involved in each LVF. Since one agent may be involved in the LVFs of multiple clusters, it may keep multiple different nonzero communication weights for the same communication link. From the definition of in (15), for all , and . Then for any . Moreover, let be the principal submatrix of obtained by removing the -th row and column for all ; then Assumption 4 implies that is doubly stochastic for all . Define ; it has been shown in [40] that under Assumption 4, we have .
Remark 2
When graph is strongly connected, all the agents form one cluster and achieve average consensus during the learning process. Algorithm 1 then reduces to a global consensus-based distributed RL algorithm. In fact, under any graph , the global consensus-based framework can always solve the distributed RL problem. However, when Assumption 1 holds, Algorithm 1 requires consensus to be achieved among smaller-size groups, therefore exhibiting a faster convergence rate. When the multi-agent network is of large scale, it is possible that the number of agents involved in each LVF is significantly smaller than the total number of agents in the whole network. In such scenarios, Algorithm 1 converges much faster than the global consensus-based algorithm due to two reasons: (i) the average consensus tasks are performed within smaller-size groups; (ii) the gradient estimation based on the LVF has a lower variance compared with that based on the GVF , see Remark 3 for more details.
IV-B Convergence Analysis
In this subsection, convergence analysis of Algorithm 1 will be presented. The following assumption is made to guarantee the solvability of the problem (2).
Assumption 5
The individual reward of each agent at any time is uniformly bounded, i.e., for all and .
Lemma 5
Under Assumption 5, there exist and such that for any , .
Lemma 5 implies that there exists an optimal policy for the RL problem (4), which is the premise of solving problem (4). Based on Lemma 5, we can bound and by and , respectively. The following lemma bounds the error between the actual LVF and the expectation of observed LVF.
Lemma 6
Under Assumption 5, the following holds for all and :
(19)
where , , is the number of agents involved in .
Let , the following lemma bounds the LVF estimation error.
Lemma 7
The following lemma bounds the variance of the zeroth-order oracle (17).
Remark 3
(Low Gradient Estimation Variance Induced by LVFs) Lemma 8 shows that the variance of each local zeroth-order oracle is mainly associated with in , which is the number of agents involved in the LVF for the -th cluster. If the policy evaluation is based on the global reward, the bound of will be . When the network is of a large scale, may be significantly larger than . As a result, the variance of the zeroth-order oracle is much higher than that in our case. Therefore, our algorithm has a significantly improved scalability to large-scale networks.
Theorem 1
(i). for any .
Remark 4
(Optimality Analysis) Theorem 1 (ii) implies convergence to a stationary point of , which is the smoothed value function. (The reason why we do not analyze the stationary point of is that we did not assume to be differentiable. Since is close to (as shown in Theorem 1 (i)), an optimal policy for will be a near-optimal policy for . If we further assume to have a Lipschitz continuous gradient, then the error of convergence to a stationary point of can be obtained by quantifying the error between and .) When the MARL problem satisfies “gradient domination” and the policy parameterization is complete, a stationary point is always the global optimum [41, 42]. Note that our formulation is general and contains cases that do not satisfy gradient domination. For example, as a special case of our formulation, the linear optimal distributed control problem has many undesired stationary points [43].
Remark 5
(Sample Complexity Analysis) According to Theorem 1, the sample complexity of Algorithm 1 is , which is worse than that of other ZOO-based algorithms in [44, Table 1]; this is mainly caused by the use of one-point zeroth-order oracles and the mild assumptions (non-smoothness and nonconvexity) on the value function. In Section VI, we provide an analysis of the advantage of using two-point zeroth-order oracles. Note that the lower bounds of , and are all positively associated with , which is the maximal number of agents involved in one LVF. According to the definition of in (8), is determined by the length of the paths starting from cluster in graph . This implies that the convergence rate depends on the maximal length of a path in , and having shorter paths is beneficial for improving sample efficiency and enhancing the convergence rate.
When Assumption 1 does not hold, Lemmas 7 and 8 and Theorem 1 remain valid. However, the LVF-based method then reduces to a GVF-based method and no longer exhibits advantages: there is only one cluster, each path achieves its maximum length, the distributed RL algorithm becomes centralized, and the sample complexity reaches its maximum.
V Distributed RL via Truncated Local Value Functions
Even if Assumption 1 holds, the sample complexity of Algorithm 1 may still be high due to the large size of some SCCs or some long paths in . In this section, motivated by [8], we resolve this issue by further dividing large SCCs into smaller SCCs (clusters) and ignoring agents that are far away when designing the LVF for each cluster in the graph. For each cluster, the approximation error turns out to depend on the distance between the ignored agents and this cluster. Different from [8], where each agent neglects the effects of other agents that are far away, our design makes each cluster neglect its effects on other agents that are far away. Moreover, in our setting, the agents in each cluster estimate their common LVF value via local consensus, whereas in [8], each agent has its unique LVF and it was not specified how this value can be obtained.
V-A Truncated Local Value Function Design
Different from the aforementioned SCC-based clustering for , we now artificially specify a clustering , where for distinct , and still corresponds to an SCC in . However, each cluster may no longer be a maximal SCC; that is, multiple clusters may together form a larger SCC of .
Next we define a distance function to describe how many steps are needed for the action of agent to affect another agent . According to Lemma 1 (i), when , there is always a path from to in . Let be the length of the shortest path (the length of a path refers to the number of edges it contains) from vertex to vertex in graph . The distance function is defined as
(24)
We clarify the following facts regarding . (i). It may happen that there is a path from to in but , see Fig. 3 as an example. Therefore, to exclude , we artificially defined instead of using directly to characterize the inter-agent distance. (ii). The distance function defined here is unidirectional and does not satisfy the symmetry property of the distance in metric space. (iii). Although the artificial SCC clustering is obtained from , the path length is calculated via graph because it always contains all the edges from to any . If is used instead, some agent in , may be missing.
We further define the distance from a cluster to an agent as . Denote by the maximum distance from cluster to any agent out of this cluster that can be affected by cluster . Since we have defined , it is observed that .
Given a cluster , for any agent , we define the following TLVF:
(25)
where is the set of agents involved in the TLVF of cluster , is a pre-specified truncation index describing the maximum distance from each cluster within which the agents are taken into account in the TLVF of cluster . Similarly, we define if , and otherwise, if , and otherwise.
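A minimal sketch of how the truncated index set in (25) could be computed is given below: a breadth-first search from the cluster that stops at the truncation distance; the edge set passed in stands for the composite graph along which the distance (24) is measured, and the reward-graph expansion is omitted for brevity.

```python
# A minimal sketch of the truncated index set for the TLVF of one cluster.
from collections import deque

def truncated_index_set(cluster, edges, kappa):
    """Agents at distance at most kappa from the cluster (cluster members included)."""
    out = {}
    for (u, v) in edges:
        out.setdefault(u, []).append(v)
    dist = {i: 0 for i in cluster}
    queue = deque(cluster)
    while queue:
        u = queue.popleft()
        if dist[u] == kappa:
            continue                          # do not expand beyond the truncation index
        for v in out.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

# kappa = 0 keeps only the cluster itself; larger kappa trades accuracy for cost.
print(truncated_index_set({1, 2}, [(2, 3), (3, 4), (4, 5)], kappa=1))   # {1, 2, 3}
```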
The following lemma bounds the error between the local gradients of the TLVF and the GVF.
Lemma 9
Lemma 9 implies that the error between and decays exponentially with the exponent . Therefore, when is sufficiently small, employing in the gradient ascent algorithm induces an acceptable error. This is the fundamental idea of our approach. In the next subsection, we present the detailed algorithm design and convergence analysis.
V-B Distributed RL with Convergence Analysis
Next we design a distributed RL algorithm based on the TLVF. It suffices to redesign the communication weights, so that the value of (instead of ) can be estimated for each agent . For any cluster , instead of using , we set the index set of agents involved in the LVF as .
Define the number of agents involved in each TLVF as . Then we have , and the equality holds if .
The -th communication weight matrix is then redesigned as
(27)
where .
Assumption 6
The communication graph is undirected, and the agents specified by form a connected component of for all .
Assumption 7
is doubly stochastic for all .
Note that Assumption 6 is milder than Assumption 3 because , implying that fewer communication links are needed when the TLVF method is employed. Moreover, when the communication graph is available, can be designed to meet Assumption 7 .
The distributed RL algorithm based on TLVFs can be obtained by simply replacing the communication weight matrices with , for all .
Lemma 10
Under Assumption 5, the following holds for all and :
(28)
Lemma 11
Theorem 2
(i). for any .
Remark 6
(Sample Complexity Analysis) The sample complexity provided in Theorem 2 is associated with , which may be significantly smaller than (depending on the choice of ) in Theorem 1. On the other hand, the convergence error in Theorem 2 has an extra term associated with . Therefore, there is a trade-off when choosing : the greater is, the smaller the convergence error will be, but the convergence rate may decrease. For example, when , we have if . In this case, we can choose , implying that each cluster only considers its effects on 6 agents other than this cluster, even when is strongly connected. Therefore, when the network is of a huge scale with long paths in graph , using the TLVFs can further reduce the sample complexity.
VI Variance Reduction by Two-Point Zeroth-Order Oracles
The two distributed RL algorithms proposed in the last two sections are based on the one-point zeroth-order oracle (17). We observe that Algorithm 1 remains efficient as long as is an unbiased estimate of and is bounded. Therefore, the two-point feedback oracles proposed in [36] and the residual feedback oracle in [44] can also be employed in Algorithm 1. In this section, we give a brief analysis of how the two-point zeroth-order oracle further reduces the gradient estimation variance.
Based on the LVF design in our work, the two-point feedback oracle for each agent at learning episode can be obtained as
(33)
where is the approximate estimation of via local consensus.
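The following is a minimal sketch of the two-point oracle in (33), assuming the standard symmetric-difference form; `lvf_value` is a placeholder for the consensus-based estimate of agent i's LVF under a perturbed policy, and the toy check at the end is illustrative only.

```python
# A minimal sketch of a two-point zeroth-order oracle (standard symmetric form).
import numpy as np

def two_point_gradient(theta_i, u_i, delta, n_l, lvf_value):
    """Gradient estimate for agent i from two evaluations at theta +/- delta*u."""
    f_plus = lvf_value(theta_i + delta * u_i)     # LVF estimate, positive perturbation
    f_minus = lvf_value(theta_i - delta * u_i)    # LVF estimate, negative perturbation
    return n_l * (f_plus - f_minus) / (2.0 * delta) * u_i

# Toy check on a quadratic: a single-sample estimate whose expectation over u
# equals the true gradient -2*theta of -||theta||^2.
rng = np.random.default_rng(2)
theta = np.array([1.0, -0.5])
u = rng.standard_normal(2)
g = two_point_gradient(theta, u, delta=1e-3, n_l=1, lvf_value=lambda th: -np.sum(th**2))
print(g, -2 * theta)
```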
Define , which is a Lipschitz constant of . Then we can show (34),
(34)
where and are the noises in the observations and , respectively, i.e., , ; the first inequality used the bound in (20) and the assumptions and , .
Comparisons with one-point feedback. Note that in (20) can be arbitrarily small as long as and are sufficiently large. Let us first consider the ideal case where the consensus estimation is perfect and the observation is exact, i.e., ( and ), then and . As a result, the upper bound of is independent of , whereas the upper bound of becomes , as shown in Lemma 8. This implies that the variance of the two-point zeroth-order oracle is independent of the reward value and the maximum path length , thus is more scalable than the one-point feedback. Now we consider a more practical scenario where both the consensus estimations and the observations are inexact. For convenience of analysis, we consider as an infinitesimal quantity and neglect terms in the upper bounds independent of the network scale. Then we have . In Lemma 8, we showed that the variance bound for the zeroth-order oracle with one-point feedback is . Therefore, when is small enough, the two-point zeroth-order oracle still outperforms the one-point feedback scheme in terms of lower variance and faster convergence speed.
VII Simulation Results
In this section, we present two examples. The first one shows the results of applying Algorithm 1 with the communication weight matrices (15) to the resource transfer problem in Example 1, and the second one shows the results of applying Algorithm 1 with the communication weight matrices (27) to a large-scale network scenario.
Example 2
We employ the distributed RL with LVFs in (10) to solve the problem in Example 1. To seek the optimal policy for agent to determine its action , we adopt the following parameterization for the policy function:
(35)
where is approximated by radial basis functions:
(36)
is the center of the -th feature for agent ; here and are set according to the ranges of and , respectively, such that , are approximately evenly distributed in the range of .
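A minimal sketch of the RBF parameterization in (35)-(36) is given below; the feature width, the number and range of the centers, and the squashing of the output into a transfer fraction are illustrative assumptions.

```python
# A minimal sketch of an RBF-parameterized policy with evenly distributed centers.
import numpy as np

def rbf_features(obs, centers, width=1.0):
    """phi_k(obs) = exp(-||obs - c_k||^2 / (2 width^2)) for each center c_k."""
    d2 = np.sum((obs[None, :] - centers) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def policy(theta_i, obs, centers):
    """Deterministic action: a linear combination of RBF features squashed to (0, 1)."""
    value = theta_i @ rbf_features(obs, centers)
    return 1.0 / (1.0 + np.exp(-value))           # fraction of resources to transfer

# Centers evenly distributed over an assumed observation range, in the spirit of (36).
centers = np.stack(np.meshgrid(np.linspace(0, 20, 5), np.linspace(0, 5, 5)), -1).reshape(-1, 2)
theta_i = np.zeros(len(centers))
print(policy(theta_i, np.array([10.0, 2.0]), centers))   # 0.5 before learning
```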
Set for all ; and are both set as random variables following the Gaussian distribution with mean 0 and variance 0.01, truncated to ; the number of evolution steps is ; and the number of learning epochs is , . The communication graph is set as , which satisfies Assumption 3. Let be the 0-1 weighted adjacency matrix of graph , that is, if and otherwise. Let . The communication weights are set as the Metropolis weights [45]:
(37)
where .
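A minimal sketch of computing the Metropolis weights in (37) for an undirected communication graph is given below; the example graph is illustrative, and the resulting matrix is symmetric and doubly stochastic, as required by Assumption 4.

```python
# A minimal sketch of Metropolis weights for an undirected graph.
import numpy as np

def metropolis_weights(n, edges):
    """W[i, j] = 1 / (1 + max(deg_i, deg_j)) for neighbors; diagonal takes the remainder."""
    deg = [sum(1 for e in edges if i in e) for i in range(n)]
    W = np.zeros((n, n))
    for (i, j) in edges:
        W[i, j] = W[j, i] = 1.0 / (1.0 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

# A path graph on 4 agents; both row and column sums equal one.
W = metropolis_weights(4, [(0, 1), (1, 2), (2, 3)])
print(np.allclose(W.sum(axis=0), 1.0), np.allclose(W.sum(axis=1), 1.0))
```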
By further setting the consensus iteration number , , and , Fig. 5 (left) depicts the evolution of the observed values of the GVF by implementing 4 different RL algorithms. The two boundaries of the shaded area are obtained by running each RL algorithm for 10 times and taking the upper bound and lower bound of in each learning episode. Here denotes the specific noise generated in the simulation, and is different in different learning processes. In each time of implementation, one perturbation vector is sampled and used for all the 4 algorithms during each learning episode . The centralized algorithm is the zeroth-order optimization algorithm based on global value evaluation, while the distributed algorithm is based on local value evaluation (Algorithm 1). The distributed two-point feedback algorithm is Algorithm 1 with replaced by in (33). We observe that the distributed algorithms are always faster than the centralized algorithms. Fig. 5 (middle) and Fig. 5 (right) show the comparison of centralized and distributed one-point feedback algorithms, and the comparison of centralized and distributed two-point feedback algorithms, respectively. From these two figures, it is clear that the distributed algorithms always exhibit lower variances in contrast to the centralized algorithms. This implies that policy evaluation based on LVFs is more robust than that based on the GVF.
[Figure 5: evolution of the observed global value under the four RL algorithms in Example 2 (left: all four algorithms; middle: centralized vs. distributed one-point feedback; right: centralized vs. distributed two-point feedback).]
Example 3
Next, in a setting similar to Example 1, we consider an extendable example with warehouses. By regarding 1 and as the same warehouse, we set
The reward graph is set as . According to the definitions introduced in Subsection III-A, we have if is odd, if is even, for all .
The communication graph is set as , implying that each agent can estimate its LVF value without using the local consensus algorithm. The learning iteration step is set as . Other parameter settings are the same as those in Example 2. By implementing the four different RL algorithms, Fig. 6 shows the results for , , and , respectively. Observe that the convergence time for the distributed algorithms remains almost invariant for networks of different scales, whereas the centralized algorithms converge much more slowly as the network scale increases. Moreover, the two-point oracle always outperforms the one-point oracle in terms of lower variance and faster convergence. These observations are consistent with our analysis in Remark 2 and Section VI.
[Figure 6: results of the four RL algorithms for networks of different scales in Example 3.]
[Figure 7: results of the TLVF-based distributed RL algorithm for different truncation indices in Example 4.]
Example 4
Now we consider warehouses with connected undirected state graph and observation graph,
where warehouse 1 is also viewed as warehouse .
The edge set of graph is set as , implying that the individual reward of each agent only depends on its own state and action. In this case, the learning graph is complete because for any . Then Algorithm 1 with the LVF setting in (10) becomes a centralized algorithm. Hence, we employ the TLVF defined in (25). The communication graph is set as . By setting , , and choosing the same parameters , , as those in Example 2, the simulation results for , 1 and 2 are shown in Fig. 7. We observe that the distributed RL algorithm with achieves the lowest variance and the fastest convergence rate. This means that in this example, the TLVF approximation error does not harm the improved performance of our RL algorithm. Moreover, the smaller is, the faster the algorithm converges, which is consistent with the analysis in Remark 6.
VIII Conclusions
We have identified three graphs inherently embedded in MARL, namely, the state graph, the observation graph, and the reward graph. A connection between these three graphs and the learning graph was established, based on which we proposed our distributed RL algorithm via LVFs and derived conditions on the communication graph required in RL. It was shown that the LVFs constructed based on the aforementioned three graphs are able to play the same role as the GVF in gradient estimation. To adapt our algorithm to MARL with general graphs, we have designed TLVFs associated with an artificially specified truncation index. The choice of this index is a trade-off between variance reduction and gradient approximation errors. Simulation examples have shown that our proposed algorithms with LVFs or TLVFs significantly outperform RL algorithms based on the GVF, especially for large-scale MARL problems.
The RL algorithms proposed in this work are policy gradient algorithms based on ZOO, which are general but may not be the best choice for specific applications. In the future, we are interested in exploring how our graph-theoretic approach can be combined with other RL techniques to facilitate learning for large-scale network systems.
IX Appendix A: Proofs of Lemmas and Theorems
Proof of Lemma 1. (i). Suppose that for all . Since has SCCs, there must exist distinct such that is not reachable from in . However, implies that there exists such that , and , implying that is reachable from in , which is a contradiction.
(ii). Suppose only has 1 SCC. According to (6), for any . This implies that for any , which contradicts Assumption 1.
Let be the set of vertices in graph that can reach and vertex . Note that for each agent , its action at time , i.e., , is only affected by the partial observation , the current state , and policy . Therefore, there exists a function such that
(39)
Similarly, according to the definition of , there exists another function such that
(40)
together with (39), we have
(41)
and
(42)
where .
According to (41) and (42), we conclude that is affected by only if . As a result, is affected by only if .
Next we show once , it must hold that , i.e., will not affect . By the definition in (8), implies that . That is, there are no vertices in that are reachable from vertex in graph . As a result, .
Then we conclude that never influences for any if . Therefore, is independent of .
Proof for (i): it has been shown in [36] that
(43)
where . Define
(44)
then . It follows that
(45)
Let , . Since we have proved that is independent of , the following holds:
Therefore,
(46)
Proof for (ii): differentiability of for all implies that for all and are differentiable as well. Since is independent of , we have
(47)
This completes the proof.
Proof of Lemma 7. Note that the following holds for any step :
(50)
where the last equality holds because is doubly stochastic. It follows that for any .
Next we evaluate the estimation error. The following holds:
(51)
where the second equality used (50) and the third equality holds because .
As a result, for any , we have
(52)
where the second inequality used (51) and Lemma 6, the third inequality used Lemma 6 again, and the last inequality used the uniform bound of and the fact that .
Due to Assumption 4, we have . Let . Then we have
(53)
The proof is completed.
Proof of Lemma 8. According to (17), we have
(54)
where is defined in (43), the first inequality used (21), and the second inequality holds because , which has been proved in [36, Lemma 1].
Proof of Theorem 1. Statement (i) can be obtained by using the Lipschitz continuity of . The details have been shown in [36, Theorem 1].
Now we prove statement (ii). According to Assumption 2 and [36, Lemma 2], the gradient of is -Lipschitz continuous. Let , the following holds:
(55)
which implies that
(56)
Moreover, Lemma 7 implies that
(58)
where the first equality used Lemma 2 and , and the inequality used (20). Summing the inequality above over from to yields
(59)
where .
Combining (59) and (56), and taking expectation on both sides, we obtain
(60)
where the second inequality employed (57).
Note that under the conditions on , we have
(61)
Summing (60) over from 0 to and dividing both sides by yields
(62)
where the second inequality used (61), the last inequality used the conditions on and . The proof is completed.
Let be the individual reward of agent at time under the global policy . Due to Lemma 2, it suffices to analyze .
Let , and . Then
(63)
Notice that if , then the reward of each agent is not affected by cluster at any time step . Therefore, , which leads to (64),
(64)
where the second equality used the two-point feedback zeroth-order oracle [36], the third equality used the definition of , the second inequality used Assumption 2, and the third inequality used Hölder's inequality. The proof is completed.
Proof of Theorem 2. Here we only show the parts that differ from the proof of Theorem 1.
(65)
where , the first equality used and , the inequality used . Then we have
where , .
It follows that
(66)
where .
Therefore, the following holds:
(67)
The proof is completed.
X Appendix B: Properties of Cluster-Wise Graphs
In this appendix, we will analyze the relationships among graphs , from the cluster-wise perspective.
Inspired by the observations in Subsection III-B, the graph in Fig. 4 can be interpreted from a cluster perspective. By regarding each cluster (corresponding to a maximal SCC in ) as a node, and adding a directional edge between any pair of nodes as long as there is at least one edge from cluster to cluster in , we define the cluster-wise graph of as the graph in Fig. 8 (a). (Note that the clustering in this paper is only conducted once, for graph ; the clusters discussed in other graphs still correspond to SCCs in graph .) Similarly, we define cluster-wise graphs for and as and , respectively. In Example 1, due to the setting that there are no edges between different clusters in , it holds that , as shown in Fig. 8 (b). Note that since .
[Figure 8: (a) the cluster-wise graph of the learning graph; (b) cluster-wise graphs of the coupling graphs in Example 1.]
The cluster-wise graph for a graph is constructed by regarding each cluster as one node, and adding one edge between two nodes if there is an edge in between two agents belonging to these two clusters, respectively. Note that an SCC in remains an SCC in and . Therefore, an edge from cluster to cluster in always implies that any vertex is reachable from any vertex in the corresponding node-wise graph . Based on this fact, we give a specific result regarding the relationship between , and below.
Lemma 13
Given , , and the induced , the following statements are true:
(i) ;
(ii) if ;
(iii) if and only if for any .
Proof. (i). Given , it must hold that for any and . Due to the definition of , we have (i.e., ) for any and , implying that . Therefore, . It follows that .
On the other hand, for any , we have for some and . According to Lemma 3, . Hence, .
(ii). By the virtue of statement (i), it suffices to show that if , which is true due to the definition of .
(iii). Sufficiency. The condition implies that the reachability between any two vertices in is the same as that in . Therefore, .
Necessity. Suppose that there exists an edge such that is not reachable from in . Then and must belong to two different clusters and , respectively. It follows that and , which contradicts .
In the existing literature on MARL, it is common to see the assumption that . In this scenario, , and therefore it always holds that .
References
- [1] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction. MIT press, 2018.
- [2] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton et al., “Mastering the game of go without human knowledge,” Nature, vol. 550, no. 7676, pp. 354–359, 2017.
- [3] A. S. Polydoros and L. Nalpantidis, “Survey of model-based reinforcement learning: Applications on robotics,” Journal of Intelligent & Robotic Systems, vol. 86, no. 2, pp. 153–173, 2017.
- [4] S. Mukherjee, A. Chakrabortty, H. Bai, A. Darvishi, and B. Fardanesh, “Scalable designs for reinforcement learning-based wide-area damping control,” IEEE Transactions on Smart Grid, vol. 12, no. 3, pp. 2389–2401, 2021.
- [5] A. OroojlooyJadid and D. Hajinezhad, “A review of cooperative multi-agent deep reinforcement learning,” arXiv preprint arXiv:1908.03963, 2019.
- [6] S. Gronauer and K. Diepold, “Multi-agent deep reinforcement learning: a survey,” Artificial Intelligence Review, pp. 1–49, 2021.
- [7] K. Zhang, Z. Yang, and T. Başar, “Multi-agent reinforcement learning: A selective overview of theories and algorithms,” Handbook of Reinforcement Learning and Control, pp. 321–384, 2021.
- [8] G. Qu, A. Wierman, and N. Li, “Scalable reinforcement learning of localized policies for multi-agent networked systems,” in Learning for Dynamics and Control. PMLR, 2020, pp. 256–266.
- [9] Y. Lin, G. Qu, L. Huang, and A. Wierman, “Multi-agent reinforcement learning in stochastic networked systems,” in Thirty-Fifth Conference on Neural Information Processing Systems, 2021.
- [10] S. Omidshafiei, J. Pazis, C. Amato, J. P. How, and J. Vian, “Deep decentralized multi-task multi-agent reinforcement learning under partial observability,” in International Conference on Machine Learning. PMLR, 2017, pp. 2681–2690.
- [11] S. Nayak, K. Choi, W. Ding, S. Dolan, K. Gopalakrishnan, and H. Balakrishnan, “Scalable multi-agent reinforcement learning through intelligent information aggregation,” in International Conference on Machine Learning. PMLR, 2023, pp. 25817–25833.
- [12] Y. Zhang and M. M. Zavlanos, “Cooperative multi-agent reinforcement learning with partial observations,” IEEE Transactions on Automatic Control, 2023, doi: 10.1109/TAC.2023.3288025.
- [13] K. Zhang, Z. Yang, H. Liu, T. Zhang, and T. Basar, “Fully decentralized multi-agent reinforcement learning with networked agents,” in International Conference on Machine Learning. PMLR, 2018, pp. 5872–5881.
- [14] C. Guestrin, M. Lagoudakis, and R. Parr, “Coordinated reinforcement learning,” in ICML, vol. 2. Citeseer, 2002, pp. 227–234.
- [15] J. R. Kok and N. Vlassis, “Collaborative multiagent reinforcement learning by payoff propagation,” Journal of Machine Learning Research, vol. 7, pp. 1789–1828, 2006.
- [16] G. Jing, H. Bai, J. George, A. Chakrabortty, and P. K. Sharma, “Asynchronous distributed reinforcement learning for LQR control via zeroth-order block coordinate descent,” arXiv preprint arXiv:2107.12416, 2021.
- [17] Y. Li, Y. Tang, R. Zhang, and N. Li, “Distributed reinforcement learning for decentralized linear quadratic control: A derivative-free policy optimization approach,” IEEE Transactions on Automatic Control, 2021.
- [18] D. Görges, “Distributed adaptive linear quadratic control using distributed reinforcement learning,” IFAC-PapersOnLine, vol. 52, no. 11, pp. 218–223, 2019.
- [19] G. Jing, H. Bai, J. George, and A. Chakrabortty, “Model-free optimal control of linear multi-agent systems via decomposition and hierarchical approximation,” IEEE Transactions on Control of Network Systems, 2021.
- [20] M. L. Littman, “Markov games as a framework for multi-agent reinforcement learning,” in Machine Learning Proceedings 1994. Elsevier, 1994, pp. 157–163.
- [21] S. Kar, J. M. Moura, and H. V. Poor, “QD-learning: A collaborative distributed strategy for multi-agent reinforcement learning through consensus + innovations,” IEEE Transactions on Signal Processing, vol. 61, no. 7, pp. 1848–1862, 2013.
- [22] S. V. Macua, A. Tukiainen, D. G.-O. Hernández, D. Baldazo, E. M. de Cote, and S. Zazo, “Diff-dac: Distributed actor-critic for average multitask deep reinforcement learning,” in Adaptive Learning Agents (ALA) Conference, 2018.
- [23] R. S. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, and D. Precup, “Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction,” in The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, 2011, pp. 761–768.
- [24] A. Olshevsky, “Linear time average consensus on fixed graphs and implications for decentralized optimization and multi-agent control,” arXiv preprint arXiv:1411.4186, 2014.
- [25] G. Jing, H. Bai, J. George, A. Chakrabortty, and P. K. Sharma, “Distributed cooperative multi-agent reinforcement learning with directed coordination graph,” in 2022 American Control Conference (ACC), to appear. IEEE, 2022.
- [26] G. Qu, Y. Lin, A. Wierman, and N. Li, “Scalable multi-agent reinforcement learning for networked systems with average reward,” Advances in Neural Information Processing Systems, vol. 33, 2020.
- [27] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V. Zambaldi, M. Jaderberg, M. Lanctot, N. Sonnerat, J. Z. Leibo, K. Tuyls et al., “Value-decomposition networks for cooperative multi-agent learning,” arXiv preprint arXiv:1706.05296, 2017.
- [28] T. Zhang, Y. Li, C. Wang, G. Xie, and Z. Lu, “FOP: Factorizing optimal joint policy of maximum-entropy multi-agent reinforcement learning,” in International Conference on Machine Learning. PMLR, 2021, pp. 12491–12500.
- [29] M. Fazel, R. Ge, S. Kakade, and M. Mesbahi, “Global convergence of policy gradient methods for the linear quadratic regulator,” in International Conference on Machine Learning. PMLR, 2018, pp. 1467–1476.
- [30] D. Malik, A. Pananjady, K. Bhatia, K. Khamaru, P. L. Bartlett, and M. J. Wainwright, “Derivative-free methods for policy optimization: Guarantees for linear quadratic systems,” Journal of Machine Learning Research, vol. 21, no. 21, pp. 1–51, 2020.
- [31] D. Hajinezhad, M. Hong, and A. Garcia, “Zone: Zeroth-order nonconvex multiagent optimization over networks,” IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 3995–4010, 2019.
- [32] C. Gratton, N. K. Venkategowda, R. Arablouei, and S. Werner, “Privacy-preserving distributed zeroth-order optimization,” arXiv preprint arXiv:2008.13468, 2020.
- [33] Y. Tang, J. Zhang, and N. Li, “Distributed zero-order algorithms for nonconvex multi-agent optimization,” IEEE Transactions on Control of Network Systems, 2020.
- [34] A. Akhavan, M. Pontil, and A. B. Tsybakov, “Distributed zero-order optimization under adversarial noise,” arXiv preprint arXiv:2102.01121, 2021.
- [35] T. Chen, K. Zhang, G. B. Giannakis, and T. Basar, “Communication-efficient policy gradient methods for distributed reinforcement learning,” IEEE Transactions on Control of Network Systems, 2021.
- [36] Y. Nesterov and V. Spokoiny, “Random gradient-free minimization of convex functions,” Foundations of Computational Mathematics, vol. 17, no. 2, pp. 527–566, 2017.
- [37] M. Pirotta, M. Restelli, and L. Bascetta, “Policy gradient in lipschitz markov decision processes,” Machine Learning, vol. 100, no. 2, pp. 255–283, 2015.
- [38] H. Kumar, D. S. Kalogerias, G. J. Pappas, and A. Ribeiro, “Zeroth-order deterministic policy gradient,” arXiv preprint arXiv:2006.07314, 2020.
- [39] A. Vemula, W. Sun, and J. Bagnell, “Contrasting exploration in parameter and action space: A zeroth-order optimization perspective,” in The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019, pp. 2926–2935.
- [40] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems & Control Letters, vol. 53, no. 1, pp. 65–78, 2004.
- [41] J. Bhandari and D. Russo, “Global optimality guarantees for policy gradient methods,” arXiv preprint arXiv:1906.01786, 2019.
- [42] A. Agarwal, S. M. Kakade, J. D. Lee, and G. Mahajan, “On the theory of policy gradient methods: Optimality, approximation, and distribution shift.” Journal of Machine Learning Research, vol. 22, no. 98, pp. 1–76, 2021.
- [43] H. Feng and J. Lavaei, “On the exponential number of connected components for the feasible set of optimal decentralized control problems,” in 2019 American Control Conference (ACC). IEEE, 2019, pp. 1430–1437.
- [44] Y. Zhang, Y. Zhou, K. Ji, and M. M. Zavlanos, “A new one-point residual-feedback oracle for black-box learning and control,” Automatica, p. 110006, 2021.
- [45] L. Xiao, S. Boyd, and S. Lall, “A scheme for robust distributed sensor fusion based on average consensus,” in IPSN 2005. Fourth International Symposium on Information Processing in Sensor Networks, 2005. IEEE, 2005, pp. 63–70.
Gangshan Jing received the Ph.D. degree in Control Theory and Control Engineering from Xidian University, Xi’an, China, in 2018. From 2016 to 2017, he was a research assistant at Hong Kong Polytechnic University. From 2018 to 2019, he was a postdoctoral researcher at Ohio State University. From 2019 to 2021, he was a postdoctoral researcher at North Carolina State University. Since December 2021, he has been an assistant professor in the School of Automation at Chongqing University. His research interests include control, optimization, and machine learning for network systems.
He Bai received his Ph.D. degree in Electrical Engineering from Rensselaer Polytechnic Institute, Troy, NY, in 2009. From 2009 to 2010, he was a postdoctoral researcher at Northwestern University, Evanston, IL. From 2010 to 2015, he was a Senior Research and Development Scientist at UtopiaCompression Corporation, Los Angeles, CA. In 2015, he joined the School of Mechanical and Aerospace Engineering at Oklahoma State University, Stillwater, OK, as an assistant professor. His research interests include distributed estimation, control and learning, reinforcement learning, nonlinear control, and robotics.
Jemin George received his M.S. (’07) and Ph.D. (’10) in Aerospace Engineering from the State University of New York at Buffalo. Prior to joining ARL in 2010, he worked at the U.S. Air Force Research Laboratory’s Space Vehicles Directorate and the National Aeronautics and Space Administration’s Langley Aerospace Research Center. From 2014 to 2017, he was a Visiting Scholar at Northwestern University, Evanston, IL. His principal research interests include decentralized/distributed learning, stochastic systems, control theory, nonlinear estimation/filtering, networked sensing, and information fusion.
Aranya Chakrabortty received the Ph.D. degree in Electrical Engineering from Rensselaer Polytechnic Institute, NY, in 2008. From 2008 to 2009, he was a postdoctoral research associate at the University of Washington, Seattle, WA. From 2009 to 2010, he was an assistant professor at Texas Tech University, Lubbock, TX. In 2010, he joined the Electrical and Computer Engineering department at North Carolina State University, Raleigh, NC, where he is currently a Professor. His research interests are in all branches of control theory with applications to electric power systems. He received the NSF CAREER award in 2011.
Piyush Sharma received his M.S. and Ph.D. degrees in Applied Mathematics from the University of Puerto Rico and Delaware State University, respectively. He has government and industry work experience. Currently, he is with the U.S. Army as an AI Coordinator at ATEC; earlier, he was a Computer Scientist at DEVCOM ARL. Prior to joining ARL, he worked at Infosys’ Data Analytics Unit (DNA) as a Senior Associate Data Scientist responsible for thought leadership and solving stakeholders’ problems.