
1 Zhejiang University, Hangzhou, Zhejiang Province, China
  {luojieting,baiseliao}@zju.edu.cn
2 Utrecht University, Utrecht, the Netherlands
  J.J.C.Meyer@uu.nl

A Formal Framework for Reasoning about Agents’ Independence in Self-organizing Multi-agent Systems

Jieting Luo 1    Beishui Liao 1    John-Jules Meyer 2
Abstract

Self-organization is a process where a stable pattern is formed by the cooperative behavior between parts of an initially disordered system without external control or influence. It has been introduced to multi-agent systems as an internal control process or mechanism to solve difficult problems spontaneously. However, because a self-organizing multi-agent system has autonomous agents and local interactions between them, it is difficult to predict the behavior of the system from the behavior of the local agents we design. This paper proposes a logic-based framework of self-organizing multi-agent systems, where agents interact with each other by following their prescribed local rules. The dependence relation between coalitions of agents regarding their contributions to the global behavior of the system is reasoned about from the structural and semantic perspectives. We show that the computational complexity of verifying such a self-organizing multi-agent system is in exponential time. We then combine our framework with graph theory to decompose a system into different coalitions located in different layers, which allows us to verify agents’ full contributions more efficiently. The resulting information about agents’ full contributions allows us to understand the complex link between local agent behavior and system level behavior in a self-organizing multi-agent system. Finally, we show how we can use our framework to model a constraint satisfaction problem.

Keywords:
Self-organization, logic, Multi-agent Systems, Graph Theory, Verification

1 Introduction

In modern society, artificial intelligence has been applied in many industries such as health care, retail and traffic. Since nature presents elegant ways of solving problems, biological insights have been a source of inspiration for the development of several techniques and methods to solve complex engineering problems. One example is the adoption of self-organization from complex systems. Self-organization is a process where a stable pattern is formed by the cooperative behavior between parts of an initially disordered system without external control or influence. It has been introduced to multi-agent systems as an internal control process or mechanism to solve difficult problems spontaneously, especially when the system operates in an open environment for which no perfect a priori design can be guaranteed [28][25][27]. One typical example using self-organization mechanisms is ant colony optimization [13], where ants collaborate to find optimal solutions by laying down pheromone trails. In a wireless mobile sensor network, robots equipped with sensors can deploy themselves to achieve optimal sensing coverage even when the system designer is not aware of the robots' interests or the environment in which they operate [18].

However, making a self-organizing system is highly challenging [17][32]. The traditional development of a multi-agent system is a top-down process that starts from the specification of the goal that the system needs to achieve and proceeds to the development of specific agents. In this way, the goal of the system is guaranteed to be achieved if the specific agents are implemented successfully. As shown in the hierarchical structure in Figure 1 (left), the global objective can be achieved by modules E and F, module E can be refined into agents A and B, and module F can be refined into agents B, C and D. However, such an approach cannot be applied to the development of self-organizing multi-agent systems: because a self-organizing multi-agent system has autonomous agents and local interactions between them, its development is usually a bottom-up process that starts from defining local components and proceeds to examining global behavior. The distributed structure in Figure 1 (right) represents a self-organizing multi-agent system, where agents A, B, C and D interact with each other. Because of that, it is difficult to predict the behavior of the system from the system specification of the autonomous agents and the local interactions between them. Consequently, a self-organizing multi-agent system is usually evaluated through implementation. In other words, the complex link between local agent behavior and system-level behavior in a self-organizing multi-agent system makes implementation the usual way of evaluating correctness. There have been some methodologies for developing self-organizing multi-agent systems (such as ADELFE [3][4]), but they do not explain the complex link between local agent behavior and system-level behavior of a self-organizing multi-agent system. The community of Self-Adaptive and Self-Organizing Systems (SASO) also highlights that it is still challenging to investigate how micro-level behavior leads to desirable macro-level outcomes. What if we could understand the complex link between micro-level agent behavior and macro-level system behavior in a self-organizing multi-agent system? If an approach were available that helps us understand how global system behavior emerges from agents' local interactions, the development of self-organizing multi-agent systems could be facilitated. For example, if we know that a coalition of agents brings about a property independently and we want to change that property, all we have to do is reconfigure the behavior of that coalition instead of the behavior of agents outside that coalition. As we can see, the independence relation between coalitions of agents in self-organizing multi-agent systems is a crucial issue to be investigated.

Figure 1: A comparison between top-down approach (left) and bottom-up approach (right).

In theoretical computer science, logic has been used for proving the correctness of a system [11][10]. Instead of implementing the system with respect to a specification, we can verify whether the system specification fulfills the global objective by checking logical formulas. That indeed provides a new way of evaluating a self-organizing multi-agent system apart from implementation. In this paper, we propose a logic-based framework of self-organizing multi-agent systems, where agents interact with each other by following their prescribed local rules. Based on the local rules, we define a structural property called an independent component: a coalition of agents that does not get input from agents outside the coalition. Our semantics and the structure derived from communication between agents allow us not only to verify the behavior of the system, but also to reason about the independence relation between coalitions of agents regarding their contributions to the global system behavior from two perspectives. Moreover, we propose a layered approach to decompose a self-organizing multi-agent system into different coalitions, which allows us to check agents' full contributions more efficiently. The resulting information about agents' full contributions allows us to understand the complex link between local agent behavior and system-level behavior in a self-organizing multi-agent system. We finally show how we can use our framework to model a constraint satisfaction problem, where a solution based on self-organization is used.

The rest of the paper is organized as follows: Section 2 introduces the abstract framework to represent a self-organizing multi-agent system, proposes the notion of independent components, and provides the semantics of our framework to reason about agents' independence in terms of their contributions to the global behavior of the system; the model-checking problem is investigated in Section 3; Section 4 proposes a layered approach to decompose a self-organizing multi-agent system; after that, we show how to use our framework to model constraint satisfaction problems; finally, related work and conclusion are provided in the last two sections.

2 Abstract Framework

In this section, we propose the model used in this paper, a concurrent game structure extended with local rules, and define the structural property of independent components, whose behavior is independent of the behavior of the agents outside the components.

2.1 Self-organizing Multi-agent Systems

The semantic structure used in this paper is the concurrent game structure (CGS). It is basically a model where agents simultaneously choose actions that collectively bring the system from the current state to a successor state. Compared to other Kripke models of transition systems, each transition in a CGS is labeled with the collective action and the agents who perform it. Moreover, we treat actions as first-class entities instead of using choices that are identified by their possible outcomes. Formally,

Definition 2.1

A concurrent game structure is a tuple $\mathcal{S}=(k,Q,\pi,\Pi,ACT,d,\delta)$ such that:

  • A natural number $k\geq 1$ of agents, and the set of all agents is $\Sigma=\{1,\ldots,k\}$; we use $A$ to denote a coalition of agents $A\subseteq\Sigma$;

  • A finite set $Q$ of states;

  • A finite set $\Pi$ of propositions;

  • A labeling function $\pi$ which maps each state $q\in Q$ to the subset of propositions which are true at $q$; thus, for each $q\in Q$ we have $\pi(q)\subseteq\Pi$;

  • A finite set $ACT$ of actions;

  • For each agent $i\in\Sigma$ and state $q\in Q$, $d_{i}(q)\subseteq ACT$ is the non-empty set of actions available to agent $i$ in $q$; $D(q)=d_{1}(q)\times\ldots\times d_{k}(q)$ is the set of joint actions in $q$; given a state $q\in Q$, an action vector is a tuple $\langle\alpha_{1},\ldots,\alpha_{k}\rangle$ such that $\alpha_{i}\in d_{i}(q)$;

  • A function $\delta$ which maps each state $q\in Q$ and joint action $\langle\alpha_{1},\ldots,\alpha_{k}\rangle\in D(q)$ to the state that results from $q$ if each agent adopts its action in the action vector; thus, for each $q\in Q$ and each $\langle\alpha_{1},\ldots,\alpha_{k}\rangle\in D(q)$ we have $\delta(q,\langle\alpha_{1},\ldots,\alpha_{k}\rangle)\in Q$.

Note that the model is deterministic: the same update function adopted in the same state will always result in the same resulting state. A computation over $\mathcal{S}$ is an infinite sequence $\lambda=q_{0},q_{1},q_{2},\ldots$ of states such that for all positions $i\geq 0$, there is a joint action $\langle\alpha_{1},\ldots,\alpha_{k}\rangle\in D(q_{i})$ such that $\delta(q_{i},\langle\alpha_{1},\ldots,\alpha_{k}\rangle)=q_{i+1}$. For a computation $\lambda$ and a position $i\geq 0$, we use $\lambda[i]$ to denote the $i$th state of $\lambda$. More elaboration of concurrent game structures can be found in [2].
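For readers who prefer a computational view, the following is a minimal sketch of Definition 2.1 in Python (the class and field names are our own and purely illustrative): a concurrent game structure as a data object together with one deterministic transition step.

from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Tuple

State = str
Action = str
JointAction = Tuple[Action, ...]   # one action per agent, in agent order

@dataclass
class CGS:
    agents: Tuple[int, ...]                            # Sigma = (1, ..., k)
    states: FrozenSet[State]                           # Q
    props: Dict[State, FrozenSet[str]]                 # labeling pi(q)
    moves: Callable[[int, State], FrozenSet[Action]]   # d_i(q)
    delta: Callable[[State, JointAction], State]       # transition function

    def step(self, q: State, joint: JointAction) -> State:
        # the joint action must be in D(q): every agent's action is available in q
        assert all(a in self.moves(i, q) for i, a in zip(self.agents, joint))
        return self.delta(q, joint)

Later sketches in this paper reuse this CGS object whenever a transition system is needed to run on.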

Self-organization has long been used in multi-agent systems to solve various problems [32][16]. It is a mechanism or process that enables a system to accomplish a difficult task spontaneously through the cooperative behavior of its agents [12]. In particular, agents in a self-organizing multi-agent system have a local view of the system, and the system reaches a desired state spontaneously without being guided by any external controller. In this paper, we argue that the cooperative behavior is guided by prescribed local rules that agents are supposed to follow, with communication between agents as a prerequisite. Therefore, we can define a self-organizing multi-agent system as a concurrent game structure together with a set of local rules for agents to follow. For example, in ant colony optimization algorithms, ants are required to record their positions and the quality of their solutions (by laying down pheromones) so that in later iterations more ants locate better solutions. Before defining such local rules, we first define what to communicate, which is given by an internal function.

Definition 2.2 (Internal Functions)

Given a concurrent game structure $\mathcal{S}$, the internal function of an agent $i$ is a function $m_{i}:Q\to\mathcal{L}_{prop}$ that maps a state $q\in Q$ to a propositional formula over $\pi(q)$.

The internal function returns the information that is provided by a participating agent itself at a given state and might differ from agent to agent. Depending on the application, we might interpret $m_{i}(q)$ differently. For example, vehicles in a busy traffic situation are required to communicate their urgencies, and robot sensors in a self-deploying sensing network are required to communicate their sensing areas. Here we assume that agents are capable of processing their local rules with the communicated information. A local rule is defined based on agents' communication as follows:

Definition 2.3 (Abstract Local Rules)

Given a concurrent game structure $\mathcal{S}$, an abstract local rule for an agent $a$ is a tuple $\langle\tau_{a},\gamma_{a}\rangle$ consisting of a function $\tau_{a}(q)$ that maps a state $q\in Q$ to a subset of agents, that is, $\tau_{a}(q)\subseteq\Sigma$, and a function $\gamma_{a}(M(q))$ that maps $M(q)=\{m_{i}(q)\mid i\in\tau_{a}(q)\}$ to an action available in state $q$ to agent $a$, that is, $\gamma_{a}(M(q))\in d_{a}(q)$. We denote the set of all the abstract local rules as $\Gamma$ and the subset of abstract local rules designed for a coalition of agents $A$ as $\Gamma_{A}$.

An abstract local rule consists of two parts: the first part $\tau_{a}(q)$ states the agents with whom agent $a$ is supposed to communicate in state $q$, and the second part $\gamma_{a}$ states the action that agent $a$ is supposed to take given the result of communicating with the agents in $\tau_{a}(q)$ about their internals. In [2] and [19], a rule (or a norm) is defined as a mapping $\gamma(q)$ that explicitly prescribes what agents need to do in a given state, which requires that system designers have complete information about the system, including agents' internals, so that the desired state as well as the legal computations can be identified. In contrast, a local rule in this paper is defined based on agents' communication. Different participating agents might have different internal functions, making the communication results, and thus the actions that are required to be taken, different. Hence, the system allows agents to find out the desired state and how to get there in a self-organizing way. We see local rules not only as constraints but also as guidance on agents' behavior: an agent does not know what to do if it does not communicate with other agents. Therefore, we exclude the case where agents get no constraint from their respective local rules. The notation $out$ denotes a set of computations, and $out(q,\Gamma_{A})$ is the set of computations starting from state $q$ where agents in coalition $A$ follow their respective local rules in $\Gamma_{A}$. A computation $\lambda=q_{0},q_{1},q_{2},\ldots$ is in $out(q_{0},\Gamma_{A})$ if and only if for all positions $i\geq 0$ there is a move vector $\langle\alpha_{1},\ldots,\alpha_{k}\rangle\in D(\lambda[i])$ such that $\delta(\lambda[i],\langle\alpha_{1},\ldots,\alpha_{k}\rangle)=\lambda[i+1]$ and for all $a\in A$ it is the case that $\alpha_{a}=\gamma_{a}(M(\lambda[i]))$. Because the function $\gamma_{a}$ returns exactly one action that agent $a$ is allowed to take in any state, there is only one computation in the set $out(q,\Gamma_{\Sigma})$, which will be denoted as $\lambda^{*}(q)$ (dropping the surrounding braces) in the rest of the paper. Now we are ready to define a self-organizing multi-agent system. Formally,
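To make the role of local rules concrete, the following sketch (in Python, reusing the illustrative CGS object above; the dictionaries internal, tau and gamma stand for Definitions 2.2 and 2.3 and are our own names) generates a finite prefix of the unique computation $\lambda^{*}(q_{0})$ in which every agent follows its local rule.

def run_somas(cgs, internal, tau, gamma, q0, horizon=10):
    # internal[i](q)     -> m_i(q), the information agent i provides in q
    # tau[a](q)          -> the set of agents with whom a communicates in q
    # gamma[a](messages) -> the action prescribed for a, given the communicated information
    prefix = [q0]
    q = q0
    for _ in range(horizon):
        joint = []
        for a in cgs.agents:
            messages = {i: internal[i](q) for i in tau[a](q)}   # M(q)
            action = gamma[a](messages)
            assert action in cgs.moves(a, q)                    # gamma_a(M(q)) in d_a(q)
            joint.append(action)
        q = cgs.step(q, tuple(joint))
        prefix.append(q)
    return prefix

If only the agents of a coalition A follow their rules, the same loop would instead branch over all available actions of the agents outside A, yielding the set $out(q_{0},\Gamma_{A})$ rather than a single computation.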

Definition 2.4 (Self-organizing Multi-agent Systems)

A self-organizing multi-agent system (SOMAS) is a tuple $(\mathcal{S},\Gamma)$, where $\mathcal{S}$ is a concurrent game structure and $\Gamma$ is a set of local rules for agents in the system to follow.

We will use the example from [29] to better understand the above definitions.

Example 1

We consider the CGS scenario shown in Fig. 2, where there are two trains, each controlled by an agent, going through a tunnel from opposite sides. The tunnel only has room for one train, and the agents can either wait or go. Starting from state $q_{0}$, if the agents choose to go simultaneously, the trains will crash, which is state $q_{4}$; if one agent goes and the other waits, they can both successfully go through the tunnel without crashing, which is $q_{3}$.

Figure 2: A CGS example.

Local rules $\langle\tau_{1},\gamma_{1}\rangle$ and $\langle\tau_{2},\gamma_{2}\rangle$ are prescribed for the agents to follow: both agents communicate with each other about their urgencies $u_{1}$ and $u_{2}$ in state $q_{0}$; the one who is more urgent can go through the tunnel first and the other one has to wait; after $q_{0}$ the agent who waited can go. We formalize this as follows. In state $q_{0}$, $\tau_{1}(q_{0})=\{a_{1},a_{2}\}$ and $\tau_{2}(q_{0})=\{a_{1},a_{2}\}$,

$\gamma_{1}(u_{1},u_{2})=\begin{cases}go&\text{if $a_{1}$ is more urgent than or as urgent as $a_{2}$;}\\ wait&\text{otherwise.}\end{cases}$
$\gamma_{2}(u_{1},u_{2})=\begin{cases}go&\text{if $a_{1}$ is less urgent than $a_{2}$;}\\ wait&\text{otherwise.}\end{cases}$

In states $q_{1}$ and $q_{2}$, $\tau_{1}(q_{1})=\{a_{1}\}$ and $\tau_{2}(q_{2})=\{a_{2}\}$, and

$\gamma_{1}(u_{1})=go,\quad\gamma_{2}(u_{2})=go.$

Given the above local rules, if $a_{1}$ is more urgent than or as urgent as $a_{2}$ w.r.t. $u_{1}$ and $u_{2}$, the desired state $q_{3}$ is reached along the computation $q_{0},q_{2},q_{3},\ldots$; if $a_{1}$ is less urgent than $a_{2}$, the desired state $q_{3}$ is reached along the computation $q_{0},q_{1},q_{3},\ldots$. As we can see, no legal computation is prescribed by the system designers, because it depends on the urgencies that are provided by the agents themselves. Instead, the agents can find out how to get to the desired state $q_{3}$ by themselves through following their local rules. Certainly, the agents could collaborate to cross the tunnel successfully without communication, but that would require an external controller that is aware of the available actions of each train and the game structure to make a plan for them, which is not allowed in a self-organizing multi-agent system. In a self-organizing multi-agent system, each train does not know the available actions of the other train, but it can follow its local rule and act based on the communication result instead of acting on the instructions of an external controller.
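As a small illustration, the two local rules above can be written down directly in Python; the urgency values used here are made up for the example.

# Illustrative encoding of the local rules of Example 1 (urgency values are hypothetical).
urgency = {"a1": 3, "a2": 5}    # m_i(q0): each train communicates its urgency in q0

def gamma_1(m):
    # a1 goes iff it is more urgent than or as urgent as a2
    return "go" if m["a1"] >= m["a2"] else "wait"

def gamma_2(m):
    # a2 goes iff a1 is strictly less urgent than a2
    return "go" if m["a1"] < m["a2"] else "wait"

m_q0 = {i: urgency[i] for i in ("a1", "a2")}   # tau_1(q0) = tau_2(q0) = {a1, a2}
print(gamma_1(m_q0), gamma_2(m_q0))            # -> wait go, giving the computation q0, q1, q3, ...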

2.2 Full Contribution

Similar to ATL [2], our language ATL-$\Gamma$ is interpreted over a concurrent game structure $\mathcal{S}$ that has the same propositions and agents. It is an extension of classical propositional logic with temporal cooperation modalities and path quantifiers. A formula of the form $\langle A\rangle\psi$ means that a coalition of agents $A$ will bring about the subformula $\psi$ by following their respective local rules in $\Gamma_{A}$, no matter what agents in $\Sigma\backslash A$ do, where $\psi$ is a temporal formula of the form $\bigcirc\varphi$, $\Box\varphi$ or $\varphi_{1}\mathcal{U}\varphi_{2}$ (where $\varphi$, $\varphi_{1}$, $\varphi_{2}$ are again formulas in our language). Formally, the grammar of our language is defined below, where $p\in\Pi$ and $A\subseteq\Sigma$:

$\varphi::=p\mid\lnot\varphi\mid\varphi_{1}\land\varphi_{2}\mid\langle A\rangle\bigcirc\varphi\mid\langle A\rangle\Box\varphi\mid\langle A\rangle\varphi_{1}\mathcal{U}\varphi_{2}$

Given a self-organizing multi-agent system $(\mathcal{S},\Gamma)$, where $\mathcal{S}$ is a concurrent game structure and $\Gamma$ is a set of local rules, and a state $q\in Q$, we define the semantics with respect to the satisfaction relation $\models$ inductively as follows:

  • $\mathcal{S},\Gamma,q\models p$ iff $p\in\pi(q)$;

  • $\mathcal{S},\Gamma,q\models\lnot\varphi$ iff $\mathcal{S},\Gamma,q\not\models\varphi$;

  • $\mathcal{S},\Gamma,q\models\varphi_{1}\land\varphi_{2}$ iff $\mathcal{S},\Gamma,q\models\varphi_{1}$ and $\mathcal{S},\Gamma,q\models\varphi_{2}$;

  • $\mathcal{S},\Gamma,q\models\langle A\rangle\bigcirc\varphi$ iff for all $\lambda\in out(q,\Gamma_{A})$, we have $\mathcal{S},\Gamma,\lambda[1]\models\varphi$;

  • $\mathcal{S},\Gamma,q\models\langle A\rangle\Box\varphi$ iff for all $\lambda\in out(q,\Gamma_{A})$ and all positions $i\geq 0$ it holds that $\mathcal{S},\Gamma,\lambda[i]\models\varphi$;

  • $\mathcal{S},\Gamma,q\models\langle A\rangle\varphi_{1}\mathcal{U}\varphi_{2}$ iff for all $\lambda\in out(q,\Gamma_{A})$ there exists a position $i\geq 0$ such that for all positions $0\leq j\leq i$ it holds that $\mathcal{S},\Gamma,\lambda[j]\models\varphi_{1}$ and $\mathcal{S},\Gamma,\lambda[i]\models\varphi_{2}$.

Dually, we write $\langle A\rangle\Diamond\varphi$ for $\langle A\rangle\top\mathcal{U}\varphi$. Importantly, when we say a coalition of agents ensures a temporal formula by following their respective local rules, it means that the agents in the coalition ensure the temporal formula if they take the actions that their local rules return, regardless of what the agents outside the coalition do. This is interpreted purely from our semantics; agents' dependence relation in terms of communication does not play a role here.
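Because the "next" modality only involves one-step successors, it can be evaluated directly from the clause above. The following sketch (Python, reusing the illustrative CGS and rule dictionaries of the earlier sketches) checks $\mathcal{S},\Gamma,q\models\langle A\rangle\bigcirc\varphi$ by letting the agents in $A$ take their prescribed actions while all other agents range over every available action.

from itertools import product

def holds_next(cgs, internal, tau, gamma, q, A, phi):
    # phi is a predicate on states standing for the subformula to be checked
    choices = []
    for a in cgs.agents:
        if a in A:
            messages = {i: internal[i](q) for i in tau[a](q)}
            choices.append([gamma[a](messages)])        # agents in A follow their local rules
        else:
            choices.append(sorted(cgs.moves(a, q)))     # agents outside A may do anything
    # phi must hold in every successor reachable in this way
    return all(phi(cgs.step(q, joint)) for joint in product(*choices))

The $\Box$ and $\mathcal{U}$ modalities additionally require a fixpoint computation over the reachable states, which is what the model-checking procedure of Section 3 accounts for.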

Proposition 2.1

Given an SOMAS $(\mathcal{S},\Gamma)$ and a coalition $A$, for any coalition $A^{\prime}\supseteq A$, it holds that

$\mathcal{S},\Gamma,q\models\langle A\rangle\psi\Rightarrow\mathcal{S},\Gamma,q\models\langle A^{\prime}\rangle\psi.$
Proof

Because $A\subseteq A^{\prime}$, we have $out(q,\Gamma_{A^{\prime}})\subseteq out(q,\Gamma_{A})$. Thus, if $\mathcal{S},\Gamma,q\models\langle A\rangle\psi$, meaning that $\psi$ holds along all $\lambda\in out(q,\Gamma_{A})$, then $\psi$ also holds along all $\lambda\in out(q,\Gamma_{A^{\prime}})$. Therefore, $\mathcal{S},\Gamma,q\models\langle A^{\prime}\rangle\psi$.

This means that if a coalition ensures a temporal formula, then any coalition containing it also ensures that temporal formula. Notice that $A^{\prime}$ can be the whole agent set $\Sigma$.

Example 2

According to the local rules in the two-train example, the train that is more urgent can go through the tunnel first and the other one has to wait for it. We have that one train by itself cannot bring about the result of passing through the tunnel without crashing by following its local rule, which can be expressed as:

$\mathcal{S},\Gamma,q_{0}\not\models\langle a_{1}\rangle\Diamond\text{passed},$
$\mathcal{S},\Gamma,q_{0}\not\models\langle a_{2}\rangle\Diamond\text{passed}.$

Instead, both trains have to cooperate to bring about the result. Thus, the two agents together can bring about the result of passing through the tunnel without crashing by following their local rules, which can be expressed as:

$\mathcal{S},\Gamma,q_{0}\models\langle a_{1},a_{2}\rangle\Diamond\text{passed}.$

As we mentioned in the introduction, because a self-organizing multi-agent system has autonomous agents and local interactions between them, it is not clear how the local components lead to the global behavior of the system, which is why a self-organizing multi-agent system is usually evaluated through implementation. One possible solution to understanding the complex link between local agent behavior and system-level behavior is to divide the system into components, each of which contributes to the global behavior of the system and is independent of the agents outside the component. The principle is that, when studying the behavior of one component, we do not need to consider influences coming from agents outside the component. The independence between different components in terms of their contributions to the global behavior of the system allows us to understand the complex link. With this idea, we first define a notion of independent components over a self-organizing multi-agent system.

Definition 2.5 (Independent Components)

Given an SOMAS $(\mathcal{S},\Gamma)$, a coalition of agents $A$ is an independent component w.r.t. a state $q$ iff for all $a\in A$ and its abstract local rule $\langle\tau_{a},\gamma_{a}\rangle$ it is the case that $\tau_{a}(q)\subseteq A$; a coalition of agents $A$ is an independent component w.r.t. a set of computations $out$ iff for all $\lambda\in out$ and all $q\in\lambda$, $A$ is an independent component w.r.t. state $q$.

An independent component w.r.t. state $q$ is a coalition of agents $A$ that only gets input information from agents inside coalition $A$ in state $q$. In other words, an independent component might output information to agents in $\Sigma\backslash A$, but never gets input from agents in $\Sigma\backslash A$.
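The structural check behind this definition is simple enough to state as code. The following sketch (Python, using the illustrative tau dictionary of the earlier sketches; runs are finite prefixes standing in for the computations in out) tests whether a coalition is an independent component.

def independent_at_state(tau, A, q):
    # A gets no input from outside A in q: tau_a(q) is a subset of A for every a in A
    return all(set(tau[a](q)) <= set(A) for a in A)

def independent_on_runs(tau, A, runs):
    # runs: a collection of (finite prefixes of) computations in out
    return all(independent_at_state(tau, A, q) for lam in runs for q in lam)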

Proposition 2.2

Given an SOMAS $(\mathcal{S},\Gamma)$ and two coalitions $A$ and $B$ where $A\cap B\not=\emptyset$, if both $A$ and $B$ are independent components, then $A\cap B$ is also an independent component.

Proof

Because $A$ is an independent component and $A\cap B\subseteq A$, agents in $A\cap B$ do not get input from agents in $\Sigma\backslash A$. For the same reason, agents in $A\cap B$ do not get input from agents in $\Sigma\backslash B$. Thus, agents in $A\cap B$ do not get input from agents in $(\Sigma\backslash A)\cup(\Sigma\backslash B)=\Sigma\backslash(A\cap B)$. Therefore, $A\cap B$ is also an independent component.

Example 3

In the two-train example, since in state $q_{0}$ both trains need to communicate with each other as their local rules require, neither $a_{1}$ nor $a_{2}$ is an independent component w.r.t. state $q_{0}$. Because the system consists of only two trains, the two trains together form an independent component w.r.t. state $q_{0}$. Because in states $q_{1}$ and $q_{2}$ each train only gets the urgency from itself to go ahead, $a_{1}$ is an independent component w.r.t. state $q_{1}$ and $a_{2}$ is an independent component w.r.t. state $q_{2}$.

Using the notion of independent components and our language, we then propose the notions of semantic independence, structural independence and full contribution to characterize the independence between agents from different perspectives.

Definition 2.6 (Semantic Independence, Structural Independence and Full Contribution)

Given an SOMAS $(\mathcal{S},\Gamma)$, a coalition of agents $A$ and a state $q$,

  • $A$ is semantically independent with respect to a temporal formula $\psi$ from $q$ iff $\mathcal{S},\Gamma,q\models\langle A\rangle\psi$;

  • $A$ is structurally independent from $q$ iff $A$ is an independent component w.r.t. the set of computations $out(q,\Gamma_{A})$;

  • $A$ has full contribution to $\psi$ in $q$ iff $A$ is the minimal (w.r.t. set-inclusion) coalition that is both semantically independent with respect to $\psi$ and structurally independent from $q$.

The notion of full contribution captures the independence of coalition $A$ from two different perspectives: semantically, coalition $A$ ensures $\psi$ through following its local rules no matter what other agents do; structurally, coalition $A$ does not communicate with other agents when following its local rules, no matter what other agents do. Notice that when we say coalition $A$ has full contribution to $\psi$ in $q$, it is important that coalition $A$ is the minimal coalition that is both semantically independent with respect to $\psi$ and structurally independent from state $q$. This is because there might exist multiple coalitions that are both semantically independent with respect to $\psi$ and structurally independent from state $q$, and the set of all agents $\Sigma$ is obviously one of them. In other words, coalition $A$ has full contribution to $\psi$ because any proper subset of coalition $A$ is either semantically dependent with respect to $\psi$ or structurally dependent. The following example illustrates why we need both semantic independence and structural independence to characterize the full contribution of a coalition of agents.

Example 4

Consider the transition system in Fig. 3 (top). $\langle\alpha,*\rangle$ is interpreted as an action vector where agent $a_{1}$ performs action $\alpha$ and agent $a_{2}$ does whatever it can. Local rules are prescribed for agents $a_{1}$ and $a_{2}$: in state $q_{0}$, $a_{1}$ needs to communicate with $a_{2}$ for some valuable information and is supposed to do $\alpha$ based on the communication result. As we can see from the structure, $a_{1}$ brings about $p$ no matter what $a_{2}$ does, which means that coalition $\{a_{1}\}$ is semantically independent. However, since $a_{1}$ needs to communicate with $a_{2}$ in state $q_{0}$, coalition $\{a_{1}\}$ is not structurally independent.

Consider the transition system in Fig. 3 (bottom). $\langle\alpha,*\backslash\alpha\rangle$ is interpreted as an action vector where agent $a_{1}$ performs action $\alpha$ and agent $a_{2}$ deviates from action $\alpha$; similarly for $\langle*\backslash\alpha,\alpha\rangle$. Local rules are prescribed for agents $a_{1}$ and $a_{2}$: in state $q_{0}$, the agents do not need to communicate with each other and are both supposed to do $\alpha$. As we can see from the structure, $a_{1}$ and $a_{2}$ can bring about $p$ through following their local rules, but neither $a_{1}$ nor $a_{2}$ can achieve that by itself, which means that neither coalition $\{a_{1}\}$ nor $\{a_{2}\}$ is semantically independent. But since they do not need to communicate with each other, each of them is structurally independent.

Figure 3: Comparison between semantic independence (top) and structural independence (bottom).

As for the two-train example, neither train, namely $a_{1}$ or $a_{2}$ as a singleton coalition, has full contribution to the result of passing through the tunnel without crashing, for the following reasons: a single train cannot ensure the result of passing through the tunnel without crashing, which means that a single train is not semantically independent; moreover, both trains follow their local rules to communicate with each other in state $q_{0}$, which means that a single train is not an independent component w.r.t. state $q_{0}$ and thus not structurally independent.

The two trains together have full contribution to the result of passing through the tunnel without crashing: the two agents can bring about the result through following the local rules, the coalition of the two trains is obviously an independent component w.r.t. $out(q_{0},\Gamma_{\{a_{1},a_{2}\}})$, which means that it is structurally independent, and the coalition of the two trains is obviously the minimal coalition that is both semantically independent w.r.t. the result of no crash and structurally independent from state $q_{0}$.

Proposition 2.3

Given an SOMAS $(\mathcal{S},\Gamma)$, a state $q$ and a temporal formula $\psi$, there do not exist two different coalitions $A$ and $B$ such that $A\cap B\not=\emptyset$ and both $A$ and $B$ have full contribution to $\psi$ in $q$.

Proof

Suppose there exist two different coalitions $A$ and $B$ that have full contribution to $\psi$, which means that both $A$ and $B$ are minimal (w.r.t. set-inclusion) coalitions that are both semantically independent with respect to $\psi$ and structurally independent from $q$. Because $\mathcal{S},\Gamma,q\models\langle A\rangle\psi$ and $\mathcal{S},\Gamma,q\models\langle B\rangle\psi$, we have $\mathcal{S},\Gamma,q\models\langle A\cap B\rangle\psi$. Because $A\cap B\not=\emptyset$ and both $A$ and $B$ are independent components, $A\cap B$ is also an independent component. Therefore, $A\cap B$ is both semantically independent with respect to $\psi$ and structurally independent from $q$. If $A\cap B\subset A$, then $A$ is not the minimal (w.r.t. set-inclusion) coalition that is both semantically independent with respect to $\psi$ and structurally independent from $q$. If $A\cap B=A$, which means that $A\subset B$, then $A$, not $B$, is the minimal (w.r.t. set-inclusion) coalition that is both semantically independent with respect to $\psi$ and structurally independent from $q$. Contradiction!

The above proposition is consistent with the intuition that when coalition $A$ semantically ensures $\psi$ through following its local rules but is not structurally independent from $q$, meaning that agents in $A$ need to communicate with agents outside $A$ to ensure $\psi$, we enlarge the coalition by including the agents with which agents from $A$ communicate. The resulting coalition is the minimal (w.r.t. set-inclusion) coalition that is both semantically independent with respect to $\psi$ and structurally independent from $q$, and it is unique.

3 Model Checking

Our logic-based framework provides another approach to verifying a self-organizing multi-agent system. If we only care about whether the system will bring about a property, we only need to check whether a formula of ATL-$\Gamma$ with the whole set of agents is satisfied in a certain state; if we want to know whether a coalition of agents will bring about a property independently, we need to check whether that coalition has full contribution to a temporal formula in a certain state. In this section, we investigate how difficult it is to answer these two model-checking problems. We first consider the model-checking problem for ATL-$\Gamma$. In order to answer it, we extend our concurrent game structure $\mathcal{S}$ by adding new propositions to states that indicate, for each local rule $\langle\tau_{a},\gamma_{a}\rangle\in\Gamma$, whether or not it is followed by the corresponding agent. For this purpose, we define the extended game structure $\mathcal{S}^{F}=(k,Q^{F},\pi^{F},\Pi^{F},ACT,d^{F},\delta^{F})$ as follows:

  • $Q^{F}=\{\langle\bot,q\rangle\mid q\in Q\}\cup\{\langle q^{\prime},q\rangle\mid q^{\prime},q\in Q\text{ and }q\text{ is a successor of }q^{\prime}\text{ in }\mathcal{S}\}$. In other words, a state of the form $\langle\bot,q\rangle$ of $\mathcal{S}^{F}$ corresponds to the game structure $\mathcal{S}$ being in state $q$ at the beginning of a computation, and a state of the form $\langle q^{\prime},q\rangle$ corresponds to $\mathcal{S}$ being in state $q$ during a computation whose previous state was $q^{\prime}$.

  • For each agent $a\in\Sigma$, there is a new proposition $followed_{a}$; that is, $\Pi^{F}=\Pi\cup\{followed_{a}\mid a\in\Sigma\}$.

  • For each state of the form $\langle\bot,q\rangle\in Q^{F}$, we have $\pi^{F}(\langle\bot,q\rangle)=\pi(q)$; for each state $\langle q^{\prime},q\rangle\in Q^{F}$, we have
    $\pi^{F}(\langle q^{\prime},q\rangle)=\pi(q)\cup\{followed_{a}\mid\text{there is a move vector }\langle\alpha_{1},\ldots,\alpha_{k}\rangle\in D(q^{\prime})\text{ such that }\delta(q^{\prime},\langle\alpha_{1},\ldots,\alpha_{k}\rangle)=q\text{ and }\alpha_{a}=\gamma_{a}(M(q^{\prime}))\}$.

  • For each agent $a\in\Sigma$ and each state $\langle\cdot,q\rangle\in Q^{F}$, we have $d^{F}_{a}(\langle\cdot,q\rangle)=d_{a}(q)$;

  • For each state $\langle\cdot,q\rangle\in Q^{F}$ and each move vector $\langle\alpha_{1},\ldots,\alpha_{k}\rangle\in D(q)$, we have $\delta^{F}(\langle\cdot,q\rangle,\langle\alpha_{1},\ldots,\alpha_{k}\rangle)=\delta(q,\langle\alpha_{1},\ldots,\alpha_{k}\rangle)$.
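As an illustration of this construction, the following sketch (Python, reusing the illustrative CGS and rule dictionaries from Section 2's sketches) computes the extended state space and labeling of $\mathcal{S}^{F}$; it is a naive enumeration meant only to make the definition concrete.

from itertools import product

def extend_with_followed(cgs, internal, tau, gamma):
    # States of S^F are pairs (prev, q); prev = None encodes the 'bot' component.
    ext_states, ext_label = [], {}
    for q in cgs.states:
        ext_states.append((None, q))
        ext_label[(None, q)] = set(cgs.props[q])
    for qp in cgs.states:
        # the action each agent's local rule prescribes in qp
        prescribed = {a: gamma[a]({i: internal[i](qp) for i in tau[a](qp)})
                      for a in cgs.agents}
        for joint in product(*(sorted(cgs.moves(a, qp)) for a in cgs.agents)):
            q = cgs.step(qp, joint)
            s = (qp, q)
            if s not in ext_label:
                ext_states.append(s)
                ext_label[s] = set(cgs.props[q])
            for a, act in zip(cgs.agents, joint):
                if act == prescribed[a]:
                    # some move from qp to q exists in which agent a follows its rule
                    ext_label[s].add(f"followed_{a}")
    return ext_states, ext_label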

That is how we transform $\mathcal{S}$ into $\mathcal{S}^{F}$; every computation in $\mathcal{S}$ corresponds to a computation in $\mathcal{S}^{F}$. The new propositions $\{followed_{a}\mid a\in\Sigma\}$ allow us to identify the computations that follow the local rules. Using $\mathcal{S}^{F}$, we encode agents' following of local rules as propositions in the states. Therefore, evaluating formulas of the form $\langle A\rangle\psi$ over states of $\mathcal{S}$ can be reduced to evaluating standard ATL formulas over states of $\mathcal{S}^{F}$. In classic ATL [2], a formula of the form $\llangle A\rrangle\psi$ means that there exists a set $F_{A}$ of strategies, one for each player in $A$, to bring about $\psi$, which can be interpreted as the agents' capacity to bring about a property and is different from the semantics of $\langle A\rangle\psi$ in this paper. Namely, given an SOMAS $(\mathcal{S},\Gamma)$, a state $q$ and a set of agents $A$, a formula $\mathcal{S},\Gamma,q\models\langle A\rangle\psi$ holds iff the state $\langle\bot,q\rangle$ of the extended game structure $\mathcal{S}^{F}$ satisfies the following ATL formula:

$\llangle A\rrangle(\bigwedge_{a\in A}\Box followed_{a}\land\psi).$

As we can see, even though a coalition of agents $A$ may have the capacity to bring about $\psi$, the formula $\mathcal{S},\Gamma,q\models\langle A\rangle\psi$ does not necessarily hold, because coalition $A$ has to follow its local rules to achieve that. To see why, consider the two-train example.

Example 5

Suppose we change the local rules to be the following:

$\gamma_{1}(u_{1},u_{2})=\begin{cases}go&\text{if $a_{1}$ is more urgent than $a_{2}$;}\\ wait&\text{otherwise.}\end{cases}$
$\gamma_{2}(u_{1},u_{2})=\begin{cases}go&\text{if $a_{1}$ is less urgent than $a_{2}$;}\\ wait&\text{otherwise.}\end{cases}$

When both trains have the same urgency, following their local rules makes both of them wait, which results in deadlock instead of passing through the tunnel. This can be expressed as

$\mathcal{S},\Gamma,q_{0}\not\models\langle a_{1},a_{2}\rangle\Diamond\text{passed},$
$\mathcal{S},\Gamma,q_{0}\models\langle a_{1},a_{2}\rangle\Diamond\text{deadlock}.$

However, it is clear that both trains have the capacity to cooperate and pass through the tunnel without crashing, which can be expressed as

$\mathcal{S},q_{0}\models\llangle a_{1},a_{2}\rrangle\Diamond\text{passed}.$

Although checking an ATL-$\Gamma$ formula can be reduced to checking a logically equivalent ATL formula, it can still be done efficiently, so the corresponding complexity bounds are much lower than those for general ATL model checking.

Proposition 3.1

The model-checking problem for ATL-$\Gamma$ can be solved in time $\mathcal{O}(m^{2}\cdot n\cdot l)$ for a self-organizing multi-agent system with $m$ states, a coalition of agents with $|A|=n$ and a formula of length $l$.

Proof

We adopt the proof strategy from [2]. We first construct the extended game structure $\mathcal{S}^{F}$, in which the obedience/violation of local rules is encoded. To verify whether a temporal formula $\psi$ can be enforced by a coalition of agents $A$ through following its local rules in a self-organizing multi-agent system at state $q$, we need to check the formula $\mathcal{S},\Gamma,q\models\langle A\rangle\psi$, which can be reduced to evaluating an ATL formula over $\mathcal{S}^{F}$ and $\langle\bot,q\rangle$:

$\mathcal{S}^{F},\langle\bot,q\rangle\models\llangle A\rrangle(\bigwedge_{a\in A}\Box followed_{a}\land\psi).$

We then construct a 2-player turn-based synchronous game $\mathcal{S}^{F}_{A}$, where player 1 controls all the actions of coalition $A$ (called A-moves) leading to an auxiliary state, after which player 2 controls all the actions of coalition $\Sigma\backslash A$ (called B-moves) leading to the next state of the original transition. Such a game can be interpreted as an AND-OR graph, for which certain invariance and reachability problems can be solved in linear time. Because following the local rules is a necessary condition for coalition $A$ to win the game, we can remove every outgoing A-move of any state of $\mathcal{S}^{F}$ where $\lnot\bigwedge_{a\in A}followed_{a}$ holds, together with its outgoing transitions, which can be done in time polynomial in the number of states and the number of agents in $A$. If the original game structure $\mathcal{S}$ has $m$ states, then the turn-based synchronous structure $\mathcal{S}^{F}_{A}$ has $\mathcal{O}(m^{2})$ states. We then perform standard model checking for $\llangle A\rrangle\psi$ in $\mathcal{S}^{F}_{A}$, which can be done in time polynomial in the length of $\psi$. Therefore, checking the formula $\mathcal{S},\Gamma,q\models\langle A\rangle\psi$ has complexity $\mathcal{O}(m^{2}\cdot n\cdot l)$.

We then measure the complexity of verifying a self-organizing multi-agent system, namely checking whether a coalition of agents has full contribution to a temporal formula in a certain state. A coalition has full contribution to a temporal formula in a state if and only if it is the minimal (w.r.t. set-inclusion) coalition that is both semantically independent with respect to that temporal formula and structurally independent from that state. To verify structural independence, we can encode agents' communication in the structure in the same way as we did for the obedience/violation of local rules. Namely, given a coalition $A$, we define the extended game structure $\mathcal{S}^{E}=(k,Q,\pi^{E},\Pi^{E},ACT,d,\delta)$ as follows:

  • For each agent $a\in A$, there is a new proposition $InA_{a}$; that is, $\Pi^{E}=\Pi\cup\{InA_{a}\mid a\in A\}$.

  • $\pi^{E}(q)=\pi(q)\cup\{InA_{a}\mid\tau_{a}(q)\subseteq A\}$.

The new propositions $\{InA_{a}\mid a\in A\}$ allow us to identify the states where agents in $A$ only get input from agents inside the coalition. We then construct the extended game structure $\mathcal{S}^{EF}$, in which the obedience/violation of local rules is also encoded. Therefore, evaluating whether a coalition of agents $A$ is an independent component with respect to $out(q,\Gamma_{A})$ can again be reduced to evaluating standard ATL formulas over states of $\mathcal{S}^{EF}$.

Proposition 3.2

The model-checking problem for structural independence can be solved in time $\mathcal{O}(m^{2}\cdot n^{2})$ for a self-organizing multi-agent system with $m$ states and a coalition of agents with $|A|=n$.

Proof

Given an SOMAS $(\mathcal{S},\Gamma)$, a state $q$ and a set of agents $A$, $A$ is an independent component with respect to $out(q,\Gamma_{A})$ iff the state $\langle\bot,q\rangle$ of the extended game structure $\mathcal{S}^{EF}$ satisfies the following ATL formula:

$\mathcal{S}^{EF},\langle\bot,q\rangle\models\llangle A\rrangle\bigwedge_{a\in A}\Box(followed_{a}\land InA_{a})$

As we have to check $InA_{a}$ for each agent in $A$, checking the above formula has complexity $\mathcal{O}(m^{2}\cdot n^{2})$.

With the results of Propositions 3.1 and 3.2, we can measure the complexity of verifying whether a coalition of agents has full contribution to a temporal formula in a certain state.

Proposition 3.3

The model-checking problem for verifying the full contribution of a coalition of agents can be solved in time $\mathcal{O}(m^{2}\cdot n^{2}\cdot l\cdot 2^{n})$ for a self-organizing multi-agent system with $m$ states, a coalition of agents with $|A|=n$ and a formula of length $l$.

Proof

Given an SOMAS $(\mathcal{S},\Gamma)$, a state $q$, a set of agents $A$ with $|A|=n$ and a temporal formula $\psi$ of length $l$, we need to follow Definition 2.6 to verify whether coalition $A$ has full contribution to $\psi$ in $q$. Since the model-checking problem for ATL-$\Gamma$ can be solved in time $\mathcal{O}(m^{2}\cdot n\cdot l)$ and the model-checking problem for structural independence can be solved in time $\mathcal{O}(m^{2}\cdot n^{2})$, checking semantic and structural independence can be done in time $\mathcal{O}(m^{2}\cdot n^{2}\cdot l)$. Moreover, we need to ensure that $A$ is the minimal coalition that is both semantically independent and structurally independent. Hence, we have to check every subset of $A$ for its semantic independence and structural independence. Therefore, checking the full contribution of a coalition of agents in a state has complexity $\mathcal{O}(m^{2}\cdot n^{2}\cdot l\cdot 2^{n})$.

4 Decomposing a Self-organizing Multi-agent System: a Layered Approach

In Section 2 we explored the dependence relation between agents from both the structural and the semantic perspective. However, we still do not know how agents guided by their respective local rules bring about the global behavior of the system. If we check the formula $\mathcal{S},\Gamma,q\models\langle\Sigma\rangle\psi$ and it returns true, this only shows that the grand coalition of all agents brings about the global behavior $\psi$, which does not explain anything. Of course, we can enumerate all possible coalitions of agents to check their contributions to the global behavior of the system, but this might be computationally expensive if the system has a large number of agents. Inspired by the decomposition approach in argumentation [21], we propose to decompose the system into different coalitions based on their dependence relation such that there is a partial order among the coalitions. Since a directed graph with nodes and arrows can better represent the dependence relation, we define a notion of a dependence graph w.r.t. a set of computations. Formally,

Definition 4.1 (Dependence Graph)

Given an SOMAS $(\mathcal{S},\Gamma)$, a dependence graph w.r.t. a set of computations $out$ is a directed graph $G(V,E)$ where $V=\Sigma$ and $E=\{(a,b)\mid\text{there exist }\lambda\in out\text{ and }q\in\lambda\text{ such that }a\in\tau_{b}(q)\}$. Typically, given a state $q$ and a coalition of agents $A$, when $G$ is a dependence graph w.r.t. the set of computations $out(q,\Gamma_{A})$, we denote it as $G(q,\Gamma_{A})$.

In words, a dependence graph w.r.t. a set of computations consists of a set of nodes $V$, which are the agents in $\mathcal{S}$, and a set of arrows $E$, each of which indicates that one agent gets input from another agent in a state along a computation in $out$. This graph allows us to analyze the dependence relation between agents and between coalitions of agents with respect to a set of computations.
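The construction of a dependence graph is straightforward; the following sketch (Python, with the illustrative tau dictionary and finite run prefixes used in the earlier sketches) builds one. It omits self-loops, which carry no dependence on other agents and would otherwise interfere with the layering of Definition 4.2; treating them this way is our own simplification.

def dependence_graph(agents, tau, runs):
    # An edge (i, b) means agent b gets input from agent i in some state along some run.
    V = set(agents)
    E = set()
    for lam in runs:
        for q in lam:
            for b in agents:
                for i in tau[b](q):
                    if i != b:          # drop self-loops (an agent reading its own internal)
                        E.add((i, b))
    return V, E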

Proposition 4.1

Given an SOMAS $(\mathcal{S},\Gamma)$, a coalition $A$ is an independent component w.r.t. a set of computations $out$ iff $A$ does not get input from agents outside of $A$ in the corresponding dependence graph $G$.

Proof

Coalition $A$ is an independent component w.r.t. a set of computations $out$ iff for all $\lambda\in out$ and $q\in\lambda$, $A$ is an independent component w.r.t. state $q$, iff for all $\lambda\in out$, $q\in\lambda$ and $a\in A$ it is the case that $\tau_{a}(q)\subseteq A$, which means that for all $\lambda\in out$, $q\in\lambda$ and $a\in A$, $a$ does not get input from agents in $\Sigma\backslash A$, iff no $a\in A$ gets input from agents outside of $A$ in the corresponding dependence graph $G$.

From this, we can check whether a coalition of agents is an independent component w.r.t. $out$ simply by inspecting its corresponding dependence graph.

Proposition 4.2

Given an SOMAS $(\mathcal{S},\Gamma)$ and two sets of computations $out$ and $out^{\prime}$ with $out^{\prime}\subseteq out$, if a coalition of agents $A$ is an independent component w.r.t. $out$, then $A$ is also an independent component w.r.t. $out^{\prime}$.

Proof

Let $G$ be the dependence graph of $out$ and $G^{\prime}$ be the dependence graph of $out^{\prime}$. Because $out^{\prime}\subseteq out$, $G^{\prime}$ is a spanning subgraph of $G$. Therefore, if $A$ does not get input from other agents in $G$, then $A$ also does not get input from other agents in $G^{\prime}$, which means that if $A$ is an independent component w.r.t. $out$, then $A$ is also an independent component w.r.t. $out^{\prime}$.

Typically, given an SOMAS $(\mathcal{S},\Gamma)$, if a coalition of agents $A$ is an independent component w.r.t. $out(q,\Gamma_{A})$, then $A$ is also an independent component w.r.t. $\lambda^{*}(q)$; conversely, if $A$ is not an independent component w.r.t. $\lambda^{*}(q)$, then $A$ is also not an independent component w.r.t. $out(q,\Gamma_{A})$. In this section, we will use a more complicated example to illustrate our definitions.

Example 6

A multi-agent system consists of a number of agents, each of which has a specific capability that is not common knowledge among the agents. A complicated task is delegated to the agents, and the agents have to cooperate to finish the task in a self-organizing way. The local rule for each agent is as follows: each agent works on one part of the task based on its capability; once its part is finished, it passes the rest of the task via wireless signals to other agents who can continue, until the whole task is finished. We will use our framework to study the contributions of the agents and how they cooperate to finish the whole task. Given the agents' capabilities, the whole task is finished along the computation $\lambda^{*}(q)$. The dependence graph $G$ w.r.t. $\lambda^{*}(q)$ is shown in Figure 4 (left); it consists of 5 agents $\{a,b,c,d,e\}$ and the communication between them regarding the task information.

Figure 4: A dependence graph (left) and a layered decomposition (right).

A dependence graph clearly illustrates not only the dependence relation between individual agents but also the dependence relation between coalitions of agents. When there exist cycles in a dependence graph, we can contract them into single nodes based on the theory of strongly connected components and condensation graphs from graph theory. Because it is not the main concern of this paper, we omit the process of transforming a dependence graph into a condensation graph and assume that the dependence graph under consideration is a directed acyclic graph. As mentioned before, we want to decompose the system such that there is a partial order of dependence among coalitions of agents. How to decompose the system based on the dependence graph is the problem we now need to solve. Similar to the solution in [21], we use a layered decomposition approach in this paper. We first propose the notion of a layer. Formally,

Definition 4.2 (Layer of an Agent)

Given a dependence graph $G(V,E)$ w.r.t. a set of computations $out$, the layer of an agent $a\in V$ is given by a function $\rho:V\to\mathbb{N}$, where $\rho(a)$ is defined as:

  • if $a$ has no parent in $G$, then $\rho(a)=0$;

  • otherwise, $\rho(a)=\max\{\rho(p)+1:p\text{ is a parent of }a\}$.

We use $h=\max\{\rho(a):a\in V\}$ to denote the highest layer.

The above definition determines the layer of an agent in a given dependence graph in two cases. If $a$ has no parent in the dependence graph $G$, then $a$ is located in the lowest layer, and its behavior is independent of the behavior of any other agent. If $a$ has parents, then $a$ is located in the layer just above all of its parents. A self-organizing multi-agent system can be decomposed into a number of layers. Formally,

Definition 4.3 (Decomposition of an SOMAS)

Given an SOMAS $(\mathcal{S},\Gamma)$, a decomposition of $(\mathcal{S},\Gamma)$ w.r.t. a set of computations $out$, denoted as $\operatorname{decomp}(\mathcal{S},\Gamma,out)$, is a tuple

$\operatorname{decomp}(\mathcal{S},\Gamma,out)=(L_{0},L_{1},\cdots,L_{h}),$

where $L_{i}=\{a\in\Sigma\mid\rho(a)=i\}$ $(0\leq i\leq h)$.
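Computing the layers is a standard longest-path labeling of a DAG. The following sketch (Python) computes $\rho$ and the decomposition; the edge set shown for Example 7's graph is reconstructed from the independent components listed there and from Figure 4, so it should be read as an assumption.

def layers(V, E):
    # rho(a) = 0 if a has no parent, otherwise 1 + the maximum layer among its parents.
    # Assumes the dependence graph is acyclic (cycles are condensed first, as discussed above).
    parents = {v: {a for (a, b) in E if b == v} for v in V}
    rho = {}

    def layer_of(v):
        if v not in rho:
            rho[v] = 0 if not parents[v] else 1 + max(layer_of(p) for p in parents[v])
        return rho[v]

    for v in V:
        layer_of(v)
    return rho

def decompose(V, E):
    # Group agents into (L_0, ..., L_h) by their layer.
    rho = layers(V, E)
    return [sorted(v for v in V if rho[v] == i) for i in range(max(rho.values()) + 1)]

# A plausible edge set for the graph of Example 7 (hypothetical, consistent with Figure 4):
V = {"a", "b", "c", "d", "e"}
E = {("a", "b"), ("e", "b"), ("e", "d"), ("b", "c"), ("d", "c")}
print(decompose(V, E))   # -> [['a', 'e'], ['b', 'd'], ['c']]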

Using this decomposition approach, agents in the same layer are independent of each other, and agents in a given layer depend only on agents located in lower layers. This is expressed by the following proposition:

Proposition 4.3

Given an SOMAS $(\mathcal{S},\Gamma)$ and a decomposition $\operatorname{decomp}(\mathcal{S},\Gamma,out)$, for any $a\in L_{i}$, $b\in L_{j}$ $(0\leq i,j\leq h)$:

  • if $i=j$, then $a$ and $b$ do not get input from each other;

  • if $b$ gets input from $a$, then $i<j$.

Proof

If $i=j$, agents $a$ and $b$ are in the same layer. For $a$ located in $L_{i}$: if $i=0$, it has no parent, so it does not get input from $b$; if $i>0$, the parents of $a$ are located below $L_{i}$, so $b$, which is in $L_{i}$, is not a parent of $a$, meaning that $a$ does not get input from $b$. Similarly, $b$ does not get input from $a$. If $b$ gets input from $a$, then there is an arrow from $a$ to $b$. According to our decomposition approach, $a$ is located below $b$. Hence, $i<j$.

Thus, we organize the system in a way where there is a partial order of dependence among agents located in different layers. This layered structure helps us efficiently find, from the dependence graph, coalitions of agents that do not get input from other agents. Starting from $L_{0}$, any coalition located in $L_{0}$ does not get input from other agents. We then move to $L_{1}$, where each agent combined with its parents located in lower layers does not get input from other agents. After that, we move to higher layers until we reach $L_{h}$. We have the following theorem to check whether a coalition of agents has full contribution to a temporal formula in a state.

Theorem 4.1

Given an SOMAS $(\mathcal{S},\Gamma)$, a coalition of agents $A$ fully contributes to a temporal formula $\psi$ in a state $q$ iff $A$ satisfies the following conditions:

  • $A$ does not get input from agents in $\Sigma\backslash A$ in the dependence graph $G(q,\Gamma_{A})$;

  • $\mathcal{S},\Gamma,q\models\langle A\rangle\psi$;

  • for any $A^{\prime}\subset A$, if $A^{\prime}$ does not get input from agents in $\Sigma\backslash A^{\prime}$ in $G(q,\Gamma_{A^{\prime}})$, then $\mathcal{S},\Gamma,q\not\models\langle A^{\prime}\rangle\psi$.

Proof

For the first condition, coalition AA does not get input from agents in Σ\A\Sigma\backslash A in the dependence graph G(q,ΓA)G(q,\Gamma_{A}) if and only if coalition AA is an independent component w.r.t. out(q,ΓA)out(q,\Gamma_{A}), which means the coalition AA is structurally independent. The second condition ensures that AA is semantically independent w.r.t. ψ\psi. For the third condition, for any subsets of AA AAA^{\prime}\subset A, if it gets input from agents in Σ\A\Sigma\backslash A^{\prime} in G(q,ΓA)G(q,\Gamma_{A^{\prime}}), it is not structurally independent; if it does not get input from agents in Σ\A\Sigma\backslash A^{\prime} in G(q,ΓA)G(q,\Gamma_{A^{\prime}}) and 𝒮,Γ,q⊧̸Aψ\mathcal{S},\Gamma,q\not\models\langle A^{\prime}\rangle\psi, it is structurally independent but not semantically independent. Thus, for any AAA^{\prime}\subset A, it is either not structurally independent or not semantically independent, which means that AA is the minimal (w.r.t. set-inclusion) coalition that is both semantically independent with respect to ψ\psi and structurally independent from qq. Therefore, AA fully contributes to a temporal formula ψ\psi in a state qq.

It indicates how to verify whether a coalition of agents AA has full contribution to ψ\psi in state qq: structurally, we need to check whether AA does not get input from agents outside AA in the dependence graph of out(q,ΓA)out(q,\Gamma_{A}); semantically, we need to check whether AA ensures ψ\psi by following local rules; for any of its proper subsets AA^{\prime}, if AA^{\prime} has been confirmed to be structurally independent w.r.t. out(q,ΓA)out(q,\Gamma_{A^{\prime}}), then we need to check whether it is not semantically independent w.r.t. ψ\psi. In other words, for any proper subsets of AA, we need to ensure it is either not structurally independent or not semantically independent w.r.t. ψ\psi from state qq.

Given a finite set of temporal formulas $\mathit{tformulas}$ that we would like to verify, we need a procedure to examine the contribution of agents to the global system behavior efficiently rather than checking arbitrary coalitions. The results from Proposition 4.3 and Theorem 4.1 can be used to design such a procedure. According to Proposition 4.2, if a coalition of agents $A$ is not an independent component w.r.t. $\lambda^*(q)$, then it is also not an independent component w.r.t. $out(q,\Gamma_A)$. Thus, we investigate the dependence graph $G$ of $\lambda^*(q)$ so that we can rule out the coalitions that are not independent components in $G$. Given a computation $\lambda^*(q)$ and its corresponding dependence graph $G$, we first find all the coalitions of agents that do not get input from other agents according to the result of Proposition 4.3, putting them in the order of set inclusion. From the first coalition to the last coalition in this order, we follow Theorem 4.1 to check whether it fully contributes to any temporal property in $\mathit{tformulas}$. In this step, after we confirm that a coalition of agents $A$ is structurally and semantically independent w.r.t. a temporal formula $\psi$, we check whether any $A'\subset A$ that does not get input from other agents in $G(q,\Gamma_{A'})$ also brings about $\psi$. If not, then $A$ is the minimal coalition (w.r.t. set inclusion) that is both semantically independent with respect to $\psi$ and structurally independent from $q$. We provide the following pseudocode to illustrate the process.

Algorithm 1 Finding Agents' Full Contributions in an SOMAS
function FConSOMAS($\mathcal{S},\Gamma,q,\mathit{tformulas}$)
     hashmap = FindIndependentComponent($\lambda^*(q)$);
     for each $A$ in hashmap do
          if $A$ is not an independent component w.r.t. $out(q,\Gamma_A)$ then
               hashmap.remove($A$);
               continue;
          end if
          for each $\psi$ in $\mathit{tformulas}$ do
               if $\mathcal{S},\Gamma,q\models\langle A\rangle\psi$ then
                    minimal = true;
                    for each $A'\subset A$ do
                         if $A'$ does not get input from $\Sigma\backslash A'$ in $G(q,\Gamma_{A'})$ and $\mathcal{S},\Gamma,q\models\langle A'\rangle\psi$ then
                              minimal = false;
                         end if
                    end for
                    if minimal then
                         hashmap.put($A$, $\psi$);
                    end if
               end if
          end for
     end for
     return hashmap;
end function

When the procedure terminates, we collect the information about agents' full contributions in a set, which allows us to understand how different coalitions of agents contribute to the global system behavior.

Definition 4.4 (Agents’ Independence in Terms of Full Contributions)

Given an SOMAS $(\mathcal{S},\Gamma)$ and a state $q$, agents' independence in terms of their full contributions in $(\mathcal{S},\Gamma)$ is a set $F(q)$ containing the information of agents' full contributions, namely $F(q)=\{(A,\psi_A)\mid$ coalition $A$ fully contributes to $\psi_A$ in state $q\}$.
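The following minimal Python sketch collects $F(q)$ from a set of candidate coalitions, again assuming the hypothetical oracles structurally_independent and brings_about described above; candidates stands for the coalitions found in the dependence graph of $\lambda^*(q)$.

from itertools import combinations

def collect_full_contributions(candidates, tformulas, q, structurally_independent, brings_about):
    """Collect F(q) = {(A, psi) : coalition A fully contributes to psi in state q}."""
    F = set()
    for A in map(frozenset, candidates):
        if not structurally_independent(A, q):
            continue                                  # not structurally independent: skip A
        for psi in tformulas:
            if not brings_about(A, psi, q):
                continue                              # not semantically independent w.r.t. psi
            minimal = all(not (structurally_independent(B, q) and brings_about(B, psi, q))
                          for k in range(1, len(A))
                          for B in map(frozenset, combinations(A, k)))
            if minimal:
                F.add((A, psi))                       # A fully contributes to psi in q
    return F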

Example 7

A decomposition of the dependence graph $G$ in Fig. 4 (right) is as follows. According to Definition 4.2, $\rho(a)=0$, $\rho(b)=1$, $\rho(c)=2$, $\rho(d)=1$ and $\rho(e)=0$. Therefore, the system is decomposed into three layers: $\operatorname{decomp}(\mathcal{S},\Gamma)=(L_0,L_1,L_2)$, where $L_0=\{a,e\}$, $L_1=\{b,d\}$ and $L_2=\{c\}$. Based on the decomposition, from $L_0$ to $L_2$, we can find multiple independent components, namely $\{a\}$, $\{e\}$, $\{a,e\}$, $\{a,b,e\}$, $\{d,e\}$, $\{a,d,e\}$, $\{a,b,d,e\}$ and $\Sigma$. We then examine whether they fully contribute to any temporal formula one by one, which saves a great deal of work compared to checking the $2^5$ possible coalitions without using the dependence graph. Starting from $\{a\}$, we check whether agent $a$ also does not get input from other agents in the dependence graph $G(q,\Gamma_{\{a\}})$, and whether it ensures $\psi_a$ by following its local rule. If both checks return yes, then $a$ fully contributes to $\psi_a$. We perform the same examination for coalition $\{e\}$. For coalition $\{a,e\}$, after confirming that it is both semantically independent with respect to $\psi_{ae}$ and structurally independent from $q$, we still need to check that neither coalition $\{a\}$ nor $\{e\}$ brings about $\psi_{ae}$, to ensure the minimality requirement. The verification is finished when we are done with $\Sigma$. Suppose we have the following result: $F(q)=\{(\{a\},\psi_a),(\{d,e\},\psi_{de}),(\{a,b,e\},\psi_{abe}),(\Sigma,\psi)\}$, each element of which indicates that a coalition fully contributes to a part of the task. Therefore, $F(q)$ contains the information on how different coalitions of agents accomplish the whole task.
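To illustrate the decomposition and the enumeration of independent components, here is a minimal Python sketch. The edges of the dependence graph in Fig. 4 (right) and the recursive reading of $\rho$ are assumptions reconstructed from the values reported above, and the variable names are hypothetical.

from itertools import combinations

# Assumed edges of the dependence graph: parents[x] lists the agents x gets input from.
parents = {"a": [], "e": [], "b": ["a", "e"], "d": ["e"], "c": ["b", "d"]}
agents = sorted(parents)

def rho(x):
    # Assumed layer index: 0 if x has no parents, otherwise 1 + the maximal layer of its parents.
    return 0 if not parents[x] else 1 + max(rho(p) for p in parents[x])

layers = {}
for x in agents:
    layers.setdefault(rho(x), set()).add(x)
print(layers)  # {0: {'a', 'e'}, 1: {'b', 'd'}, 2: {'c'}}

def closed(coalition):
    # A coalition gets no input from outside iff it contains the parents of all its members.
    return all(p in coalition for x in coalition for p in parents[x])

# The coalitions closed under parents are exactly the eight independent components listed above.
components = [set(c) for k in range(1, len(agents) + 1)
              for c in combinations(agents, k) if closed(set(c))]
print(components)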

5 Modeling Constraint Satisfaction Problems

In multi-agent systems, it is common that agents have personal goals and constraints. From the perspective of the system, we would like to find a solution that balances the satisfaction of the goals and constraints among agents. In mathematics, such problems are called constraint satisfaction problems. Constraint satisfaction problems are NP-complete in general, which means that it becomes more and more computationally expensive to solve them in a centralized way as the size of the problem grows. Realizing that, researchers have started to look at agents themselves and the cooperation among them: agents can negotiate so as to reach a global equilibrium. Algorithms based on self-organization have been developed to solve various constraint satisfaction problems, such as task/resource allocation [22] and relation adaptation [31]. Formally, a constraint satisfaction problem is defined as a triple $(X,D,C)$, where

  • $X=\{x_1,\ldots,x_n\}$ is a set of variables;

  • $D=\{D_1,\ldots,D_n\}$ is a set of domains;

  • $C=\{C_1,\ldots,C_m\}$ is a set of constraints.

Each variable $x_i$ can take on the values in its nonempty domain $D_i$. Every constraint $C_j\in C$ is a pair $(t_j,R_j)$, where $t_j\subset X$ is a subset of variables and $R_j$ is a relation on $t_j$. An evaluation of the variables is a function $v$ from a subset of variables to a particular set of values in the corresponding subset of domains. An evaluation $v$ is said to satisfy a constraint $(t_j,R_j)$ if the values assigned to the variables in $t_j$ satisfy the relation $R_j$. A solution to the constraint satisfaction problem is an evaluation that includes all the variables and does not violate any of the constraints.
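As a concrete illustration of the triple $(X,D,C)$ and of the evaluation check, here is a minimal Python sketch; the variables, domains and the inequality constraints are invented for illustration and are not part of the paper's running example.

# A tiny CSP (X, D, C): three variables, finite domains, and binary "not equal" constraints.
X = ["x1", "x2", "x3"]
D = {"x1": {1, 2}, "x2": {1, 2, 3}, "x3": {2, 3}}
# Each constraint (t_j, R_j): t_j is a subset of variables, R_j the relation of allowed value tuples.
C = [(("x1", "x2"), {(a, b) for a in D["x1"] for b in D["x2"] if a != b}),
     (("x2", "x3"), {(a, b) for a in D["x2"] for b in D["x3"] if a != b})]

def satisfies(v, constraint):
    # v satisfies (t_j, R_j) if the values it assigns to the variables in t_j are in the relation R_j.
    t, R = constraint
    return tuple(v[x] for x in t) in R

def is_solution(v):
    # A solution is an evaluation of all the variables that violates none of the constraints.
    return set(v) == set(X) and all(v[x] in D[x] for x in X) and all(satisfies(v, c) for c in C)

print(is_solution({"x1": 1, "x2": 2, "x3": 3}))  # True
print(is_solution({"x1": 2, "x2": 2, "x3": 3}))  # False: x1 == x2 violates the first constraint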

Using our framework of self-organizing multi-agent systems to model a constraint satisfaction problem, we can regard a complete evaluation that includes all the variables as a particular state of the system, and a self-organization based algorithm as a set of local rules we design for agents to follow. The goal of the system is to converge to a stable state where the values of $X$ do not violate any constraints, which is a feasible solution to the problem. Notice that the set $C$ consists of multiple constraints. An evaluation that satisfies constraints $C'$ always implies an evaluation that satisfies constraints $C''$ if $C''\subseteq C'$. Next, we will use a real example to illustrate how we use our framework to model a constraint satisfaction problem.

Multi-agent systems can be used to support information exchange within user communities where each user agent represents a user's interests. But how does a user agent make contact with other user agents with common interests? We can use a peer-to-peer network, but this could flood the network with queries, risking overloading the system. A decentralized solution based on middle agents can self-organize to form communities among agents with common interests [28]. User agents represent users, and each user agent registers with one or more middle agents. User agents (requesters) send queries about their user's interests to middle agents. Given these queries, each middle agent then searches among the information it holds from the other user agents registered with it. If it can respond to a query based on this information, the search is completed. Otherwise, the middle agent communicates with other middle agents in order to try and obtain the information. Once this has been done, the middle agent relays the results of the search to the requester. After that, the middle agent checks whether both the requester and the provider are within its group. If not, the middle agent of the requester transfers the provider to its group, so that both user agents are in the same group. The consequence is that users with common interests form a community. The querying behavior of user agents builds up a profile of their interests, which is updated by successive queries. This is a solution with self-organization features to a constraint satisfaction problem in which the middle agents with which user agents register correspond to the set of variables $X$, the possible middle agents with which user agents can register correspond to the domains of the variables $D$, and user agents' interests in other user agents correspond to the constraints $C$. Compared to solving the problem centrally, which might run into computational issues when the number of users increases, the above solution is a highly scalable process that operates efficiently with a large number of users. As user agents broadcast their queries in the system when implementing the solution, the system designer is not required to be aware of user agents' interests beforehand.

Figure 5: Self-organization of agent communities; different colours represent different user interests.

We now use our framework to model the example. The system has a set of user agents $U$ and a set of middle agents $M$; each middle agent can register ($reg$) a user agent, deregister ($disreg$) a user agent, or do nothing. The function $int:U\to\mathcal{P}(U)$ returns the static interests of each user agent $u\in U$. Each state $q$ is labeled with propositions, each of which is of the form $reg(u,m)$, meaning that user agent $u$ is registered with middle agent $m$. We assume that middle agents are aware of the positions where user agents register, but not of the interests of user agents before the user agents broadcast their queries in the system. The system moves from one state to another once a user agent is registered with or deregistered from a middle agent. The goal of the system is to converge to a state $q^*$ where all user agents are in their own user communities. As Fig. 5 depicts, a user community consists of user agent(s) and middle agent(s); every user agent in the community registers with the same middle agent as the user agents it is interested in do. In order to formalize this property, we extend our semantics as follows: given $U'\subseteq U$ and $M'\subseteq M$,

  • $\mathcal{S},\Gamma,q\models\operatorname{com}(U',M')$ iff for all $u\in U'$ and $u'\in int(u)$ there exists $m\in M'$ such that $\mathcal{S},\Gamma,q\models reg(u,m)\land reg(u',m)$.
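A minimal Python sketch of this community condition, assuming dictionaries interest (the function $int$) and registered (the extension of the $reg$ propositions in a state); the names and the sample data, taken from state $q_4$ of the example below, are for illustration only.

def com(U_prime, M_prime, interest, registered):
    # S, Gamma, q |= com(U', M'): for all u in U' and u' in int(u),
    # there is some m in M' with reg(u, m) and reg(u', m) true in the state.
    return all(
        any(m in registered[u] and m in registered[u2] for m in M_prime)
        for u in U_prime
        for u2 in interest[u]
    )

# State q4 of the example: u1, u2 register with m1 and u3, u4 register with m3.
registered = {"u1": {"m1"}, "u2": {"m1"}, "u3": {"m3"}, "u4": {"m3"}}
interest = {"u1": {"u2"}, "u2": {"u1"}, "u3": {"u4"}, "u4": {"u3"}}
print(com({"u1", "u2"}, {"m1"}, interest, registered))  # True
print(com({"u1", "u3"}, {"m1"}, interest, registered))  # False: u3 and u4 do not register with m1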

The local rule $\langle\tau,\gamma\rangle$ for each agent is prescribed as follows: if a user agent $u_1$ has interest in another user agent $u_2$ while they do not register with the same middle agent, then the middle agent of $u_2$ has to deregister $u_2$ and send the information about $u_2$ to the middle agent of $u_1$ so that it can register $u_2$. Suppose that at the starting state $q_0$ user agents randomly register with different middle agents and each has interest in some other user agents (see Table 1). Starting from $q_0$, $u_1$ asks a question about $u_2$. After hearing the query about $u_2$, $m_2$ deregisters $u_2$ and sends information about $u_2$ to $m_1$, which moves the system from state $q_0$ to state $q_1$. After getting the information about $u_2$ from $m_2$, $m_1$ registers $u_2$, which moves the system from state $q_1$ to state $q_2$. In state $q_2$, both $u_1$ and $u_2$ register with the same middle agent $m_1$. Later on, $u_2$ asks a question about $u_1$. Because $u_1$ and $u_2$ register with the same middle agent $m_1$, the query does not change the system. Then $u_3$ asks a question about $u_4$. Similar to the case of $u_2$, $m_1$ deregisters $u_4$ after hearing the query, and $m_3$ registers $u_4$ after getting the information about $u_4$ from $m_1$. Then $u_4$ asks a question about $u_3$, which does not trigger any changes to the system. Based on the local rules, the dependence graph between user agents and middle agents in terms of their communication along the computation $\lambda^*(q_0)$ is shown in Fig. 6. Triggered by the queries, all agents follow their local rules and the system transits from $q_0$ to $q_4$. See Fig. 7 for the state transitions of the system and the valuation of each state. Because in $q_4$ both $u_1$ and $u_2$ with common interests register with $m_1$, and both $u_3$ and $u_4$ with common interests register with $m_3$, $q_4$ satisfies the property of our desired state. Starting from a disordered state $q_0$, the system finally converges to state $q_4$ where every user agent $u\in U$ registers with the same middle agent as the user agents it is interested in do, forming two user communities $\operatorname{com}(\{u_1,u_2\},\{m_1\})$ and $\operatorname{com}(\{u_3,u_4\},\{m_3\})$. How does the system reach this desired state? What are the agents' contributions to forming these two communities? Here is the analysis.

Table 1: User agents' interests and their initial registration positions
User agent   Interest   Registers with
$u_1$   $u_2$   $m_1$
$u_2$   $u_1$   $m_2$
$u_3$   $u_4$   $m_3$
$u_4$   $u_3$   $m_1$
Figure 6: Dependence graph between user agents and middle agents along the computation $\lambda^*(q_0)$, where all agents follow their local rules.
Figure 7: State transitions of the system.
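To make the run from $q_0$ to $q_4$ concrete before the analysis, here is a minimal Python sketch of the prescribed local rule. It collapses the two transitions of each effective query (deregistering at the old middle agent and registering at the new one) into a single update; the query order and names are taken from the description above, and everything else is an assumption for illustration.

# Initial registrations (Table 1) and the static interests of the user agents.
registered = {"u1": "m1", "u2": "m2", "u3": "m3", "u4": "m1"}
interest = {"u1": "u2", "u2": "u1", "u3": "u4", "u4": "u3"}

def handle_query(requester, target):
    # Local rule: if the agent the requester is interested in registers elsewhere, its middle agent
    # deregisters it and the requester's middle agent registers it.
    if registered[target] != registered[requester]:
        registered[target] = registered[requester]

# Queries in the order described above: u1 about u2, u2 about u1, u3 about u4, u4 about u3.
for requester in ["u1", "u2", "u3", "u4"]:
    handle_query(requester, interest[requester])
    print(requester, "queried:", registered)

# Final registrations: u1, u2 with m1 and u3, u4 with m3, i.e. the two communities of q4.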

From Fig. 6, we can decompose the system into four layers: all the user agents form layer 0, $\{m_2\}$ layer 1, $\{m_1\}$ layer 2 and $\{m_3\}$ layer 3. We can see that there are many independent components w.r.t. the computation $\lambda^*(q_0)$. Because the contributions of user agents are not encoded in the system, we do not discuss the independent components that are formed only by user agents. Given a set of temporal formulas $\{\Diamond\operatorname{com}(\{u_1,u_2\},\{m_1\}),\Diamond\operatorname{com}(\{u_3,u_4\},\{m_3\})\}$, we verify which coalition of agents fully contributes to any of them. Firstly, coalition $\{u_1,u_2,m_1,m_2\}$ fully contributes to $\Diamond\operatorname{com}(\{u_1,u_2\},\{m_1\})$. This is because none of the agents in the coalition gets input from agents outside the coalition, and they form the community of $\{u_1,u_2,m_1\}$, which can be expressed by

$\mathcal{S},\Gamma,q_0\models\langle u_1,u_2,m_1,m_2\rangle\Diamond\operatorname{com}(\{u_1,u_2\},\{m_1\})$,

and apparently it cannot be achieved without any of them, which satisfies the minimality condition. Secondly, consider community $\operatorname{com}(\{u_3,u_4\},\{m_3\})$. From Fig. 6, we can see that coalition $\{u_3,u_4,m_3\}$ is not an independent component, as it gets input from middle agent $m_1$. Finally, the whole system fully contributes to forming the two communities $\operatorname{com}(\{u_1,u_2\},\{m_1\})$ and $\operatorname{com}(\{u_3,u_4\},\{m_3\})$.

Such information regarding agents' full contributions allows us to understand how user agents' interests are satisfied and how user communities are formed, which facilitates the development of the system. For example, since coalition $\{u_1,u_2,m_1,m_2\}$ fully contributes to $\Diamond\operatorname{com}(\{u_1,u_2\},\{m_1\})$, that community will not change even if we change the interests of $u_3$ and $u_4$ or if $m_3$ does not register $u_4$.

6 Related Work

In order to understand the complex link between local agent behavior and system-level behavior, we need to study the independence between local agent behaviors in terms of ensuring properties. Coalition-related logics achieve that from different perspectives. For instance, alternating-time temporal logic (ATL) and game theory can be used to check whether a coalition of agents can enforce a state property regardless of what other agents do [19]. [1] study whether a normative system remains effective even though some agents do not comply with it. [30] identifies two types of system properties that are unchangeable by restricting the joint behavior of a coalition. Apart from the semantic dimension, we have proposed the notion of independent components, a coalition of agents which only gets input information from agents in the coalition. It is a structural dimension to represent a coalition of agents whose behavior is independent of the behavior of agents outside the coalition.

If we view local rules as norms that are used to regulate the behavior of systems, self-organizing multi-agent systems are actually a category of normative multi-agent systems. In preventative control systems [26], norms are represented as hard constraints so that violations are impossible, but this does not respect agents' autonomy. Soft constraints are used in detective control systems, where norms can be violated but agents are motivated to follow them through sanctions or punishments [5]. This is indeed more flexible than setting hard constraints in the system. However, agents coming from an open environment have their own personal situations, such as knowledge and preferences, which might not be known to the designer of the system. In such a case, the system designer cannot identify which outcome is desired, and thus it is hard to identify legal computations to get there [7][20]. The norms that we consider in this paper are designed based on agents' communication. As the norm prescribes, agents are supposed to communicate with each other about their own situations, and what agents need to do depends on the communication results. This type of norm is in fact widely used in multi-agent systems because it allows agents to regulate the system by themselves [28][27][22].

The logic we use in this paper is inspired by ATL [2]. ATL is usually used to reason about the strategies of participating agents [6][15]: the formula $\llangle A\rrangle\psi$ is read as coalition $A$ has a strategy, or can collaborate, to bring about formula $\psi$, which can be seen as the capacity of the participating agents to ensure a result no matter what other agents outside the coalition do. A strategy is a function that takes a state (imperfect recall) or a sequence of states (perfect recall) as an argument and outputs an action, which can be seen as a plan. In order to make a plan for a coalition of agents to ensure a property, there has to be an external party that is aware of the available actions of each agent and of the game structure. In [2], the authors use a game between a protagonist and an antagonist to gain more intuition about ATL formulas. The ATL formula $\llangle A\rrangle\psi$ is satisfied at state $q$ iff the protagonist has a winning strategy to achieve $\psi$ in this game, which means that the protagonist is aware of the available actions of each agent and of the game so that he can make a strategy (or a plan) for coalition $A$ to win the game. However, in a self-organizing multi-agent system, agents only have a local view of the system, which in this paper consists of the state where they are, their own internal functions, the actions that are available to them and the local rules they need to follow, but not the internal functions of other agents or the actions available to other agents. Hence, there is no external party to centrally make a plan for agents to follow. The logic in this paper is used to reason about the local rules of the participating agents: the formula $\langle A\rangle\psi$ is read as coalition $A$ brings about formula $\psi$ by following their local rules no matter what other agents outside the coalition do. The combined use of logic and graph theory is new in the research on multi-agent systems, but it has appeared in the area of argumentation. An abstract argumentation framework can be represented as a directed graph (called a defeat graph) where nodes represent arguments and arcs represent attacks between them [14]. [21] uses strongly connected components from graph theory to decompose an abstract argumentation framework for efficient computation of argumentation extensions. The decomposition approach in this paper is inspired by that work, while we use it not for efficient computation but for system verification.

While we use modal logic to understand the complex link between local agent behavior and system-level behavior, one could argue that causal reasoning is a plausible alternative. Causal reasoning is the process of identifying the relationship between a cause and its effect [23]. It has indeed been used to identify the causal relation between macro- and micro-variables in [9][8]. For self-organizing multi-agent systems, the local rules that agents follow are the cause and the global property they contribute to is the effect. But sometimes it might be difficult to identify the causal relation because of the interactions between agents. In such cases, causal reasoning can still be combined with our graph-based layered approach to decompose the system, which is similar to the idea proposed by Judea Pearl and Thomas Verma to combine logic and directed graphs for causal reasoning [24].

7 Conclusion

Self-organization has been introduced to multi-agent systems as an internal control process or mechanism to solve difficult problems spontaneously. However, because a self-organizing multi-agent system has autonomous agents and local interactions between them, it is difficult to predict the global behavior of the system deductively from the behavior of the agents we design, making implementation the usual way to evaluate a self-organizing multi-agent system. Therefore, we believe that if we can understand how agents bring about the behavior of the system, in the sense of which coalition contributes to which system property independently, the development of self-organizing multi-agent systems will be facilitated. In this paper, we propose a logic-based framework for self-organizing multi-agent systems, where abstract local rules are defined such that agents make their next moves based on their communication with other agents. Such a framework allows us to verify properties of the system without implementing it. A structural property called independent components is introduced to represent a coalition of agents which do not get input from agents outside the coalition. The dependence relation between coalitions of agents regarding their contributions to the global behavior of the system is reasoned about from the structural and semantic perspectives. We then show that model-checking a formula in our language remains close to the complexity of model-checking standard ATL, while model-checking whether a coalition of agents fully contributes to a temporal property is in exponential time. Moreover, we combine our framework with graph theory to decompose a system into different coalitions located in different layers. The resulting information about agents' full contributions allows us to understand the complex link between local agent behavior and system-level behavior in a self-organizing multi-agent system. We finally show how we can use our framework to model a constraint satisfaction problem, where a solution based on self-organization is used. Certainly, there are some possible limitations of this study: for example, we only consider communication as the interaction between agents, which does not capture all types of interaction in the system. Moreover, the dependence graph with respect to the computation $\lambda^*(q)$ is determined by the agents' communication required by local rules. Thus, the internal function $m_i$ of each agent plays an important role in the global system behavior. In the future, it will be interesting to investigate the robustness of self-organizing multi-agent systems under changes of internal functions, which may happen when the system is deployed in an open environment and agents can join or leave the system as they want.

References

  • [1] Thomas Ågotnes, Wiebe Van der Hoek, and Michael Wooldridge, ‘Robust normative systems and a logic of norm compliance’, Logic Journal of IGPL, 18(1), 4–30, (2010).
  • [2] Rajeev Alur, Thomas A Henzinger, and Orna Kupferman, ‘Alternating-time temporal logic’, Journal of the ACM (JACM), 49(5), 672–713, (2002).
  • [3] Carole Bernon, Valérie Camps, Marie-Pierre Gleizes, and Gauthier Picard, ‘Tools for self-organizing applications engineering’, in International Workshop on Engineering Self-Organising Applications, pp. 283–298. Springer, (2003).
  • [4] Carole Bernon, Marie-Pierre Gleizes, Sylvain Peyruqueou, and Gauthier Picard, ‘Adelfe: a methodology for adaptive multi-agent systems engineering’, in International Workshop on Engineering Societies in the Agents World, pp. 156–169. Springer, (2002).
  • [5] Ronen I Brafman and Moshe Tennenholtz, ‘On partially controlled multi-agent systems’, Journal of Artificial Intelligence Research, 4, 477–507, (1996).
  • [6] Nils Bulling, Modelling and Verifying Abilities of Rational Agents, Papierflieger-Verlag, 2010.
  • [7] Nils Bulling and Mehdi Dastani, ‘Norm-based mechanism design’, Artificial Intelligence, 239, 97–142, (2016).
  • [8] Krzysztof Chalupka, Automated Macro-scale Causal Hypothesis Formation Based on Micro-scale Observation, Ph.D. dissertation, California Institute of Technology, 2017.
  • [9] Krzysztof Chalupka, Tobias Bischoff, Pietro Perona, and Frederick Eberhardt, ‘Unsupervised discovery of el nino using causal feature learning on microlevel climate data’, in Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, pp. 72–81, (2016).
  • [10] Edmund M. Clarke, E Allen Emerson, and A Prasad Sistla, ‘Automatic verification of finite-state concurrent systems using temporal logic specifications’, ACM Transactions on Programming Languages and Systems (TOPLAS), 8(2), 244–263, (1986).
  • [11] Edmund M Clarke Jr, Orna Grumberg, Daniel Kroening, Doron Peled, and Helmut Veith, Model checking, MIT press, 2018.
  • [12] Giovanna Di Marzo Serugendo, Marie-Pierre Gleizes, and Anthony Karageorgos, ‘Self-organization in multi-agent systems’, Knowledge Engineering Review, 20(2), 165–189, (2005).
  • [13] Marco Dorigo, Mauro Birattari, and Thomas Stutzle, ‘Ant colony optimization’, IEEE computational intelligence magazine, 1(4), 28–39, (2006).
  • [14] Phan Minh Dung, ‘On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games’, Artificial intelligence, 77(2), 321–357, (1995).
  • [15] Valentin Goranko, Antti Kuusisto, and Raine Rönnholm, ‘Game-theoretic semantics for alternating-time temporal logic’, ACM Transactions on Computational Logic (TOCL), 19(3), 1–38, (2018).
  • [16] VI Gorodetskii, ‘Self-organization and multiagent systems: I. models of multiagent self-organization’, Journal of Computer and Systems Sciences International, 51(2), 256–281, (2012).
  • [17] VI Gorodetskii, ‘Self-organization and multiagent systems: Ii. applications and the development technology’, Journal of Computer and Systems Sciences International, 51(3), 391–409, (2012).
  • [18] Abdelkader Khelil and Rachid Beghdad, ‘Esa: an efficient self-deployment algorithm for coverage in wireless sensor networks’, Procedia Computer Science, 98, 40–47, (2016).
  • [19] Max Knobbout and Mehdi Dastani, ‘Reasoning under compliance assumptions in normative multiagent systems’, in Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pp. 331–340. International Foundation for Autonomous Agents and Multiagent Systems, (2012).
  • [20] Max Knobbout, Mehdi Dastani, and John-Jules Ch Meyer, ‘Formal frameworks for verifying normative multi-agent systems’, in Theory and Practice of Formal Methods, 294–308, Springer, (2016).
  • [21] B. Liao, Efficient Computation of Argumentation Semantics, Intelligent systems series, Elsevier Science, 2013.
  • [22] Kathryn Sarah Macarthur, Ruben Stranders, Sarvapali Ramchurn, and Nicholas Jennings, ‘A distributed anytime algorithm for dynamic task allocation in multi-agent systems’, in Twenty-Fifth AAAI Conference on Artificial Intelligence, (2011).
  • [23] Judea Pearl, Causality, Cambridge university press, 2009.
  • [24] Judea Pearl and Thomas Verma, The logic of representing dependencies by directed graphs, University of California (Los Angeles). Computer Science Department, 1987.
  • [25] Gauthier Picard, Carole Bernon, and Marie-Pierre Gleizes, ‘Etto: emergent timetabling by cooperative self-organization’, in International Workshop on Engineering Self-Organising Applications, pp. 31–45. Springer, (2005).
  • [26] Yoav Shoham, ‘Agent-oriented programming’, Artificial intelligence, 60(1), 51–92, (1993).
  • [27] Gabriele Valentini, Heiko Hamann, and Marco Dorigo, ‘Self-organized collective decision making: The weighted voter model’, in Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, pp. 45–52. International Foundation for Autonomous Agents and Multiagent Systems, (2014).
  • [28] Fang Wang, ‘Self-organising communities formed by middle agents’, in Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 3, pp. 1333–1339. ACM, (2002).
  • [29] Michael Wooldridge and Wiebe Van Der Hoek, ‘On obligations and normative ability: Towards a logical analysis of the social contract’, Journal of Applied Logic, 3(3-4), 396–420, (2005).
  • [30] Jun Wu, Chongjun Wang, and Junyuan Xie, ‘A framework for coalitional normative systems’, in The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pp. 259–266, (2011).
  • [31] Dayong Ye, Minjie Zhang, and Danny Sutanto, ‘Self-organization in an agent network: A mechanism and a potential application’, Decision Support Systems, 53(3), 406–417, (2012).
  • [32] Dayong Ye, Minjie Zhang, and Athanasios V Vasilakos, ‘A survey of self-organization mechanisms in multiagent systems’, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(3), 441–461, (2016).