
Probably approximately correct stability of allocations in uncertain coalitional games with private sampling

George Pantazis (g.pantazis@tudelft.nl), Delft Center for Systems and Control, TU Delft, The Netherlands
Filiberto Fele (ffele@us.es), Department of Systems Engineering and Automation, University of Seville, Spain
Filippo Fabiani (filippo.fabiani@imtlucca.it), IMT School for Advanced Studies Lucca, Italy
Sergio Grammatico (s.grammatico@tudelft.nl), Delft Center for Systems and Control, TU Delft, The Netherlands
Kostas Margellos (kostas.margellos@eng.ox.ac.uk), Department of Engineering Science, University of Oxford, UK
Abstract

We study coalitional games with exogenous uncertainty in the coalition value, in which each agent is allowed to have private samples of the uncertainty. As a consequence, the agents may have a different perception of stability of the grand coalition. In this context, we propose a novel methodology to study the out-of-sample coalitional rationality of allocations in the set of stable allocations (i.e., the core). Our analysis builds on the framework of probably approximately correct learning. Initially, we state a priori and a posteriori guarantees for the entire core. Furthermore, we provide a distributed algorithm to compute a compression set that determines the generalization properties of the a posteriori statements. We then refine our probabilistic robustness bounds by specialising the analysis to a single payoff allocation, taking, also in this case, both a priori and a posteriori approaches. Finally, we consider a relaxed $\zeta$-core to include nearby allocations and also address the case of an empty core. For this case, probabilistic statements are given on the eventual stability of allocations in the $\zeta$-core.

keywords:
Uncertain coalitional games; Statistical learning; Data privacy

1 Introduction

Multi-agent systems are pervasive across various fields, including engineering (Raja and Grammatico, 2021; Karaca and Kamgarpour, 2020; Han et al., 2019; Fele et al., 2017, 2018), economics, and social sciences (McCain, 2008). Even though agents often behave as self-interested entities, their limited ability to improve their utility can, in certain scenarios, motivate them to form coalitions to achieve a higher individual payoff. This situation can be modelled through coalitional games (Chalkiadakis et al., 2011). As each agent’s participation in a coalition is subject to their own payoff being maximised, a question emerges about how to allocate the total value of the coalition such that no agent deviates from it. This problem is known as stability of agents’ allocations.

In real-world scenarios, the value of a coalition is typically affected by uncertainty. The consideration of uncertainty in the coalitional values of a game finds its roots in seminal works such as Charnes and Granot (1976, 1977, 1973), which provided solution concepts to yield stable allocations against uncertainty. Suijs et al. (1999) discussed non-emptiness of the core for a particular class of stochastic games. Chalkiadakis and Boutilier (2004) and Ieong and Shoham (2008) addressed uncertainty through Bayesian learning methods, while Li and Conitzer (2015) investigated different solution concepts that maximize the probability of obtaining stable allocations. Chalkiadakis and Boutilier (2008) and Guéneron and Bonnet (2021) explored the dynamics of repeated stochastic coalitional games.

The connection between probably approximately correct (PAC) learning and uncertain coalitional games has been explored in Balcan et al. (2015). The spirit of Procaccia and Rosenschein (2006) is similar, using Vapnik-Chervonenkis (VC) theory to learn the winning coalitions for the class of the so-called simple games. Balcan et al. (2015) focused on a complementary problem where only a randomized subset of coalitions is considered. Both these works evaluate the sample complexity from the VC theory perspective, and hence their results suffer from the associated conservativeness. Pantazis et al. (2022a) leverage the scenario approach (Campi and Garatti, 2021, 2018a, 2018b; Garatti and Campi, 2022) to provide distribution-free guarantees on the stability of allocations in a PAC manner, based on samples from the exogenous uncertainty affecting the value functions. On a parallel line of research, Pantazis et al. (2023) propose a data-driven Wasserstein-based distributionally robust approach for allocations’ stability.

Unlike the aforementioned works, we consider a more general setting where the uncertainty data is privately drawn by each agent. As such, information is heterogeneous across agents, and the samples are regarded as a private resource. When data samples are commonly shared among agents, the work of Pantazis et al. (2022a) provides guarantees on the stability of allocations. In particular, the developments in Pantazis et al. (2022a) rely on the notion of scenario core – a data-driven approximation of the robust core based on the hypothesis that all uncertainty samples are shared among the agents (Raja and Grammatico, 2021). Given this common set of samples, the perception of a stable allocation based on the available data is identical across agents. This is no longer the case for private sampling, as different samples of the exogenous uncertainty can result in a different perception of stability by each agent. We adopt a PAC learning approach and leverage the concept of compression (Margellos et al., 2015), i.e., the set of samples essential for the reconstruction of the scenario core and whose cardinality affects the generalization properties of the provided guarantees. We give a priori and a posteriori certificates for the entire scenario core obtained by private sampling, and propose a distributed algorithm to calculate a compression set.

We then focus on the specific allocation returned by some algorithm (akin, e.g., to the distributed payoff algorithm by Raja and Grammatico (2021)), and provide a priori coalition stability guarantees for this allocation (Theorem 3). We show how the dimension of the problem – in our case the number of agents – plays a key role. Less conservative probabilistic bounds can often be obtained by taking an a posteriori approach (Campi et al., 2018), i.e., based on observation of the realized uncertainty. In this case, the probabilistic bounds can be significantly improved and even a tighter a priori bound can be obtained (see Theorem 4). Finally, we consider a relaxed $\zeta$-core to include nearby allocations and also address the case of an empty core. We leverage recent results by Campi and Garatti (2021) and study the probabilistic stability of allocations in the $\zeta$-core.

2 Scenario approach for stochastic coalitional games

We consider a coalitional game with $N$ agents identified by the set $\pazocal{N}=\{1,\dots,N\}$. We denote the number of possible subcoalitions, excluding the grand coalition, by $M$, i.e., $M=2^{N}-1$. The worth of a coalition $S\subseteq\pazocal{N}$ is given by the so-called value function; here we consider transferable utility problems, i.e., the value of a coalition is expressed by a real number which can be split among participating agents. We posit that the value of each coalition is subject to uncertainty. Then, the value of a coalition $S\subset\pazocal{N}$ is a function $u_{S}:\Xi\to\mathbb{R}$ that, given an uncertainty realization $\xi\in\Xi$, returns the total payoff for the agents in $S$. The value of the grand coalition is the deterministic quantity $u_{\pazocal{N}}\in\mathbb{R}$.

The uncertain coalitional game is then defined as the tuple $G_{\Xi}=\langle\pazocal{N},\{u_{S}\}_{S\subseteq\pazocal{N}},\Xi,\mathbb{P}\rangle$, where $\mathbb{P}$ is some unknown probability measure over $\Xi$. A vector $\bm{x}\coloneqq(x_{i})_{i\in\pazocal{N}}\in\mathbb{R}^{N}$, where $x_{i}$ is the payoff received by agent $i$, is called an allocation. For a given uncertainty realization $\xi\in\Xi$, an allocation is strictly rational for the members of $S$ if $\sum_{i\in S}x_{i}>u_{S}(\xi)$; the latter implies that agents have an incentive to form this coalition for this particular uncertainty realization. An allocation is efficient if $\sum_{i\in\pazocal{N}}x_{i}=u_{\pazocal{N}}$. Efficient allocations such that agents have no incentive to deviate from the grand coalition are called stable. The set of all such allocations is the core of the game.

The notion of robust core is proposed to account for the presence of uncertainty; see, e.g., Pantazis et al. (2022a); Raja and Grammatico (2021); Nedić and Bauso (2013). We formally define it as $C(G_{\Xi})\coloneqq\{\bm{x}\in\mathbb{R}^{N}\colon\sum_{i\in\pazocal{N}}x_{i}=u_{\pazocal{N}},\ \sum_{i\in S}x_{i}\geq\max_{\xi\in\Xi}u_{S}(\xi)\text{ for all }S\subset\pazocal{N}\}$. For any possible realization of the uncertainty $\xi\in\Xi$, any allocation $\bm{x}\in C(G_{\Xi})$ gives the agents no incentive to defect from the grand coalition and form sub-coalitions. Unfortunately, computing the robust core explicitly is hard, as we assume no knowledge of the uncertainty support $\Xi$ (nor of the underlying probability distribution $\mathbb{P}$). To circumvent this challenge, we adopt a data-driven methodology and approximate the robust core by drawing a finite number $K$ of independent and identically distributed (i.i.d.) samples $\bm{\xi}\coloneqq(\xi^{(1)},\ldots,\xi^{(K)})\in\Xi^{K}$, where $\Xi^{K}$ denotes the $K$-fold Cartesian product of $\Xi$; we refer to vectors $\bm{\xi}$ as multi-samples. This constitutes the scenario game $G_{K}=\langle\pazocal{N},\{u_{S}\}_{S\subseteq\pazocal{N}},\bm{\xi}\rangle$, whose core is the set $\{\bm{x}\in\mathbb{R}^{N}:\sum_{i\in\pazocal{N}}x_{i}=u_{\pazocal{N}}\text{ and }\sum_{i\in S}x_{i}\geq\max_{k=1,\dots,K}u_{S}(\xi^{(k)}),\ \forall S\subset\pazocal{N}\}$, referred to as the scenario core.

We now take the notion of allocation stability a step further, considering the more general setting where every agent $i$ only has access to a private set of samples $\bm{\xi}_{i}$ from $\Xi$. Let $\bm{\xi}_{i}\in\Xi^{K_{i}}$ be the multi-sample privately drawn by agent $i$. We say that an allocation $\bm{x}$ of $G_{K}$ is stable with respect to $\bm{\xi}_{i}$ if $\sum_{i\in\pazocal{N}}x_{i}=u_{\pazocal{N}}$ and $\sum_{i\in S}x_{i}\geq\max_{k=1,\dots,K_{i}}u_{S}(\xi_{i}^{(k)})$ for all $S\subset\pazocal{N}$ such that $S\supseteq\{i\}$.

This immediately leads to the following extension of the scenario core:

Definition 2.1.

Let $\bm{\xi}_{i}=(\xi_{i}^{(1)},\ldots,\xi_{i}^{(K_{i})})\in\Xi^{K_{i}}$ be some multi-sample drawn by agent $i$. The scenario core with private sampling is given by

$C(G_{K})=\Big\{\bm{x}\in\mathbb{R}^{N}:\sum_{i\in\pazocal{N}}x_{i}=u_{\pazocal{N}}\text{ and }\sum_{i\in S}x_{i}\geq\max_{i\in S}\max_{k=1,\dots,K_{i}}u_{S}(\xi_{i}^{(k)}),\ \forall S\subset\pazocal{N}\Big\}.$ (1)
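To make the construction in (1) concrete, the following sketch builds the right-hand sides of the coalitional constraints from privately drawn data and checks membership of a candidate allocation in the scenario core. It is a minimal illustration only: the callable interface u_S(S, xi) for the value functions and the container of private samples are assumptions introduced here, not part of the formulation above.

```python
from itertools import combinations

def scenario_core_thresholds(N, u_S, private_samples):
    """Right-hand sides of the coalitional constraints in (1).

    N               : number of agents (indexed 0, ..., N-1)
    u_S(S, xi)      : hypothetical callable returning the value of coalition S under xi
    private_samples : dict mapping agent i to its list of K_i private draws
    """
    thresholds = {}
    for size in range(1, N):                  # all S strictly contained in the grand coalition
        for S in combinations(range(N), size):
            # max over the members' private samples, as in Definition 2.1
            thresholds[S] = max(u_S(S, xi) for i in S for xi in private_samples[i])
    return thresholds

def in_scenario_core(x, u_grand, thresholds, tol=1e-9):
    """Check efficiency and coalitional rationality of an allocation x."""
    if abs(sum(x) - u_grand) > tol:
        return False
    return all(sum(x[i] for i in S) >= thresholds[S] - tol for S in thresholds)
```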

Unless otherwise specified, we assume that the following conditions hold throughout the paper:

Assumption 1
  1. (i)

    Each agent $i\in\pazocal{N}$ draws $K_{i}$ independent samples from the probability distribution $\mathbb{P}$. The samples drawn by any agent are independent from those drawn by other agents.

  2. (ii)

    $C(G_{K})$ is non-empty for any multi-sample $(\bm{\xi}_{i})_{i\in\pazocal{N}}\in\Xi^{K}$.

In the following, we let $K=\sum_{i\in\pazocal{N}}K_{i}$ and $\bm{\xi}=(\bm{\xi}_{i})_{i=1}^{N}$. Also, for brevity, we will use $S\supseteq\{i\}$ to denote all $S\subset\pazocal{N}$ which allow agent $i$ as a member.

On the basis of privately available data, we wish to provide guarantees on the probability that allocations $\bm{x}\in C(G_{K})$ will remain stable (i.e., within $C(G_{K})$) for any future, yet unseen, uncertainty realization. Capitalizing on Pantazis et al. (2022b); Fabiani et al. (2022); Pantazis et al. (2022a), we define two probabilistic notions of instability. In particular, the first refers to a particular allocation in the core, whereas the second involves the entire set $C(G_{K})$.

Definition 2.2.
  1. (i)

    Let $V:\mathbb{R}^{N}\rightarrow[0,1]$. For any $\bm{x}\in\mathbb{R}^{N}$, $V(\bm{x})\coloneqq\mathbb{P}\{\xi\in\Xi:\exists S\subset\pazocal{N}\colon\sum_{i\in S}x_{i}<u_{S}(\xi)\}$ is the probability of allocation instability.

  2. (ii)

    Let $\mathbb{V}:2^{\mathbb{R}^{N}}\rightarrow[0,1]$. We call $\mathbb{V}(C(G_{K}))\coloneqq\mathbb{P}\{\xi\in\Xi:\exists\bm{x}\in C(G_{K}),\,S\subset\pazocal{N}\colon\sum_{i\in S}x_{i}<u_{S}(\xi)\}$ the probability of core instability.

$\mathbb{V}(C(G_{K}))$ thus denotes the probability, with respect to the realizations of $\xi$, that, for some $S$ with value function $u_{S}(\xi)$, at least one of the allocations in the scenario core will become unstable, i.e., will be dominated by the option of defecting from the grand coalition to form $S$.

To bound the probability of core instability in a PAC fashion, we introduce two key concepts from statistical learning theory, namely the algorithm and the compression set (Margellos et al., 2015). The latter refers to the fact that only a subset of the data in $\bm{\xi}$ may be sufficient to produce the same scenario core. As we will see later, this underpins the quality of the provided probabilistic stability guarantees.

Definition 2.3.

A mapping $\pazocal{A}:\Xi^{K}\rightarrow 2^{\mathbb{R}^{N}}$ that takes as input a multi-sample $\bm{\xi}\in\Xi^{K}$ and returns the scenario core $C(G_{K})$ of the game is called an algorithm. With $\mathbb{P}^{K}$-probability one w.r.t. the choice of $\bm{\xi}$, a subset $I\subseteq\{\xi^{(1)},\dots,\xi^{(K)}\}$ is a compression set for $\bm{\xi}$ if $\pazocal{A}(I)=\pazocal{A}(\bm{\xi})$.

Any compression set of least cardinality is called a minimal compression set. Another important notion used in our derivations is the support rank (Schildbach et al., 2012).

Definition 2.4.

Consider a constraint of the form $f(\bm{x},\xi)\leq 0$ and, for every $\bm{x}\in\mathbb{R}^{N}$ and $\xi\in\Xi$, let $F(\bm{x},\xi)\coloneqq\{\bm{h}\in\mathbb{R}^{N}:f(\bm{x}+\bm{h},\xi)=f(\bm{x},\xi)\}$. The maximal unconstrained subspace $L$ of the constraint is the largest linear subspace of $\mathbb{R}^{N}$ contained in $F(\bm{x},\xi)$ for all $\bm{x}\in\mathbb{R}^{N}$ and all $\xi\in\Xi$. Then, the support rank $\rho$ of this constraint is given by $\rho=N-\dim(L)$.

In words, the support rank of an uncertain constraint is equal to the dimension of the problem at hand minus the dimension of the maximal unconstrained space of the constraint.
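For instance, a single coalitional rationality constraint of the form $\sum_{j\in S}x_{j}\geq u_{S}(\xi)$ is left unchanged along every direction $\bm{h}$ with $\sum_{j\in S}h_{j}=0$; this unconstrained subspace has dimension $N-1$, so the support rank of such a constraint is $N-(N-1)=1$, irrespective of how $u_{S}$ varies with $\xi$.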

3 PAC stability guarantees for the scenario core with private sampling

3.1 A posteriori collective stability guarantees

In the following theorem we bound, with high confidence, the probability that some allocation $\bm{x}\in C(G_{K})$ will become unstable, i.e., that the scenario core – computed on the basis of $K=\sum_{i\in\pazocal{N}}K_{i}$ samples – will be reduced after a new uncertainty realization.

Theorem 1

Suppose that each agent independently draws a multi-sample $\bm{\xi}_{i}\in\Xi^{K_{i}}$ and let $\bm{\xi}=(\bm{\xi}_{i})_{i=1}^{N}$. Fix a confidence parameter $\beta\in(0,1)$ and choose $\beta_{i}>0$ such that $\sum_{i\in\pazocal{N}}\beta_{i}=\beta$. Consider an algorithm that takes as input $\bm{\xi}$ and returns the scenario core $C(G_{K})$. It holds that

$\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{V}(C(G_{K}))\leq\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i,K})\Big\}\geq 1-\beta,$ (2)

where $s_{i,K}$ is the cardinality of the compression subset associated with agent $i$'s samples, quantified a posteriori, and $\epsilon_{i}$ satisfies

$\epsilon_{i}(K_{i})=1,\qquad\sum_{k=1}^{K_{i}-1}\binom{K_{i}}{k}(1-\epsilon_{i}(k))^{K_{i}-k}=\beta_{i}.$ (3)

Proof: First, note that $\bm{\xi}\in\Xi^{K}$ due to Assumption 1-(i). Then, the following inequalities hold:

$\begin{split}
&\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{V}(C(G_{K}))\leq\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i,K})\Big\}\\
&=\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\big\{\xi\in\Xi:\exists(i,S\supseteq\{i\},\bm{x}):\textstyle\sum_{i\in S}x_{i}<u_{S}(\xi)\big\}\leq\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i,K})\Big\}\\
&=\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\Big\{\bigcup_{i\in\pazocal{N}}\{\xi\in\Xi:\exists(S\supseteq\{i\},\bm{x}):\textstyle\sum_{i\in S}x_{i}<u_{S}(\xi)\}\Big\}\leq\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i,K})\Big\}\\
&\geq\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\sum_{i\in\pazocal{N}}\mathbb{P}\big\{\xi\in\Xi:\exists(S\supseteq\{i\},\bm{x}):\textstyle\sum_{i\in S}x_{i}<u_{S}(\xi)\big\}\leq\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i,K})\Big\}\\
&\geq\mathbb{P}^{K}\Big\{\bigcap_{i\in\pazocal{N}}\big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists(S\supseteq\{i\},\bm{x}):\textstyle\sum_{i\in S}x_{i}<u_{S}(\xi)\}\leq\epsilon_{i}(s_{i,K})\big\}\Big\}\\
&\geq 1-\sum_{i\in\pazocal{N}}\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists(S\supseteq\{i\},\bm{x}):\textstyle\sum_{i\in S}x_{i}<u_{S}(\xi)\}>\epsilon_{i}(s_{i,K})\Big\}.
\end{split}$

Given some new uncertainty realization $\xi$, let $C(G_{K+\xi})$ designate the scenario core built from the multi-sample $(\xi^{(1)},\ldots,\xi^{(K)},\xi)$. The first equality stems from Definition 2.2, and expresses the fact that for $C(G_{K+\xi})$ to be a strict subset of $C(G_{K})$, there must exist some allocation in $C(G_{K})$ which, due to $\xi$, violates the rationality condition for some subcoalition $S$, i.e., $\sum_{i\in S}x_{i}<u_{S}(\xi)$, causing the departure of some agent $i\in S$. The first inequality is obtained by applying the subadditivity property of $\mathbb{P}$, the second from the inclusion of the intersection event, and the last from Bonferroni's inequality (Margellos et al., 2018; Falsone et al., 2020). Note now that $C(G_{K})$ can be found as the solution of the feasibility problem

$\begin{split}
&\text{Find all }\bm{x}\in\mathbb{R}^{N}\\
&\text{s.t. }\sum_{i\in\pazocal{N}}x_{i}=u_{\pazocal{N}},\\
&\text{and }\sum_{i\in S}x_{i}\geq\max_{k=1,\dots,K_{i}}u_{S}(\xi_{i}^{(k)}),\ \forall S\supseteq\{i\},\ \forall i\in\pazocal{N}.
\end{split}$ (4)

Let $\mathscr{C}_{\xi}^{i}=\{C_{i}\subseteq\mathbb{R}^{N}:\sum_{i\in S}x_{i}\geq u_{S}(\xi),\ \forall S\supseteq\{i\},\ \forall\bm{x}\in C_{i}\}$ be the collection of (sub)sets of allocations satisfying the coalitional constraints obtained from data corresponding to agent $i\in\pazocal{N}$. Considering the unique set $\bar{C}_{i}=\{\bm{x}\in\mathbb{R}^{N}:\sum_{i\in S}x_{i}\geq\max_{k=1,\dots,K_{i}}u_{S}(\xi_{i}^{(k)}),\ \forall S\supseteq\{i\}\}$, note that $\bar{C}_{i}\in\bigcap_{k=1}^{K_{i}}\mathscr{C}_{\xi_{i}^{(k)}}^{i}$, i.e., $\bar{C}_{i}$ satisfies the consistency assumption required by Campi et al. (2018, Th. 1). From this,

$\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists(S\supseteq\{i\},\bm{x}):\sum_{i\in S}x_{i}<u_{S}(\xi)\}>\epsilon_{i}(s_{i,K})\Big\}=\mathbb{P}^{K}\big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\bar{C}_{i}\notin\mathscr{C}_{\xi}^{i}\}>\epsilon_{i}(s_{i,K})\big\}\leq\beta_{i},$ (5)

from which (2) follows. \blacksquare

A compression (sub)set $I_{i}$ originating from agent $i$'s uncertainty samples can be obtained by means of Algorithm 1; its cardinality is then $s_{i,K}=|I_{i}|$.

Algorithm 1 Distributed compression algorithm
1: Input: Multi-sample $\bm{\xi}_{i}$, coalition values $\{u_{S}(\cdot)\}_{S\supseteq\{i\}}$;
2: Output: Compression set $I_{i}$;
3: Initialization: $I_{i}=\varnothing$;
4: Each agent $i\in\pazocal{N}$ performs
5: For all $S^{\prime}\supseteq\{i\}$:
   $\bm{x}^{\ast}_{S^{\prime}}\in\operatorname*{arg\,min}_{\bm{x}\geq 0}\ 0\quad\text{s.t. }\sum_{i\in S^{\prime}}x_{i}=\max_{k=1,\dots,K_{i}}u_{S^{\prime}}(\xi_{i}^{(k)}),\ \ \sum_{i\in S}x_{i}\geq\max_{k=1,\dots,K_{i}}u_{S}(\xi_{i}^{(k)}),\ \forall S\supseteq\{i\},\ S\neq S^{\prime}.$
6:   If $\bm{x}^{\ast}_{S^{\prime}}\neq\varnothing$
7:     $I_{i}\leftarrow I_{i}\cup\{\xi_{i}^{(k^{\ast})}\}$, with $k^{\ast}\in\operatorname*{arg\,max}_{k=1,\dots,K_{i}}u_{S^{\prime}}(\xi_{i}^{(k)})$;
8:   End If
9: End For

Note that, differently from the compression algorithm in Pantazis et al. (2022a), Algorithm 1 is distributed among agents. At each iteration of Algorithm 1, a feasibility program is solved where coalitional rationality is enforced with equality for each coalition $S^{\prime}$ in which agent $i$ is allowed to participate. This allows agent $i\in\pazocal{N}$ to identify whether the sample that maximizes $u_{S^{\prime}}(\cdot)$ is critical for the construction of its stable allocation set: if the problem is feasible, the sample is collected as part of the compression set $I_{i}$. Note that, unless coordination is imposed among agents, the compression set obtained through Algorithm 1 is non-minimal.
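A minimal sketch of Algorithm 1 for a single agent is reported below, with the feasibility program of step 5 solved as a linear program. The value-function interface u_S(S, xi) and the sample container are illustrative assumptions (the same hypothetical interface as in the earlier snippet), and ties in the inner argmax are broken arbitrarily.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def compression_set_agent(i, N, u_S, samples_i):
    """Sketch of Algorithm 1 for agent i (agents indexed 0, ..., N-1).

    u_S(S, xi) : hypothetical callable giving the value of coalition S under xi
    samples_i  : agent i's private draws xi_i^(1), ..., xi_i^(K_i)
    """
    # all subcoalitions S containing agent i (grand coalition excluded)
    coalitions = [S for size in range(1, N)
                  for S in combinations(range(N), size) if i in S]
    # index of the sample attaining max_k u_S(xi_i^(k)) for each coalition
    k_max = {S: max(range(len(samples_i)), key=lambda k: u_S(S, samples_i[k]))
             for S in coalitions}
    rhs = {S: u_S(S, samples_i[k_max[S]]) for S in coalitions}

    I_i = set()
    for S_prime in coalitions:
        A_eq = [[1.0 if j in S_prime else 0.0 for j in range(N)]]
        # linprog handles A_ub @ x <= b_ub, so the >= constraints are negated
        A_ub = [[-1.0 if j in S else 0.0 for j in range(N)]
                for S in coalitions if S != S_prime]
        b_ub = [-rhs[S] for S in coalitions if S != S_prime]
        res = linprog(c=np.zeros(N), A_ub=A_ub or None, b_ub=b_ub or None,
                      A_eq=A_eq, b_eq=[rhs[S_prime]], bounds=[(0, None)] * N)
        if res.success:                  # feasible: the maximizing sample is critical
            I_i.add(k_max[S_prime])      # store the index of xi_i^(k*)
    return I_i                           # s_{i,K} = len(I_i)
```

Running this routine locally at every agent and setting $s_{i,K}=|I_{i}|$ yields the quantities entering the bound of Theorem 1.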

3.2 A priori collective stability guarantees

As a byproduct of our a posteriori analysis, an a priori bound can be obtained by considering the worst-case value of $\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i,K})$ over all possible combinations for which $\sum_{i\in\pazocal{N}}s_{i,K}\leq M$ (where $M=2^{N}-1$). The following theorem then provides an a priori bound for the entire core under private sampling.

Theorem 2

Fix $\beta\in(0,1)$ and choose $\beta_{i}\in(0,1)$ such that $\sum_{i\in\pazocal{N}}\beta_{i}=\beta$. It holds that

$\mathbb{P}^{K}\{\bm{\xi}\in\Xi^{K}:\mathbb{V}(C(G_{K}))\leq\epsilon^{\ast}\}\geq 1-\beta,$ (6)

where $\epsilon^{\ast}=\max\{\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i})\colon\sum_{i\in\pazocal{N}}s_{i}\leq M,\,s_{i}\in\mathbb{N}\}$, with $\epsilon_{i}$ defined as in (3), is a quantity independent of the given multi-sample.

Proof: We apply the a posteriori result in Pantazis et al. (2022b, Th. 1) – which admits the number of facets of a randomized feasibility domain as an upper bound to the cardinality of the minimal compression set – to $C(G_{K})$, where each facet is associated with some subcoalition in $2^{\pazocal{N}}$. Now, let $s_{i,K}$ denote the cardinality of the minimal (sub)compression set relative to agent $i$, quantified a posteriori: by Pantazis et al. (2022b, Th. 1) it holds that $\sum_{i\in\pazocal{N}}s_{i,K}\leq M$. Then, $\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i,K})\leq\max_{\{s_{i}\}_{i\in\pazocal{N}}}\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i})$, where the maximum is taken over all $\{s_{i}\}_{i\in\pazocal{N}}$ such that $\sum_{i\in\pazocal{N}}s_{i}\leq M$. By definition of $\epsilon^{*}$ it follows that

$\mathbb{P}^{K}\{\bm{\xi}\in\Xi^{K}:\mathbb{V}(C(G_{K}))\leq\epsilon^{\ast}\}\geq\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{V}(C(G_{K}))\leq\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i,K})\Big\}\geq 1-\beta,$

where the last inequality holds because of Theorem 1. \blacksquare
The results above can be applied to the entire set of allocations that are stable w.r.t. the observed data; because of their generality, these theoretical guarantees tend to be conservative. In what follows we specialise our analysis to a single allocation.

4 PAC stability of a single allocation with private sampling

Let $\bm{x}^{\ast}$ be the unique allocation in $C(G_{K})$ which maximises some convex utility function $f(\cdot)$ (uniqueness of the maximiser is without loss of generality: if multiple maximisers are possible, a convex tie-break rule can be applied to single out a unique solution). Recalling the notion of support rank as per Definition 2.4, the following result holds for $\bm{x}^{*}$.

Theorem 3

Suppose that each agent draws a multi-sample $\bm{\xi}_{i}\in\Xi^{K_{i}}$ and let $\bm{\xi}=(\bm{\xi}_{i})_{i\in\pazocal{N}}$. Fix $\epsilon\in(0,1)$ and choose $\epsilon_{i}>0$ such that $\sum_{i\in\pazocal{N}}\epsilon_{i}=\epsilon$. Then,

$\mathbb{P}^{K}\big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\bm{x}^{\ast}\notin C(G_{K})\}\leq\epsilon\big\}\geq 1-\sum_{i\in\pazocal{N}}\beta_{i},$ (7)

where $\beta_{i}=\sum_{j=0}^{\rho_{i}-1}\binom{K_{i}}{j}\epsilon_{i}^{j}(1-\epsilon_{i})^{K_{i}-j}$, with $\rho_{i}\leq N$ being the support rank of the coalitional rationality constraints corresponding to agent $i$, i.e., relative to all $S\subset\pazocal{N}$ such that $S\supseteq\{i\}$.

Proof: Similarly to the proof of Theorem 1, we have that

$\begin{split}
&\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:V(\bm{x}^{\ast})\leq\epsilon\Big\}\\
&=\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists(i\in\pazocal{N},S\supseteq\{i\}):\textstyle\sum_{i\in S}x^{\ast}_{i}<u_{S}(\xi)\}\leq\sum_{i\in\pazocal{N}}\epsilon_{i}\Big\}\\
&=\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\Big\{\bigcup_{i\in\pazocal{N}}\{\xi\in\Xi:\exists S\supseteq\{i\}:\textstyle\sum_{i\in S}x^{\ast}_{i}<u_{S}(\xi)\}\Big\}\leq\sum_{i\in\pazocal{N}}\epsilon_{i}\Big\}\\
&\geq\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\sum_{i\in\pazocal{N}}\mathbb{P}\{\xi\in\Xi:\exists S\supseteq\{i\}:\textstyle\sum_{i\in S}x^{\ast}_{i}<u_{S}(\xi)\}\leq\sum_{i\in\pazocal{N}}\epsilon_{i}\Big\}\\
&\geq\mathbb{P}^{K}\Big\{\bigcap_{i\in\pazocal{N}}\big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists S\supseteq\{i\}:\textstyle\sum_{i\in S}x^{\ast}_{i}<u_{S}(\xi)\}\leq\epsilon_{i}\big\}\Big\}\\
&\geq 1-\sum_{i\in\pazocal{N}}\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists S\supseteq\{i\}:\textstyle\sum_{i\in S}x^{\ast}_{i}<u_{S}(\xi)\}>\epsilon_{i}\Big\}\geq 1-\sum_{i\in\pazocal{N}}\beta_{i}.
\end{split}$

To obtain the last inequality consider the following optimization program

$\begin{split}
&\bm{x}^{*}=\operatorname*{arg\,min}_{\bm{x}\in\mathbb{R}^{N}}\ f(\bm{x})\\
&\text{s.t. }\sum_{i\in\pazocal{N}}x_{i}=u_{\pazocal{N}},\\
&\text{and }\sum_{i\in S}x_{i}\geq\max_{k=1,\dots,K_{i}}u_{S}(\xi_{i}^{(k)}),\ \forall S\supseteq\{i\},\ \forall i\in\pazocal{N}.
\end{split}$ (8)

We then group the constraints depending on which agent they correspond to. Let $X_{i}(\xi)\coloneqq\{\bm{x}\in\mathbb{R}^{N}\colon\sum_{i\in S}x_{i}\geq u_{S}(\xi),\ \forall S\supseteq\{i\}\}$. With $\beta_{i}$ defined as above, from Schildbach et al. (2012) we have

$\begin{split}
\beta_{i}&\geq\mathbb{P}^{K}\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\bm{x}^{\ast}\notin X_{i}(\xi)\}>\epsilon_{i}\}\\
&=\mathbb{P}^{K}\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists S\supseteq\{i\}:\textstyle\sum_{i\in S}x^{\ast}_{i}<u_{S}(\xi)\}>\epsilon_{i}\},
\end{split}$

which concludes the proof. \blacksquare

Note that the confidence parameter $\beta_{i}$ in Theorem 3 depends on the number of samples $K_{i}$ of agent $i\in\pazocal{N}$ and on the violation level $\epsilon_{i}$ set individually by the agent to determine the probability of satisfaction of the rationality constraints. Most importantly, $\beta_{i}$ depends on the so-called support rank $\rho_{i}$ of the coalitional constraints corresponding to $i\in\pazocal{N}$, which is in any instance upper bounded by the dimension of the decision variable in (8). Theorem 3 is an a priori result and in certain cases can be conservative. The following result provides an improved probabilistic bound, based on the observation of the uncertainty realization. Let

$\epsilon_{i}(s)=1-\Bigg(\frac{\beta_{i}}{(N+1)\binom{K_{i}}{s}}\Bigg)^{\frac{1}{K_{i}-s}}.$ (9)

Note that (9) is compatible with (3), and allows us to explicitly compute $\epsilon_{i}$ as a function of $\beta_{i}$.
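For concreteness, the per-agent quantities appearing in these bounds can be evaluated in a few lines: the binomial expression for $\beta_{i}$ in Theorem 3 and the explicit a posteriori level $\epsilon_{i}(s)$ in (9). The numerical values of $K_{i}$, $\rho_{i}$ and $\beta_{i}$ below are merely illustrative; this is a sketch, not part of the theoretical development.

```python
from math import comb
from scipy.stats import binom

def beta_apriori(eps_i, K_i, rho_i):
    """Confidence term of Theorem 3: sum_{j=0}^{rho_i-1} C(K_i,j) eps_i^j (1-eps_i)^(K_i-j)."""
    return binom.cdf(rho_i - 1, K_i, eps_i)

def eps_aposteriori(s, K_i, beta_i, N):
    """Explicit a posteriori violation level (9) for a compression of cardinality s."""
    if s == K_i:
        return 1.0
    return 1.0 - (beta_i / ((N + 1) * comb(K_i, s))) ** (1.0 / (K_i - s))

# e.g., N = 4 agents, K_i = 200 private samples, support rank rho_i = 4
print(beta_apriori(0.05, 200, 4))        # a priori confidence term for eps_i = 0.05
print(eps_aposteriori(2, 200, 1e-3, 4))  # a posteriori level for s_{i,K} = 2
```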

Theorem 4

Fix $\beta\in(0,1)$ and choose $\beta_{i}>0$ such that $\sum_{i\in\pazocal{N}}\beta_{i}=\beta$. Set $\epsilon_{i}$ as in (9). Then, for the unique allocation $\bm{x}^{*}\in C(G_{K})$ it holds that

$\mathbb{P}^{K}\bigg\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\bm{x}^{\ast}\notin C(G_{K})\}\leq\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i,K})\bigg\}\geq 1-\beta.$ (10)

Recall that, as in Theorem 1, $s_{i,K}$ denotes the cardinality of the (sub)compression set, here associated with the rationality constraints relative to all $S\supseteq\{i\}$, quantified a posteriori, e.g., by means of the procedure illustrated in Campi et al. (2018, §II).

Proof: For each agent $i\in\pazocal{N}$, conditioning on all other agents' multi-samples, Theorem 1 in Campi et al. (2018) allows us to state that

$\mathbb{P}^{K}\big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists S\supseteq\{i\}:\textstyle\sum_{i\in S}x_{i}^{\ast}<u_{S}(\xi)\}\leq\epsilon_{i}(s_{i,K})\ \big|\ (\bm{\xi}_{j})_{j\neq i}\in\Xi^{K-K_{i}}\big\}\geq 1-\beta_{i},$

which, by integrating w.r.t. the probability of realization of all other agents’ scenarios, becomes

$\mathbb{P}^{K}\big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists S\supseteq\{i\}:\textstyle\sum_{i\in S}x_{i}^{\ast}<u_{S}(\xi)\}\leq\epsilon_{i}(s_{i,K})\big\}\geq 1-\beta_{i}.$ (11)

The proof is concluded by applying relation (11) within the chain of inequalities in the proof of Theorem 3. \blacksquare

Finally, an a priori bound can be derived from Theorem 4 by considering the worst case among all a posteriori observable compressions $\{s_{i,K}\}_{i\in\pazocal{N}}$; this bound is nontrivial since $\sum_{i\in\pazocal{N}}s_{i,K}\leq N$, which follows from Calafiore and Campi (2006, Th. 3).

Corollary 1.

Let $\epsilon^{\ast}=\max\{\sum_{i\in\pazocal{N}}\epsilon_{i}(s_{i})\colon\sum_{i\in\pazocal{N}}s_{i}\leq N,\,s_{i}\in\mathbb{N}\}$, with $\epsilon_{i}$ defined as in (9). Then

$\mathbb{P}^{K}\big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\bm{x}^{\ast}\notin C(G_{K})\}\leq\epsilon^{\ast}\big\}\geq 1-\beta.$ (12)
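Since $\epsilon_{i}(\cdot)$ in (9) is nondecreasing in its argument, the worst case $\epsilon^{\ast}$ of Corollary 1 can be computed by enumerating the admissible compression cardinalities. A brute-force sketch follows (practical only for small $N$, and with purely illustrative numbers); the analogous quantity in Theorem 2 would instead use the budget $M=2^{N}-1$ and $\epsilon_{i}$ from (3).

```python
from itertools import product
from math import comb

def eps_aposteriori(s, K_i, beta_i, N):
    """Explicit violation level (9) for a compression of cardinality s (as above)."""
    return 1.0 if s == K_i else \
        1.0 - (beta_i / ((N + 1) * comb(K_i, s))) ** (1.0 / (K_i - s))

def eps_star(K, beta, N, budget):
    """Worst-case bound eps* = max { sum_i eps_i(s_i) : sum_i s_i <= budget }.
    Corollary 1 uses budget = N; brute-force enumeration, for small N only."""
    best = 0.0
    for s in product(range(budget + 1), repeat=N):
        if sum(s) <= budget:
            best = max(best, sum(eps_aposteriori(s[i], K[i], beta[i], N) for i in range(N)))
    return best

# e.g., N = 3 agents, 100 private samples each, beta = 1e-3 split evenly
N, K, beta = 3, [100, 100, 100], [1e-3 / 3] * 3
print(eps_star(K, beta, N, budget=N))
```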

4.1 The case of empty core

Lifting Assumption 1-(ii) on the non-emptiness of the scenario core, we define a relaxed version of the latter, the so-called $\zeta$-core, for the case of private uncertainty sampling as follows:

Definition 4.1.

For any $(\bm{\xi}_{i})_{i\in\pazocal{N}}\in\Xi^{K}$ and fixed $\bar{\zeta}_{i}\geq 0$ for all $i\in\pazocal{N}$, the scenario $\zeta$-core is given by

$C_{\zeta}(G_{K})=\Big\{\bm{x}\in\mathbb{R}^{N}:\sum_{i\in\pazocal{N}}x_{i}=u_{\pazocal{N}}\text{ and }\sum_{i\in S}x_{i}\geq\max_{i\in S}\big(\max_{k=1,\dots,K_{i}}u_{S}(\xi_{i}^{(k)})-\bar{\zeta}_{i}\big),\ \forall S\subset\pazocal{N}\Big\}.$ (13)

It is worth pointing out that the $\zeta$-core allows us to extend the analysis to allocations “closest” to the core, and the interest in it is not necessarily restricted to cases where the standard core is empty (the standard scenario core is recovered by setting $\bar{\zeta}_{i}=0$ for all $i\in\pazocal{N}$). Also, Definition 4.1 contemplates a different relaxation parameter $\bar{\zeta}_{i}$ for every agent, allowing the definition to be specialised according to individual preferences, or possibly different information on, e.g., each agent's sampling.
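Operationally, the only change with respect to the scenario core is that each member's sample-wise maximum is relaxed by its own $\bar{\zeta}_{i}$ before taking the maximum over the coalition. A short sketch, reusing the hypothetical interface of the earlier snippets:

```python
from itertools import combinations

def zeta_core_thresholds(N, u_S, private_samples, zeta_bar):
    """Right-hand sides of the coalitional constraints in (13); zeta_bar[i] = zeta_bar_i."""
    thresholds = {}
    for size in range(1, N):
        for S in combinations(range(N), size):
            # each member's sample-wise maximum is relaxed by its own zeta_bar_i
            thresholds[S] = max(max(u_S(S, xi) for xi in private_samples[i]) - zeta_bar[i]
                                for i in S)
    return thresholds
```

Setting zeta_bar[i] = 0 for every agent recovers the thresholds of Definition 2.1.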

On these grounds, here we address the problem of i) providing an allocation with (relaxed) coalitional stability certificates and ii) measuring how agents’ multi-samples contribute to lack of coalitional stability. These questions can be answered at once by solving

$\min_{\bm{x},\,\{\zeta_{i}^{(k)}\geq 0\}_{i\in\pazocal{N},\,k=1,\dots,K_{i}}}\ \sum_{i\in\pazocal{N}}\sum_{k=1}^{K_{i}}\zeta_{i}^{(k)},$ (14a)
$\text{s.t. }\sum_{i\in\pazocal{N}}x_{i}=u_{\pazocal{N}},$ (14b)
$\text{and }\sum_{i\in S}x_{i}\geq u_{S}(\xi_{i}^{(k)})-\zeta_{i}^{(k)},\ \forall k=1,\dots,K_{i},\ \forall S\supseteq\{i\},\ \forall i\in\pazocal{N}.$ (14c)

In what follows, we consider without loss of generality that, for any multi-sample $(\bm{\xi}_{i})_{i\in\pazocal{N}}\in\Xi^{K}$, (14) returns a unique pair $(\bm{x}^{\ast},\bm{\zeta}^{\ast})\in\mathbb{R}^{N}\times\mathbb{R}^{K}$ (this can be ensured by using a convex tie-break rule).
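A sketch of how (14) can be solved as a single linear program is given below. The value-function interface and the sample containers are the same illustrative assumptions used in the earlier snippets, and no dedicated tie-break rule is included.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def relaxed_core_allocation(N, u_grand, u_S, private_samples):
    """Sketch of program (14): minimize the total relaxation sum_{i,k} zeta_i^(k)."""
    K = {i: len(private_samples[i]) for i in range(N)}
    offset = {i: N + sum(K[j] for j in range(i)) for i in range(N)}  # index of zeta_i^(1)
    n_var = N + sum(K.values())

    c = np.concatenate([np.zeros(N), np.ones(n_var - N)])            # objective (14a)
    A_eq = [np.concatenate([np.ones(N), np.zeros(n_var - N)])]       # efficiency (14b)
    b_eq = [u_grand]

    A_ub, b_ub = [], []
    for i in range(N):
        coalitions_i = [S for size in range(1, N)
                        for S in combinations(range(N), size) if i in S]
        for k, xi in enumerate(private_samples[i]):
            for S in coalitions_i:                                   # rationality (14c)
                row = np.zeros(n_var)
                row[list(S)] = -1.0
                row[offset[i] + k] = -1.0   # -sum_{j in S} x_j - zeta_i^(k) <= -u_S(xi)
                A_ub.append(row)
                b_ub.append(-u_S(S, xi))

    bounds = [(None, None)] * N + [(0, None)] * (n_var - N)          # x free, zeta >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=b_eq, bounds=bounds)
    x_star = res.x[:N]
    zeta_star = {i: res.x[offset[i]:offset[i] + K[i]] for i in range(N)}
    s_star = {i: int(np.sum(zeta_star[i] > 1e-9)) for i in range(N)}  # |{k: zeta_i^(k) > 0}|
    return x_star, zeta_star, s_star
```

The returned cardinalities are exactly the quantities $s_{i}^{*}$ entering the a posteriori levels of Theorem 5 below.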

Assumption 2

For every allocation $\bm{x}\in\mathbb{R}^{N}$, $\mathbb{P}\{\xi\in\Xi:\sum_{i\in S}x_{i}=u_{S}(\xi)\}=0$ for any $S\subset\pazocal{N}$.

Assumption 2 is related to non-degeneracy and is often satisfied when $\xi$ does not accumulate, as is the case for continuous probability distributions. Now, for $i\in\pazocal{N}$, consider the polynomial equation (Campi and Garatti, 2021, Th. 1)

$\binom{K_{i}}{s}t^{K_{i}-s}-\frac{\beta_{i}}{2K_{i}}\sum_{j=s}^{K_{i}-1}\binom{j}{s}t^{j-s}-\frac{\beta_{i}}{6K_{i}}\sum_{j=K_{i}+1}^{4K_{i}}\binom{j}{s}t^{j-s}=0.$ (15)

For $s\in\{0,\dots,K_{i}-1\}$, (15) has two solutions in $[0,+\infty)$. We denote the smallest as $\underline{t}_{i}(s)$, and further let $\underline{t}_{i}(s)=0$ for $s=K_{i}$. We can now propose the following a posteriori statement on $\bm{x}^{*}$:

Theorem 5

Fix $\beta\in(0,1)$ and choose $\beta_{i}>0$ such that $\sum_{i\in\pazocal{N}}\beta_{i}=\beta$. Let $(\bm{x}^{*},\bm{\zeta}^{*})$ be the solution of (14), and denote by $s_{i}^{*}$ the cardinality of $\{k\colon\zeta_{i}^{*(k)}>0\}$. Under Assumption 2,

$\mathbb{P}^{K}\Big\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:\exists(i,S\supseteq\{i\}):\textstyle\sum_{i\in S}x^{\ast}_{i}<u_{S}(\xi)\}\leq\sum_{i\in\pazocal{N}}\bar{\epsilon}_{i}(s_{i}^{*})\Big\}\geq 1-\beta,$ (16)

where $\bar{\epsilon}_{i}(s_{i}^{*})=1-\underline{t}_{i}(s_{i}^{*})$ for all $i\in\pazocal{N}$.

Proof: For each $i\in\pazocal{N}$, let $f_{i}(\bm{x},\xi)\coloneqq\max_{S\supseteq\{i\}}\big(u_{S}(\xi)-\sum_{j\in S}x_{j}\big)$, i.e., the pointwise maximum, over the coalitions containing agent $i$, of the affine functions $b_{S}(\xi)-a_{S}^{\top}\bm{x}$, where $a_{S}$ is the indicator vector of $S$ and $b_{S}(\xi)=u_{S}(\xi)$. Then (14c) can be rewritten as

$f_{i}(\bm{x},\xi_{i}^{(k)})\leq\zeta_{i}^{(k)},\ \forall k=1,\dots,K_{i},\ \forall i\in\pazocal{N}.$

We note that in our setting Campi and Garatti (2021, Assum. 1) is satisfied; under Assumption 2, it is then possible to apply Campi and Garatti (2021, Th. 1), which yields

$\mathbb{P}^{K}\left\{\bm{\xi}\in\Xi^{K}:\mathbb{P}\{\xi\in\Xi:f_{i}(\bm{x}^{*},\xi)>0\}>\bar{\epsilon}_{i}(s_{i}^{*})\right\}\leq\beta_{i},$ (17)

where $\bar{\epsilon}_{i}(\cdot)$ is derived from (15) as described above. By applying arguments similar to those used in proving Theorems 3 and 4, we obtain (16). \blacksquare

The statement in Theorem 5 can be interpreted as follows: as a result of (14), $\bm{x}^{*}$ is an allocation in the $\zeta$-core defined by $\{\bar{\zeta}_{i}\}_{i\in\pazocal{N}}$, with $\bar{\zeta}_{i}=\max_{k=1,\ldots,K_{i}}\zeta_{i}^{*(k)}$. With confidence $1-\beta$, $\bm{x}^{*}$ satisfies the (unrelaxed) coalitional rationality constraints with probability at least $1-\sum_{i\in\pazocal{N}}\bar{\epsilon}_{i}(s_{i}^{*})$ with respect to a new realization of the uncertain parameter $\xi\in\Xi$.
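The a posteriori levels of Theorem 5 only require solving the scalar equation (15) for each agent. A sketch via bisection on $[0,1]$ is reported below, with illustrative values of $K_{i}$, $\beta_{i}$ and $s_{i}^{*}$; the sign change assumed by brentq holds for the usual small values of $\beta_{i}$.

```python
from math import comb
from scipy.optimize import brentq

def eps_bar(s, K_i, beta_i):
    """Smallest root t_i(s) of (15) on [0, 1] and the level eps_bar_i(s) = 1 - t_i(s)."""
    if s == K_i:
        return 1.0                       # convention: t_i(K_i) = 0
    def poly(t):
        val = comb(K_i, s) * t ** (K_i - s)
        val -= beta_i / (2 * K_i) * sum(comb(j, s) * t ** (j - s) for j in range(s, K_i))
        val -= beta_i / (6 * K_i) * sum(comb(j, s) * t ** (j - s)
                                        for j in range(K_i + 1, 4 * K_i + 1))
        return val
    return 1.0 - brentq(poly, 0.0, 1.0)  # poly(0) < 0 < poly(1) for small beta_i

# e.g., K_i = 50 private samples, beta_i = 1e-3, s_i* = 4 positive relaxations
print(eps_bar(4, 50, 1e-3))
```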

5 Conclusion

In this work we considered uncertain coalitional games and proposed a data-driven methodology to study the stability of allocations for the general setting where uncertainty is privately sampled by the agents. Future work will investigate stability of allocations through a distributionally robust framework, as well as analyse the case where the value of the grand coalition can also be affected by uncertainty.

Acknowledgments

F. Fele gratefully acknowledges support from grant RYC2021-033960-I funded by MCIN/AEI/ 10.13039/501100011033 and European Union NextGenerationEU/PRTR, as well as from grant PID2022-142946NA-I00 funded by MCIN/AEI/ 10.13039/501100011033 and by ERDF A way of making Europe.

References

  • Balcan et al. (2015) Maria-Florina Balcan, Ariel D. Procaccia, and Yair Zick. Learning cooperative games. In IJCAI15: Proceedings of the 24th International Conference on Artificial Intelligence, 2015.
  • Calafiore and Campi (2006) Giuseppe C. Calafiore and Marco C. Campi. The scenario approach to robust control design. IEEE Transactions on Automatic Control, 51(5):742–753, 2006. 10.1109/TAC.2006.875041.
  • Campi and Garatti (2018a) Marco C. Campi and Simone Garatti. Introduction to the scenario approach. Society for Industrial and Applied Mathematics, 2018a.
  • Campi and Garatti (2018b) Marco C. Campi and Simone Garatti. Wait-and-judge scenario optimization. Mathematical Programming, 167:155–189, 2018b.
  • Campi and Garatti (2021) Marco C. Campi and Simone Garatti. A theory of the risk for optimization with relaxation and its application to support vector machines. Journal of Machine Learning Research, 22(288):1–38, 2021.
  • Campi et al. (2018) Marco C. Campi, Simone Garatti, and Federico Alessandro Ramponi. A general scenario theory for nonconvex optimization and decision making. IEEE Transactions on Automatic Control, 63(12):4067–4078, 2018. 10.1109/TAC.2018.2808446.
  • Chalkiadakis and Boutilier (2004) Georgios Chalkiadakis and Craig Boutilier. Bayesian reinforcement learning for coalition formation under uncertainty. In AAMAS 2004: Proceedings of the 3rd International Joint Conference on Autonomous Agents and Multiagent Systems., pages 1090–1097, 2004.
  • Chalkiadakis and Boutilier (2008) Georgios Chalkiadakis and Craig Boutilier. Sequential decision making in repeated coalition formation under uncertainty. In AAMAS08: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, 2008.
  • Chalkiadakis et al. (2011) Georgios Chalkiadakis, Edith Elkind, and Michael Wooldridge. Computational aspects of cooperative game theory. Synthesis Lectures on Artificial Intelligence and Machine Learning, 5(6):1–168, 2011.
  • Charnes and Granot (1973) Abraham Charnes and Daniel Granot. Prior solutions: Extensions of convex nucleus solutions to chance-constrained games. In Proceedings of the Computer Science and Statistics Seventh Symposium at Iowa State University, pages 323–332, 1973.
  • Charnes and Granot (1976) Abraham Charnes and Daniel Granot. Coalitional and chance-constrained solutions to n-person games I: The prior satisficing nucleolus. SIAM Journal on Applied Mathematics, 31(2):358–367, 1976.
  • Charnes and Granot (1977) Abraham Charnes and Daniel Granot. Coalitional and chance-constrained solutions to n-person games II: Two-stage solutions. Operations Research, 25(6):1013–1019, 1977.
  • Fabiani et al. (2022) Filippo Fabiani, Kostas Margellos, and Paul J. Goulart. Probabilistic feasibility guarantees for solution sets to uncertain variational inequalities. Automatica, 137:110120, 2022. ISSN 0005-1098.
  • Falsone et al. (2020) Alessandro Falsone, Kostas Margellos, Maria Prandini, and Simone Garatti. A scenario-based approach to multi-agent optimization with distributed information. IFAC-PapersOnLine, 53(2):20–25, 2020. ISSN 2405-8963. 10.1016/j.ifacol.2020.12.034. 21st IFAC World Congress.
  • Fele et al. (2017) Filiberto Fele, José M. Maestre, and Eduardo F. Camacho. Coalitional control: Cooperative game theory and control. IEEE Control Systems Magazine, 37(1):53–69, 2017.
  • Fele et al. (2018) Filiberto Fele, Ezequiel Debada, José M. Maestre, and Eduardo F. Camacho. Coalitional control for self-organizing agents. IEEE Transactions on Automatic Control, 63(9):2883–2897, 2018. 10.1109/TAC.2018.2792301.
  • Garatti and Campi (2022) Simone Garatti and Marco C. Campi. Risk and complexity in scenario optimization. Mathematical Programming, 191:243–279, 2022.
  • Guéneron and Bonnet (2021) Josselin Guéneron and Grégory Bonnet. Are exploration-based strategies of interest for repeated stochastic coalitional games? Advances in Practical Applications of Agents, Multi-Agent Systems, and Social Good. The PAAMS Collection, pages 89–100, 2021.
  • Han et al. (2019) Liyang Han, Thomas Morstyn, and Malcolm McCulloch. Incentivizing prosumer coalitions with energy management using cooperative game theory. IEEE Transactions on Power Systems, 34(1):303–313, 2019.
  • Ieong and Shoham (2008) Samuel Ieong and Yoav Shoham. Bayesian coalitional games. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence, pages 95–100, 2008.
  • Karaca and Kamgarpour (2020) Orcun Karaca and Maryam Kamgarpour. Core-selecting mechanisms in electricity markets. IEEE Transactions on Smart Grid, 11(3):2604–2614, 2020. 10.1109/TSG.2019.2958710.
  • Li and Conitzer (2015) Yuqian Li and Vincent Conitzer. Cooperative game solution concepts that maximize stability under noise. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, page 979–985, 2015.
  • Margellos et al. (2015) Kostas Margellos, Maria Prandini, and John Lygeros. On the connection between compression learning and scenario-based single-stage and cascading optimization problems. IEEE Transactions on Automatic Control, 60(10):2716–2721, 2015.
  • Margellos et al. (2018) Kostas Margellos, Alessandro Falsone, Simone Garatti, and Maria Prandini. Distributed constrained optimization and consensus in uncertain networks via proximal minimization. IEEE Transactions on Automatic Control, 63(5):1372–1387, 2018. 10.1109/TAC.2017.2747505.
  • McCain (2008) Roger A. McCain. Cooperative games and cooperative organizations. The Journal of Socio-Economics, 37(6):2155–2167, 2008. ISSN 1053-5357.
  • Nedić and Bauso (2013) Angelia Nedić and Dario Bauso. Dynamic coalitional tu games: Distributed bargaining among players’ neighbors. IEEE Transactions on Automatic Control, 58(6):1363–1376, 2013. 10.1109/TAC.2012.2236716.
  • Pantazis et al. (2022a) George Pantazis, Filippo Fabiani, Filiberto Fele, and Kostas Margellos. Probabilistically robust stabilizing allocations in uncertain coalitional games. IEEE Control Systems Letters, 6:3128–3133, 2022a. 10.1109/LCSYS.2022.3182152.
  • Pantazis et al. (2022b) George Pantazis, Filiberto Fele, and Kostas Margellos. On the probabilistic feasibility of solutions in multi-agent optimization problems under uncertainty. European Journal of Control, 63:186–195, 2022b. ISSN 0947-3580.
  • Pantazis et al. (2023) George Pantazis, Barbara Franci, Sergio Grammatico, and Kostas Margellos. Distributionally robust stability of payoff allocations in stochastic coalitional games. Accepted for publication at the IEEE Conference on Decision and Control 2023, 2023.
  • Procaccia and Rosenschein (2006) Ariel D. Procaccia and Jeffrey S. Rosenschein. Learning to identify winning coalitions in the PAC model. In AAMAS06 - 5th International Joint Conference on Autonomous Agents and Multi-agent Systems, page 673–675. Association for Computing Machinery, NY, US, 2006. 10.1145/1160633.1160751.
  • Raja and Grammatico (2021) Aitazaz Ali Raja and Sergio Grammatico. Payoff distribution in robust coalitional games on time-varying networks. IEEE Transactions on Control of Network Systems, 2021.
  • Schildbach et al. (2012) Georg Schildbach, Lorenzo Fagiano, and Manfred Morari. Randomized solutions to convex programs with multiple chance constraints. SIAM J. Optim., 23:2479–2501, 2012.
  • Suijs et al. (1999) Jeroen Suijs, Peter Borm, Anja De Waegenaere, and Stef Tijs. Cooperative games with stochastic payoffs. European Journal of Operational Research, 113(1):193–205, 1999. ISSN 0377-2217. 10.1016/S0377-2217(97)00421-9.