

Barter Exchange with Shared Item Valuations

Juan Luque, Sharmila Duppala, John Dickerson, and Aravind Srinivasan (Department of Computer Science, University of Maryland, College Park, MD, USA)
Abstract.

In barter exchanges, agents enter the market seeking to swap their items for other items on their wishlists. We consider a centralized barter exchange with a set of agents and items, where each item has a positive value. The goal is to compute a (re)allocation of items maximizing the agents' collective utility subject to each agent's total received value being comparable to their total given value. Many such centralized barter exchanges exist and serve crucial roles; e.g., kidney exchange programs, which are often formulated as variants of directed cycle packing. We show that finding a reallocation in which each agent's total given and total received values are equal is NP-hard. On the other hand, we develop a randomized algorithm that achieves optimal utility in expectation and where, (i) for any agent, with probability 1 their received value is at least their given value minus v^{*}, where v^{*} is the value of said agent's most valuable owned or wished-for item, and (ii) each agent's given and received values are equal in expectation.

Barter-Exchanges, Centralized exchanges, Community Markets
ccs: Theory of computation → Rounding techniques; Applied computing → Electronic commerce

1. Introduction

Social media platforms have recently evolved into small-scale business websites. For example, platforms like Facebook and Instagram allow their users to buy and sell goods via verified business accounts. With the proliferation of such community marketplaces, there are growing communities for buying, selling, and exchanging (swapping) goods amongst their users. We consider applications, viewed as barter exchanges, which allow users to exchange board games, digital goods, or any physical items amongst themselves. For instance, the subreddit GameSwap (www.reddit.com/r/Gameswap, 61,000 members) and the Facebook group BoardgameExchange (www.facebook.com/groups/boardgameexchange, 51,000 members) are communities where users enter with a list of owned video games and board games. The existence of these communities is a testament to the fact that although users could simply liquidate their goods and subsequently purchase the desired goods, it is often preferable to directly swap for desired items. Additionally, some online video games have fully fledged economies allowing the trade of in-game items between players while selling items for real-world money is explicitly forbidden, e.g., Runescape (www.jagex.com/en-GB/terms/rules-of-runescape). In these applications, a centralized exchange would achieve greater utility, in collective exchanged value and convenience, as well as overcome legality obstacles.

A centralized barter exchange market provides a platform where agents can exchange items directly, without money or payments. Beyond the aforementioned applications, there exist a myriad of other markets facilitating the exchange of a wide variety of items, including books, children's items, cryptocurrency, and human organs such as kidneys. Both centralized and decentralized exchange markets exist for various items. HomeExchange (www.homeexchange.com) and ReadItSwapIt (www.readitswapit.co.uk) are decentralized marketplaces that facilitate pairwise exchanges, by mutual agreement, of vacation homes and books, respectively. Atomic cross-chain swaps allow users to exchange currencies within or across various cryptocurrencies (e.g., Herlihy, 2018; Thyagarajan et al., 2022). Kidney exchange markets (see, e.g., Aziz et al., 2021; Abraham et al., 2007) and children's items markets (e.g., Swap, www.swap.com) are examples of centralized exchanges, facilitating swaps amongst incompatible patient-donor pairs and of children's items and services amongst parents, respectively. Finding optimal allocations is often NP-hard; as a result, heuristic solutions have been explored extensively (Glorie et al., 2014; Plaut et al., 2016).

Currently, the aforementioned communities GameSwap and BoardgameExchange make swaps in a decentralized manner between pairs of agents, but finding such pairwise swaps is often inefficient and ineffective because it demands a "double coincidence of wants" (Jevons, 1879). Centralized multi-agent exchanges can overcome this challenge by allowing each user to give items to, and receive items from, possibly different users. Each user's goal is to swap a subset of their owned games for a subset of their desired games of comparable (or greater) value. Although an item's value is subjective, a natural proxy is its resale price, which is easily obtained from marketplaces such as eBay.

We consider a centralized exchange problem where each agent has a have-list and a wishlist of distinct (indivisible) items (e.g., physical games) and, more generally, each item has a value agreed upon by the participating agents (e.g., members of the GameSwap community). The goal is to find an allocation/exchange that (i) maximizes the collective utility of the allocation such that (ii) the total value of each agent's items before and after the exchange is equal; equivalently, the total value of the items given equals the total value of the items received. We call this problem barter with shared valuations, \mathsf{BarterSV}, and it is our subject of study. Notice that bipartite perfect matching is a special case of \mathsf{BarterSV} in which each agent has a single item in both its have-list and its wishlist and all items have the same value. On the other hand, we show \mathsf{BarterSV} is NP-hard (Theorem 2).

In the following sections we formulate \mathsf{BarterSV} as a bipartite graph-matching problem with additional barter constraints. Our algorithm BarterDR rounds the fractional allocation of the natural LP relaxation into a feasible integral allocation. A direct application of existing rounding algorithms (such as Gandhi et al., 2006) to \mathsf{BarterSV} admits a worst case where some agents give away all their items and receive none in exchange, which is wholly unacceptable for any deployed centralized exchange. In contrast, our main result ensures BarterDR allocations have reasonable net value for all agents; more precisely, each agent gives and receives the same value in expectation, and the absolute difference between given and received values is at most the value of their most valuable item (Theorem 1).

1.1. Problem formulation: 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV}

Suppose we are given a ground set of items \mathcal{I} to be swapped, item values \{v_{j}\in\mathbb{R}^{+}:j\in\mathcal{I}\}, where \mathbb{R}^{+} denotes the positive real numbers, and a community of agents \mathcal{N}=[n], where [n] denotes \{1,2,\dots,n\}. Each agent i possesses the items H_{i}\subseteq\mathcal{I} and has wishlist W_{i}\subseteq\mathcal{I}. A valid allocation has agents swap their items with other agents that desire them. Let w(i,i^{\prime},j) denote the utility gained when agent i gives an item j\in H_{i}\cap W_{i^{\prime}} to an agent i^{\prime}. The goal of \mathsf{BarterSV} is to find a valid allocation of maximum utility subject to no agent giving away more value than they receive. Formally, any allocation is represented by a function g:\mathcal{N}\times\mathcal{N}\times\mathcal{I}\rightarrow\{0,1\} such that g(i,i^{\prime},j)=1 if agent i gives agent i^{\prime} item j, and g(i,i^{\prime},j)=0 otherwise. Validity requires that each agent i\in\mathcal{N} (i) receives only desired items, each at most once, meaning \sum_{i^{\prime}\in\mathcal{N}}g(i^{\prime},i,j)\leq 1 if j\in W_{i} and \sum_{i^{\prime}\in\mathcal{N}}g(i^{\prime},i,j)=0 otherwise; and (ii) gives away only items they possess, thus \sum_{i^{\prime}\in\mathcal{N}}g(i,i^{\prime},j)\leq 1 if j\in H_{i} and \sum_{i^{\prime}\in\mathcal{N}}g(i,i^{\prime},j)=0 otherwise. We begin by defining a special variant of a graph matching problem called Value-balanced Matching (VBM).
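To make the formulation concrete, the following minimal sketch (our own illustrative code, not the paper's; all names are hypothetical) represents an allocation g as a set of (giver, receiver, item) triples and checks both validity and each agent's value balance:

```python
# Illustrative sketch of the BarterSV formulation.
# An allocation is a set of triples (i, i2, j): agent i gives item j to agent i2.

def is_valid(alloc, have, wish):
    """Check conditions (i)-(ii): each item is given/received at most once,
    comes from the giver's have-list, and lands on the receiver's wishlist."""
    given, received = set(), set()
    for (i, i2, j) in alloc:
        if j not in have[i] or j not in wish[i2]:
            return False
        if (i, j) in given or (i2, j) in received:
            return False
        given.add((i, j))
        received.add((i2, j))
    return True

def net_value_loss(alloc, value, agent):
    """Total value given minus total value received by `agent`."""
    out_v = sum(value[j] for (i, _, j) in alloc if i == agent)
    in_v = sum(value[j] for (_, i2, j) in alloc if i2 == agent)
    return out_v - in_v

# The instance from Figure 1, with three agents.
have = {1: {'a', 'b'}, 2: {'c'}, 3: {'d'}}
wish = {1: {'c', 'd'}, 2: {'a', 'd'}, 3: {'b', 'c'}}
value = {'a': 100, 'b': 1, 'c': 1, 'd': 1}

# A 3-way swap: 1 gives b to 3, 2 gives c to 1, 3 gives d to 2.
alloc = {(1, 3, 'b'), (2, 1, 'c'), (3, 2, 'd')}
assert is_valid(alloc, have, wish)
assert all(net_value_loss(alloc, value, i) == 0 for i in (1, 2, 3))
```

Each agent gives and receives exactly value 1 here, so the barter constraint holds exactly; swapping item a (value 100) for any single other item would violate it.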

Definition 0.

Value-balanced Matching (VBM). Suppose we are given a bipartite graph G=(L,R,E) with vertex values v_{a}>0 for all a\in L\cup R, edge weights w_{e}\in\mathbb{R} for each e\in E, and partitions L=\dot{\bigcup}_{i}L_{i} and R=\dot{\bigcup}_{i}R_{i}, where \dot{\bigcup} denotes disjoint union. For a given matching M\subseteq E, let V_{i}^{(L)}=\sum_{\ell\in L_{i}:(\ell,r)\in M}v_{\ell} and V_{i}^{(R)}=\sum_{r\in R_{i}:(\ell,r)\in M}v_{r}. The goal of VBM is to find a maximum-weight matching M subject to, for each i, the values matched in L_{i} and R_{i} being equal, i.e., V_{i}^{(L)}=V_{i}^{(R)}.

Figure 1. A VBM instance for a \mathsf{BarterSV} instance with \mathcal{N}=\{1,2,3\}, \mathcal{I}=\{a,b,c,d\}, H_{1}=\{a,b\}, W_{1}=\{c,d\}, H_{2}=\{c\}, W_{2}=\{a,d\}, H_{3}=\{d\}, W_{3}=\{b,c\}, v_{a}=100, v_{b}=v_{c}=v_{d}=1, and w_{e}=1 for all e\in E.
Lemma 1.0.

For any instance \mathcal{B} of \mathsf{BarterSV}, there exists a corresponding instance \mathcal{G} of VBM such that the utility of an optimal allocation in \mathcal{B} equals the weight of an optimal matching in \mathcal{G}, and vice-versa.

Proof.

Given a \mathsf{BarterSV} instance \mathcal{B}=(\mathcal{N},\mathcal{I},\{v_{j}\}_{j\in\mathcal{I}},(H_{i},W_{i})_{i\in[n]},w), we reduce it to a corresponding VBM instance \mathcal{G} by constructing an appropriate bipartite graph with vertex values. For each agent i\in\mathcal{N}, build the vertex sets L_{i}:=\{\ell_{ij}:j\in H_{i}\} and R_{i}:=\{r_{ij}:j\in W_{i}\}. The bipartite graph of interest has vertex sets L=\dot{\bigcup}_{i\in[n]}L_{i} and R=\dot{\bigcup}_{i\in[n]}R_{i}, as well as edge set E:=\{(\ell_{ij},r_{i^{\prime}j}):i,i^{\prime}\in\mathcal{N},\ j\in H_{i}\cap W_{i^{\prime}}\}. Each edge e=(\ell_{ij},r_{i^{\prime}j}) has weight w_{e}:=w(i,i^{\prime},j), and both endpoints inherit the item's value, i.e., v_{\ell_{ij}}=v_{r_{i^{\prime}j}}:=v_{j}. Notice that the two endpoints of any edge correspond to the same item, hence they have identical values. Refer to Figure 1 for an example of the construction.

Suppose we are given a feasible allocation g for an instance \mathcal{B} of \mathsf{BarterSV} with utility D; we construct a feasible matching M for the instance \mathcal{G} of weight D as follows. For any i,i^{\prime}\in\mathcal{N} and j\in H_{i}\cap W_{i^{\prime}}, if g(i,i^{\prime},j)=1, then add the edge e=(\ell_{ij},r_{i^{\prime}j}) to M. The total weight of the matching is \sum_{e\in M}w_{e}=\sum_{(\ell_{ij},r_{i^{\prime}j})\in M}w(i,i^{\prime},j)=D. It remains to show that M is indeed a value-balanced matching. Since g is a feasible allocation for \mathcal{B}, for any agent i\in\mathcal{N} the value of the items given away equals the value of the items received, i.e., \sum_{j\in H_{i}}v_{j}\sum_{i^{\prime}\in\mathcal{N}}g(i,i^{\prime},j)=\sum_{j\in W_{i}}v_{j}\sum_{i^{\prime}\in\mathcal{N}}g(i^{\prime},i,j). Since (\ell_{ij},r_{i^{\prime}j})\in M exactly when g(i,i^{\prime},j)=1, this yields V_{i}^{(L)}=V_{i}^{(R)}. Thus, M is a feasible VBM matching.

To prove the other direction, assume M is a feasible matching for \mathcal{G} of weight D. Given M, we construct a feasible allocation g by assigning g(i,i^{\prime},j)=1 if the edge (\ell_{ij},r_{i^{\prime}j})\in M, and g(i,i^{\prime},j)=0 otherwise. Notice that g is a valid allocation in which, for any agent i\in\mathcal{N}, the value of items received equals the value of the items given away: since M is a feasible VBM matching, for any i\in[n] we have V_{i}^{(L)}=V_{i}^{(R)}, implying \sum_{i^{\prime}\in\mathcal{N}}\sum_{j\in H_{i}\cap W_{i^{\prime}}}v_{j}\,g(i,i^{\prime},j)=\sum_{i^{\prime}\in\mathcal{N}}\sum_{j\in W_{i}\cap H_{i^{\prime}}}v_{j}\,g(i^{\prime},i,j). The utility of the allocation is \sum_{i,i^{\prime}\in\mathcal{N}}\sum_{j\in H_{i}\cap W_{i^{\prime}}}w(i,i^{\prime},j)\,g(i,i^{\prime},j)=\sum_{e\in M}w_{e}=D. This completes our proof. ∎
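The graph construction in the proof above can be sketched directly. This is our own illustrative code (hypothetical names, not the authors' implementation): one L-vertex per owned item, one R-vertex per wished item, one edge per (giver, receiver, item) possibility, with both endpoints valued at the item's value:

```python
# Sketch of the BarterSV -> VBM reduction described in the proof.
def build_vbm(have, wish, value):
    """Return (L, R, E, vertex_value) for the bipartite VBM instance."""
    L = {('L', i, j) for i in have for j in have[i]}        # vertices l_ij
    R = {('R', i, j) for i in wish for j in wish[i]}        # vertices r_ij
    # One edge (l_ij, r_{i'j}) whenever j is owned by i and wished by i'.
    E = {(('L', i, j), ('R', i2, j))
         for i in have for j in have[i]
         for i2 in wish if j in wish[i2]}
    # Both endpoints of an edge correspond to the same item j, value v_j.
    vertex_value = {v: value[v[2]] for v in L | R}
    return L, R, E, vertex_value

# Instance from Figure 1.
have = {1: {'a', 'b'}, 2: {'c'}, 3: {'d'}}
wish = {1: {'c', 'd'}, 2: {'a', 'd'}, 3: {'b', 'c'}}
value = {'a': 100, 'b': 1, 'c': 1, 'd': 1}
L, R, E, vv = build_vbm(have, wish, value)
assert len(L) == 4 and len(R) == 6 and len(E) == 6
```

For the Figure 1 instance this produces six edges (a: 1 to 2; b: 1 to 3; c: 2 to each of 1 and 3; d: 3 to each of 1 and 2), matching the construction in the proof.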

Corollary 0.

𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} is equivalent to VBM.

The proof of the corollary directly follows from Lemma 2.

1.2. LP formulation of VBM

Any feasible matching M in (L,R,E) corresponds to an integral vector \{x_{e}\}_{e\in E} with x_{e}=1 if e\in M and x_{e}=0 otherwise, which is feasible in the following Integer Program (IP):

(1a) \max\ \sum_{e\in E}w_{e}x_{e}
(1b) subj. to\ \ x(\ell_{ij})\leq 1,\qquad i\in[n],\ \ell_{ij}\in L_{i}
(1c) x(r_{ij})\leq 1,\qquad i\in[n],\ r_{ij}\in R_{i}
(1d) \sum_{a\in L_{i}}x(a)v_{a}=\sum_{b\in R_{i}}x(b)v_{b},\qquad i\in[n]
(1e) x_{e}\in\{0,1\},\qquad e=(\ell_{ij},r_{i^{\prime}j})\in E.

For a\in L\cup R, we denote x(a):=\sum_{e\in N(a)}x_{e}, where N(a):=\{(a,b)\in E:b\in L\cup R\} is the set of edges incident on a. An edge e=(\ell_{ij},r_{i^{\prime}j})\in M indicates that agent i gives item j to agent i^{\prime} in the corresponding \mathsf{BarterSV} instance.
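An integral candidate solution can be checked against IP (1) mechanically. The sketch below (our own illustrative code; data layout is hypothetical) computes the fractional degrees x(a) and tests the matching constraints (1b)-(1c) and the barter constraints (1d):

```python
from collections import defaultdict

def ip_feasible(x, L_of, R_of, vv):
    """x: dict edge -> {0,1}; L_of/R_of: agent -> list of vertices;
    vv: vertex -> value. Returns True iff x satisfies IP (1)'s constraints."""
    deg = defaultdict(int)                  # deg[a] = x(a) = sum of x_e at a
    for (a, b), xe in x.items():
        deg[a] += xe
        deg[b] += xe
    if any(d > 1 for d in deg.values()):    # matching constraints (1b), (1c)
        return False
    for i in L_of:                          # barter constraint (1d) per agent
        if sum(deg[a] * vv[a] for a in L_of[i]) != \
           sum(deg[b] * vv[b] for b in R_of[i]):
            return False
    return True

# Vertices of the Figure 1 instance, grouped by agent.
L_of = {1: [('L', 1, 'a'), ('L', 1, 'b')], 2: [('L', 2, 'c')], 3: [('L', 3, 'd')]}
R_of = {1: [('R', 1, 'c'), ('R', 1, 'd')], 2: [('R', 2, 'a'), ('R', 2, 'd')],
        3: [('R', 3, 'b'), ('R', 3, 'c')]}
vv = {v: (100 if v[2] == 'a' else 1)
      for vs in list(L_of.values()) + list(R_of.values()) for v in vs}

# Balanced 3-cycle: b to agent 3, c to agent 1, d to agent 2.
x_good = {(('L', 1, 'b'), ('R', 3, 'b')): 1,
          (('L', 2, 'c'), ('R', 1, 'c')): 1,
          (('L', 3, 'd'), ('R', 2, 'd')): 1}
# Agent 1 gives value 100 and receives nothing: violates (1d).
x_bad = {(('L', 1, 'a'), ('R', 2, 'a')): 1}
assert ip_feasible(x_good, L_of, R_of, vv)
assert not ip_feasible(x_bad, L_of, R_of, vv)
```

Relaxing the integrality check to x_e >= 0 turns the same constraint set into the feasibility test for \mathsf{BarterSV\text{-}LP}.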

Lemma 1.0.

IP (1) is equivalent to \mathsf{BarterSV}. Moreover, the optimal objective value of \mathsf{BarterSV\text{-}LP} is an upper bound on that of IP (1).

Proof.

IP (1) is clearly equivalent to VBM, and VBM is equivalent to \mathsf{BarterSV} per Corollary 3. The objective of IP (1), \sum_{e\in E}w_{e}x_{e}, is the allocation's utility. By relaxing (1e) to x_{e}\geq 0 for e\in E we arrive at the natural LP relaxation of \mathsf{BarterSV}, namely \mathsf{BarterSV\text{-}LP}, whose optimal objective value can only be greater than or equal to that of IP (1). ∎

We conclude this section with a simple note. For each e=(\ell_{ij},r_{i^{\prime}j})\in E we may set w_{e}=v_{j} and recover the objective of maximizing the collective value received by all agents. Nevertheless, our results hold even if w_{e} is set arbitrarily: for example, the algorithm designer could place greater weight on certain item transactions, or maximize the sheer number of items exchanged by uniformly setting w_{e}=1.

2. Preliminaries: GKPS dependent rounding

Our results build on the dependent-rounding algorithm due to (Gandhi et al., 2006), henceforth referred to as GKPS-DR. GKPS-DR takes \{x_{e}\}\in[0,1]^{|E|}, defined over the edge set E of a bipartite graph (L,R,E), and outputs \{X_{e}\}\in\{0,1\}^{|E|}. In each iteration GKPS-DR considers the graph of floating edges (those edges e with 0<x_{e}<1) and selects a maximal path or cycle P\subseteq E of floating edges. The edges of P are decomposed into alternating matchings M_{1} and M_{2} and rounded in the following way. Fix \alpha=\min\{\gamma>0:(\bigvee_{e\in M_{1}}x_{e}+\gamma=1)\vee(\bigvee_{e\in M_{2}}x_{e}-\gamma=0)\} and \beta=\min\{\gamma>0:(\bigvee_{e\in M_{2}}x_{e}+\gamma=1)\vee(\bigvee_{e\in M_{1}}x_{e}-\gamma=0)\}. Each x_{e} is then updated to x_{e}^{\prime} according to one of the following disjoint events: with probability \beta/(\alpha+\beta),

x_{e}^{\prime}=\begin{cases}x_{e}+\alpha,&e\in M_{1}\\ x_{e}-\alpha,&e\in M_{2}\end{cases};\qquad\text{else,}\qquad x_{e}^{\prime}=\begin{cases}x_{e}-\beta,&e\in M_{1}\\ x_{e}+\beta,&e\in M_{2}.\end{cases}

The selection of \alpha and \beta ensures at least one edge is rounded to 0 or 1 in every iteration. GKPS-DR guarantees the (P1) marginal, (P2) degree-preservation, and (P3) negative-correlation properties:

  1. (P1) \forall e\in E,\ \Pr(X_{e}=1)=x_{e}.

  2. (P2) \forall a\in L\cup R, with probability 1, X(a)\in\{\lfloor x(a)\rfloor,\lceil x(a)\rceil\}.

  3. (P3) \forall a\in L\cup R,\ \forall S\subseteq N(a),\ \forall c\in\{0,1\}:\ \Pr\left(\bigwedge_{s\in S}X_{s}=c\right)\leq\prod_{s\in S}\Pr\left(X_{s}=c\right).

Remark 1.

When GKPS-DR rounds a path between vertices a and b, the signs of the changes to x(a) and x(b) are equal if and only if a and b belong to different sides of the graph.

3. Related work

Centralized barter exchanges have been studied extensively in the context of kidney exchanges (Abraham et al., 2007; Ashlagi et al., 2018; Aziz et al., 2021). \mathsf{BarterSV} generalizes a well-studied kidney-exchange problem in the following way. The Kidney Exchange Problem (KEP) is often formulated as directed cycle packing in patient-donor compatibility graphs (Abraham et al., 2007), where each node corresponds to a patient-donor pair and directed edges between nodes indicate compatibility. Abraham et al. (2007); Biró et al. (2009) observed that this problem reduces to bipartite perfect matching, which is solvable in polynomial time. We show \mathsf{BarterSV} is NP-hard and thus resort to providing a randomized algorithm with approximate guarantees on the agents' net values via LP relaxation followed by dependent rounding.

There has been extensive work on dependent-rounding techniques that round a fractional solution in a correlated way so as to satisfy hard constraints while ensuring some negative dependence amongst the rounded variables, which in turn yields concentration inequalities. The hard constraints might arise from an underlying combinatorial object that must be produced, such as a packing (Brubach et al., 2017), spanning tree (Chekuri et al., 2010), or matching (Gandhi et al., 2006). In our case, the rounded variables must satisfy both the matching constraints (1b), (1c) and the barter constraints (1d) (i.e., each agent gives items of the same total value as it receives). Gandhi et al. (2006) developed a rounding scheme whose output satisfies the matching constraints along with other useful properties. We therefore adapt their scheme to satisfy the matching constraints, followed by a careful additional rounding step so that the rounded variables also satisfy the barter constraints.

Centralized barter exchanges are well studied under various barter settings. For instance, Abraham et al. (2007) showed that bounded-length edge-weighted directed cycle packing is NP-hard, which led to several heuristic methods for solving these hard problems, e.g., using techniques from operations research (Constantino et al., 2013; Glorie et al., 2014; Plaut et al., 2016; Carvalho et al., 2021) and AI/ML modeling (McElfresh et al., 2020; Noothigattu et al., 2020). Recently, several works have focused on fairness in barter exchange problems (Abbassi et al., 2013; Fang et al., 2015; Klimentova et al., 2021; Farnadi et al., 2021). Our work adds to the growing body of research in theory and heuristics surrounding ubiquitous barter exchange markets.

4. Outline of our contributions and the paper

First, we introduce the \mathsf{BarterSV} problem, a natural generalization of edge-weighted directed cycle packing, and show that it is NP-hard to solve exactly. Our main contribution is a randomized dependent-rounding algorithm BarterDR with provable guarantees on the quality of the allocation. The following definition helps present our results. Given an integral allocation \{X_{e}\}\in\{0,1\}^{|E|}, we define the net value loss of each agent i (i.e., the violation of the barter constraint (1d)):

(2) D_{i}:=\sum_{a\in L_{i}}v_{a}X(a)-\sum_{b\in R_{i}}v_{b}X(b).

Our main contribution is the rounding algorithm BarterDR, which satisfies both the matching constraints ((1b) and (1c)) and the barter constraints ((1d)), as desired in multi-agent exchanges. Recall that GKPS-DR (indeed a pre-processing step of BarterDR) rounds the fractional matching to an integral solution enjoying the properties mentioned in Section 2. The main challenge in our problem is satisfying the barter constraints: a direct application of GKPS-DR alone can result in a worst-case violation of \sum_{a\in L_{i}}v_{a} on D_{i}, corresponding to agent i losing all their items and gaining none (see the example in the Appendix). However, BarterDR rounds much more carefully to ensure, for each agent i, that D_{i} is less than v_{i}^{*}:=\max_{a\in L_{i}\cup R_{i}}v_{a}, the value of the most valuable item in H_{i}\cup W_{i}. The two theorems below give an achievable guarantee on D_{i} (i.e., on (2)) and a matching intractability result for \mathsf{BarterSV}. On a bipartite graph (L,R,E), BarterDR runs in worst-case time \mathcal{O}((|L|+|R|)(|L|+|R|+|E|)), where |L|,|R|=\mathcal{O}(|\mathcal{I}|n). We view Theorem 1 as our main result.

Theorem 1.

Given a \mathsf{BarterSV} instance, BarterDR is an efficient randomized algorithm achieving an allocation with optimal utility in expectation and where, for all agents i, D_{i}<v_{i}^{*} with probability 1 and \mathbb{E}[D_{i}]=0.

Theorem 2.

Deciding whether a \mathsf{BarterSV} instance has a non-empty valid allocation with D_{i}=0 for all agents i is NP-hard, even if all item values are integers.

Owing to its similarities to GKPS-DR, BarterDR enjoys similar useful properties:

Theorem 3.

BarterDR rounds \{x_{e}\}\in[0,1]^{|E|} in the feasible region of \mathsf{BarterSV\text{-}LP} into \{X_{e}\}\in\{0,1\}^{|E|} while satisfying (P1), (P2), and (P3).

Outline of the paper

In Section 5 we describe BarterDR (Algorithm 1), our randomized algorithm for 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV}, and its subroutines FindCCC and CCWalk in detail. Next, we give proofs and proof sketches for Theorems 1 and 3.

5. BarterDR: dependent rounding algorithm for 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV}

Notation

BarterDR is a rounding algorithm that proceeds in multiple iterations, so we use a superscript r to denote the value of a variable at the beginning of iteration r. An edge e\in E is said to be floating if x_{e}^{r}\in(0,1); let E^{r}:=\{e\in E:x_{e}^{r}\in(0,1)\}. A vertex a\in L\cup R is said to be floating if x^{r}(a):=\sum_{e\in N(a)}x_{e}^{r}\not\in\mathbb{Z}, and the sets of floating vertices are L^{r}:=\{a\in L:x^{r}(a)\not\in\mathbb{Z}\} and R^{r}:=\{a\in R:x^{r}(a)\not\in\mathbb{Z}\}. Let L(E^{r}):=\{a\in L:\exists(a,b)\in E^{r}\}, define R(E^{r}) analogously, and let G^{r}:=(L(E^{r}),R(E^{r}),E^{r}). K^{r}(a) denotes the connected component of G^{r} containing vertex a. For each i\in[n], define \kappa(i):=L_{i}\cup R_{i}, the set of vertices participating in agent i's barter constraint; for a vertex a\in L_{i}\cup R_{i} we overload \kappa(a):=\kappa(i). We say two vertices a,b\in L\cup R are partners, denoted a\sim b, if there exists i\in[n] such that a,b\in\kappa(i) and a\neq b; partners are distinct vertices corresponding to items (owned or desired) of some common agent i. In iteration r, a vertex a\in\kappa(i) is said to be partnerless if \kappa(i)\cap(L^{r}\cup R^{r})=\{a\}, i.e., a is the only floating vertex in \kappa(i). Edges and vertices that are not floating are said to be settled: once an edge e (vertex a) is settled, BarterDR will not modify x_{e} (x(a)). For vertices a and b, a\leadsto b denotes a simple path from a to b. Define D_{i}^{r} as D_{i} in (2) but with the variables \{x_{e}^{r}\} in place of \{X_{e}\}. The fractional degree of a\in L\cup R refers to x^{r}(a).

Once an edge is settled, its value does not change. In each iteration, BarterDR looks exclusively at the floating edges E^{r} and the graph G^{r} they induce. In each iteration, at least one edge or vertex becomes settled, i.e., |E^{r}|+|L^{r}|+|R^{r}|>|E^{r+1}|+|L^{r+1}|+|R^{r+1}|. Therefore BarterDR terminates at some iteration T with |E^{T}|=0 and T\leq|L|+|R|+|E|.

Algorithm and analysis outline.

BarterDR begins by making G acyclic via the pre-processing step in Section 5.1. Next, BarterDR proceeds as follows: while there are floating edges, it finds an appropriate sequence of paths \mathcal{P} constituting a CCC or CCW (defined in Section 5.2). The strategy for judiciously rounding \mathcal{P} is fleshed out in Section 5.3. Finally, Section 5.4 concludes with proofs of Theorems 1 and 3.

5.1. Pre-processing: remove cycles in GG

The pre-processing step consists of finding a cycle C via depth-first search in the graph of floating edges and rounding C via GKPS-DR, repeating until no cycles remain. Let \{x_{e}^{0}\}_{e\in E} denote the LP solution and \{x_{e}^{1}\}_{e\in E} the output of the pre-processing step. BarterDR begins from \{x_{e}^{1}\}_{e\in E}.

GKPS-DR on cycles never changes fractional degrees, i.e., \forall a\in L\cup R,\ x^{0}(a)=x^{1}(a). Lemma 3 is used to construct CCC's and CCW's, and it is the raison d'être for the pre-processing step.
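A cycle in a bipartite graph has even length, so its edges split into two alternating matchings in which every cycle vertex is incident to exactly one edge of each; the GKPS-DR update then changes each vertex's fractional degree by +γ and −γ, leaving it unchanged. A small sketch of this degree-preservation property (our own illustrative code with assumed data, not the paper's implementation):

```python
import random

def round_cycle(x, cycle_edges):
    """One GKPS-DR step on an even cycle (edges listed in cyclic order).
    Every vertex has one incident edge in M1 and one in M2, so all
    fractional degrees x(a) are preserved. Mutates x in place."""
    M1, M2 = cycle_edges[0::2], cycle_edges[1::2]
    alpha = min([1 - x[e] for e in M1] + [x[e] for e in M2])
    beta = min([x[e] for e in M1] + [1 - x[e] for e in M2])
    if random.random() < beta / (alpha + beta):
        for e in M1: x[e] += alpha
        for e in M2: x[e] -= alpha
    else:
        for e in M1: x[e] -= beta
        for e in M2: x[e] += beta

def degrees(x, vertices):
    return {v: sum(xe for e, xe in x.items() if v in e) for v in vertices}

# A 4-cycle u1-v1-u2-v2-u1 with fractional edge values.
edges = [('u1', 'v1'), ('u2', 'v1'), ('u2', 'v2'), ('u1', 'v2')]
x = {edges[0]: 0.4, edges[1]: 0.6, edges[2]: 0.4, edges[3]: 0.6}
before = degrees(x, ('u1', 'v1', 'u2', 'v2'))
round_cycle(x, edges)
after = degrees(x, ('u1', 'v1', 'u2', 'v2'))
assert all(abs(before[v] - after[v]) < 1e-9 for v in before)
```

Because fractional degrees are untouched, every barter constraint (1d) that held for the LP solution still holds after the cycle-cancelling pass, which is the content of the lemma below.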

Lemma 5.0.

The pre-processing step is efficient and gives D_{i}^{1}=0 for all agents i with probability 1.

Proof.

The pre-processing step runs GKPS-DR on G until no cycles remain. GKPS-DR on a cycle guarantees that at the end of each iteration the fractional vertex degrees remain unchanged; hence D_{i}^{1}=D_{i}^{0}=0 for every agent i, where D_{i}^{0}=0 holds because \{x_{e}^{0}\} is feasible for \mathsf{BarterSV\text{-}LP} and thus satisfies (1d). Moreover, the full GKPS-DR algorithm takes time O(|E|\cdot(|E|+|V|)), which bounds the pre-processing step's runtime, since the pre-processing step is GKPS-DR with an early stopping condition. ∎

Lemma 5.0.

Let \{x_{e}\}\in(0,1)^{|E|} be a vector of floating edges over a bipartite graph (U,V,E). Then the number of floating vertices in (U,V,E) is not 1.

Proof.

Suppose for contradiction that u is the sole floating vertex, i.e., x(u)\not\in\mathbb{Z}, and, without loss of generality, let u\in U. For S\subseteq U\cup V, let d(S):=\sum_{s\in S}x(s). Then d(U\setminus\{u\})+x(u)=d(U)=\sum_{e\in E}x_{e}=d(V). But u being the only floating vertex implies d(U\setminus\{u\})+x(u)\not\in\mathbb{Z} while d(V)\in\mathbb{Z}, a contradiction. ∎

Lemma 5.0.

Each connected component of G^{1} has at least 2 floating vertices.

Proof.

Fix an arbitrary connected component K of G^{1}. If K has 0 floating vertices, then each vertex of K is incident to at least two edges: its fractional degree is a positive integer, yet each floating edge contributes strictly less than 1 since x_{e}^{1}\in(0,1) for all e\in E^{1}. This implies the existence of a cycle in K, which cannot happen as the pre-processing step eliminates all cycles. Applying Lemma 2 to K, the only remaining possibility is that K has at least two floating vertices. ∎

5.2. Construction of CCC’s and CCW’s via FindCCC

This section introduces CCC's and CCW's. These structures facilitate rounding edges while respecting the barter constraints in each iteration. The subroutines for constructing CCC's and CCW's, FindCCC and CCWalk, are described in Algorithms 2 and 3. The correctness of these subroutines, and thus the existence of CCC's and CCW's, follows from Lemma 5.

Definition 0.

A connected component cycle (CCC) is a sequence of q\geq 1 paths \mathcal{P}=\langle s_{1}\leadsto t_{1},\dots,s_{q}\leadsto t_{q}\rangle such that, letting V(\mathcal{P})=\bigcup_{i\in[q]}\{s_{i},t_{i}\} be the set of the paths' endpoint vertices,

  1. (1) \forall i\in[q], t_{i}\sim s_{i+1} (taking s_{q+1}\equiv s_{1}),

  2. (2) \forall a\in V(\mathcal{P}), |V(\mathcal{P})\cap\kappa(a)|=2,

  3. (3) \forall i\in[q], s_{i}\leadsto t_{i} belongs to a unique connected component, and

  4. (4) \forall i\in[q], s_{i} and t_{i} are floating vertices.

Instead, we have a connected component walk (CCW) if criteria (3) and (4) are met but (1) and (2) are relaxed to: (1) \forall i\in[q-1], t_{i}\sim s_{i+1}, and s_{1} and t_{q} are partnerless; and (2) \forall a\in V(\mathcal{P})\setminus\{s_{1},t_{q}\}, |V(\mathcal{P})\cap\kappa(a)|=2.

Figure 2. An example CCC \mathcal{P}=\langle\ell_{1,a}\leadsto r_{2,a},\ r_{2,d}\leadsto r_{1,d}\rangle based on the VBM instance from Figure 1. All edges and path-endpoint vertices, i.e., \ell_{1,a}, r_{2,a}, r_{1,d}, and r_{2,d}, must be floating. The rounding step proceeds in one of two ways, denoted by orange and green text, chosen at random. With probability \alpha/(\alpha+\beta), displayed in orange, the orange modifications to the x_{e}'s take place; otherwise, with probability \beta/(\alpha+\beta), the green modifications take place. In the orange rounding event agent 1 gives away an additional v_{a}\alpha/v_{a}=\alpha value and gains an additional v_{d}\alpha/v_{d}=\alpha value, so D_{i} for agent 1 does not change after this rounding step. It is easy to see the same occurs for both agents, in either rounding event, orange or green.

Recall that a rounding iteration r is fixed, so whether a vertex is floating or partnerless is well-defined. When \mathcal{P} is rounded, the set of vertices whose fractional degrees change is precisely V(\mathcal{P}). Requirements (1) and (2) of a CCC say that t_{i} and s_{i+1} are partners and have no other partner vertices in V(\mathcal{P}). Comparably, for CCW's these requirements imply the same for all vertices but the "first" and "last," which are partnerless. Therefore, for CCC's and CCW's the vertices in V(\mathcal{P}) appear in q and q+1 distinct barter constraints, respectively. The requirements in the definitions of CCC and CCW come in handy during the analysis because: each path belongs to a different connected component, hence the paths are vertex- and edge-disjoint; if a barter constraint has exactly two vertices in V(\mathcal{P}), then these vertices' fractional-degree changes can be made to cancel each other out in the constraint; and floating endpoints ensure paths can be rounded in a manner analogous to GKPS-DR. For comparison, GKPS-DR also needs paths with floating endpoints, but maximal paths always have such endpoints, whereas the paths of \mathcal{P} need not be maximal. Consequently, the requirement that paths of \mathcal{P} have floating endpoints must be imposed explicitly.

Input: {xe1}[0,1]|E|\{x_{e}^{1}\}\in[0,1]^{|E|}, corresponding to G1=(L(E1),R(E1),E1)G^{1}=(L(E^{1}),R(E^{1}),E^{1}); i.e., the output of the pre-processing described in Section 5.1
1 r1r\leftarrow 1
2 while ErE^{r}\neq\emptyset do
3       𝒫\mathcal{P}\leftarrow CCC or CCW returned by FindCCC in GrG^{r}
4       Round 𝒫\mathcal{P} as described in Section 5.3 yielding Gr+1G^{r+1} and {xer+1}\{x_{e}^{r+1}\}
5       rr+1r\leftarrow r+1
6      
7 end while
return {xer}{0,1}|E|\{x_{e}^{r}\}\in\{0,1\}^{|E|}
Algorithm 1 BarterDR
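As a concrete companion to the pseudocode, a minimal Python sketch of the BarterDR driver loop; the helpers find_ccc and round_paths are hypothetical stand-ins for FindCCC and the rounding step of Section 5.3, and the dictionary x plays the role of {xer}\{x_{e}^{r}\}.

```python
# A minimal sketch of the BarterDR driver loop (Algorithm 1). The helpers
# find_ccc and round_paths are hypothetical stand-ins for FindCCC and the
# rounding step of Section 5.3; x maps edges to fractional values in [0, 1].
def barter_dr(G, x, find_ccc, round_paths):
    # Loop while some edge variable is still floating (strictly fractional);
    # each rounding iteration settles at least one edge or vertex.
    while any(0.0 < v < 1.0 for v in x.values()):
        P = find_ccc(G, x)            # a CCC or CCW of G^r
        G, x = round_paths(G, x, P)   # yields G^{r+1} and {x_e^{r+1}}
    return x                          # integral output {x_e} in {0,1}^|E|
```
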
Input: Gr=(L(Er),R(Er),Er)G^{r}=(L(E^{r}),R(E^{r}),E^{r})
Output: CCC or CCW sitii[q]\langle s_{i}\leadsto t_{i}\rangle_{i\in[q]}
1 s1,t1s_{1},t_{1}\leftarrow distinct floating vertices in some connected component CC of GrG^{r}
2 (O1,σ1)(O_{1},\sigma_{1})\leftarrow CCWalk(t1)\text{{CCWalk}}(t_{1})
3 if σ1=\sigma_{1}= ”CCC” then
4       return O1O_{1}
5 (O2,σ2)(O_{2},\sigma_{2})\leftarrow CCWalk(s1)\text{{CCWalk}}(s_{1})
6 if σ2=\sigma_{2}= ”CCC” then
7       return O2O_{2}
8 Let O1=(t1,s2,t2,,sq,tq)O_{1}=(t_{1},s_{2},t_{2},\dots,s_{q},t_{q}) and O2=(t1,s2,t2,,sq,tq)O_{2}=(t^{\prime}_{1},s^{\prime}_{2},t^{\prime}_{2},\dots,s^{\prime}_{q^{\prime}},t^{\prime}_{q^{\prime}})
9 If O1O_{1} and O2O_{2} “cross,” resolve O1O_{1} and O2O_{2} into a CCC and return it; see Section 5.2
return tqsq,tq1sq1,,t2s2,t1t1,s2t2,,sqtq\left<t^{\prime}_{q^{\prime}}\leadsto s^{\prime}_{q^{\prime}},t^{\prime}_{q^{\prime}-1}\leadsto s^{\prime}_{q^{\prime}-1},\dots,t^{\prime}_{2}\leadsto s^{\prime}_{2},t^{\prime}_{1}\leadsto t_{1},s_{2}\leadsto t_{2},\dots,s_{q}\leadsto t_{q}\right>
  // CCW
Algorithm 2 FindCCC
Input: Gr=(L(Er),R(Er),Er)G^{r}=(L(E^{r}),R(E^{r}),E^{r}), the walk’s starting vertex aa
Output: A CCC or half of a CCW and a string indicating whether a CCC was returned
V(t1)V\leftarrow(t_{1}), letting t1:=at_{1}:=a
  // ordered list of path endpoints
S{C1}S\leftarrow\{C_{1}\} where C1=Kr(a)C_{1}=K^{r}(a)
  // SS is the set of seen CC’s
1 i2i\leftarrow 2
2 while True do
3       if ti1t_{i-1} is partnerless then
             return (V,”CCW”)(V,\text{"CCW"})
              // Return half of a CCW
4            
5       sis_{i}\leftarrow partner of ti1t_{i-1}
6       CiC_{i}\leftarrow connected component containing sis_{i}
7       if  CiSC_{i}\in S then // CiC_{i} was previously visited
8             Ci=CkC_{i}=C_{k} for some k<ik<i so let sk:=sis_{k}^{\prime}:=s_{i}
9             return (sktk,,si1ti1,”CCC”)(\left<s^{\prime}_{k}\leadsto t_{k},\dots,s_{i-1}\leadsto t_{i-1}\right>,\text{"CCC"})
10       tit_{i}\leftarrow floating vertex in CiC_{i} distinct from sis_{i}
11       VV(si)V\leftarrow V\oplus(s_{i}), where \oplus denotes sequence concatenation
12       if  bV,bti\exists b\in V,\;b\sim t_{i}  then // tit_{i} already has a partner in VV
13             It must be that bb already had a partner cVc\in V. WLOG, b=tk1b=t_{k-1} and c=skc=s_{k}, some kik\leq i
14             return (sktk,,siti,”CCC”)(\left<s_{k}\leadsto t_{k},\dots,s_{i}\leadsto t_{i}\right>,\text{"CCC"})
15       VV(ti)V\leftarrow V\oplus(t_{i})
16       ii+1i\leftarrow i+1, SS{Ci}S\leftarrow S\cup\{C_{i}\}
17 end while
Algorithm 3 CCWalk(a)\text{{CCWalk}}(a)
Uncrossing the half-CCWs.

We define what we mean by “crossing” half-CCW’s in FindCCC (Algorithm 2) and how to resolve this into a CCC. Using O1O_{1} and O2O_{2} build V¯:=sq,tq1,,s2,t1,t1,s2,,sq,tq\bar{V}:=\left<s_{q^{\prime}}^{\prime},t_{q^{\prime}-1}^{\prime},\dots,s_{2}^{\prime},t_{1}^{\prime},t_{1},s_{2},\dots,s_{q},t_{q}\right>. V¯\bar{V} can be seen as the sequence of path endpoints (i.e., VV in CCWalk) resulting from a run of CCWalk(sq)\text{{CCWalk}}(s_{q^{\prime}}^{\prime}) that possibly did not stop when it should have returned a CCC. By the half-CCW’s “crossing” we mean that in some iteration of the while-loop of CCWalk either a connected component is revisited or tit_{i} was partners with a previously visited vertex. But these are precisely the two return cases of CCWalk (Algorithm 3) in which a CCC is resolved and returned.

Lemma 5.0.

If GrG^{r} has no cycles, FindCCC efficiently returns a CCC or CCW.

Proof of Lemma 5.

GrG^{r} for r1r\geq 1 is guaranteed to be acyclic by the pre-processing step. Recall that this acyclic property is used by Lemma 3 to guarantee each CC has at least two floating vertices. This ensures the steps of FindCCC and CCWalk that select distinct floating vertices in a connected component are well defined.

Let VrV_{r} be the sequence of vertices VV at the beginning of iteration rr, and let VraV_{r}-a denote VrV_{r} without some vertex aa. Like in CCWalk, “\oplus” denotes sequence concatenation. Let ara_{r} and zrz_{r} be the first and last vertices of VrV_{r}; note ara_{r} does not change over iterations. We prove the correctness of CCWalk with the aid of the following loop invariants maintained at the beginning of each iteration rr of the while-loop.

  1. (I1)

    zrz_{r} has no partners in VrV_{r}.

  2. (I2)

    bVrzr\forall b\in V_{r}-z_{r}, bb has exactly one partner in VrV_{r}.

  3. (I3)

    ara_{r} is the only vertex from VrV_{r} in K(ar)K(a_{r}).

  4. (I4)

    bVrar\forall b\in V_{r}-a_{r}, there are exactly two vertices from VrV_{r} contained in K(b)K(b).

Proceed by induction. When r=1r=1, Vr=aV_{r}=\left<a\right> so ar=a=zra_{r}=a=z_{r} so all invariants are (vacuously) true. Let P(k)P(k) be the predicate saying all invariants hold at the beginning of iteration k1k\geq 1. We assume P(k)P(k) and show P(k+1)P(k+1).

If there is an iteration k+1k+1, then CCWalk did not return during iteration kk and must have added sk,tk\left<s_{k},t_{k}\right> to Vk=t1,s2,t2,,sk1,tk1V_{k}=\left<t_{1},s_{2},t_{2},\dots,s_{k-1},t_{k-1}\right>. If zk+1=tkz_{k+1}=t_{k} had a partner in VkskV_{k}\oplus\left<s_{k}\right> then iteration kk would have been the last as a CCC would have been returned. Therefore (I1) holds at the beginning of iteration k+1k+1.

By P(k)P(k), bVktk1\forall b\in V_{k}-t_{k-1}, bb has exactly one partner in VkV_{k} and tk1t_{k-1} has zero partners in VkV_{k}. By construction sks_{k} is selected to be the partner of tk1t_{k-1} so now tk1t_{k-1} and sks_{k} have exactly one partner each in Vk+1V_{k+1}. Therefore (I2) holds at the beginning of iteration k+1k+1.

If Vk+1V_{k+1} were to not meet (I3) then it must mean that either skK(ak)s_{k}\in K(a_{k}) or tkK(ak)t_{k}\in K(a_{k}). But in this case the connected component K(ak)K(a_{k}) was revisited during iteration kk and a CCC would have been returned. Therefore, (I3) holds.

By construction K(sk)=K(tk)K(s_{k})=K(t_{k}). Moreover, bVk,K(sk)K(b)\forall b\in V_{k},\;K(s_{k})\neq K(b) otherwise K(b)K(b) was revisited and iteration kk would have been the last. Therefore, (I4) continues to hold.

Moreover note the while-loop runs at most 𝒪(|L|+|R|)\mathcal{O}(|L|+|R|) many times since revisiting a connected component of GrG^{r} causes the function to return.

Next we leverage the loop invariants to prove CCWalk returns valid CCC’s. First observe that by construction of VV, Properties 1) and 4) of a CCC are always immediate. CCWalk returns CCC’s when a connected component is revisited or when the added trt_{r} already has partners present in VV. We inspect both return cases to ensure indeed a valid CCC is returned. Fix some iteration r1r\geq 1.

Suppose a connected component CkC_{k}, k<rk<r is revisited. Then sktk,,sr1tr1\left<s^{\prime}_{k}\leadsto t_{k},\dots,s_{r-1}\leadsto t_{r-1}\right> is returned. By (I1), tr1t_{r-1} and sks^{\prime}_{k} are each other’s only partners. Recall by construction tksk+1t_{k}\sim s_{k+1}, tk+1sk+2,,t_{k+1}\sim s_{k+2},\dots, and tr2sr1t_{r-2}\sim s_{r-1}, so by (I2) it follows that each of tk,sk+1,tk+1,,sr1,tr1t_{k},s_{k+1},t_{k+1},\dots,s_{r-1},t_{r-1} has exactly one partner amongst themselves. Therefore, Property 2) of CCC’s holds. It remains to check Property 3). By (I4) each of sptps_{p}\leadsto t_{p}, for kp<rk\leq p<r, belongs to a unique connected component. Therefore, sktks^{\prime}_{k}\leadsto t_{k}, too, must belong to a unique connected component different from each sptps_{p}\leadsto t_{p}; otherwise there would be a connected component containing sps_{p}, tpt_{p}, and sks_{k}, contradicting one of (I3) and (I4) (depending on whether k=1k=1 or k>1k>1).

Instead suppose a CCC is returned because trt_{r} had a partner bVrsrb\in V_{r}\oplus\left<s_{r}\right>. Let bb be the last vertex in VrsrV_{r}\oplus\left<s_{r}\right> such that btrb\sim t_{r}. If b=srb=s_{r} then srtr\left<s_{r}\leadsto t_{r}\right> is clearly a CCC. So suppose bVrb\in V_{r}. It must be that b=spb=s_{p} for some p<rp<r; otherwise bb is not the last such vertex. So focus on proving sptp,,srtr\left<s_{p}\leadsto t_{p},\dots,s_{r}\leadsto t_{r}\right> is a CCC. Property 2) follows because tr1t_{r-1} and srs_{r} are each other’s only partners by (I1); trt_{r} and sps_{p} are each other’s only partners because by (I2) sps_{p} previously had only partner tp1t_{p-1} but we have cut it out from the CCC; and the rest of the pairs have unique partners by (I2). Lastly, Property 3) holds because K(sr)K(s_{r}) was not a revisited CC so srs_{r} and trt_{r} share a unique CC, and the rest of the path endpoints belong to distinct CC’s by (I4).

The last remaining case is that where the two half-CCW’s overlap. This can be resolved into a CCC in the manner already described in the main text under the paragraph Uncrossing the half-CCW’s of Section 5.2.

Runtime of FindCCC

We conclude with comments about the runtime of FindCCC. We can build a hash-map mapping vertices to a set of all their floating partners. This hash-map can be constructed in time 𝒪(|L|+|R|)\mathcal{O}(|L|+|R|). Similarly, we can build a set to keep track of visited connected components. Finding the partner sis_{i} of ti1t_{i-1} can be done in 𝒪(1)\mathcal{O}(1) time by checking the hash-map, and finding the vertex tit_{i} in K(si)K(s_{i}) can be done by starting a depth first search from sis_{i} until a floating vertex is reached. After rounding the CCC/CCW, remove vertices that became settled from their respective sets of floating partners. The depth first searches starting from each sis_{i} altogether visit each vertex and edge at most 𝒪(1)\mathcal{O}(1) times before returning, and each vertex is removed from the hash-map’s set of floating partners at most once. Therefore, CCWalk runs in time 𝒪(|L|+|R|)\mathcal{O}(|L|+|R|). FindCCC calls CCWalk 𝒪(1)\mathcal{O}(1) times and resolving the two half-CCW’s can be thought of as another run of CCWalk, as argued above. Therefore, FindCCC finishes in time 𝒪(|L|+|R|)\mathcal{O}(|L|+|R|). ∎
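The bookkeeping just described can be sketched as follows; the vertex ids and helper names are illustrative assumptions, not part of the paper’s pseudocode.

```python
from collections import defaultdict

# Sketch of the FindCCC bookkeeping: floating_partners maps each vertex to
# the set of its partners that are still floating, giving O(1) lookups.
def build_floating_partners(partner_pairs, floating):
    fp = defaultdict(set)
    for a, b in partner_pairs:
        if a in floating and b in floating:
            fp[a].add(b)
            fp[b].add(a)
    return fp

def settle(fp, v):
    # When v becomes settled, remove it from each partner's floating set;
    # over the whole run each vertex is removed at most once.
    for u in fp.pop(v, set()):
        fp[u].discard(v)
```
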

5.3. Rounding CCC’s and CCW’s

5.3.1. Roundable colorings

Now we shed light on what we mean by carefully rounding the paths of the CCC/CCW 𝒫\mathcal{P}. But first we build some intuition. Focus on tpt_{p} and sp+1s_{p+1} for some fixed 1pq1\leq p\leq q in the case of a CCC (or p<qp<q in the case of a CCW). Since tpsp+1t_{p}\sim s_{p+1}, whatever rounding procedure we use, we want the relative signs of the changes to xr(tp)x^{r}(t_{p}) and xr(sp+1)x^{r}(s_{p+1}) to depend on whether tpt_{p} and sp+1s_{p+1} fall on the same or different sides of GG (these sides being “left” and “right,” corresponding to vertex sets LL and RR; equivalently, left and right of “==” in (1d)). This way (1d) is preserved after rounding. Likewise, the magnitudes of the fractional degree changes to tpt_{p} and sp+1s_{p+1} must be balanced according to vpv_{p} and vp+1v_{p+1} so that (1d) is preserved for the ii with tp,sp+1κ(i)t_{p},s_{p+1}\in\kappa(i). Intuitively, we round 𝒫\mathcal{P} carefully, preserving (1d), by ensuring i) the signs of the changes depend on which side of the equation they appear on, and ii) the magnitudes of said changes are weighted by the values of the items they represent. We make this precise via roundable colorings. If aa and bb are vertices belonging to different sides of the graph, we say aba\perp b; otherwise we say a⟂̸ba\not\perp b.

Definition 0 (Roundable coloring).

The CCC 𝒫=\mathcal{P}=sitii[q]\langle s_{i}\leadsto t_{i}\rangle_{i\in[q]} has a roundable coloring if there exists f:V(𝒫){1,1}f:V(\mathcal{P})\to\{-1,1\} such that i) for all i[q]i\in[q], f(si)=f(ti)f(s_{i})=f(t_{i}) if and only if sitis_{i}\perp t_{i}; and ii) for all i[q]i\in[q], f(ti)=f(si+1)f(t_{i})=f(s_{i+1}) if and only if tisi+1t_{i}\perp s_{i+1}. A roundable coloring for a CCW is defined the same way except ii) becomes i[q1]\forall i\in[q-1], f(ti)=f(si+1)f(t_{i})=f(s_{i+1}) if and only if tisi+1t_{i}\perp s_{i+1}.

Property i) will ensure a,bV(𝒫)a,b\in V(\mathcal{P}) see same-sign fractional degree changes if and only if f(a)=f(b)f(a)=f(b), which happens if and only if they appear on the same side of (1d). Property ii) is equivalent to Remark 1 and verifies each sitis_{i}\leadsto t_{i} is roundable in a GKPS-DR-inspired manner.

Lemma 5.0.

Every CCC and CCW admits an efficiently computable roundable coloring ff.

Proof of Lemma 7.

Consider the following heuristic to find a roundable coloring of 𝒫=sitii[q]\mathcal{P}=\langle s_{i}\leadsto t_{i}\rangle_{i\in[q]}. Assign an arbitrary color to s1s_{1}, say f(s1)=1f(s_{1})=1. Because ff only has two colors, this immediately determines the color of t1t_{1}, which depends on whether s1s_{1} and t1t_{1} are same-side vertices. Again, this immediately determines the color of s2s_{2}, which in turn determines the color of t2t_{2}, and so on.

If 𝒫\mathcal{P} is a CCW then s1s_{1} and tqt_{q} are partnerless so their colors do not affect the first property. The vertices are colored in the order s1,t1,s2,t2,,sq,tqs_{1},t_{1},s_{2},t_{2},\dots,s_{q},t_{q} ensuring that both properties hold. Therefore, this scheme produces a roundable coloring for CCW’s in time 𝒪(|L|+|R|)\mathcal{O}(|L|+|R|).

Instead suppose 𝒫\mathcal{P} is a CCC. It only remains to check that f(tq)=f(s1)f(t_{q})=f(s_{1}). We now verify this. Observe that the greedy algorithm ensures

i[q],f(ti)=f(si)(1)𝟙(ti⟂̸si)\forall i\in[q],\;f(t_{i})=f(s_{i})\cdot(-1)^{\mathbbm{1}(t_{i}\not\perp s_{i})}

and

i[q1],f(si+1)=f(ti)(1)𝟙(ti⟂̸si+1)\forall i\in[q-1],\;f(s_{i+1})=f(t_{i})\cdot(-1)^{\mathbbm{1}(t_{i}\not\perp s_{i+1})}

where 𝟙()\mathbbm{1}(\cdot) equals 1 if “\cdot” is true and 0 otherwise; this is a slight abuse of notation since \perp is a relation but we are treating it as a boolean function. Expanding by repeated application of the above observations, we have

f(tq)\displaystyle f(t_{q}) =f(s1)(1)i[q]𝟙(ti⟂̸si)+i[q1]𝟙(ti⟂̸si+1)\displaystyle=f(s_{1})\cdot(-1)^{\sum_{i\in[q]}\mathbbm{1}(t_{i}\not\perp s_{i})+\sum_{i\in[q-1]}\mathbbm{1}(t_{i}\not\perp s_{i+1})}
=f(s1)(1)p+ψq1\displaystyle=f(s_{1})\cdot(-1)^{p+\psi_{q-1}}

where pp is the number of same-side paths and ψq1\psi_{q-1} is the number of same-side partners not counting the pair tqt_{q} and s1s_{1}. Then ensuring we have a roundable coloring reduces to ensuring that tqs1p+ψq1t_{q}\perp s_{1}\implies p+\psi_{q-1} is even and tq⟂̸s1p+ψq1t_{q}\not\perp s_{1}\implies p+\psi_{q-1} is odd. Letting ψ\psi be the total number of same-side partners, the above is equivalent to asking p+ψp+\psi be even, which we now prove.

Let dd be the number of different-side paths, and let cLc_{L}, cRc_{R}, cLRc_{LR} respectively be the number of left-left, right-right, and left-right partner pairs. So,

(3) p+d=q=ψ+cLR.p+d=q=\psi+c_{LR}.

Let nLn_{L} be the number of left vertices in the CCC. Clearly, nL=2cL+cLRn_{L}=2c_{L}+c_{LR}. Consider nLdn_{L}-d: this is the number of left vertices remaining after removing the different-side paths in sitii[q]\langle s_{i}\leadsto t_{i}\rangle_{i\in[q]}. Since these vertices must be covered by same-side paths, nLdn_{L}-d must be even. Then, with all congruences taken modulo 22,

0nLd=2cL+cLRdcLRd.0\equiv n_{L}-d=2c_{L}+c_{LR}-d\equiv c_{LR}-d.

Plugging the above into (3) gives p=ψ+cLRdψp=\psi+c_{LR}-d\equiv\psi. Therefore, ψ+pψ+ψ0\psi+p\equiv\psi+\psi\equiv 0. ∎
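The greedy coloring from this proof is short enough to sketch; the side labels 'L'/'R' and the list encoding of paths are assumptions for illustration. The parity argument above is exactly what guarantees the wrap-around partner pair (tq,s1)(t_{q},s_{1}) needs no separate handling, which can be checked on random instances.

```python
import random

# A sketch of the greedy coloring from the proof of Lemma 7. Encoding (an
# assumption): sides[i] = (side of s_i, side of t_i) with labels 'L'/'R';
# the partner pairs are (t_i, s_{i+1}), wrapping around for a CCC.
def roundable_coloring(sides):
    q = len(sides)
    f = [[0, 0] for _ in range(q)]  # f[i][0] = f(s_i), f[i][1] = f(t_i)
    f[0][0] = 1                     # arbitrary color for s_1
    for i in range(q):
        if i > 0:
            # property ii): f(t_{i-1}) = f(s_i) iff they lie on different sides
            diff = sides[i - 1][1] != sides[i][0]
            f[i][0] = f[i - 1][1] if diff else -f[i - 1][1]
        # property i): f(s_i) = f(t_i) iff they lie on different sides
        diff = sides[i][0] != sides[i][1]
        f[i][1] = f[i][0] if diff else -f[i][0]
    return f
```
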

5.3.2. Using roundable colorings

Roundable colorings will ensure the signs of the changes are as desired. We now introduce notation to use roundable colorings while ensuring that the magnitudes of said changes are proportional to the values of the items involved.

Although the notation used next is cumbersome, the intuition is to fix α,β>0\alpha,\beta>0 “small enough” that all edge variables stay in [0,1][0,1] and vertex fractional degrees stay within their current ceilings and floors but “large enough” that at least one edge or vertex is settled. First, fix the roundable coloring ff, which is possible per Lemma 7. Next, decompose each path sitis_{i}\leadsto t_{i} into alternating matchings M1iM_{-1}^{i} and M1iM_{1}^{i} such that a{si,ti},eMf(a)i\forall a\in\{s_{i},t_{i}\},\;\exists e\in M_{f(a)}^{i} such that eN(a)e\in N(a); property ii) of ff guarantees this is possible. In other words, vertex a{si,ti}a\in\{s_{i},t_{i}\} is present in Mf(a)iM_{f(a)}^{i}. For readability drop the rr superscripts briefly and let

Γ1i(γ)\displaystyle\Gamma_{-1}^{i}(\gamma)\equiv eM1i(xe+γ=1)eM1i(xeγ=0)\displaystyle\bigvee_{e\in M_{-1}^{i}}(x_{e}+\gamma=1)\vee\bigvee_{e\in M_{1}^{i}}(x_{e}-\gamma=0)
a{si,ti}(f(a)=1x(a)+γ=x(a))\displaystyle\vee\bigvee_{a\in\{s_{i},t_{i}\}}(f(a)=-1\implies x(a)+\gamma=\lceil x(a)\rceil)
(4) a{si,ti}(f(a)=1x(a)γ=x(a)),\displaystyle\vee\bigvee_{a\in\{s_{i},t_{i}\}}(f(a)=1\implies x(a)-\gamma=\lfloor x(a)\rfloor),

and, symmetrically,

Γ1i(γ)\displaystyle\Gamma_{1}^{i}(\gamma)\equiv eM1i(xe+γ=1)eM1i(xeγ=0)\displaystyle\bigvee_{e\in M_{1}^{i}}(x_{e}+\gamma=1)\vee\bigvee_{e\in M_{-1}^{i}}(x_{e}-\gamma=0)
a{si,ti}(f(a)=1x(a)+γ=x(a))\displaystyle\vee\bigvee_{a\in\{s_{i},t_{i}\}}(f(a)=1\implies x(a)+\gamma=\lceil x(a)\rceil)
(5) a{si,ti}(f(a)=1x(a)γ=x(a)).\displaystyle\vee\bigvee_{a\in\{s_{i},t_{i}\}}(f(a)=-1\implies x(a)-\gamma=\lfloor x(a)\rfloor).

Finally, the magnitudes fixed (in analogy to Section 2) are

(6) α:=min{γ>0:i[q]Γ1i(1viγ)},β:=min{γ>0:i[q]Γ1i(1viγ)}.\alpha:=\min\left\{\gamma>0:\bigvee_{i\in[q]}\Gamma_{-1}^{i}\left(\frac{1}{v_{i}}\gamma\right)\right\},\;\beta:=\min\left\{\gamma>0:\bigvee_{i\in[q]}\Gamma_{1}^{i}\left(\frac{1}{v_{i}}\gamma\right)\right\}.

Both α\alpha and β\beta are well defined as they are the minima of non-empty finite sets. The update proceeds probabilistically as follows: i[q],esiti\forall i\in[q],\forall e\in s_{i}\leadsto t_{i},

(7) w.p.βα+β,xer+1={xer+1viα,eM1ixer1viα,eM1i;\text{w.p.}\;\frac{\beta}{\alpha+\beta},\;x_{e}^{r+1}=\begin{cases}x_{e}^{r}+\frac{1}{v_{i}}\alpha,&e\in M_{-1}^{i}\\ x_{e}^{r}-\frac{1}{v_{i}}\alpha,&e\in M_{1}^{i}\end{cases};
(8) else, w.p.αα+β,xer+1={xer1viβ,eM1ixer+1viβ,eM1i.\text{else, w.p.}\;\frac{\alpha}{\alpha+\beta},\;\;x_{e}^{r+1}=\begin{cases}x_{e}^{r}-\frac{1}{v_{i}}\beta,&e\in M_{-1}^{i}\\ x_{e}^{r}+\frac{1}{v_{i}}\beta,&e\in M_{1}^{i}\end{cases}.

At a high level, α\alpha and β\beta are large enough that in every rounding iteration at least one vertex or edge is settled. Adding the ±1vi\pm\frac{1}{v_{i}} factors ensures the terms appearing in (1d) cancel out nicely, as seen in the proof of Lemma 9.
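A toy numerical sketch of the update (7)–(8), restricted to a single path with item value vv; M_minus and M_plus stand in for M1iM_{-1}^{i} and M1iM_{1}^{i}, and α\alpha, β\beta are assumed precomputed as in (6).

```python
import random

# A toy sketch of the probabilistic update (7)-(8) on a single path with
# item value v. M_minus and M_plus stand in for the matchings M_{-1}^i and
# M_1^i; alpha and beta are assumed already computed as in (6).
def round_step(x, M_minus, M_plus, v, alpha, beta, rng):
    y = dict(x)
    if rng.random() < beta / (alpha + beta):   # event of (7)
        for e in M_minus:
            y[e] += alpha / v
        for e in M_plus:
            y[e] -= alpha / v
    else:                                      # event of (8)
        for e in M_minus:
            y[e] -= beta / v
        for e in M_plus:
            y[e] += beta / v
    return y
```

Averaging many independent runs illustrates the marginal preservation behind invariant (J1): each edge’s expected change is βα+βαvαα+ββv=0\frac{\beta}{\alpha+\beta}\cdot\frac{\alpha}{v}-\frac{\alpha}{\alpha+\beta}\cdot\frac{\beta}{v}=0.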

5.4. Algorithm analysis

Proof sketch of Theorem 3

Except for (J2), the proofs for the invariants are almost identical to those in (Gandhi et al., 2006). This is because BarterDR is crafted to be similar to GKPS-DR in the ways necessary for this analysis to carry over. Owing to the proofs’ similarities, we defer their full treatment to Appendix B. The proof proceeds via the following invariants, maintained at each iteration rr of BarterDR.

  1. (J1)

    eE\forall e\in E, 𝔼[xer]=xe0\mathbb{E}[x_{e}^{r}]=x_{e}^{0}.

  2. (J2)

    aLR\forall a\in L\cup R and with probability 1, x0(a)xr(a)x0(a)\lfloor x^{0}(a)\rfloor\leq x^{r}(a)\leq\lceil x^{0}(a)\rceil.

  3. (J3)

aLR,SN(a),c{0,1}\forall a\in L\cup R,\;\forall S\subseteq N(a),\;\forall c\in\{0,1\}, 𝔼[eSyer+1]𝔼[eSyer]\mathbb{E}[\prod_{e\in S}y_{e}^{r+1}]\leq\mathbb{E}[\prod_{e\in S}y_{e}^{r}], where yer:=cxer+(1c)(1xer)y_{e}^{r}:=cx_{e}^{r}+(1-c)(1-x_{e}^{r}).

The main place where our proof deviates stems from BarterDR choosing α\alpha and β\beta differently. In particular, in GKPS-DR a single path is selected and at least one of its edges becomes settled each iteration. In BarterDR the guarantee says that, among all paths of the fixed CCC/CCW, at least one edge or vertex becomes settled each iteration.

Lemma 5.0.

BarterDR achieves optimal objective in expectation and i[n],𝔼[Di]=0\forall i\in[n],\;\mathbb{E}[D_{i}]=0.

Proof of Lemma 8.

Given a 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} instance, let OPTIP\operatorname{OPT_{IP}} and OPTLP\operatorname{OPT_{LP}} be the optimal objectives of the corresponding IP (1) and the corresponding 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖫𝖯\mathsf{BarterSV\text{-}LP}. Let {Xe}eE\{X_{e}^{*}\}_{e\in E} and {xe}eE\{x_{e}^{*}\}_{e\in E} be optimal solutions to the IP and LP, respectively. Then OPTIP=eEweXeeEwexe=OPTLP\operatorname{OPT_{IP}}=\sum_{e\in E}w_{e}X_{e}^{*}\leq\sum_{e\in E}w_{e}x_{e}^{*}=\operatorname{OPT_{LP}}. Per Theorem 3, BarterDR satisfies (P1) when rounding {xe}eE\{x_{e}^{*}\}_{e\in E} to {Xe}{0,1}|E|\{X_{e}\}\in\{0,1\}^{|E|}. Therefore, 𝔼[eEweXe]=eEwe𝔼[Xe]=eEwexe=OPTLP\mathbb{E}[\sum_{e\in E}w_{e}X_{e}]=\sum_{e\in E}w_{e}\mathbb{E}[X_{e}]=\sum_{e\in E}w_{e}x_{e}^{*}=\operatorname{OPT_{LP}}.

By the linearity of expectation and (P1),

𝔼[Di]\displaystyle\mathbb{E}[D_{i}] =𝔼[aLiX(a)vabRiX(b)vb]\displaystyle=\mathbb{E}\left[\sum_{a\in L_{i}}X(a)v_{a}-\sum_{b\in R_{i}}X(b)v_{b}\right]
=aLi𝔼[X(a)]vabRi𝔼[X(b)]vb\displaystyle=\sum_{a\in L_{i}}\mathbb{E}[X(a)]v_{a}-\sum_{b\in R_{i}}\mathbb{E}[X(b)]v_{b}
=aLix(a)vabRix(b)vb=Di0=0.\displaystyle=\sum_{a\in L_{i}}x^{*}(a)v_{a}-\sum_{b\in R_{i}}x^{*}(b)v_{b}=D_{i}^{0}=0.

The last equation follows because {xe}eE\{x^{*}_{e}\}_{e\in E} satisfies (1d) as argued in the proof of Theorem 3. ∎

Lemma 5.0.

If Dir=0D_{i}^{r}=0 and there exist distinct floating vertices a,bκ(i)a,b\in\kappa(i), then Dir+1=0D_{i}^{r+1}=0.

Proof of Lemma 9.

If no vertex from LiRiL_{i}\cup R_{i} appears in the CCC/CCW’s endpoints V(𝒫):=i[q]{si,ti}V(\mathcal{P}):=\bigcup_{i\in[q]}\{s_{i},t_{i}\} then we are done. So suppose aLiRia\in L_{i}\cup R_{i} and aV(𝒫)a\in V(\mathcal{P}) in this rr-th rounding iteration. By assumption there existed another floating vertex of LiRiL_{i}\cup R_{i} in iteration rr when sitii[q]\langle s_{i}\leadsto t_{i}\rangle_{i\in[q]} was constructed. Therefore, aa is not partnerless, hence it cannot be the endpoint of a CCW, so there exists bV(𝒫)b\in V(\mathcal{P}) such that aba\sim b. Moreover, by property 2 of the definition of a CCC/CCW, said bb is unique. Therefore, aa and bb are the only vertices in V(𝒫)V(\mathcal{P}) affecting DiD_{i} in this iteration rr; i.e., d(LiRi){a,b},xr(d)=xr+1(d)\forall d\in(L_{i}\cup R_{i})-\{a,b\},\;x^{r}(d)=x^{r+1}(d). Since aba\sim b, we may assume without loss of generality that a=tka=t_{k} and b=sk+1b=s_{k+1} for some k[q]k\in[q] (or k[q1]k\in[q-1] for a CCW); recall sq+1s1s_{q+1}\equiv s_{1}. We know f(a)=f(b)f(a)=f(b) if and only if aa and bb belong to opposite graph sides, where ff is the roundable coloring corresponding to 𝒫\mathcal{P}, which can be computed efficiently per Lemma 7.

Consider the two possible rounding events, described in (7) and (8). Call these events θ1\theta_{1} and θ2\theta_{2}. Suppose aa and bb are opposite-side vertices, hence f(a)=f(b)f(a)=f(b). Focus on event θ1\theta_{1} (the proof for θ2\theta_{2} is exactly the same but replacing α\alpha with β-\beta). Under event θ1\theta_{1} we have

xr+1(a)=xr(a)f(a)1vaαandxr+1(b)=xr(b)f(b)1vbα.x^{r+1}(a)=x^{r}(a)-f(a)\frac{1}{v_{a}}\alpha\quad\text{and}\quad x^{r+1}(b)=x^{r}(b)-f(b)\frac{1}{v_{b}}\alpha.

A moment’s thought shows that xr+1(a),xr+1(b)[0,1]x^{r+1}(a),x^{r+1}(b)\in[0,1] per the definitions in (4), (5) and (6). Note the factor of “f(a)-f(a)” appears because aa belongs to Mf(a)iM^{i}_{f(a)} for some ii. Conveniently, this leaves us with

(9) xr+1(a)vaf(a)f(b)xr+1(b)vb\displaystyle x^{r+1}(a)v_{a}-f(a)f(b)x^{r+1}(b)v_{b} =xr(a)vaf(a)f(b)xr(b)vbf(a)α+f(a)α\displaystyle=x^{r}(a)v_{a}-f(a)f(b)x^{r}(b)v_{b}-f(a)\alpha+f(a)\alpha
(10) =xr(a)vaf(a)f(b)xr(b)vb,\displaystyle=x^{r}(a)v_{a}-f(a)f(b)x^{r}(b)v_{b},

using the fact f(b)f(b)=1f(b)\cdot f(b)=1. Without loss of generality let aLia\in L_{i}. Therefore, expanding Dir+1D_{i}^{r+1}:

(11) Dir+1=sLixr+1(s)vstRixr+1(t)vt.D_{i}^{r+1}=\sum_{s\in L_{i}}x^{r+1}(s)v_{s}-\sum_{t\in R_{i}}x^{r+1}(t)v_{t}.

Having fixed aLia\in L_{i}, we know f(a)f(b)=1f(a)f(b)=1 if and only if bRib\in R_{i}. Thus, take out xr+1(a)x^{r+1}(a) and xr+1(b)x^{r+1}(b) from the sums and substitute (10) to have

(12) sLi{a,b}xr+1(s)vstRi{b}xr+1(t)vt+xr(a)vaf(a)f(b)xr(b)vb.\sum_{s\in L_{i}-\{a,b\}}x^{r+1}(s)v_{s}-\sum_{t\in R_{i}-\{b\}}x^{r+1}(t)v_{t}+x^{r}(a)v_{a}-f(a)f(b)x^{r}(b)v_{b}.

Now observe that xr+1(p)=xr(p)x^{r+1}(p)=x^{r}(p) for all p(LiRi){a,b}p\in(L_{i}\cup R_{i})-\{a,b\}. Moreover, bRib\in R_{i} if and only if f(a)f(b)=1f(a)f(b)=1, so we can reabsorb the terms “xr(a)vax^{r}(a)v_{a}” and “xr(b)vbx^{r}(b)v_{b}” into their respective summations, yielding sLixr(s)vstRixr(t)vt\sum_{s\in L_{i}}x^{r}(s)v_{s}-\sum_{t\in R_{i}}x^{r}(t)v_{t}. But this is precisely DirD_{i}^{r}, which we’ve assumed to be 0. ∎
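A numeric spot-check of the cancellation in (9)–(10), with illustrative values; the function names are ours, not the paper’s. Under event θ1\theta_{1}, x(a)x(a) and x(b)x(b) change by f(a)α/va-f(a)\alpha/v_{a} and f(b)α/vb-f(b)\alpha/v_{b}, leaving x(a)vaf(a)f(b)x(b)vbx(a)v_{a}-f(a)f(b)x(b)v_{b} unchanged.

```python
# The quantity preserved by the rounding step, as in (9)-(10).
def invariant(xa, xb, fa, fb, va, vb):
    return xa * va - fa * fb * xb * vb

# The theta_1 update applied to the two partner vertices a and b.
def theta1(xa, xb, fa, fb, va, vb, alpha):
    return xa - fa * alpha / va, xb - fb * alpha / vb
```
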

Proof of Theorem 1.

It is straightforward to check that after solving 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖫𝖯-𝖢𝖺𝗉𝗌\mathsf{BarterSV\text{-}LP\text{-}Caps} there are at most |E||E| floating edges. Each iteration of the pre-processing step finds a cycle, say using depth-first search, and rounds said cycle in time 𝒪(|L|+|R|)\mathcal{O}(|L|+|R|), with at least one edge being settled every time a cycle is rounded. Therefore, the pre-processing step takes time at most 𝒪(|E|(|L|+|R|))\mathcal{O}(|E|\cdot(|L|+|R|)). Similarly, FindCCC takes time 𝒪(|L|+|R|)\mathcal{O}(|L|+|R|) to find and round a CCC or CCW, and each time a CCC/CCW is rounded at least one edge or vertex becomes settled. Therefore BarterDR runs in time 𝒪((|L|+|R|)(|L|+|R|+|E|))\mathcal{O}((|L|+|R|)\cdot(|L|+|R|+|E|)).

Let DirD_{i}^{r} be DiD_{i} like in (2) but with variables xerx_{e}^{r} instead of XeX_{e}. Then Lemma 9 guarantees that for each agent ii, Dir=0D_{i}^{r}=0 implies Dir+1=0D_{i}^{r+1}=0 until LiRiL_{i}\cup R_{i} has exactly one floating vertex (if this happens at all). This means if in some iteration the number of floating vertices in LiRiL_{i}\cup R_{i} went from at least 2 to 0, then Di=0D_{i}=0 by the degree preservation invariant (J2), proved in the proof of Theorem 3, and we are done. Therefore, the only case we must consider is when there is a solitary floating vertex dLiRid\in L_{i}\cup R_{i}. Let tt^{\prime} be the first iteration that started with LiRiL_{i}\cup R_{i} having a sole floating vertex dd, i.e., with xt(d)x^{t^{\prime}}(d)\not\in\mathbb{Z}. Then by expanding

(13) Di\displaystyle D_{i} |aLivaX(a)bRivbX(b)|\displaystyle\leq\left|\sum_{a\in L_{i}}v_{a}X(a)-\sum_{b\in R_{i}}v_{b}X(b)\right|
(14) =|aLivaX(a)aLivaxt(a)+bRivbxt(b)bRivbX(b)|\displaystyle=\left|\sum_{a\in L_{i}}v_{a}X(a)-\sum_{a\in L_{i}}v_{a}x^{t^{\prime}}(a)+\sum_{b\in R_{i}}v_{b}x^{t^{\prime}}(b)-\sum_{b\in R_{i}}v_{b}X(b)\right|
(15) =|aLiva(X(a)xt(a))+bRivb(xt(b)X(b))|\displaystyle=\left|\sum_{a\in L_{i}}v_{a}(X(a)-x^{t^{\prime}}(a))+\sum_{b\in R_{i}}v_{b}(x^{t^{\prime}}(b)-X(b))\right|
(16) a(LiRi){d}va|X(a)xt(a)|+vd|xt(d)X(d)|\displaystyle\leq\sum_{a\in(L_{i}\cup R_{i})-\{d\}}v_{a}\left|X(a)-x^{t^{\prime}}(a)\right|+v_{d}\left|x^{t^{\prime}}(d)-X(d)\right|
(17) <vdvi,\displaystyle<v_{d}\leq v_{i}^{*},

which is our desired DiD_{i} bound for Theorem 1. Equation (14) follows because we assume Di1=0D_{i}^{1}=0 (as Di0D_{i}^{0} corresponds to the LP solution and the pre-processing step thus guarantees Di1=0D_{i}^{1}=0) and tt^{\prime} is the first iteration where LiRiL_{i}\cup R_{i} contains exactly one floating vertex; therefore, by induction and Lemma 9, Dit=0D_{i}^{t^{\prime}}=0. Inequality (16) follows from the triangle inequality. The strict inequality in (17) follows because dd was the sole floating vertex of LiRiL_{i}\cup R_{i} in iteration tt^{\prime}; hence by Lemma 2, a(LiRi){d},X(a)xt(a)=0\forall a\in(L_{i}\cup R_{i})-\{d\},\;X(a)-x^{t^{\prime}}(a)=0 and |xt(d)X(d)|<1|x^{t^{\prime}}(d)-X(d)|<1.

By assumption, Di1=0D_{i}^{1}=0 for all ii since {xe0}eE\{x_{e}^{0}\}_{e\in E} is an optimal solution to the corresponding 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖫𝖯\mathsf{BarterSV\text{-}LP} and the pre-processing step ensures Di0=0Di1=0D_{i}^{0}=0\implies D_{i}^{1}=0. Then, by Lemma 9, the only DiD_{i}’s that are not necessarily preserved are those where LiRiL_{i}\cup R_{i} ends up with exactly one floating vertex in some algorithm iteration tt^{\prime}. As argued above, this case leads to Di<viD_{i}<v_{i}^{*}. Together with Lemma 8 this completes the proof. ∎

As a consequence of Theorems 1 and 3, we have:

Corollary 0.

𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖫𝖯\mathsf{BarterSV\text{-}LP} with all items having equal values is integral.

To see this, note that if all item values are equal then setting them all equal to 11 gives an equivalent 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} instance with v=1v^{*}=1. Then the Di<vD_{i}<v^{*} guarantee recovers Di=0D_{i}=0, since with unit values each DiD_{i} is an integer of absolute value less than 11. In this special case, our randomized algorithm recovers an integral solution to the LP, proving its integrality.

6. Extension of 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} to multiple copies of each item

In Section 1.1 we formulated 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} where each agent has and wants at most one copy of any item jj\in\mathcal{I}. We now consider 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖢𝖺𝗉𝗌\mathsf{BarterSV\text{-}Caps}, the natural generalization where agents may swap an arbitrary number of item copies. In addition to the previous inputs, for each agent i𝒩i\in\mathcal{N}, we take capacity functions ηi:+\eta_{i}:\mathcal{I}\to\mathbb{Z}^{+} and ωi:+\omega_{i}:\mathcal{I}\to\mathbb{Z}^{+} denoting the (non-negative) number of copies owned and desired of each jj\in\mathcal{I}. For simplicity we define ηi(j)=0\eta_{i}(j)=0 for jHij\not\in H_{i} and ωi(j)=0\omega_{i}(j)=0 for jWij\not\in W_{i}. Analogously, now g:𝒩×𝒩×+g:\mathcal{N}\times\mathcal{N}\times\mathcal{I}\to\mathbb{Z}^{+} and g(i,i,j)=kg(i,i^{\prime},j)=k says agent ii gives kk copies of item jj to agent ii^{\prime}. Much like before, a valid allocation is one where each agent i𝒩i\in\mathcal{N} (i) receives only desired items within capacity, meaning i𝒩g(i,i,j)ωi(j)\sum_{i^{\prime}\in\mathcal{N}}g(i^{\prime},i,j)\leq\omega_{i}(j); and (ii) gives away only items they possess within capacity, thus i𝒩g(i,i,j)ηi(j)\sum_{i^{\prime}\in\mathcal{N}}g(i,i^{\prime},j)\leq\eta_{i}(j). The goal of 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖢𝖺𝗉𝗌\mathsf{BarterSV\text{-}Caps} remains the same: to find a valid allocation of maximum utility (now defined as i𝒩i𝒩jw(i,i,j)g(i,i,j)\sum_{i\in\mathcal{N}}\sum_{i^{\prime}\in\mathcal{N}}\sum_{j\in\mathcal{I}}w(i,i^{\prime},j)g(i,i^{\prime},j)) subject to no agent giving away more value than they receive (where the values given and received by agent ii are i𝒩jvjg(i,i,j)\sum_{i^{\prime}\in\mathcal{N}}\sum_{j\in\mathcal{I}}v_{j}g(i,i^{\prime},j) and i𝒩jvjg(i,i,j)\sum_{i^{\prime}\in\mathcal{N}}\sum_{j\in\mathcal{I}}v_{j}g(i^{\prime},i,j), respectively).

Like before, 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖢𝖺𝗉𝗌\mathsf{BarterSV\text{-}Caps} admits the following similar IP formulation, whose LP relaxation 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖫𝖯-𝖢𝖺𝗉𝗌\mathsf{BarterSV\text{-}LP\text{-}Caps} is obtained by relaxing (18e): for e=(i,j,ri,j)e=(\ell_{i,j},r_{i^{\prime},j}) we relax xex_{e} to lie in [0,min{ηi(j),ωi(j)}][0,\min\{\eta_{i}(j),\omega_{i^{\prime}}(j)\}].

(18a) max\displaystyle\max\quad eEwexe\displaystyle\sum_{e\in E}w_{e}x_{e}
(18b) subj. to x(ij)ηi(j),\displaystyle x(\ell_{ij})\leq\eta_{i}(j), i[n],ijLi\displaystyle i\in[n],\ell_{ij}\in L_{i}
(18c) x(rij)ωi(j),\displaystyle x(r_{ij})\leq\omega_{i}(j), i[n],rijRi\displaystyle i\in[n],r_{ij}\in R_{i}
(18d) aLix(a)va=bRix(b)vb,\displaystyle\sum_{a\in L_{i}}x(a)v_{a}=\sum_{b\in R_{i}}x(b)v_{b}, i[n]\displaystyle i\in[n]
(18e) xe+,\displaystyle x_{e}\in\mathbb{Z}^{+}, e=(ij,rij)E\displaystyle e=(\ell_{ij},r_{i^{\prime}j})\in E .

6.1. Equivalence of 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV}-Caps with single item copies

By explicitly making distinct items for each capacitated item in a 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖢𝖺𝗉𝗌\mathsf{BarterSV\text{-}Caps} instance we can reduce said instance to one of 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV}. Although the resulting 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} and 𝖵𝖺𝗅𝖡𝖺𝗅𝖬𝖺𝗍𝖼𝗁𝗂𝗇𝗀\mathsf{ValBalMatching} formulation now has a number of vertices exponential in the size of the input, this is not a problem. We only use this 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} instance as a thought experiment to establish that BarterDR gives correct output. To avoid solving any large instances we first solve the polynomial-sized 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖫𝖯-𝖢𝖺𝗉𝗌\mathsf{BarterSV\text{-}LP\text{-}Caps} and subsequently observe, in Lemma 2, that BarterDR’s input is in fact of polynomial size.

Lemma 6.0.

Any instance of 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖢𝖺𝗉𝗌\mathsf{BarterSV\text{-}Caps} has a corresponding (larger but) equivalent 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} instance with unit capacities.

Proof of Lemma 1.

The instance with unit copies of each item will be large, but it is only a thought experiment; we neither solve the corresponding LP nor write the full graph down. Fix an agent i and an item j\in H_i. Make \eta_i(j) copies of this vertex, each with unit capacity, say \ell_{ij1},\ell_{ij2},\dots,\ell_{ij\eta_i(j)}. Similarly, for an item j'\in W_i, make \omega_i(j') copies r_{ij'1},r_{ij'2},\dots,r_{ij'\omega_i(j')}. Like before, add edges between all vertices corresponding to the same original item. Edges between copies inherit the weight of the original edge; i.e., if e=(\ell_{ijk_1},r_{i'jk_2}) and f=(\ell_{ij},r_{i'j}) then w_e=w_f. Call this new set of edges over vertex copies E'.

To see the two formulations are equivalent, we show that every feasible solution x (with x_e\geq 0 for all e\in E) of \mathsf{BarterSV\text{-}LP\text{-}Caps} corresponds to a feasible solution z (with z_e\in[0,1] for all e\in E') of \mathsf{BarterSV\text{-}LP} with the same objective value, and vice versa. Let e=(\ell_{ij},r_{i'j}) and write x_e=k+r with k\in\mathbb{Z}^+ and 0\leq r<1. Correspondingly, let e_p=(\ell_{ijp},r_{i'jp})\in E' and set z_{e_1},z_{e_2},\dots,z_{e_k} all equal to 1, z_{e_{k+1}}=r, and all remaining z_{e_p}=0. Then x is feasible if and only if z is, since both conditions amount to k+r\leq\min\{\eta_i(j),\omega_{i'}(j)\}. Moreover, x_e and (z_{e_1},\dots,z_{e_{k+1}}) each contribute (k+r)w_e to the objective and (k+r)v_j to the value given by agent i and received by agent i'.
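The copy-splitting step used in this argument is easy to state in code; a small sketch (function name hypothetical) that maps one capacitated edge value x_e = k + r to unit-capacity copy values, with at most one fractional copy:

```python
import math

def split_into_unit_copies(x_e, num_copies):
    """Map x_e = k + r (k integer, 0 <= r < 1) onto num_copies unit-capacity
    edge copies: k copies set to 1, one copy set to r (if r > 0), the rest 0."""
    k = math.floor(x_e)
    r = x_e - k
    tail = [r] if r > 0 else []
    copies = [1.0] * k + tail + [0.0] * (num_copies - k - len(tail))
    return copies
```

Since each group of copies carries at most one value strictly between 0 and 1, the bound of Lemma 2 on floating variables follows directly from this construction.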

Therefore we always write and solve \mathsf{BarterSV\text{-}LP\text{-}Caps}, and use \mathsf{BarterSV\text{-}LP} only as a thought experiment to facilitate the presentation of the problem. It is straightforward to check that the size of \mathsf{BarterSV\text{-}LP\text{-}Caps} is \operatorname{poly}(|\mathcal{I}|,n,\log\eta,\log\omega) where \eta=\max_{i\in[n],j\in H_i}\eta_i(j) and \omega=\max_{i\in[n],j\in W_i}\omega_i(j). ∎

Let EE^{\prime} correspond to the graph with vertex copies as outlined in the proof of Lemma 1.

Lemma 6.0.

The solution \{z_e\}_{e\in E'} to \mathsf{BarterSV\text{-}LP} obtained from a solution of \mathsf{BarterSV\text{-}LP\text{-}Caps} as in Lemma 1 has at most |E| floating variables.

Proof of Lemma 2.

As in the proof of Lemma 1, corresponding to each group z_{e_1},z_{e_2},\dots there is at most one z_{e_p}=r with 0<r<1. Therefore, the number of floating edges is at most |E|. ∎

In conclusion, by Lemmas 1 and 2, BarterDR is also an efficient algorithm for the more general 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵-𝖢𝖺𝗉𝗌\mathsf{BarterSV\text{-}Caps}.

7. Fairness

Fairness is an important consideration when resource allocation algorithms are deployed in the real world. Theorem 3 allows for adding fairness constraints to \mathsf{BarterSV\text{-}LP}. Previous works such as (Duppala et al., 2023; Esmaeili et al., 2022, 2023) studied various group fairness notions and formulated fair variants of problems like Clustering, Set Packing, etc., by adding fairness constraints to the linear programs of the respective optimization problems.

Consider a toy example of such an approach where we are given \ell communities G1,,G[n]G_{1},\dots,G_{\ell}\subseteq[n] of agents coming together to thicken the market. In order to incentivize said communities to join the centralized exchange, the algorithm designer may promise that each community GpG_{p} will receive at least μp\mu_{p} units of value on average. By adding the constraints

(19) \sum_{i\in G_p}\sum_{j\in W_i}x(r_{ij})v_j\geq\mu_p,\quad p\in[\ell]

to \mathsf{BarterSV\text{-}LP}, the algorithm designer ensures that the expected utility of each group G_p is at least \mu_p. More precisely, (P1) and the linearity of expectation ensure

\mathbb{E}\Big[\sum_{i\in G_p}\sum_{j\in W_i}X(r_{ij})v_j\Big]=\sum_{i\in G_p}\sum_{j\in W_i}\mathbb{E}[X(r_{ij})]v_j=\sum_{i\in G_p}\sum_{j\in W_i}x(r_{ij})v_j\geq\mu_p.

The same rationale can be extended to provide individual guarantees (in expectation) by adding analogous constraints for each agent. We conclude this brief discussion by highlighting the versatility of LPs and, as a result, of BarterDR.

8. Hardness of 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV}

We first prove Theorem 2: it is NP-hard to find any non-empty allocation satisfying D_k=0 for all agents k. By non-empty we mean the corresponding LP solution x\neq 0, i.e., at least one agent gives away an item. The proof proceeds by reducing from the NP-hard problem Partition.

Definition 0 (Partition).

A Partition instance takes a set S={a1,a2,,an}S=\{a_{1},a_{2},\dots,a_{n}\} of nn positive integers summing to an integer 2T2T. The goal of Partition is to determine if SS can be partitioned into disjoint subsets S1S_{1} and S2S_{2} such that each subset sums exactly to an integer TT.
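Although Partition is NP-hard, small instances are decidable by the standard pseudo-polynomial subset-sum dynamic program; a brief sketch for intuition (function name hypothetical, not part of the reduction):

```python
def has_partition(S):
    """Subset-sum DP: decide whether the multiset S can be split into
    two disjoint subsets that each sum to T = sum(S) / 2."""
    total = sum(S)
    if total % 2:                 # odd total: no equal split exists
        return False
    T = total // 2
    reachable = {0}               # subset sums achievable so far
    for a in S:
        reachable |= {s + a for s in reachable if s + a <= T}
    return T in reachable
```

The running time is O(n·T), polynomial in the magnitude of the numbers but not in their bit length, consistent with NP-hardness.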

Lemma 8.0.

Given a Partition instance, it can be reduced in polynomial time to a corresponding 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} instance with two agents.

Proof.

Consider an instance I=(S,2T) of the Partition problem, where S=\{a_1,a_2,\dots,a_n\} is a set of positive integers with \sum_{i\in[n]}a_i=2T. Given I, the corresponding \mathsf{BarterSV} instance is constructed as follows.

Let the set of items ={i1,i2,,in,in+1}\mathcal{I}=\{i_{1},i_{2},\dots,i_{n},i_{n+1}\} with item values vj:=ajv_{j}:=a_{j} for each item ij,j[n]i_{j},j\in[n] and vn+1:=Tv_{n+1}:=T for item in+1i_{n+1}. There are only two agents. Agent 11 has item lists H1:={i1,,in}H_{1}:=\{i_{1},\dots,i_{n}\} and W1:={in+1}W_{1}:=\{i_{n+1}\}. Symmetrically, agent 22 has item lists W2:={ij:j[n]}W_{2}:=\{i_{j}:j\in[n]\} and H2:={in+1}H_{2}:=\{i_{n+1}\}. The particular weights wew_{e} of allocating items (i.e., allocation’s utility) do not matter as we only care about whether some non-empty allocation exists. ∎

Recall the goal is to show that a non-empty allocation with D_k=0 for each agent k\in[2] exists if and only if the corresponding Partition instance has a solution.

Lemma 8.0.

The \mathsf{BarterSV} instance of Lemma 2 admits a non-empty allocation with D_k=0 for each agent k if and only if the corresponding Partition instance has a solution; moreover, a solution to either problem can be converted into a solution to the other in polynomial time.

Proof.

Forward direction (Partition \implies \mathsf{BarterSV}). Given a solution (S_1,S_2) to the Partition instance, the corresponding \mathsf{BarterSV} instance has the following solution. Allocate the items \{i_j:j\in S_1\}\subseteq H_1 to agent 2 and allocate the item i_{n+1}\in H_2 to agent 1. The value of the items received and given by both agents is then exactly T, resulting in a non-empty allocation with D_1=D_2=0.

Backward direction (\mathsf{BarterSV} \implies Partition). Take a non-empty allocation of items \mathcal{I} with D_k=0 for each agent k\in[2]. In a non-empty allocation some agent gives away positive value, and D_k=0 forces the other agent to return equal value; in particular i_{n+1}, the only item agent 1 desires, must be transferred, so agent 1 receives exactly T units of value and, by D_1=0, gives away exactly T units of value. Let the items agent 1 gives away (equivalently, agent 2 receives) be i_{j_1},i_{j_2},\dots,i_{j_\ell}, and let J=\{j_1,\dots,j_\ell\}. Thus \sum_{p\in J}v_p=T, and the corresponding Partition instance has solution S_1=\{a_p:p\in J\} and S_2=\{a_p:p\not\in J\}. ∎
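The two directions of the proof amount to a value-accounting identity that can be checked concretely. In the sketch below (names hypothetical), agent 1 sends the items indexed by a claimed solution half S_1 and agent 2 sends i_{n+1} of value T; both imbalances vanish exactly when S_1 sums to T:

```python
def reduction_imbalances(S, S1_idx):
    """Return (D_1, D_2) for the reduction's allocation: agent 1 gives
    the items {i_j : j in S1_idx}, agent 2 gives i_{n+1} (value T)."""
    T = sum(S) // 2
    given_1 = sum(S[j] for j in S1_idx)  # value agent 1 gives = agent 2 receives
    return given_1 - T, T - given_1      # D_1, D_2
```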

Together, Lemmas 2 and 3 prove Theorem 2: it is NP-hard to find a non-empty allocation of items with D_k=0 for every agent k.

8.1. LP approaches will not beat Di<vD_{i}<v^{*}

A natural follow-up question: is it possible to guarantee D_i<v^*-\epsilon for some \epsilon>0? At least using \mathsf{BarterSV\text{-}LP}, this is not possible. Consider the family of \mathsf{BarterSV} instances \Pi(n) wherein two agents have one item each and desire each other's item. The agents' items are j_1 and j_2, respectively, with values v_{j_1}=1 and v_{j_2}=1/n. Let the utility be that of maximizing the total value reallocated (i.e., for e=(a,b), w_e=v_a=v_b, \forall e\in E in the corresponding VBM instance). The LP solution corresponds to agent 1 giving a 1/n-fraction of her item away and agent 2 giving his entire item away, for an optimal objective value of 2/n. Even if agent 2 always gives his item away, the objective achieved is only 1/n. Therefore, to achieve the optimal objective agent 1 must give away her item with positive probability, leading in the worst case to D_1\geq 1-1/n=v^*-1/n. Therefore,
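The barrier on \Pi(n) is visible by brute force: enumerating all four 0/1 reallocations (a quick sketch, with the values above; function name hypothetical) shows the empty one is the only value-balanced choice, so any rounding with positive expected objective must unbalance some agent:

```python
from itertools import product

def balanced_allocations(n):
    """Enumerate the 0/1 reallocations of Pi(n): g1 = 1 iff agent 1 gives
    item j1 (value 1), g2 = 1 iff agent 2 gives item j2 (value 1/n).
    Keep those where each agent's given value equals their received value."""
    v1, v2 = 1.0, 1.0 / n
    return [(g1, g2) for g1, g2 in product([0, 1], repeat=2)
            if g1 * v1 == g2 * v2]
```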

Remark 2.

For any \epsilon>0, there exists n such that on \Pi(n) any rounding of the optimal LP solution incurs D_i\geq v^*-\epsilon with positive probability.

A moment's reflection makes it clear that the only feasible (integral) reallocation for this family of instances is the empty reallocation, i.e., the zero vector. A direction to bypass this "integrality gap"-like barrier in D_i might be to write \mathsf{BarterSV\text{-}LP} only for instances in which we know D_i=0 has a feasible non-empty integral reallocation. However, this is in and of itself a hard problem, as proved in Theorem 2.

9. Conclusion

We introduce and study \mathsf{BarterSV}, a centralized barter exchange problem where each item has a value agreed upon by the participating agents. The goal is to find an allocation/exchange that (i) maximizes the collective utility of the allocation while (ii) keeping the total value of each agent's items before and after the exchange equal. Though it is NP-hard to solve \mathsf{BarterSV} exactly, we can efficiently compute allocations with optimal expected utility where each agent's net value loss is at most a single item's value. Our problem is motivated by the proliferation of large-scale web markets on social media websites with 50,000-60,000 active users eager to swap items with one another, and is grounded in several real-world exchanges of video games, board games, digital goods and more. These exchanges have large communities, but their decentralized nature leaves much to be desired in terms of efficiency.

Future directions of this work include accounting for arbitrary item valuations, i.e., different agents may value items differently. Directly applying our algorithm does not work because we rely on the paths of G consisting of items with all-equal values. Additionally, we briefly touch on how some fairness criteria interact with our model and algorithm; however, the exploration of many important fairness criteria is left for future work. Finally, we argue why our specific LP-based approach cannot guarantee better than D_i<v^*. It remains an open question whether other approaches (LP-based or not) can beat this bound or whether improving this result is NP-hard altogether.

Acknowledgements.
Sharmila Duppala, Juan Luque and Aravind Srinivasan are all supported in part by NSF grant CCF-1918749.

References

  • (1)
  • Abbassi et al. (2013) Zeinab Abbassi, Laks V. S. Lakshmanan, and Min Xie. 2013. Fair Recommendations for Online Barter Exchange Networks. In International Workshop on the Web and Databases.
  • Abraham et al. (2007) David J. Abraham, Avrim Blum, and Tuomas Sandholm. 2007. Clearing Algorithms for Barter Exchange Markets: Enabling Nationwide Kidney Exchanges. In Proceedings of the 8th ACM Conference on Electronic Commerce (EC ’07). Association for Computing Machinery, New York, NY, USA, 295–304. https://doi.org/10.1145/1250910.1250954
  • Ashlagi et al. (2018) Itai Ashlagi, Adam Bingaman, Maximilien Burq, Vahideh Manshadi, David Gamarnik, Cathi Murphey, Alvin E. Roth, Marc L. Melcher, and Michael A. Rees. 2018. Effect of Match-Run Frequencies on the Number of Transplants and Waiting Times in Kidney Exchange. American Journal of Transplantation 18, 5 (2018), 1177–1186. https://doi.org/10.1111/ajt.14566
  • Aziz et al. (2021) Haris Aziz, Ioannis Caragiannis, Ayumi Igarashi, and Toby Walsh. 2021. Fair Allocation of Indivisible Goods and Chores. Auton Agent Multi-Agent Syst 36, 1 (Nov. 2021), 3. https://doi.org/10.1007/s10458-021-09532-8
  • Biró et al. (2009) Péter Biró, David F. Manlove, and Romeo Rizzi. 2009. Maximum Weight Cycle Packing in Directed Graphs, with Application to Kidney Exchange Programs. Discrete Math. Algorithm. Appl. 01, 04 (Dec. 2009), 499–517. https://doi.org/10.1142/S1793830909000373
  • Brubach et al. (2017) Brian Brubach, Karthik Abinav Sankararaman, Aravind Srinivasan, and Pan Xu. 2017. Algorithms to Approximate Column-sparse Packing Problems. ACM Transactions on Algorithms (TALG) 16 (2017), 1 – 32.
  • Carvalho et al. (2021) Margarida Carvalho, Xenia Klimentova, Kristiaan Glorie, Ana Viana, and Miguel Constantino. 2021. Robust models for the kidney exchange problem. INFORMS Journal on Computing 33, 3 (2021), 861–881.
  • Chekuri et al. (2010) Chandra Chekuri, Jan Vondrák, and Rico Zenklusen. 2010. Dependent randomized rounding via exchange properties of combinatorial structures. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science. IEEE, 575–584.
  • Constantino et al. (2013) Miguel Constantino, Xenia Klimentova, Ana Viana, and Abdur Rais. 2013. New insights on integer-programming models for the kidney exchange problem. European Journal of Operational Research 231, 1 (2013), 57–68.
  • Duppala et al. (2023) Sharmila Duppala, Juan Luque, John Dickerson, and Aravind Srinivasan. 2023. Group Fairness in Set Packing Problems. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-23, Edith Elkind (Ed.). International Joint Conferences on Artificial Intelligence Organization, 391–399. https://doi.org/10.24963/ijcai.2023/44 Main Track.
  • Esmaeili et al. (2023) Seyed Esmaeili, Sharmila Duppala, Davidson Cheng, Vedant Nanda, Aravind Srinivasan, and John P Dickerson. 2023. Rawlsian fairness in online bipartite matching: Two-sided, group, and individual. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37. 5624–5632.
  • Esmaeili et al. (2022) Seyed A. Esmaeili, Sharmila Duppala, John P. Dickerson, and Brian Brubach. 2022. Fair Labeled Clustering. https://doi.org/10.48550/arXiv.2205.14358 arXiv:2205.14358 [cs]
  • Fang et al. (2015) Wenyi Fang, Aris Filos-Ratsikas, Søren Kristoffer Stiil Frederiksen, Pingzhong Tang, and Song Zuo. 2015. Randomized assignments for barter exchanges: Fairness vs. efficiency. In Algorithmic Decision Theory: 4th International Conference, ADT 2015, Lexington, KY, USA, September 27–30, 2015, Proceedings. Springer, 537–552.
  • Farnadi et al. (2021) Golnoosh Farnadi, William St-Arnaud, Behrouz Babaki, and Margarida Carvalho. 2021. Individual Fairness in Kidney Exchange Programs. Proceedings of the AAAI Conference on Artificial Intelligence 35, 13 (May 2021), 11496–11505.
  • Gandhi et al. (2006) Rajiv Gandhi, Samir Khuller, Srinivasan Parthasarathy, and Aravind Srinivasan. 2006. Dependent Rounding and Its Applications to Approximation Algorithms. J. ACM 53, 3 (May 2006), 324–360. https://doi.org/10.1145/1147954.1147956
  • Glorie et al. (2014) Kristiaan M Glorie, J Joris van de Klundert, and Albert PM Wagelmans. 2014. Kidney exchange with long chains: An efficient pricing algorithm for clearing barter exchanges with branch-and-price. Manufacturing & Service Operations Management 16, 4 (2014), 498–512.
  • Herlihy (2018) Maurice Herlihy. 2018. Atomic cross-chain swaps. In ACM Symposium on Principles of Distributed Computing (PODC). 245–254.
  • Jevons (1879) William Stanley Jevons. 1879. The Theory of Political Economy. Macmillan and Company.
  • Klimentova et al. (2021) Xenia Klimentova, Ana Viana, João Pedro Pedroso, and Nicolau Santos. 2021. Fairness models for multi-agent kidney exchange programmes. Omega 102 (2021), 102333.
  • McElfresh et al. (2020) Duncan McElfresh, Michael Curry, Tuomas Sandholm, and John Dickerson. 2020. Improving Policy-Constrained Kidney Exchange via Pre-Screening. In Advances in Neural Information Processing Systems, Vol. 33. 2674–2685.
  • Noothigattu et al. (2020) Ritesh Noothigattu, Dominik Peters, and Ariel D Procaccia. 2020. Axioms for learning from pairwise comparisons. Advances in Neural Information Processing Systems 33 (2020), 17745–17754.
  • Plaut et al. (2016) Benjamin Plaut, John Dickerson, and Tuomas Sandholm. 2016. Fast optimal clearing of capped-chain barter exchanges. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 30.
  • Thyagarajan et al. (2022) Sri AravindaKrishnan Thyagarajan, Giulio Malavolta, and Pedro Moreno-Sanchez. 2022. Universal atomic swaps: Secure exchange of coins across all blockchains. In IEEE Symposium on Security and Privacy (S&P). IEEE, 1299–1316.

Appendix A Direct application of GKPS-DR fails

Example 1 is a worst-case instance where a direct application of GKPS-DR to the optimal fractional solution of \mathsf{BarterSV} results in a net loss of \sum_{a\in L_i}v_a for some agent i; i.e., agent i gives away all their items and does not receive any item from their wishlist.

Example 0.

Consider an instance of 𝖡𝖺𝗋𝗍𝖾𝗋𝖲𝖵\mathsf{BarterSV} with two agents where W1={1,2},W2={3,4}W_{1}=\{1,2\},W_{2}=\{3,4\} and H1={3,4},H2={1,2}H_{1}=\{3,4\},H_{2}=\{1,2\}. Let the values of the items be v(1)=v(2)=10v(1)=v(2)=10 and v(3)=v(4)=20v(3)=v(4)=20.

Figure 3 shows the bipartite graph of the \mathsf{BarterSV} instance from Example 1. The edges are unweighted and the optimal LP solution is x=[0.5,0.5,1,1], whose first two coordinates correspond to items given by agent 1. GKPS-DR will round both of these coordinates to 0 with positive probability, resulting in agent 2 incurring a net loss of \sum_{a\in L_2}v_a=20 units of value.

Figure 3. The bipartite graph corresponding to the \mathsf{BarterSV} instance in Example 1 is pictured. Blue and orange vertices correspond to agents 1 and 2, respectively. The optimal LP solution is x=[0.5,0.5,1,1].
Observation 1.

GKPS-DR rounding xx results in the vector X=[0,0,1,1]X=[0,0,1,1] with positive probability; this is the worst case for agent 22 where their net loss is aL2va=20\sum_{a\in L_{2}}v_{a}=20 units of value.

The above example can be easily generalized to instances with larger item-lists and more agents where some agent ii achieves net value loss aLiva\sum_{a\in L_{i}}v_{a} with positive probability.
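In Example 1 the two fractional coordinates lie in disjoint components of the rounding graph, so GKPS-DR rounds them like two independent unbiased coin flips (an assumption of this illustration, matching the observation above). Exhaustively enumerating those outcomes (function name hypothetical) recovers the stated loss distribution:

```python
from itertools import product

def agent2_loss_distribution():
    """Agent 2 gives items 1 and 2 (value 10 each, 20 total) and receives
    item 3 and/or 4 (value 20 each) according to the rounded coordinates
    X1, X2 of x = [0.5, 0.5, 1, 1]; each Xi is 1 with probability 1/2.
    Returns {net loss of agent 2: probability}."""
    dist = {}
    for X1, X2 in product([0, 1], repeat=2):
        loss = 20 - 20 * (X1 + X2)   # value given minus value received
        dist[loss] = dist.get(loss, 0.0) + 0.25
    return dist
```

The worst case, a net loss of 20 for agent 2, occurs with probability 1/4.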

Appendix B Proof of Theorem 3

Lemma B.0.

BarterDR satisfies (J1).

Proof.

The property holds trivially for r=0r=0. Recall r=1r=1 corresponds to the output of the pre-processing step. This fact is proved in (Gandhi et al., 2006). Therefore, focus on some fixed r>1r>1 and proceed by induction. Fix eEe\in E and the CCC/CCW to be rounded 𝒫=sitii[q]\mathcal{P}=\langle s_{i}\leadsto t_{i}\rangle_{i\in[q]}. Proceed by considering the following two events.

Event A: e does not appear in \mathcal{P}, so x_e does not change this iteration and \mathbb{E}[x_e^{r+1}\mid(x_e^r=z)\wedge A]=z.
Event BB: ee appears in 𝒫\mathcal{P}, say, in path sitis_{i}\leadsto t_{i} for a fixed ii. Recall values α\alpha and β\beta from (6) are fixed and xerx_{e}^{r} is modified according to (7). Assuming eM1ie\in M^{i}_{-1} then

𝔼[xer+1(xer=z)B]=z+αvi(βα+β)βvi(αα+β)=z.\mathbb{E}[x_{e}^{r+1}\mid(x_{e}^{r}=z)\wedge B]=z+\frac{\alpha}{v_{i}}\left(\frac{\beta}{\alpha+\beta}\right)-\frac{\beta}{v_{i}}\left(\frac{\alpha}{\alpha+\beta}\right)=z.

The same holds if instead eM1ie\in M^{i}_{1}. Hence

𝔼[xer+1(xer=z)]\displaystyle\mathbb{E}[x_{e}^{r+1}\mid(x_{e}^{r}=z)] =𝔼[xer+1(xer=z)B]Pr(B)\displaystyle=\mathbb{E}[x_{e}^{r+1}\mid(x_{e}^{r}=z)\wedge B]\cdot\Pr(B)
+𝔼[xer+1(xer=z)A]Pr(A)\displaystyle\phantom{=}+\mathbb{E}[x_{e}^{r+1}\mid(x_{e}^{r}=z)\wedge A]\cdot\Pr(A)
=z(Pr(A)+Pr(B))=z.\displaystyle=z(\Pr(A)+\Pr(B))=z.

Let ZZ be the set of possible values for xerx_{e}^{r}.

𝔼[xer+1]\displaystyle\mathbb{E}[x_{e}^{r+1}] =zZ𝔼[xer+1(xer=z)]Pr(xer=z)\displaystyle=\sum_{z\in Z}\mathbb{E}[x_{e}^{r+1}\mid(x_{e}^{r}=z)]\cdot\Pr(x_{e}^{r}=z)
=zZzPr(xer=z)=𝔼[xer].\displaystyle=\sum_{z\in Z}z\cdot\Pr(x_{e}^{r}=z)=\mathbb{E}[x_{e}^{r}].

By the induction hypothesis, \mathbb{E}[x_e^{r+1}]=\mathbb{E}[x_e^r]=x_e^0. ∎
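The martingale step in Event B can be verified with exact rational arithmetic; a small check (function name hypothetical) of the update rule for an edge of M^i_{-1}:

```python
from fractions import Fraction as F

def expected_update(z, alpha, beta, v):
    """E[x_e^{r+1} | x_e^r = z, B]: the edge moves up by alpha/v with
    probability beta/(alpha+beta), and down by beta/v with probability
    alpha/(alpha+beta); the expectation is exactly z."""
    up = (z + alpha / v) * (beta / (alpha + beta))
    down = (z - beta / v) * (alpha / (alpha + beta))
    return up + down
```

Because the two drift terms cancel exactly, the equality holds for every choice of z, alpha, beta, and v, not just the sampled ones.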

Lemma B.0.

BarterDR satisfies (J2).

Proof.

The property holds trivially for r=0r=0. Recall r=1r=1 corresponds to the output of the pre-processing step. This fact is proved in (Gandhi et al., 2006). Therefore, focus on some fixed r>1r>1 and proceed by induction. Fix aLRa\in L\cup R and the CCC/CCW to be rounded 𝒫=sitii[q]\mathcal{P}=\langle s_{i}\leadsto t_{i}\rangle_{i\in[q]}. Recall V(𝒫)V(\mathcal{P}) denotes the endpoints of the paths of 𝒫\mathcal{P}. Proceed by cases.

Case AA: aV(𝒫)a\not\in V(\mathcal{P}). Then either aa does not appear in 𝒫\mathcal{P} or aa appears in 𝒫\mathcal{P} but with two edges incident on it. In the former case clearly xar+1=xarx_{a}^{r+1}=x_{a}^{r}. In the latter case, the change of each incident edge is equal in magnitude and opposite in sign (since one edge belongs to M1iM^{i}_{-1} and the other to M1iM^{i}_{1}) therefore xar+1=xarx_{a}^{r+1}=x_{a}^{r} as well. Thus by the IH x0(a)xr+1(a)x0(a)\lfloor x^{0}(a)\rfloor\leq x^{r+1}(a)\leq\lceil x^{0}(a)\rceil.

Case B: a\in V(\mathcal{P}). There is a single incident edge e\in N(a). Without loss of generality, said edge belongs to path s_i\leadsto t_i and thus to M^i_{-1} (the proof for M^i_1 is identical). Then either x^{r+1}(a)=x^r(a)+\alpha/v_i or x^{r+1}(a)=x^r(a)-\beta/v_i. In either case, by definition of \alpha and \beta (i.e., (6)), \alpha and \beta are small enough that \lfloor x^r(a)\rfloor\leq x^{r+1}(a)\leq\lceil x^r(a)\rceil. Observe \lfloor x^0(a)\rfloor=\lfloor x^r(a)\rfloor and \lceil x^0(a)\rceil=\lceil x^r(a)\rceil, so the claim follows.

Having handled exhaustive cases, the proof is complete. ∎

Lemma B.0.

BarterDR satisfies (J3).

Proof.

The property holds trivially for r=0r=0. Recall r=1r=1 corresponds to the output of the pre-processing step. This fact is proved in (Gandhi et al., 2006). Therefore, focus on some fixed r>1r>1 and proceed by induction. Fix a vertex aa and a subset of edges SS incident on aa like in (J3). Also fix the CCC/CCW to be rounded 𝒫=sitii[q]\mathcal{P}=\langle s_{i}\leadsto t_{i}\rangle_{i\in[q]}. Proceed based on the following events.

Event AA: no edge in SS has its value modified. Then 𝔼[eSxer+1A]=𝔼[eSxerA]\mathbb{E}[\prod_{e\in S}x^{r+1}_{e}\mid A]=\mathbb{E}[\prod_{e\in S}x^{r}_{e}\mid A].

Event BB: two edges e1,e2Se_{1},e_{2}\in S have their values modified. Said edges must both belong to sitis_{i}\leadsto t_{i}, for some fixed ii, with one belonging to M1iM^{i}_{-1} and the other to M1iM^{i}_{1}; say e1M1ie_{1}\in M^{i}_{1} and e2M1ie_{2}\in M^{i}_{-1}. Then

(xe1r+1,xe2r+1)={(xe1r+α/vi,xe2rα/vi) with probability β/(α+β)(xe1rβ/vi,xe2r+β/vi) with probability α/(α+β)(x_{e_{1}}^{r+1},x_{e_{2}}^{r+1})=\begin{cases}(x_{e_{1}}^{r}+\alpha/v_{i},x_{e_{2}}^{r}-\alpha/v_{i})\text{ with probability }\beta/(\alpha+\beta)\\ (x_{e_{1}}^{r}-\beta/v_{i},x_{e_{2}}^{r}+\beta/v_{i})\text{ with probability }\alpha/(\alpha+\beta)\end{cases}

where α\alpha and β\beta are fixed per (6). Let S1=S{e1,e2}S_{1}=S-\{e_{1},e_{2}\}. Then

\mathbb{E}\left[\prod_{e\in S}x_e^{r+1}\mid(\forall e\in S,x_e^r=z_e)\wedge B\right]=\mathbb{E}\left[x_{e_1}^{r+1}\cdot x_{e_2}^{r+1}\mid(\forall e\in S,x_e^r=z_e)\wedge B\right]\prod_{e\in S_1}z_e.

The above expectation can be written as (Ψ+Φ)eS1ze(\Psi+\Phi)\prod_{e\in S_{1}}z_{e}, where

Ψ\displaystyle\Psi =(β/(α+β))(ze1+α)(ze2α) and\displaystyle=(\beta/(\alpha+\beta))\cdot(z_{e_{1}}+\alpha)\cdot(z_{e_{2}}-\alpha)\text{ and }
Φ\displaystyle\Phi =(α/(α+β))(ze1β)(ze2+β).\displaystyle=(\alpha/(\alpha+\beta))\cdot(z_{e_{1}}-\beta)\cdot(z_{e_{2}}+\beta).

A direct expansion shows \Psi+\Phi=z_{e_1}z_{e_2}-\alpha\beta\leq z_{e_1}z_{e_2}. Thus, for any fixed \{e_1,e_2\}\subseteq S, for any fixed (\alpha,\beta), and for fixed values of z_e, the following holds:

𝔼[eSxer+1(eS,xer=ze)B]eSze.\mathbb{E}\left[\prod_{e\in S}x_{e}^{r+1}\mid(\forall e\in S,x_{e}^{r}=z_{e})\wedge B\right]\leq\prod_{e\in S}z_{e}.

Hence, 𝔼[eSxer+1B]𝔼[eSxerB]\mathbb{E}[\prod_{e\in S}x_{e}^{r+1}\mid B]\leq\mathbb{E}[\prod_{e\in S}x_{e}^{r}\mid B].
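The claim \Psi+\Phi\leq z_{e_1}z_{e_2} can be double-checked with exact rational arithmetic: expanding gives \Psi+\Phi=z_{e_1}z_{e_2}-\alpha\beta (a quick sketch, function name hypothetical):

```python
from fractions import Fraction as F

def psi_plus_phi(z1, z2, alpha, beta):
    """Psi + Phi exactly as displayed in the proof of (J3)."""
    psi = (beta / (alpha + beta)) * (z1 + alpha) * (z2 - alpha)
    phi = (alpha / (alpha + beta)) * (z1 - beta) * (z2 + beta)
    return psi + phi
```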

Event CC: exactly one edge in the set SS has its value modified. Let CC denote the event that edge e1Se_{1}\in S has its value changed in the following probabilistic way

x_{e_1}^{r+1}=\begin{cases}x_{e_1}^r+\alpha/v_i\text{ with probability }\beta/(\alpha+\beta)\\ x_{e_1}^r-\beta/v_i\text{ with probability }\alpha/(\alpha+\beta).\end{cases}

Thus, \mathbb{E}[x_{e_1}^{r+1}\mid(\forall e\in S,x_e^r=z_e)\wedge C]=z_{e_1}. Letting S_1=S-\{e_1\}, we get that \mathbb{E}[\prod_{e\in S}x_e^{r+1}\mid(\forall e\in S,x_e^r=z_e)\wedge C] equals

\mathbb{E}[x_{e_1}^{r+1}\mid(\forall e\in S,x_e^r=z_e)\wedge C]\prod_{e\in S_1}z_e=\prod_{e\in S}z_e.

Since the equation holds for any e_1\in S, for any values of z_e, and for any (\alpha,\beta), we have \mathbb{E}[\prod_{e\in S}x_e^{r+1}\mid C]=\mathbb{E}[\prod_{e\in S}x_e^r\mid C]. ∎

Proof of Theorem 3.

By Lemmas 1 and 2 BarterDR satisfies (P1) and (P2). Let TT be the last iteration of BarterDR. From Lemma 3 we have

\Pr\Big(\bigwedge_{e\in S}(X_e=1)\Big)=\mathbb{E}\Big[\prod_{e\in S}x_e^{T+1}\Big]\leq\mathbb{E}\Big[\prod_{e\in S}x_e^1\Big]\leq\prod_{e\in S}x_e^0=\prod_{e\in S}\Pr(X_e=1).

The proof for c=0c=0 (i.e., Pr(Xe=0)\Pr(X_{e}=0)) is identical. Therefore, BarterDR satisfies (P3). ∎