
Bargaining dynamics in exchange networks

Mohsen Bayati Operations and Information Technology, Graduate School of Business, Stanford University    Christian Borgs Microsoft Research New England    Jennifer Chayes Microsoft Research New England    Yashodhan Kanoria Department of Electrical Engineering, Stanford University, ykanoria@stanford.edu, (650) 353-0476    Andrea Montanari Department of Electrical Engineering and Department of Statistics, Stanford University
Abstract

We consider a one-sided assignment market or exchange network with transferable utility and propose a model for the dynamics of bargaining in such a market. Our dynamical model is local, involving iterative updates of ‘offers’ based on estimated best alternative matches, in the spirit of pairwise Nash bargaining. We establish that when a balanced outcome (a generalization of the pairwise Nash bargaining solution to networks) exists, our dynamics converges rapidly to such an outcome. We extend our results to the cases of (i) general agent ‘capacity constraints’, i.e., an agent may be allowed to participate in multiple matches, and (ii) ‘unequal bargaining powers’ (where we also find a surprising change in rate of convergence).

Keywords: Nash bargaining, network, dynamics, convergence, matching, assignment, exchange, balanced outcomes.
JEL classification: C78

1 Introduction

Bargaining has been heavily studied in the economics and sociology literature, e.g., [32, 36, 26, 35]. While the case of bargaining between two agents is fairly well understood [32, 36, 26], less is known about the results of bargaining on networks. We consider exchange networks [13, 27], also called assignment markets [39, 33], where agents occupy the nodes of a network, and edges represent potential partnerships between pairs of agents, which can generate some additional value for these agents. To form a partnership, the pair of agents must reach an agreement on how to split the value of the partnership. Agents are constrained on the number of partnerships they can participate in, for instance, under a matching constraint, each agent can participate in at most one partnership. The fundamental question of interest is: Who will partner with whom, and on what terms? Such a model is relevant to the study of the housing market, the labor market, the assignment of interns to hospitals, the marriage market and so on. An assignment model is suitable for markets with heterogeneous indivisible goods, where trades are constrained by a network structure.

Balanced outcomes (also called symmetrically pairwise-bargained allocations in [33] or Nash bargaining solutions in [27]) generalize the pairwise Nash bargaining solution to this setting. The key issue here is the definition of the threats or best alternatives of participants in a match — these are defined by assuming the incomes of other potential partners to be fixed. In a balanced outcome on a network, each pair plays according to the local Nash bargaining solution thus defined. Balanced outcomes have been found to possess some favorable properties:

Predictive power. The set of balanced outcomes refines the set of stable outcomes (the core), where players have no incentive to deviate from their current partners. For instance, in the case of a two player network, all possible deals are stable, but there is a unique balanced outcome. Balanced outcomes have been found to capture various experimentally observed effects on small networks [13, 27].

Computability. Kleinberg and Tardos [27] provide an efficient centralized algorithm to compute balanced outcomes. They also show that balanced outcomes exist if and only if stable outcomes exist.

Connection to cooperative game theory. The set of balanced outcomes is identical to the intersection of the core and the prekernel of the corresponding cooperative game [33, 4].

However, this leaves unanswered the question of whether balanced outcomes can be predictive on large networks, since there was previously no dynamical description of how players can find such an outcome via a bargaining process.

In this work, we consider an assignment market with arbitrary network structure and edge weights that correspond to the value of the potential partnership on that edge. We present a model for the bargaining process in such a network setting, under a capacity constraint on the number of exchanges that each player can establish. We define a model that satisfies two important requirements: locality and convergence.

Locality. It is reasonable to assume that agents have accurate information about their neighbors in the network (their negotiation partners). Specifically, an agent is expected to know the weights of the edges with each of her negotiation partners. Further, for $(ij)\in E$, agent $i$ may estimate the ‘best alternative’ of agent $j$ in the current ‘state’ of negotiations (for instance, $i$ may form this estimate based on her conversation with $j$; we do not present a game theoretic treatment with fully rational/strategic agents in this work, cf. Section 6). On the other hand, it is unlikely that the offer made to an agent during a negotiation depends on arbitrary other agents in the network. We thus require that agents make ‘myopic’ choices on the basis of their neighborhood in the exchange network. This is consistent with the bulk of the game theory literature on learning [18, 10].

Convergence. The duration of a negotiation is unlikely to depend strongly on the overall network size. For instance, the negotiation on the price of a house should not depend too much on the size of the town in which it takes place, all other things being equal. Thus a realistic model for negotiation should converge to a fixed point (hence to a set of exchange agreements) in a time roughly independent of the network size.

Our dynamical model is fairly simple. Players compute the current best alternative to each exchange, both for them, and for their partner. On the basis of that, they make a new offer to their partner according to the pairwise Nash bargaining solution. This of course leads to a change in the set of best alternatives at the next time step. We make the assumption that ‘pairing’ occurs at the end, or after several iterative updates, thus suppressing the effect of agents pairing up and leaving.

This dynamics is of course local. Each agent only needs to know the offers she is receiving, as well as the offers that her potential partner is receiving. The technical part of this paper is therefore devoted to the study of the convergence properties of this dynamics.

Remarkably, we find that it converges rapidly, in time roughly independent of the network size. Further its fixed points are not arbitrary, but rather a suitable generalization of Nash bargaining solutions [13, 33] introduced in the context of assignment markets and exchange networks (also referred to as ‘balanced outcomes’ [27] or ‘symmetrically pairwise-bargained allocations’ [33]). Thus, our work provides a dynamical justification of Nash bargaining solutions in assignment markets.

We now present the mathematical definitions of bargaining networks and balanced outcomes.

The network consists of a graph $G=(V,E)$, with positive weights $w_{ij}>0$ associated to the edges $(i,j)\in E$. A player sits at each node of this network, and two players connected by edge $(i,j)$ can share a profit of $w_{ij}$ dollars if they agree to trade with each other. Each player can trade with at most one of her neighbors (this is called the $1$-exchange rule), so that a set of valid trading pairs forms a matching $M$ in the graph $G$.

We define an outcome or trade outcome as a pair $(M,\underline{\gamma})$, where $M\subseteq E$ is a matching of $G$ and $\underline{\gamma}=\{\gamma_{i}\,:\,i\in V\}$ is the vector of players’ profits. This means that $\gamma_{i}\geq 0$, that $(i,j)\in M$ implies $\gamma_{i}+\gamma_{j}=w_{ij}$, and that $\gamma_{i}=0$ for every node $i$ unmatched under $M$.

A balanced outcome, or Nash bargaining (NB) solution, is a trade outcome that satisfies the additional requirements of stability and balance. Denote by $\partial i$ the set of neighbors of node $i$ in $G$.

Stability. If player $i$ is trading with $j$, then she cannot earn more by simply changing her trading partner. Formally, $\gamma_{i}+\gamma_{j}\geq w_{ij}$ for all $(i,j)\in E\setminus M$.

Balance. If player ii is trading with jj, then the surplus of ii over her best alternative must be equal to the surplus of jj over his best alternative. Mathematically,

$$\gamma_{i}-\max_{k\in{\partial i}\backslash j}(w_{ik}-\gamma_{k})_{+}=\gamma_{j}-\max_{l\in{\partial j}\backslash i}(w_{jl}-\gamma_{l})_{+} \qquad (1)$$

for all $(i,j)\in M$. Here $(x)_{+}$ refers to the non-negative part of $x$, i.e., $(x)_{+}\equiv\max(0,x)$.
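To make the two conditions concrete, the following is a minimal sketch (ours, not from the paper) of how stability and the balance condition (1) can be checked for a candidate outcome $(M,\underline{\gamma})$; the graph encoding and helper names are illustrative assumptions.

```python
# Sketch (ours): check stability and balance, Eq. (1), of an outcome (M, gamma).
# Assumptions: w[(i, j)] == w[(j, i)] for every edge; neighbors[i] lists i's neighbors.

def best_alt(i, j, gamma, w, neighbors):
    """Best alternative of i excluding j: max over k in ∂i, k != j, of (w_ik - gamma_k)_+ (0 if none)."""
    return max([max(w[(i, k)] - gamma[k], 0.0) for k in neighbors[i] if k != j], default=0.0)

def is_stable(M, gamma, w, neighbors, tol=1e-9):
    matched_nodes = {v for e in M for v in e}
    for (i, j) in {tuple(sorted(e)) for e in w}:            # each undirected edge once
        if (i, j) in M or (j, i) in M:
            if abs(gamma[i] + gamma[j] - w[(i, j)]) > tol:  # matched pairs split w_ij exactly
                return False
        elif gamma[i] + gamma[j] < w[(i, j)] - tol:         # no blocking pair on unmatched edges
            return False
    return all(abs(gamma[i]) <= tol for i in gamma if i not in matched_nodes)

def is_balanced(M, gamma, w, neighbors, tol=1e-9):
    # Eq. (1): equal surplus over the best alternative on every matched edge.
    return all(abs((gamma[i] - best_alt(i, j, gamma, w, neighbors))
                   - (gamma[j] - best_alt(j, i, gamma, w, neighbors))) <= tol
               for (i, j) in M)
```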

It turns out that the interplay between the $1$-exchange rule and the stability and balance conditions results in highly non-trivial predictions regarding the influence of network structure on individual earnings.

Figure 1: Examples of networks and corresponding balanced outcomes. The network $G_{1}$ admits a unique balanced outcome, $G_{2}$ admits multiple balanced outcomes, and $G_{3}$ admits no balanced outcome. For $G_{2}$ one solution is shown inside the square and the other solution is outside.

We conclude with some examples of networks and corresponding balanced outcomes (see Figure 1).

The network $G_{1}$ has a unique balanced outcome, with nodes $a$ and $c$ forming a partnership with the split $\gamma_{a}=0.5$, $\gamma_{c}=1.5$. Node $d$ remains isolated with $\gamma_{d}=0$. The best alternative of node $c$ is $(w_{cd}-\gamma_{d})_{+}=1$, whereas it is 0 for node $a$, and the excess of $2-1=1$ is split equally between $a$ and $c$, so that each earns a surplus of $0.5$ over their outside alternatives.

The network $G_{2}$ admits multiple balanced outcomes. Each balanced outcome involves the pairing $M=\{(e,f),(h,i)\}$. The earnings $\gamma_{e}=0.5$, $\gamma_{f}=1.5$, $\gamma_{h}=2$, $\gamma_{i}=1$ are balanced, and so is the symmetric counterpart of this earnings vector, $\gamma_{e}=1.5$, $\gamma_{f}=0.5$, $\gamma_{h}=1$, $\gamma_{i}=2$. In fact, every convex combination of these two earnings vectors is also balanced.

The network $G_{3}$ does not admit any stable outcome, and hence does not admit any balanced outcome. To see this, observe that for any outcome, there is always a pair of agents who can benefit by deviating.

1.1 Related work

Recall the LP relaxation to the maximum weight matching problem

$$\begin{aligned}
\text{maximize}\quad & \sum_{(i,j)\in E}w_{ij}x_{ij}, & (2)\\
\text{subject to}\quad & \sum_{j\in{\partial i}}x_{ij}\leq 1\quad\forall i\in V,\\
& x_{ij}\geq 0\quad\forall (i,j)\in E\,.
\end{aligned}$$

The dual problem to (2) is

$$\begin{aligned}
\text{minimize}\quad & \sum_{i\in V}y_{i}, & (3)\\
\text{subject to}\quad & y_{i}+y_{j}\geq w_{ij}\quad\forall (i,j)\in E,\\
& y_{i}\geq 0\quad\forall i\in V.
\end{aligned}$$
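As a concrete illustration (ours, not part of the paper), the primal (2) and dual (3) can be set up and solved with an off-the-shelf LP solver. The small path instance below is illustrative; here the primal optimum happens to be integral, so by the results recalled next a stable outcome exists.

```python
# Sketch (ours): solve LP (2) and its dual (3) with scipy on a small path instance.
import numpy as np
from scipy.optimize import linprog

nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("C", "D")]
w = np.array([8.0, 6.0, 2.0])                   # illustrative weights

# Node-edge incidence matrix: one row per node, one column per edge.
A = np.array([[1.0 if v in e else 0.0 for e in edges] for v in nodes])

# Primal (2): maximize w.x  s.t.  A x <= 1, x >= 0  (linprog minimizes, so negate w).
primal = linprog(-w, A_ub=A, b_ub=np.ones(len(nodes)), bounds=(0, None), method="highs")

# Dual (3): minimize sum_i y_i  s.t.  y_i + y_j >= w_ij, y >= 0.
dual = linprog(np.ones(len(nodes)), A_ub=-A.T, b_ub=-w, bounds=(0, None), method="highs")

print("primal x:", np.round(primal.x, 6))       # integral here: edges (A,B) and (C,D) selected
print("LP value:", -primal.fun, "=", dual.fun)  # strong duality: both equal 10
```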

Stable outcomes were studied by Sotomayor [40].

Proposition 1.

[40] Stable outcomes exist if and only if the linear programming relaxation (2) of the maximum weight matching problem on $G$ admits an integral optimum. Further, if $(M,\underline{\gamma})$ is a stable solution, then $M$ is a maximum weight matching and $\underline{\gamma}$ is an optimum solution to the dual LP (3).

Following [33, 13], Kleinberg and Tardos [27] first considered balanced outcomes on general exchange networks and proved that a network $G$ admits a balanced outcome if and only if it admits a stable outcome.

The same paper describes a polynomial algorithm for constructing balanced outcomes. This is in turn based on the dynamic programming algorithm of Aspvall and Shiloach [1] for solving systems of linear inequalities. However, [27] left open the question of whether the actual bargaining process converges to balanced outcomes.

Rochford [33] and recent work by Bateni et al. [4] relate the assignment market problem to the extensive literature on cooperative game theory. They find that balanced outcomes correspond to the intersection of the core and the pre-kernel of the corresponding cooperative game. A consequence of this connection is that the results of Kleinberg and Tardos [27] are implied by previous work in the economics literature. The existence result follows from Proposition 1 and the fact that if the core of a cooperative game is non-empty, then the intersection of the core and the pre-kernel is non-empty. Efficient computability follows from work by Faigle et al. [17], who provide a polynomial time algorithm for finding balanced outcomes. (In fact, Faigle et al. [17] work in the more general setting of cooperative games. Their algorithm involves local ‘transfers’, alternating with a non-local LP-based step after every $O(n^{2})$ transfers.)

However, [33, 4] also leave open the twin questions of finding (i) a natural model for bargaining, and (ii) convergence (or not) to NB solutions.

Azar and co-authors [2] studied the question of whether a balanced outcome can be produced by a local dynamics, and answered it positively. (Stearns [43] defined a very similar dynamics and proved convergence for general cooperative games; that dynamics can be interpreted in terms of the present model using the correspondence with cooperative games discussed in [4].) Their results left, however, two outstanding challenges: (I) The algorithm analyzed by these authors first selects a matching $M$ in $G$ using the message passing algorithm studied in [5, 22, 6, 37], corresponding to the pairing of players that trade. In a second phase the algorithm determines the profit of each player. While such an algorithm can be implemented in a distributed way, Azar et al. point out that it is not entirely realistic. Indeed, the rules of the dynamics change abruptly after the matching is found. Further, if the pairing is established at the outset, the players lose their bargaining power. (II) The bound on the convergence time proved in [2] is exponential in the network size, and therefore does not provide a solid justification for convergence to NB solutions in large networks.

1.2 Our contribution

The present paper (based partly on our recent conference papers [25, 23]) aims at tackling these challenges. First we introduce a natural dynamics that is interpretable as a realistic negotiation process. We show that the fixed points of the dynamics are in one-to-one correspondence with NB solutions, and prove that it converges to such solutions. Moreover, we show that the convergence to approximate NB solutions is fast. Furthermore, we are able to treat the more general case of nodes with unsymmetrical bargaining powers and generalize the result of [27] on the existence of NB solutions to this context. These results are obtained through a new and seemingly general analysis method that builds on powerful quantitative estimates on mappings in Banach spaces [3]. For instance, our approach allows us to prove that a simple variant of the edge balancing dynamics of [2] converges in polynomial time (see Section 7).

We consider various modifications to the model and analyze the results. One direction is to allow arbitrary integer ‘capacity constraints’ that capture the maximum number of deals that a particular node is able to simultaneously participate in (the model defined above corresponds to a capacity of one for each node). Such a model would be relevant, for example, in the context of a job market, where a single employer may have more than one opening available. We show that many of our results generalize to this model in Section 5.

Another well motivated modification is to depart from the assumption of symmetry/balance and allow nodes to have different ‘bargaining powers’. Rochford and Crawford [34] mention this modification in passing, with the remark that it “…seems to yield no new insights”. Indeed, we show here (see Section 4) that our asymptotic convergence results generalize to the unsymmetrical case. However, surprisingly, we find that the natural dynamics may now take exponentially long to converge. We find that this can occur even in a two-sided network with the ‘sellers’ having slightly more bargaining power than the ‘buyers’. Thus, a seemingly minor change in the model appears to drastically change the convergence properties of our dynamics. Other algorithms, like those of Kleinberg and Tardos [27] and Faigle et al. [17], also fail to generalize, suggesting that we may in fact lose computability of solutions in allowing asymmetry. However, we show that a suitable modification to the bargaining process yields a fully polynomial time approximation scheme (FPTAS) for the case of unequal bargaining powers. The caveat is that this algorithm, though local, is not a good model for bargaining because it fixes the matching at the outset (cf. comment (I) above).

Our dynamics and its analysis have similarities with a series of papers on using max-product belief propagation for the weighted matching problem [5, 22, 6, 37]. We discuss that connection and extensions of our results to those settings in one of our conference papers [25, Appendix F]. We obtain a class of new message passing algorithms to compute the maximum weight matching, with belief propagation and our dynamics being special cases.

Related work in sociology

Besides economists, sociologists have been interested in such markets, called exchange networks in that literature. The key question addressed by network exchange theory is how network structure influences the power balance between agents. Numerous predictive frameworks have been suggested in this context, including generalized Nash bargaining solutions [13]. Moreover, controlled experiments [45, 29, 41] have been carried out by sociologists. The typical experimental set-up studies exactly the model of assignment markets proposed by economists [39, 33]. Players are usually not provided much information beyond who their immediate neighbors are and the values of the corresponding possible deals. Typically, a number of ‘rounds’ of negotiations are run, with no change in the network, so as to allow the system to reach an ‘equilibrium’.

In addition to balanced outcomes [13], other frameworks have been suggested to predict/explain the outcomes of these experiments [12, 9, 41].

1.3 Outline of the paper

Our dynamical model of bargaining in a network is described in Section 2. We state our main results characterizing fixed points and convergence of the dynamics in Section 3.

We present two extensions of our model. In Section 4 we investigate the unsymmetrical case with nodes having different bargaining powers. We find that the (generalized) dynamics may take exponentially long to converge in this case, but we give a modified local algorithm that provides an FPTAS. We consider the case of general capacity constraints in Section 5, and show that our main results generalize to this case.

In Section 7, we prove Theorems 2 and 3 on convergence of our dynamics. We characterize fixed points in Section 8 with a proof of Theorem 1 (the proof of Theorem 4 is deferred to Appendix D). Section 9 shows polynomial time convergence on bipartite graphs (proof of Theorem 5).

We present a discussion of our results in Section 6.

Appendix A contains a discussion of variations of the natural dynamics, including time- and node-varying damping factors and asynchronous updates.

2 Dynamical model

Consider a bargaining network $G=(V,E)$, where the vertices represent agents and the edges represent potential partnerships between them. There is a positive weight $w_{ij}>0$ on each edge $(i,j)\in E$, representing the fact that players connected by edge $(i,j)$ can share a profit of $w_{ij}$ dollars if they agree to trade with each other. Each player can trade with at most one of her neighbors (this is called the $1$-exchange rule), so that a set of valid trading pairs forms a matching $M$ in the graph $G$. We define a trade outcome as in Section 1, in accordance with the above constraints.

We expect a natural dynamical description of a bargaining network to have the following properties: It should be local, i.e., involve limited information exchange along edges and processing at nodes; it should be time invariant, i.e., the players’ behavior should be the same or similar given identical local information at different times; it should be interpretable, i.e., the information exchanged along the edges should have a meaning for the players involved, and should be consistent with reasonable behavior for players.

In the model we propose, at each time $t$, each player sends a message to each of her neighbors. The message has the meaning of ‘best current alternative’. We denote the message from player $i$ to player $j$ by $\alpha_{i\backslash j}^{t}$. Player $i$ is telling player $j$ that she (player $i$) currently estimates earnings of $\alpha_{i\backslash j}^{t}$ elsewhere if she chooses not to trade with $j$.

The vector of all such messages is denoted by $\underline{\alpha}^{t}\in{\mathds{R}}_{+}^{2|E|}$. Each agent $i$ makes an ‘offer’ to each of her neighbors, based on her own ‘best alternative’ and that of her neighbor. The offer from node $i$ to $j$ is denoted by $m_{i\rightarrow j}^{t}$ and is computed according to

$$m_{i\rightarrow j}^{t}=(w_{ij}-\alpha_{i\backslash j}^{t})_{+}-\frac{1}{2}\,(w_{ij}-\alpha_{i\backslash j}^{t}-\alpha_{j\backslash i}^{t})_{+}\,. \qquad (4)$$

It is easy to deduce that this definition corresponds to the following policy: (i) an offer is always non-negative, and a positive offer is never larger than $w_{ij}-\alpha_{i\backslash j}^{t}$ (no player is interested in earning less than her current best alternative); (ii) subject to the above constraints, the surplus $(w_{ij}-\alpha_{i\backslash j}^{t}-\alpha_{j\backslash i}^{t})$ (if non-negative) is shared equally. We denote by $\underline{m}^{t}\in{\mathds{R}}_{+}^{2|E|}$ the vector of offers.

Notice that $\underline{m}^{t}$ is just a deterministic function of $\underline{\alpha}^{t}$. In the rest of the paper we shall describe the network status uniquely through the latter vector, and use $\underline{m}|_{\underline{\alpha}^{t}}$ to denote the $\underline{m}^{t}$ defined by (4) when required, so as to avoid ambiguity.

Each node can estimate its potential earning based on the network status, using

$$\gamma_{i}^{t}\equiv\max_{k\in{\partial i}}\,m_{k\rightarrow i}^{t}, \qquad (5)$$

the corresponding vector being denoted by $\underline{\gamma}^{t}\in{\mathds{R}}_{+}^{|V|}$. Notice that $\underline{\gamma}^{t}$ is also a function of $\underline{\alpha}^{t}$.

Messages are updated synchronously through the network, according to the rule

$$\alpha_{i\backslash j}^{t+1}=(1-\kappa)\,\alpha_{i\backslash j}^{t}+\kappa\max_{k\in{\partial i}\backslash j}m_{k\rightarrow i}^{t}\,. \qquad (6)$$

Here $\kappa\in(0,1]$ is a ‘damping’ factor: $(1-\kappa)$ can be thought of as the inertia on the part of the nodes in updating their current estimates (represented by outgoing messages). The use of $\kappa<1$ eliminates pathological behaviors related to synchronous updates. In particular, we observe oscillations on even-length cycles in the undamped synchronous version. We mention here that in Appendix A we present extensions of our results to various update schemes (e.g., asynchronous updates, time-varying damping factor).

Remark 2.

An update under the natural dynamics requires agent $i$ to perform $O(|\partial i|)$ arithmetic operations, and $O(|E|)$ operations in total.

Let ${W_{\rm max}}\equiv\max_{(ij)\in E}w_{ij}$. Often in the paper we take ${W_{\rm max}}=1$, since this can always be achieved by rescaling the problem, which is the same as changing units. It is easy to see that $\underline{\alpha}^{t}\in[0,{W_{\rm max}}]^{2|E|}$, $\underline{m}^{t}\in[0,{W_{\rm max}}]^{2|E|}$ and $\underline{\gamma}^{t}\in[0,{W_{\rm max}}]^{|V|}$ at all times (unless the initial condition violates these bounds). Thus we call $\underline{\alpha}$ a ‘valid’ message vector if $\underline{\alpha}\in[0,{W_{\rm max}}]^{2|E|}$.
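The update rules (4)–(6) are straightforward to simulate. The following sketch (our own illustration; data structures and names are not from the paper) performs damped synchronous updates of the messages $\alpha_{i\backslash j}$, the offers $m_{i\to j}$, and the earnings estimates $\gamma_i$.

```python
# Sketch (ours): synchronous natural dynamics, Eqs. (4)-(6).
# Assumptions: w[(i, j)] == w[(j, i)] for every edge; neighbors[i] lists i's neighbors.

def offers(alpha, w, neighbors):
    """Eq. (4): offer from i to j, computed from i's and j's best alternatives."""
    m = {}
    for (i, j) in alpha:
        m[(i, j)] = (max(w[(i, j)] - alpha[(i, j)], 0.0)
                     - 0.5 * max(w[(i, j)] - alpha[(i, j)] - alpha[(j, i)], 0.0))
    return m

def earnings(m, neighbors):
    """Eq. (5): gamma_i is the best incoming offer (0 for an isolated node)."""
    return {i: max([m[(k, i)] for k in neighbors[i]], default=0.0) for i in neighbors}

def update(alpha, w, neighbors, kappa):
    """Eq. (6): damped update of the 'best alternative' messages."""
    m = offers(alpha, w, neighbors)
    new_alpha = {}
    for (i, j) in alpha:
        best_alt = max([m[(k, i)] for k in neighbors[i] if k != j], default=0.0)
        new_alpha[(i, j)] = (1.0 - kappa) * alpha[(i, j)] + kappa * best_alt
    return new_alpha

def run(w, neighbors, kappa=0.5, T=1000):
    alpha = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}   # alpha^0 = 0
    for _ in range(T):
        alpha = update(alpha, w, neighbors, kappa)
    m = offers(alpha, w, neighbors)
    return alpha, m, earnings(m, neighbors)
```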

2.1 An Example

We consider a simple graph $G$ with $V=\{A,B,C,D\}$, $E=\{(A,B),(B,C),(C,D)\}$, $w_{AB}=8$, $w_{BC}=6$ and $w_{CD}=2$.

Figure 2: Progress of the natural dynamics on a graph with four nodes and three edges. We (arbitrarily) choose the initialization $\underline{\alpha}^{0}=\underline{0}$. A fixed point is reached at $t=6$.

The unique maximum weight matching on this graph is $M=\{(A,B),(C,D)\}$. By Proposition 1, stable outcomes correspond to the matching $M$ and can be parameterized as

$$\underline{\gamma}=(8-\gamma_{B},\,\gamma_{B},\,\gamma_{C},\,2-\gamma_{C}),$$

where $(\gamma_{B},\gamma_{C})$ are constrained as

$$\gamma_{B}\in[0,8],\qquad \gamma_{C}\in[0,2],\qquad \gamma_{B}+\gamma_{C}\geq 6.$$

For instance, the set of stable outcomes (all on matching $M$) includes $(0,8,2,0)$, $(4,4,2,0)$, $(3,5,1,1)$ and so on. Now suppose we impose the balance condition, Eq. (1), in addition, i.e., we look for balanced outcomes. Using the algorithm of Kleinberg and Tardos [27], we find that the network admits a unique balanced outcome $\underline{\gamma}=(1.5,6.5,1,1)$.

Now we consider the evolution of the natural dynamics proposed above on the graph $G$. We arbitrarily choose to study the initialization $\underline{\alpha}^{0}=\underline{0}$, i.e., each node initially estimates its best alternative to be 0 with respect to each neighbor. We set $\kappa=1$ for simplicity. (Our results assume $\kappa<1$ to avoid oscillatory behavior; however, it turns out that on graphs with no even cycles, such as the graph $G$ under consideration, oscillations do not occur, and we choose $\kappa=1$ for simplicity of presentation.) The evolution of the estimates and offers under the dynamics is shown in Figure 2. We now comment on a few noteworthy features demonstrated by this example. In the first step, nodes $A$ and $B$ receive their best offers from each other, node $C$ receives its best offer from $B$, and node $D$ receives its best offer from $C$. Thus, we might expect nodes $A$ and $B$ to be considering the formation of a partnership already (though the terms are not yet clear), but this is not the case for $C$ and $D$. After one iteration, at $t=1$, both pairs $(A,B)$ and $(C,D)$ receive their best offers from each other. In fact, this property remains true at all future times (the case $t=2$ is shown). However, the vectors $\underline{\alpha}$ and $\underline{m}$ continue to evolve from one iteration to the next. At iteration $t=6$, a fixed point is reached, i.e., $\underline{\alpha}$ and $\underline{m}$ remain unchanged for $t\geq 6$. Moreover, we notice that the fixed point captures the unique balanced outcome on this graph, with the matching $M$ and the splits $(\gamma_{A}=1.5,\gamma_{B}=6.5)$ and $(\gamma_{C}=1,\gamma_{D}=1)$ emerging from the fixed point $\underline{m}^{*}$.

We remark here that convergence to a fixed point in a finite number of iterations is not a general phenomenon; it occurs here as a consequence of the simple example considered and the choice $\kappa=1$. However, as we prove below, we always obtain rapid convergence of the dynamics, and fixed points always correspond to balanced outcomes, on any graph possessing balanced outcomes and for any initialization.
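For concreteness, running the sketch from Section 2 on this instance (with $\kappa=1$, as in the figure) reproduces the balanced earnings; the helper names `run` and the data layout are from our earlier illustrative code, not from the paper.

```python
# Continuing the earlier sketch: the four-node example with kappa = 1.
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
w = {("A", "B"): 8.0, ("B", "C"): 6.0, ("C", "D"): 2.0}
w.update({(j, i): v for (i, j), v in w.items()})   # make the weight dict symmetric

alpha, m, gamma = run(w, neighbors, kappa=1.0, T=10)
print(gamma)   # ≈ {'A': 1.5, 'B': 6.5, 'C': 1.0, 'D': 1.0}, the unique balanced outcome
```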

3 Main results: Fixed point properties and convergence

Our first result is that fixed points of the update equations (4), (6) (hereafter referred to as the ‘natural dynamics’) are indeed in correspondence with Nash bargaining solutions when such solutions exist. Note that the fixed points are independent of the damping factor $\kappa$. The correspondence with NB solutions includes the pairing between nodes, according to the following notion of induced matching.

Definition 3.

We say that a state $(\underline{\alpha},\underline{m},\underline{\gamma})$ (or just $\underline{\alpha}$) induces a matching $M$ if the following happens. Each node $i\in V$ receiving non-zero offers ($m_{\cdot\rightarrow i}>0$) is matched under $M$ and gets its unique best offer from the node $j$ such that $(i,j)\in M$. Further, if $\gamma_{i}=0$ then $i$ is not matched in $M$. In other words, pairs in $M$ receive unique best offers, which are positive, from their respective matched neighbors, whereas unmatched nodes receive no non-zero offers.
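A sketch (ours, with illustrative names) of how a pairing could be read off from the offers, following Definition 3: every node receiving a positive offer must have a strict unique best offer, and these choices must be mutual; otherwise no matching is induced.

```python
# Sketch (ours): try to read off the matching induced by the offers m (Definition 3).
def induced_matching(m, neighbors, tol=1e-12):
    choice = {}
    for i in neighbors:
        incoming = [(m[(k, i)], k) for k in neighbors[i]]
        best = max(incoming, default=(0.0, None))
        if best[0] > tol:
            # the best positive offer must be unique, otherwise no matching is induced
            if sum(1 for v, _ in incoming if v >= best[0] - tol) > 1:
                return None
            choice[i] = best[1]
    # choices must be mutual: i points to j if and only if j points to i
    if any(choice.get(j) != i for i, j in choice.items()):
        return None
    return {tuple(sorted((i, j))) for i, j in choice.items()}
```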

Consider the LP relaxation (2) of the maximum weight matching problem. A feasible point $\underline{x}$ for LP (2) is called half-integral if $x_{e}\in\{0,1,\frac{1}{2}\}$ for all $e\in E$. It is well known that problem (2) always has an optimum $\underline{x}^{*}$ that is half-integral [38]. An LP with a fully integral optimum $\underline{x}^{*}$ ($x_{e}^{*}\in\{0,1\}$) is called tight.

Theorem 1.

Let $G$ be an instance admitting one or more Nash bargaining solutions, i.e., the LP (2) admits an integral optimum.

(a) Unique LP optimum (generic case): Suppose the optimum is unique, corresponding to matching $M^{*}$. Let $(\underline{\alpha},\underline{m},\underline{\gamma})$ be a fixed point of the natural dynamics. Then $\underline{\alpha}$ induces the matching $M^{*}$ and $(M^{*},\underline{\gamma})$ is a Nash bargaining solution. Conversely, every Nash bargaining solution $(M,\underline{\gamma}_{\textup{NB}})$ has $M=M^{*}$ and corresponds to a unique fixed point of the natural dynamics with $\underline{\gamma}=\underline{\gamma}_{\textup{NB}}$.

(b) Let $(\underline{\alpha},\underline{m},\underline{\gamma})$ be a fixed point of the natural dynamics. Then $(M^{*},\underline{\gamma})$ is a Nash bargaining solution for any integral maximum weight matching $M^{*}$. Conversely, if $(M,\underline{\gamma}_{\textup{NB}})$ is a Nash bargaining solution, then $M$ is a maximum weight matching and there is a unique fixed point of the natural dynamics with $\underline{\gamma}=\underline{\gamma}_{\textup{NB}}$.

We prove Theorem 1 in Section 8. Theorem 12 in Appendix C extends this characterization of fixed points of the natural dynamics to cases where Nash bargaining solutions do not exist.

Remark 4.

The condition that a tight LP (2) has a unique optimum is generic (see Appendix C, Remark 18). Hence, fixed points induce a matching for almost all instances (cf. Theorem 1(a)). Further, in the non-unique optimum case, we cannot expect an induced matching, since there is always some node with two equally good alternatives.

The existence of a fixed point of the natural dynamics is immediate from Brouwer’s fixed point theorem. Our next result says that the natural dynamics always converges to a fixed point. The proof is in Section 7.

Theorem 2.

The natural dynamics has at least one fixed point. Moreover, for any initial condition with $\underline{\alpha}^{0}\in[0,{W_{\rm max}}]^{2|E|}$, $\underline{\alpha}^{t}$ converges to a fixed point.

Note that Theorem 2 does not require any condition on LP (2). It also does not require uniqueness of the fixed point.

With Theorems 1 and 2, we know that in the limit of a large number of iterations, the natural dynamics yields a Nash bargaining solution. However, this still leaves unanswered the question of the rate of convergence of the natural dynamics. Our next theorem addresses this question, establishing fast convergence to an approximate fixed point.

However, before stating the theorem we define the notion of approximate fixed point.

Definition 5.

We say that $\underline{\alpha}$ is an $\epsilon$-fixed point, or $\epsilon$-FP for short, if for all $(i,j)\in E$ we have

$$\big|\alpha_{i\backslash j}-\max_{k\in\partial i\backslash j}m_{k\rightarrow i}\big|\;\leq\;{\epsilon}\,, \qquad (7)$$

and similarly for $\alpha_{j\backslash i}$. Here $\underline{m}$ is obtained from $\underline{\alpha}$ through Eq. (4) (i.e., $\underline{m}=\underline{m}|_{\underline{\alpha}}$).

Note that $\epsilon$-fixed points are also defined independently of the damping $\kappa$.
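Checking condition (7) is a purely local computation. A sketch (ours), reusing the `offers` helper from the Section 2 sketch:

```python
# Sketch (ours): check whether alpha is an epsilon-fixed point, Eq. (7).
def is_eps_fixed_point(alpha, w, neighbors, eps):
    m = offers(alpha, w, neighbors)               # Eq. (4), from the earlier sketch
    for (i, j) in alpha:
        best_alt = max([m[(k, i)] for k in neighbors[i] if k != j], default=0.0)
        if abs(alpha[(i, j)] - best_alt) > eps:
            return False
    return True
```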

Theorem 3.

Let $G=(V,E)$ be an instance with weights $(w_{e},e\in E)\in[0,1]^{|E|}$. Take any initial condition $\underline{\alpha}^{0}\in[0,1]^{2|E|}$. Take any $\epsilon>0$. Define

$$T^{*}({\epsilon})=\frac{1}{\pi\kappa(1-\kappa){\epsilon}^{2}}\,. \qquad (8)$$

Then for all $t\geq T^{*}(\epsilon)$, $\underline{\alpha}^{t}$ is an $\epsilon$-fixed point. (Here $\pi=3.14159\ldots$)

Thus, if we wait until time $t$, we are guaranteed to obtain a $\left(1/\sqrt{\pi\kappa(1-\kappa)t}\right)$-FP. Theorem 3 is proved in Section 7.

Remark 6.

For any $\epsilon>0$, it is possible to construct an example such that it takes $\Omega(1/\epsilon)$ iterations to reach an $\epsilon$-fixed point. This lower bound can be improved to $\Omega(1/\epsilon^{2})$ in the unequal-bargaining-powers case (cf. Section 4). However, in our constructions, the size of the example graph grows with decreasing $\epsilon$ in each case.

We are left with the problem of relating approximate fixed points to approximate Nash bargaining solutions. We use the following definition of an $\epsilon$-Nash bargaining solution, which is analogous to the standard definition of an $\epsilon$-Nash equilibrium (e.g., see [14]).

Definition 7.

We say that $(M,\underline{\gamma})$ is an $\epsilon$-Nash bargaining solution if it is a valid trade outcome that is stable and satisfies $\epsilon$-balance. Here $\epsilon$-balance means that for every $(i,j)\in M$ we have

$$\big|[\gamma_{i}-\max_{k\in{\partial i}\backslash j}(w_{ik}-\gamma_{k})_{+}]-[\gamma_{j}-\max_{l\in{\partial j}\backslash i}(w_{jl}-\gamma_{l})_{+}]\big|\leq{\epsilon}\,. \qquad (9)$$

A subtle issue needs to be addressed. For an approximate fixed point to yield an approximate Nash bargaining solution, a suitable pairing between nodes is needed. Note that our dynamics does not force a pairing between the nodes. Instead, a pairing should emerge quickly from the dynamics. In other words, nodes on the graph should be able to identify their trading partners from the messages being exchanged. As before, we use the notion of an induced matching (see Definition 3).

Definition 8.

Consider LP (2). Let $\mathcal{H}$ be the set of half-integral points in the primal polytope. Let $\underline{x}^{*}\in\mathcal{H}$ be an optimum. Then the LP gap $g$ is defined as $g=\min_{\underline{x}\in\mathcal{H}\backslash\{\underline{x}^{*}\}}\big(\sum_{e\in E}w_{e}x_{e}^{*}-\sum_{e\in E}w_{e}x_{e}\big)$.

Theorem 4.

Let $G$ be an instance for which the LP (2) admits a unique optimum, and this optimum is integral, corresponding to matching $M^{*}$. Let the gap be $g>0$. Let $\underline{\alpha}$ be an $\epsilon$-fixed point of the natural dynamics, for some $\epsilon<g/(6n^{2})$. Let $\underline{\gamma}$ be the corresponding earnings estimates. Then $\underline{\alpha}$ induces the matching $M^{*}$ and $(\underline{\gamma},M^{*})$ is a $(6\epsilon)$-Nash bargaining solution. Conversely, every $\epsilon$-Nash bargaining solution $(M,\underline{\gamma}_{\textup{NB}})$ has $M=M^{*}$, for any $\epsilon>0$.

Note that $g>0$ is equivalent to the unique optimum condition (cf. Remarks 2, 5). The proof of this theorem requires generalization of the analysis used to prove Theorem 1 to the case of approximate fixed points. Since its proof is similar to the proof of Theorem 1, we defer it to Appendix D. We stress, however, that Theorem 4 is not, in any sense, an obvious strengthening of Theorem 1. In fact, this is a delicate property of approximate fixed points that holds only in the case of balanced outcomes. This characterization breaks down in the face of a seemingly benign generalization to unequal bargaining powers (cf. Section 4 and [23, Section 4]).

Theorem 4 holds for all graphs, and is, in a sense, the best result we can hope for. To see this, consider the following immediate corollary of Theorems 3 and 4.

Corollary 9.

Let $G=(V,E)$ be an instance with weights $(w_{e},e\in E)\in[0,1]^{|E|}$. Suppose LP (2) admits a unique optimum, and this optimum is integral, corresponding to matching $M^{*}$. Let the gap be $g>0$. Then for any $\underline{\alpha}^{0}\in[0,1]^{2|E|}$, there exists $T^{*}=O(n^{4}/g^{2})$ such that for any $t\geq T^{*}$, $\underline{\alpha}^{t}$ induces the matching $M^{*}$ and $(\underline{\gamma}^{t},M^{*})$ is a $(6/\sqrt{\pi\kappa(1-\kappa)t})$-NB solution.

Proof.

Choose $T^{*}$ as $T^{*}(g/(10n^{2}))$ as defined in (8). Clearly, $T^{*}=O(n^{4}/g^{2})$. From Theorem 3, $\underline{\alpha}^{t}$ is an $\epsilon(t)$-FP for $\epsilon(t)=1/\sqrt{\pi\kappa(1-\kappa)t}$. Moreover, for all $t\geq T^{*}$, $\epsilon(t)\leq g/(10n^{2})$. Hence, by Theorem 4, $\underline{\alpha}^{t}$ induces the matching $M^{*}$ and $(\underline{\gamma}^{t},M^{*})$ is a $(6\epsilon(t))$-NB solution for all $t\geq T^{*}$. ∎

Corollary 9 implies that for any $\epsilon>0$, the natural dynamics finds an $\epsilon$-NB solution in time $O\left(\max\left(n^{4}/g^{2},1/\epsilon^{2}\right)\right)$.

This result is essentially the strongest bound we can hope for, in the following sense. First, note that we need to find $M^{*}$ (see the converse in Theorem 4) and balance the allocations. Max-product belief propagation, a standard local algorithm for computing the maximum weight matching, requires $O(n/g)$ iterations to converge, and this bound is tight [6]. Similar results hold for the Auction algorithm [7], which also locally computes $M^{*}$. Moreover, max-product BP and the natural dynamics are intimately related (see [25]), with the exception that max-product is designed to find $M^{*}$, while this is not true for the natural dynamics. Corollary 9 shows that the natural dynamics only requires a time that is polynomial in the same parameters $n$ and $1/g$ to find $M^{*}$, while it simultaneously takes rapid care of balancing the outcome.

3.1 Example: Polynomial convergence to $\epsilon$-NB solution on bipartite graphs.

The next result further shows a concrete setting in which Corollary 9 leads to a strong guarantee on quickly reaching an approximate NB solution.

Theorem 5.

Let $G=(V,E)$ be a bipartite graph with weights $(w_{e},e\in E)\in[0,1]^{|E|}$. Take any $\xi\in(0,1)$, $\eta\in(0,1)$. Construct a perturbed problem instance with weights $\bar{w}_{e}=w_{e}+\eta U_{e}$, where the $U_{e}$ are independent identically distributed random variables, uniform in $[0,1]$. Then there exists $C=C(\kappa)<\infty$ such that, for

$$T^{*}=C\left(\frac{n^{2}|E|}{\eta\xi}\right)^{2}, \qquad (10)$$

the following happens for all $t\geq T^{*}$ with probability at least $1-\xi$: the state $\underline{\alpha}^{t}$ induces a matching $M$ that is independent of $t$, and $(\underline{\gamma}^{t},M)$ is an $\epsilon(t)$-NB solution for the perturbed problem, with $\epsilon(t)=12/\sqrt{\pi\kappa(1-\kappa)t}$.

Here $\xi$ represents our target for the probability that a pairing does not emerge, while $\eta$ represents the size of the perturbation of the problem instance.

Theorem 5 implies that for any fixed $\eta$ and $\xi$, and any $\epsilon>0$, we find an $\epsilon$-NB solution in time $\tau(\epsilon)=K\max(n^{4}|E|^{2},1/\epsilon^{2})$ with probability at least $1-\xi$, where $K=K(\eta,\xi,\kappa)<\infty$. Theorem 5 is proved in Section 9.

3.2 Other results

A different analysis allows us to prove exponentially fast convergence to a unique Nash bargaining solution. We describe this briefly in Section 7.1, referring to an unpublished manuscript [24] for the proof, in the interest of space.

We investigate the case of nodes with unsymmetrical bargaining powers in Section 4. We show that generalizations of Theorems 1, 2 and 3 hold for a suitably modified dynamics. Somewhat surprisingly, we find that the modified dynamics may nonetheless take exponentially long to converge to a solution (the analog of Theorem 4 does not generalize). However, we find a different local procedure that efficiently finds approximate solutions.

In Section 5 we consider the case where agents have arbitrary integer capacity constraints on the number of partnerships they can participate in, instead of the one-matching constraint. We generalize our dynamics and the notion of balanced outcomes to this case. We show that Theorems 1, 2 and 3 generalize. As a corollary, we establish the existence of balanced outcomes whenever stable outcomes exist (Corollary 14) in this general setting. (The caveat here is that Corollary 14 does not say anything about the corner case of a non-unique maximum weight $\mathbf{b}$-matching.)

Appendix A presents extensions of our convergence results to cases where the damping factor varies in time or from node to node, and to asynchronous updates. This shows that our insights are robust to variations in the nature of damping and in the timing of iterative updates.

4 Unequal bargaining powers

It is reasonable to expect that not all edge surpluses on matching edges are divided equally in an exchange network setting. Some nodes are likely to have more ‘bargaining power’ than others. This bargaining power can arise, for example, from ‘patience’; a patient agent is expected to get more than half the surplus when trading with an impatient partner. This phenomenon is well known in the Rubinstein game [36] where nodes alternately make offers to each other until an offer is accepted – the node with a smaller discount factor earns more in the subgame perfect Nash equilibrium. Moreover, a recent experimental study of bargaining in exchange networks [11] found that patience correlated positively with earnings.

A reasonable approach to modeling this effect would be to assign a positive ‘bargaining power’ to each node, and postulate that if a pair of nodes trade with each other, then the edge surplus is divided in the ratio of their bargaining powers. We instead choose a more general setting in which, on each edge $(ij)$, the expected surplus split is quantified by a fraction $r_{ij}\in(0,1)$. Namely, $r_{ij}$ is the fraction of the surplus that goes to $i$ if $i$ and $j$ trade with each other, and similarly for $r_{ji}$; note that $r_{ij}+r_{ji}=1$. We call a weighted graph $G$ along with the postulated split fraction vector $\underline{r}$ an unequal division (UD) instance.

The balance condition is replaced by the correct division condition

$$[r_{ij}]^{-1}\big[\gamma_{i}-\max_{k\in{\partial i}\backslash j}(w_{ik}-\gamma_{k})_{+}\big]=[r_{ji}]^{-1}\big[\gamma_{j}-\max_{l\in{\partial j}\backslash i}(w_{jl}-\gamma_{l})_{+}\big]\,, \qquad (11)$$

on matched edges $(ij)$. We retain the stability condition. We call trade outcomes satisfying (11) and stability unequal division (UD) solutions. A natural modification to our dynamics in this situation consists of the following redefinition of offers.

$$m_{i\rightarrow j}^{t}=(w_{ij}-\alpha_{i\backslash j}^{t})_{+}-r_{ij}\,(w_{ij}-\alpha_{i\backslash j}^{t}-\alpha_{j\backslash i}^{t})_{+}\,. \qquad (12)$$

We call the dynamics resulting from (12) and the update rule (6) the unsymmetrical natural dynamics. One can check that the operator ${\sf T}$ defined in (24) is nonexpansive for offers defined as in (12). It follows that Theorems 2 and 3 hold for the UD-natural dynamics with damping. (We retain Definition 5 of an $\epsilon$-FP.) Further, Theorem 1 can also be extended to this case. The proof involves exactly the same steps as for the natural dynamics (cf. Section 8). Properties 1-6 in the direct part all hold (with proofs nearly verbatim), and an identical construction works for the converse.
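In code, the only change relative to the symmetric sketch of Section 2 is the offer rule: the fixed split $1/2$ in Eq. (4) is replaced by the edge-specific fraction $r_{ij}$ of Eq. (12). A sketch (ours), with the split fractions stored per directed edge:

```python
# Sketch (ours): UD offers, Eq. (12). r[(i, j)] is the surplus fraction of node i
# on edge (i, j), with r[(i, j)] + r[(j, i)] == 1.
def ud_offers(alpha, w, r, neighbors):
    m = {}
    for (i, j) in alpha:
        m[(i, j)] = (max(w[(i, j)] - alpha[(i, j)], 0.0)
                     - r[(i, j)] * max(w[(i, j)] - alpha[(i, j)] - alpha[(j, i)], 0.0))
    return m
# The damped update (6) is unchanged; with r[(i, j)] = 1/2 this reduces to Eq. (4).
```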

Theorem 6.

Let $G$ be an instance for which the LP (2) admits an integral optimum. Let $(\underline{\alpha},\underline{m},\underline{\gamma})$ be a fixed point of the UD-natural dynamics. Then $(M^{*},\underline{\gamma})$ is a UD solution for any maximum weight matching $M^{*}$. Conversely, for any UD solution $(M,\underline{\gamma}_{\textup{UD}})$, the matching $M$ is a maximum weight matching and there is a unique fixed point of the UD-natural dynamics with $\underline{\gamma}=\underline{\gamma}_{\textup{UD}}$.

Further, if the LP (2) has a unique integral optimum, corresponding to matching $M^{*}$, then any fixed point $\underline{\alpha}$ induces the matching $M^{*}$.

We note that the following generalization of the result on existence of Nash bargaining solutions [27] follows from Theorem 6 and the existence of fixed points.

Lemma 1.

UD solutions exist if and only if a stable outcome exists (i.e., LP (2) has an integral optimum).

Proof.

The direct part of Theorem 6, along with the existence of fixed points of the UD-natural dynamics (which follows from Brouwer’s fixed point theorem; see also the first part of Theorem 2 for the UD case), shows that UD solutions exist if LP (2) has an integral optimum. The converse is trivial: if LP (2) has no integral optimum, then there are no stable solutions (see Proposition 1) and hence no UD solutions. ∎

4.1 Exponential convergence time in the UD case

It is possible to derive a characterization similar to Theorem 4 for the UD case as well. However, the bound on $\epsilon$ needed to ensure that the right pairing emerges in an $\epsilon$-FP turns out to be exponentially small in $n$. As such, we are only able to show that a pairing emerges in time $2^{O(n)}/g^{2}$. In fact, as the example below shows, it can take exponentially long for a pairing to emerge in the worst case.

Let $n=|V|$. In Appendix G.1, we construct a sequence of instances $(I_{n},\,n\geq 16)$ such that, for each instance in the sequence, the following holds.

(a) The instance admits a UD solution, and the gap satisfies $g\geq 1$.

(b) There is a message vector $\underline{\alpha}$ that is an $\epsilon$-FP for $\epsilon=2^{-cn}$, such that $\underline{\alpha}$ does not induce any matching. In fact, for any matching in our example, there exists $j$ such that $j$ gets an offer from outside the matching that exceeds the offer of its partner under the matching (if any) by at least 1. Thus, $\underline{\alpha}$ is ‘far’ from inducing a matching.

Further, the split fractions are bounded within $[r,1-r]$ for an arbitrary desired $r\in(0,1/2)$ (the constant $c$ depends on $r$). Also, the weights are uniformly bounded by a constant $W(r)$.

Now, if we start at an $\epsilon$-FP, the successive iterates of our dynamics are all $\epsilon$-FPs (this follows from the nonexpansivity of the update operator, cf. Section 8). Hence, no offer can change by more than $\epsilon$ in each iteration. Thus, in our constructed instances $I_{n}$, starting with $\underline{\alpha}^{0}=\underline{\alpha}$, at least $2^{\Omega(n)}$ iterations are required before $\underline{\alpha}^{t}$ induces a matching. Thus, our construction with the above properties implies that the unsymmetrical natural dynamics can take exponentially long to induce a matching, even on well behaved instances ($g=\Omega(1)$). Moreover, as discussed in Appendix G.1, our construction corresponds to a plausible two-sided market, so this is not to be dismissed as an unrealistic special case that can be ignored.

4.2 A fully polynomial time approximation scheme

We show that though the natural dynamics may take exponentially long to converge, there is a polynomial time iterative ‘re-balancing’ algorithm that enables us to compute an approximate UD solution.

First we define an approximate version of correct division, asking that Eq. (11) be satisfied to within an additive $\epsilon$ for all matched edges. For each edge $(ij)\in M$, we define the ‘edge surplus’ as the excess of $w_{ij}$ over the sum of best alternatives, i.e.,

$$\mathcal{S}\textup{urp}_{ij}(\underline{\gamma})=w_{ij}-\max_{k\in\partial i\backslash j}(w_{ik}-\gamma_{k})_{+}-\max_{l\in\partial j\backslash i}(w_{jl}-\gamma_{l})_{+}\,. \qquad (13)$$
Definition 10 ($\epsilon$-Correct division).

An outcome $(\underline{\gamma},M)$ is said to satisfy $\epsilon$-correct division if, for all $(ij)\in M$,

$$\big|\gamma_{i}-\max_{k\in\partial i\backslash j}(w_{ik}-\gamma_{k})_{+}-r_{ij}\,\mathcal{S}\textup{urp}_{ij}(\underline{\gamma})\big|\leq{\epsilon}\,, \qquad (14)$$

where $\mathcal{S}\textup{urp}_{ij}(\cdot)$ is defined by Eq. (13).

We define approximate UD solutions as follows:

Definition 11 ($\epsilon$-UD solution).

An outcome $(\underline{\gamma},M)$ is an $\epsilon$-UD solution for $\epsilon\geq 0$ if it is stable and satisfies $\epsilon$-correct division (cf. Definition 10).
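A sketch (ours) of the $\epsilon$-correct-division check, Eqs. (13)–(14); combined with a stability check (as in our earlier illustrative sketch), it tests whether a candidate $(\underline{\gamma},M)$ is an $\epsilon$-UD solution. The data layout is the same illustrative one used before.

```python
# Sketch (ours): check epsilon-correct division, Eq. (14), on the matched edges.
def best_alt(i, j, gamma, w, neighbors):
    return max([max(w[(i, k)] - gamma[k], 0.0) for k in neighbors[i] if k != j], default=0.0)

def satisfies_eps_correct_division(M, gamma, w, r, neighbors, eps):
    for (i, j) in M:
        surplus = (w[(i, j)] - best_alt(i, j, gamma, w, neighbors)
                   - best_alt(j, i, gamma, w, neighbors))            # Eq. (13)
        if abs(gamma[i] - best_alt(i, j, gamma, w, neighbors) - r[(i, j)] * surplus) > eps:
            return False
    return True
```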

It follows from Lemma 1 that $\epsilon$-UD solutions exist if and only if the LP (2) admits an integral optimum, which is the same as the requirement for the existence of UD solutions. We prove the following:

Theorem 7.

There is an algorithm that is polynomial in the input and $1/\epsilon$, such that for any problem instance with weights uniformly bounded by 1, i.e., $(w_{e},e\in E)\in(0,1]^{|E|}$:

  • If the instance admits a UD solution, the algorithm finds an $\epsilon$-UD solution.

  • If the instance does not admit a UD solution, the algorithm returns the message unstable.

Our approach to finding an ϵ{\epsilon}-UD solution consists of two main steps:

  1. Find a maximum weight matching $M^{*}$ and a dual optimum $\underline{\gamma}$ (a solution to the dual LP (3)); thus, form a stable outcome $(\underline{\gamma},M^{*})$. Else, certify that the instance has no UD solution.

  2. Iteratively update the allocation $\underline{\gamma}$ without changing the matching. Updates are local, and are designed to converge fast to an allocation satisfying $\epsilon$-correct division while maintaining stability. Thus, we arrive at an $\epsilon$-UD solution.

As mentioned earlier, this is similar to the approach of [2]. The crucial differences (enabling our results) are: (i) we stay within the space of stable outcomes, (ii) our analysis of convergence.

The algorithm leading to Theorem 7 and the corresponding analysis are presented in Appendix 4.2. We remark that our algorithm can easily be made local, without sacrificing efficiency (see [23] for details).

5 General capacity constraints

In several situations, agents may be less restricted: instead of an agent being allowed to enter at most one agreement, for each agent $i$ there may be an integer capacity constraint $b_{i}$ specifying the maximum number of partnerships that $i$ can enter into. For instance, in a labor market for full time jobs, an employer $j$ may have 4 openings for a particular role ($b_{j}=4$), another employer may have 6 openings for a different role, and so on, but the job seekers can each accept at most one job. In this section, we describe a generalization of our dynamical model to the case of general capacity constraints, in an attempt to model behavior in such settings. We find that most of our results from the one-matching case, cf. Section 3, generalize.

5.1 Preliminaries

Now a bargaining network is specified by an undirected graph $G=(V,E)$ with positive weights on the edges, $(w_{ij})_{(ij)\in E}$, and integer capacity constraints associated to the nodes, $(b_{i})_{i\in V}$. We generalize the notion of ‘matching’ to sets of edges that satisfy the given capacity constraints: given capacity constraints $\mathbf{b}=(b_{i})$, we call a set of edges $M\subseteq E$ a $\mathbf{b}$-matching if the degree $d_{i}(M)$ of $i$ in the graph $(V,M)$ is at most $b_{i}$, for every $i\in V$. We say that $i$ is saturated under $M$ if $d_{i}(M)=b_{i}$.

We assume that there are no double edges between nodes. (This assumption was not needed in the one-exchange case since, in that case, utility-maximizing agents $i$ and $j$ will automatically discard all but the heaviest edge between them. This is no longer true in the case of general capacity constraints.) Thus, an agent can use at most one unit of capacity with any one of her neighbors in the model we consider.

A trade outcome is now a pair $(M,\Gamma)$, where $M$ is a $\mathbf{b}$-matching and $\Gamma=(\gamma_{i\to j},\gamma_{j\to i})_{(ij)\in E}\in[0,1]^{2|E|}$ is a splitting of profits, with $\gamma_{i\to j}=0$ if $(ij)\notin M$, and $\gamma_{i\to j}+\gamma_{j\to i}=w_{ij}$ if $(ij)\in M$.

Define $\gamma_{i}=\min_{j:(ij)\in M}\gamma_{j\to i}$ if $i$ is saturated (i.e., $d_{i}(M)=b_{i}$) and $\gamma_{i}=0$ if $i$ is not saturated. Note that this definition is equivalent to $\gamma_{i}={(b_{i}^{\rm th}\!\operatorname{-max})}_{j\in\partial i}\,\gamma_{j\to i}$. Here ${(b^{\rm th}\!\operatorname{-max})}:{\mathds{R}}_{+}^{*}\rightarrow{\mathds{R}}_{+}$ denotes the $b$-th largest of a set of non-negative reals, being defined as 0 if there are fewer than $b$ numbers in the set. It is easy to see that our definition of $\gamma_{i}$ here is consistent with the definition for the one-exchange case. (But $\Gamma$ is not consistent with $\underline{\gamma}$, which is why we use different notation.)

We say that a trading outcome is stable if $\gamma_{i}+\gamma_{j}\geq w_{ij}$ for all $(ij)\notin M$. This definition is natural; a selfish agent would want to switch partners if and only if he can gain more utility elsewhere.

An outcome $(M,\Gamma)$ is said to be balanced if

$$\gamma_{j\to i}-{(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in\partial i\backslash j}(w_{ik}-\gamma_{k})_{+}=\gamma_{i\to j}-{(b_{j}^{\rm th}\!\operatorname{-max})}_{l\in\partial j\backslash i}(w_{jl}-\gamma_{l})_{+} \qquad (15)$$

for all $(ij)\in M$.

Note that the definitions of stability and balance generalize those for the one-exchange case.

An outcome $(M,\Gamma)$ is a Nash bargaining solution if it is both stable and balanced.

Consider the problem of finding the maximum weight (not necessarily perfect) $\mathbf{b}$-matching on a weighted graph $G=(V,E)$. The LP relaxation of this problem and its dual are given by

$$\begin{aligned}
&\text{Primal LP:} &\qquad& \text{Dual LP:}\\
&\text{max }\ \sum_{(ij)\in E}x_{ij}w_{ij} && \text{min }\ \sum_{i\in V}b_{i}y_{i}+\sum_{(ij)\in E}y_{ij}\\
&\text{subject to} && \text{subject to}\\
&\quad \sum_{j\in N(i)}x_{ij}\leq b_{i}\quad\forall i\in V && \quad y_{ij}+y_{i}+y_{j}-w_{ij}\geq 0\quad\forall (ij)\in E\\
&\quad 0\leq x_{ij}\leq 1\quad\forall (ij)\in E && \quad y_{ij}\geq 0\quad\forall (ij)\in E\\
& && \quad y_{i}\geq 0\quad\forall i\in V
\end{aligned} \qquad (16)$$

Complementary slackness says that a pair of feasible solutions is optimal if and only if

  • For all $(ij)\in E$: $x_{ij}^{*}(-w_{ij}+y_{ij}^{*}+y_{i}^{*}+y_{j}^{*})=0$.

  • For all $(ij)\in E$: $(x_{ij}^{*}-1)\,y_{ij}^{*}=0$.

  • For all $i\in V$: $\big(\sum_{j\in N(i)}x_{ij}^{*}-b_{i}\big)\,y_{i}^{*}=0$.

Lemma 2.

Consider a network G=(V,E) with edge weights (w_{ij})_{(ij)∈E} and capacity constraints 𝐛=(b_i). A stable outcome exists if and only if the primal LP (16) admits an integer optimum. Further, if (M,Γ) is a stable outcome, then M is a maximum weight 𝐛-matching, and setting y_i = γ_i for all i∈V and y_{ij} = (w_{ij} − y_i − y_j)_+ for all (ij)∈E yields an optimum solution to the dual LP.

Proof.

If x^* is an integer optimum for the primal LP, and H^*⊆G is the corresponding 𝐛-matching, the complementary slackness conditions read

  • (i)

    For all ijE(H);wij=yij+yi+yjij\in E(H^{*});~~~w_{ij}=y_{ij}^{*}+y_{i}^{*}+y_{j}^{*}.

  • (ii)

    For all ijE(H);yij=0ij\notin E(H^{*});~~~y_{ij}^{*}=0.

  • (iii)

    For all ii with di(H)<bi;yi=0d_{i}(H^{*})<b_{i};~~~y_{i}^{*}=0.

We can construct a stable outcome (H,γ)(H^{*},\gamma) by setting γij=yj+yij/2\gamma_{i\to j}=y_{j}^{*}+y_{ij}^{*}/2 for (ij)H(ij)\in H^{*}, and γij=0\gamma_{i\to j}=0 otherwise: Using (iii) above, γiyi\gamma_{i}\geq y_{i}^{*} (cf. definition of γi\gamma_{i} above), so for any (ij)H(ij)\notin H^{*}, we have γi+γjyi+yjwij\gamma_{i}+\gamma_{j}\geq y_{i}^{*}+y_{j}^{*}\geq w_{ij}, using (ii) above. It is easy to check that γij+γji=wij\gamma_{i\to j}+\gamma_{j\to i}=w_{ij} for any (ij)H(ij)\in H^{*} using (i) above. Thus, (H,γ)(H^{*},\gamma) is a stable outcome.

For the converse, consider a stable outcome (M,γ). We claim that M is an (integer) primal optimum. For this we simply exhibit a feasible dual point whose value equals the primal value at M: take y_i = γ_i, and y_{ij} = w_{ij} − y_i − y_j for edges in M and y_{ij} = 0 otherwise (note that y_{ij} ≥ 0 on matched edges, since γ_i + γ_j ≤ w_{ij} there). The dual objective is then exactly equal to the weight of M, using the fact that γ_i = 0 for every node i that is not saturated. This also proves the second part of the lemma. ∎
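To illustrate the construction in the proof, here is a small hand-worked check in Python (ours, purely illustrative): a 3-node path 0-1-2 with w_{01} = 1, w_{12} = 0.6 and b_i = 1, for which H* = {(0,1)} and y = (y_0, y_1, y_2) = (0.4, 0.6, 0), y_{01} = y_{12} = 0 is one optimal dual solution.

w = {(0, 1): 1.0, (1, 2): 0.6}
H_star = {(0, 1)}
y_node = {0: 0.4, 1: 0.6, 2: 0.0}
y_edge = {(0, 1): 0.0, (1, 2): 0.0}

# gamma_{i->j} = y*_j + y*_{ij}/2 on matched edges, 0 elsewhere (as in the proof above).
gamma_dir = {}
for (i, j) in w:
    if (i, j) in H_star:
        gamma_dir[(i, j)] = y_node[j] + y_edge[(i, j)] / 2.0
        gamma_dir[(j, i)] = y_node[i] + y_edge[(i, j)] / 2.0
    else:
        gamma_dir[(i, j)] = gamma_dir[(j, i)] = 0.0

gamma = {0: gamma_dir[(1, 0)], 1: gamma_dir[(0, 1)], 2: 0.0}   # node 2 is unsaturated, so gamma_2 = 0
assert abs(gamma_dir[(0, 1)] + gamma_dir[(1, 0)] - w[(0, 1)]) < 1e-12          # matched edge splits w_{01}
assert all(gamma[i] + gamma[j] >= w[(i, j)] - 1e-12 for (i, j) in w if (i, j) not in H_star)   # stability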

5.2 Dynamical model

We retain the notation αi\jt\alpha_{i\backslash j}^{t} for the ‘best alternative’ estimated in iteration tt. As before, ‘offers’ are determined as

mijt=(wijαi\jt)+12(wijαi\jtαj\it)+,\displaystyle m_{i\rightarrow j}^{t}=(w_{ij}-\alpha_{i\backslash j}^{t})_{+}-\frac{1}{2}(w_{ij}-\alpha_{i\backslash j}^{t}-\alpha_{j\backslash i}^{t})_{+}\,,

in the spirit of the pairwise Nash bargaining solution.

Now the best alternative αi\jt\alpha_{i\backslash j}^{t} should be the estimated income from the ‘replacement’ partnership, if ii and jj do not reach an agreement with each other. This ‘replacement’ should be the one corresponding to the bithb_{i}^{\rm th} largest offer received by ii from neighbors other than jj. Hence, the update rule is modified to

αi\jt+1=(1κ)αi\jt+κ(bithmax)ki\jmkit,\displaystyle\alpha_{i\backslash j}^{t+1}=(1-\kappa)\,\alpha_{i\backslash j}^{t}\,+\,\kappa\,{(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in{\partial i}\backslash j}\;m_{k\rightarrow i}^{t}\,, (17)

where κ(0,1)\kappa\in(0,1) is the damping factor.

Further, we define Γ=(γij,γji)(ij)E\Gamma=(\gamma_{i\to j},\gamma_{j\to i})_{(ij)\in E} by

γjit{mjitif mjit is among top bi incoming offers to i0otherwise.\displaystyle\gamma_{j\to i}^{t}\equiv\left\{\begin{array}[]{ll}m_{j\rightarrow i}^{t}&\mbox{if }m_{j\rightarrow i}^{t}\mbox{ is among top $b_{i}$ incoming offers to $i$}\\ 0&\mbox{otherwise.}\end{array}\right. (20)

Here ties are broken arbitrarily in ordering incoming offers. Finally, we define

γit(bithmax)kiγkit=(bithmax)kimkit\displaystyle\gamma_{i}^{t}\equiv{(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in{\partial i}}\;\gamma_{k\to i}^{t}={(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in{\partial i}}\;m_{k\rightarrow i}^{t} (21)
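The update equations above are easy to simulate. The following Python sketch (ours, purely illustrative; it is not the authors' code) performs one synchronous, damped update of the best-alternative estimates following Eqs. (17), (20)-(21), with offers computed as above.

def bth_max(vals, b):
    vals = sorted(vals, reverse=True)
    return vals[b - 1] if len(vals) >= b else 0.0

def offers(alpha, w, nbrs):
    """Offers m_{i->j} from the current best-alternative estimates alpha[(i, j)] = alpha_{i\\j}."""
    return {(i, j): max(w[(i, j)] - alpha[(i, j)], 0.0)
            - 0.5 * max(w[(i, j)] - alpha[(i, j)] - alpha[(j, i)], 0.0)
            for i in nbrs for j in nbrs[i]}

def update(alpha, w, nbrs, b, kappa=0.5):
    """One damped update of alpha (Eq. (17)); also returns the earnings gamma of Eq. (21)."""
    m = offers(alpha, w, nbrs)
    new_alpha = {(i, j): (1 - kappa) * alpha[(i, j)]
                 + kappa * bth_max([m[(k, i)] for k in nbrs[i] if k != j], b[i])
                 for i in nbrs for j in nbrs[i]}
    gamma = {i: bth_max([m[(k, i)] for k in nbrs[i]], b[i]) for i in nbrs}
    return new_alpha, gamma

# Example: path 0-1-2 with w_{01}=1, w_{12}=0.6 and capacities b = (1, 2, 1); node 1 can match
# both neighbors, and gamma_1 reports its b_1-th best (i.e. smallest) matched earning, per Eq. (21).
w = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 0.6, (2, 1): 0.6}
nbrs = {0: [1], 1: [0, 2], 2: [1]}
b = {0: 1, 1: 2, 2: 1}
alpha = {(i, j): 0.0 for i in nbrs for j in nbrs[i]}
for _ in range(200):
    alpha, gamma = update(alpha, w, nbrs, b)
print(gamma)    # approaches {0: 0.5, 1: 0.3, 2: 0.3}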

5.3 Results

Our first result is that fixed points of the new update equations (4), (17) are again in correspondence with Nash bargaining solutions when such solutions exist (analogous to Theorem 1). Note that the fixed points are independent of the damping factor κ\kappa. First, we generalize the notion of an induced matching.

Definition 12.

We say that a state (α, m, Γ) (or just α) induces a 𝐛-matching M if the following holds. For each node i∈V receiving at least b_i non-zero offers (m_{·→i}>0): there is no tie for the (b_i-th max) incoming offer to i, and node i is matched under M to the b_i neighbors from whom it receives its b_i highest offers. For each node i∈V receiving fewer than b_i non-zero offers: node i is matched under M to all neighbors from whom it receives a positive offer.

Consider the LP relaxation (16) of the maximum weight 𝐛-matching problem. A feasible point x for LP (16) is called half-integral if x_e∈{0, 1/2, 1} for all e∈E. Again, it can be shown that the primal LP (16) always has an optimum x^* that is half-integral [38, Chapter 31]. As before, an LP with a fully integral optimum x^* (i.e., x^*_e∈{0,1} for all e∈E) is called tight.

Theorem 8.

Let G=(V,E)G=(V,E) with edge weights (wij)(ij)E(w_{ij})_{(ij)\in E} and capacity constraints 𝐛=(bi)\mathbf{b}=(b_{i}) be an instance such that the primal LP (16) has a unique optimum that is integral, corresponding to matching MM^{*}. Let (α¯,m¯,Γ)(\underline{\alpha},\underline{m},\Gamma) be a fixed point of the natural dynamics. Then α¯\underline{\alpha} induces matching MM^{*} and (M,Γ)(M^{*},\Gamma) is a Nash bargaining solution. Conversely, every Nash bargaining solution (M,ΓNB)(M,\Gamma_{{\textup{\tiny NB}}}) has M=MM=M^{*} and corresponds to a unique fixed point of the natural dynamics with Γ=ΓNB\Gamma=\Gamma_{{\textup{\tiny NB}}}.

We prove Theorem 8 in Appendix E.

Remark 13.

The condition that a tight primal LP (16) has a unique optimum is generic (analogous to Appendix C, Remark 18). Hence, Theorem 8 applies to ‘almost all’ problems for which a stable outcome exists (cf. Lemma 2).

Corollary 14.

Let G=(V,E)G=(V,E) with edge weights (wij)(ij)E(w_{ij})_{(ij)\in E} and capacity constraints 𝐛=(bi)\mathbf{b}=(b_{i}) be an instance such that the primal LP (16) has a unique optimum that is integral. Then the instance possesses a Nash bargaining solution.

Thus, we obtain an (almost) tight characterization of when NB solutions exist in the case of general capacity constraints888For simplicity, we have stated and proved, in Theorem 8, a generalization of only part (a) of Theorem 1. However, we expect that part (b) also generalizes, which would then lead to an exact characterization of when NB solutions exist in this case.

Our convergence results, Theorems 2 and 3, generalize immediately, with the proofs (cf. Section 7) going through nearly verbatim:

Theorem 9.

Let G=(V,E)G=(V,E) with edge weights (wij)(ij)E(w_{ij})_{(ij)\in E} and capacity constraints 𝐛=(bi)\mathbf{b}=(b_{i}) be any instance. The natural dynamics has at least one fixed point. Moreover, for any initial condition with α¯0[0,W]2|E|\underline{\alpha}^{0}\in[0,W]^{2|E|}, α¯t\underline{\alpha}^{t} converges to a fixed point.

We retain Definition 5 of an ε-fixed point.

Theorem 10.

Let G=(V,E)G=(V,E) with weights (we,eE)[0,1]|E|(w_{e},e\in E)\in[0,1]^{|E|} and capacity constraints 𝐛=(bi)\mathbf{b}=(b_{i}) be any instance. Take any initial condition α¯0[0,1]2|E|\underline{\alpha}^{0}\in[0,1]^{2|E|}. Take any ϵ>0{\epsilon}>0. Define

T(ϵ)=1πκ(1κ)ϵ2.\displaystyle T^{*}({\epsilon})=\frac{1}{\pi\kappa(1-\kappa){\epsilon}^{2}}\,. (22)

Then for all tT(ϵ)t\geq T^{*}({\epsilon}), α¯t\underline{\alpha}^{t} is an ϵ{\epsilon}-fixed point. (Again π=3.14159\pi=3.14159\ldots)
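As a usage note, Theorem 10 can serve as a simple stopping rule: run the damped updates for T*(ε) iterations and stop. A sketch (ours, reusing the update function from the sketch following Eq. (21), and assuming all weights lie in [0,1]):

import math

def run_to_eps_fixed_point(alpha0, w, nbrs, b, kappa=0.5, eps=0.05):
    """Run the damped updates for T*(eps) iterations (Eq. (22)); the final state is an eps-fixed point."""
    T_star = math.ceil(1.0 / (math.pi * kappa * (1 - kappa) * eps ** 2))
    alpha, gamma = dict(alpha0), {}
    for _ in range(T_star):
        alpha, gamma = update(alpha, w, nbrs, b, kappa)   # `update` as in the earlier sketch
    return alpha, gamma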

6 Discussion

Our results provide a dynamical justification for balanced outcomes, showing that agents bargaining with each other in a realistic, local manner can find such outcomes quickly. Refer to Section 1.2 for a summary of our contributions.

Some caution is needed in the interpretation of our results. Our dynamics avoids the question of how and when a pair of agents will cease to make iterative updates and commit to each other. We showed that the right pairing will be found in time polynomial in the network size n and the LP gap parameter g. But how will agents find out when this convergence has occurred? After all, agents are unlikely to know n, and even less likely to know g. Further, why should agents wait for the right pairing to be found? It may be better for them to strike a deal after a few iterative updates because (i) they may estimate that they are unlikely to get a better deal later, (ii) they may be impatient, (iii) the convergence time may be very large on large networks. If a pair of agents do pair up and leave, this changes the situation for the remaining agents, some of whom may have lost possible partners ([30] studies a model with this flavor). Our dynamics does not deal with this. A possible approach to circumventing some of these problems is to interpret our model in the context of a repeated game, where agents can pair up but still continue to renegotiate their partnerships. Formalizing this is an open problem.

Related to the above discussion is the fact that our agents are not strategic. Though our dynamics admits interpretation as a bargaining process, it is unclear how, for instance, agent jj becomes aware of the best alternative αi\j\alpha_{i\backslash j} of a neighbor ii. In the case of a fixed best alternative, the work of Rubinstein [36] justifies the pairwise Nash bargaining solution, but in our case the best alternative estimates evolve in time. Thus, it is unclear how to explain our dynamics game theoretically. However, we do not consider this to be a major drawback of our approach. Non-strategic agent behavior is commonly assumed in the literature on learning in games [18], even in games of only two players. Alternative recent approaches to bargaining in networks assume strategic agents, but struggle to incorporate reasonable informational assumptions (e.g. [30] assumes common knowledge of the network and perfect information of all prior events). Prima facie, it appears that bounded rationality models like ours may be more realistic.

Several examples admit multiple balanced outcomes (see Example 3 in Section 1). In fact, this is a common feature of two-sided assignment markets, which typically contain multiple even cycles. It would be very interesting to investigate whether our dynamics favors some balanced outcomes over others. If this is the case, it may improve our ability to predict outcomes in such markets.

Our model assumes the network to be exogenous, which does not capture the fact that agents may strategically form links. It would be interesting (and very challenging) to endogenize the network. A perhaps less daunting proposition is to characterize bargaining on networks that experience shocks, like the arrival of new agents, the departure of agents or the addition/deletion of links. Our result showing convergence to an approximate fixed point in time independent of the network size provides hope of progress on this front.

Finally, we remark on the problem of computing exact UD solutions in the unsymmetric case (recall that we give an FPTAS). We conjecture that the problem is computationally hard (cf. Section 1.2), with the recently introduced complexity class continuous local search [15] providing a possible way forward. We leave it as a challenging open problem to prove or refute this conjecture.

7 Convergence to fixed points: Proofs of Theorems 2 and 3

Theorems 2 and 3 admit surprisingly simple proofs that build on powerful results from the theory of nonexpansive mappings in Banach spaces.

Definition 15.

Given a normed linear space LL, and a bounded domain DLD\subseteq L, a nonexpansive mapping 𝖳:DL{\sf{T}}:D\to L is a mapping satisfying 𝖳x𝖳yxy\|{\sf{T}}x-{\sf{T}}y\|\leq\|x-y\|\, for all x,yDx,y\in D.

Mann [31] first considered the iteration xt+1=(1κ)xt+κ𝖳xtx^{t+1}=(1-\kappa)\,x^{t}+\kappa\,{\sf{T}}x^{t} for κ(0,1)\kappa\in(0,1), which is equivalent to iterating 𝖳κ=(1κ)I+κ𝖳{\sf{T}}_{\kappa}=(1-\kappa)\,I+\kappa\,{\sf{T}}. Ishikawa [20] and Edelstein-O’Brien [16] proved the surprising result that, if the sequence {xt}t0\{x^{t}\}_{t\geq 0} is bounded, then 𝖳xtxt0\|{\sf{T}}x^{t}-x^{t}\|\to 0 (the sequence is asymptotically regular) and indeed xtxx^{t}\to x^{*} with xx^{*} a fixed point of 𝖳{\sf{T}}.

Baillon and Bruck [3] recently proved a powerful quantitative version of Ishikawa’s theorem: If x0xt1\|x^{0}-x^{t}\|\leq 1 for all tt, then

𝖳xtxt<1πκ(1κ)t.\displaystyle\|{\sf{T}}x^{t}-x^{t}\|<\frac{1}{\sqrt{\pi\kappa(1-\kappa)t}}\,. (23)

The surprise is that such a result holds irrespective of the mapping 𝖳{\sf{T}} and of the normed space (in particular, of its dimensions). Theorems 2 and 3 immediately follow from this theory once we recognize that the natural dynamics can be cast into the form of a Mann iteration for a mapping which is nonexpansive with respect to a suitably defined norm.

Let us stress that the nonexpansivity property does not appear to be a lucky mathematical accident, but rather an intrinsic property of bargaining models under the one-exchange constraint. It loosely corresponds to the basic observation that if earnings in the neighborhood of a pair of trade partners change by amounts N1,N2,,NkN_{1},N_{2},...,N_{k}, then the balanced split for the partners changes at most by max(N1,N2,,Nk)\max(N_{1},N_{2},\ldots,N_{k}), i.e., the largest of the neighboring changes.

Our technique seems therefore applicable in a broader context. (For instance, it can be applied successfully to prove fast convergence of a synchronous and damped version of the edge-balancing dynamics of [2].)

Proof (Theorem 2).

We consider the linear space L=2|E|L={\mathds{R}}^{2|E|} indexed by directed edges in GG. On the bounded domain D=[0,W]2|E|D=[0,W]^{2|E|} we define the mapping 𝖳:α¯𝖳α¯{\sf{T}}:\underline{\alpha}\mapsto{\sf{T}}\underline{\alpha} by letting, for (i,j)E(i,j)\in E,

(𝖳α¯)i\jmaxki\jmki|α¯,({\sf{T}}\underline{\alpha})_{i\backslash j}\equiv\max_{k\in{\partial i}\backslash j}m_{k\rightarrow i}|_{\underline{\alpha}}\,, (24)

where mki|α¯m_{k\rightarrow i}|_{\underline{\alpha}} is defined by Eq. (4). It is easy to check that the sequence of best alternatives produced by the natural dynamics corresponds to the Mann iteration α¯t=𝖳κtα¯0\underline{\alpha}^{t}={\sf{T}}_{\kappa}^{t}\underline{\alpha}^{0}. Also, 𝖳{\sf{T}} is nonexpansive for the \ell_{\infty} norm

α¯β¯=max(i,j)E|αi\jβij|.\displaystyle\|\underline{\alpha}-\underline{\beta}\|_{\infty}=\max_{(i,j)\in E}|\alpha_{i\backslash j}-\beta_{i\setminus j}|\,. (25)

Non-expansivity follows from:
(i) The ‘max’ in Eq. (24) is nonexpansive.
(ii) An offer mijm_{i\rightarrow j} as defined by Eq. (4) is nonexpansive. To see this, note that mij=f(αi\j,αj\i)m_{i\rightarrow j}=f(\alpha_{i\backslash j},\alpha_{j\backslash i}), where f(x,y):+2+f(x,y):{\mathds{R}}_{+}^{2}\rightarrow{\mathds{R}}_{+} is given by

f(x,y)={wijx+y2x+ywij(wijx)+ otherwise.\displaystyle\ f(x,y)=\left\{\begin{array}[]{ll}\frac{w_{ij}-x+y}{2}&x+y\leq w_{ij}\\ (w_{ij}-x)_{+}&\mbox{ otherwise.}\\ \end{array}\right. (28)

It is easy to check that ff is continuous everywhere in +2{\mathds{R}}_{+}^{2}. Also, it is differentiable except in {(x,y)+2:x+y=wij or x=wij}\{(x,y)\in{\mathds{R}}_{+}^{2}:x+y=w_{ij}\mbox{ or }x=w_{ij}\}, and satisfies f1=|fx|+|fy|1||\nabla f||_{1}=|\frac{\partial f}{\partial x}|+|\frac{\partial f}{\partial y}|\leq 1. Hence, ff is Lipschitz continuous in the LL^{\infty} norm, with Lipschitz constant 1, i.e., it is nonexpansive in sup norm.

Notice that 𝖳_κ maps D ≡ [0,W]^{2|E|} into itself. The claim then follows from [20, Corollary 1]. ∎

Proof (Theorem 3).

With the definitions given above, set W=1 (whence ‖α^t − α^0‖_∞ ≤ 1 for all t) and apply [3, Theorem 1]. ∎
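The argument can also be checked numerically. The following self-contained Python sketch (ours, purely illustrative) runs the Mann iteration for the one-exchange map 𝖳 of Eq. (24) on a 3-node path with w_{01} = 1, w_{12} = 0.6 and verifies the Baillon-Bruck bound (23) at every step.

import math

w = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 0.6, (2, 1): 0.6}
nbrs = {0: [1], 1: [0, 2], 2: [1]}

def offers(alpha):
    return {(i, j): max(w[(i, j)] - alpha[(i, j)], 0.0)
            - 0.5 * max(w[(i, j)] - alpha[(i, j)] - alpha[(j, i)], 0.0)
            for i in nbrs for j in nbrs[i]}

def T(alpha):
    """The map of Eq. (24): (T alpha)_{i\\j} = max over k in di\\j of m_{k->i}."""
    m = offers(alpha)
    return {(i, j): max([m[(k, i)] for k in nbrs[i] if k != j], default=0.0)
            for i in nbrs for j in nbrs[i]}

kappa = 0.5
alpha = {(i, j): 0.0 for i in nbrs for j in nbrs[i]}   # W = 1, so the orbit stays in [0, 1]
for t in range(1, 101):
    Ta = T(alpha)
    alpha = {e: (1 - kappa) * alpha[e] + kappa * Ta[e] for e in alpha}   # Mann iteration
    residual = max(abs(T(alpha)[e] - alpha[e]) for e in alpha)
    assert residual <= 1.0 / math.sqrt(math.pi * kappa * (1 - kappa) * t) + 1e-12
print("Baillon-Bruck bound holds over 100 iterations; final residual:", residual)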

7.1 Exponentially fast convergence to unique Nash bargaining solution

Convergence of the natural dynamics was studied in an earlier version of this paper using a different (and much more laborious) technique [24]. While the results in Section 3 constitute a large improvement in elegance and generality over those of [24], the latter retains independent interest. Indeed, the analysis of [24] shows that convergence is exponentially fast on a well-defined class of instances. We therefore decided to retain the main result of that analysis (recast from [24]).

Theorem 11.

Assume W=1. Let G be an instance having a unique Nash bargaining solution (M, γ) with KT gap σ>0. Then, for any ε∈(0,σ/4), there exists T_*(n,σ,ε) = C n^7 [1/σ + log(σ/ε)] such that, for any initial condition α^0∈[0,1]^{2|E|} and any t≥T_*, the natural dynamics yields earning estimates γ^t with |γ^t_i − γ_i| ≤ ε for all i∈V. Moreover, α^t induces the matching M, and (M, γ^t) is a (4ε)-NB solution for any t≥T_*.

We refer to Appendix F for a definition of the KT gap σ\sigma (here KT stands for Kleinberg-Tardos). Suffice it to say that it is related to the Kleinberg-Tardos decomposition of GG and that it is polynomially computable [27].

As mentioned above, the proof is based on a very different technique, namely an ‘approximate decoupling’ of the natural dynamics on different KT structures, under the assumptions that σ>0 (which is generic) and that the NB solution is unique. See the preprint [24] for a complete proof.

Let us stress here that, for fixed σ\sigma, T(n,σ,ϵ)T_{*}(n,\sigma,{\epsilon}) is only logarithmic in (1/ϵ)(1/{\epsilon}) while it is proportional to 1/ϵ21/{\epsilon}^{2} in Theorem 3. In other words, for instances with KT gap bounded away from 0, the natural dynamics converges exponentially fast, while Theorem 3 guarantees inverse polynomial convergence in the general case.

8 Fixed point properties: Proof of Theorem 1

Let 𝒮\mathcal{S} be the set of optimum solutions of LP (2). We call eEe\in E a strong-solid edge if xe=1x_{e}^{*}=1 for all x𝒮x^{*}\in\mathcal{S} and a non-solid edge if xe=0x_{e}^{*}=0 for all x𝒮x^{*}\in\mathcal{S}. We call eEe\in E a weak-solid edge if it is neither strong-solid nor non-solid.

Proof of Theorem 1: From fixed points to NB solutions. The direct part follows from the following set of fixed point properties. The proofs of these properties are given in Appendix C. Throughout (α¯,m¯,γ¯)(\underline{\alpha},\underline{m},\underline{\gamma}) is a fixed point of the dynamics (4), (6) (with γ¯\underline{\gamma} given by (5)).

(1) Two players (i,j)E(i,j)\in E are called partners if γi+γj=wij\gamma_{i}+\gamma_{j}=w_{ij}. Then the following are equivalent: (a) ii and jj are partners, (b) wijαi\jαj\i0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}\geq 0, (c) γi=mji\gamma_{i}=m_{j\rightarrow i} and γj=mij\gamma_{j}=m_{i\rightarrow j}.

(2) Let P(i)P(i) be the set of all partners of ii. Then the following are equivalent: (a) P(i)={j}P(i)=\{j\} and γi>0\gamma_{i}>0, (b) P(j)={i}P(j)=\{i\} and γj>0\gamma_{j}>0, (c) wijαi\jαj\i>0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}>0, (d) ii and jj receive unique best positive offers from each other.

(3) We say that (i,j)(i,j) is a weak-dotted edge if wijαi\jαj\i=0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}=0, a strong-dotted edge if wijαi\jαj\i>0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}>0, and a non-dotted edge otherwise. If ii has no adjacent dotted edges, then γi=0\gamma_{i}=0.

(4) An edge is strong-solid (weak-solid) if and only if it is strongly (weakly) dotted.

(5) The balance property (1) holds at every edge (i,j)∈E (with both sides being non-negative).

(6) γ¯\underline{\gamma} is an optimum solution for the dual LP (3) to LP (2) and mij=(wijγi)+m_{i\rightarrow j}=(w_{ij}-\gamma_{i})_{+} holds for all (i,j)E(i,j)\in E.

Proof of Theorem 1 (a), direct implication.

Assume that the LP (2) has a unique optimum that is integral. Then, by property 4, the set of strong-dotted edges forms the unique maximum weight matching M^* and all other edges are non-dotted. By property 3, γ_i = 0 for every i that is unmatched under M^*. Hence, by property 2, α induces the matching M^*. Finally, by properties 6 and 5, the pair (M^*, γ) is stable and balanced respectively, and thus forms an NB solution. ∎

The corresponding result for the non-unique optimum case (part (b)) can be proved similarly: it follows immediately from Theorem 12 in Appendix C.

Remark 16.

Properties 1-6 hold for any instance. This leads to the general result, Theorem 12 in Appendix C, which shows that, in general, fixed points correspond to dual optima satisfying the unmatched balance property (1).

Proof of Theorem 1: From NB solutions to fixed points.

Proof.

Consider any NB solution (M,γ¯NB)(M,\underline{\gamma}_{{\textup{\tiny NB}}}). Using Proposition 1, MM is a maximum weight matching. Construct a corresponding FP as follows. Set mij=(wijγNB,i)+m_{i\rightarrow j}=(w_{ij}-\gamma_{{\textup{\tiny NB}},i})_{+} for all (i,j)E(i,j)\in E. Compute α¯\underline{\alpha} using αi\j=maxki\jmki\alpha_{i\backslash j}=\max_{k\in{\partial i}\backslash j}m_{k\rightarrow i}. We claim that this is a FP and that the corresponding γ¯\underline{\gamma} is γ¯NB\underline{\gamma}_{{\textup{\tiny NB}}}. To prove that we are at a fixed point, we imagine updated offers m¯upd\underline{m}^{\textup{upd}} based on α¯\underline{\alpha}, and show m¯upd=m¯\underline{m}^{\textup{upd}}=\underline{m}.

Consider a matching edge (i,j)M(i,j)\in M. We know that γNB,i+γNB,j=wij\gamma_{{\textup{\tiny NB}},i}+\gamma_{{\textup{\tiny NB}},j}=w_{ij}. Also stability and balance tell us γNB,imaxki\j(wikγNB,k)+=γNB,jmaxlj\i(wjlγNB,l)+\gamma_{{\textup{\tiny NB}},i}-\max_{k\in{\partial i}\backslash j}(w_{ik}-\gamma_{{\textup{\tiny NB}},k})_{+}=\gamma_{{\textup{\tiny NB}},j}-\max_{l\in{\partial j}\backslash i}(w_{jl}-\gamma_{{\textup{\tiny NB}},l})_{+} and both sides are non-negative. Hence, γNB,iαi\j=γNB,jαj\i0\gamma_{{\textup{\tiny NB}},i}-\alpha_{i\backslash j}=\gamma_{{\textup{\tiny NB}},j}-\alpha_{j\backslash i}\geq 0. Therefore αi\j+αj\iwij\alpha_{i\backslash j}+\alpha_{j\backslash i}\leq w_{ij},

mijupd\displaystyle m_{i\rightarrow j}^{\textup{upd}} =wijαi\j+αj\i2=wijγNB,i+γNB,j2\displaystyle=\frac{w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i}}{2}=\frac{w_{ij}-\gamma_{{\textup{\tiny NB}},i}+\gamma_{{\textup{\tiny NB}},j}}{2}
=γNB,j=wijγNB,i=mij.\displaystyle=\gamma_{{\textup{\tiny NB}},j}=w_{ij}-\gamma_{{\textup{\tiny NB}},i}=m_{i\rightarrow j}\,.

By symmetry, we also have m_{j→i}^upd = γ_NB,i = m_{j→i}. Hence, the offers remain unchanged. Now consider (i,j)∉M. We have γ_NB,i + γ_NB,j ≥ w_{ij} and γ_NB,i = max_{k∈∂i\j}(w_{ik} − γ_NB,k)_+ = α_{i\j}. A similar identity holds for γ_NB,j. The validity of this identity can be checked separately for the cases where i is matched or unmatched under M. Hence, α_{i\j} + α_{j\i} ≥ w_{ij}. This leads to m_{i→j}^upd = (w_{ij} − α_{i\j})_+ = (w_{ij} − γ_NB,i)_+ = m_{i→j}. By symmetry, we also know that m_{j→i}^upd = m_{j→i}.

Finally, we show γ = γ_NB. For all (i,j)∈M, we already found that m_{i→j} = γ_NB,j and m_{j→i} = γ_NB,i. For any edge (ij)∉M, we know m_{i→j} = (w_{ij} − γ_NB,i)_+ ≤ γ_NB,j. This immediately leads to γ = γ_NB. It is worth noting that, using the uniqueness of the LP optimum, we know that M = M^*, and we can further show that γ_i = m_{j→i} > α_{i\j} if and only if (ij)∈M, i.e., the fixed point reconstructs the pairing M = M^*. ∎
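The construction in this proof is easy to verify numerically on a small example. The following sketch (ours, purely illustrative) uses the 3-node path 0-1-2 with w_{01} = 1, w_{12} = 0.6, whose unique NB solution is M = {(0,1)} with γ_NB = (0.2, 0.8, 0), and checks that the constructed offers are indeed a fixed point.

w = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 0.6, (2, 1): 0.6}
nbrs = {0: [1], 1: [0, 2], 2: [1]}
gamma_nb = {0: 0.2, 1: 0.8, 2: 0.0}

# Offers m_{i->j} = (w_{ij} - gamma_{NB,i})_+ and alpha_{i\j} = max over k in di\j of m_{k->i}.
m = {(i, j): max(w[(i, j)] - gamma_nb[i], 0.0) for i in nbrs for j in nbrs[i]}
alpha = {(i, j): max([m[(k, i)] for k in nbrs[i] if k != j], default=0.0)
         for i in nbrs for j in nbrs[i]}

# One round of re-computed offers from alpha leaves the offers unchanged, as claimed above.
m_upd = {(i, j): max(w[(i, j)] - alpha[(i, j)], 0.0)
         - 0.5 * max(w[(i, j)] - alpha[(i, j)] - alpha[(j, i)], 0.0)
         for i in nbrs for j in nbrs[i]}
assert all(abs(m_upd[e] - m[e]) < 1e-12 for e in m)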

9 Polynomial convergence on bipartite graphs: Proof of Theorem 5

Theorem 5 says that on a bipartite graph, under a small random perturbation of any problem instance, the natural dynamics is likely to quickly find the maximum weight matching. In light of Corollary 9, this simply involves showing that the gap g of the perturbed problem instance is likely to be sufficiently large. We use a version of the well-known Isolation Lemma for this. Note that on bipartite graphs, the LP (2) always has an integral optimum.

Next is our Isolation Lemma (recast from [21]). For the proof, see Appendix B.

Lemma 3 (Isolation Lemma).

Consider a bipartite graph G=(V,E)G=(V,E). Choose η>0,ξ>0\eta>0,\;\xi>0. Edge weights are generated as follows: for each eEe\in E, w¯e\bar{w}_{e} is chosen uniformly in [we,we+η][w_{e},w_{e}+\eta]. Denote by \mathcal{M} the set of matchings in GG. Let MM^{*} be a maximum weight matching. Let MM^{**} be a matching having the maximum weight in \M\mathcal{M}\backslash M^{*}. Denote by w¯(M)\bar{w}(M) the weight of a matching MM. Then

Pr[w¯(M)w¯(M)ηξ/(2|E|)] 1ξ\displaystyle\Pr[\,\,\,\bar{w}(M^{*})-\bar{w}(M^{**})\geq\,\eta\xi/(2|E|)\,\,\,]\;\geq\;1-\xi (29)
Proof of Theorem 5.

Using Lemma 3, we know that the gap of the perturbed problem satisfies g¯ηξ/(2|E|)\bar{g}\geq\eta\xi/(2|E|) with probability at least 1ξ1-\xi. Now, the weights in the perturbed instance are bounded by W¯=2\bar{W}=2. Rescale by dividing all weights and messages by 22, and use Corollary 9. The theorem follows from the following two elementary observations. First, an (ϵ/2)({\epsilon}/2)-NB solution for the rescaled problem corresponds to an ϵ{\epsilon}-NB solution for the original problem. Second, induced matchings are unaffected by scaling. ∎

We remark that Theorem 5 does not generalize to arbitrary (non-bipartite) graphs with edge weights such that the LP (2) has an integral optimum, for the following reason. We can easily generalize the Isolation Lemma to show that the gap g of the perturbed problem is likely to be large in this case as well. However, with probability arbitrarily close to 1 (depending on the instance), a random perturbation results in an instance for which LP (2) does not have an integral optimum, i.e., the perturbed instance does not have any Nash bargaining solutions!

Acknowledgements. We thank Eva Tardos for introducing us to network exchange theory and Daron Acemoglu for insightful discussions. We also thank the anonymous referees for their comments.

A large part of this work was done while Y. Kanoria, M. Bayati and A. Montanari were at Microsoft Research New England. This research was partially supported by NSF, grants CCF-0743978 and CCF-0915145, and by a Terman fellowship. Y. Kanoria is supported by a 3Com Corporation Stanford Graduate Fellowship.

References

  • [1] B. Aspvall and Y. Shiloach, A polynomial time algorithm for solving systems of linear inequalities with two variables per inequality, Proc. 20th IEEE Symp. Found. of Comp. Sc., 1979.
  • [2] Y. Azar, B. Birnbaum, L. Elisa Celis, N. R. Devanur and Y. Peres, Convergence of Local Dynamics to Balanced Outcomes in Exchange Networks, Proc. 50th IEEE Symp. Found. of Comp. Sc., 2009.
  • [3] J. Baillon and R. E. Bruck, The rate of asymptotic regularity is O(1/n)O(1/\sqrt{n}), in: A.G. Kartsatos (ed.), Theory and applications of nonlinear operators of accretive and monotone type, Lecture Notes in Pure and Appl. Math. 178, Marcel Dekker, Inc., New York, 1996, pp. 51–81.
  • [4] M. Bateni, M.  Hajiaghayi, N. Immorlica and H. Mahini, The cooperative game theory foundations of network bargaining games, Proc. Intl. Colloq. Automata, Languages and Programming, 2010.
  • [5] M. Bayati, D. Shah and M. Sharma, Max-Product for Maximum Weight Matching: Convergence, Correctness, and LP Duality, IEEE Trans. Inform. Theory, 54 (2008), pp. 1241–1251.
  • [6] M. Bayati, C. Borgs, J. Chayes, R. Zecchina, On the exactness of the cavity method for Weighted b-Matchings on Arbitrary Graphs and its Relation to Linear Programs, arXiv:0807.3159, (2007).
  • [7] D. P. Bertsekas, The Auction Algorithm: A Distributed Relaxation Method for the Assignment Problem, Annals of Operations Research, 14 (1988), pp. 105–123.
  • [8] K. Binmore, A. Rubinstein and A. Wolinsky, The Nash Bargaining Solution in Economic Modeling, Rand Journal of Economics, 17(2) (1986), pp. 176–188.
  • [9] N. Braun, T. Gautschi, A Nash bargaining model for simple exchange networks, Social Networks, 28(1) (2006), pp. 1–23.
  • [10] M. Kandori, G. J. Mailath, and R. Rob, Learning, Mutation, and Long Run Equilibria in Games, Econometrica 61(1) (1993).
  • [11] T. Chakraborty, S. Judd, M. Kearns, J. Tan, A Behavioral Study of Bargaining in Social Networks, Proc. 10th ACM Conf. Electronic Commerce, 2010.
  • [12] T. Chakraborty, M. Kearns and S. Khanna, Network Bargaining: Algorithms and Structural Results, Proc. 10th ACM Conf. Electronic Commerce, July 2009.
  • [13] K. S. Cook and T. Yamagishi, Power exchange in networks: A power-dependence formulation, Social Networks, 14 (1992), pp. 245–265.
  • [14] C. Daskalakis, C. Papadimitriou, On oblivious PTAS’s for nash equilibrium, Proc. ACM Symp. Theory of Computing, 2009.
  • [15] C. Daskalakis, C. Papadimitriou, Geometrically Embedded Local Search, Proc. ACM-SIAM Symp. Discrete Algorithms, 2011.
  • [16] M. Edelstein and R.C. O’Brien, Nonexpansive mappings, asymptotic regularity, and successive approximations, J. London Math. Soc. 1 (1978), pp. 547–554.
  • [17] U. Faigle, W. Kern, and J. Kuipers, On the computation of the nucleolus of a cooperative game, International Journal of Game Theory, 30 (2001), pp. 79–98.
  • [18] D. Fudenberg and D. K. Levine, The Theory of Learning in Games, The MIT Press, 1998.
  • [19] H. N. Gabow and R. E. Tarjan, Faster scaling algorithms for general graph-matching problems, J. ACM, 38(4) (1991), pp. 815–853.
  • [20] S. Ishikawa, Fixed points and iteration of a nonexpansive mapping in a Banach space, Proc. American Mathematical Society, 59(1) (1976).
  • [21] D. Gamarnik, D. Shah and Y. Wei, Belief Propagation for Min-cost Network Flow: Convergence and Correctness, Proc. ACM-SIAM Symp. Disc. Alg., 2010.
  • [22] B. Huang, T. Jebara, Loopy belief propagation for bipartite maximum weight b-matching, Artificial Intelligence and Statistics (AISTATS), March, 2007.
  • [23] Y. Kanoria, An FPTAS for Bargaining Networks with Unequal Bargaining Powers, Proc. Workshop on Internet and Network Economics, 2010, (full version arXiv:1008.0212).
  • [24] Y. Kanoria, M. Bayati, C. Borgs, J. Chayes, and A. Montanari, A Natural Dynamics for Bargaining on Exchange Networks, arXiv:0912.5176 (2009).
  • [25] Y. Kanoria, M. Bayati, C. Borgs, J. Chayes, and A. Montanari, Fast Convergence of Natural Bargaining Dynamics in Exchange Networks, Proc. ACM-SIAM Symp. on Discrete Algorithms (2011), pp. 1518–1537.
  • [26] E. Kalai, M. Smorodinsky, Other Solutions to Nash’s Bargaining Problem, Econometrica, 43(3) (1975).
  • [27] J. Kleinberg and E. Tardos, Balanced outcomes in social exchange networks, in Proc. ACM Symp. Theory of Computing, 2008.
  • [28] U. Kohlenbach, A Quantitative Version of a Theorem due to Borwein-Reich-Shafrir, Numerical Functional Analysis and Optimization, 22(5-6), August 2001.
  • [29] J.W. Lucas, C.W. Younts, M.J. Lovaglia, and B. Markovsky, Lines of power in exchange networks, Social Forces, 80 (2001), pp. 185–214.
  • [30] D. Abreu and M. Manea, Markov Equilibria in a Model of Bargaining in Networks, to appear in Games and Economic Behavior, http://economics.mit.edu/files/6985.
  • [31] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math Soc., 4 (1953), pp. 506–510.
  • [32] J. Nash, The bargaining problem, Econometrica, 18 (1950), pp. 155–162.
  • [33] S.C. Rochford, Symmetric pairwise-bargained allocations in an assignment market, in J. Economic Theory, 34 (1984), pp. 262–281.
  • [34] V. P. Crawford, S. C. Rochford, Bargaining and Competition in Matching Markets, Intl. Economic Review 27(2) (1986), pp. 329–348.
  • [35] J. Z. Rubin, B.  R. Brown, The Social Psychology of Bargaining and Negotiation, Academic Press 1975.
  • [36] A. Rubinstein: Perfect equilibrium in a bargaining model, Econometrica, 50 (1982), pp. 97–109.
  • [37] S. Sanghavi, D. Malioutov, A. Willsky, Linear Programming Analysis of Loopy Belief Propagation for Weighted Matching, Proc. Neural Inform. Processing Systems, 2007.
  • [38] A. Schrijver, Combinatorial Optimization, Springer-Verlag, Vol. A (2003).
  • [39] L. Shapley and M. Shubik, “The Assignment Game I: The Core,” Intl. J. Game Theory, 1 (1972), pp. 111-130.
  • [40] M. Sotomayor, On The Core Of The One-Sided Assignment Game, 2005, http://www.usp.br/feaecon/media/fck/File/one_sided_assignment_game.pdf.
  • [41] J. Skvoretz and D. Willer, Exclusion and power: A test of four theories of power in exchange networks, American Sociological Review, 58 (1993), pp. 801–818.
  • [42] F. Spitzer, Principles of Random Walk, Springer, 2005.
  • [43] R. E. Stearns, Convergent transfer schemes for n-person games, Transactions of the American Mathematical Society, 134 (1968), pp. 449–459.
  • [44] M. J. Wainwright, T. S.  Jaakkola, and A. S. Willsky, Exact MAP estimates via agreement on (hyper)trees: Linear programming and message-passing, IEEE Trans. Inform. Theory, 51 (2005), pp. 3697–3717.
  • [45] D. Willer (ed.) Network Exchange Theory, Praeger, 1999.

Appendix A Variations of the natural dynamics

Now that we have a reasonable dynamics that converges fast to balanced outcomes, it is natural to ask whether variations of the natural dynamics also yield balanced outcomes. What happens in the case of asynchronous updates, different nodes updating at different rates, damping factors that vary across nodes and in time, and so on? We discuss some of these questions in this section, focusing on situations in which we can prove convergence with minimal additional work. Note that we are only concerned with extending our convergence results, since the fixed point properties remain unchanged.

A.1 Node dependent damping

Consider the case where the damping factor may differ from node to node, but is unchanging over time. Denote by κ(v) the damping factor of node v. Assume that κ(v) ∈ [1−κ*, κ*] for all v∈V, for some κ* ∈ [0.5, 1), i.e., the damping factors are uniformly bounded away from 0 and 1. Define the operator 𝖳: [0,1]^{2|E|} → [0,1]^{2|E|} by

(𝖳α¯)i\j=(κ(i)κ)maxki\jmki+(κκ(i)κ)αi\j\displaystyle({\sf{T}}\underline{\alpha})_{i\backslash j}=\left(\frac{\kappa(i)}{\kappa^{*}}\right)\max_{k\in\partial i\backslash j}m_{k\rightarrow i}+\left(\frac{\kappa^{*}-\kappa(i)}{\kappa^{*}}\right)\alpha_{i\backslash j} (30)

𝖳 is nonexpansive, being a coordinatewise convex combination of nonexpansive maps. Now, the dynamics can be written as α^{t+1} = κ* 𝖳α^t + (1−κ*) α^t. Clearly, convergence to fixed points (Theorem 2) holds in this situation. Note that the fixed points of 𝖳 are the same as the fixed points of the natural dynamics. Moreover, we can use [3] to assert that ‖α^t − 𝖳α^t‖_∞ = O(1/√t), and hence α^t is an O(1/√t)-FP. In short, we do not lose anything with this generalization.

A.2 Time varying damping

Now consider instead that the damping may change over time, but is the same for all nodes. Denote by κt\kappa_{t} the damping factor at time tt, i.e.

α_{i\j}^{t+1} = κ_t max_{k∈∂i\j} m_{k→i}^t + (1−κ_t) α_{i\j}^t        (31)

The result of [20] implies that, as long as ∑_{t=0}^∞ κ_t = ∞ and lim sup_{t→∞} κ_t < 1, the dynamics is guaranteed to converge to a fixed point. Note that, again, the fixed points are unchanged. [28] provides a quantitative estimate of the rate of convergence in this case, guaranteeing in particular that an ε-FP is reached in time exp(O(1/ε)) if κ_t is uniformly bounded away from 0 and 1. Note that this estimate is much weaker than the one provided by [3], which leads to Theorem 3. It seems intuitive that the stronger O(1/√t) bound also holds for time-varying damping in the general nonexpansive operator setting, but a proof has remained elusive thus far.

A.3 Asynchronous updates

Finally, we look at the case of asynchronous updates, i.e., a single coordinate α_{i\j} is updated in any given step while the others remain unchanged. Define 𝖳_{i\j}: [0,1]^{2|E|} → [0,1]^{2|E|} by

(𝖳i\jα¯)i\j\displaystyle({\sf{T}}_{i\backslash j}\underline{\alpha})_{i^{\prime}\backslash j^{\prime}} ={maxki\jmkiif (i,j)=(i,j)αi\jotherwise\displaystyle=\left\{\begin{array}[]{ll}\max_{k\in\partial i^{\prime}\backslash j^{\prime}}m_{k\rightarrow i^{\prime}}&\mbox{if }(i,j)=(i^{\prime},j^{\prime})\\ \alpha_{i^{\prime}\backslash j^{\prime}}&\mbox{otherwise}\end{array}\right. (34)

Let m|E|m\equiv|E|. There are 2m2m such operators, two for each edge. Clearly, each 𝖳i\j{\sf{T}}_{i\backslash j} is nonexpansive in sup-norm. Now, consider an arbitrary permutation of the 2m2m directed edges ((i1,j1),(i2,j2),)((i_{1},j_{1}),(i_{2},j_{2}),\ldots). Consider the updates induced by 𝖳i1\j1,𝖳i2\j2,{\sf{T}}_{i_{1}\backslash j_{1}},{\sf{T}}_{i_{2}\backslash j_{2}},\ldots\ in order, each with a damping factor of 1/(2m)1/(2m). Consider the resulting product

((1/2m) 𝖳_{i_1\j_1} + (1 − 1/2m) I) · ((1/2m) 𝖳_{i_2\j_2} + (1 − 1/2m) I) ⋯ ((1/2m) 𝖳_{i_{2m}\j_{2m}} + (1 − 1/2m) I)
  = (1 − (1 − 1/2m)^{2m}) 𝖳 + (1 − 1/2m)^{2m} I        (35)

Here (35) defines 𝖳, and I is the identity operator. It is easy to deduce that 𝖳 above is nonexpansive from the following elementary facts: the product of nonexpansive operators is nonexpansive, and a convex combination of nonexpansive operators is nonexpansive. Also, (1 − 1/2m)^{2m} ∈ [1/4, 1/e] for all m. Thus, if we repeat these asynchronous updates periodically in a series of ‘update cycles’, we are guaranteed to quickly converge to an ε-FP of 𝖳 ([3]).
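For concreteness, here is a minimal sketch (ours, purely illustrative) of one such asynchronous ‘update cycle’ in the one-exchange case: every directed coordinate α_{i\j} is updated once, in a fixed arbitrary order, with damping 1/(2m), while all other coordinates are held fixed.

def async_cycle(alpha, w, nbrs):
    """One asynchronous update cycle over all 2m directed coordinates alpha[(i, j)] = alpha_{i\\j}."""
    directed = [(i, j) for i in nbrs for j in nbrs[i]]            # the 2m directed edges
    damping = 1.0 / len(directed)
    for (i, j) in directed:
        # Offers m_{k->i} computed from the *current* state, with the same formula as in Section 5.2.
        m = {k: max(w[(k, i)] - alpha[(k, i)], 0.0)
                - 0.5 * max(w[(k, i)] - alpha[(k, i)] - alpha[(i, k)], 0.0)
             for k in nbrs[i]}
        best_alt = max([m[k] for k in nbrs[i] if k != j], default=0.0)
        alpha[(i, j)] = (1 - damping) * alpha[(i, j)] + damping * best_alt
    return alpha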

Proposition 17.

An ϵ{\epsilon}-FP of 𝖳{\sf{T}} is an O(mϵ)O(m{\epsilon})-FP of the natural dynamics.

Proof.

Suppose we start an update cycle at α¯\underline{\alpha}, an ϵ{\epsilon}-FP of 𝖳{\sf{T}}. Then we know that at the end of the update cycle, no coordinate changes by more than (11/4)ϵϵ(1-1/4){\epsilon}\leq{\epsilon}. Note that among the 2m2m steps in a cycle, any particular i\ji\backslash j ‘coordinate’ only changes in one step. Thus, each such coordinate change is bounded by ϵ{\epsilon}. Consider the ss-th step in the update cycle. The state before the ss-th step, call it α¯(s1)\underline{\alpha}(s-1), is ϵ{\epsilon}-close to α¯\underline{\alpha}. Also, we know that the (is\js)(i_{s}\backslash j_{s}) coordinate changes by at most ϵ{\epsilon} in this step. Hence,

‖𝖳_{i_s\j_s} α(s−1) − α(s−1)‖_∞ ≤ (2m)ε
⇒ ‖𝖳_{i_s\j_s} α − α‖_∞ ≤ (2m+2)ε.

This holds for s=1,2,,2ms=1,2,\ldots,2m. Hence the result. ∎

Note that with ϵ=0{\epsilon}=0, Proposition 17 tells us that fixed points of 𝖳{\sf{T}} are fixed points of the natural dynamics. Thus, we are immediately guaranteed convergence to fixed points of the natural dynamics. Moreover, the quantitative estimate in Proposition 17 guarantees that in a small number of update cycles we reach approximate fixed points of the natural dynamics.

Finally, we comment that instead of ordering updates by a permutation of directed edges, we could have an arbitrary periodic sequence of updates satisfying non-starvation and obtain similar results. For example, this would include cases where some nodes update more frequently than others. Also, note that the damping factors of (1/2m)(1/2m) were chosen for simplicity and to ensure fast convergence. Any non-trivial damping would suffice to guarantee convergence.

It remains an open question to show convergence for non-periodic asynchronous updates.

Appendix B Proof of Isolation lemma

Our proof of the isolation lemma is adapted from [21].

Proof of Lemma 3.

Fix eEe\in E and fix w¯e\bar{w}_{e^{\prime}} for all eE\ee^{\prime}\in E\backslash e. Let MeM_{e} be a maximum weight matching among matchings that strictly include edge ee, and let MeM_{\sim e} be a maximum weight matching among matchings that exclude edge ee. Clearly, MeM_{e} and MeM_{\sim e} are independent of w¯e\bar{w}_{e}. Define

fe(w¯e)\displaystyle f_{e}(\bar{w}_{e}) w¯(Me)=fe(0)+w¯e\displaystyle\equiv\bar{w}(M_{e})\ \;=f_{e}(0)+\bar{w}_{e}
fe\displaystyle f_{\sim e} w¯(Me)=const<\displaystyle\equiv\bar{w}(M_{\sim e})=\mbox{const}<\infty

Clearly, f_e(0) ≤ f_{∼e}, since forcing the exclusion of a zero-weight edge cannot hurt. Thus, there is some unique θ ≥ 0 such that f_e(θ) = f_{∼e}. Define δ = ηξ/(2|E|). Let D(e) be the event that |w̄(M_e) − w̄(M_{∼e})| < δ. It is easy to see that D(e) occurs if and only if w̄_e ∈ (θ−δ, θ+δ). Thus, Pr[D(e)] ≤ 2δ/η = ξ/|E|. Now,

{w¯(M)w¯(M)<δ}=eED(e)\displaystyle\bigg{\{}\bar{w}(M^{*})-\bar{w}(M^{**})<\delta\bigg{\}}\;=\;\bigcup_{e\in E}D(e) (36)

and the lemma follows by a union bound. ∎
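A quick Monte Carlo illustration (ours, purely for intuition) of Lemma 3 on the complete bipartite graph K_{2,2}: perturb each weight uniformly in [w_e, w_e + η] and estimate how often the two best matchings end up within ηξ/(2|E|) of each other; the lemma bounds this probability by ξ.

import itertools, random

edges = [(0, 2), (0, 3), (1, 2), (1, 3)]          # K_{2,2}
base_w = {e: 1.0 for e in edges}
eta, xi = 0.1, 0.2
delta = eta * xi / (2 * len(edges))

def matchings(edge_list):
    """All matchings (including the empty one) of a small graph, by brute force."""
    for r in range(len(edge_list) + 1):
        for sub in itertools.combinations(edge_list, r):
            nodes = [v for e in sub for v in e]
            if len(nodes) == len(set(nodes)):
                yield sub

bad, trials = 0, 2000
for _ in range(trials):
    w_bar = {e: base_w[e] + eta * random.random() for e in edges}
    top_two = sorted((sum(w_bar[e] for e in m) for m in matchings(edges)), reverse=True)[:2]
    if top_two[0] - top_two[1] < delta:
        bad += 1
print(bad / trials, "<=", xi)   # empirical frequency of a small gap, versus the bound xi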

Appendix C Proofs of fixed point properties

In this section we state and prove the fixed point properties that were used for the proof of Theorem 1 in Section 8. Before that, however, we remark that the condition: “LP (2) has a unique optimum” in Theorem 1(a) is almost always valid.

Remark 18.

We argue that the condition “LP (2) has a unique optimum” is generic in instances with integral optimum:
Let 𝖦I[0,W]|E|{\sf G}_{\textup{I}}\subset[0,W]^{|E|} be the set of instances having an integral optimum. Let 𝖦UI𝖦I{\sf G}_{\textup{UI}}\subset{\sf G}_{\textup{I}} be the set of instances having a unique integral optimum. It turns out that 𝖦I{\sf G}_{\textup{I}} has dimension |E||E| (i.e. the class of instances having an integral optimum is large) and that 𝖦UI{\sf G}_{\textup{UI}} is both open and dense in 𝖦I{\sf G}_{\textup{I}}.

Notation. In the proofs of this section and Appendix D, we denote the surplus w_{ij} − α_{i\j} − α_{j\i} of edge (ij) by Surp_{ij}.

Lemma 4.

γ¯\underline{\gamma} satisfies the constraints of the dual problem (3).

Proof.

Since the offers m_{i→j} are non-negative by definition, we have γ_v ≥ 0 for all v∈V. So we only need to show γ_i + γ_j ≥ w_{ij} for any edge (ij)∈E. It is easy to see that γ_i ≥ α_{i\j} and γ_j ≥ α_{j\i}. Therefore, if α_{i\j} + α_{j\i} ≥ w_{ij} then γ_i + γ_j ≥ w_{ij} holds and we are done. Otherwise, for α_{i\j} + α_{j\i} < w_{ij}, we have m_{i→j} = (w_{ij} − α_{i\j} + α_{j\i})/2 and m_{j→i} = (w_{ij} − α_{j\i} + α_{i\j})/2, which gives γ_i + γ_j ≥ m_{i→j} + m_{j→i} = w_{ij}. ∎

Recall that for any (ij)E(ij)\in E, we say that ii and jj are ‘partners’ if γi+γj=wij\gamma_{i}+\gamma_{j}=w_{ij} and P(i)P(i) denotes the partners of node ii. In other words P(i)={j:ji,γi+γj=wij}P(i)=\{j:j\in\partial i,\gamma_{i}+\gamma_{j}=w_{ij}\}.

Lemma 5.

The following are equivalent:
(a) ii and jj are partners,
(b) 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\geq 0.
(c) γi=mji\gamma_{i}=m_{j\rightarrow i} and γj=mij\gamma_{j}=m_{i\rightarrow j}.
Moreover, if γi=mji\gamma_{i}=m_{j\rightarrow i} and γj>mij\gamma_{j}>m_{i\rightarrow j} then γi=0\gamma_{i}=0.

Proof.

We will prove (a)(b)(c)(a)(a)\Rightarrow(b)\Rightarrow(c)\Rightarrow(a).

(a)⇒(b): Since γ_i ≥ α_{i\j} and γ_j ≥ α_{j\i} always hold, we get w_{ij} = γ_i + γ_j ≥ α_{i\j} + α_{j\i}.

(b)(c)(b)\Rightarrow(c): If 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\geq 0 then (wijαi\j+αj\i)/2αj\i(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2\geq\alpha_{j\backslash i}. But mij=(wijαi\j+αj\i)/2m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2 therefore γj=mij\gamma_{j}=m_{i\rightarrow j}. The argument for γi=mji\gamma_{i}=m_{j\rightarrow i} is similar.

(c)(a)(c)\Rightarrow(a): If 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\geq 0 then mij=(wijαi\j+αj\i)/2m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2 and mji=(wijαj\i+αi\j)/2m_{j\rightarrow i}=(w_{ij}-\alpha_{j\backslash i}+\alpha_{i\backslash j})/2 which gives γi+γj=mij+mji=wij\gamma_{i}+\gamma_{j}=m_{i\rightarrow j}+m_{j\rightarrow i}=w_{ij} and we are done. Otherwise, we have γi+γj=mij+mji(wijαi\j)++(wijαj\i)+<max[(wijαi\j)+,(wijαj\i)+,2wijαi\jαj\i]wij\gamma_{i}+\gamma_{j}=m_{i\rightarrow j}+m_{j\rightarrow i}\leq(w_{ij}-\alpha_{i\backslash j})_{+}+(w_{ij}-\alpha_{j\backslash i})_{+}<\max\bigg{[}(w_{ij}-\alpha_{i\backslash j})_{+},(w_{ij}-\alpha_{j\backslash i})_{+},2w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}\bigg{]}\leq w_{ij} which contradicts Lemma 4 that γ¯\underline{\gamma} satisfies the constraints of the dual problem (3).

Finally, we need to show that γi=mji\gamma_{i}=m_{j\rightarrow i} and γj>mij\gamma_{j}>m_{i\rightarrow j} give γi=0\gamma_{i}=0. First note that by equivalence of (b)(b) and (c)(c) we should have wij<αi\j+αj\iw_{ij}<\alpha_{i\backslash j}+\alpha_{j\backslash i}. On the other hand αi\jγi=mji(wijαj\i)+\alpha_{i\backslash j}\leq\gamma_{i}=m_{j\rightarrow i}\leq(w_{ij}-\alpha_{j\backslash i})_{+}. Now if wijαj\i>0w_{ij}-\alpha_{j\backslash i}>0 we get αi\jwijαj\i\alpha_{i\backslash j}\leq w_{ij}-\alpha_{j\backslash i} which is a contradiction. Therefore γi=(wijαj\i)+=0\gamma_{i}=(w_{ij}-\alpha_{j\backslash i})_{+}=0. ∎

Lemma 6.

The following are equivalent:
(a) P(i)={j}P(i)=\{j\} and γi>0\gamma_{i}>0,
(b) P(j)={i}P(j)=\{i\} and γj>0\gamma_{j}>0,
(c) wijαi\jαj\i>0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}>0.
(d) ii and jj receive unique best positive offers from each other.

Proof.

(a)(c)(b)(a)\Rightarrow(c)\Rightarrow(b): (a)(a) means that for all ki\jk\in\partial i\backslash j, 𝒮urpik<0\mathcal{S}\textup{urp}_{ik}<0. This means mki=(wikαk\i)+<αi\k=mjim_{k\rightarrow i}=(w_{ik}-\alpha_{k\backslash i})_{+}<\alpha_{i\backslash k}=m_{j\rightarrow i} (using γi>0\gamma_{i}>0). Hence, αi\j<mji\alpha_{i\backslash j}<m_{j\rightarrow i}. From (a)(a), it also follows that mji>0m_{j\rightarrow i}>0 or (wijαj\i)+=wijαj\i(w_{ij}-\alpha_{j\backslash i})_{+}=w_{ij}-\alpha_{j\backslash i}. Therefore, mji(wijαj\i)+=wijαj\im_{j\rightarrow i}\leq(w_{ij}-\alpha_{j\backslash i})_{+}=w_{ij}-\alpha_{j\backslash i} which gives wijαi\jαj\i>0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}>0 or (c)(c). From this we can explicitly write mij=(wijαi\j+αj\i)/2m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2 which is strictly bigger than αj\i\alpha_{j\backslash i}. Hence we obtain (b)(b).

By symmetry (b)(c)(a)(b)\Rightarrow(c)\Rightarrow(a). Thus, we have shown that (a)(a), (b)(b) and (c)(c) are equivalent.

(c)(d)(c)\Rightarrow(d): (c)(c) implies that mij=(wijαi\j+αj\i)/2>αj\i=maxkj\imkjm_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2>\alpha_{j\backslash i}=\max_{k\in\partial j\backslash i}m_{k\rightarrow j}. Thus, jj receives its unique best positive offer from ii. Using symmetry, it follows that (d)(d) holds.

(d)(c)(d)\Rightarrow(c): (d)(d) implies γi=mji\gamma_{i}=m_{j\rightarrow i} and γj=mij\gamma_{j}=m_{i\rightarrow j}. By Lemma 5, ii and jj are partners, i.e. γi+γj=wij\gamma_{i}+\gamma_{j}=w_{ij}. Hence, mij+mji=wijm_{i\rightarrow j}+m_{j\rightarrow i}=w_{ij}. But since (d)(d) holds, αi\j<mji\alpha_{i\backslash j}<m_{j\rightarrow i} and αj\i<mij\alpha_{j\backslash i}<m_{i\rightarrow j}. This leads to (c)(c).

This finishes the proof. ∎

Recall that (ij)(ij) is a weak-dotted edge if wijαi\jαj\i=0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}=0, a strong-dotted edge if wijαi\jαj\i>0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}>0, and a non-dotted edge otherwise. Basically, for any dotted edge (ij)(ij) we have jP(i)j\in P(i) and iP(j)i\in P(j).

Corollary 19.

A corollary of Lemmas 5-6 is that strong-dotted edges are only adjacent to non-dotted edges. Also, each weak-dotted edge is adjacent to at least one other weak-dotted edge at each end (assuming that the earnings of the two endpoints are non-zero).

Lemma 7.

If i has no adjacent dotted edges, then γ_i = 0.

Proof.

Assume that the largest offer to i comes from j. Then α_{i\j} ≤ m_{j→i} ≤ (w_{ij} − α_{j\i})_+. Now, if w_{ij} − α_{j\i} > 0, then α_{i\j} ≤ w_{ij} − α_{j\i}, i.e., (ij) is a dotted edge, which is impossible. Thus, w_{ij} − α_{j\i} ≤ 0, whence m_{j→i} = 0 and γ_i = 0. ∎

Lemma 8.

The following are equivalent:
(a) αi\j=γi\alpha_{i\backslash j}=\gamma_{i},
(b) 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\leq 0,
(c) mij=(wijαi\j)+m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j})_{+}.

Proof.

(a)(b)(a)\Rightarrow(b): “not (b)” mji=(wijαj\i+αi\j)/2>αi\j\Rightarrow m_{j\rightarrow i}=(w_{ij}-\alpha_{j\backslash i}+\alpha_{i\backslash j})/2>\alpha_{i\backslash j}\Rightarrow “not (a)”.

(b)(c)(b)\Rightarrow(c): Follows from the definition of mijm_{i\rightarrow j}.

(c)(a)(c)\Rightarrow(a): From mij=(wijαi\j)+m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j})_{+} we have 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\leq 0. Therefore, mji=(wijαj\i)+max[wijαj\i,0]αi\jm_{j\rightarrow i}=(w_{ij}-\alpha_{j\backslash i})_{+}\leq\max\big{[}w_{ij}-\alpha_{j\backslash i},0\big{]}\leq\alpha_{i\backslash j}. ∎

Note that (b) is symmetric in ii and jj, so (a) and (c) can be transformed by interchanging ii and jj.

Corollary 20.

αi\j=γi\alpha_{i\backslash j}=\gamma_{i} if and only if αj\i=γj\alpha_{j\backslash i}=\gamma_{j}

Lemma 9.

mij=(wijγi)+m_{i\rightarrow j}=(w_{ij}-\gamma_{i})_{+} holds (ij)E\forall\;(ij)\in E

Proof.

If wijαi\jαj\i0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}\leq 0 then the result follows from Lemma 8. Otherwise, (ij)(ij) is strongly dotted and γi=mji=(wijαj\i+αi\j)/2\gamma_{i}=m_{j\rightarrow i}=(w_{ij}-\alpha_{j\backslash i}+\alpha_{i\backslash j})/2, γj=mij=(wijαi\j+αj\i)/2\gamma_{j}=m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2. From here we can explicitly calculate wijγi=(wijαi\j+αj\i)/2=mijw_{ij}-\gamma_{i}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2=m_{i\rightarrow j}. ∎

Lemma 10.

The unmatched balance property, equation (1), holds at every edge (ij)E(ij)\in E, and both sides of the equation are non-negative.

Proof.

In light of Lemma 9, Eq. (1) can be rewritten at a fixed point as

γiαi\j=γjαj\i\displaystyle\gamma_{i}-\alpha_{i\backslash j}=\gamma_{j}-\alpha_{j\backslash i} (37)

which is easy to verify. The case 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\leq 0 leads to both sides of Eq. (37) being 0 by Corollary 20. The other case 𝒮urpij>0\mathcal{S}\textup{urp}_{ij}>0 leads to

mijαj\i=mjiαi\j=𝒮urpij2\displaystyle m_{i\rightarrow j}-\alpha_{j\backslash i}=m_{j\rightarrow i}-\alpha_{i\backslash j}=\frac{\mathcal{S}\textup{urp}_{ij}}{2} (38)

Clearly, we have γi=mji\gamma_{i}=m_{j\rightarrow i} and γj=mij\gamma_{j}=m_{i\rightarrow j}. So Eq. (37) holds. ∎

The next lemmas show that dotted edges are in correspondence with the solid edges defined in Section 8.

Lemma 11.

A non-solid edge cannot be a dotted edge, weak or strong.

Before proving the lemma, let us define alternating paths. A path P=(i_1, i_2, …, i_k) in G is called an alternating path if: (a) there exists a partition of the edges of P into two sets A, B such that either A⊂M^* or B⊂M^*; moreover, A (respectively B) consists of all odd (even) edges, i.e., A={(i_1,i_2),(i_3,i_4),…} and B={(i_2,i_3),(i_4,i_5),…}; (b) the path P may intersect itself or even repeat its own edges, but no edge is repeated immediately, that is, for any 1≤r≤k−2: i_r ≠ i_{r+1} and i_r ≠ i_{r+2}. P is called an alternating cycle if i_1 = i_k.

Also, consider x^* and y^* that are optimum solutions for the LP (2) and its dual (3). The complementary slackness conditions (see [38] for more details) state that for all v∈V, y^*_v(∑_{e∈∂v} x^*_e − 1) = 0, and for all e=(ij)∈E, x^*_e(y^*_i + y^*_j − w_{ij}) = 0. Therefore, for all solid edges the equality y^*_i + y^*_j = w_{ij} holds. Moreover, any node v∈V is adjacent to a solid edge if and only if y^*_v > 0.

Proof of Lemma 11.

First, we refine the notion of solid edges by calling an edge ee, 1-x¯\underline{x}^{*}-solid (12\frac{1}{2}-x¯\underline{x}^{*}-solid) whenever xe=1x_{e}^{*}=1 (xe=12x_{e}^{*}=\frac{1}{2}).

We need to consider two cases:

Case (I). Assume that the LP has an optimum solution x^* that is integral (i.e., the LP is tight).

The idea of the proof is that if there exists a non-solid edge e which is dotted, we use an analysis similar to [6] to construct an alternating path, consisting of dotted and x^*-solid edges, that leads to an optimal solution of LP (2) assigning a positive value to e. This contradicts the assumption that e is non-solid.

Now assume the contrary: take (i_1, i_2) that is a non-solid edge but is dotted. Consider an endpoint of (i_1, i_2), say i_2. Either there is an x^*-solid edge attached to i_2 or not. If there is not, we stop. Otherwise, assume (i_2, i_3) is an x^*-solid edge. Using Lemma 7, either γ_{i_3} = 0 or there is a dotted edge connected to i_3. But if this dotted edge is (i_2, i_3), then P(i_2) ⊇ {i_1, i_3}, and therefore, by Lemma 6, there has to be another dotted edge (i_3, i_4) connected to i_3. Now, depending on whether i_4 has (has not) an adjacent x^*-solid edge, we continue (stop) the construction. A similar procedure can be carried out starting at i_1 instead of i_2. Therefore, we obtain an alternating path P=(i_{−k}, …, i_{−1}, i_0, i_1, i_2, …, i_ℓ) with all odd edges being dotted and all even edges being x^*-solid. Using the same argument as in [6], one can show that one of the following four scenarios occurs.

Path: Before PP intersects itself, both end-points of the path stop. Either the last edge is x¯\underline{x}^{*}-solid (then γv=0\gamma_{v}=0 for the last node) or the last edge is a dotted edge. Now consider a new solution x¯\underline{x}^{\prime} to LP (2) by xe=xex_{e}^{\prime}=x_{e}^{*} if ePe\notin P and xe=1xex_{e}^{\prime}=1-x_{e}^{*} if ePe\in P. It is easy to see that x¯\underline{x}^{\prime} is a feasible LP solution at all points vPv\notin P and also for internal vertices of PP. The only nontrivial case is when v=ikv=i_{-k} (or v=iv=i_{\ell}) and the edge (ik,ik+1)(i_{-k},i_{-k+1}) (or (i1,i)(i_{\ell-1},i_{\ell}) ) is dotted. In both of these cases, by construction vv is not connected to an x¯\underline{x}^{*}-solid edge outside of PP. Hence, making any change inside of PP is safe. Now denote the weight of all solid (dotted) edges of PP by w(Psolid)w(P_{\textrm{solid}}) (w(Pdotted)w(P_{\textrm{dotted}})). Here, we only include edges outside PsolidP_{\textrm{solid}} in PdottedP_{\textrm{dotted}}. Clearly,

eEwexeeEwexe=w(Psolid)w(Pdotted).\displaystyle\sum_{e\in E}w_{e}x_{e}^{*}-\sum_{e\in E}w_{e}x_{e}^{\prime}=w(P_{\textrm{solid}})-w(P_{\textrm{dotted}}). (39)

But w(P_{\textrm{dotted}})=\sum_{v\in P}\gamma_v. Moreover, from Lemma 4, \underline{\gamma} is dual feasible, which gives w(P_{\textrm{solid}})\leq\sum_{v\in P}\gamma_v; here we use the fact that if there is an \underline{x}^*-solid edge at an endpoint of P, then the \gamma of that endpoint must be 0. Now Eq. (39) reduces to \sum_{e\in E}w_e x_e^*-\sum_{e\in E}w_e x_e'\leq 0. This contradicts the assumption that e=(i_1,i_2) is non-solid, since x_e'>0.
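The path scenario above is essentially a local exchange argument. As an illustration only (our own notation; the actual argument is the text above), the flip of the primal values along P and the weight comparison of Eq. (39) can be written as follows.

def flip_along_path(x, path_edges):
    # x: dict mapping each edge to its LP value; flip 0 <-> 1 on the path edges
    x_new = dict(x)
    for e in path_edges:
        x_new[e] = 1 - x[e]          # solid edges leave the solution, dotted edges enter
    return x_new

def total_weight(x, w):
    return sum(w[e] * x[e] for e in x)

# The argument shows total_weight(x_star, w) - total_weight(flip_along_path(x_star, P), w) <= 0,
# so the flipped solution is also optimal while putting positive mass on the non-solid edge.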

Cycle: P intersects itself and contains an even cycle C_{2s}. This case can be handled very similarly to the path case by defining x_e'=x_e^* if e\notin C_{2s} and x_e'=1-x_e^* if e\in C_{2s}. The proof is even simpler since the extra check of the boundary condition is not necessary.

Blossom: P intersects itself and contains an odd cycle C_{2s+1} with a path (stem) P' attached to the cycle at node u. In this case let x_e'=x_e^* if e\notin P'\cup C_{2s+1}, x_e'=1-x_e^* if e\in P', and x_e'=\frac{1}{2} if e\in C_{2s+1}. From here on, we drop the subscript 2s+1 to simplify the notation. Since the cycle has odd length, both neighbors of u in C have to be dotted. Therefore,

eEwexeeEwexe\displaystyle\sum_{e\in E}w_{e}x_{e}^{*}-\sum_{e\in E}w_{e}x_{e}^{\prime}
=\displaystyle= w(Psolid)+w(Csolid)w(Pdotted)\displaystyle\,w(P^{\prime}_{\textrm{solid}})+w(C_{\textrm{solid}})-w(P^{\prime}_{\textrm{dotted}})
w(Cdotted)+w(Csolid)2\displaystyle-\frac{w(C_{\textrm{dotted}})+w(C_{\textrm{solid}})}{2}
=\displaystyle= w(Psolid)+w(Csolid)2w(Pdotted)w(Cdotted)2.\displaystyle\,w(P^{\prime}_{\textrm{solid}})+\frac{w(C_{\textrm{solid}})}{2}-w(P^{\prime}_{\textrm{dotted}})-\frac{w(C_{\textrm{dotted}})}{2}\,.

Plugging w(Psolid)vPγvw(P^{\prime}_{\textrm{solid}})\leq\sum_{v\in P^{\prime}}\gamma_{v}, w(Csolid)vCγvγuw(C_{\textrm{solid}})\leq\sum_{v\in C}\gamma_{v}-\gamma_{u}, w(Pdotted)=vPγvγuw(P^{\prime}_{\textrm{dotted}})=\sum_{v\in P^{\prime}}\gamma_{v}-\gamma_{u} and w(Cdotted)=vCγv+γuw(C_{\textrm{dotted}})=\sum_{v\in C}\gamma_{v}+\gamma_{u}, we obtain

eEwexeeEwexe 0,\displaystyle\sum_{e\in E}w_{e}x_{e}^{*}-\sum_{e\in E}w_{e}x_{e}^{\prime}\leq\,0\,,

which is again a contradiction.

Bicycle: P intersects itself at least twice and contains two odd cycles C_{2s+1} and C'_{2s'+1} with a path (stem) P' connecting them. Very similarly to the blossom case, let x_e'=x_e^* if e\notin P'\cup C\cup C', x_e'=1-x_e^* if e\in P', and x_e'=\frac{1}{2} if e\in C\cup C'. The proof follows as in the blossom case.

Case (II). Assume that there is an optimum solution \underline{x}^* of the LP that is not necessarily integral.

Everything is similar to Case (I), but the algebraic treatment is slightly different. Some edges e in P can be \frac{1}{2}-\underline{x}^*-solid (x_e^*=\frac{1}{2}). In particular, some of the odd (dotted) edges of P can now be \frac{1}{2}-\underline{x}^*-solid. However, the \frac{1}{2}-\underline{x}^*-solid edges of P can only form sub-paths of odd length in P, and on each such sub-path setting \underline{x}'=1-\underline{x}^* leaves \underline{x}^* unchanged. Therefore, all of the algebraic calculations should be carried out on those sub-paths of P that have no \frac{1}{2}-\underline{x}^*-solid edge, which means both of their boundary edges are dotted.

Path: Define \underline{x}' as in Case (I). Using the discussion above, let P^{(1)},\ldots,P^{(r)} be the disjoint sub-paths of P that have no \frac{1}{2}-\underline{x}^*-solid edge. Thus, \sum_{e\in E}w_e x_e^*-\sum_{e\in E}w_e x_e'=\sum_{i=1}^{r}\big[w(P^{(i)}_{\textrm{solid}})-w(P^{(i)}_{\textrm{dotted}})\big]. Since in each P^{(i)} the two boundary edges are dotted, w(P^{(i)}_{\textrm{solid}})\leq\sum_{v\in P^{(i)}}\gamma_v and \sum_{v\in P^{(i)}}\gamma_v=w(P^{(i)}_{\textrm{dotted}}). The rest follows as in Case (I).

Cycle, Blossom, Bicycle: These cases can be handled by the same method of breaking the paths and cycles into sub-paths P^{(i)} and following the path case.

Lemma 12.

Every strong-solid edge is a strong-dotted edge. Also, every weak-solid edge is a weak-dotted edge.

Proof.

We rule out all alternative cases one by one. In particular we prove:

(i) A strong-solid edge cannot be weak-dotted. If an edge (i,j) is strong-solid then it cannot be adjacent to another solid edge (weak or strong). Therefore, using Lemma 11, none of the edges adjacent to (i,j) are dotted. However, if (i,j) is weak-dotted, then by Lemma 6 it is adjacent to at least one other weak-dotted edge (since at least one of \gamma_i and \gamma_j is positive), which is a contradiction. Thus (i,j) cannot be weak-dotted.

(ii) A strong-solid edge cannot be non-dotted. As in (i), if an edge (i,j) is strong-solid it cannot be adjacent to dotted edges. Now, if (i,j) is non-dotted then \gamma_i=\gamma_j=0 by Lemma 7. Hence w_{ij}<\gamma_i+\gamma_j=0, which is a contradiction since we assumed all weights are positive.

(iii) A weak-solid edge cannot be strong-dotted. Assume (i_1,i_2) is weak-solid and strong-dotted. Then we show that the optimum of LP (2) can be improved, which is a contradiction. The proof is very similar to the proof of Lemma 11. Since (i_1,i_2) is weak-solid, there is a half-integral matching \underline{x}^* that is optimum for the LP and puts mass 1/2 or 0 on (i_1,i_2). Then either there is an adjacent \underline{x}^*-solid edge (i_2,i_3) or (i_0,i_1) with mass at least 1/2, or we stop. In the latter case, increasing the value of x^*_{i_1 i_2} increases \sum_{e\in E}w_e x_e^* while keeping it LP feasible, which is a contradiction. Otherwise, by the strong-dotted assumption on (i_1,i_2), the new edge (i_2,i_3) (respectively (i_0,i_1)) is not dotted. Now we select a dotted edge (i_3,i_4) if it exists (otherwise we stop, and in that case \gamma_{i_3}=0). This process is repeated, as in the proof of Lemma 11, in both directions to obtain an alternating path P=(i_{-k},\ldots,i_{-1},i_0,i_1,i_2,\ldots,i_\ell) with all odd edges dotted with \underline{x}^* value at most 1/2 and all even edges \underline{x}^*-solid with mass at least 1/2. We discuss the case of P being a simple path (not intersecting itself) here; the other cases (cycle, blossom, bicycle) can be treated similarly to the path, as in the proof of Lemma 11.

Construct an LP solution \underline{x}' that is equal to \underline{x}^* outside of P and inside satisfies x_e'=x_e^*+1/2 if e is an odd edge, i.e. e=(i_{2k-1},i_{2k}), and x_e'=x_e^*-1/2 if e is an even edge, i.e. e=(i_{2k},i_{2k+1}). It is easy to see that \underline{x}' is a feasible LP solution. Since for all edges (i_j,i_{j+1}) we have \gamma_{i_j}+\gamma_{i_{j+1}}\geq w_{i_j i_{j+1}}, with equality \gamma_{i_j}+\gamma_{i_{j+1}}=w_{i_j i_{j+1}} on dotted edges, we obtain \sum_{e\in E}w_e x_e'-\sum_{e\in E}w_e x_e^*=\frac{w(P_{\textrm{dotted}})-w(P_{\textrm{solid}})}{2}\geq\frac{\gamma_{i_2}+\gamma_{i_3}-w_{i_2 i_3}}{2}>0, where the last inequality follows from the fact that (i_2,i_3) is not dotted. Hence we reach a contradiction.

(iv) A weak-solid edge cannot be non-dotted. Assume (i_1,i_2) is weak-solid and non-dotted. As in (iii), we show that the optimum of LP (2) can be improved, which is a contradiction. Since (i_1,i_2) is weak-solid we can choose a half-integral \underline{x}^* that puts mass at least 1/2 on (i_1,i_2). This time the alternation in P is the opposite of (iii): we choose (i_2,i_3) to be dotted (if no such edge exists, then \gamma_{i_2}=0 and we stop). The solution \underline{x}' is constructed as before: equal to \underline{x}^* outside of P, with x_e'=x_e^*+1/2 if e is odd and x_e'=x_e^*-1/2 if e is even. Hence, \sum_{e\in E}w_e x_e'-\sum_{e\in E}w_e x_e^*\geq\frac{\gamma_{i_1}+\gamma_{i_2}-w_{i_1 i_2}}{2}>0, using the non-dotted assumption on (i_1,i_2). Hence, we obtain another contradiction. ∎

Lemma 13.

\underline{\gamma} is an optimum solution of the dual problem (3).

Proof.

Lemma 4 guarantees feasibility. Optimality follows from Lemmas 7, 11 and 12 as follows. Take any optimum half-integral solution \underline{x}^* of the LP. Now, using Lemma 12, \sum_v\gamma_v=\sum_{e\in E}w_e x_e^*, which finishes the proof. ∎

Theorem 12.

Let 𝒜𝒪𝒫𝒯\mathcal{BALOPT} be the set of optima of the dual problem (3) satisfying the unmatched balance property, Eq. (1), at every edge. If (α¯,m¯,γ¯)(\underline{\alpha},\underline{m},\underline{\gamma}) is a fixed point of the natural dynamics then γ¯𝒜𝒪𝒫𝒯\underline{\gamma}\in\mathcal{BALOPT}. Conversely, for every γ¯BO𝒜𝒪𝒫𝒯\underline{\gamma}_{\rm{BO}}\in\mathcal{BALOPT}, there is a unique fixed point of the natural dynamics with γ¯=γ¯BO\underline{\gamma}=\underline{\gamma}_{\rm{BO}}.

Proof.

The direct implication is immediate from Lemmas 10 and 13. The converse proof here follows the same steps as for Theorem 1, proved in Section 8. Instead of separately analyzing the cases (ij)M(ij)\in M and (ij)M(ij)\notin M, we study the cases γi+γj=wij\gamma_{i}+\gamma_{j}=w_{ij} and γi+γj>wij\gamma_{i}+\gamma_{j}>w_{ij}. ∎

Appendix D ϵ\epsilon-fixed point properties: Proof of Theorem 4

In this section we prove Theorem 4, stated in Section 3. Throughout, we assume that \underline{\alpha} is an \epsilon-fixed point with corresponding offers \underline{m} and earnings \underline{\gamma}. That is, for all i,j

ϵ\displaystyle{\epsilon} |αi\jmaxki\jmki|,\displaystyle\geq|\alpha_{i\backslash j}-\max_{k\in\partial i\backslash j}m_{k\rightarrow i}\big{|}\,,
mij\displaystyle m_{i\rightarrow j} =(wijαi\j)+(wijαi\jαj\i)+2,\displaystyle=(w_{ij}-\alpha_{i\backslash j})_{+}-\frac{(w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i})_{+}}{2}\,,
γi\displaystyle\gamma_{i} =maxkimki.\displaystyle=\max_{k\in{\partial i}}\,m_{k\rightarrow i}\,.
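Operationally, these three conditions are straightforward to verify. The sketch below is an illustration only (Python, with dictionary encodings of our own choosing: nbr[i] lists the neighbours of i, w[(i,j)]=w[(j,i)] is the edge weight, and alpha[(i,j)] stands for \alpha_{i\backslash j}); it recomputes the offers and earnings and returns the smallest \epsilon for which the given \underline{\alpha} is an \epsilon-fixed point, under the assumed convention that an empty maximum equals 0.

def eps_of_point(nbr, w, alpha):
    pos = lambda z: max(z, 0.0)
    m, gamma = {}, {}
    for i in nbr:
        for j in nbr[i]:
            surp = w[(i, j)] - alpha[(i, j)] - alpha[(j, i)]          # surplus of edge (ij)
            m[(i, j)] = pos(w[(i, j)] - alpha[(i, j)]) - pos(surp) / 2
    for i in nbr:
        gamma[i] = max(m[(k, i)] for k in nbr[i])                     # earnings estimate
    eps = 0.0
    for i in nbr:
        for j in nbr[i]:
            best_alt = max((m[(k, i)] for k in nbr[i] if k != j), default=0.0)
            eps = max(eps, abs(alpha[(i, j)] - best_alt))             # first condition above
    return m, gamma, eps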
Definition 21.

An edge (ij)(ij) is called δ\delta-dotted (δ0\delta\geq 0) if γi+γjwij+δ\gamma_{i}+\gamma_{j}\leq w_{ij}+\delta.

Lemma 14.

For every edge (ij)\in E and all \delta,\delta_1,\delta_2\in{\mathds{R}} the following hold:

(a) If (ij)(ij) is δ\delta-dotted then 𝒮urpij(2ϵ+δ)\mathcal{S}\textup{urp}_{ij}\geq-(2{\epsilon}+\delta).

(b) If 𝒮urpijδ\mathcal{S}\textup{urp}_{ij}\geq-\delta then mijγj(ϵ+δ)m_{i\rightarrow j}\geq\gamma_{j}-({\epsilon}+\delta) and mjiγi(ϵ+δ)m_{j\rightarrow i}\geq\gamma_{i}-({\epsilon}+\delta).

(c) If mijγjδ1m_{i\rightarrow j}\geq\gamma_{j}-\delta_{1} and mjiγiδ2m_{j\rightarrow i}\geq\gamma_{i}-\delta_{2} then (ij)(ij) is (δ1+δ2)(\delta_{1}+\delta_{2})-dotted.

(d) If γiδmji\gamma_{i}-\delta\leq m_{j\rightarrow i} and γj>mij+2ϵ+δ\gamma_{j}>m_{i\rightarrow j}+2{\epsilon}+\delta then γi=0\gamma_{i}=0.

(e) If γi>0\gamma_{i}>0 and mjiγiδm_{j\rightarrow i}\geq\gamma_{i}-\delta then (ij)(ij) is (2δ+2ϵ)(2\delta+2{\epsilon})-dotted.

(f) For γi,γj>0\gamma_{i},\gamma_{j}>0, mjiαi\j+δm_{j\rightarrow i}\leq\alpha_{i\backslash j}+\delta if and only if mijαj\i+δm_{i\rightarrow j}\leq\alpha_{j\backslash i}+\delta.

(h) For all (ij)(ij), |mij(wijγi)+|ϵ|m_{i\rightarrow j}-(w_{ij}-\gamma_{i})_{+}|\leq{\epsilon}.

(i) For all (ij)(ij), γi(wijγj)+ϵ\gamma_{i}-(w_{ij}-\gamma_{j})_{+}\geq-{\epsilon} and γi+γjwijϵ\gamma_{i}+\gamma_{j}\geq w_{ij}-{\epsilon}.

(j) For all ii, if γi>0\gamma_{i}>0 then there is at least a 2ϵ2{\epsilon}-dotted edge attached to ii.

Proof.

(a) Since \underline{\alpha} is an \epsilon-fixed point, \gamma_i\geq m_{i\rightarrow j}-\epsilon and \gamma_j\geq m_{j\rightarrow i}-\epsilon. Therefore, \mathcal{S}\textup{urp}_{ij}=w_{ij}-m_{i\rightarrow j}-m_{j\rightarrow i}\geq w_{ij}-\gamma_i-\gamma_j-2\epsilon\geq-(2\epsilon+\delta).

(b) First consider the case 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\leq 0. Then, mij=(wijαi\j)+wijαi\jαj\iδmaxj\i(mj)δϵm_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j})_{+}\geq w_{ij}-\alpha_{i\backslash j}\geq\alpha_{j\backslash i}-\delta\geq\max_{\ell\in\partial j\backslash i}(m_{\ell\rightarrow j})-\delta-{\epsilon}, which yields mijγj(ϵ+δ)m_{i\rightarrow j}\geq\gamma_{j}-({\epsilon}+\delta). The proof of mjiγi(ϵ+δ)m_{j\rightarrow i}\geq\gamma_{i}-({\epsilon}+\delta) is similar.

For the case 𝒮urpij>0\mathcal{S}\textup{urp}_{ij}>0, mij=wijαi\j+αj\i2=𝒮urpij2+αi\jmax(δ2,0)+maxj\i(mj)ϵm_{i\rightarrow j}=\frac{w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i}}{2}=\frac{\mathcal{S}\textup{urp}_{ij}}{2}+\alpha_{i\backslash j}\geq\max(\frac{-\delta}{2},0)+\max_{\ell\in\partial j\backslash i}(m_{\ell\rightarrow j})-{\epsilon}, and the rest follows as above.

(c) Note that γi+γjmij+mji+δ1+δ2\gamma_{i}+\gamma_{j}\leq m_{i\rightarrow j}+m_{j\rightarrow i}+\delta_{1}+\delta_{2}. If 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\geq 0 then the result follows from mij+mji=wijm_{i\rightarrow j}+m_{j\rightarrow i}=w_{ij}. For 𝒮urpij<0\mathcal{S}\textup{urp}_{ij}<0 the result follows from mij+mjimax[(wijαi\j)+,(wijαj\i)+,2wijαi\jαj\i]wij.m_{i\rightarrow j}+m_{j\rightarrow i}\leq\max[(w_{ij}-\alpha_{i\backslash j})_{+},(w_{ij}-\alpha_{j\backslash i})_{+},2w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}]\leq w_{ij}.

(d) We need to show that if \gamma_i\leq m_{j\rightarrow i}+\delta and \gamma_j>m_{i\rightarrow j}+2\epsilon+\delta, then \gamma_i=0. By part (b), the surplus must satisfy \mathcal{S}\textup{urp}_{ij}<-(\epsilon+\delta). On the other hand, \alpha_{i\backslash j}-\epsilon\leq\max_{k\in\partial i\backslash j}(m_{k\rightarrow i})\leq\gamma_i\leq m_{j\rightarrow i}+\delta\leq(w_{ij}-\alpha_{i\backslash j})_++\delta. Now, if \gamma_i>0 then w_{ij}-\alpha_{i\backslash j}>0, which gives \alpha_{i\backslash j}-\epsilon\leq w_{ij}-\alpha_{i\backslash j}+\delta. This is equivalent to \mathcal{S}\textup{urp}_{ij}\geq-(\epsilon+\delta), a contradiction. Hence \gamma_i=0.

(e) Using part (d)(d) we should have mijγj(2ϵ+δ)m_{i\rightarrow j}\geq\gamma_{j}-(2{\epsilon}+\delta). Now applying part (c)(c) the result follows.

(f) If 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\geq 0 then wijαj\i+αi\j2=mjiαi\j+δ\frac{w_{ij}-\alpha_{j\backslash i}+\alpha_{i\backslash j}}{2}=m_{j\rightarrow i}\leq\alpha_{i\backslash j}+\delta. This inequality is equivalent to mij=wijαi\j+αj\i2αj\i+δm_{i\rightarrow j}=\frac{w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i}}{2}\leq\alpha_{j\backslash i}+\delta, which proves the result. If 𝒮urpij<0\mathcal{S}\textup{urp}_{ij}<0 then wijαj\i(wijαj\i)+αi\j+δw_{ij}-\alpha_{j\backslash i}\leq(w_{ij}-\alpha_{j\backslash i})_{+}\leq\alpha_{i\backslash j}+\delta. This is equivalent to wijαi\jαj\i+δw_{ij}-\alpha_{i\backslash j}\leq\alpha_{j\backslash i}+\delta which yields the result.

(h) If 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\geq 0 then by part (b), mij+ϵγjm_{i\rightarrow j}+{\epsilon}\geq\gamma_{j} and mji+ϵγim_{j\rightarrow i}+{\epsilon}\geq\gamma_{i}. Therefore, using γjmij\gamma_{j}\geq m_{i\rightarrow j}, γimji\gamma_{i}\geq m_{j\rightarrow i} and mji+mij=wijm_{j\rightarrow i}+m_{i\rightarrow j}=w_{ij} we have, mijwijmjiwijγiwijmjiϵmijϵm_{i\rightarrow j}\geq w_{ij}-m_{j\rightarrow i}\geq w_{ij}-\gamma_{i}\geq w_{ij}-m_{j\rightarrow i}-{\epsilon}\geq m_{i\rightarrow j}-{\epsilon}, which gives the result.

If \mathcal{S}\textup{urp}_{ij}<0 then m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j})_+<\alpha_{j\backslash i}, which gives \gamma_j-\epsilon<\alpha_{j\backslash i}. On the other hand, \alpha_{j\backslash i}\leq\gamma_j+\epsilon holds. Similarly, \gamma_i+\epsilon\geq\alpha_{i\backslash j}\geq\gamma_i-\epsilon, which leads to |(w_{ij}-\alpha_{i\backslash j})_+-(w_{ij}-\gamma_i)_+|\leq\epsilon. Hence, the result follows from m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j})_+.

(i) Using part (h), m_{j\rightarrow i}+\epsilon\geq(w_{ij}-\gamma_j)_+. Now the result follows using \gamma_i\geq m_{j\rightarrow i}.

(j) There is at least one neighbor jij\in\partial i that sends the maximum offer mji=γim_{j\rightarrow i}=\gamma_{i}. Using part (d)(d) we should have mijγj2ϵm_{i\rightarrow j}\geq\gamma_{j}-2{\epsilon} and now the result follows from part (c)(c). ∎

Lemma 15.

For any edge (ij)\in E the earnings estimate \underline{\gamma} satisfies the 6\epsilon-balance property (i.e., Eq. (9) holds with 6\epsilon in place of \epsilon).

Proof.

Using Lemma 14(h), αi\j2ϵmaxki\j(mki)ϵmaxki\j[(wikγk)+]maxki\j(mki)+ϵαi\j+2ϵ\alpha_{i\backslash j}-2{\epsilon}\leq\max_{k\in\partial i\backslash j}(m_{k\rightarrow i})-{\epsilon}\leq\max_{k\in\partial i\backslash j}[(w_{ik}-\gamma_{k})_{+}]\leq\max_{k\in\partial i\backslash j}(m_{k\rightarrow i})+{\epsilon}\leq\alpha_{i\backslash j}+2{\epsilon}, or

|maxki\j[(wikγk)+]αi\j|2ϵ\displaystyle\left|\max_{k\in\partial i\backslash j}[(w_{ik}-\gamma_{k})_{+}]-\alpha_{i\backslash j}\right|\leq 2{\epsilon} (40)

Now, if \mathcal{S}\textup{urp}_{ij}\leq 0 then m_{j\rightarrow i}=(w_{ij}-\alpha_{j\backslash i})_+\leq\alpha_{i\backslash j}, which gives |\gamma_i-\alpha_{i\backslash j}|\leq\epsilon, or |\gamma_i-\max_{k\in\partial i\backslash j}[(w_{ik}-\gamma_k)_+]|\leq 3\epsilon. Therefore, the 6\epsilon-balance property holds.

And if \mathcal{S}\textup{urp}_{ij}>0, by Lemma 14(b) we have m_{j\rightarrow i}+\epsilon\geq\gamma_i. Hence, \frac{\mathcal{S}\textup{urp}_{ij}}{2}+\epsilon=m_{j\rightarrow i}-\alpha_{i\backslash j}+\epsilon\geq\gamma_i-\alpha_{i\backslash j}\geq m_{j\rightarrow i}-\alpha_{i\backslash j}=\frac{\mathcal{S}\textup{urp}_{ij}}{2}. The same bound holds for \gamma_j-\alpha_{j\backslash i} by symmetry. Therefore, using Eq. (40), |\gamma_i-\max_{k\in\partial i\backslash j}[(w_{ik}-\gamma_k)_+]| and |\gamma_j-\max_{\ell\in\partial j\backslash i}[(w_{j\ell}-\gamma_\ell)_+]| are within 3\epsilon\leq 6\epsilon of each other. ∎

Lemma 16.

If (ij) is \delta-dotted, k\in\partial i\backslash j, and \gamma_k>\max(\delta,\epsilon)+6\epsilon, then there exists r\in\partial k\backslash i such that (rk) is (\max(\delta,\epsilon)+6\epsilon)-dotted.

Proof.

Using \gamma_i+\gamma_j\leq w_{ij}+\delta and Lemma 14(i),

ϵγimaxsi\k[(wisγs)+]γi(wijγj)+δ.-{\epsilon}\leq\gamma_{i}-\max_{s\in\partial i\backslash k}[(w_{is}-\gamma_{s})_{+}]\leq\gamma_{i}-(w_{ij}-\gamma_{j})_{+}\leq\delta.

Therefore, |γimaxsi\k[(wisγs)+]|max(δ,ϵ)|\gamma_{i}-\max_{s\in\partial i\backslash k}[(w_{is}-\gamma_{s})_{+}]|\leq\max(\delta,{\epsilon}) which combined with Lemma 15 gives

|γkmaxrk\i[(wrkγr)+]|max(δ,ϵ)+6ϵ.|\gamma_{k}-\max_{r\in\partial k\backslash i}[(w_{rk}-\gamma_{r})_{+}]|\leq\max(\delta,{\epsilon})+6{\epsilon}.

This fact and \gamma_k>\max(\delta,\epsilon)+6\epsilon show that there exists r\in\partial k\backslash i with |\gamma_k-(w_{rk}-\gamma_r)_+|\leq\max(\delta,\epsilon)+6\epsilon, and the result follows. ∎

Lemma 17.

A non-solid edge cannot be a δ\delta-dotted edge for δ4ϵ\delta\leq 4{\epsilon}.

Note that this lemma holds even in the more general case where M^* is non-integral.

The proof is a more complex version of the proof of Lemma 11. Recall the notion of an alternating path from that proof.

Also, consider \underline{x}^* and \underline{y}^* that are optimum solutions of the LP (2) and its dual (3), and recall that by the complementary slackness conditions, for all solid edges the equality y_i^*+y_j^*=w_{ij} holds. Moreover, any node v\in V is adjacent to a solid edge if and only if y_v^*>0.

Proof of Lemma 17.

We need to consider two cases:

Case (I). Assume that the optimum LP solution \underline{x}^* is integral (the LP relaxation is tight). Now assume the contrary: take an edge (i_1,i_2) that is non-solid but \delta-dotted. Consider an endpoint of (i_1,i_2), say i_2. Either there is a solid edge attached to i_2 or not. If there is not, we stop. Otherwise, assume (i_2,i_3) is a solid edge. Using Lemma 16, either \gamma_{i_3}\leq 10\epsilon or there is a 10\epsilon-dotted edge (i_3,i_4) connected to i_3. Now, depending on whether i_4 has (does not have) an adjacent solid edge, we continue (stop) the construction. A similar procedure can be carried out starting at i_1 instead of i_2. Therefore, we obtain an alternating path P=(i_{-k},\ldots,i_{-1},i_0,i_1,i_2,\ldots,i_\ell) with each (i_{2k},i_{2k+1}) being (6k+4)\epsilon-dotted and all (i_{2k-1},i_{2k}) being solid. Using the same argument as in [6], one can show that one of the following four scenarios occurs.

Path: Before P intersects itself, both end-points of the path stop. At each end of the path, either the last edge is solid (then \gamma_v<(3n+4)\epsilon for the last node v) or the last edge is a (3n+4)\epsilon-dotted edge with no solid edge attached to v. Now consider a new solution \underline{x}' to LP (2) defined by x_e'=x_e^* if e\notin P and x_e'=1-x_e^* if e\in P. It is easy to see that \underline{x}' is a feasible LP solution at all nodes v\notin P and also at internal vertices of P. The only nontrivial case is when v=i_{-k} (or v=i_\ell) and the edge (i_{-k},i_{-k+1}) (or (i_{\ell-1},i_\ell)) is (3n+4)\epsilon-dotted. In both of these cases, by construction no solid edge is attached to v outside of P, so making any change inside of P is safe. Now denote the weight of all solid (remaining) edges of P by w(P_{\textrm{solid}}) (w(P_{\textrm{dotted}})). Hence, \sum_{e\in E}w_e x_e^*-\sum_{e\in E}w_e x_e'=w(P_{\textrm{solid}})-w(P_{\textrm{dotted}}).

But w(P_{\textrm{dotted}})+(3n^2+16n)\epsilon/4\geq\sum_{v\in P}\gamma_v. Moreover, from Lemma 14(i), \gamma_i+\gamma_j\geq w_{ij}-\epsilon for all (ij)\in P, which gives w(P_{\textrm{solid}})\leq\sum_{v\in P}\gamma_v+n\epsilon/2. Now \sum_{e\in E}w_e x_e^*-\sum_{e\in E}w_e x_e'=w(P_{\textrm{solid}})-w(P_{\textrm{dotted}}) yields \sum_{e\in E}w_e x_e^*-\sum_{e\in E}w_e x_e'\leq(3n^2+18n)\epsilon/4\leq n(n+5)\epsilon. For \epsilon<g/(6n^2), this contradicts the tightness of the LP relaxation (2), since x_e'\neq x_e^* holds at least for e=(i_1,i_2).

Cycle: PP intersects itself and will contain an even cycle C2sC_{2s}. This case can be handled very similar to the path by defining xe=xex_{e}^{\prime}=x_{e}^{*} if eC2se\notin C_{2s} and xe=1xex_{e}^{\prime}=1-x_{e}^{*} if eC2se\in C_{2s}. The proof is even simpler since the extra check for the boundary condition is not necessary.

Blossom: PP intersects itself and will contain an odd cycle C2s+1C_{2s+1} with a path (stem) PP^{\prime} attached to the cycle at point uu. In this case let xe=xex_{e}^{\prime}=x_{e}^{*} if ePC2s+1e\notin P^{\prime}\cup C_{2s+1}, and xe=1xex_{e}^{\prime}=1-x_{e}^{*} if ePe\in P^{\prime}, and xe=12x_{e}^{\prime}=\frac{1}{2} if eC2s+1e\in C_{2s+1}. From here, we drop the subindex 2s+12s+1 to simplify the notation. Since the cycle has odd length, both neighbors of uu in CC have to be dotted. Therefore,

eEwexeeEwexe\displaystyle\sum_{e\in E}w_{e}x_{e}^{*}-\sum_{e\in E}w_{e}x_{e}^{\prime} =w(Psolid)+w(Csolid)w(Pdotted)w(Cdotted)+w(Csolid)2,\displaystyle=w(P^{\prime}_{\textrm{solid}})+w(C_{\textrm{solid}})-w(P^{\prime}_{\textrm{dotted}})-\frac{w(C_{\textrm{dotted}})+w(C_{\textrm{solid}})}{2}\,,
=w(Psolid)+w(Csolid)2w(Pdotted)w(Cdotted)2\displaystyle=w(P^{\prime}_{\textrm{solid}})+\frac{w(C_{\textrm{solid}})}{2}-w(P^{\prime}_{\textrm{dotted}})-\frac{w(C_{\textrm{dotted}})}{2}
<vPγv+|P|2ϵ+vCγvγu2+sϵvPγv+γu\displaystyle<\sum_{v\in P^{\prime}}\gamma_{v}+\left\lceil\frac{|P|}{2}\right\rceil{\epsilon}+\frac{\sum_{v\in C}\gamma_{v}-\gamma_{u}}{2}+s{\epsilon}-\sum_{v\in P^{\prime}}\gamma_{v}+\gamma_{u}
+(3|P|2+16|P|4)ϵvCγv+γu2+(3s2+16s4)ϵ.\displaystyle\phantom{<\ \ }+\left(\frac{3|P|^{2}+16|P|}{4}\right){\epsilon}-\frac{\sum_{v\in C}\gamma_{v}+\gamma_{u}}{2}+\left(\frac{3s^{2}+16s}{4}\right){\epsilon}\,.

After the \gamma terms cancel, the right-hand side is at most n(n+5)\epsilon, which is again a contradiction.

Bicycle: P intersects itself at least twice and contains two odd cycles C_{2s+1} and C'_{2s'+1} with a path (stem) P' connecting them. Very similarly to the blossom case, let x_e'=x_e^* if e\notin P'\cup C\cup C', x_e'=1-x_e^* if e\in P', and x_e'=\frac{1}{2} if e\in C\cup C'. The proof follows as in the blossom case.

Case (II). Assume that the optimum LP solution x¯\underline{x}^{*} is not necessarily integral.

Everything is similar to Case (I), but the algebraic treatment is slightly different. Some edges e in P can be \frac{1}{2}-solid (x_e^*=\frac{1}{2}). In particular, some of the odd (dotted) edges of P can now be \frac{1}{2}-solid. However, the \frac{1}{2}-solid edges of P can only form sub-paths of odd length in P, and on each such sub-path setting \underline{x}'=1-\underline{x}^* leaves \underline{x}^* unchanged. Therefore, all of the algebraic calculations should be carried out on those sub-paths of P that have no \frac{1}{2}-solid edge, which means both of their boundary edges are dotted.

Path: Define \underline{x}' as in Case (I). Using the discussion above, let P^{(1)},\ldots,P^{(r)} be the disjoint sub-paths of P that have no \frac{1}{2}-solid edge. Thus, \sum_{e\in E}w_e x_e^*-\sum_{e\in E}w_e x_e'=\sum_{i=1}^{r}\big[w(P^{(i)}_{\textrm{solid}})-w(P^{(i)}_{\textrm{dotted}})\big]. Since in each P^{(i)} the two boundary edges are dotted, w(P^{(i)}_{\textrm{solid}})\leq\sum_{v\in P^{(i)}}\gamma_v+|P^{(i)}|\epsilon/2 and \sum_{v\in P^{(i)}}\gamma_v\leq w(P^{(i)}_{\textrm{dotted}})+(3|P^{(i)}|^2+16|P^{(i)}|)\epsilon/4. The rest follows as in Case (I).

Cycle, Blossom, Bicycle: These cases can be handled by the same method of breaking the paths and cycles into sub-paths P^{(i)} and following the path case.

The direct part of Theorem 4 follows from the next lemma.

Lemma 18.

α¯\underline{\alpha} induces the matching MM^{*}.

Proof.

From Lemma 17 it follows that the set of 2\epsilon-dotted edges is a subset of the solid edges. In particular, when the optimum matching M^* is integral, no node can be adjacent to more than one 2\epsilon-dotted edge. Define \underline{x}' to be zero on all edges except that x_e'=1 for all 2\epsilon-dotted edges (ij) with \gamma_i+\gamma_j>0; then clearly \underline{x}' is feasible for (2). On the other hand, using the definition of 2\epsilon-dotted for all e' with x_{e'}'=1, and Lemma 14(j) (each node with \gamma_i>0 is adjacent to at least one 2\epsilon-dotted edge), we can write \sum_{e\in E}w_e x_e'\geq\sum_{v\in V}\gamma_v-n\epsilon. Separately, from Lemma 14(i) we have \sum_{v\in V}\gamma_v\geq\sum_{e\in E}w_e x_e^*-\frac{n\epsilon}{2}, which shows that \underline{x}' is also an optimum solution of (2) (when \epsilon<g/(6n^2)). From the uniqueness assumption on \underline{x}^* we obtain that M^* is equal to the set of all 2\epsilon-dotted edges with at least one endpoint having a positive earning estimate. We would like to show that for any such edge (ij), both earning estimates \gamma_i and \gamma_j are positive.

Assume the contrary, i.e., without loss of generality \gamma_i=0. Then \mathcal{S}\textup{urp}_{ij}\leq 0 and 0=m_{j\rightarrow i}=(w_{ij}-\alpha_{j\backslash i})_+, which gives \alpha_{j\backslash i}\geq w_{ij}, or

mjαj\iϵwijϵ(wijαi\j)+ϵ=γjϵ.m_{\ell\rightarrow j}\geq\alpha_{j\backslash i}-{\epsilon}\geq w_{ij}-{\epsilon}\geq(w_{ij}-\alpha_{i\backslash j})_{+}-{\epsilon}=\gamma_{j}-{\epsilon}.

for some j\i\ell\in\partial j\backslash i. Now using Lemma 14(e) the edge (j)(j\ell) is 4ϵ4{\epsilon}-dotted which contradicts Lemma 17.

Finally, the endpoints of the matched edges provide each other their unique best offers. The latter follows from the fact that each node with \gamma_i>0 receives an offer equal to \gamma_i, and the edge corresponding to that offer has to be 2\epsilon-dotted by Lemma 14(d). The nodes with \gamma_i=0 receive no positive offer and are unmatched in M^* as well. ∎

Proof of Theorem 4.


For any \epsilon<g/(6n^2), an \epsilon-fixed point induces the matching M^* by Lemma 18. Additionally, the earning vector \underline{\gamma} is (6\epsilon)-balanced by Lemma 15. Next we show that (\underline{\gamma},M^*) is a stable trade outcome.

Lemma 19.

The earnings estimate \underline{\gamma} is an optimum solution of the dual (3). In particular, the pair (\underline{\gamma},M^*) is a stable trade outcome.

Proof.

Using Lemma 17, we can show that for any non-solid edge (ij)(ij), stability holds, i.e. γi+γjwij\gamma_{i}+\gamma_{j}\geq w_{ij}.

Now let (i,j) be a solid edge. Then i and j are sending each other their best offers. If \mathcal{S}\textup{urp}_{ij}\geq 0 we are done, using \gamma_i+\gamma_j=m_{j\rightarrow i}+m_{i\rightarrow j}=\frac{w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i}}{2}+\frac{w_{ij}-\alpha_{j\backslash i}+\alpha_{i\backslash j}}{2}=w_{ij}. If \mathcal{S}\textup{urp}_{ij}<0 then \gamma_i=m_{j\rightarrow i}=(w_{ij}-\alpha_{j\backslash i})_+\leq\alpha_{i\backslash j}. Similarly, \gamma_j\leq\alpha_{j\backslash i}. This means there exists k\in\partial i\backslash j with m_{k\rightarrow i}\geq\alpha_{i\backslash j}-\epsilon\geq\gamma_i-\epsilon. But then, by Lemma 14(e), the edge (ik) would be 4\epsilon-dotted, which is a contradiction. ∎

The converse of Theorem 4 is trivial since any \epsilon-NB solution (M,\underline{\gamma}_{\textup{NB}}) is stable and produces a trade outcome by definition; hence it is a dual optimal solution, which means M=M^*. ∎

Appendix E Proof of Theorem 8

Theorem 13.

Let G=(V,E)G=(V,E) with edge weights (wij)(ij)E(w_{ij})_{(ij)\in E} and capacity constraints 𝐛=(bi)\mathbf{b}=(b_{i}) be an instance such that the primal LP (16) has a unique optimum that is integral, corresponding to matching MM^{*}. Let (α¯,m¯,Γ)(\underline{\alpha},\underline{m},\Gamma) be a fixed point of the natural dynamics. Then α¯\underline{\alpha} induces matching MM^{*} and (M,Γ)(M^{*},\Gamma) is a Nash bargaining solution. Conversely, every Nash bargaining solution (M,ΓNB)(M,\Gamma_{{\textup{\tiny NB}}}) has M=MM=M^{*} and corresponds to a unique fixed point of the natural dynamics with Γ=ΓNB\Gamma=\Gamma_{{\textup{\tiny NB}}}.

Let 𝒮\mathcal{S} be the set of optimum solutions of LP (16). As in the one-matching case, we call eEe\in E a strong-solid edge if xe=1x_{e}^{*}=1 for all x𝒮x^{*}\in\mathcal{S} and a non-solid edge if xe=0x_{e}^{*}=0 for all x𝒮x^{*}\in\mathcal{S}. We call eEe\in E a weak-solid edge if it is neither strong-solid nor non-solid.

Proof of Theorem 8: From fixed points to NB solutions. The direct part follows from the following set of fixed point properties, similar to those for the one-matching case. Throughout (α¯,m¯,Γ)(\underline{\alpha},\underline{m},\Gamma) is a fixed point of the dynamics (17) (with Γ\Gamma given by (20), and m¯\underline{m} given by (4)). The properties are proved for the case when the primal LP in (16) has a unique integral optimum (which implies that there are no weak-solid edges).

(1) Two players (i,j)E(i,j)\in E are called partners if γi+γjwij\gamma_{i}+\gamma_{j}\leq w_{ij}. Then the following are equivalent: (a) ii and jj are partners, (b) wijαi\jαj\i0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}\geq 0, (c) γimji\gamma_{i}\leq m_{j\rightarrow i} and γjmij\gamma_{j}\leq m_{i\rightarrow j}.

(2) The following are equivalent: (a) wijαi\jαj\i>0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}>0, (b) γji>αi\j=(bithmax)ki\jmki\gamma_{j\to i}>\alpha_{i\backslash j}={(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in{\partial i}\backslash j}\,m_{k\rightarrow i}. Denote this set of edges by MM.

(3) We say that (i,j) is a weak-dotted edge if w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}=0, a strong-dotted edge if w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}>0, and a non-dotted edge otherwise. If i has fewer than b_i adjacent dotted edges, then \gamma_i=0.

(4) Each strong solid edge is strong dotted, and each non-solid edge is non-dotted.

(5) The balance property (15) holds at every edge (i,j)\in M.

(6) We have

mij={γijfor (ij)M,(wijγi)+for (ij)M.\displaystyle m_{i\rightarrow j}=\left\{\begin{array}[]{ll}\gamma_{i\to j}&\mbox{for }(ij)\in M\,,\\ (w_{ij}-\gamma_{i})_{+}&\mbox{for }(ij)\notin M\,.\end{array}\right.

(7) An optimum solution for the dual LP in (16) can be constructed as yi=γiy_{i}=\gamma_{i} for all iVi\in V and:

yij={wijγiγjfor (ij)M,0for (ij)M.\displaystyle y_{ij}=\left\{\begin{array}[]{ll}w_{ij}-\gamma_{i}-\gamma_{j}&\mbox{for }(ij)\in M\,,\\ 0&\mbox{for }(ij)\notin M\,.\end{array}\right.
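Property 7 is a purely mechanical construction. A hedged sketch of it (Python, with encodings of our own choosing: V and E the node and edge lists, w the weights, gamma the earnings, and M the set of strong-dotted edges) is:

def dual_certificate(V, E, w, gamma, M):
    in_M = lambda e: e in M or (e[1], e[0]) in M
    y_node = {i: gamma[i] for i in V}
    y_edge = {}
    for (i, j) in E:
        # w_ij - gamma_i - gamma_j is non-negative on M by property (1)
        y_edge[(i, j)] = w[(i, j)] - gamma[i] - gamma[j] if in_M((i, j)) else 0.0
    return y_node, y_edge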
Proof of Theorem 8, direct implication.

Assume that the primal LP in (16) has a unique optimum that is integral. Then, by property 4, the set of strong-dotted edges M is the unique maximum weight matching M^*, i.e. M=M^*, and all other edges are non-dotted. By property 3, for any i that has fewer than b_i partners under M^*, we have \gamma_i=0. Hence, by property 2, for i not saturated under M^*: for every (ij)\notin M^*, since (ij) is a non-dotted edge, \gamma_{j\to i}=\alpha_{i\backslash j}=0; and for every (ij)\in M^*, node i gets a positive incoming offer \gamma_{j\to i}=m_{j\rightarrow i}. For i saturated under M^*, property 2 yields that the b_i highest incoming offers to i come from neighbors in M^* (without ties). It follows that \underline{\alpha} induces the matching M^*. Also, we deduce that \gamma_{i\to j}=\gamma_{j\to i}=0 for (ij)\notin M^*.

From property 6 we deduce that γij+γji=wij\gamma_{i\to j}+\gamma_{j\to i}=w_{ij} for (ij)M(ij)\in M and from property 1, we deduce that mji<γim_{j\rightarrow i}<\gamma_{i} for (ij)M(ij)\notin M. It follows that (M,Γ)(M,\Gamma) is a trade outcome. Finally, by properties 7 and 5, the pair (M,γ¯)(M^{*},\underline{\gamma}) is stable and balanced respectively, and thus forms a NB solution. ∎

Proof of Theorem 8: From NB solutions to fixed points.

Proof.

Consider any NB solution (M,\Gamma_{\textup{NB}}). Using Proposition 1, M=M^*, the unique maximum weight matching. Construct a corresponding FP as follows. Set

mij={γNB,ijfor (ij)M,(wijγNB,i)+for (ij)M.\displaystyle m_{i\rightarrow j}=\left\{\begin{array}[]{ll}\gamma_{{\textup{\tiny NB}},i\to j}&\mbox{for }(ij)\in M\,,\\ (w_{ij}-\gamma_{{\textup{\tiny NB}},i})_{+}&\mbox{for }(ij)\notin M\,.\end{array}\right.

Compute α¯\underline{\alpha} using αi\j=(bithmax)ki\jmki\alpha_{i\backslash j}={(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in{\partial i}\backslash j}m_{k\rightarrow i}. We claim that this is a FP and that the corresponding Γ\Gamma is ΓNB\Gamma_{{\textup{\tiny NB}}}.
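As a concrete illustration of this construction (again a sketch with helper names of our own choosing, and the assumed convention that the b-th largest of fewer than b offers is 0), the offers and the \alpha messages can be computed as follows; gamma_dir[(i,j)] encodes \gamma_{\textup{NB},i\to j} on matched edges and gamma[i] the total earning of i.

def bth_max(values, b):
    vals = sorted(values, reverse=True)
    return vals[b - 1] if len(vals) >= b else 0.0      # assumed convention for short lists

def fixed_point_from_nb(nbr, w, b, M, gamma_dir, gamma):
    pos = lambda z: max(z, 0.0)
    in_M = lambda e: e in M or (e[1], e[0]) in M
    m = {(i, j): (gamma_dir[(i, j)] if in_M((i, j)) else pos(w[(i, j)] - gamma[i]))
         for i in nbr for j in nbr[i]}
    alpha = {(i, j): bth_max([m[(k, i)] for k in nbr[i] if k != j], b[i])
             for i in nbr for j in nbr[i]}
    return m, alpha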

To prove that we are at a fixed point, we imagine updated offers m¯upd\underline{m}^{\textup{upd}} based on α¯\underline{\alpha}, and show m¯upd=m¯\underline{m}^{\textup{upd}}=\underline{m}.

Consider a matching edge (i,j)M(i,j)\in M. We know that γNB,ij+γNB,ji=wij\gamma_{{\textup{\tiny NB}},i\to j}+\gamma_{{\textup{\tiny NB}},j\to i}=w_{ij}. Also, stability and balance tell us

γNB,ji(bithmax)ki\j(wikγNB,k)+=γNB,ij(bjthmax)lj\i(wjlγNB,l)+\gamma_{{\textup{\tiny NB}},j\to i}-{(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in\partial i\backslash j}(w_{ik}-\gamma_{{\textup{\tiny NB}},k})_{+}=\gamma_{{\textup{\tiny NB}},i\to j}-{(b_{j}^{\rm th}\!\operatorname{-max})}_{l\in\partial j\backslash i}(w_{jl}-\gamma_{{\textup{\tiny NB}},l})_{+}

and both sides are non-negative. For (i,k)M(i,k)\in M, we know that (wikγNB,k)+(wikγNB,ik)+=γNB,ki=mki(w_{ik}-\gamma_{{\textup{\tiny NB}},k})_{+}\geq(w_{ik}-\gamma_{{\textup{\tiny NB}},i\to k})_{+}=\gamma_{{\textup{\tiny NB}},k\to i}=m_{k\rightarrow i}. It follows that

(bithmax)ki\j(wikγNB,k)+=(bithmax)ki\jmki=αi\j{(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in\partial i\backslash j}(w_{ik}-\gamma_{{\textup{\tiny NB}},k})_{+}={(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in{\partial i}\backslash j}m_{k\rightarrow i}=\alpha_{i\backslash j}

Hence, γNB,jiαi\j=γNB,ijαj\i0\gamma_{{\textup{\tiny NB}},j\to i}-\alpha_{i\backslash j}=\gamma_{{\textup{\tiny NB}},i\to j}-\alpha_{j\backslash i}\geq 0. Therefore αi\j+αj\iwij\alpha_{i\backslash j}+\alpha_{j\backslash i}\leq w_{ij},

mijupd\displaystyle m_{i\rightarrow j}^{\textup{upd}} =wijαi\j+αj\i2=wijγNB,ji+γNB,ij2\displaystyle=\frac{w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i}}{2}=\frac{w_{ij}-\gamma_{{\textup{\tiny NB}},j\to i}+\gamma_{{\textup{\tiny NB}},i\to j}}{2}
=γNB,ij=mij.\displaystyle=\gamma_{{\textup{\tiny NB}},i\to j}=m_{i\rightarrow j}\,.

By symmetry, we also have mjiupd=γNB,ji=mjim_{j\rightarrow i}^{\textup{upd}}=\gamma_{{\textup{\tiny NB}},j\to i}=m_{j\rightarrow i}. Hence, the offers remain unchanged. Now consider (i,j)M(i,j)\notin M. We have γNB,i+γNB,jwij\gamma_{{\textup{\tiny NB}},i}+\gamma_{{\textup{\tiny NB}},j}\geq w_{ij} and, γNB,i=(bithmax)ki\jγNB,ki=αi\j\gamma_{{\textup{\tiny NB}},i}={(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in{\partial i}\backslash j}\gamma_{{\textup{\tiny NB}},k\to i}=\alpha_{i\backslash j}. A similar equation holds for γNB,j\gamma_{{\textup{\tiny NB}},j}. The validity of this identity can be checked individually in the cases when ii is saturated under MM and ii is not saturated under MM. Hence, αi\j+αj\iwij\alpha_{i\backslash j}+\alpha_{j\backslash i}\geq w_{ij}. This leads to mijupd=(wijαi\j)+=(wijγNB,i)+=mijm_{i\rightarrow j}^{\textup{upd}}=(w_{ij}-\alpha_{i\backslash j})_{+}=(w_{ij}-\gamma_{{\textup{\tiny NB}},i})_{+}=m_{i\rightarrow j}. By symmetry, we know also that mjiupd=mjim_{j\rightarrow i}^{\textup{upd}}=m_{j\rightarrow i}.

Finally, we show Γ=ΓNB\Gamma=\Gamma_{{\textup{\tiny NB}}}. Note that since we have already established α¯\underline{\alpha} is a fixed point, we know from the direct part that α¯\underline{\alpha} induces the matching MM, so there is no tie breaking required to determine the bib_{i} highest incoming offers to node iVi\in V. For all (i,j)M(i,j)\in M, we already found that mij=γNB,ijm_{i\rightarrow j}=\gamma_{{\textup{\tiny NB}},i\to j} and vice versa. For any edge (ij)M(ij)\notin M, we know mij=(wijγNB,i)+γNB,jm_{i\rightarrow j}=(w_{ij}-\gamma_{{\textup{\tiny NB}},i})_{+}\leq\gamma_{{\textup{\tiny NB}},j}. This immediately leads to Γ=ΓNB\Gamma=\Gamma_{{\textup{\tiny NB}}}. ∎

E.1 Proof of properties used in direct part

Now we prove the fixed point properties that were used in the direct part of the proof of Theorem 8. Before that, however, we remark that the condition “the primal LP in (16) has a unique optimum” in Theorem 8 is almost always valid.

Remark 22.

We argue that the condition “the primal LP in (16) has a unique optimum” is generic in instances with integral optimum:
Let 𝖦I[0,1]|E|{\sf G}_{\textup{I}}\subset[0,1]^{|E|} be the set of instances having an integral optimum, given a graph GG with capacity constraints 𝐛\mathbf{b}. Let 𝖦UI𝖦I{\sf G}_{\textup{UI}}\subset{\sf G}_{\textup{I}} be the set of instances having a unique integral optimum. It turns out that 𝖦I{\sf G}_{\textup{I}} has dimension |E||E| (i.e. the class of instances having an integral optimum is large) and that 𝖦UI{\sf G}_{\textup{UI}} is both open and dense in 𝖦I{\sf G}_{\textup{I}}.

Again, we denote the surplus w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i} of edge (ij) by \mathcal{S}\textup{urp}_{ij}.

Lemma 20.

The following are equivalent:
(a) γi+γjwij\gamma_{i}+\gamma_{j}\leq w_{ij},
(b) 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\geq 0.
(c) γimji\gamma_{i}\leq m_{j\rightarrow i} and γjmij\gamma_{j}\leq m_{i\rightarrow j}.
Moreover, if γimji\gamma_{i}\leq m_{j\rightarrow i} and γj>mij\gamma_{j}>m_{i\rightarrow j} then γi=0\gamma_{i}=0.

Proof.

We will prove (a)(b)(c)(a)(a)\Rightarrow(b)\Rightarrow(c)\Rightarrow(a).

(a)\Rightarrow(b): Since \gamma_i\geq\alpha_{i\backslash j} and \gamma_j\geq\alpha_{j\backslash i} always hold, we have w_{ij}\geq\gamma_i+\gamma_j\geq\alpha_{i\backslash j}+\alpha_{j\backslash i}.

(b)(c)(b)\Rightarrow(c): If 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\geq 0 then mij=(wijαi\j+αj\i)/2αj\im_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2\geq\alpha_{j\backslash i}. So mijm_{i\rightarrow j} is among the bjb_{j} best offers received by node jj, implying γjmij\gamma_{j}\leq m_{i\rightarrow j}. The argument for γimji\gamma_{i}\leq m_{j\rightarrow i} is similar.

(c)\Rightarrow(a): If \mathcal{S}\textup{urp}_{ij}\geq 0 then m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2 and m_{j\rightarrow i}=(w_{ij}-\alpha_{j\backslash i}+\alpha_{i\backslash j})/2, which gives \gamma_i+\gamma_j\leq m_{i\rightarrow j}+m_{j\rightarrow i}=w_{ij} and we are done. Otherwise, we have \gamma_i+\gamma_j\leq m_{i\rightarrow j}+m_{j\rightarrow i}=(w_{ij}-\alpha_{i\backslash j})_++(w_{ij}-\alpha_{j\backslash i})_+\leq\max\big[(w_{ij}-\alpha_{i\backslash j})_+,(w_{ij}-\alpha_{j\backslash i})_+,2w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}\big]\leq w_{ij}.

Finally, suppose \gamma_i\leq m_{j\rightarrow i} and \gamma_j>m_{i\rightarrow j}. First note that by the equivalence of (b) and (c) we must have w_{ij}<\alpha_{i\backslash j}+\alpha_{j\backslash i}. On the other hand, \alpha_{i\backslash j}\leq\gamma_i\leq m_{j\rightarrow i}\leq(w_{ij}-\alpha_{j\backslash i})_+. Now if w_{ij}-\alpha_{j\backslash i}>0 we get \alpha_{i\backslash j}\leq w_{ij}-\alpha_{j\backslash i}, which is a contradiction. Therefore \gamma_i\leq(w_{ij}-\alpha_{j\backslash i})_+=0, implying \gamma_i=0. ∎

Lemma 21.

The following are equivalent:
(a) mji>αi\jm_{j\rightarrow i}>\alpha_{i\backslash j},
(b) mij>αj\im_{i\rightarrow j}>\alpha_{j\backslash i},
(c) wijαi\jαj\i>0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}>0.
(d) i and j receive positive offers from each other, with m_{j\rightarrow i}>((b_i+1)^{\rm th}\!\operatorname{-max})_{k\in\partial i}m_{k\rightarrow i} and similarly for j.

These conditions imply that mji=γjim_{j\rightarrow i}=\gamma_{j\to i} and mij=γijm_{i\rightarrow j}=\gamma_{i\to j}.

Proof.

(a)(c)(b)(a)\Rightarrow(c)\Rightarrow(b): (a)(a) implies that (wijαj\i)+mji>αi\j(w_{ij}-\alpha_{j\backslash i})_{+}\geq m_{j\rightarrow i}>\alpha_{i\backslash j}, which yields (c). From this we can explicitly write mij=(wijαi\j+αj\i)/2m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2 which is strictly bigger than αj\i\alpha_{j\backslash i}. Hence we obtain (b)(b).

By symmetry (b)(c)(a)(b)\Rightarrow(c)\Rightarrow(a). Thus, we have shown that (a)(a), (b)(b) and (c)(c) are equivalent.

(c)(d)(c)\Rightarrow(d): (c)(c) implies that mij=(wijαi\j+αj\i)/2>αj\i=(bjthmax)kj\imkjm_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j}+\alpha_{j\backslash i})/2>\alpha_{j\backslash i}={(b_{j}^{\rm th}\!\operatorname{-max})}_{k\in\partial j\backslash i}m_{k\rightarrow j}. Using symmetry, it follows that (d)(d) holds.

(d)(a)(d)\Rightarrow(a) is easy to check.

This finishes the proof of equivalence. The implication follows from the definition of γij\gamma_{i\to j}. ∎

Recall that (ij)(ij) is a weak-dotted edge if wijαi\jαj\i=0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}=0, a strong-dotted edge if wijαi\jαj\i>0w_{ij}-\alpha_{i\backslash j}-\alpha_{j\backslash i}>0, and a non-dotted edge otherwise.

Lemma 22.

If γi>0\gamma_{i}>0, then ii has bib_{i} adjacent strong dotted edges, or at least bi+1b_{i}+1 adjacent dotted edges.

Proof.

Suppose \gamma_i>0. Then the b_i largest incoming offers to i are all strictly positive. Suppose one of these offers comes from j. Then \alpha_{i\backslash j}\leq m_{j\rightarrow i}\leq(w_{ij}-\alpha_{j\backslash i})_+. Since m_{j\rightarrow i}>0, this implies \alpha_{i\backslash j}\leq w_{ij}-\alpha_{j\backslash i}, i.e., (ij) is a dotted edge. If the inequality is strict for all such j, we have at least b_i strong dotted edges adjacent to i. If we have equality for some j, there is a tie for the b_i-th highest offer incoming to i, and we deduce that there are at least b_i+1 dotted edges adjacent to i. ∎

Lemma 23.

The following are equivalent:
(a) 𝒮urpij0\mathcal{S}\textup{urp}_{ij}\leq 0,
(b) mij=(wijαi\j)+m_{i\rightarrow j}=(w_{ij}-\alpha_{i\backslash j})_{+}.
Moreover, these conditions imply αi\j=γi\alpha_{i\backslash j}=\gamma_{i} and αj\i=γj\alpha_{j\backslash i}=\gamma_{j}.

Proof.

Equivalence of (a) and (b) follows from the definitions. (a) implies mijαj\im_{i\rightarrow j}\leq\alpha_{j\backslash i} which yields αj\i=γj\alpha_{j\backslash i}=\gamma_{j}. By symmetry, we can also deduce αi\j=γi\alpha_{i\backslash j}=\gamma_{i}. ∎

Note that (a) is symmetric in ii and jj, so (b) can be transformed by interchanging ii and jj.

Lemma 24.

A non-solid edge cannot be a dotted edge, weak or strong.

The proof of this lemma is very similar to that of Lemma 11: we consider optimal solutions of the primal and dual LPs in (16), and construct an alternating path consisting alternately of (i) non-solid dotted edges and (ii) strong-solid, non-strong-dotted edges. We omit the proof.

Lemma 25.

Every strong-solid edge is a strong-dotted edge.

Again, the proof is very similar to the proof of Lemma 12, and we omit it.

Lemma 26.

Consider the set of edges M{(ij):𝒮urpij>0}M\equiv\{(ij):\mathcal{S}\textup{urp}_{ij}>0\}. The balance property (15) is satisfied for all (ij)M(ij)\in M.

Proof.

Consider any (ij)M(ij)\in M. From Lemma 21, we know that

γji=mji=αi\j+𝒮urpij/2\gamma_{j\to i}=m_{j\rightarrow i}=\alpha_{i\backslash j}+\mathcal{S}\textup{urp}_{ij}/2

To prove balance, it then suffices to establish

(bithmax)ki\jmki=αi\j.\displaystyle{(b_{i}^{\rm th}\!\operatorname{-max})}_{k\in{\partial i}\backslash j}\;m_{k\rightarrow i}=\alpha_{i\backslash j}\,. (41)

Now node i can have at most b_i adjacent strong dotted edges, from Lemma 6 (d). One of these is (ij). Eq. (41) then follows from the property m_{i\rightarrow j}=(w_{ij}-\gamma_i)_+ on non-strong-dotted edges (from Lemma 23). ∎

Lemma 27.

An optimum solution for the dual LP in (16) can be constructed as yi=γiy_{i}=\gamma_{i} for all iVi\in V and:

yij={wijγiγjfor (ij)M,0for (ij)M.\displaystyle y_{ij}=\left\{\begin{array}[]{ll}w_{ij}-\gamma_{i}-\gamma_{j}&\mbox{for }(ij)\in M\,,\\ 0&\mbox{for }(ij)\notin M\,.\end{array}\right.
Proof.

We first show that this construction satisfies the dual constraints. yij0y_{ij}\geq 0 follows from Lemma 20 (a) and (b). We have

yi+yj+yij=wijfor (ij)M\displaystyle y_{i}+y_{j}+y_{ij}=w_{ij}\qquad\mbox{for $(ij)\in M$} (42)

by construction. For (ij)M(ij)\notin M, we have yi+yj=γi+γjwijy_{i}+y_{j}=\gamma_{i}+\gamma_{j}\geq w_{ij} from Lemma 23. This completes our proof of feasibility.

To show optimality, we establish that the weight of the matching M equals the dual objective value \sum_{i\in V}b_i y_i+\sum_{e\in E}y_{ij} at the chosen \underline{y}. Lemma 22 guarantees \gamma_i=0 for every i having fewer than b_i adjacent dotted edges. Using Lemmas 24 and 25, we know that M is a valid \mathbf{b}-matching consisting of strong dotted edges, and all other edges are non-dotted. We deduce

w(M)=\sum_{(ij)\in M}w_{ij}=\sum_{(ij)\in M}(y_i+y_j+y_{ij})=\sum_{i\in V}b_i y_i+\sum_{e\in E}y_{ij}\,,

using (42). This completes our proof of optimality. ∎

Appendix F The Kleinberg-Tardos construction and the KT gap

Figure 3: Examples of basic structures: path, blossom, bicycle, and cycle (matched edges in bold).

Let G be an instance which admits at least one stable outcome, let M^* be the corresponding matching (recall that this is a maximum weight matching), and consider the Kleinberg-Tardos (KT) procedure for finding a NB solution [27]. Any NB solution \underline{\gamma}^* can be constructed by this procedure with appropriate choices at successive stages. At each stage, a linear program is solved with a variable \gamma_i attached to each node i. The linear program maximizes the minimum ‘slack’ of all unmatched edges and nodes whose values have not yet been set (the slack of an edge (i,j)\notin M^* is \gamma_i+\gamma_j-w_{ij}).

At the first stage, the set of nodes that remain unmatched (i.e. are not part of MM^{*}) is found, if such nodes exist. Call the set of unmatched nodes 𝒞0{\cal C}_{0}.

After this, at successive stages of the KT procedure, a sequence of structures 𝒞1{\cal C}_{1}, 𝒞2,,𝒞k{\cal C}_{2},\ldots,{\cal C}_{k} characterizing the LP optimum are found. We call this the KT sequence. Each such structure is a pair 𝒞q=(V(𝒞q),E(𝒞q)){\cal C}_{q}=(V({\cal C}_{q}),E({\cal C}_{q})) with V(𝒞q)VV({\cal C}_{q})\subseteq V, E(𝒞q)EE({\cal C}_{q})\subseteq E. According to [27] 𝒞q{\cal C}_{q} belongs to one of four topologies: alternating path, blossom, bicycle, alternating cycle (Figure 3). The qq-th linear program determines the value of γi\gamma_{i}^{*} for iV(𝒞q)i\in V({\cal C}_{q}). Further, one has the partition E(𝒞q)=E1(𝒞q)E2(𝒞q)E({\cal C}_{q})=E_{1}({\cal C}_{q})\cup E_{2}({\cal C}_{q}) with E1(𝒞q)E_{1}({\cal C}_{q}) consisting of all matching edges along which nodes in V(𝒞q)V({\cal C}_{q}) trade, and E2(𝒞q)E_{2}({\cal C}_{q}) consists of edges (i,j)(i,j) such that some iV(𝒞q)i\in V({\cal C}_{q}) receives its second-best, positive offer from jj.

The \gamma values for nodes on the limiting structure are uniquely determined if the structure is an alternating path, blossom or bicycle. (In [27] it is claimed that the \gamma values ‘may not be fully determined’ also in the case of bicycles; however, it is not hard to prove that \gamma values are, in fact, uniquely determined in bicycles.) In the case of an alternating cycle there is one degree of freedom: setting a value \gamma_i^* for one node i\in{\cal C}_q fully determines the values at the other nodes.

We emphasize that, within the present definition, {\cal C}_q is not necessarily a subgraph of G, in that it might contain an edge (i,j) but not both of its endpoints. On the other hand, V({\cal C}_q) is always a subset of the set of endpoints of E({\cal C}_q). We denote by V_{\textup{ext}}({\cal C}_q)\supseteq V({\cal C}_q) the set of nodes formed by all the endpoints of edges in E({\cal C}_q).

For all nodes iV(𝒞q)i\in V({\cal C}_{q}) the second best offer is equal to γiσq\gamma_{i}^{*}-\sigma_{q}, where σq\sigma_{q} is the slack of the qq-th structure. Therefore

γi+γjwij={0if (i,j)E1(𝒞q),σqif (i,j)E2(𝒞q).\displaystyle\gamma_{i}^{*}+\gamma_{j}^{*}-w_{ij}=\left\{\begin{array}[]{ll}0&\mbox{if $(i,j)\in E_{1}({\cal C}_{q})$,}\\ \sigma_{q}&\mbox{if $(i,j)\in E_{2}({\cal C}_{q})$.}\\ \end{array}\right.

The slacks form a non-decreasing sequence (\sigma_1\leq\sigma_2\leq\ldots\leq\sigma_k).

Definition 23.

We say that a unique Nash bargaining solution \underline{\gamma}^* has a KT gap \sigma if

σmin{σ1;σ2σ1;;σkσk1},\displaystyle\sigma\leq\min\big{\{}\sigma_{1};\,\sigma_{2}-\sigma_{1};\,\dots;\,\sigma_{k}-\sigma_{k-1}\big{\}}\,,

and if for each edge (i,j)(i,j) such that i,jVext(𝒞q)i,j\in V_{\textup{ext}}({\cal C}_{q}) and (i,j)E(𝒞q)(i,j)\not\in E({\cal C}_{q}),

γi+γjwijσq+σ.\displaystyle\gamma_{i}^{*}+\gamma_{j}^{*}-w_{ij}\geq\sigma_{q}+\sigma\,.
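In other words, the KT gap is the largest \sigma compatible with the slack sequence and with the slacks of the ‘cross’ edges at each stage. A minimal sketch of this computation (Python, with our own encodings: sigmas is the list \sigma_1,\ldots,\sigma_k, and cross_slack_by_stage[q-1] collects the values \gamma_i^*+\gamma_j^*-w_{ij} over the edges of stage q appearing in the second condition) is:

def kt_gap(sigmas, cross_slack_by_stage):
    # candidate upper bounds on sigma from the first condition of the definition
    gaps = [sigmas[0]] + [b - a for a, b in zip(sigmas, sigmas[1:])]
    # ...and from the cross edges of each stage (their slack must exceed sigma_q by sigma)
    for q, cross in enumerate(cross_slack_by_stage):
        gaps += [s - sigmas[q] for s in cross]
    return min(gaps)        # a non-positive value means no positive KT gap exists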

It is possible to prove that the positive gap condition is generic in the following sense. The set of all instances such that the NB solution is unique can be regarded as a subset 𝖦[0,W]|E|{\sf G}\subseteq[0,W]^{|E|} (WW being the maximum edge weight). It turns out that 𝖦{\sf G} has dimension |E||E| (i.e. the class of instances having unique NB solution is large) and that the subset of instances with gap σ>0\sigma>0 is both open and dense in 𝖦{\sf G}.

Appendix G Appendix to Section 4

G.1 UD example showing exponentially slow convergence

We construct a sequence of instances I_n along with particular initializations such that convergence to an approximate fixed point is exponentially slow, cf. Section 4.1. Let us first consider n=8N, where N\geq 2; the construction can be easily extended to arbitrary n\geq 16. The graph G_n=(V_n,E_n) we consider is a simple ‘ring’. More precisely, V_n=\{1,2,\ldots,n\} and E_n=\{(1,2),(2,3),\ldots,(n-1,n),(n,1)\}. All edges have the same weight W, with the exception w_{4N,4N+1}=W-1. (We will choose W later.) Thus, G_n has a unique maximum weight matching M^*=\{(1,2),(3,4),\ldots,(n-1,n)\}. Moreover, M^* solves LP (2), implying that G_n possesses a UD solution (using Lemma 1), and the LP gap is g=1 provided W\geq 1. Given any r\in(0,1/2), we define the split fractions as follows (it turns out that the values of the split fractions on approximately half the edges are irrelevant for our choice of \underline{\alpha}). We first define the split fractions on the edges E_{n,{\rm left}}=\{(1,2),(2,3),\ldots,(4N-1,4N)\}. We set

r3,2\displaystyle r_{3,2} =r5,4==r2N1,2N2=r\displaystyle=r_{5,4}=\ldots=r_{2N-1,2N-2}=r
r2N+1,2N+2\displaystyle r_{2N+1,2N+2} =r2N+3,2N+4==r4N3,4N2=r\displaystyle=r_{2N+3,2N+4}=\ldots=r_{4N-3,4N-2}=r

and the split fractions on all other edges in En,leftE_{n,{\rm left}} to 1/21/2. As before, ri,i+1=1ri+1,ir_{i,i+1}=1-r_{i+1,i} is implicit.

For edges on the ‘right’ side we define split fractions in a symmetrical way. Define ‘reflection’ :{4N+1,4N+2,,8N}{1,2,,4N}{\cal R}:\{4N+1,4N+2,\ldots,8N\}\rightarrow\{1,2,\ldots,4N\} as

\displaystyle{\cal R}(l)=n-l+1 (44)

We set r_{i,i+1}=r_{{\cal R}(i),{\cal R}(i+1)} for all i\in\{4N+1,4N+2,\ldots,8N-1\}.

Finally, we set r_{n,1}=r_{4N,4N+1}=1/2.
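The construction above is purely index-based, so it may help to see it tabulated. The following is a small Python sketch (the helper name split_fractions and the dictionary representation are ours, not from the paper) that records r_{ij} for both orientations of every ring edge, following the definitions just given.

def split_fractions(N, r):
    # Split fractions of the ring instance I_n with n = 8N; frac[(i, j)] stores r_{ij},
    # and both orientations of each edge are kept (r_{i,i+1} = 1 - r_{i+1,i}).
    assert N >= 2 and 0 < r < 0.5
    n = 8 * N
    frac = {}
    def set_edge(i, j, r_ij):
        frac[(i, j)] = r_ij
        frac[(j, i)] = 1.0 - r_ij
    # Left side: edges (1,2), ..., (4N-1,4N); the default split is 1/2.
    for i in range(1, 4 * N):
        set_edge(i, i + 1, 0.5)
    for i in range(3, 2 * N, 2):              # r_{3,2} = r_{5,4} = ... = r_{2N-1,2N-2} = r
        set_edge(i, i - 1, r)
    for i in range(2 * N + 1, 4 * N - 2, 2):  # r_{2N+1,2N+2} = ... = r_{4N-3,4N-2} = r
        set_edge(i, i + 1, r)
    # Right side: copy the left side through the reflection R(l) = n - l + 1, Eq. (44).
    R = lambda l: n - l + 1
    for i in range(4 * N + 1, 8 * N):
        set_edge(i, i + 1, frac[(R(i), R(i + 1))])
    # The two edges joining the halves.
    set_edge(n, 1, 0.5)
    set_edge(4 * N, 4 * N + 1, 0.5)
    return frac                                # e.g. frac = split_fractions(2, 0.25)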

Now we show how to construct \underline{\alpha} satisfying properties (a) and (b) above. Let \beta\equiv r/(1-r)<1. Set \alpha_{n\backslash 1}=\alpha_{1\backslash n}=W/2-1.

For 0\leq i\leq N-1, define (the summations can be written in closed form if desired, but it turns out to be easier to work with the summations directly):

\displaystyle\alpha_{2i+1\backslash 2i+2}=W/2-\sum_{j=0}^{i-1}\beta^{j}\,,\qquad\alpha_{2i+2\backslash 2i+1}=W/2+\sum_{j=0}^{i}\beta^{j}\,.

For 1\leq i\leq N-1, define

\displaystyle\alpha_{2i\backslash 2i+1}=W/2+\sum_{j=0}^{i-2}\beta^{j}\,,\qquad\alpha_{2i+1\backslash 2i}=W/2-\sum_{j=0}^{i}\beta^{j}\,.

Let \alpha_{2N\backslash 2N+1}=W/2+\sum_{j=0}^{N-1}\beta^{j} and \alpha_{2N+1\backslash 2N}=W/2-\sum_{j=0}^{N-1}\beta^{j}. For 0\leq i\leq N-2, define

\displaystyle\alpha_{2N+2i+1\backslash 2N+2i+2}=W/2-\sum_{j=0}^{N-i-1}\beta^{j}\,,\qquad\alpha_{2N+2i+2\backslash 2N+2i+1}=W/2+\sum_{j=0}^{N-i-3}\beta^{j}\,.

For 1\leq i\leq N-1, define

\displaystyle\alpha_{2N+2i\backslash 2N+2i+1}=W/2+\sum_{j=0}^{N-i-1}\beta^{j}\,,\qquad\alpha_{2N+2i+1\backslash 2N+2i}=W/2-\sum_{j=0}^{N-i-2}\beta^{j}\,.

Next, let \alpha_{4N-1\backslash 4N}=\alpha_{4N\backslash 4N-1}=W/2-1 and \alpha_{4N\backslash 4N+1}=\alpha_{4N+1\backslash 4N}=W/2. Again, the messages on the ‘right’ side are defined in terms of the reflection Eq. (44). We define \alpha_{i\backslash i+1}=\alpha_{{\cal R}(i)\backslash{\cal R}(i+1)} and \alpha_{i+1\backslash i}=\alpha_{{\cal R}(i+1)\backslash{\cal R}(i)}, for all i\in\{4N+1,4N+2,\ldots,8N-1\}.

The corresponding offers \underline{m}(\underline{\alpha}) can be easily computed. We have m_{1\rightarrow n}=m_{n\rightarrow 1}=W/2. For 0\leq i\leq N-1, we have

\displaystyle m_{2i+1\rightarrow 2i+2}=W/2+\sum_{j=0}^{i-1}\beta^{j}\,,\qquad m_{2i+2\rightarrow 2i+1}=W/2+\sum_{j=0}^{i}\beta^{j}\,.

For 1\leq i\leq N-1, we have

\displaystyle m_{2i\rightarrow 2i+1}=W/2-\sum_{j=0}^{i-1}\beta^{j}\,,\qquad m_{2i+1\rightarrow 2i}=W/2+\sum_{j=0}^{i-1}\beta^{j}\,.

We have m_{2N\rightarrow 2N+1}=W/2-\sum_{j=0}^{N-1}\beta^{j} and m_{2N+1\rightarrow 2N}=W/2+\sum_{j=0}^{N-1}\beta^{j}. For 0\leq i\leq N-2, we have

\displaystyle m_{2N+2i+1\rightarrow 2N+2i+2}=W/2+\sum_{j=0}^{N-i-2}\beta^{j}\,,\qquad m_{2N+2i+2\rightarrow 2N+2i+1}=W/2-\sum_{j=0}^{N-i-2}\beta^{j}\,.

For 1\leq i\leq N-1, we have

\displaystyle m_{2N+2i\rightarrow 2N+2i+1}=W/2-\sum_{j=0}^{N-i-1}\beta^{j}\,,\qquad m_{2N+2i+1\rightarrow 2N+2i}=W/2+\sum_{j=0}^{N-i-2}\beta^{j}\,.

Finally, we have m_{4N-1\rightarrow 4N}=m_{4N\rightarrow 4N-1}=W/2 and m_{4N\rightarrow 4N+1}=m_{4N+1\rightarrow 4N}=W/2-1. Offers on the ‘right’ side are clearly the same as the corresponding offers on the left.

Importantly, note that it suffices to have W\geq 2+2/(1-\beta) to ensure that all messages and offers are bounded below by 1 (in particular, none of them is negative). Choose W=2+2/(1-\beta) (for example).

Next, note that all the fixed point conditions are satisfied, except the update rules for \alpha_{2N\backslash 2N+1}, \alpha_{2N+1\backslash 2N} and the corresponding messages in the reflection. However, it is easy to check that these update rules are each violated by only \beta^{N-1}. Thus, \underline{\alpha} is an {\epsilon}_{n}-FP for {\epsilon}_{n}=\beta^{N-1}\leq 2^{-cn} for appropriate c=c(r)>0.
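To give a sense of scale (a quick computation of ours, using the choices above): for fixed r the violation {\epsilon}_{n}=\beta^{N-1} shrinks exponentially in n=8N.

# Illustrative only: exponential decay of eps_n for the construction above
# (r = 0.25 is an arbitrary choice in (0, 1/2)).
r = 0.25
beta = r / (1 - r)            # beta < 1
W = 2 + 2 / (1 - beta)        # keeps all messages and offers bounded below by 1
for N in (2, 5, 10, 20):
    n = 8 * N
    eps_n = beta ** (N - 1)
    print(f"n = {n:3d}, W = {W:.2f}, eps_n = {eps_n:.3e}")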

G.2 Proof of Theorem 7

We show how to obtain an FPTAS using the two steps stated in Section 4.2.

Step 1 can be carried out by finding a maximum weight matching M^{*} (see, e.g., [19]) and also solving the dual linear program (3). For the dual LP, let v be the optimum value and let \underline{\gamma} be an optimum solution. We now use Proposition 1. If the weight of M^{*} is smaller than v, we return unstable, since we know that no stable outcome exists, hence no UD solution (or {\epsilon}-UD solution) exists. Else, (\underline{\gamma},M^{*}) is a stable outcome, with all unmatched nodes having earnings of 0. This completes step 1! The computational effort involved is polynomial in the input size.
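As a rough sketch only (not the authors' code), step 1 could be carried out with off-the-shelf tools. We assume here that the dual LP (3) takes the standard form, minimizing \sum_{i}\gamma_{i} subject to \gamma_{i}+\gamma_{j}\geq w_{ij} for every edge and \gamma_{i}\geq 0; the function name step_one and the data layout are ours.

import networkx as nx
import numpy as np
from scipy.optimize import linprog

def step_one(nodes, edges, w, tol=1e-9):
    # nodes: list of node labels; edges: list of pairs (i, j); w[(i, j)]: edge weight.
    G = nx.Graph()
    G.add_nodes_from(nodes)
    G.add_weighted_edges_from((i, j, w[(i, j)]) for (i, j) in edges)
    M = nx.max_weight_matching(G)                      # maximum weight matching M*
    match_value = sum(w[e] if e in w else w[(e[1], e[0])] for e in M)
    # Dual LP:  minimize 1'gamma  s.t.  -gamma_i - gamma_j <= -w_ij,  gamma >= 0.
    idx = {u: a for a, u in enumerate(nodes)}
    A_ub = np.zeros((len(edges), len(nodes)))
    b_ub = np.array([-w[e] for e in edges])
    for row, (i, j) in enumerate(edges):
        A_ub[row, idx[i]] = A_ub[row, idx[j]] = -1.0
    res = linprog(c=np.ones(len(nodes)), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * len(nodes))
    gamma, v = dict(zip(nodes, res.x)), res.fun
    if match_value < v - tol:
        return "unstable"        # no stable outcome, hence no UD or eps-UD solution
    return gamma, M              # (gamma, M*) is a stable outcome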

In step 2, we fix the matching M^{*}, and rebalance the matched edges iteratively. It turns out to be crucial that our iterative updates preserve stability. An example similar to the one in Section G.1 demonstrates that the rebalancing procedure can take an exponentially large time to reach an approximate UD solution if stability is not preserved [23].

We now motivate the rebalancing procedure briefly, before we give a detailed description and state results. Imagine an edge (i,j)\in M^{*}. Since we start with a stable outcome, the edge weight w_{ij} is at least the sum of the best alternatives, i.e. \mathcal{S}\textup{urp}_{ij}\geq 0. Suppose we change the division of w_{ij} into \gamma_{i}^{\prime}, \gamma_{j}^{\prime} so that the surplus \mathcal{S}\textup{urp}_{ij} is divided as per the prescribed split fraction r_{ij}. Earnings of all other nodes are left unchanged. Since r_{ij}\in(0,1), \gamma_{i}^{\prime} is at least as large as the best alternative of i, as was the case for \gamma_{i}. This leads to \gamma_{i}^{\prime}+\gamma_{k}\geq w_{ik} for all k\in\partial i\backslash j. A similar argument holds for node j. In short, stability is preserved!
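For a purely illustrative choice of numbers (hypothetical, not taken from the text): suppose w_{ij}=1, r_{ij}=r_{ji}=1/2, and the best alternatives of i and j are 0.2 and 0.3 respectively. Then \mathcal{S}\textup{urp}_{ij}=1-0.2-0.3=0.5, and the rebalanced earnings are \gamma_{i}^{\prime}=0.2+0.25=0.45 and \gamma_{j}^{\prime}=0.3+0.25=0.55. Both exceed the corresponding best alternatives and sum to w_{ij}, so every constraint \gamma_{i}^{\prime}+\gamma_{k}\geq w_{ik}, k\in\partial i\backslash j, continues to hold.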

It turns out that the analysis of convergence is simpler if we use synchronous updates, as opposed to the asynchronous updates described above. Moreover, we find that simply choosing an appropriate ‘damping factor’ allows us to ensure that stability is preserved even with synchronous updates. We use a powerful technique introduced in our recent work [27] to prove convergence.

Table 1 shows the algorithm Edge Rebalancing we use to complete step 2. Note that each iteration of the loop requires O(|E|) simple operations.

Table 1: Local algorithm that converts a stable outcome to an {\epsilon}-UD solution
Edge Rebalancing( Instance I, Stable outcome (\underline{\gamma},M), Damping factor \kappa, Error target {\epsilon} )
1: Check \kappa\in(0,1/2], {\epsilon}>0, (\underline{\gamma},M) is a stable outcome
2: If (Check fails)  Return error
3: \underline{\gamma}^{0}\leftarrow\underline{\gamma}
4: t\leftarrow 0
5: Do
6:      ForEach (i,j)\in M
7:       \gamma^{{\textup{\tiny reb}}}_{i}\leftarrow\max_{k\in\partial i\backslash j}(w_{ik}-\gamma_{k}^{t})_{+}+r_{ij}\mathcal{S}\textup{urp}_{ij}(\underline{\gamma}^{t})
8:       \gamma^{{\textup{\tiny reb}}}_{j}\leftarrow\max_{l\in\partial j\backslash i}(w_{jl}-\gamma_{l}^{t})_{+}+r_{ji}\mathcal{S}\textup{urp}_{ij}(\underline{\gamma}^{t})
9:      End ForEach
10:      ForEach i\in V that is unmatched under M
11:       \gamma^{{\textup{\tiny reb}}}_{i}\leftarrow 0
12:      End ForEach
13:      If (\lVert\underline{\gamma}^{{\textup{\tiny reb}}}-\underline{\gamma}^{t}\rVert_{\infty}\leq{\epsilon})  Break Do
14:      \underline{\gamma}^{t+1}\leftarrow\kappa\,\underline{\gamma}^{{\textup{\tiny reb}}}+(1-\kappa)\,\underline{\gamma}^{t}
15:      t\leftarrow t+1
16: End Do
17: Return (\underline{\gamma}^{t},M)
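For concreteness, here is a minimal Python sketch of the pseudocode in Table 1 (ours, not the authors' implementation). It assumes \mathcal{S}\textup{urp}_{ij}(\underline{\gamma}) equals w_{ij} minus the two best alternatives, in line with the discussion above; the names edge_rebalancing and best_alt are ours.

def best_alt(i, j, gamma, adj, w):
    # max_{k in di \ j} (w_ik - gamma_k)_+ ; taken as 0 if i has no neighbour other than j
    return max([0.0] + [w[(i, k)] - gamma[k] for k in adj[i] if k != j])

def edge_rebalancing(adj, w, r, M, gamma, kappa=0.5, eps=1e-3):
    # adj[i]: neighbours of i; w[(i,j)] = w[(j,i)]: weights; r[(i,j)] = 1 - r[(j,i)]:
    # split fractions; M: list of matched pairs; gamma: stable allocation from step 1.
    assert 0 < kappa <= 0.5 and eps > 0
    matched = {i for e in M for i in e}
    gamma = dict(gamma)
    while True:
        reb = {}
        for (i, j) in M:                                       # Lines 6-9
            bi, bj = best_alt(i, j, gamma, adj, w), best_alt(j, i, gamma, adj, w)
            surp = w[(i, j)] - bi - bj                         # Surp_ij(gamma^t) >= 0 by stability
            reb[i], reb[j] = bi + r[(i, j)] * surp, bj + r[(j, i)] * surp
        for i in adj:                                          # Lines 10-12
            if i not in matched:
                reb[i] = 0.0
        if max(abs(reb[i] - gamma[i]) for i in gamma) <= eps:  # Line 13
            return gamma, M                                    # eps-correct division reached
        gamma = {i: kappa * reb[i] + (1 - kappa) * gamma[i] for i in gamma}  # Line 14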

Correctness of Edge Rebalancing:
It is easy to check that (\underline{\gamma}^{{\textup{\tiny reb}}},M) and (\underline{\gamma}^{t},M) are valid outcomes for all t. We show that \underline{\gamma}^{t} computed by Edge Rebalancing is, in fact, a stable allocation:

Lemma 28.

If Edge Rebalancing is given a valid input satisfying the ‘Check’ on line 1, then (\gamma^{t},M) is a stable outcome for all t\geq 0.

Proof.

We prove this lemma by induction on time t. Clearly (\gamma^{0},M) is a stable outcome, since the input is valid. Suppose (\gamma^{t},M) is a stable outcome.

Consider any (i,j)\in M. It is easy to verify that \gamma_{i}^{\textup{\tiny reb}}+\gamma_{j}^{\textup{\tiny reb}}=w_{ij} for \underline{\gamma}^{{\textup{\tiny reb}}} computed from \underline{\gamma}^{t} in Lines 6-9 of Edge Rebalancing: summing Lines 7 and 8 gives the two best alternatives plus (r_{ij}+r_{ji})\mathcal{S}\textup{urp}_{ij}(\underline{\gamma}^{t}), which equals w_{ij} since r_{ij}+r_{ji}=1. Also, we know that \gamma_{i}^{t}+\gamma_{j}^{t}=w_{ij}. It follows that \gamma_{i}^{t+1}+\gamma_{j}^{t+1}=w_{ij} as needed. For i\in V unmatched under M, \gamma_{i}^{t}=0 by hypothesis and, as per Lines 10-12, \gamma_{i}^{\textup{\tiny reb}}=0\,\Rightarrow\,\gamma_{i}^{t+1}=0 as needed.

Consider any (i,k)\in E\backslash M. We know that \gamma_{i}^{t}+\gamma_{k}^{t}\geq w_{ik}. We want to show the corresponding inequality at time t+1. Define \sigma_{ik}^{t}\equiv\gamma_{i}^{t}+\gamma_{k}^{t}-w_{ik}\geq 0.

Claim 24.

\gamma_{i}^{{\textup{\tiny reb}}}\geq\gamma_{i}^{t}-\sigma_{ik}^{t}.

If we prove the claim, it follows that a similar inequality holds for \gamma_{k}^{{\textup{\tiny reb}}}, and hence \gamma_{i}^{{\textup{\tiny reb}}}+\gamma_{k}^{{\textup{\tiny reb}}}\geq\gamma_{i}^{t}+\gamma_{k}^{t}-2\sigma_{ik}^{t}=w_{ik}-\sigma_{ik}^{t}. It then follows from the definition in Line 14 that \gamma_{i}^{t+1}+\gamma_{k}^{t+1}\geq w_{ik}, for any \kappa\in(0,1/2]. This will complete our proof that (\underline{\gamma}^{t+1},M) is a stable outcome.

Let us now prove the claim. Suppose i is matched under M. Using the definition in Line 7 (Line 8 contains a symmetrical definition), \gamma_{i}^{\textup{\tiny reb}}\geq\max_{k^{\prime}\in\partial i\backslash j}(w_{ik^{\prime}}-\gamma_{k^{\prime}}^{t})_{+} since \mathcal{S}\textup{urp}_{ij}(\underline{\gamma}^{t})\geq 0. Hence,

\displaystyle\gamma_{i}^{\textup{\tiny reb}}\geq(w_{ik}-\gamma_{k}^{t})_{+}\geq(w_{ik}-\gamma_{k}^{t})=\gamma_{i}^{t}-\sigma_{ik}^{t}\,,

as needed. If i is not matched under M, then \gamma_{i}^{t}=\gamma_{i}^{\textup{\tiny reb}}=0, so the claim follows from \sigma_{ik}^{t}\geq 0. ∎

Convergence of Edge Rebalancing:
Note that the termination condition \lVert\underline{\gamma}^{{\textup{\tiny reb}}}-\underline{\gamma}^{t}\rVert_{\infty}\leq{\epsilon} on Line 13 is equivalent to {\epsilon}-correct division. We show that the rebalancing algorithm terminates fast:

Lemma 29.

For any instance with weights bounded by 1, i.e. (w_{e},e\in E)\in(0,1]^{|E|}, if Edge Rebalancing is given a valid input, it terminates in T iterations, where

\displaystyle T\leq\left\lceil\frac{1}{\pi\kappa(1-\kappa){\epsilon}^{2}}\right\rceil\,, (45)

and returns an outcome satisfying {\epsilon}-correct division (cf. Definition 10). Here \pi=3.14159\ldots

Proof (sketch only).

This result is proved using the same technique as in the proof of Theorem 3. The iterative updates of Edge Rebalancing can be written as

\displaystyle\underline{\gamma}^{t+1}=\kappa{\sf{T}}\underline{\gamma}^{t}+(1-\kappa)\underline{\gamma}^{t}\,, (46)

where {\sf{T}} is a self-mapping of the (convex) set of allocations corresponding to the matching M, {\cal A}_{M}\subseteq[0,1]^{|E|}. The ‘edge balancing’ operator {\sf{T}} essentially corresponds to Lines 6-9 in Table 1. It is fairly straightforward to show that {\sf{T}} is nonexpansive with respect to the sup norm. The main theorem in [3] then tells us that

\displaystyle\lVert{\sf{T}}\underline{\gamma}^{t}-\underline{\gamma}^{t}\rVert_{\infty}\leq\frac{1}{\sqrt{\pi\kappa(1-\kappa)t}}\,. (47)

The result follows.

For the full proof, see [23]. ∎
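To give purely illustrative numbers for the bound (45): with \kappa=1/2 and {\epsilon}=10^{-2}, we have \kappa(1-\kappa)=1/4, so T\leq\lceil 4/(\pi\cdot 10^{-4})\rceil=\lceil 12732.4\ldots\rceil=12733 iterations, independently of the size of the graph.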

Using Lemmas 28 and 29, we immediately obtain our main result, Theorem 7.

Proof of Theorem 7.

We showed that step 1 can be completed in polynomial time. If the instance has no UD solutions then the algorithm returns unstable. Else we obtain a stable outcome and proceed to step 2.

Step 2 is performed using Edge Rebalancing. The input is the instance, the stable outcome obtained from step 1, \kappa=1/2 (for example) and the target error value {\epsilon}>0. Lemmas 28 and 29 show that Edge Rebalancing terminates after at most \lceil 1/(\pi\kappa(1-\kappa){\epsilon}^{2})\rceil iterations, returning an outcome that is stable and satisfies {\epsilon}-correct division, i.e. an {\epsilon}-UD solution. Moreover, each iteration requires O(|E|) simple operations. Hence, step 2 is completed in O(|E|/{\epsilon}^{2}) simple operations.

The total number of operations required by the entire algorithm is thus polynomial in the input size and in 1/{\epsilon}. ∎