
Assign and Appraise: Achieving Optimal Performance in Collaborative Teams

Elizabeth Y. Huang, Dario Paccagnan, Wenjun Mei, and Francesco Bullo. Submitted on . This work was supported by the U.S. Army Research Laboratory, the U.S. Army Research Office under grant number W911NF-15-1-0577, and the Swiss National Science Foundation under grant number P2EZP2-181618. E.Y.H., D.P., and F.B. are with the Center for Control, Dynamical Systems and Computation, UC Santa Barbara, Santa Barbara, CA 93106-5070 USA (e-mail: {eyhuang, dariop, bullo}@ucsb.edu). W.M. is with the Automatic Control Laboratory, ETH, 8092 Zurich, Switzerland (e-mail: meiwenjunbd@gmail.com).
Abstract

Tackling complex team problems requires understanding each team member’s skills in order to devise a task assignment maximizing the team performance. This paper proposes a novel quantitative model describing the decentralized process by which individuals in a team learn who has what abilities, while concurrently assigning tasks to each of the team members. In the model, the appraisal network represents the team members’ evaluations of one another, and each team member chooses their own workload. The appraisals and workload assignment change simultaneously: each member builds their own local appraisal of neighboring members based on the performance exhibited on previous tasks, while the workload is redistributed based on the current appraisal estimates. We show that the appraisal states can be reduced to a lower dimension due to the presence of conserved quantities associated with the cycles of the appraisal network. Building on this, we provide rigorous results characterizing the ability, or inability, of the team to learn each other’s skills and thus converge to an allocation maximizing the team performance. We complement our analysis with extensive numerical experiments.

Index Terms:
Appraisal networks, transactive memory systems, coevolutionary networks, evolutionary games.

I Introduction

Research, technology, and innovation are increasingly reliant on teams of individuals with various specializations and interdisciplinary skill sets. In its simplest form, allocating routine tasks to a group of individuals is a resource allocation problem. However, tackling complex problems such as scientific research [15], software development [20], or problem solving [13] requires consideration of the team structure, cognitive effects, and interdependencies between team members [9]. In these complex scenarios, it is fundamental to discover which skills each member is endowed with, so as to devise a task assignment that maximizes the resulting collective team performance.

I-A Problem description

In this paper, we focus on a quantitative model describing the process by which individuals in a team evaluate one another while concurrently assigning work to each of the team members, in order to maximize the collective team performance (see Figure 1). More specifically, we assume each team member is endowed with a skill level (a-priori unknown), and that the team needs to divide a complex task among its members. We let each team member build their own local appraisal of neighboring team members based on the performance exhibited on previous tasks. Upcoming tasks are then distributed according to the current appraisal estimates. Finally, the performance of each member is newly observed by neighboring members, who, in turn, update their appraisals. Any model satisfying these assumptions is composed of two building blocks: i) an appraisal component modeling how team members update their appraisals (left block in Figure 1), and ii) a work assignment component describing how the task is divided within the team (right block in Figure 1).

Figure 1: Architectural overview of the assign and appraise model studied in this manuscript. Given a complex task to complete, team members get assigned and execute an initial workload (right and bottom blocks). Each team member revises their appraisal of neighboring members based on each neighbor’s individual performance (left), which in turn is used to reassign the workload. The objective is for the team to learn who has what skill, so as to assign tasks in a way that maximizes the collective team performance.

We model the appraisal process i) through the lens of transactive memory systems, a conceptual model introduced by Wegner [23], which assumes that a team is capable of developing collective knowledge regarding who has what information and capabilities. Our choice of dynamics describing the evolution of the interpersonal appraisals is inspired by replicator dynamics, whereby each team member $i$ updates their appraisal of a neighboring member $j$ proportionally to the difference between member $j$’s performance and the (appraisal-weighted) average performance of the team.

We model the work assignment process ii) as a compartmental system [14], and utilize two natural dynamics to describe how the task is divided based on the current appraisals. These dynamics correspond to utilizing different centrality measures to subdivide a complex task. It is crucial to observe that the coupling between the appraisal revision and the work assignment process results in a coevolutionary network problem.

This paper follows a trend initiated recently, whereby many traditionally qualitative fields such as social psychology and organizational sciences are developing quantitative models. In this regard, our aim is to quantify the development of transactive memory within a team and study what conditions cause a team to fail or succeed at allocating a task optimally among members. To do so, we leverage control theoretical tools as well as ideas from evolutionary game theory, and notions from graph theory.

I-B Contributions

Our main contributions are as follows.

  1. (i)

    We formulate a quantitative model to capture the coevolution of the workload division and the appraisal network, where the optimal workload assignment maximizing the collective team performance is an equilibrium of the model. While we let the appraisal network evolve according to replicator-like dynamics, we consider two different mechanisms for workload division and show well-posedness of the model.

  2. (ii)

    Regardless of the mechanism used for workload division, we derive conserved quantities associated with the cycles of the appraisal network. Leveraging this result, for a team of $n$ individuals, we significantly reduce the dimension of the system from $n^{2}+n$ to a $2n$-dimensional submanifold.

  3. (iii)

    We provide rigorous positive and negative results characterizing the asymptotic behavior under either workload division mechanism. When adopting the first workload division mechanism, we show that, under a mild assumption, strongly connected teams are always able to learn each member’s correct skill level and thus determine the optimal workload division. In the second model variation, strong connectivity is insufficient to guarantee that the team learns the optimal workload, but more specific assumptions allow the team to converge to the optimal workload.

  4. (iv)

    Finally, we enrich our analysis by means of numerical experiments that provide further insight into the limiting behavior.

I-C Related works

Quantitative models of transactive memory systems

Wegner’s transactive memory systems (TMS) model [23] describes how cognitive states affect the collective performance of a team performing complex tasks. This widely established model captures both learning on the individual and collective level, as well as the evolution of the interaction between individuals within a team.

There are very few quantitative models attempting to describe TMS, and most of these models rely on numerical analysis to study the evolution of team knowledge [10], or what events are disruptive to learning and productivity in groups [1]. However, numerical analysis alone has natural limitations, whereas a mathematical perspective on TMS can establish the emergence of learning behaviors for entire classes of models. Moreover, while our proposed model is agent-based with collective knowledge represented as a weighted digraph, [10, 1] are not agent-based models and use a scalar value to encode the team’s collective knowledge.

The collective learning model introduced by Mei et al. [16] was the first to quantify TMS with appraisal networks and provide convergence analysis. In particular, for the assign/appraise model in [16], the appraisal update protocol is akin to one originally introduced in [7] and assumes each team member only updates their own appraisal based on performance comparisons. Additionally, the workload assignment is a centralized process determined by the eigenvector centrality of the network [3]. Our model significantly differs from [16] in that team members update their own and neighboring team members’ appraisals. Additionally, the workload assignment is a distributed and dynamic process.

Distributed optimization

Our model has direct ties with the field of distributed optimization. Under suitable conditions discussed later, in fact, the team will be able to learn each other’s skill levels, and thus agree on a work assignment maximizing the collective performance in a distributed fashion. Additionally, any change in the problem dimension, due to the addition or removal of agents, only requires local adaptations. In light of this observation, one could reinterpret the assign and appraise model studied here as a distributed optimization algorithm, where the objective is that of maximizing the team performance through local communication. In comparison to our work, existing distributed optimization algorithms often require more complex dynamics. For example, [17] requires that the optimal solution estimates are projected back into the constraint set, while Newton-like methods [24] require higher-order information.

Perhaps closest to this perspective on our problem is the work of Barreiro-Gomez et al. [2], where evolutionary game theory is used to design distributed optimization algorithms. Nevertheless, we observe that the objective we pursue here is that of quantifying if and to what extent team members learn how to share a task optimally. In this respect, the dynamics we consider do not arise as the result of a design choice (as is the case in [2]); rather, they define the problem itself.

Adaptive coevolutionary networks

Our model is an example of appraisal network coevolving with a resource allocation process. Research regarding adaptive networks has gained traction in recent decades, appearing in biological systems and game theoretical applications [11]. Wang et al. [22], for example, review coupled disease-behavior dynamics, while Ogura et al. [19] propose an epidemic model where awareness causes individuals to distance themselves from infected neighbors. Finally, we note that coevolutionary game theory considers dynamics on the population strategies and dynamics of the environment, where the payoff matrix evolves with the environment state [26, 8].

I-D Paper organization

Section II contains the problem framework, the model definition, the model’s well-posedness, and the equilibrium corresponding to the optimal workload. Section III contains the properties of the appraisal dynamics and the reduced-order dynamics. Sections IV and V present the convergence results for the model under each workload division mechanism. Section VI contains numerical studies illustrating the various cases of asymptotic behavior.

I-E Notation

Let $\mathbbold{1}_{n}$ ($\mathbbold{0}_{n}$ resp.) denote the $n$-dimensional column vector with all ones (zeros resp.). Let $I_{n}$ represent the $n\times n$ identity matrix. For a matrix or vector $B\in\mathbb{R}^{n\times m}$, let $B\geq 0$ and $B>0$ denote component-wise inequalities. Given $x=[x_{1},\dots,x_{n}]^{\top}\in\mathbb{R}^{n}$, let $\operatorname{diag}(x)$ denote the $n\times n$ diagonal matrix whose $i$th diagonal entry equals $x_{i}$. Let $\odot$ ($\oslash$ resp.) denote Hadamard entrywise multiplication (division resp.) between two matrices of the same dimensions. For $x,y\in\mathbb{R}^{n}$ and $B\in\mathbb{R}^{n\times n}$, we shall use the property

xy^{\top}\odot B=\operatorname{diag}(x)B\operatorname{diag}(y). (1)

Define the $n$-dimensional simplex as $\Delta_{n}=\{x\in\mathbb{R}^{n}\;|\;\mathbbold{1}_{n}^{\top}x=1,\;x\geq 0\}$ and the relative interior of the simplex as $\mathrm{int}(\Delta_{n})=\{x\in\mathbb{R}^{n}\;|\;\mathbbold{1}_{n}^{\top}x=1,\;x>0\}$.

A nonnegative matrix $B\geq 0$ is row-stochastic if $B\mathbbold{1}_{n}=\mathbbold{1}_{n}$. For a nonnegative matrix $B$, $G(B)$ is the weighted digraph associated with $B$, with node set $\{1,\dots,n\}$ and a directed edge $(i,j)$ from node $i$ to node $j$ if and only if $b_{ij}>0$. A nonnegative matrix $B$ is irreducible if its associated digraph is strongly connected. The Laplacian matrix of a nonnegative matrix $B$ is defined as $L(B)=\operatorname{diag}(B\mathbbold{1}_{n})-B$. For $B$ irreducible and row-stochastic, $v_{\textup{left}}(B)$ denotes the left dominant eigenvector of $B$, i.e., the entry-wise positive left eigenvector normalized to have unit sum and associated with the dominant eigenvalue of $B$ [6, Perron–Frobenius theorem].
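Identity (1) and the definition of $v_{\textup{left}}(\cdot)$ are easy to check numerically; the snippet below is an illustrative sketch (the matrices and the helper `v_left` are our own choices, not objects from the paper):

```python
import numpy as np

# Check the Hadamard identity (1): x y^T ⊙ B = diag(x) B diag(y).
rng = np.random.default_rng(0)
x, y = rng.random(4), rng.random(4)
B = rng.random((4, 4))
lhs = np.outer(x, y) * B
rhs = np.diag(x) @ B @ np.diag(y)
assert np.allclose(lhs, rhs)

def v_left(B):
    """Left dominant eigenvector of an irreducible row-stochastic matrix:
    the entry-wise positive left eigenvector for the dominant eigenvalue 1,
    normalized to unit sum (Perron-Frobenius)."""
    vals, vecs = np.linalg.eig(B.T)               # eigenvectors of B^T = left eigenvectors of B
    v = np.real(vecs[:, np.argmax(np.real(vals))])  # eigenvalue 1 has the largest real part here
    v = np.abs(v)                                  # the Perron vector is sign-definite
    return v / v.sum()

# Row-stochastic, irreducible, positive diagonal: 1 -> 2 -> 3 -> 1.
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.0, 0.6]])
w = v_left(A)
assert np.allclose(w @ A, w) and np.isclose(w.sum(), 1.0)
```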

II Problem Framework and ASAP Model

In this section, we first propose the Assignment and Appraisal (ASAP) model and establish that it is well-posed for finite time. The proposed ASAP model can be considered a socio-inspired, distributed, and online algorithm for optimal resource allocation problems. Our model captures two fundamental processes within teams: workload distribution and transactive memory. We consider two distributed, dynamic models for the workload division: a compartmental system model and a linear model that uses average-appraisal as the input for adjusting workload. The transactive memory is quantified by the appraisal network and reflects individualized peer evaluation in the team. The development of the transactive memory system allows the team to estimate the work assignment that maximizes the collective team performance.

II-A Workload assignment, performance observation, and appraisal network

Workload assignment

We consider a team of $n$ individuals performing a sequence of tasks. Let $\bm{w}=[w_{1},\dots,w_{n}]^{\top}\in\mathrm{int}(\Delta_{n})$ denote the vector of workload assignments for a given task, where $w_{i}$ is the work assignment of individual $i$.

Individual performance

Let $p(\bm{w}):\mathrm{int}(\Delta_{n})\rightarrow\mathbb{R}_{>0}^{n}$ represent the vector of individual performances, which change as a function of the work assignment, where $p(\bm{w})=[p_{1}(w_{1}),\dots,p_{n}(w_{n})]^{\top}$ and $p_{i}(w_{i})$ is the performance of individual $i$. In general, individuals perform better when they have a smaller workload; we formalize this notion with the following two assumptions.

Assumption 1.

(Smooth and strictly decreasing performance functions) Assume the function $p_{i}:(0,1]\rightarrow[0,\infty)$ is $C^{1}$, strictly decreasing, convex, integrable, and satisfies $\lim_{x\to 0^{+}}p_{i}(x)=+\infty$.

Assumption 2.

(Power-law performance functions) Assume the function $p_{i}:(0,1]\rightarrow[0,\infty)$ is of the form $p_{i}(x)=s_{i}x^{-\gamma}$, where $s_{i}>0$ and $\gamma\in(0,1)$.

The first assumption is quite general and can be further weakened at the cost of additional notation. The second assumption is more restrictive than Assumption 1, but is well-motivated by the power law for individual learning [18]. Note that functions obeying Assumption 2 also satisfy Assumption 1.
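To make Assumption 2 concrete, the following sketch instantiates power-law performance functions for hypothetical skill levels (the values of `s` and `gamma` are illustrative choices of ours) and checks the two defining properties:

```python
import numpy as np

# Illustrative instance of Assumption 2: p_i(x) = s_i * x**(-gamma).
# The skill levels s and the exponent gamma are hypothetical values.
s = np.array([1.0, 2.0, 0.5])
gamma = 0.5

def p(w):
    """Vector of individual performances for a workload vector w."""
    return s * w ** (-gamma)

w = np.array([0.3, 0.5, 0.2])
perf = p(w)
# Strictly decreasing: increasing any workload lowers that individual's performance.
assert np.all(p(w * 1.1) < perf)
# Divergence at zero workload: lim_{x -> 0+} p_i(x) = +inf.
assert p(np.array([1e-12, 0.5, 0.5]))[0] > 1e5
```

As noted above, functions of this form also satisfy the smoothness, convexity, and integrability requirements of Assumption 1 for $\gamma\in(0,1)$.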

Appraisal network

Let $A=\{a_{ij}\}_{i,j\in\{1,\dots,n\}}$ denote the $n\times n$ nonnegative, row-stochastic appraisal matrix, where $a_{ij}$ is individual $i$’s appraisal of individual $j$. The appraisal matrix represents the team’s network structure and transactive memory system.

II-B Model description and problem statement

In this work, we design a model where the workload assignment coevolves with the appraisals: the workload assignment changes as a function of the appraisals, and the appraisals update based on perceived performance disparities for the assigned workload. Suppose at each time $t$ the team has a workload assignment $\bm{w}(t)$, individual performances $p(\bm{w}(t))$, and appraisal matrix $A(t)$. Since we are studying teams, it is reasonable to assume the appraisal network is strongly connected and that each individual appraises themself. This translates to an irreducible initial appraisal matrix $A(0)$ with strictly positive self-appraisals $a_{ii}(0)>0$ for all $i\in\{1,\dots,n\}$. All members also start with a strictly positive workload $\bm{w}(0)\in\mathrm{int}(\Delta_{n})$. As shorthand throughout the rest of the paper, we write $A_{0}=A(0)$ and $\bm{w}_{0}=\bm{w}(0)$.

Before introducing the model, we first define the work flow function $F=[F_{1}(A,\bm{w}),\dots,F_{n}(A,\bm{w})]^{\top}$, where $F_{i}:[0,1]^{n\times n}\times\Delta_{n}\rightarrow\Delta_{n}$ describes how individual $i$ adjusts their own work assignment. Our coevolving assignment and appraisal process is then quantified by the following dynamical system.

Definition 3 (ASAP (assignment and appraisal) model).

Consider $n$ performance functions $p_{i}$ satisfying Assumption 1 or 2. The coevolution of the appraisal network $A(t)$ and the workload assignment $\bm{w}(t)$ obeys the following coupled dynamics,

\begin{split}\dot{a}_{ij}&=a_{ij}\Big{(}p_{j}(w_{j})-\sum_{k=1}^{n}a_{ik}p_{k}(w_{k})\Big{)},\\ \dot{w}_{i}&=F_{i}(A,\bm{w}),\end{split} (2)

which reads in matrix form

\begin{split}\dot{A}&=A\odot\Big{(}\mathbbold{1}_{n}p(\bm{w})^{\top}-Ap(\bm{w})\mathbbold{1}_{n}^{\top}\Big{)},\\ \dot{\bm{w}}&=F(A,\bm{w}).\end{split} (3)

The work flow function FF obeys one of the following work flow models:

Donor-controlled: F_{i}(A,\bm{w})=-w_{i}+\sum_{k=1}^{n}a_{ki}w_{k}, (4)
Average-appraisal: F_{i}(A,\bm{w})=-w_{i}+\frac{1}{n}\sum_{k=1}^{n}a_{ki}. (5)

The matrix forms of the donor-controlled (4) and average-appraisal (5) work flows are $F(A,\bm{w})=-\bm{w}+A^{\top}\bm{w}$ and $F(A,\bm{w})=-\bm{w}+\frac{1}{n}A^{\top}\mathbbold{1}_{n}$, respectively.
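To make the coupled dynamics concrete, here is a minimal forward-Euler sketch of the ASAP model (2) with the donor-controlled work flow (4). The skill levels, exponent, step size, horizon, and initial conditions are illustrative choices of ours, not values from the paper; the final comparison uses the equal-performance optimum $w_{i}\propto s_{i}^{1/\gamma}$, which is our own computation from Assumption 2.

```python
import numpy as np

# Forward-Euler sketch of the ASAP model (2) with donor-controlled flow (4);
# all numerical choices below are illustrative.
rng = np.random.default_rng(1)
n, gamma, dt = 4, 0.5, 0.02
s = np.array([1.0, 2.0, 1.5, 0.8])          # hypothetical skill levels
p = lambda w: s * w ** (-gamma)             # power-law performances (Assumption 2)

A = rng.random((n, n)) + np.eye(n)          # strictly positive: irreducible, a_ii > 0
A /= A.sum(axis=1, keepdims=True)           # row-stochastic initial appraisals
w = np.full(n, 1.0 / n)                     # uniform initial workload in int(Δ_n)

for _ in range(300_000):                    # horizon T = 6000
    perf = p(w)
    # Appraisal update: a_ij grows iff j beats i's appraisal-weighted average.
    A_dot = A * (perf[None, :] - (A @ perf)[:, None])
    w_dot = -w + A.T @ w                    # donor-controlled work flow (4)
    A += dt * A_dot
    w += dt * w_dot

# Invariants of Lemma 4 are preserved along the trajectory:
assert np.allclose(A.sum(axis=1), 1.0) and np.all(A >= 0)
assert np.isclose(w.sum(), 1.0) and np.all(w > 0)
# The team learns the optimum: performances equalize and w approaches
# w_opt, which under Assumption 2 is w_i ∝ s_i**(1/gamma).
w_opt = s ** (1 / gamma) / np.sum(s ** (1 / gamma))
assert np.allclose(w, w_opt, atol=1e-2)
```

The same loop covers the average-appraisal flow (5) by replacing the `w_dot` line with `w_dot = -w + A.mean(axis=0)`.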

The appraisal weights of the ASAP model (2) update based on performance feedback between neighboring individuals. For neighboring team members $i$ and $j$, $i$ will increase their appraisal of $j$ if $j$’s performance is larger than the weighted average performance observed by $i$, i.e., $p_{j}(w_{j})>\sum_{k=1}^{n}a_{ik}p_{k}(w_{k})$. Individual $i$ also updates their self-appraisal with the same mechanism. The irreducibility and strictly positive self-appraisal assumptions on the appraisal network mean that every individual’s performance is evaluated by themself and by at least one other individual within the team.

The donor-controlled work flow (4) models a team where individuals exchange portions of their workload assignment with their neighbors, and the amount of work exchanged depends on the current work assignments and the appraisal values. The work individual $j$ gives to individual $i$ has flow rate $a_{ji}$ and is proportional to $w_{j}$. The average-appraisal work flow (5) assumes that each individual collects feedback from neighboring team members through appraisal evaluations. Each individual uses this feedback to calculate their average appraisal $\frac{1}{n}\sum_{k=1}^{n}a_{ki}$, which is then used to adjust their own workload assignment. The average appraisal is equivalent to the degree centrality of the appraisal network. Note that while the donor-controlled work flow is decentralized and distributed, the average-appraisal work flow is only distributed, since it requires each individual to know the total number of team members.

In the following lemma, we show that the ASAP model is well-posed and the appraisal network maintains the same network topology for finite time.

Lemma 4 (Finite-time properties for the ASAP model).

Consider the ASAP model (2) with donor-controlled (4) or average-appraisal (5) work flow. Assume $A_{0}$ is row-stochastic and irreducible with strictly positive diagonal, and $\bm{w}_{0}\in\mathrm{int}(\Delta_{n})$. Then, for any finite $\Delta t>0$, the following statements hold:

  1. (i)

    $\bm{w}(t)\in\mathrm{int}(\Delta_{n})$ for $t\in[0,\Delta t]$;

  2. (ii)

    $A(t)$ remains row-stochastic with the same zero/positive pattern for $t\in[0,\Delta t]$.

Proof.

Before proving statement (i), we give some properties of the appraisal dynamics. If $a_{ij}(t)=0$, then $\dot{a}_{ij}(t)=0$, which implies $a_{ij}(t)\geq 0$. Using the Hadamard product property (1), the matrix form of the appraisal dynamics can also be written as $\dot{A}=A\operatorname{diag}(p(\bm{w}))-\operatorname{diag}(Ap(\bm{w}))A$. Then, since $A_{0}\mathbbold{1}_{n}=\mathbbold{1}_{n}$, we have $\dot{A}\mathbbold{1}_{n}=\mathbbold{0}_{n}$, so $A(t)$ remains row-stochastic for $t\geq 0$.

Next, we use the row-stochasticity of $A(t)$ to prove $\bm{w}(t)\in\mathrm{int}(\Delta_{n})$ for the donor-controlled work flow and $t\in[0,\Delta t]$. Left-multiplying the $\bm{w}(t)$ dynamics by $\mathbbold{1}_{n}^{\top}$ gives $\mathbbold{1}_{n}^{\top}\dot{\bm{w}}=\mathbbold{1}_{n}^{\top}(-\bm{w}+A^{\top}\bm{w})=0$, so the total workload is conserved. Next, let $w_{i}(t)=\min_{k}\{w_{k}(t)\}$. If $w_{i}(t)=0$, then, since $A(t)\geq 0$, we have $\dot{w}_{i}(t)=\sum_{k=1}^{n}a_{ki}(t)w_{k}(t)\geq 0$; therefore $\bm{w}(t)\in\Delta_{n}$. Lastly, we apply the Grönwall-Bellman comparison lemma to show that $\bm{w}(t)$ lives in the relative interior of the simplex. Since $w_{i}(0)>0$ and $\dot{w}_{i}(t)=-w_{i}(t)+\sum_{k=1}^{n}a_{ki}(t)w_{k}(t)\geq-w_{i}(t)$, we have $w_{i}(t)\geq w_{i}(0)e^{-t}>0$ for $t\in[0,\Delta t]$. Therefore, if $\bm{w}_{0}\in\mathrm{int}(\Delta_{n})$, then $\bm{w}(t)\in\mathrm{int}(\Delta_{n})$ for $t\in[0,\Delta t]$.

The proof of statement (i) extends to the average-appraisal work flow (5) by the same argument, since $\dot{w}_{i}(t)=-w_{i}(t)+\frac{1}{n}\sum_{k=1}^{n}a_{ki}(t)\geq-w_{i}(t)$.

For statement (ii), to prove that $A(t)$ maintains the same zero/positive pattern for $t\in[0,\Delta t]$, consider any $i,j$ such that $a_{ij}(0)>0$. Since $\bm{w}(t)\in\mathrm{int}(\Delta_{n})$, the performance function assumptions give $p(\bm{w}(t))>0$, and $p_{j}(w_{j})-\sum_{k=1}^{n}a_{ik}p_{k}(w_{k})$ is finite for any $i,j$ and $t\in[0,\Delta t]$. Let $p_{\max}(\bm{w}(t))=\max_{k\in\{1,\dots,n\}}\{p_{k}(w_{k})\}$. Then the convex combination of individual performances is upper bounded as $\sum_{k=1}^{n}a_{ik}p_{k}(w_{k})\leq p_{\max}(\bm{w}(t))$, and we can lower bound the time derivative of $a_{ij}(t)$,

\dot{a}_{ij}(t)=a_{ij}(t)\Big{(}p_{j}(w_{j}(t))-\sum\nolimits_{k=1}^{n}a_{ik}(t)p_{k}(w_{k}(t))\Big{)}\geq-a_{ij}(t)\,p_{\max}(\bm{w}(t)).

Using the Grönwall-Bellman comparison lemma again, for $t\in[0,\Delta t]$ we obtain

a_{ij}(t)\geq a_{ij}(0)\exp\bigg{(}-\int_{0}^{t}p_{\max}(\bm{w}(\tau))\,d\tau\bigg{)}>0.

Therefore, A(t)A(t) remains row-stochastic and maintains the same zero/positive pattern as A0A_{0} for finite time. ∎

II-C Team performance and optimal workload as model equilibria

We are interested in the collective team performance. While no single collective team performance function is widely accepted in the social sciences, we consider three such functions. Under minor technical assumptions, the optimal workload for all three is characterized by equal individual performance levels and is an equilibrium point of the ASAP model. If $p_{i}(w_{i})$ represents the marginal utility of individual $i$, then the collective team performance can be measured by the total utility,

\mathcal{H}_{\textup{tot}}(\bm{w})=\sum_{i=1}^{n}\int_{0}^{w_{i}}p_{i}(x)\,dx.

The team performance can alternatively be measured by the “weakest link” or minimum performer,

\mathcal{H}_{\textup{min}}(\bm{w})=\min_{i\in\{1,\dots,n\}}\{p_{i}(w_{i})\}.

Another metric often used is the weighted average individual performance:

\mathcal{H}_{\textup{avg}}(\bm{w})=\sum_{i=1}^{n}w_{i}p_{i}(w_{i}).
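For power-law performances (Assumption 2), all three measures are easy to evaluate. The sketch below uses illustrative skill values of our choosing and the closed form of the integral in $\mathcal{H}_{\textup{tot}}$; the equal-performance workload used for comparison, $w_{i}\propto s_{i}^{1/\gamma}$, is our own computation obtained by equating the power-law performances across individuals.

```python
import numpy as np

# The three collective performance measures for power-law performances
# p_i(x) = s_i * x**(-gamma); for H_tot, the integral of p_i from 0 to w_i
# is s_i * w_i**(1-gamma) / (1-gamma). Skill values are illustrative.
s, gamma = np.array([1.0, 2.0, 0.5]), 0.5

def H_tot(w):
    return np.sum(s * w ** (1 - gamma) / (1 - gamma))

def H_min(w):
    return np.min(s * w ** (-gamma))

def H_avg(w):
    return np.sum(w * (s * w ** (-gamma)))

# Equal-performance workload vs. a uniform split:
w_eq = s ** (1 / gamma) / np.sum(s ** (1 / gamma))
w_uni = np.full(3, 1 / 3)
for H in (H_tot, H_min, H_avg):
    assert H(w_eq) > H(w_uni)   # the equal-performance workload wins
```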

The next theorem clarifies when the workload maximizing $\mathcal{H}_{\textup{tot}}$, $\mathcal{H}_{\textup{min}}$, or $\mathcal{H}_{\textup{avg}}$ is an equilibrium of the ASAP model.

Theorem 5 (Optimal performance as equilibria of dynamics).

Consider performance functions $p_{i}$ satisfying Assumption 1 for all $i\in\{1,\dots,n\}$. Then

  1. (i)

    there exists a unique pair $(p^{*},\bm{w}^{\mathrm{opt}})$ such that $p^{*}>0$, $\bm{w}^{\mathrm{opt}}\in\mathrm{int}(\Delta_{n})$, and $p(\bm{w}^{\mathrm{opt}})=p^{*}\mathbbold{1}_{n}$.

Additionally, let $\mathcal{H}$ denote $\mathcal{H}_{\textup{tot}}$, $\mathcal{H}_{\textup{min}}$, or $\mathcal{H}_{\textup{avg}}$, and let Assumption 2 hold when $\mathcal{H}=\mathcal{H}_{\textup{avg}}$. Then

  1. (ii)

    $\bm{w}^{\mathrm{opt}}$ is the unique solution to

    \bm{w}^{\mathrm{opt}}=\operatorname*{arg\,max}_{\bm{w}\in\Delta_{n}}\{\mathcal{H}(\bm{w})\}.

Finally, consider the ASAP model (2) with donor-controlled work flow (4), and let $A_{0}$ be row-stochastic and irreducible with strictly positive diagonal, and $\bm{w}_{0}\in\mathrm{int}(\Delta_{n})$. Then

  1. (iii)

    there exists at least one matrix $A^{*}$ with the same zero/positive pattern as $A_{0}$ that satisfies $\bm{w}^{\mathrm{opt}}=v_{\textup{left}}(A^{*})$; and

  2. (iv)

    every pair $(A^{*},\bm{w}^{\mathrm{opt}})$, such that $A^{*}$ has the same zero/positive pattern as $A_{0}$ and $\bm{w}^{\mathrm{opt}}=v_{\textup{left}}(A^{*})$, is an equilibrium.

For the average-appraisal work flow (5), statements (iii)-(iv) may not hold with $\bm{w}^{\mathrm{opt}}=\frac{1}{n}(A^{*})^{\top}\mathbbold{1}_{n}$, since there may not exist an $A^{*}$ with the same zero/positive pattern as $A_{0}$. Section V elaborates on these results.

Proof.

Regarding statement (i), recall that $p_{i}$ is $C^{1}$ and strictly decreasing by Assumption 1 or 2. We now show that, given our assumptions, there exists $\bm{w}^{\mathrm{opt}}\in\mathrm{int}(\Delta_{n})$ such that $p(\bm{w}^{\mathrm{opt}})=p^{*}\mathbbold{1}_{n}$ holds. Let $p_{i}^{-1}$ denote the inverse of $p_{i}$ and let $\circ$ denote function composition, i.e., $(f\circ g)(x)=f(g(x))$. Given $p_{1}(w_{1})=p_{i}(w_{i})$, we have $w_{i}=(p^{-1}_{i}\circ p_{1})(w_{1})$ for all $i\neq 1$. Then, taking into account $\bm{w}^{\mathrm{opt}}\in\mathrm{int}(\Delta_{n})$,

w_{1}+\sum\nolimits_{i=2}^{n}(p^{-1}_{i}\circ p_{1})(w_{1})=1,

where the sum runs over $i\neq 1$ since the $i=1$ term is $w_{1}$ itself. Since $p_{i}$ is strictly decreasing, $p_{i}^{-1}$ is strictly decreasing and hence $p_{i}^{-1}\circ p_{1}$ is strictly increasing. Therefore the left-hand side of the above equation is strictly increasing in $w_{1}$, so there is a unique $w^{\mathrm{opt}}_{1}\in(0,1)$ solving the equation. Consequently, there is a unique pair $(p^{*},\bm{w}^{\mathrm{opt}})$ satisfying $p(\bm{w}^{\mathrm{opt}})=p^{*}\mathbbold{1}_{n}$, where $p^{*}=p_{1}(w^{\mathrm{opt}}_{1})>0$.

Regarding statement (ii), $p_{i}$ is strictly decreasing, $C^{1}$, and convex by Assumptions 1-2. Then $\mathcal{H}_{\textup{tot}}$, $\mathcal{H}_{\textup{min}}$, and $\mathcal{H}_{\textup{avg}}$ are all strictly concave. Since we are maximizing over a compact set and $\mathcal{H}(\bm{w})$ is finite for $\bm{w}\in\Delta_{n}$, there exists a unique optimal solution $\bm{w}^{\mathrm{opt}}\in\Delta_{n}$. Next, we show that, for each collective team performance measure, $\bm{w}^{\mathrm{opt}}$ must satisfy $p(\bm{w}^{\mathrm{opt}})=p^{*}\mathbbold{1}_{n}$ with $p^{*}>0$ and $\bm{w}^{\mathrm{opt}}\in\mathrm{int}(\Delta_{n})$.

First, consider $\mathcal{H}=\mathcal{H}_{\textup{tot}}$. Let $\bm{\mu}\in\mathbb{R}^{n}$ and $\lambda\in\mathbb{R}$. The KKT conditions are given by: $p(\bm{w}^{\mathrm{opt}})+\bm{\mu}-\lambda\mathbbold{1}_{n}=\mathbbold{0}_{n}$, $\bm{\mu}\odot\bm{w}^{\mathrm{opt}}=\mathbbold{0}_{n}$, and $\bm{\mu}\succeq\mathbbold{0}_{n}$. If $\lambda\to\infty$, then the first KKT condition forces $\bm{w}^{\mathrm{opt}}=\mathbbold{0}_{n}$, which violates $\bm{w}^{\mathrm{opt}}\in\Delta_{n}$. Similarly, $w^{\mathrm{opt}}_{i}=0$ for any $i$ would satisfy the second KKT condition but violate the first, since $\lim_{x\to 0^{+}}p_{i}(x)=+\infty$. As a result, $\lambda<\infty$ and $\bm{\mu}=\mathbbold{0}_{n}$, which implies $p_{i}(w^{\mathrm{opt}}_{i})=\lambda$ for all $i$. Therefore $\bm{w}^{\mathrm{opt}}\in\mathrm{int}(\Delta_{n})$ and there exists $p^{*}=\lambda\in(0,\infty)$ such that $p(\bm{w}^{\mathrm{opt}})=p^{*}\mathbbold{1}_{n}$.

Second, consider $\mathcal{H}=\mathcal{H}_{\textup{min}}$. Define the set $\operatorname*{arg\,min}(p(\bm{w}))=\{i\in\{1,\dots,n\}\;|\;p_{i}(w_{i})=\min_{k}\{p_{k}(w_{k})\}\}$ and let $|\operatorname*{arg\,min}(p(\bm{w}))|$ denote its cardinality. We prove the claim by contradiction. Assume $\bm{w}^{\mathrm{opt}}$ is the optimal solution and there exists at least one $j\neq i$ such that $p_{i}(w^{\mathrm{opt}}_{i})<p_{j}(w^{\mathrm{opt}}_{j})$ for $i\in\operatorname*{arg\,min}(p(\bm{w}^{\mathrm{opt}}))$. Then there exist a sufficiently small $\epsilon>0$ and $\bm{w}^{*}\in\mathrm{int}(\Delta_{n})$ such that $\mathcal{H}_{\textup{min}}(\bm{w}^{\mathrm{opt}})<\mathcal{H}_{\textup{min}}(\bm{w}^{*})$, where $w^{*}_{i}=w^{\mathrm{opt}}_{i}-\epsilon$ and $w^{*}_{j}=w^{\mathrm{opt}}_{j}+\epsilon|\operatorname*{arg\,min}(p(\bm{w}^{\mathrm{opt}}))|$. This contradicts the optimality of $\bm{w}^{\mathrm{opt}}$. Additionally, we can prove $\bm{w}^{\mathrm{opt}}\in\mathrm{int}(\Delta_{n})$ by assuming there exists at least one $i$ such that $w^{\mathrm{opt}}_{i}=0$ and following the same contradiction argument. Therefore $\bm{w}^{\mathrm{opt}}\in\mathrm{int}(\Delta_{n})$ and $p(\bm{w}^{\mathrm{opt}})=p^{*}\mathbbold{1}_{n}$.

Third, consider $\mathcal{H}=\mathcal{H}_{\textup{avg}}$. Let $\bm{\mu}\in\mathbb{R}^{n}$ and $\lambda\in\mathbb{R}$. The KKT conditions are given by: $(1-\gamma)p(\bm{w}^{\mathrm{opt}})+\bm{\mu}-\lambda\mathbbold{1}_{n}=\mathbbold{0}_{n}$, $\bm{\mu}\odot\bm{w}^{\mathrm{opt}}=\mathbbold{0}_{n}$, and $\bm{\mu}\succeq\mathbbold{0}_{n}$. The rest of the proof follows from the same argument used for $\mathcal{H}=\mathcal{H}_{\textup{tot}}$.

Regarding statements (iii) and (iv), let $a_{d}=[a_{11},\dots,a_{nn}]^{\top}\in[0,1]^{n}$ and $A(a_{d},A_{0})=\operatorname{diag}(a_{d})+(I_{n}-\operatorname{diag}(a_{d}))A_{0}$. We prove that there exists some $a_{d}^{*}>0$ such that $\bm{w}^{\mathrm{opt}}=v_{\textup{left}}\big{(}A(a_{d}^{*},A_{0})\big{)}$. By the assumptions on $A_{0}$, there exists $\bar{\bm{w}}=v_{\textup{left}}(A_{0})$, and we impose $\sigma\bar{\bm{w}}=(I_{n}-\operatorname{diag}(a_{d}^{*}))\bm{w}^{\mathrm{opt}}$ for some $\sigma\in\mathbb{R}$. Solving for $a_{d}^{*}$ gives $a_{d}^{*}=\mathbbold{1}_{n}-\sigma(\bar{\bm{w}}\oslash\bm{w}^{\mathrm{opt}})$. Next, we choose $\sigma=\epsilon/\max_{i}\{\bar{w}_{i}/w^{\mathrm{opt}}_{i}\}$ for $\epsilon\in(0,1)$, which gives the following bounds on $a^{*}_{ii}$ for all $i$,

aii[1ϵ,1ϵmini{w¯i/wi}(maxi{w¯i/wi})1](0,1).a_{ii}\in[1-\epsilon,1-\epsilon\min_{i}\{\bar{w}_{i}/w_{i}\}(\max_{i}\{\bar{w}_{i}/w_{i}\})^{-1}]\subseteq(0,1).

Since ad>0na_{d}^{*}>\mathbbold{0}_{n}, the matrix A(ad,A0)A^{*}(a_{d}^{*},A_{0}) has the same zero/positive pattern as A(0)A(0). This shows that, given 𝒘opt\bm{w}^{\mathrm{opt}}, there always exists a matrix AA^{*} with left dominant eigenvector 𝒘opt\bm{w}^{\mathrm{opt}} and with the same pattern as A(0)A(0).

Next, we prove that any such pair (A,𝒘opt)(A^{*},\bm{w}^{\mathrm{opt}}) is an equilibrium. Our assumptions on AA^{*} and the Perron-Frobenius theorem together imply that the rank(In(A))=n1\mathrm{rank}(I_{n}-(A^{*})^{\top})=n-1. For the ASAP model (2) with donor-controlled work flow (4), the equilibrium conditions on the self-appraisal states and work assignment read:

0n\displaystyle\mathbbold{0}_{n} =diag(ad(A))(InA)p(𝒘),\displaystyle=\operatorname{diag}\big{(}a_{d}(A^{*})\big{)}(I_{n}-A^{*})p(\bm{w}^{*}), (6)
0n\displaystyle\mathbbold{0}_{n} =(AIn)𝒘.\displaystyle=(A^{*}-I_{n})^{\top}\bm{w}^{*}. (7)

Equation (6) is satisfied because we know from statement (ii) that p(𝒘opt)=p1np(\bm{w}^{\mathrm{opt}})=p^{*}\mathbbold{1}_{n}. Equation (7) is satisfied because we know vleft(A)=𝒘optv_{\textup{left}}(A^{*})=\bm{w}^{\mathrm{opt}}. This concludes the proof of statements (iii) and (iv). ∎

The equilibria described in the above lemma also resemble an evolutionarily stable set [12], which is defined as a set of strategies with the same payoff. Our proof shows that at least one AA^{*} always exists; in general, multiple AA^{*} matrices share a given irreducible zero/positive pattern and satisfy 𝒘opt=vleft(A)\bm{w}^{\mathrm{opt}}=v_{\textup{left}}(A^{*}), all yielding the same collective team performance. We will later show that, under mild conditions, this optimal solution is an equilibrium of our dynamics with various attractivity properties (see Sections IV and V).

III Properties of Appraisal Dynamics: Conserved Quantities and Reduced Order Dynamics

In this section, we show that every cycle in the appraisal network is associated to a conserved quantity. Leveraging these conserved quantities, we reduce the appraisal dynamics to an n1n-1 dimensional submanifold. Before doing so, we introduce the notion of cycles, cycle path vectors, the cycle set, and the cycle space. For a given initial appraisal matrix A0A_{0} with strictly positive diagonal, let mm denote the total number of strictly positive interpersonal appraisals in the edge set (A0)\mathcal{E}(A_{0}). Recall that if aij(0)=0a_{ij}(0)=0 for any i,ji,j, then a˙ij=0\dot{a}_{ij}=0, which implies aij(t)=0a_{ij}(t)=0 for all t0t\geq 0. Therefore we can consider the total number of appraisal states to be the number of edges in A0A_{0}, which gives a total of n+mn+m appraisal states.

Definition 6 (Cycles, cycle path vectors, and cycle set).

Consider the digraph G(A)G(A) associated to matrix A0n×nA\in\mathbb{R}_{\geq 0}^{n\times n}.

A cycle is an ordered sequence of nodes r={r1,,rk,r1}r=\{r_{1},\dots,r_{k},r_{1}\} that starts and ends at the same node, contains at least two distinct nodes, in which no node other than the first appears more than once, and in which each sequential pair of nodes denotes an edge (ri,ri+1)(A)(r_{i},r_{i+1})\in\mathcal{E}(A). We do not consider self-loops, i.e. self-appraisal edges, to be part of any cycle.

Let Cr{0,1}mC_{r}\in\{0,1\}^{m} denote the cycle path vector associated to cycle rr. Let each off-diagonal edge of the appraisal matrix (i,j)(A)(i,j)\in\mathcal{E}(A) be assigned a number in the ordered set {1,,m}\{1,\dots,m\}. For every edge e{1,,m}e\in\{1,\dots,m\}, the eeth component of CrC_{r} is defined as

(Cr)e={+1,if edge e is positively traversed by Cr,0,otherwise.\displaystyle(C_{r})_{e}=\begin{cases}+1,&\;\text{if edge }e\text{ is positively traversed by }C_{r},\\ 0,&\;\text{otherwise}.\end{cases}

Let Φ(A)\Phi(A) denote the cycle set, i.e. the set of all cycles, in digraph G(A)G(A).

To refer to a particular cycle, we will use the cycle’s associated cycle path vector, which then allows us to define the cycle space.

Definition 7 (Cycle space).

The cycle space is the subspace of \mathbb{R}^{m} spanned by the cycle path vectors. By [4, pg. 29, Theorem 9], the cycle space of a strongly connected digraph G(A)G(A) is spanned by a basis of μ=mn+1\mu=m-n+1 cycle path vectors.

Let CB{0,1}m×μC_{B}\in\{0,1\}^{m\times\mu} denote a matrix where the columns are a basis of the cycle space.

The following theorem (i) rigorously defines the conserved quantities associated to cycles in the appraisal network; (ii) shows that the appraisal states can be reduced from dimension n+mn+m to n1n-1 using the conserved quantities; and (iii) uses both the previous properties to introduce reduced order dynamics that have a one-to-one correspondence with the appraisal trajectories.

Theorem 8 (Conserved cycle constants give reduced order dynamics).

Consider the ASAP model (3) with donor-controlled (4) or average-appraisal (5) work flow. Given initial conditions A0A_{0} row-stochastic, irreducible, with strictly positive diagonal and 𝐰0int(Δn)\bm{w}_{0}\in\mathrm{int}(\Delta_{n}), let (A(t),𝐰(t))(A(t),\bm{w}(t)) be the resulting trajectory. Then

  1. (i)

    for any cycle rr, the quantity

    cr=(i,j)raii(t)aij(t),c_{r}=\prod_{(i,j)\in r}\frac{a_{ii}(t)}{a_{ij}(t)}, (8)

    is constant; we refer to cr(0,)c_{r}\in(0,\infty) as the cycle constant associated to cycle rΦ(A0)r\in\Phi(A_{0});

  2. (ii)

    the appraisal matrix A(t)A(t) takes value in a submanifold of dimension n1n-1;

  3. (iii)

    given a solution (𝒗(t),𝒘¯(t))>0n×int(Δn)(\bm{v}(t),\bar{\bm{w}}(t))\in_{>0}^{n}\times\mathrm{int}(\Delta_{n}) with initial condition (𝒗0,𝒘¯0)=(1n,𝒘0)(\bm{v}_{0},\bar{\bm{w}}_{0})=(\mathbbold{1}_{n},\bm{w}_{0}) of the dynamics

    𝒗˙=diag(p(𝒘¯)𝒘¯𝒜(𝒗)p(𝒘¯)1n)𝒗,𝒘¯˙=F(𝒜(𝒗),𝒘¯),\begin{split}\dot{\bm{v}}&=\operatorname{diag}\big{(}p(\bar{\bm{w}})-\bar{\bm{w}}^{\top}\mathcal{A}(\bm{v})p(\bar{\bm{w}})\mathbbold{1}_{n}\big{)}\bm{v},\\ \dot{\bar{\bm{w}}}&=F(\mathcal{A}(\bm{v}),\bar{\bm{w}}),\end{split} (9)

    where 𝒜:nn×n\mathcal{A}:^{n}\to^{n\times{n}} is defined by

    𝒜(𝒗)=diag(A0𝒗)1A0diag(𝒗),\begin{split}\mathcal{A}(\bm{v})&=\operatorname{diag}(A_{0}\bm{v})^{-1}A_{0}\operatorname{diag}(\bm{v}),\end{split} (10)

    then A(t)=𝒜(𝒗(t))A(t)=\mathcal{A}(\bm{v}(t)) and 𝒘(t)=𝒘¯(t)\bm{w}(t)=\bar{\bm{w}}(t);

  4. (iv)

    for every equilibrium (𝒗,𝒘opt)(\bm{v}^{*},\bm{w}^{\mathrm{opt}}) of (9), (A,𝒘opt)(A^{*},\bm{w}^{\mathrm{opt}}) is an equilibrium of (3) with A=𝒜(𝒗)A^{*}=\mathcal{A}(\bm{v}^{*});

  5. (v)

    if additionally A0>0A_{0}>0, then the positive matrix A(t)A0A(t)\oslash A_{0} is rank 1 for all time tt.

Proof.

Regarding statement (i), we show that crc_{r} is constant for any rΦ(A0)r\in\Phi(A_{0}) by taking the natural logarithm of both sides of (8) and showing that the derivative vanishes. By Lemma 4, ln(cr)\ln(c_{r}) is well-defined since aii(t),aij(t)>0a_{ii}(t),a_{ij}(t)>0 for any edge (i,j)r(i,j)\in r and finite time t<t<\infty.

\difftln(cr)=(i,j)r(a˙iiaiia˙ijaij)=(i,j)r((pi(wi)p¯i(𝒘))(pj(wj)p¯i(𝒘)))=0.\diff{}{t}\ln(c_{r})=\sum_{(i,j)\in r}\Big{(}\frac{\dot{a}_{ii}}{a_{ii}}-\frac{\dot{a}_{ij}}{a_{ij}}\Big{)}\\ =\sum_{(i,j)\in r}\Big{(}\big{(}p_{i}(w_{i})-\bar{p}_{i}(\bm{w})\big{)}-\big{(}p_{j}(w_{j})-\bar{p}_{i}(\bm{w})\big{)}\Big{)}=0.

Therefore, crc_{r} is constant for all rΦ(A0)r\in\Phi(A_{0}).

Regarding statement (ii), we first introduce a change of variables from A(t)A(t) to B(t)={bij(t)}i,j{1,,n}0n×nB(t)=\{b_{ij}(t)\}_{i,j\in\{1,\dots,n\}}\in\mathbb{R}_{\geq 0}^{n\times n}, which exploits the fact that the appraisal dynamics preserve row-stochasticity. This allows the n+mn+m states of A(t)A(t) to be reduced to the mm states of B(t)B(t). Second, we show that there exist μ=mn+1\mu=m-n+1 independent cycle constants, define constraint equations associated to the cycle constants, and apply the implicit function theorem to show that the mm states of B(t)B(t) further reduce to n1n-1 states.

Let bij(t)=aij(t)aii(t)b_{ij}(t)=\frac{a_{ij}(t)}{a_{ii}(t)} for all i,ji,j. This is well-defined in finite-time by Theorem 4 and the assumption that A0A_{0} has strictly positive diagonal. Since the diagonal entries of B(t)B(t) remain constant and zero-valued edges remain zero, then we can consider the total states of B(t)B(t) to be the mm off-diagonal edges of B(t)B(t). Next, we introduce the cycle constant constraint functions and use the implicit function theorem to show that the mm states can be further reduced to n1n-1 using the cycle constants. For edge e=(i,j)e=(i,j), let bij(t)=be(t)b_{ij}(t)=b_{e}(t). Let z=[x,y]>0mz=[x^{\top},y^{\top}]^{\top}\in\mathbb{R}_{{>0}}^{m} where x=[b1,,bmμ]>0mμx=[b_{1},\dots,b_{m-\mu}]^{\top}\in\mathbb{R}_{{>0}}^{m-\mu} and y=[bmμ+1,,bm]>0μy=[b_{m-\mu+1},\dots,b_{m}]^{\top}\in_{>0}^{\mu}. Consider the cycle constant constraint function g(x,y)=[g1(x,y),,gμ(x,y)]:>0mμ×>0μμg(x,y)=[g_{1}(x,y),\dots,g_{\mu}(x,y)]^{\top}:\mathbb{R}_{{>0}}^{m-\mu}\times\mathbb{R}_{{>0}}^{\mu}\rightarrow^{\mu}, where gr(x,y)=ln(cr)(i,j)rln(aiiaij)=0g_{r}(x,y)=\ln(c_{r})-\sum_{(i,j)\in r}\ln(\frac{a_{ii}}{a_{ij}})=0 is associated to cycle path vector CrC_{r} for all r{1,,μ}r\in\{1,\dots,\mu\} and the selected cycles form a basis for the cycle subspace such that CB=[C1,,Cμ]C_{\textup{B}}=[C_{1},\dots,C_{\mu}]. In matrix form, g(x,y)g(x,y) reads as

g(x,y)\displaystyle g(x,y) =[ln(c1)ln(cμ)]+CB[ln(b1)ln(bm)]=0μ.\displaystyle=\begin{bmatrix}\ln(c_{1})\\ \vdots\\ \ln(c_{\mu})\end{bmatrix}+C_{B}^{\top}\begin{bmatrix}\ln(b_{1})\\ \vdots\\ \ln(b_{m})\end{bmatrix}=\mathbbold{0}_{\mu}.

We partition CBC_{B} into block matrices, CB=[C¯B,C^B]C_{B}=[\bar{C}_{B}^{\top},\hat{C}_{B}^{\top}]^{\top} where C¯B{0,1}mμ×μ\bar{C}_{B}\in\{0,1\}^{m-\mu\times\mu} and C^B{0,1}μ×μ\hat{C}_{B}\in\{0,1\}^{\mu\times\mu}. Then taking the partial derivative of g(x,y)g(x,y) with respect to yy,

g(x,y)y\displaystyle\frac{\partial{g(x,y)}}{\partial{y}} =CB[0mμ×μ(diag(y))1]=C^B(diag(y))1.\displaystyle=C_{B}^{\top}\begin{bmatrix}\mathbbold{0}_{m-\mu\times\mu}\\ (\operatorname{diag}(y))^{-1}\end{bmatrix}=\hat{C}_{B}^{\top}(\operatorname{diag}(y))^{-1}.

The ordering of the rows of CBC_{B} is determined by the ordering of the edges e{1,,m}e\in\{1,\dots,m\}. Since CBC_{B} has full column rank by definition, there exists an edge ordering such that rank(C^B)=μ\mathrm{rank}(\hat{C}_{B})=\mu. For this ordering, rank(g(x,y)y)=μ\mathrm{rank}(\frac{\partial{g(x,y)}}{\partial{y}})=\mu. By the implicit function theorem, y\in\mathbb{R}_{>0}^{\mu} is a continuous function of x\in\mathbb{R}_{>0}^{m-\mu}=\mathbb{R}_{>0}^{n-1}. Equivalently, BB can then be reduced from mm states to mμ=m(mn+1)=n1m-\mu=m-(m-n+1)=n-1 states. Therefore if A(t)A(t) is irreducible with strictly positive diagonal, then A(t)A(t) can be reduced to an n1n-1 dimensional submanifold.

Regarding statement (iii), we show that, if 𝒗(t)\bm{v}(t) satisfies the dynamics of (9), then 𝒜(𝒗(t))\mathcal{A}(\bm{v}(t)) defined by equation (10) satisfies the original ASAP dynamics (3). For shorthand, let p~(𝒗,𝒘)=𝒘𝒜(𝒗)p(𝒘)\tilde{p}(\bm{v},\bm{w})=\bm{w}^{\top}\mathcal{A}(\bm{v})p(\bm{w}). We compute:

a˙ij\displaystyle\dot{a}_{ij} =aij(0)v˙jk=1naik(0)vkaij(0)vjk=1naik(0)v˙k(k=1naik(0)vk)2\displaystyle=\frac{a_{ij}(0)\dot{v}_{j}}{\sum\nolimits_{k=1}^{n}a_{ik}(0)v_{k}}-\frac{a_{ij}(0)v_{j}\sum\nolimits_{k=1}^{n}a_{ik}(0)\dot{v}_{k}}{\big{(}\sum\nolimits_{k=1}^{n}a_{ik}(0)v_{k}\big{)}^{2}}
=aij(0)vjk=1naik(0)vk(pj(wj)p~(𝒗,𝒘)\displaystyle=\frac{a_{ij}(0)v_{j}}{\sum\nolimits_{k=1}^{n}a_{ik}(0)v_{k}}\Bigg{(}p_{j}(w_{j})-\tilde{p}(\bm{v},\bm{w})
k=1naik(0)vk(pk(wk)p~(𝒗,𝒘))h=1naih(0)vh)\displaystyle\qquad-\sum_{k=1}^{n}\frac{a_{ik}(0)v_{k}\big{(}p_{k}(w_{k})-\tilde{p}(\bm{v},\bm{w})\big{)}}{\sum\nolimits_{h=1}^{n}a_{ih}(0)v_{h}}\Bigg{)}
=aij(pj(wj)k=1naikpk(wk)).\displaystyle=a_{ij}\bigg{(}p_{j}(w_{j})-\sum\nolimits_{k=1}^{n}a_{ik}p_{k}(w_{k})\bigg{)}.

We also note that

A0=𝒜(𝒗(0)).A_{0}=\mathcal{A}(\bm{v}(0)).

Our claim follows from the uniqueness of solutions to ordinary differential equations.

Statement (iv) follows trivially from verifying that (𝒗,𝒘opt)(\bm{v}^{*},\bm{w}^{\mathrm{opt}}) and (A,𝒘opt)(A^{*},\bm{w}^{\mathrm{opt}}) are equilibrium points of the corresponding dynamics with A=𝒜(𝒗)>0A^{*}=\mathcal{A}(\bm{v}^{*})>0.

Regarding statement (v), we multiply A(t)A0A(t)\oslash A_{0} by the diagonal matrix D=diag([a11(0)/a11,,an1(0)/an1])D=\operatorname{diag}([a_{11}(0)/a_{11},\dots,a_{n1}(0)/a_{n1}]) and show that D(A(t)A0)D(A(t)\oslash A_{0}) has rank 11, which implies that A(t)A0A(t)\oslash A_{0} also has rank 11.

D(AA0)\displaystyle D(A\oslash A_{0}) =[1a11(0)a11a12a12(0)a11(0)a1na11a1n(0)1an1(0)an2an1an2(0)an1(0)annan1ann(0)]\displaystyle=\begin{bmatrix}1&\frac{a_{11}(0)}{a_{11}}\frac{a_{12}}{a_{12}(0)}&\cdots&\frac{a_{11}(0)a_{1n}}{a_{11}a_{1n}(0)}\\ \vdots&\vdots&\ddots&\vdots\\ 1&\frac{a_{n1}(0)a_{n2}}{a_{n1}a_{n2}(0)}&\cdots&\frac{a_{n1}(0)a_{nn}}{a_{n1}a_{nn}(0)}\end{bmatrix}

By assumption A0>0A_{0}>0, so G(A(t))G(A(t)) is a complete digraph for all finite tt. Then, by the cycle constants (8), for any distinct nodes i,j,ki,j,k we have \frac{a_{ii}a_{jj}a_{kk}}{a_{ij}a_{jk}a_{ki}}=\frac{a_{ii}(0)a_{jj}(0)a_{kk}(0)}{a_{ij}(0)a_{jk}(0)a_{ki}(0)} and \frac{a_{jj}a_{kk}}{a_{jk}a_{kj}}=\frac{a_{jj}(0)a_{kk}(0)}{a_{jk}(0)a_{kj}(0)}. Dividing the first equality by the second and rearranging gives \frac{a_{ii}}{a_{ii}(0)}\frac{a_{ij}(0)}{a_{ij}}=\frac{a_{ki}}{a_{ki}(0)}\frac{a_{kj}(0)}{a_{kj}}. This shows that all rows of D(AA0)D(A\oslash A_{0}) are identical and rank(D(AA0))=rank(AA0)=1\mathrm{rank}(D(A\oslash A_{0}))=\mathrm{rank}(A\oslash A_{0})=1. ∎
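As a numerical sanity check on statement (i), the sketch below integrates the appraisal and workload equations with a forward-Euler scheme and verifies that a cycle constant (8) stays (numerically) constant while the rows of A(t) remain stochastic. The appraisal update a_ij' = a_ij (p_j(w_j) - sum_k a_ik p_k(w_k)) and the donor-controlled workload flow w' = A^T w - w are read off from the proof above and from (7); the performance functions p_i(w_i) = (s_i/w_i)^{gamma_i}, the concrete numbers, and the use of Python rather than the authors' MATLAB are illustrative assumptions.

```python
# Forward-Euler sketch of the ASAP dynamics with donor-controlled work flow.
# Assumed ingredients (not from the paper verbatim): performance functions
# p_i(w_i) = (s_i / w_i)**g_i and the concrete initial conditions below.

def perf(w, s, g):
    return [(s[i] / w[i]) ** g[i] for i in range(len(w))]

def step(A, w, s, g, dt):
    n = len(w)
    p = perf(w, s, g)
    pbar = [sum(A[i][k] * p[k] for k in range(n)) for i in range(n)]
    # appraisal update: a_ij' = a_ij (p_j - pbar_i); preserves row sums
    A2 = [[A[i][j] * (1 + dt * (p[j] - pbar[i])) for j in range(n)]
          for i in range(n)]
    # donor-controlled work flow: w' = A^T w - w
    w2 = [w[i] + dt * (sum(A[k][i] * w[k] for k in range(n)) - w[i])
          for i in range(n)]
    return A2, w2

def cycle_const(A):
    # cycle constant (8) for the 3-cycle (1, 2, 3, 1)
    return (A[0][0] / A[0][1]) * (A[1][1] / A[1][2]) * (A[2][2] / A[2][0])

A = [[0.5, 0.3, 0.2], [0.2, 0.6, 0.2], [0.3, 0.3, 0.4]]
w = [0.3, 0.4, 0.3]
s, g = [0.3, 0.45, 0.25], [0.9, 0.8, 0.7]

c0 = cycle_const(A)
for _ in range(20000):                      # integrate up to t = 20
    A, w = step(A, w, s, g, 1e-3)

print(abs(cycle_const(A) - c0) / c0)        # small: cycle constant conserved
print(max(abs(sum(row) - 1) for row in A))  # small: rows stay stochastic
```

The residual drift in the cycle constant comes only from the second-order Euler error; the conservation itself is exact for the continuous-time flow.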

Case study for team of two

In order to illustrate the role of the cycle constants (8), we consider an example of a two-person team with performance functions p1(w1)=(0.45w1)0.9p_{1}(w_{1})=(\frac{0.45}{w_{1}})^{0.9} and p2(w2)=(0.55w2)0.8p_{2}(w_{2})=(\frac{0.55}{w_{2}})^{0.8}. Figure 2 shows the evolution of the trajectories for various initial conditions of the ASAP model with donor-controlled work flow. The trajectories illustrate the conserved quantity associated to the single cycle in the appraisal network, which is

c=a11(0)a22(0)(1a11(0))(1a22(0))c=\frac{a_{11}(0)a_{22}(0)}{(1-a_{11}(0))(1-a_{22}(0))} (11)

for the two-node case. Then the cycle constant cc with Theorem 8(ii) allows us to write the dynamics for n=2n=2 as

a˙11=a11(1a11)(p1(w1)p2(1w1)),w˙1=w1+(a11(1a11)(1c)w1+a11c+a11(1c)).\begin{split}\dot{a}_{11}&=a_{11}(1-a_{11})\big{(}p_{1}(w_{1})-p_{2}(1-w_{1})\big{)},\\ \dot{w}_{1}&=-w_{1}+\bigg{(}\frac{a_{11}(1-a_{11})(1-c)w_{1}+a_{11}}{c+a_{11}(1-c)}\bigg{)}.\end{split} (12)
Figure 2: Trajectories of the ASAP model (2) with donor-controlled work flow (4) for various initial conditions. The markers designate the initial values. All trajectories starting on a particular colored surface remain on that surface; the surfaces are associated to the conserved cycle constants. For n=2n=2, the dynamics reduce to the system (12) with cycle constant cc given by (11). The color blue corresponds to c<1c<1, red to c=1c=1, and green and black to c>1c>1.

The cycle constants can be thought of as parameters that measure how much individuals' initial perceptions of each other's skills deviate. When cr=1c_{r}=1 for some rΦ(A)r\in\Phi(A), all individuals along cycle rr are in agreement over their appraisals of every other individual.
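The reduced system (12) is easy to integrate directly. The sketch below uses the case-study performance functions and a forward-Euler scheme (the initial conditions and step size are illustrative choices). Since 0.45 + 0.55 = 1, the performances equalize at w_1 = 0.45, where p_1(0.45) = p_2(0.55) = 1, so the workload should converge there, consistent with Theorem 10(i).

```python
# Forward-Euler integration of the reduced two-person system (12).
# Initial conditions and step size are illustrative assumptions.

p1 = lambda w: (0.45 / w) ** 0.9
p2 = lambda w: (0.55 / w) ** 0.8

a11_0, a22_0 = 0.6, 0.7
c = (a11_0 * a22_0) / ((1 - a11_0) * (1 - a22_0))   # cycle constant (11)

a11, w1, dt = a11_0, 0.2, 1e-3
for _ in range(100000):                              # integrate up to t = 100
    da = a11 * (1 - a11) * (p1(w1) - p2(1 - w1))
    dw = -w1 + (a11 * (1 - a11) * (1 - c) * w1 + a11) / (c + a11 * (1 - c))
    a11, w1 = a11 + dt * da, w1 + dt * dw

print(w1)                        # close to w1_opt = 0.45
print(abs(p1(w1) - p2(1 - w1)))  # close to 0: performances equalized
```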

IV Stability Analysis for the ASAP Model with Donor-Controlled Work Flow

In this section, we study the asymptotic behavior of the ASAP model with donor-controlled work flow. Our analysis is based on a Lyapunov argument. Using this approach, we identify initial appraisal network conditions under which teams with complete graphs learn the optimal workload without additional assumptions. Under a technical assumption, we also rigorously prove that any strongly connected team converges to the optimal workload.

The next lemma defines the performance-entropy function, which we show to be a Lyapunov function for the ASAP model under certain structural assumptions on the appraisal network.

Lemma 9 (Performance-entropy function).

Consider the ASAP model (3) with donor-controlled work flow (4). Assume A0A_{0} row-stochastic, 𝐰0int(Δn)\bm{w}_{0}\in\mathrm{int}(\Delta_{n}), and there exists some AA^{*} with the same zero/positive pattern as A0A_{0} such that 𝐰opt=vleft(A)\bm{w}^{\mathrm{opt}}=v_{\textup{left}}(A^{*}). Define the performance-entropy function V:{aij}(i,j)(A0)×int(Δn)V:\{a_{ij}\}_{(i,j)\in\mathcal{E}(A_{0})}\times\mathrm{int}(\Delta_{n})\rightarrow by

V(A,𝒘)=i=1n(wioptwipi(x)dx+wioptk s.t.(i,k)(A0)aikln(aikaik)).\begin{split}V(A,\bm{w})&=-\sum_{i=1}^{n}\bigg{(}\int\nolimits_{w^{\mathrm{opt}}_{i}}^{w_{i}}p_{i}(x)dx\\ &\qquad+w^{\mathrm{opt}}_{i}\sum\nolimits_{\begin{subarray}{c}k\text{ s.t.}\\ (i,k)\in\mathcal{E}(A_{0})\end{subarray}}a_{ik}^{*}\ln\Big{(}\frac{a_{ik}}{a_{ik}^{*}}\Big{)}\bigg{)}.\end{split} (13)

Then

  1. (i)

    V(A,𝒘)>0V(A,\bm{w})>0 for AAA\neq A^{*} or 𝒘𝒘opt\bm{w}\neq\bm{w}^{\mathrm{opt}}, and

  2. (ii)

    the Lie derivative of VV is

    V˙(A,𝒘)=p(𝒘)(InA)(𝒘𝒘opt).\dot{V}(A,\bm{w})=p(\bm{w})^{\top}(I_{n}-A^{\top})(\bm{w}-\bm{w}^{\mathrm{opt}}). (14)

The first term of the function is the rescaled total utility, tot(𝒘opt)tot(𝒘)=i=1nwioptwipi(x)𝑑x\mathcal{H}_{\textup{tot}}(\bm{w}^{\mathrm{opt}})-\mathcal{H}_{\textup{tot}}(\bm{w})=-\sum_{i=1}^{n}\int_{w^{\mathrm{opt}}_{i}}^{w_{i}}p_{i}(x)dx. The second term, wioptk s.t.(i,k)(A0)aiklnaikaikw^{\mathrm{opt}}_{i}\sum_{\begin{subarray}{c}k\text{ s.t.}\\ (i,k)\in\mathcal{E}(A_{0})\end{subarray}}a_{ik}^{*}\ln\frac{a_{ik}}{a_{ik}^{*}}, is the Kullback-Leibler relative entropy measure [25].

Proof.

By Assumption 1, i=1nwioptwipi(x)𝑑x-\sum_{i=1}^{n}\int_{w^{\mathrm{opt}}_{i}}^{w_{i}}p_{i}(x)dx is convex and attains its minimum value if and only if wi=wioptw_{i}=w^{\mathrm{opt}}_{i} for all ii. Therefore this term is positive definite for 𝒘𝒘opt\bm{w}\neq\bm{w}^{\mathrm{opt}}. Since the function ln()-\ln(\cdot) is strictly convex and k=1naik=1\sum_{k=1}^{n}a_{ik}^{*}=1, Jensen's inequality gives the following lower bound,

k s.t.(i,k)(A0)aikln(aikaik)0,-\sum\nolimits_{\begin{subarray}{c}k\text{ s.t.}\\ (i,k)\in\mathcal{E}(A_{0})\end{subarray}}a_{ik}^{*}\ln\Big{(}\frac{a_{ik}}{a_{ik}^{*}}\Big{)}\geq 0,

where the inequality holds strictly if and only if AAA\neq A^{*}.

For the last statement of the lemma and with the assumption 𝒘opt=vleft(A)\bm{w}^{\mathrm{opt}}=v_{\textup{left}}(A^{*}), the Lie derivative of VV is

V˙(A,𝒘)=p(𝒘)𝒘˙(𝒘opt)(A(A˙A))1n=p(𝒘)𝒘˙(𝒘opt)(A(1np(𝒘)Ap(𝒘)1n))1n.\dot{V}(A,\bm{w})=-p(\bm{w})^{\top}\dot{\bm{w}}-(\bm{w}^{\mathrm{opt}})^{\top}\big{(}A^{*}\odot(\dot{A}\oslash A)\big{)}\mathbbold{1}_{n}\\ =-p(\bm{w})^{\top}\dot{\bm{w}}-(\bm{w}^{\mathrm{opt}})^{\top}\big{(}A^{*}\odot(\mathbbold{1}_{n}p(\bm{w})^{\top}-Ap(\bm{w})\mathbbold{1}_{n}^{\top})\big{)}\mathbbold{1}_{n}.

Then using Hadamard product property (1) and 𝒘opt=vleft(A)\bm{w}^{\mathrm{opt}}=v_{\textup{left}}(A^{*}), V˙(A,𝒘)\dot{V}(A,\bm{w}) further simplifies to

V˙(A,𝒘)=p(𝒘)𝒘˙(𝒘opt)(Adiag(p(𝒘))diag(Ap(𝒘))A)1n=p(𝒘)𝒘˙(𝒘opt)(Ap(𝒘)Ap(𝒘))=p(𝒘)(InA)(𝒘𝒘opt).\dot{V}(A,\bm{w})=-p(\bm{w})^{\top}\dot{\bm{w}}\\ \quad-(\bm{w}^{\mathrm{opt}})^{\top}\big{(}A^{*}\operatorname{diag}(p(\bm{w}))-\operatorname{diag}(Ap(\bm{w}))A^{*}\big{)}\mathbbold{1}_{n}\\ =-p(\bm{w})^{\top}\dot{\bm{w}}-(\bm{w}^{\mathrm{opt}})^{\top}\big{(}A^{*}p(\bm{w})-Ap(\bm{w})\big{)}\\ =p(\bm{w})^{\top}(I_{n}-A^{\top})(\bm{w}-\bm{w}^{\mathrm{opt}}).\qed

The next theorem states convergence results to the optimal workload under various connectivity conditions on the initial appraisal matrix. For donor-controlled work flow, the optimal workload is equal to the eigenvector centrality of the network [5], which is a measure of an individual's importance as a function of the network structure and appraisal values. Therefore the equilibrium workload value quantifies each team member's contribution to the team, and learning the optimal workload reflects the development of a TMS within the team. Note that statement (iii) relies on the assumption that the conjecture given in the statement holds. This conjecture is discussed further at the end of the section, where we provide extensive simulations suggesting that it holds with high probability.

Theorem 10 (Convergence to optimal workload for strongly connected teams).

Consider the ASAP model (2) with donor-controlled work flow (4). Given initial conditions A0A_{0} row-stochastic, irreducible, with strictly positive diagonal and 𝐰0int(Δn)\bm{w}_{0}\in\mathrm{int}(\Delta_{n}), the following statements hold:

  1. (i)

    if n=2n=2 and A0>0A_{0}>0, then limt(A(t),𝒘(t))=(A,𝒘opt)\lim_{t\to\infty}\left(A(t),\bm{w}(t)\right)=(A^{*},\bm{w}^{\mathrm{opt}}) such that A>0A^{*}>0 is row-stochastic and 𝒘opt=vleft(A)\bm{w}^{\mathrm{opt}}=v_{\textup{left}}(A^{*});

  2. (ii)

if there exists ad(0)=[a11(0),,ann(0)]int(Δn)a_{d}(0)=[a_{11}(0),\dots,a_{nn}(0)]^{\top}\in\mathrm{int}(\Delta_{n}) such that A0=1nad(0)A_{0}=\mathbbold{1}_{n}a_{d}(0)^{\top} (so that A0A_{0} has rank 11), then limt(A(t),𝒘(t))=(1n(𝒘opt),𝒘opt)\lim_{t\to\infty}(A(t),\bm{w}(t))=(\mathbbold{1}_{n}(\bm{w}^{\mathrm{opt}})^{\top},\bm{w}^{\mathrm{opt}}).

Moreover, define 𝐯(t)>0n\bm{v}(t)\in_{>0}^{n} as in Theorem 8(iii).

  1. (iii)

    If 𝒗(t)\bm{v}(t) is uniformly bounded for all (A0,𝒘0)(A_{0},\bm{w}_{0}) and t0t\geq 0, then limt(A(t),𝒘(t))=(A,𝒘opt)\lim_{t\to\infty}(A(t),\bm{w}(t))=(A^{*},\bm{w}^{\mathrm{opt}}) such that AA^{*} is row-stochastic, has the same zero/positive pattern as A0A_{0}, and 𝒘opt=vleft(A)\bm{w}^{\mathrm{opt}}=v_{\textup{left}}(A^{*}).

Proof.

Statement (i) follows directly from the fact that the function defined by (13) is a Lyapunov function for the system. For brevity, we omit the details, since the argument is similar to that of statement (ii).

Regarding statement (ii), if A0A_{0} is the rank 11 form given by the theorem assumptions, then cr=1c_{r}=1 for all cycles rΦ(A0)r\in\Phi(A_{0}) by Theorem 8(i). This implies that aij=akja_{ij}=a_{kj} for any jj, all iki\neq k, and t0t\geq 0. For the storage function V(A,𝒘)V(A,\bm{w}) as defined by (13), the Lie derivative (14) simplifies to,

V˙\displaystyle\dot{V} =p(𝒘)(Inad1n)𝒘p(𝒘)(Inad1n)𝒘opt\displaystyle=p(\bm{w})^{\top}(I_{n}-a_{d}\mathbbold{1}_{n}^{\top})\bm{w}-p(\bm{w})^{\top}(I_{n}-a_{d}\mathbbold{1}_{n}^{\top})\bm{w}^{\mathrm{opt}}
=p(𝒘)(𝒘ad𝒘opt+ad)=p(𝒘)(𝒘𝒘opt).\displaystyle=p(\bm{w})^{\top}(\bm{w}-a_{d}-\bm{w}^{\mathrm{opt}}+a_{d})=p(\bm{w})^{\top}(\bm{w}-\bm{w}^{\mathrm{opt}}).

Since 𝒘,𝒘optΔn\bm{w},\bm{w}^{\mathrm{opt}}\in\Delta_{n}, we have p(𝒘)(𝒘𝒘opt)=(p(𝒘)p(𝒘opt))(𝒘𝒘opt)=(p(𝒘)p1n)(𝒘𝒘opt)p(\bm{w})^{\top}(\bm{w}-\bm{w}^{\mathrm{opt}})=(p(\bm{w})-p(\bm{w}^{\mathrm{opt}}))^{\top}(\bm{w}-\bm{w}^{\mathrm{opt}})=(p(\bm{w})-p^{*}\mathbbold{1}_{n})^{\top}(\bm{w}-\bm{w}^{\mathrm{opt}}). Since each pi(wi)p_{i}(w_{i}) is strictly decreasing by Assumption 1 or 2, V˙<0\dot{V}<0 for 𝒘𝒘opt\bm{w}\neq\bm{w}^{\mathrm{opt}}. Hence VV is a Lyapunov function for the rank 11 initial appraisal case and limt(A(t),𝒘(t))=(1n(𝒘opt),𝒘opt)\lim_{t\to\infty}(A(t),\bm{w}(t))=(\mathbbold{1}_{n}(\bm{w}^{\mathrm{opt}})^{\top},\bm{w}^{\mathrm{opt}}).

Regarding statement (iii), we start by considering the equivalent reduced order appraisal dynamics (9) and by proving asymptotic convergence using LaSalle's Invariance Principle. Define the function \bar{V}:\mathbb{R}_{>0}^{n}\times\mathrm{int}(\Delta_{n})\rightarrow\mathbb{R}, a modification of the storage function (13) obtained by replacing the term aijaij\frac{a_{ij}}{a_{ij}^{*}} with viv_{i} for all i,ji,j,

V¯(𝒗,𝒘)\displaystyle\bar{V}(\bm{v},\bm{w}) =i=1n(wioptwipi(x)𝑑x+wioptln(vi)).\displaystyle=-\sum_{i=1}^{n}\bigg{(}\int\nolimits_{w^{\mathrm{opt}}_{i}}^{w_{i}}p_{i}(x)dx+w^{\mathrm{opt}}_{i}\ln(v_{i})\bigg{)}. (15)

The Lie derivative of V¯\bar{V} is

V¯˙\displaystyle\dot{\bar{V}} =p(𝒘)𝒘˙(𝒗˙𝒗)𝒘opt\displaystyle=-p(\bm{w})^{\top}\dot{\bm{w}}-(\dot{\bm{v}}\oslash\bm{v})^{\top}\bm{w}^{\mathrm{opt}}
=p(𝒘)(InA)𝒘(p(𝒘)p(𝒘)A𝒘1n)𝒘opt\displaystyle=p(\bm{w})^{\top}(I_{n}-A^{\top})\bm{w}-\big{(}p(\bm{w})-p(\bm{w})^{\top}A^{\top}\bm{w}\mathbbold{1}_{n}\big{)}^{\top}\bm{w}^{\mathrm{opt}}
=p(𝒘)(𝒘𝒘opt)0.\displaystyle=p(\bm{w})^{\top}(\bm{w}-\bm{w}^{\mathrm{opt}})\leq 0.

We can now define the sublevel set \Omega=\{\bm{v}\in\mathbb{R}_{>0}^{n},\bm{w}\in\mathrm{int}(\Delta_{n})\;|\;\bar{V}(\bm{v},\bm{w})\leq\bar{V}(\bm{v}_{0},\bm{w}_{0})\}, which is closed and positively invariant. Note that if there exists any ii such that limtvi=0\lim_{t\to\infty}v_{i}=0, then limtV¯()=\lim_{t\to\infty}\bar{V}(\cdot)=\infty. However, V¯˙0\dot{\bar{V}}\leq 0 and V¯(𝒗0,𝒘0)\bar{V}(\bm{v}_{0},\bm{w}_{0}) is finite, so 𝒗(t)\bm{v}(t) must be bounded away from zero by a positive value for t0t\geq 0. By our assumption, 𝒗(t)\bm{v}(t) is also upper bounded. Then there exist constants vmin,vmax>0v_{\min},v_{\max}>0 such that 𝒗[vmin,vmax]n\bm{v}\in[v_{\min},v_{\max}]^{n}. By LaSalle's Invariance Principle, the trajectories must converge to the largest invariant set contained in the intersection of

{𝒗[vmin,vmax]n,𝒘int(Δn)|V¯˙=0}Ω.\{\bm{v}\in[v_{\min},v_{\max}]^{n},\bm{w}\in\mathrm{int}(\Delta_{n})\;|\;\dot{\bar{V}}=0\}\operatorname{\cap}\Omega.

By Theorem 5, if V¯˙=0\dot{\bar{V}}=0, then w=𝒘optw=\bm{w}^{\mathrm{opt}} and p(𝒘opt)=p1np(\bm{w}^{\mathrm{opt}})=p^{*}\mathbbold{1}_{n}. This implies 𝒗˙=diag(𝒗)(p(𝒘opt)p1n)=0\dot{\bm{v}}=\operatorname{diag}(\bm{v})(p(\bm{w}^{\mathrm{opt}})-p^{*}\mathbbold{1}_{n})=0, so 𝒗=𝒗>0\bm{v}=\bm{v}^{*}>0. By Theorem 8(iv), (𝒗,𝒘opt)(\bm{v}^{*},\bm{w}^{\mathrm{opt}}) corresponds to equilibrium (A,𝒘opt)(A^{*},\bm{w}^{\mathrm{opt}}). Therefore limt(𝒗(t),𝒘(t))=(𝒗,𝒘opt)\lim_{t\to\infty}(\bm{v}(t),\bm{w}(t))=(\bm{v}^{*},\bm{w}^{\mathrm{opt}}) is equivalent to limt(A(t),𝒘(t))=(A,𝒘opt)\lim_{t\to\infty}(A(t),\bm{w}(t))=(A^{*},\bm{w}^{\mathrm{opt}}) such that 𝒘opt=vleft(A)\bm{w}^{\mathrm{opt}}=v_{\textup{left}}(A^{*}) and A=𝒜(𝒗)A^{*}=\mathcal{A}(\bm{v}^{*}), where AA^{*} and A0A_{0} have the same zero/positive pattern. ∎
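To complement the LaSalle argument with a concrete run, the sketch below simulates a four-member team whose appraisal graph is a directed ring (strongly connected, strictly positive diagonal) and checks that the performances p_i(w_i(t)) equalize, which by the optimality characterization of Section II identifies the optimal workload. The dynamics and the performance functions p_i(w_i) = (s_i/w_i)^{gamma_i} are the same illustrative assumptions as in the earlier Euler sketch, not the authors' MATLAB code.

```python
# Euler sketch: a strongly connected 4-member ring team under the
# donor-controlled flow; performances should equalize, identifying w^opt.
# The topology and all numbers below are illustrative assumptions.

n = 4
A = [[0.4, 0.6, 0.0, 0.0],
     [0.0, 0.5, 0.5, 0.0],
     [0.0, 0.0, 0.6, 0.4],
     [0.5, 0.0, 0.0, 0.5]]     # irreducible ring, positive diagonal
w = [0.25] * n
s, g = [0.1, 0.2, 0.3, 0.4], [0.9, 0.8, 0.7, 0.6]

def spread(w):
    p = [(s[i] / w[i]) ** g[i] for i in range(n)]
    return max(p) - min(p)

dt = 2e-3
for _ in range(200000):                             # integrate up to t = 400
    p = [(s[i] / w[i]) ** g[i] for i in range(n)]
    pbar = [sum(A[i][k] * p[k] for k in range(n)) for i in range(n)]
    newA = [[A[i][j] * (1 + dt * (p[j] - pbar[i])) for j in range(n)]
            for i in range(n)]
    w = [w[i] + dt * (sum(A[k][i] * w[k] for k in range(n)) - w[i])
         for i in range(n)]
    A = newA

print(spread(w))   # near zero: the team has learned an optimal assignment
```

Note that the multiplicative appraisal update also preserves the zero/positive pattern of A_0 exactly, as required by the theorem.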

Theorem 10(iii) establishes asymptotic convergence from all initial conditions of interest under the assumption that the trajectory 𝒗(t)\bm{v}(t) is uniformly bounded. Throughout our numerical simulation studies, we have empirically observed that this assumption has always been satisfied. We now present a Monte Carlo analysis [21] to estimate the probability that this uniform boundedness assumption holds.

For any randomly generated pair (A0,𝒘0)(A_{0},\bm{w}_{0}), which corresponds to 𝒗0=1n\bm{v}_{0}=\mathbbold{1}_{n}, define the indicator function \mathds{I}:\mathbb{R}_{\geq 0}^{n\times n}\times\mathrm{int}(\Delta_{n})\rightarrow\{0,1\} as

  1. (i)

    𝕀(A0,𝒘0)=1\mathds{I}(A_{0},\bm{w}_{0})=1 if there exists vmaxv_{\max} such that 𝒗(t)vmax1n\bm{v}(t)\leq v_{\max}\mathbbold{1}_{n} for all t[0,1000]t\in[0,1000];

  2. (ii)

    𝕀(A0,𝒘0)=0\mathds{I}(A_{0},\bm{w}_{0})=0, otherwise.

Let p=[𝕀(A0,𝒘0)=1]p=\mathds{P}[\mathds{I}(A_{0},\bm{w}_{0})=1]. We estimate pp as follows. We generate NN\in\mathbb{N} independent identically distributed random sample pairs, (A0(i),𝒘0(i))(A_{0}^{(i)},\bm{w}_{0}^{(i)}) for i{1,,N}i\in\{1,\dots,N\}, where A0(i)[0,1]n×nA_{0}^{(i)}\in[0,1]^{n\times n} is row-stochastic, irreducible, with strictly positive diagonal and 𝒘0(i)int(Δn)\bm{w}_{0}^{(i)}\in\mathrm{int}(\Delta_{n}).

Finally, we define the empirical probability as

p^N=1Ni=1N𝕀(A0(i),𝒘0(i)).\hat{p}_{N}=\frac{1}{N}\sum_{i=1}^{N}\mathds{I}(A_{0}^{(i)},\bm{w}_{0}^{(i)}).

For any accuracy 1ϵ(0,1)1-\epsilon\in(0,1) and confidence level 1ξ(0,1)1-\xi\in(0,1), the Chernoff Bound [21, Equation 9.14] guarantees |p^p|<ϵ|\hat{p}-p|<\epsilon with probability greater than 1ξ1-\xi if

N12ϵ2log2ξ.N\geq\frac{1}{2\epsilon^{2}}\log\frac{2}{\xi}. (16)

For ϵ=ξ=0.01\epsilon=\xi=0.01, the Chernoff bound (16) is satisfied by N=27 000N=27\,000.
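The minimal sample size in (16) is a one-line computation; the helper name chernoff_samples below is ours, not the paper's.

```python
import math

# Sample-size computation for the Chernoff bound (16).

def chernoff_samples(eps, xi):
    """Smallest integer N with N >= log(2/xi) / (2 eps^2)."""
    return math.ceil(math.log(2.0 / xi) / (2.0 * eps ** 2))

N_min = chernoff_samples(0.01, 0.01)
print(N_min)   # 26492, so N = 27000 samples suffice for eps = xi = 0.01
```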

Our simulation setup is as follows. We run 27 00027\,000 independent MATLAB simulations of the ASAP model (2) with donor-controlled work flow (4). We consider n=6n=6, A0A_{0} irreducible with strictly positive diagonal generated using the Erdős-Rényi random graph model with edge connectivity probability 0.30.3, and performance functions of the form pi(wi)=(siwi)γip_{i}(w_{i})=(\frac{s_{i}}{w_{i}})^{\gamma_{i}} for γi(0,1)\gamma_{i}\in(0,1) and [s1,,sn]int(Δn)[s_{1},\dots,s_{n}]^{\top}\in\mathrm{int}(\Delta_{n}). We find that p^N=1\hat{p}_{N}=1. Therefore, we can make the following statement.
Consider (i) n=6n=6; (ii) A0A_{0} irreducible with strictly positive diagonal, generated by the Erdős-Rényi random graph model with edge connectivity probability 0.30.3, and randomly generated edge weights normalized to be row-stochastic; and (iii) 𝐰0int(Δn)\bm{w}_{0}\in\mathrm{int}(\Delta_{n}). Then with 99%99\% confidence level, the probability that 𝐯(t)\left\|{\bm{v}(t)}\right\| is uniformly upper bounded for t[0,1000]t\in[0,1000] is at least 0.990.99.
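One plausible way to generate such sample pairs (the paper uses MATLAB; the rejection loop, seed, and helper names in this Python sketch are illustrative assumptions) is to draw an Erdős-Rényi digraph, force the self-loops, normalize random edge weights row-wise, and resample until the matrix is irreducible, i.e. until its digraph is strongly connected:

```python
import random

# Illustrative generator for the random initial appraisal matrices of the
# Monte Carlo study: Erdos-Renyi digraph, forced self-loops, row-stochastic
# random weights, resampled until irreducible.

def irreducible(A):
    # A is irreducible iff its digraph is strongly connected:
    # every node must reach every other node.
    n = len(A)
    for src in range(n):
        seen, stack = {src}, [src]
        while stack:
            i = stack.pop()
            for j in range(n):
                if A[i][j] > 0 and j not in seen:
                    seen.add(j)
                    stack.append(j)
        if len(seen) < n:
            return False
    return True

def random_A0(n=6, p_edge=0.3, rng=random.Random(0)):
    while True:
        A = [[rng.random() if (i == j or rng.random() < p_edge) else 0.0
              for j in range(n)] for i in range(n)]
        rows = [sum(row) for row in A]
        A = [[A[i][j] / rows[i] for j in range(n)] for i in range(n)]
        if irreducible(A):
            return A

A0 = random_A0()
print(all(abs(sum(row) - 1) < 1e-12 for row in A0))  # row-stochastic
print(all(A0[i][i] > 0 for i in range(6)))           # positive diagonal
```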

V Stability Analysis for the ASAP Model with Average-Appraisal Work Flow

This section investigates the asymptotic behavior of the ASAP model (2) with average-appraisal work flow (5). In contrast with the eigenvector centrality model, we observe that strongly connected teams obeying this work flow model are not always able to learn their optimal work assignment. First we give a necessary condition on the initial appraisal matrix and optimal work assignment for convergence to the optimal team performance. Second, we prove that learning the optimal work assignment can be guaranteed if the team has a complete network topology or if the collective team performance is optimized by an equally distributed workload. Note that the results in Sections II-III also hold for average-appraisal work flow, provided that the equilibrium satisfies 𝒘opt=1n(A)1n\bm{w}^{\mathrm{opt}}=\frac{1}{n}(A^{*})^{\top}\mathbbold{1}_{n}.

Let x\lceil x\rceil denote the ceiling function, which rounds each element of xx up to the nearest integer. The following lemma gives a condition under which the team is unable to learn the optimal workload assignment.

Lemma 11 (Condition for failure to learn optimal work assignment for the degree centrality model).

Consider the ASAP model (2) with average-appraisal work flow (5). Assume A0A_{0} row-stochastic and 𝐰0int(Δn)\bm{w}_{0}\in\mathrm{int}(\Delta_{n}). If there exists at least one i{1,,n}i\in\{1,\dots,n\} such that wiopt>max{1nk=1naki(0),wi(0)}w^{\mathrm{opt}}_{i}>\max\{\frac{1}{n}\sum_{k=1}^{n}\lceil a_{ki}(0)\rceil,w_{i}(0)\}, then 𝐰(t)𝐰opt\bm{w}(t)\neq\bm{w}^{\mathrm{opt}} for all t0t\geq 0.

Proof.

By the Grönwall-Bellman Comparison Lemma, w˙iwi+1nk=1naki(0)\dot{w}_{i}\leq-w_{i}+\frac{1}{n}\sum_{k=1}^{n}\lceil a_{ki}(0)\rceil implies that

w_{i}(t)\leq w_{i}(0)e^{-t}+\frac{1}{n}\sum\nolimits_{k=1}^{n}\lceil a_{ki}(0)\rceil(1-e^{-t})\leq\max\Big\{\frac{1}{n}\sum\nolimits_{k=1}^{n}\lceil a_{ki}(0)\rceil,w_{i}(0)\Big\}.

Therefore if there exists at least one ii such that wiopt>max{1nk=1naki(0),wi(0)}w^{\mathrm{opt}}_{i}>\max\{\frac{1}{n}\sum_{k=1}^{n}\lceil a_{ki}(0)\rceil,w_{i}(0)\}, then wi(t)wioptw_{i}(t)\neq w^{\mathrm{opt}}_{i} for all t0t\geq 0. ∎

This sufficient condition for failure to learn the optimal workload can also be stated as a necessary condition for learning the optimal workload: if limt𝒘(t)=𝒘opt\lim_{t\to\infty}\bm{w}(t)=\bm{w}^{\mathrm{opt}}, then wioptmax{1nk=1naki(0),wi(0)}w^{\mathrm{opt}}_{i}\leq\max\{\frac{1}{n}\sum_{k=1}^{n}\lceil a_{ki}(0)\rceil,w_{i}(0)\} for all ii.
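The bound in Lemma 11 is easy to evaluate: (1/n) Σ_k ⌈a_ki(0)⌉ is simply the fraction of team members (self included) who hold a positive initial appraisal of member i. In the sketch below the appraisal matrix, initial workload, and candidate optimal assignment are made-up numbers used only for illustration.

```python
import math

# Evaluating the Lemma 11 bound: member i can never carry more than
# max{(1/n) sum_k ceil(a_ki(0)), w_i(0)} of the workload under the
# average-appraisal flow. All numbers below are illustrative.

A0 = [[0.7, 0.3, 0.0],
      [0.0, 0.8, 0.2],
      [0.6, 0.0, 0.4]]
w0 = [0.3, 0.4, 0.3]
n = len(A0)

bounds = [max(sum(math.ceil(A0[k][i]) for k in range(n)) / n, w0[i])
          for i in range(n)]
print(bounds)   # each member's ceiling on any reachable workload: all 2/3

# A candidate optimum exceeding a bound can never be learned: here member 3
# would need 0.7 > 2/3 of the work, so learning w_opt must fail.
w_opt = [0.2, 0.1, 0.7]
print([w_opt[i] > bounds[i] for i in range(n)])   # [False, False, True]
```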

While the average-appraisal work flow does not guarantee convergence to the optimal equilibrium for strongly connected teams under general initial conditions, the following lemma describes two cases in which learning the optimal workload is guaranteed.

Lemma 12 (Convergence to optimal workload for average-appraisal work flow).

Consider the ASAP model (2) with average-appraisal work flow (5). The following statements hold.

  1. (i) If $A_{0}$ is row-stochastic and irreducible with strictly positive diagonal, $\bm{w}(0)\in\mathrm{int}(\Delta_{n})$, and $\bm{w}^{\mathrm{opt}}=\frac{1}{n}\mathbbold{1}_{n}$, then $\lim_{t\to\infty}(A(t),\bm{w}(t))=(A^{*},\frac{1}{n}\mathbbold{1}_{n})$, where $A^{*}$ has the same zero/positive pattern as $A_{0}$ and is doubly stochastic, so that $\frac{1}{n}(A^{*})^{\top}\mathbbold{1}_{n}=\frac{1}{n}\mathbbold{1}_{n}$;

  2. (ii) if $A_{0}>0$ is row-stochastic and $\bm{w}(0)\in\mathrm{int}(\Delta_{n})$, then $\lim_{t\to\infty}(A(t),\bm{w}(t))=(A^{*},\bm{w}^{\mathrm{opt}})$, where $A^{*}>0$ and $\bm{w}^{\mathrm{opt}}=\frac{1}{n}(A^{*})^{\top}\mathbbold{1}_{n}$.

Proof.

Regarding statement (i), the storage function from (13) is a Lyapunov function for the given dynamics under the assumption $\bm{w}^{\mathrm{opt}}=\frac{1}{n}\mathbbold{1}_{n}=\frac{1}{n}(A^{*})^{\top}\mathbbold{1}_{n}$. The Lie derivative $\dot{V}$ is

$$\begin{aligned}\dot{V}(A,\bm{w})&=-p(\bm{w})^{\top}\dot{\bm{w}}-(\bm{w}^{\mathrm{opt}})^{\top}(A^{*}-A)p(\bm{w})\\ &=p(\bm{w})^{\top}\Big(\bm{w}-\frac{1}{n}A^{\top}\mathbbold{1}_{n}-\frac{1}{n}(A^{*})^{\top}\mathbbold{1}_{n}+\frac{1}{n}A^{\top}\mathbbold{1}_{n}\Big)\\ &=p(\bm{w})^{\top}(\bm{w}-\bm{w}^{\mathrm{opt}})\leq 0.\end{aligned}$$

By Lemma 9, $\dot{V}=0$ if and only if $\bm{w}=\bm{w}^{\mathrm{opt}}=\frac{1}{n}\mathbbold{1}_{n}$ and $A=A^{*}$ with $\frac{1}{n}(A^{*})^{\top}\mathbbold{1}_{n}=\frac{1}{n}\mathbbold{1}_{n}$. Therefore $\lim_{t\to\infty}(A(t),\bm{w}(t))=(A^{*},\frac{1}{n}\mathbbold{1}_{n})$, where $A_{0}$ and $A^{*}$ have the same zero/positive pattern.

Regarding statement (ii), consider the reduced-order dynamics (9), and write $\tilde{p}(\bm{v},\bm{w})=\bm{w}^{\top}\mathcal{A}(\bm{v})p(\bm{w})$ for short. Define the function $\bar{V}:\mathbb{R}^{n}_{>0}\times\mathrm{int}(\Delta_{n})\rightarrow\mathbb{R}$ by

$$\bar{V}(\bm{v},\bm{w})=\sum_{i=1}^{n}\bigg(-\int_{w^{\mathrm{opt}}_{i}}^{w_{i}}p_{i}(x)\,dx-w^{\mathrm{opt}}_{i}\ln(v_{i})+\frac{1}{n}\ln\Big(\sum\nolimits_{k=1}^{n}a_{ik}(0)v_{k}\Big)\bigg).$$

First, we show that $\bar{V}$ is bounded below. Second, we show that $\bar{V}$ is monotonically decreasing whenever $\bm{w}\neq\bm{w}^{\mathrm{opt}}$. Together, these two facts yield convergence to an optimal equilibrium.

Let $a_{\min}=\min_{i,j}\{a_{ij}(0)\}$, which is strictly positive since $A_{0}>0$. From the proof of Lemma 9, $-\int_{w^{\mathrm{opt}}_{i}}^{w_{i}}p_{i}(x)\,dx\geq 0$ for all $i$. Then $\bar{V}$ is bounded below by

$$\bar{V}\geq-\sum_{i=1}^{n}\bigg(w^{\mathrm{opt}}_{i}\ln(v_{i})+\frac{1}{n}\ln\Big(\frac{1}{a_{\min}\|\bm{v}\|_{1}}\Big)\bigg)\geq\ln(a_{\min})-\sum_{i=1}^{n}w^{\mathrm{opt}}_{i}\ln\Big(\frac{v_{i}}{\|\bm{v}\|_{1}}\Big)\geq\ln(a_{\min}).$$

Now we show that $\dot{\bar{V}}\leq 0$. Define the function $\bm{u}:\mathbb{R}^{n}_{>0}\rightarrow\mathbb{R}^{n}_{>0}$ by $\bm{u}(\bm{v})=\operatorname{diag}(A_{0}\bm{v})^{-1}\mathbbold{1}_{n}$, which reads element-wise as $u_{i}(\bm{v})=\big(\sum\nolimits_{k=1}^{n}a_{ik}(0)v_{k}\big)^{-1}$. Using $A(t)=\mathcal{A}(\bm{v}(t))$ as in (10), the rate of change of $\bm{u}$ is given by

$$\dot{\bm{u}}=-\operatorname{diag}(\bm{u})^{2}A_{0}\dot{\bm{v}}=-\operatorname{diag}(\bm{u})\big(Ap(\bm{w})-\tilde{p}(\bm{v},\bm{w})\mathbbold{1}_{n}\big).$$

Substituting $\bm{u}$ into $\bar{V}$, the Lie derivative of $\bar{V}$ is

$$\begin{aligned}\dot{\bar{V}}(\bm{v},\bm{w})&=-p(\bm{w})^{\top}\dot{\bm{w}}-(\dot{\bm{v}}\oslash\bm{v})^{\top}\bm{w}^{\mathrm{opt}}-\frac{1}{n}(\dot{\bm{u}}\oslash\bm{u})^{\top}\mathbbold{1}_{n}\\ &=-p(\bm{w})^{\top}\Big(-\bm{w}+\frac{1}{n}A^{\top}\mathbbold{1}_{n}\Big)-\big(p(\bm{w})-\tilde{p}(\bm{v},\bm{w})\mathbbold{1}_{n}\big)^{\top}\bm{w}^{\mathrm{opt}}-\frac{1}{n}\big(-Ap(\bm{w})+\tilde{p}(\bm{v},\bm{w})\mathbbold{1}_{n}\big)^{\top}\mathbbold{1}_{n}\\ &=p(\bm{w})^{\top}\big(\bm{w}-\bm{w}^{\mathrm{opt}}\big)+\tilde{p}(\bm{v},\bm{w})\Big(\mathbbold{1}_{n}^{\top}\bm{w}^{\mathrm{opt}}-\frac{1}{n}\mathbbold{1}_{n}^{\top}\mathbbold{1}_{n}\Big)\\ &=p(\bm{w})^{\top}(\bm{w}-\bm{w}^{\mathrm{opt}})\leq 0.\end{aligned}$$

Since $\dot{\bar{V}}\leq 0$, we have $\bar{V}(\bm{v},\bm{w})\leq\bar{V}(\bm{v}_{0},\bm{w}_{0})<\infty$ along trajectories, and we conclude that there exists a strictly positive constant $v_{\min}>0$ such that $\bm{v}\geq v_{\min}\mathbbold{1}_{n}$.

Note that $\dot{\bar{V}}=0$ if and only if $\bm{w}=\bm{w}^{\mathrm{opt}}$ by Lemma 9. Because $\bar{V}$ has a finite lower bound and is monotonically decreasing whenever $\bm{w}\neq\bm{w}^{\mathrm{opt}}$, as $t\to\infty$ the trajectory converges to the set where $\bm{w}=\bm{w}^{\mathrm{opt}}$. In turn, $\bm{w}=\bm{w}^{\mathrm{opt}}$ implies $\dot{\bm{w}}=0$ and $\dot{\bm{v}}=0$. Therefore $\lim_{t\to\infty}(\bm{v},\bm{w})=(\bm{v}^{*},\bm{w}^{\mathrm{opt}})$ with $\bm{w}^{\mathrm{opt}}=\frac{1}{n}\mathcal{A}(\bm{v}^{*})^{\top}\mathbbold{1}_{n}=\frac{1}{n}(A^{*})^{\top}\mathbbold{1}_{n}$. ∎
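Statement (ii) can be illustrated with a minimal numerical sketch of the reduced-order dynamics (9)-(10), using a strictly positive, row-stochastic $A_{0}$ and the illustrative performance functions $p_{i}(w_{i})=(s_{i}/w_{i})^{\gamma}$ introduced in Section VI (for which the $p_{i}$ are equalized at $w^{\mathrm{opt}}_{i}\propto s_{i}$); all numerical values are our own choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma = 4, 0.5
s = np.array([0.1, 0.2, 0.3, 0.4])       # skills; w_opt = s (already sums to 1)
A0 = rng.random((n, n)) + 0.1            # strictly positive entries
A0 = A0 / A0.sum(axis=1, keepdims=True)  # row-stochastic, A0 > 0

w = np.full(n, 1 / n)                    # w(0) in the interior of the simplex
v = np.ones(n)
p = lambda w: (s / w) ** gamma

dt = 0.01
for _ in range(30000):  # Euler integration up to t = 300
    A = A0 * v / (A0 @ v)[:, None]            # A(t) = diag(A0 v)^-1 A0 diag(v)
    w = w + dt * (-w + A.T @ np.ones(n) / n)  # average-appraisal work flow (5)
    v = v * (1 + dt * (p(w) - w @ A @ p(w)))  # reduced appraisal dynamics
    v = v / v.sum()                           # rescale; A depends on ratios only

# Lemma 12(ii): (A, w) converges to (A*, w_opt) with w_opt = (1/n) A*^T 1_n.
print(np.round(w, 3))                     # approaches w_opt = s
print(np.round(A.T @ np.ones(n) / n, 3))  # approaches w_opt as well
```

The rescaling step exploits the fact that $\mathcal{A}(\bm{v})$ depends only on the ratios of the entries of $\bm{v}$, so normalizing $\bm{v}$ avoids numerical over- or underflow without changing $A(t)$ or the workload trajectory.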

VI Numerical Simulations

In this section, we use numerical simulations to illustrate cases in which teams governed by the ASAP model succeed or fail at optimizing their collective performance.

For all simulations in this section, we consider performance functions of the form $p_{i}(w_{i})=(s_{i}/w_{i})^{\gamma}$ with $\gamma\in(0,1)$ for all $i$, which satisfy Assumptions 1-2. With this choice, the same optimal workload maximizes every collective team performance measure we have introduced.
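For instance, taking the collective performance to be the weighted sum $\mathcal{H}(\bm{w})=\sum_{i}w_{i}\,p_{i}(w_{i})=\sum_{i}s_{i}^{\gamma}w_{i}^{1-\gamma}$ (one of several possible measures; we use it here purely for illustration), concavity and the first-order conditions on the simplex give $w^{\mathrm{opt}}_{i}\propto s_{i}$. A quick numerical check of this maximizer:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, s = 0.5, np.array([1.0, 2.0, 3.0])  # illustrative skill levels

# Illustrative collective performance: H(w) = sum_i w_i * (s_i / w_i)^gamma.
H = lambda w: np.sum(s ** gamma * w ** (1 - gamma))

w_opt = s / s.sum()  # candidate maximizer: workload proportional to skill

# H is strictly concave on the simplex, so w_opt should dominate any sample.
samples = rng.dirichlet(np.ones(3), size=10_000)
print(all(H(w_opt) >= H(x) for x in samples))  # True
```

Setting $\partial\mathcal{H}/\partial w_{i}=(1-\gamma)s_{i}^{\gamma}w_{i}^{-\gamma}$ equal across $i$ forces $s_{i}/w_{i}$ to be constant, which is exactly the proportionality above.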

First, we provide an example of a team with a strongly connected appraisal network and strictly positive self-appraisal weights, i.e., satisfying the assumptions of Theorem 10(iii), illustrating a case where the team learns the optimal work assignment. Figure 3 shows the evolution of the appraisal network and work assignment of the ASAP model (2) with donor-controlled work flow (4).

[Figure 3 panels: $\bm{w}^{\mathrm{opt}}$, $\bm{w}(t)$, and $A(t)$; images omitted]
Figure 3: Evolution of $\bm{w}(t)$ and $A(t)$ obeying the ASAP model (2) with donor-controlled work flow (4). For the work assignment vector, darker entries indicate higher values; for the appraisal matrix, thicker edges indicate higher appraisal weights. The team's initial appraisal network is strongly connected with strictly positive self-appraisals, and is an example of a team that successfully learns the work assignment maximizing the collective team performance. The plots are at times $t=\{0,1,10,1000\}$, from left to right.

VI-A Distributed optimization illustrated with switching team members

Next, we consider another example of the ASAP model (2) with donor-controlled work flow (4), in which individuals switch in and out of the team. Because the model is both distributed and decentralized, only the affected neighboring individuals need to be aware of the addition or removal of a team member. In this example, when individual $j$ is added to the team as a neighbor of individual $i$, $i$ allocates a portion of their work assignment to the new individual $j$. Similarly, if individual $j$ is removed, then $j$'s neighbors absorb $j$'s workload. Let $k=1$, $k=2$, and $k=3$ denote the subteams over the time intervals $t\in[0,5)$, $t\in[5,15)$, and $t\in[15,\infty)$, respectively, and let $\mathcal{H}_{\textup{tot}}^{(k)}$ denote the collective performance of the $k$th subteam. Figure 4 shows the appraisal network topologies of each subteam and the evolution of the workload $\bm{w}(t)$ and the normalized collective team performance $\mathcal{H}_{\textup{tot}}^{(k)}$.
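The bookkeeping for adding or removing a member is purely local and can be sketched as splice operations on $(A,\bm{w})$. In the sketch below, the transferred fraction, the appraisal split, and the row renormalization are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def add_member(A, w, i, frac=0.5):
    """New member j joins as a neighbor of i; i hands over a fraction of its
    workload and splits its self-appraisal with j (illustrative rule)."""
    n = A.shape[0]
    A_new = np.zeros((n + 1, n + 1))
    A_new[:n, :n] = A
    A_new[i, i], A_new[i, n] = A[i, i] / 2, A[i, i] / 2  # i now also appraises j
    A_new[n, i] = A_new[n, n] = 0.5                      # j appraises i and itself
    w_new = np.append(w, frac * w[i])
    w_new[i] *= 1 - frac
    return A_new, w_new

def remove_member(A, w, j):
    """Member j leaves; j's in-neighbors absorb j's workload equally, and rows
    are renormalized to stay row-stochastic (illustrative rule; assumes every
    remaining member appraises someone besides j)."""
    nbrs = [k for k in range(len(w)) if k != j and A[k, j] > 0]
    w_new = np.delete(w, j)
    A_new = np.delete(np.delete(A, j, axis=0), j, axis=1)
    A_new = A_new / A_new.sum(axis=1, keepdims=True)
    for k in nbrs:
        w_new[k if k < j else k - 1] += w[j] / len(nbrs)
    return A_new, w_new

A = np.full((3, 3), 1 / 3)
w = np.array([0.3, 0.3, 0.4])
A, w = add_member(A, w, i=2)     # a 4th member joins next to member 3 (index 2)
assert abs(w.sum() - 1) < 1e-12 and np.allclose(A.sum(axis=1), 1)
A, w = remove_member(A, w, j=0)  # member 1 (index 0) leaves
assert abs(w.sum() - 1) < 1e-12 and np.allclose(A.sum(axis=1), 1)
```

Both operations preserve the two invariants the model relies on: the workload vector stays on the simplex and the appraisal matrix stays row-stochastic.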

Figure 4: Evolution of the ASAP model (2) with donor-controlled work flow (4) as individuals are added to and removed from the team. From top to bottom, the digraphs depict the team topology for $t\in[0,5)$, $t\in[5,15)$, and $t\in[15,\infty)$. At $t=10$, individual $4$ (red diamond) is added to the team, and individual $3$ gives a portion of their work to individual $4$. At $t=20$, individual $1$ (black triangle) is removed from the team, and $1$'s work assignment is given to individual $2$.

VI-B Failure to learn

Partial observation of performance feedback does not guarantee learning optimal work assignment

Partial observation occurs when the appraisal network lacks the desired strong connectivity, so that team members receive insufficient feedback to determine their optimal work assignment. We consider an example of the ASAP model (2) with donor-controlled work flow (4) and a reducible initial appraisal network $A_{0}$. Figure 5(a) illustrates how some appraisal weights between neighboring individuals approach zero asymptotically, leaving the team unable to learn the work distribution that maximizes the collective team performance.

[Figure 5(a) panels: $\bm{w}^{\mathrm{opt}}$, $\bm{w}(t)$, and $A(t)$; images omitted]
(a) Evolution of $\bm{w}(t)$ and $A(t)$ obeying the ASAP model (2) with donor-controlled work flow (4) and $A_{0}$ weakly connected.
[Figure 5(b) panels: $\bm{w}^{\mathrm{opt}}$, $\bm{w}(t)$, and $A(t)$; images omitted]
(b) Evolution of $\bm{w}(t)$ and $A(t)$ obeying the ASAP model (2) with average-appraisal work flow (5). $A_{0}$ is strongly connected, and $\bm{w}^{\mathrm{opt}}$, $\bm{w}_{0}$, and $A_{0}$ satisfy the sufficient condition for failure to learn the optimal workload given in Lemma 11.
Figure 5: Examples of teams unable to learn the work assignment that maximizes the collective team performance. For the work assignment vector, darker entries indicate higher values; for the appraisal matrix, thicker edges indicate higher appraisal weights. The plots are at times $t=\{0,1,10,1000\}$, from left to right.

Average-appraisal feedback limits direct cooperation

Figure 5(b) shows a team obeying the ASAP model (2) with average-appraisal work flow (5). Even when a team does not satisfy the sufficient condition for failure from Lemma 11, individuals adjusting their work assignment with only their average appraisal as input may still fail to learn the workload that maximizes team performance.

VII Conclusion

This paper proposes novel models for the evolution of interpersonal appraisals and the assignment of workload in a team of individuals engaged in a sequence of tasks. We propose appraisal networks as a mathematical multi-agent model for the applied psychological concept of transactive memory systems (TMS). For two natural models of workload assignment, we establish conditions under which a correct TMS develops and allows the team to achieve the optimal workload assignment and optimal performance. Our two proposed workload assignment mechanisms feature different degrees of coordination among team members: the donor-controlled work flow model requires a higher level of coordination than the average-appraisal work flow and, as a result, achieves optimal behavior under weaker requirements on the initial appraisal matrix.

Possible future research directions include studying the team's behavior when individuals update their appraisals and work assignments asynchronously; such updates could be modeled using an additional contact network with switching topology. Another direction is to determine whether one can predict which appraisal weights in a weakly connected network approach zero asymptotically, using only the initial work distribution and appraisal values.

VIII Code Availability

The source code is publicly available at https://github.com/eyhuang66/assign-appraise-dynamics-of-teams.

Elizabeth Y. Huang received the B.S. degree in mechanical engineering from the University of California, San Diego, USA in 2016. She is currently working toward her Ph.D. in mechanical engineering from the University of California, Santa Barbara, USA. Her research interests include the application of control and algebraic graph theoretical tools for the study of networks of multi-agent systems, such as social networks, evolutionary dynamics, and power systems.
Dario Paccagnan is a Postdoctoral Fellow with the Mechanical Engineering Department and the Center for Control, Dynamical Systems and Computation, University of California, Santa Barbara. In 2018, Dario obtained a Ph.D. degree from the Information Technology and Electrical Engineering Department, ETH Zürich, Switzerland. He received a B.Sc. and M.Sc. in Aerospace Engineering in 2011 and 2014 from the University of Padova, Italy, and a M.Sc. in Mathematical Modelling and Computation from the Technical University of Denmark in 2014; all with Honors. Dario was a visiting scholar at the University of California, Santa Barbara in 2017, and at Imperial College of London, in 2014. His interests are at the interface between control theory and game theory, with a focus on the design of behavior-influencing mechanisms for socio-technical systems. Applications include multiagent systems and smart cities. Dr. Paccagnan was awarded the ETH medal, and is recipient of the SNSF fellowship for his work in Distributed Optimization and Game Design.
Wenjun Mei is a postdoctoral researcher in the Automatic Control Laboratory at ETH, Zurich. He received the Bachelor of Science degree in Theoretical and Applied Mechanics from Peking University in 2011 and the Ph.D degree in Mechanical Engineering from University of California, Santa Barbara, in 2017. He is on the editorial board of the Journal of Mathematical Sociology. His current research interests focus on network multi-agent systems, including social, economic and engineering networks, population games and evolutionary dynamics, network games and optimization.
Francesco Bullo (S'95-M'99-SM'03-F'10) is a Professor with the Mechanical Engineering Department and the Center for Control, Dynamical Systems and Computation at the University of California, Santa Barbara. He was previously associated with the University of Padova (Laurea degree in Electrical Engineering, 1994), the California Institute of Technology (Ph.D. degree in Control and Dynamical Systems, 1999), and the University of Illinois. He served on the editorial boards of IEEE, SIAM, and ESAIM journals and as IEEE CSS President. His research interests focus on network systems and distributed control with application to robotic coordination, power grids and social networks. He is the coauthor of "Geometric Control of Mechanical Systems" (Springer, 2004), "Distributed Control of Robotic Networks" (Princeton, 2009), and "Lectures on Network Systems" (Kindle Direct Publishing, 2019, v1.3). He received best paper awards for his work in IEEE Control Systems, Automatica, SIAM Journal on Control and Optimization, IEEE Transactions on Circuits and Systems, and IEEE Transactions on Control of Network Systems. He is a Fellow of IEEE, IFAC, and SIAM.