
Towards a Theory of Systems Engineering Processes: A Principal-Agent Model of a One-Shot, Shallow Process

Salar Safarkhani, Ilias Bilionis (corresponding author, email: ibilion@purdue.edu), and Jitesh H. Panchal

School of Mechanical Engineering, Purdue University, West Lafayette, Indiana 47907-2088
Abstract

Systems engineering processes coordinate the effort of different individuals to generate a product satisfying certain requirements. Because the engineers involved are self-interested agents, the goals at different levels of the systems engineering hierarchy may deviate from the system-level goals, which can cause budget and schedule overruns. Therefore, there is a need for a systems engineering theory that accounts for human behavior in systems design. Undertaking such an ambitious endeavor is clearly beyond the scope of any single paper, due to the inherent difficulty of rigorously formulating the problem in its full complexity and the lack of empirical data. However, as experience in the physical sciences shows, much knowledge can be generated by studying simple hypothetical scenarios that nevertheless retain some aspects of the original problem. To this end, the objective of this paper is to study the simplest conceivable systems engineering process, a principal-agent model of a one-shot (single iteration), shallow (one level of hierarchy) systems engineering process. We assume that the systems engineer maximizes the expected utility of the system, while the subsystem engineers seek to maximize their own expected utilities. Furthermore, the systems engineer is unable to monitor the effort of the subsystem engineers and may not have complete information about their types or the complexity of the design task. However, the systems engineer can incentivize the subsystem engineers by proposing specific contracts. To obtain optimal incentives, we pose and numerically solve a bi-level optimization problem. Through extensive simulations, we study the optimal incentives arising from different system-level value functions under various combinations of effort costs, problem-solving skills, and task complexities. Our numerical examples show that the requirements passed down to the agents increase as the task complexity and uncertainty grow, and decrease as the agents' costs increase.

Index Terms:
systems engineering theory, systems science, complex systems, game theory, principal-agent model, mechanism design, contract theory, expected utility, bi-level programming problem, optimal incentives.

I Introduction

Cost and schedule overruns plague the majority of large systems engineering projects across multiple industry sectors including power [1], defense [2], and space [3]. As design mistakes are more expensive to correct during the production and operation phases, the design phase of the systems engineering process (SEP) has the largest potential impact on cost and schedule overruns. Collopy et al. [4] argued that requirements engineering (RE), which is a fundamental part of the design phase, is a major source of inefficiencies in systems engineering. In response, they developed value-driven design (VDD) [5], a systems design approach that starts with the identification of a system-level value function and guides the systems engineer (SE) to construct subsystem value functions that are aligned with the system goals. According to VDD, the subsystem engineers (sSE) and contractors should maximize the objective functions passed down by the SE instead of trying to meet requirements.

RE and VDD make the assumption that the goals of the human agents involved in the SEP are aligned with the SE's goals. In particular, RE assumes that agents attempt to maximize the probability of meeting the requirements, while VDD assumes that they maximize the objective functions supplied by the SE. However, this assumption ignores the possibility that the design agents, like all humans, may have personal agendas that are not necessarily aligned with the system-level goals.

Contrary to RE and VDD, it is more plausible that the design agents seek to maximize their own objectives. Indeed, there is experimental evidence that the quality of the outcome of a design task is strongly affected by the reward anticipated by the agent [6, 7, 8]. In other words, the agent decides how much effort and how many resources to devote to a design task after taking into account the potential reward. In the field, the reward could be explicitly implemented as an annual performance-based bonus, or, as is most often the case, it could be implicitly encoded in expectations about job security, promotion, professional reputation, etc. To capture the human aspect of SEPs, one needs to follow a game-theoretic approach [9], [10]. Most generally, the SEP can be modeled as a dynamical hierarchical network game with incomplete information. Each layer of the hierarchy represents interactions between the SE and some sSEs, or between the sSEs and other engineers or contractors. With the term “principal,” we refer to any individual delegating a task, while we reserve the term “agent” for the individual carrying out the task. Note that an agent may simultaneously be the principal in a set of interactions further down the network. For example, the sSE is the agent when considering their interaction with the SE (the principal), but the principal when considering their interaction with a contractor (the agent). At each time step, the principals pass down delegated tasks along with incentives; the agents choose the effort levels that maximize their expected utility, perform the task, and return the outcome to the principals.

The iterative and hierarchical nature of real SEPs makes them extremely difficult to model in their full generality. Given that our aim is to develop a theory of SEPs, we start from the simplest possible version of a SEP which retains, nevertheless, some of the important elements of the real process. Specifically, the objective of this paper is to develop and analyze a principal-agent model of a one-shot, shallow SEP. The SEP is “one-shot” in the sense that decisions are made in one iteration and they are final. The term “shallow” refers to a one-layer-deep SEP hierarchy, i.e., only the SE (principal) and the sSEs (agents) are involved. The agents maximize their expected utility given the incentives provided by the principal, and the principal selects the incentive structure that maximizes the expected utility of the system. We pose this mechanism design problem [11] as a bi-level optimization problem and we solve it numerically.

A key component of our SEP model is the quality function of an agent. The quality function is a stochastic process that models the principal’s beliefs about the outcome of the delegated design task given that the agent devotes a certain amount of effort. The quality function is affected by what the principal believes about the task complexity and the problem solving skills of the agent. Following our work [12], we model the design task as a maximization problem where the agent seeks the optimal solution. The principal expresses their prior beliefs about the task complexity by modeling the objective function as a random draw from a Gaussian process prior with a suitably selected covariance function.

As we showed in [12], conditioned on knowing the task complexity and the agent type, the quality function is well approximated by an increasing, concave function of effort with additive Gaussian noise. However, assuming a relatively short design time window, we will use a linear approximation of the quality function.

We study numerically two different scenarios. The first scenario assumes that the SE knows the agent types and the task complexity, but does not observe the agents' effort. This situation is known in game theory as a moral hazard problem [13]. The most common way to solve a moral hazard problem is the first order approach (FOA) [14]. In the FOA, the incentive compatibility constraint of the agent is replaced by its first order necessary condition. However, the FOA depends on the convexity of the distribution function in effort, which does not hold in our case. There have been several attempts to solve the principal-agent problem when the requirements of the FOA fail; these, however, still require the monotone likelihood ratio property [15].

In the second scenario, we study the case of moral hazard with simultaneous adverse selection [16], i.e., the SE observes neither the effort, nor the type of the agents, nor the task complexity. This is a Bayesian game with incomplete information. In this case, the SE experiences an additional loss in their expected utility, because the sSEs can pretend to have different types. The revelation principle [17] guarantees that it suffices to search for the optimal mechanism within the set of incentive compatible mechanisms, i.e., within the set of mechanisms in which the sSEs tell the truth about their types and technology maturity. In this paper, we solve the optimization problem of the principal-agent model numerically, without imposing the restrictive assumptions of the FOA on the quality function.

The paper is organized as follows. In Sec. II, we derive the mathematical model of the SEP and study the type-independent and type-dependent optimal contracts. We also introduce the value and utility functions. In Sec. III, we perform an exhaustive numerical study and show the solutions for several case studies. Finally, we conclude in Sec. IV.

II Modeling a one-shot, shallow systems engineering process

II-A Basic definitions and notation

As mentioned in the introduction, we develop a model of a one-shot (the game evolves in one iteration and the decisions are final), shallow (one-layer-deep hierarchy) SEP. The SE has decomposed the system into $N$ subsystems and assigned an sSE to each one of them. We use $i=1,\dots,N$ to label the subsystems. From now on, we refer to the SE as the principal and to the sSEs as the agents. The principal delegates tasks to the agents along with incentives. The agents choose how much effort to devote to their tasks by maximizing their expected utility. The principal anticipates this reaction and selects the incentives that maximize the system-level expected utility.

Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is a $\sigma$-algebra, and $\mathbb{P}$ is the probability measure. With $\omega\in\Omega$ we refer to the random state of nature. We use upper case letters for random variables (r.v.), bold upper case letters for their range, and lower case letters for their possible values. For example, the type of agent $i$ is a r.v. $\Theta_i$ taking $M_i$ discrete values $\theta_i$ in the set $\boldsymbol{\Theta}_i\equiv\Theta_i(\Omega)=\{1,\dots,M_i\}$. Collectively, we denote all types with the $N$-dimensional tuple $\Theta=(\Theta_1,\dots,\Theta_N)$ and we reserve $\Theta_{-i}$ to refer to the $(N-1)$-dimensional tuple containing all elements of $\Theta$ except $\Theta_i$. This notation carries over to any $N$-dimensional tuple. For example, $\theta$ and $\theta_{-i}$ are the type values for all agents and all agents except $i$, respectively. The range of $\Theta$ is $\boldsymbol{\Theta}=\times_{i=1}^N\boldsymbol{\Theta}_i$.

The principal believes that the agents' types vary independently, i.e., they assign a probability mass function (p.m.f.) on $\Theta$ that factorizes over the types as follows:

\mathbb{P}[\Theta=\theta]=\prod_{i=1}^{N}\mathbb{P}[\Theta_{i}=\theta_{i}]=\prod_{i=1}^{N}p_{i\theta_{i}}, (1)

for all $\theta$ in $\boldsymbol{\Theta}$, where $p_{ik}\geq 0$ is the probability that agent $i$ has type $k$, for $k$ in $\boldsymbol{\Theta}_i$. Of course, we must have $\sum_{k=1}^{M_i}p_{ik}=1$, for all $i=1,\dots,N$.

Each agent knows their own type, but their state of knowledge about all other agents is the same as the principal's. That is, if agent $i$ is of type $\Theta_i=\theta_i$, then their state of knowledge about everyone else is captured by the p.m.f.:

\mathbb{P}[\Theta_{-i}=\theta_{-i}|\Theta_{i}=\theta_{i}]=\frac{\mathbb{P}[\Theta=\theta]}{\mathbb{P}[\Theta_{i}=\theta_{i}]}=\prod_{j\not=i}p_{j\theta_{j}}. (2)

Agent $i$ chooses a normalized effort level $e_i\in[0,1]$ for their delegated task. We assume that this normalized effort is the percentage of an agent's maximum available effort. The units of the normalized effort depend on the nature of the agent's subsystem. If the principal and the agent are both part of the same organization, then the effort can be the time that the agent dedicates to the delegated task in a particular period of time, e.g., in a fiscal year. On the other hand, if the agent is a contractor, then the effort can be the percentage of the available yearly budget that the contractor spends on the assigned task. We represent the monetary cost of the $i$-th agent's effort with the random process $C_i(e_i)$. In economic terms, $C_i(e_i)$ is the opportunity cost, i.e., the payoff of the best alternative project to which the agent could devote their effort. In general, the process $C_i(e_i)$ should be an increasing function of the effort $e_i$. For simplicity, we assume that the cost of effort of the agents is quadratic,

C_{i}(e_{i}):=c_{i\Theta_{i}}e_{i}^{2}, (3)

with a type-dependent coefficient $c_{ik}>0$ for all $k$ in $\boldsymbol{\Theta}_i$.

The quality function of the $i$-th agent is a real-valued random process $Q_i(e_i)$ parameterized by the effort $e_i$. The quality function models everybody's beliefs about the design capabilities of agent $i$. The interpretation is as follows: if agent $i$ devotes to the task an effort level $e_i$, then they produce a random outcome of quality $Q_i(e_i)$. In our previous work [12], we created a stochastic model for the quality function of a designer in which we explicitly captured its dependence on the problem-solving skills of the designer and on the task complexity. In that work, we showed that $Q_i(e_i)$ has increasing, concave sample paths and an increasing, concave mean function, while its standard deviation decreases mildly with effort, is independent of the problem-solving skills of the designer, and increases only mildly with task complexity. Examining the spectral decomposition of the process for various cases, we observed that it can be well-approximated by:

Q_{i}(e_{i})=q_{i\Theta_{i}}^{0}(e_{i})+\sigma_{i\Theta_{i}}\Xi_{i}, (4)

where, for $k$ in $\boldsymbol{\Theta}_i$, $q_{ik}^0(e_i)$ is an increasing, concave, type-dependent mean quality function, $\sigma_{ik}>0$ is a type-dependent standard deviation parameter capturing the aleatory uncertainty of the design process, and $\Xi_i$ is a standard normal r.v. If we further assume that the time window for design is relatively small, then the $q_{ik}^0(e_i)$ term can be approximated by a linear function. Therefore, we will assume that the quality function is:

Q_{i}(e_{i})=\kappa_{i\Theta_{i}}e_{i}+\sigma_{i\Theta_{i}}\Xi_{i}, (5)

where $\kappa$ is inversely proportional to the complexity of the problem. For instance, a large $\kappa$ corresponds to a low-complexity task, while a small $\kappa$ corresponds to a high-complexity task. The standard deviation parameter $\sigma$ captures the inherent uncertainty of the design process and depends on the maturity of the underlying technology. In summary, an agent's type is characterized by the cost-complexity-uncertainty triplet.
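To make the agent model concrete, the following minimal sketch implements the quadratic cost of Eq. (3) and the linearized quality function of Eq. (5) for a single agent of known type; the parameter values are illustrative, not taken from the case studies below.

```python
import numpy as np

rng = np.random.default_rng(0)

kappa = 2.5   # inverse task complexity (here: a low-complexity task)
sigma = 0.1   # aleatory uncertainty of the design process
c = 0.1       # cost-of-effort coefficient

def cost(e):
    """Quadratic cost of effort, Eq. (3)."""
    return c * e ** 2

def quality(e, size=1):
    """Linearized quality function, Eq. (5): Q(e) = kappa * e + sigma * Xi."""
    xi = rng.standard_normal(size)  # standard normal state of nature
    return kappa * e + sigma * xi

print(cost(0.5), quality(0.5, size=3))
```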

From the perspective of the principal, the r.v.'s $\Xi_i$ are independent of the agents' types $\Theta_i$, as they represent the uncertain state of nature. A stronger assumption that we employ is that the $\Xi_i$'s are also independent of each other. This assumption is strong because it essentially means that the qualities of the various subsystems are decoupled. Under these independence assumptions, the state of knowledge of the principal is captured by the following probability measure:

\mathbb{P}\left[\Theta=\theta,\Xi\in\times_{i=1}^{N}\mathbf{B}_{i}\right]=\prod_{i=1}^{N}\left[p_{i\theta_{i}}\int_{\mathbf{B}_{i}}\phi(\xi_{i})d\xi_{i}\right], (6)

for all $\theta\in\boldsymbol{\Theta}$ and all Borel-measurable $\mathbf{B}_i\subset\mathbb{R}$. Assuming that all these are common knowledge, the state of knowledge of agent $i$ after they observe their type $\theta_i$ (but before they observe $\Xi_i$) is

\begin{array}{rl}\mathbb{P}\left[\Theta_{-i}=\theta_{-i},\Xi\in\times_{j=1}^{N}\mathbf{B}_{j}\middle|\Theta_{i}=\theta_{i}\right]&=\frac{\mathbb{P}\left[\Theta=\theta,\Xi\in\times_{j=1}^{N}\mathbf{B}_{j}\right]}{\mathbb{P}[\Theta_{i}=\theta_{i}]}\\&=\mathbb{P}[\Theta_{-i}=\theta_{-i}|\Theta_{i}=\theta_{i}]\prod_{j=1}^{N}\left[\int_{\mathbf{B}_{j}}\phi(\xi_{j})d\xi_{j}\right].\end{array} (7)

Finally, we use $\mathbb{E}[\cdot]$ to denote the expectation of any quantity over the state of knowledge of the principal, as characterized by the probability measure of Eq. (6). That is, the expectation of any function $f(\Theta,\Xi)$ of the agent types $\Theta$ and the state of nature $\Xi$ is

\mathbb{E}[f(\Theta,\Xi)]=\sum_{\theta\in\boldsymbol{\Theta}}\int_{\mathbb{R}^{N}}f(\theta,\xi)\prod_{i=1}^{N}\left[p_{i\theta_{i}}\phi(\xi_{i})\right]d\xi. (8)

Similarly, we use the notation $\mathbb{E}_{i\theta_i}[\cdot]$ to denote the conditional expectation over the state of knowledge of an agent $i$ who knows that their type is $\Theta_i=\theta_i$. This is the expectation $\mathbb{E}[\cdot|\Theta_i=\theta_i]$ with respect to the probability measure of Eq. (7), and we have:

\mathbb{E}_{i\theta_{i}}[f(\Theta,\Xi)]=\sum_{\theta_{-i}\in\boldsymbol{\Theta}_{-i}}\int_{\mathbb{R}^{N}}f(\theta_{i},\theta_{-i},\xi)\frac{\prod_{j=1}^{N}\left[p_{j\theta_{j}}\phi(\xi_{j})\right]}{p_{i\theta_{i}}}d\xi. (9)

II-B Type-independent optimal contracts

We start by considering the case where the principal offers a single take-it-or-leave-it contract, independent of the agent type. This is the situation usually encountered in contractual relationships between the SE and the sSEs within the same organization. The principal offers the contract and the agent decides whether or not to accept it. If the agent accepts, they select their level of effort by maximizing their expected utility, work on their design task, return the outcome quality to the principal, and receive their reward. We show a schematic view of this type of contract in Fig. 1(a). A contract is a monetary transfer function $t_i:\mathbb{R}\rightarrow\mathbb{R}$ that specifies the agent's compensation $t_i(q_i)$ contingent on the quality level $q_i$. Therefore, the payoff of the $i$-th agent is the random process:

\Pi_{i}(e_{i})=t_{i}\left(Q_{i}\left(e_{i}\right)\right)-C_{i}(e_{i}). (10)

We assume that the agent knows their type, but they choose the optimal effort level ex-ante, i.e., before observing the state of nature $\Xi_i$. Denoting their monetary utility function by $U_i(\pi_i)=u_{i\Theta_i}(\pi_i)$, the $i$-th agent selects an effort level by solving:

e^{*}_{i\theta_{i}}=\underset{e_{i}\in[0,1]}{\arg\max}\ \mathbb{E}_{i\theta_{i}}\left[U_{i}\left(\Pi_{i}(e_{i})\right)\right]. (11)
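As an illustration of Eq. (11), the sketch below solves a single agent's effort choice by brute force: the expectation over $\Xi_i$ is replaced by a Gauss-Hermite quadrature (as in Appendix A), the utility is the risk-averse form of Eq. (37), and the transfer is a simple requirement-based contract; all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Gauss-Hermite rule rescaled so that sum(pw * g(xi)) approximates E[g(Xi)]
x, w = np.polynomial.hermite.hermgauss(32)
xi = np.sqrt(2.0) * x
pw = w / np.sqrt(np.pi)

def u(pi, c_u=2.0):
    """Risk-averse exponential utility, Eq. (37)."""
    a = b = 1.0 / (1.0 - np.exp(-c_u))
    return a - b * np.exp(-c_u * pi)

def t(q, a0=0.0, a1=0.3, a2=1.0):
    """Illustrative RB transfer: participation payment plus requirement award."""
    return a0 + a1 * (q >= a2)

def neg_expected_utility(e, kappa=1.5, sigma=0.1, c=0.4):
    q = kappa * e + sigma * xi           # quality at the quadrature nodes
    return -np.sum(pw * u(t(q) - c * e ** 2))

res = minimize_scalar(neg_expected_utility, bounds=(0.0, 1.0), method="bounded")
print("optimal effort:", res.x)
```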

Let $Q_i^*$ be the r.v. representing the quality that the principal should expect from agent $i$ if they act optimally, i.e.,

Q_{i}^{*}=Q_{i}(e^{*}_{i\Theta_{i}}). (12)

Then the system level value is a r.v. of the form

V=v(Q^{*}), (13)

where $v:\mathbb{R}^N\rightarrow\mathbb{R}$ is a function of the subsystem outcomes $Q^*$. We introduce the form of the value function, $v(q)$, in Sec. III. Note that, even though in this work the r.v. $V$ is assumed to be just a function of $Q^*$, in reality it may also depend on the random state of nature, e.g., future prices or the demand for the system's services. Consideration of the latter is problem-dependent and beyond the scope of this work.

Given the system value $V$ and taking into account the transfers to the agents, the system-level payoff is the r.v.

\Pi_{0}=V-\sum_{i=1}^{N}t_{i}(Q_{i}^{*}). (14)

If the monetary utility of the principal is $u_0(\pi_0)$, then they should select the transfer functions $t(\cdot)=(t_1(\cdot),\dots,t_N(\cdot))$ by solving:

t^{*}(\cdot)=\underset{t(\cdot)}{\arg\max}\ \mathbb{E}\left[u_{0}\left(\Pi_{0}\right)\right]. (15)

However, to guarantee that the sSEs want to participate in the SEP, their expected utility must be greater than the expected utility they would enjoy if they participated in another project. Therefore, the SE must solve Eq. (15) subject to the participation constraints:

\mathbb{E}_{i\theta_{i}}\left[U_{i}\left(\Pi_{i}\right)\right]\geq\bar{u}_{i\theta_{i}}, (16)

for all possible values of $\theta_i$ and all $i=1,\dots,N$, where $\bar{u}_{i\theta_i}$ is known as the reservation utility of agent $i$.

II-C Type-dependent optimal contracts

By offering a single transfer function, the principal is unable to differentiate between the various agent types when adverse selection is an issue. That is, all agent types, independently of their cost, complexity, and uncertainty attributes, receive exactly the same transfer function. In other words, with a single transfer function the principal is actually targeting the average agent. This necessarily leads to inefficiencies, such as paying an agent working on a low-complexity task more than an agent with the same cost and uncertainty attributes working on a high-complexity task.

The principal can gain in efficiency by offering different transfer functions (if any exist) that target specific agent types. For example, the principal could offer one transfer function suitable for cost-efficient, low-complexity, low-uncertainty agents, another for cost-inefficient, low-complexity, low-uncertainty agents, and so on for any other combination supported by the principal's prior knowledge about the types of the agent population. To implement this strategy, the principal can employ the following extension of the mechanism of Sec. II-B. Prior to initiating work, the agents announce their types to the principal and receive a contract that matches the announced type. In Fig. 1(b), we show how this type of contract evolves in time. Let us formulate this idea mathematically. The $i$-th agent announces a type $\theta_i'$ in $\boldsymbol{\Theta}_i$ (not necessarily the same as their true type $\theta_i$), and they receive the associated, type-specific transfer function $t_{i\theta_i'}(\cdot)$. The payoff to agent $i$ is now:

\Pi_{i}(e_{i},\theta_{i}^{\prime})=t_{i\theta_{i}^{\prime}}(Q_{i}(e_{i}))-C_{i}(e_{i}), (17)

where all other quantities are as before. Given the announcement of a type $\theta_i'$, the rational thing for agent $i$ to do is to select an effort level $e^*_{i\theta_i\theta_i'}$ by maximizing their expected utility, i.e., by solving:

e^{*}_{i\theta_{i}\theta_{i}^{\prime}}=\underset{e_{i}\in[0,1]}{\arg\max}\ \mathbb{E}_{i\theta_{i}}[U_{i}(\Pi_{i}(e_{i},\theta_{i}^{\prime}))]. (18)

Of course, the announcement of $\theta_i'$ is also a matter of choice, and a rational agent selects it by maximizing their expected utility. The obvious issue here is that agents can lie about their type. For example, a cost-efficient agent (an agent with a low cost of effort) may pretend to be a cost-inefficient agent (an agent with a high cost of effort). Fortunately, the revelation principle [17] comes to the rescue and simplifies the situation. It guarantees that, among the optimal mechanisms, there is one that is incentive compatible. Thus, it suffices for the principal to restrict their contracts to truth-telling mechanisms. Mathematically, to enforce truth-telling, the SE must satisfy the incentive compatibility constraints:

\mathbb{E}_{i\theta_{i}}[U_{i}(\Pi_{i}(e_{i\Theta_{i}\theta_{i}}^{*},\theta_{i}))]\geq\mathbb{E}_{i\theta_{i}}[U_{i}(\Pi_{i}(e_{i\Theta_{i}\theta_{i}^{\prime}}^{*},\theta_{i}^{\prime}))], (19)

for all $\theta_i\not=\theta_i'$ in $\boldsymbol{\Theta}_i$. Eq. (19) expresses mathematically that “the expected payoff of agent $i$ when they are telling the truth is always greater than or equal to the expected payoff they would enjoy if they lied.”

Similar to the developments of Sec. II-B, the quality that the SE expects to receive is:

Q_{i}^{*}=Q_{i}(e_{i\Theta_{i}\Theta_{i}}^{*}), (20)

where we use the fact that the mechanism is incentive-compatible. The payoff of the SE becomes:

\Pi_{0}=V-\sum_{i=1}^{N}t_{i\Theta_{i}}(Q_{i}^{*}). (21)

Therefore, to select the optimal transfer functions, the SE must solve:

\max_{t(\cdot,\cdot)}\mathbb{E}\left[u_{0}(\Pi_{0})\right], (22)

subject to the incentive compatibility constraints of Eq. (19), and the participation constraints:

\mathbb{E}_{i\theta_{i}}[U_{i}(\Pi_{i}(e^{*}_{i\Theta_{i}\Theta_{i}},\Theta_{i}))]\geq\bar{u}_{i\theta_{i}}, (23)

for all $\theta_i\in\boldsymbol{\Theta}_i$, where we also assume that the incentive compatibility constraints hold.

Figure 1: (a): Timing of the contract for type-independent contracts. (b): Timing of the contract for type-dependent contracts.

II-D Parameterization of the transfer functions

Transfer functions must be practically implementable. That is, they must be easily understood by the agent when expressed in the form of a contract, e.g., easy to convey as a table. To achieve this, we restrict our attention to functions that are made out of constants, step functions, linear functions, or combinations of these.

Despite the fact that including such functions would likely enhance the principal's payoff, we exclude transfer functions that encode penalties for poor agent performance, i.e., transfer functions that can take negative values. First, contracts with penalties may not be implementable if the principal and the agent reside within the same organization. Second, even when the agent is an external contractor, penalties are not commonly encountered in practice. In particular, if the SE is a sensitive government office, e.g., the Department of Defense, national security may dictate that contractors be protected from bankruptcy. Third, we do not expect our theory to be empirically valid when penalties are included since, according to prospect theory [18], humans perceive losses differently: they are risk-seeking when the reference point starts at a loss and risk-averse when it starts at a gain.

To overcome these issues, we restrict our attention to transfer functions that include three simple additive terms: a constant term representing a participation payment, i.e., a payment received for accepting to be part of the project; a constant payment that is activated when a requirement is met; and a linearly increasing part activated after meeting the requirement. The role of the latter two parts is to incentivize the agent to meet and exceed the requirements.

We now describe this parameterization mathematically. The transfer function associated with type $k$ in $\boldsymbol{\Theta}_i$ of agent $i$ is parameterized by:

t_{ik}(q_{i})=a_{ik,0}+a_{ik,1}\operatorname{H}(q_{i}-a_{ik,2})+a_{ik,3}(q_{i}-a_{ik,2})\operatorname{H}(q_{i}-a_{ik,2}), (24)

where $\operatorname{H}$ is the Heaviside function ($\operatorname{H}(x)=1$ if $x\geq 0$ and $0$ otherwise), and all the parameters $a_{ik,0},\dots,a_{ik,3}$ are non-negative. In Eq. (24), $a_{ik,0}$ is the participation reward, $a_{ik,1}$ is the award for achieving the passed-down requirement, $a_{ik,2}$ is the passed-down requirement, and $a_{ik,3}$ is the payoff per unit quality exceeding the passed-down requirement. We call this form of transfer function the “requirement based plus incentive” (RPI) transfer function. In case $a_{ik,3}=0$, we call it the “requirement based” (RB) transfer function. At this point, it is worth mentioning that the passed-down requirement $a_{ik,2}$ is not necessarily the same as the true system requirement $r_i$, see our results in Sec. III. As we have shown in earlier work [10], the optimal passed-down requirement differs from the true system requirement. For example, the SE should ask for a higher-than-actual requirement for a low-complexity design task, whereas for a high-complexity task the SE should pass down less than the actual requirement. For notational convenience, we denote by $\mathbf{a}_{ik}\in\mathbb{R}^4_+$ ($\mathbb{R}_+=\{x\in\mathbb{R}:x\geq 0\}$) the transfer parameters pertaining to agent $i$ of type $k\in\boldsymbol{\Theta}_i$, i.e.,

\mathbf{a}_{ik}=\left(a_{ik,0},\dots,a_{ik,3}\right). (25)

Similarly, with $\mathbf{a}_i\in\mathbb{R}^{4M_i}_+$ we denote the transfer parameters pertaining to agent $i$ for all types, i.e.,

\mathbf{a}_{i}=\left(\mathbf{a}_{i1},\dots,\mathbf{a}_{iM_{i}}\right), (26)

and with $\mathbf{a}\in\mathbb{R}^{4\sum_{i=1}^N M_i}_+$ all the transfer parameters collectively, i.e.,

\mathbf{a}=\left(\mathbf{a}_{1},\dots,\mathbf{a}_{N}\right). (27)
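A minimal sketch of the RPI transfer function of Eq. (24) follows; setting $a_{ik,3}=0$ recovers the RB transfer function. The parameter values in the usage example are illustrative only.

```python
import numpy as np

def transfer(q, a):
    """RPI transfer function, Eq. (24).

    a = (a0, a1, a2, a3): participation payment, requirement award,
    passed-down requirement, and payoff per unit quality beyond it.
    """
    a0, a1, a2, a3 = a
    H = (np.asarray(q) >= a2).astype(float)  # Heaviside function
    return a0 + a1 * H + a3 * (q - a2) * H

q = np.linspace(0.0, 2.0, 5)
print(transfer(q, (0.05, 0.3, 1.0, 0.2)))   # RPI transfer
print(transfer(q, (0.05, 0.3, 1.0, 0.0)))   # RB transfer (a3 = 0)
```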

II-E Numerical solution of the optimal contract problem

The optimal contract problem is an intractable bi-level, non-linear programming problem. In particular, for the case of type-dependent contracts, the SE's problem is to maximize the expected system-level utility over the class of implementable contracts, i.e.,

\underset{\mathbf{a}}{\max}\ \mathbb{E}\left[u_{0}(\Pi_{0})\right], (28)

subject to

1. contract implementability constraints:

   a_{ik,j}\geq 0, (29)

   for all $i=1,\dots,N$, $k=1,\dots,M_i$, $j=0,\dots,3$;

2. individual rationality constraints:

   e^{*}_{ikl}=\underset{e_{i}\in[0,1]}{\arg\max}\ \mathbb{E}_{ik}[U_{i}(\Pi_{i}(e_{i},l))], (30)

   for all $i=1,\dots,N$, $k=1,\dots,M_i$, $l=1,\dots,M_i$;

3. participation constraints:

   \mathbb{E}_{ik}[U_{i}(\Pi_{i}(e^{*}_{ikk},k))]\geq\bar{u}_{ik}, (31)

   for all $i=1,\dots,N$, $k=1,\dots,M_i$; and

4. incentive compatibility constraints:

   \mathbb{E}_{ik}[U_{i}(\Pi_{i}(e^{*}_{ikk},k))]\geq\mathbb{E}_{ik}[U_{i}(\Pi_{i}(e^{*}_{ikl},l))], (32)

   for all $i=1,\dots,N$ and $k\not=l$ in $\{1,\dots,M_i\}$.

For the case of type-independent contracts, one adds the constraints $\mathbf{a}_{ik}=\mathbf{a}_{il}$ for all $i=1,\dots,N$ and $k\not=l$ in $\{1,\dots,M_i\}$, and the incentive compatibility constraints are removed.

A common approach to solving bi-level programming problems is to replace the internal optimization with the corresponding Karush-Kuhn-Tucker (KKT) conditions. This approach is applicable when the internal problem is concave, i.e., when it has a unique maximum. However, in our case concavity is not guaranteed, and we resort to nested optimization. We implement everything in Python using the Theano [19] symbolic computation package to exploit automatic differentiation. We solve the follower problem using sequential least squares programming (SLSQP), as implemented in the scipy package. We use simulated annealing to find the global optimum of the leader problem. We first convert the constrained problem to an unconstrained one using the penalty method, such that:

f\left(\mathbf{a}\right)=\mathbb{E}\left[u_{0}(\Pi_{0})\right]+\sum_{i=1}^{N_{c}}\min\left(g_{i}\left(\mathbf{a}\right),0\right), (33)

where the $g_i(\cdot)$'s are the constraints in Eqs. (29)-(32). Maximizing $f(\mathbf{a})$ in Eq. (33) is equivalent to finding the mode of the distribution:

\pi_{\gamma}\left(\mathbf{a}\right)\propto\exp\left(\gamma f\left(\mathbf{a}\right)\right). (34)

We use the Sequential Monte Carlo (SMC) method [20] to sample from this distribution, increasing $\gamma$ from $0.001$ to $50$. To perform the SMC, we use the “pysmc” package [21]. To ensure the computational efficiency of our approach, we use a numerical quadrature rule to approximate the expectation over $\Xi$. This step is discussed in Appendix A. To guarantee the reproducibility of our results, we have published our code in an open source Github repository (https://github.com/ebilionis/incentives) under an MIT license.
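The sketch below illustrates the nested solution strategy for a single agent of known type with an RB contract: the follower (agent) problem is solved inside the leader objective, and the participation constraint enters through the penalty of Eq. (33). For brevity, scipy's dual_annealing stands in for the paper's Theano/SMC machinery; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize_scalar

# Quadrature for expectations over the standard normal Xi (see Appendix A)
x, w = np.polynomial.hermite.hermgauss(32)
xi, pw = np.sqrt(2.0) * x, w / np.sqrt(np.pi)

kappa, sigma, c, u_bar = 1.5, 0.1, 0.4, 0.0  # agent type, reservation utility

def u_agent(pi, c_u=2.0):
    """Risk-averse utility, Eq. (37)."""
    a = b = 1.0 / (1.0 - np.exp(-c_u))
    return a - b * np.exp(-c_u * pi)

def follower(a_par):
    """Agent's effort choice, Eq. (11), for an RB contract a_par = (a0, a1, a2)."""
    a0, a1, a2 = a_par
    f = lambda e: -np.sum(pw * u_agent(a0 + a1 * (kappa * e + sigma * xi >= a2)
                                       - c * e ** 2))
    res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
    return res.x, -res.fun

def leader_objective(a_par):
    e_star, eu_agent = follower(a_par)       # anticipate the agent's reaction
    q = kappa * e_star + sigma * xi
    v = (q >= 1.0).astype(float)             # RB value function with v0 = 1
    trans = a_par[0] + a_par[1] * (q >= a_par[2])
    eu_principal = np.sum(pw * (v - trans))  # risk-neutral principal
    penalty = min(eu_agent - u_bar, 0.0)     # participation penalty, Eq. (33)
    return -(eu_principal + penalty)

res = dual_annealing(leader_objective, bounds=[(0, 1), (0, 1), (0, 2)],
                     maxiter=100, seed=0)
print("optimal (a0, a1, a2):", res.x)
```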

II-F Value Function and Risk Behavior

We consider two types of value functions, namely, the requirement based (RB) and the requirement based plus incentive (RPI) value functions. Mathematically, we define them as:

V_{\text{RB}}:=v_{0}\prod_{i=1}^{N}\left\{H(Q_{i}^{*}-1)\right\}, (35)

and

V_{\text{RPI}}:=v_{0}\prod_{i=1}^{N}\left\{H(Q_{i}^{*}-1)\left[1+0.2(Q_{i}^{*}-1)\right]\right\}, (36)

respectively. In Fig. 2, we show these two value functions for one subsystem.
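The two value functions are straightforward to implement; the following sketch evaluates Eqs. (35) and (36) for a vector of optimal subsystem qualities, with $v_0=1$.

```python
import numpy as np

def v_rb(q_star, v0=1.0):
    """RB value function, Eq. (35)."""
    q = np.asarray(q_star)
    return v0 * np.prod(np.heaviside(q - 1.0, 1.0))

def v_rpi(q_star, v0=1.0):
    """RPI value function, Eq. (36)."""
    q = np.asarray(q_star)
    return v0 * np.prod(np.heaviside(q - 1.0, 1.0) * (1.0 + 0.2 * (q - 1.0)))

print(v_rb([1.2]), v_rpi([1.2]))  # -> 1.0 and 1.04 for a single subsystem
```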

Figure 2: The RB value function (black solid line) and RPI value function (green dashed line).

We consider two different risk behaviors for individuals, risk averse (RA) and risk neutral (RN). We use the utility function in Eq. (37) for the risk behavior of the agents and the principal,

u\left(\pi\left(\cdot\right)\right)=\begin{cases}a-be^{-c\pi\left(\cdot\right)},&\text{for RA}\\ \pi\left(\cdot\right),&\text{for RN},\end{cases} (37)

where $c=2$ for a RA agent. The parameters $a$ and $b$ are:

a=b=\frac{1}{1-e^{-c}}.

We show these utility functions for the two different risk behaviors in Fig. 3.

Figure 3: The utility functions for risk-averse (RA) (black solid line) and risk-neutral (RN) (green dashed line) behavior.

III NUMERICAL EXAMPLES

In this section, we start by performing an exhaustive numerical investigation of the effects of task complexity, agent’s cost of effort, uncertainty in the quality of the returned task, and adverse selection. In Sec. III-A1, we study the “moral hazard only” scenario with the RB transfer and value functions. In Sec. III-A2, we study the effect of the RPI transfer and value functions. We study the “moral hazard with adverse selection” in Sec. III-A3.

III-A Numerical investigation of the proposed model

In these numerical investigations, we consider a single risk-neutral principal and a risk-averse agent. Each case study corresponds to a choice of task complexity ($\kappa$ in Eq. (5)), cost of effort ($c$ in Eq. (3)), and performance uncertainty ($\sigma$ in Eq. (5)). With regards to task complexity, we select $\kappa=2.5$ for an easy task and $\kappa=1.5$ for a hard task. For the cost of effort parameter, we associate $c=0.1$ and $c=0.4$ with the low- and high-cost agents, respectively. Finally, low- and high-uncertainty tasks are characterized by $\sigma=0.1$ and $\sigma=0.4$, respectively.

Note that the parameters $\kappa_{i\theta_i}$, $c_{i\theta_i}$, and $\sigma_{i\theta_i}$ have two indices. The first index $i$ is the agent's (subsystem's) number and the second index is the agent's type. We begin with a series of cases with a single agent of a known type, denoted by $1$ (moral-hazard-only case studies). In these cases, the parameters corresponding to complexity, cost, and uncertainty are denoted by $\kappa_{11}$, $c_{11}$, and $\sigma_{11}$, respectively. We end with a series of cases with a single agent whose unknown type can take two discrete, equally probable values, $1$ and $2$ (moral-hazard-and-adverse-selection case studies). Consequently, $\kappa_{11}$ denotes the effort coefficient of agent $1$ when they are of type $1$, $\kappa_{12}$ the same for type $2$, and so on for all the other parameters.

To avoid numerical difficulties and singularities, we replace all Heaviside functions with sigmoids, i.e.,

\hat{H}_{\alpha}(x)=\frac{1}{1+e^{-\alpha x}}, (38)

where the parameter $\alpha$ controls the slope. We choose $\alpha=50$ for the transfer functions and $\alpha=100$ for the value function. We consider two types of value functions, the RB and the RPI value functions, see Sec. II-F. For the RB value function, we use the transfer function of Eq. (24) constrained so that $a_{ik,3}=0$ (RB transfer function). In other words, the agent is paid a constant amount if they achieve the requirement and there is no payment per unit quality exceeding the requirement. For the case of the RPI value function, we remove this constraint.
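For reference, a one-line implementation of the smoothed Heaviside function of Eq. (38):

```python
import numpy as np

def smooth_heaviside(x, alpha=50.0):
    """Sigmoid approximation of the Heaviside function, Eq. (38)."""
    return 1.0 / (1.0 + np.exp(-alpha * np.asarray(x)))

print(smooth_heaviside([-0.1, 0.0, 0.1]))  # approaches (0, 0.5, 1)
```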

III-A1 Moral hazard with RB transfer and value functions

Consider the case of a single risk-averse agent of known type and a risk-neutral principal with an RB value function. In Fig. 4(a), we show the transfer functions for several agent types covering all possible combinations of low/high complexity, low/high cost, and low/high task uncertainty. Fig. 4(b) depicts the probability that the principal’s expected utility exceeds a given threshold for all these combinations. We refer to this curve as the exceedance curve. Finally, in tables I and II, we report the expected utility of the principal for the low and high cost agents, respectively. We make the following observations:

1. For the same level of task complexity and uncertainty, but with increasing cost of effort:

   (a) the optimal passed-down requirement decreases;

   (b) the optimal payment for achieving the requirement increases;

   (c) the principal's expected utility decreases; and

   (d) the exceedance curve shifts to the left.

   Intuitively, as the agent's cost of effort increases, the principal must make the contract more attractive to ensure that the participation constraints are satisfied. As a consequence, the probability that the principal's expected utility exceeds a given threshold decreases.

2. For the same level of task uncertainty and cost of effort, but with increasing complexity:

   (a) the optimal passed-down requirement decreases;

   (b) the optimal payment for achieving the requirement increases;

   (c) the principal's expected utility decreases; and

   (d) the exceedance curve shifts to the left.

   Thus, we see that an increase in task complexity has a similar effect to an increase in the agent's cost of effort. As in the previous case, to make sure that the agent wants to participate, the principal has to make the contract more attractive as task complexity increases.

3. For the same level of task complexity and cost of effort, but with increasing uncertainty:

   (a) the optimal passed-down requirement increases;

   (b) the optimal payment for achieving the requirement increases;

   (c) the principal's expected utility decreases; and

   (d) the exceedance curve shifts towards the bottom right.

   This case is the most interesting. Here, as the uncertainty of the task increases, the principal must increase the passed-down requirement to ensure that they are hedged against failure. At the same time, however, they must also increase the payment to ensure that the agent still has an incentive to participate.

4. For all cases considered, the optimal passed-down requirement is greater than the true requirement (which is set to one). Note, however, that this is not universally true; our study does not examine all possible combinations of cost, quality, and utility functions. Indeed, as we showed in our previous work [10], there are situations in which a smaller-than-the-true requirement can be optimal.

(a) Transfer functions using the RB value function.
(b) Exceedance curve.
Figure 4: L and H stand for low and high, respectively; Comp. and Unc. stand for complexity and uncertainty, respectively. Low and high complexity denote $\kappa_{11}=2.5$ and $\kappa_{11}=1.5$, respectively; low and high cost denote $c_{11}=0.1$ and $c_{11}=0.4$, respectively; low and high uncertainty denote $\sigma_{11}=0.1$ and $\sigma_{11}=0.4$, respectively; RA denotes a risk-averse agent. (a): The RB transfer functions for several different agent types with respect to the outcome of the subsystem ($Q_1$) for the moral hazard scenario. (b): The exceedance curve for the moral hazard scenario using the RB transfer function.
TABLE I: The expected utility of the principal for low cost agent with RB value function.
Low Uncertainty High Uncertainty
Low Complexity 0.97 0.93
High Complexity 0.92 0.79
TABLE II: The expected utility of the principal for high cost agent with RB value function.
Low Uncertainty High Uncertainty
Low Complexity 0.89 0.77
High Complexity 0.72 0.45

III-A2 Moral hazard with RPI transfer and value functions

This case is identical to Sec. III-A1, except that we use the RPI value function, see Sec. II-F, and the RPI transfer function, see Eq. (24). Fig. 5(a) depicts the transfer functions for all combinations of agent types and task complexities. In Fig. 5(b), we show the exceedance curve using the RPI value and transfer functions. Finally, in tables III and IV, we report the expected utility of the principal using the RPI transfer and value functions for the low- and high-cost agents, respectively. The results are qualitatively similar to Sec. III-A1, with the following additional observations:

1. For the same level of task complexity, uncertainty, and agent cost, the optimal reward for achieving the requirement decreases compared to the corresponding cases in Sec. III-A1. Intuitively, since the principal has the option to reward the agent based on the quality exceeding the requirement, they prefer to pay less for fulfilling the requirement. Instead, the principal incentivizes the agent to improve the quality beyond the optimal passed-down requirement.

2. The slope of the transfer function beyond the passed-down requirement is almost identical to the slope of the value function.

(a) Transfer functions using the RPI value function.
(b) Exceedance curve.
Figure 5: L and H stand for low and high, respectively; Comp. and Unc. stand for complexity and uncertainty, respectively. Low and high complexity denote $\kappa_{11}=2.5$ and $\kappa_{11}=1.5$, respectively; low and high cost denote $c_{11}=0.1$ and $c_{11}=0.4$, respectively; low and high uncertainty denote $\sigma_{11}=0.1$ and $\sigma_{11}=0.4$, respectively; RA denotes a risk-averse agent. (a): The RPI transfer functions for several different agent types with respect to the outcome of the subsystem ($Q_1$) for the moral hazard scenario. (b): The exceedance curve for the moral hazard scenario using the RPI transfer function.
TABLE III: The expected utility of the principal for low cost agent with RPI value function.
Low Uncertainty High Uncertainty
Low Complexity 1.2 1.2
High Complexity 1.0 0.89
TABLE IV: The expected utility of the principal for high cost agent with RPI value function.
Low Uncertainty High Uncertainty
Low Complexity 0.95 0.93
High Complexity 0.76 0.56

In table V, we summarize our observations from Secs. III-A1 and III-A2. In this table, we show how the passed-down requirement and payment change when we fix two parameters of the model (denoted by “fix” in the table) and vary the third. We denote increase by $\uparrow$ and decrease by $\downarrow$.

TABLE V: Summary of the observations.
complexity agent cost uncertainty requirement payment
\uparrow fix fix \downarrow \uparrow
fix \uparrow fix \downarrow \uparrow
fix fix \uparrow \uparrow \uparrow

III-A3 Moral hazard with adverse selection

Consider the case of a single risk-averse agent whose unknown type takes two possible values, and a risk-neutral principal with an RB value function. We consider two possibilities for the unknown type:

1. Unknown cost of effort. Here, we set $\kappa_{11}=\kappa_{12}=1.5$ ($p(\kappa_{11}=1.5)=1$), $\sigma_{11}=\sigma_{12}=0.1$ ($p(\sigma_{11}=0.1)=1$), and $p(c_{11}=0.1)=0.5$ and $p(c_{12}=0.4)=0.5$.

2. Unknown task complexity. For the unknown complexity, we assume that $p(\kappa_{11}=2.5)=0.5$ and $p(\kappa_{12}=1.5)=0.5$, $\sigma_{11}=\sigma_{12}=0.4$ ($p(\sigma_{11}=0.4)=1$), and $c_{11}=c_{12}=0.4$ ($p(c_{11}=0.4)=1$).

In this scenario, we maximize the expected utility of the principal subject to the constraints in Eqs. (29)-(32). The incentive compatibility constraint, Eq. (32), guarantees that the agent will choose the contract that is suitable for their true type. In other words, as there are two possible agent types, the principal must offer two contracts, see Fig. 1(b). These two contracts must be designed so that there is no benefit for the agent in deviating from their true type, i.e., the contracts induce the agent to tell the truth.

Solving the constrained optimization problem yields:

\mathbf{a}_{11}=\mathbf{a}_{12}=(0,0.29,1.06),

i.e., the two contracts collapse into one. Note that the resulting contract is the same as in the pure moral hazard case, Sec. III-A1, for an agent with type $\kappa_{11}=1.5$, $\sigma_{11}=0.1$, and $c_{11}=0.4$. In other words, the principal must behave as if there were only a high-cost agent. That is, there are no contracts that can differentiate between a low- and a high-cost agent in this case.

A similar outcome occurs for unknown task complexity. The solution of the constrained optimization problem for this scenario is:

\mathbf{a}_{11}=\mathbf{a}_{12}=(0,0.08,1.11),

which is the same as the optimum contract offered in the pure moral hazard case, Sec. III-A1, for an agent with type $\kappa_{11}=1.5$, $\sigma_{11}=0.4$, and $c_{11}=0.4$. Therefore, in this case the principal must behave as if the task were of high complexity.

Note that in both cases above, the collapse of the two contracts to one contract is not a generalizable property of our model. In particular, it may not happen if more flexible transfer functions are allowed, e.g., ones that allow performance penalties.

In Fig. 6, we show the transfer functions for the adverse selection scenarios with unknown cost and unknown quality. In tables VI and VII, we show the expected utilities of the two agent types and of the principal under the optimum contract for unknown cost and unknown quality, respectively. To sum up:

1. Unknown cost:

   (a) the optimum transfer function for this problem is the same as the one the principal would have offered to a single-type high-cost agent with $c_{11}=c_{12}=0.4$ (moral hazard scenario with no adverse selection);

   (b) the expected utility of the low-cost agent (the efficient agent) is greater than that of the high-cost agent.

   In this case, the low-cost agent benefits from the information asymmetry. In other words, the principal must pay an information rent to the low-cost agent to reveal their type.

2. Unknown task complexity:

   (a) the optimum contract in this case is the contract that the principal would have offered for the single-type high-complexity task with $\kappa_{11}=\kappa_{12}=1.5$;

   (b) the expected utility of an agent dealing with a low-complexity task is greater than that of an agent dealing with a high-complexity task.

   Again, due to the information asymmetry, the agent benefits if the task complexity is low. The principal must pay an information rent to reveal the task complexity.

Figure 6: The transfer function for the adverse selection scenarios with unknown cost (solid line) and unknown quality (dashed line); the agent is risk averse. For unknown cost: $\kappa_{11}=\kappa_{12}=1.5$ with probability 1, $\sigma_{11}=\sigma_{12}=0.1$ with probability 1, and $c_{11}=0.1$ with probability 0.5 and $c_{12}=0.4$ with probability 0.5. For unknown quality: $\kappa_{11}=2.5$ with probability 0.5 and $\kappa_{12}=1.5$ with probability 0.5, $\sigma_{11}=\sigma_{12}=0.4$ with probability 1, and $c_{11}=c_{12}=0.4$ with probability 1.
TABLE VI: The expected utility of the agent with unknown cost for two different contracts.
$\mathbb{E}[u_{1}(\cdot)]$ $\mathbb{E}[u_{0}(\cdot)]$
Low Cost Agent (Type 1) 0.39 0.72
High Cost Agent (Type 2) 0 0.72
TABLE VII: The expected utility of the agent with unknown quality for two different contracts.
$\mathbb{E}[u_{1}(\cdot)]$ $\mathbb{E}[u_{0}(\cdot)]$
Low Complexity (Type 1) 0.52 0.45
High Complexity (Type 2) 0 0.45

III-B Satellite Design

In this section, we apply our method to a simplified satellite design. Typically, a satellite consists of seven different subsystems [22]: the electrical power, propulsion, attitude determination and control, on-board processing, telemetry, tracking and command, structures, and thermal subsystems. We focus our attention on the propulsion subsystem ($N=1$). To simplify the analysis, we assume that the design of this subsystem is assigned to an sSE in a one-shot fashion. Note that the actual systems engineering process of satellite design is iterative, with information and results exchanged back and forth in each iteration; our model is a crude approximation of reality. The goal of the SE is to optimally incentivize the sSE to produce a subsystem design that meets the mission's requirements. Furthermore, we assume that the propulsion subsystem is decoupled from the other subsystems, i.e., there are no interactions between them, and that the SE knows the type of the sSE, so there is no information asymmetry.

To extract the parameters of the model, i.e., $\kappa_{11}$, $\sigma_{11}$, and $c_{11}$, we use available historical data. To this end, let $I_1$ be the cumulative, sector-wide investment in the propulsion subsystem and $G_1$ be the delivered specific impulse of solid propellants ($I_{\text{sp}}$). The specific impulse is defined as the ratio of thrust to weight flow rate of the propellant and is a measure of the energy content of the propellants [22].

Historical data, say $\mathcal{D}_1=\{(I_{1,i},G_{1,i})\}_{i=1}^S$, of these quantities are readily available for many technologies. Of course, cumulative investment and best performance increase with time, i.e., $I_{1,i}\leq I_{1,i+1}$ and $G_{1,i}\leq G_{1,i+1}$. We model the relationship between $G_1$ and $I_1$ as:

G_{1}=G_{1,S}+A_{1}(I_{1}-I_{1,S})+\Sigma_{1}\Xi_{1}, (39)

where $G_{1,S}$ and $I_{1,S}$ are the current states of these variables, $\Xi_1\sim\mathcal{N}(0,1)$, and $A_1$ and $\Sigma_1$ are parameters to be estimated from all available data, $\mathcal{D}_1$. We use a maximum likelihood estimator for $A_1$ and $\Sigma_1$. This is equivalent to a least squares estimate for $A_1$:

\hat{A}_{1}=\arg\min_{A_{1}}\sum_{i=1}^{S}\left[G_{1,S}+A_{1}(I_{1,i}-I_{1,S})-G_{1,i}\right]^{2}, (40)

and to setting $\Sigma_1$ equal to the mean residual square error:

\hat{\Sigma}_{1}=\frac{1}{S}\sum_{i=1}^{S}\left[G_{1,S}+\hat{A}_{1}(I_{1,i}-I_{1,S})-G_{1,i}\right]^{2}. (41)

Now, let $G_1^r$ be the required quality for the propulsion subsystem in physical units. The scaled quality of the subsystem, $Q_1$, can be defined as:

Q_{1}=\frac{G_{1}-G_{1,S}}{G_{1}^{r}-G_{1,S}}, (42)

With this definition, we get $Q_1=0$ for the state-of-the-art and $Q_1=1$ for the requirement. Substituting Eq. (39) in Eq. (42) and using the maximum likelihood estimates for $A_1$ and $\Sigma_1$, we obtain:

Q_{1}=\frac{\hat{A}_{1}}{G_{1}^{r}-G_{1,S}}(I_{1}-I_{1,S})+\frac{\hat{\Sigma}_{1}}{G_{1}^{r}-G_{1,S}}\Xi_{1}. (43)

From this equation, we can identify the uncertainty $\sigma_{11}$ in the quality function as:

\sigma_{11}=\frac{\hat{\Sigma}_{1}}{G_{1}^{r}-G_{1,S}}. (44)

Finally, we need to define effort. Let $T_1$ represent the time for which the propulsion engineer is to be hired; $T_1$ is just the duration of the systems engineering process we consider. The cost of the agent per unit time is $C_1$. The value $C_1T_1$ can be read from the balance sheets of publicly traded firms related to the technology. We associate the effort variable $e_1$ with the additional investment required to buy the time of one engineer:

e_{1}=\frac{I_{1}-I_{1,S}}{C_{1}T_{1}}, (45)

that is, $e_1=1$ corresponds to the effort of one engineer for time $T_1$. Let us assume that $Z$ engineers work on the subsystem. Comparing Eq. (45), Eq. (43), and Eq. (5), we get that the $\kappa_{11}$ coefficient is given by:

\kappa_{11}=\frac{ZC_{1}T_{1}\hat{A}_{1}}{G_{1}^{r}-G_{1,S}}. (46)

To complete the picture, we need to specify the value $V_0$ (in USD) of the system if the requirements are met. We use this value to normalize all dollar quantities. That is, we set:

v_{0}=1, (47)

and for the cost per squared effort of the agent we set:

c_{11}=\frac{ZC_{1}T_{1}}{V_{0}}. (48)
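Putting Eqs. (40)-(48) together, the sketch below extracts the model parameters from historical investment-performance data; the $(I_1, G_1)$ pairs here are made up for illustration, whereas the paper uses the NASA data of [23, 24].

```python
import numpy as np

# Hypothetical historical data: cumulative investment (M USD) and Isp (sec)
I = np.array([100.0, 110.0, 125.0, 138.0, 149.1])
G = np.array([251.4, 251.55, 251.75, 251.9, 252.0])
I_S, G_S = I[-1], G[-1]                   # current state of the art

# Least-squares slope through the current state, Eq. (40)
A_hat = np.sum((I - I_S) * (G - G_S)) / np.sum((I - I_S) ** 2)
Sigma_hat = np.mean((G_S + A_hat * (I - I_S) - G) ** 2)   # Eq. (41)

G_r = 252.2                               # required Isp (sec)
Z, C1, T1 = 200, 0.12, 1.0                # engineers, cost (M USD/yr), yr
V0 = 50.0                                 # system value (M USD)

sigma_11 = Sigma_hat / (G_r - G_S)        # Eq. (44)
kappa_11 = Z * C1 * T1 * A_hat / (G_r - G_S)   # Eq. (46)
c_11 = Z * C1 * T1 / V0                   # Eq. (48)
print(kappa_11, sigma_11, c_11)
```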

Finally, we use some real data to fix some of the parameters. Trends in delivered $I_{\text{sp}}$ ($G_1$, in seconds) and investments by NASA ($I_1$, in millions of USD) in chemical propulsion technology over time are obtained from [23] and [24], respectively. The state-of-the-art solid propellant technology corresponds to a $G_{1,S}$ value of 252 sec. and an $I_{1,S}$ value of 149.1 million USD. The maximum likelihood fit of the parameters results in a regression coefficient of $\hat{A}_1=0.0133$ sec. per million USD and a standard deviation of $\hat{\Sigma}_1=0.12$ sec. The corresponding data and the maximum likelihood fit are illustrated in Fig. 7. The value of $C_1$ is the median salary (per unit time) of a propulsion engineer, approximately 120,000 USD/year, according to the data obtained from [25]. For simplicity, we also assume that $T_1=1$ year. Moreover, we assume that 200 engineers work on the subsystem, $Z=200$. We examine two case studies, summarized in table VIII.

Figure 7: Satellite case study (propulsion subsystem): historical data (1979–1988) of specific impulse of solid mono-propellants vs. cumulative investment per firm. The solid line and the shaded area correspond to the maximum likelihood fit of a linear regression model and the corresponding $95\%$ prediction intervals, respectively.
TABLE VIII: The model parameters for two case studies.
$G_1^r$ $V_0$ $\kappa_{11}$ $\sigma_{11}$ $c_{11}$
252.2 s 50,000,000 USD 1.6 0.6 0.5
252.25 s 60,000,000 USD 1.28 0.48 0.4

Using the RB value function, we depict the contracts for these two case studies in Fig. 8.

Figure 8: The contracts for two case studies in satellite design.

IV CONCLUSIONS

We developed a game-theoretic model of a one-shot, shallow SEP. We posed and solved the problem of identifying the contract (transfer function) that maximizes the principal's expected utility. Our results show that the optimum passed-down requirement differs from the real system requirement. For the same level of task complexity and uncertainty, as the agent's cost of effort increases, the passed-down requirement decreases and the award for achieving the requirement increases. In this way, the principal makes the contract more attractive to the high-cost agent and ensures that the participation constraint is satisfied. Similarly, for the same level of task uncertainty and cost of effort, increasing task complexity results in a lower passed-down requirement and a larger award for achieving the requirement. For the same level of task complexity and cost of effort, as the uncertainty increases, both the passed-down requirement and the award for achieving the requirement increase. This is because the principal wants to make sure that the system requirements are achieved. Moreover, by increasing the task complexity, the task uncertainty, or the cost of effort, the principal earns less and the exceedance curve shifts to the left. Using the RPI contracts, the principal pays a smaller amount for achieving the requirement but, instead, pays per unit quality exceeding the requirement.

For the adverse selection scenario with the RB value function, we observe that when the principal is maximally uncertain about the agent’s cost, the optimal contracts are equivalent to the contract designed for the high-cost agent in the single-type case without adverse selection. The low-cost agent earns more expected utility than the high-cost agent; this is the information rent that the principal must pay to reveal the agents’ types. Similarly, if the principal is maximally uncertain about the task complexity, the two optimal contracts under unknown quality are equivalent to the contract offered for the high-complexity task in the absence of adverse selection. Note that this equivalence between the adverse selection contracts and the contract offered in the absence of adverse selection is not universal: if the class of admissible contracts is enlarged, e.g., to allow penalties, there may exist a pair of contracts that separates the types.

Many questions remain in modeling SEPs using a game-theoretic approach. First, there is a need to study the hierarchical nature of SEPs with potentially coupled subsystems. Second, real SEPs are dynamic, involving many iterations that correspond to the exchange of information between the various agents. These are the topics of ongoing research towards a theoretical foundation of systems engineering design that accounts for human behavior.

Appendix A Numerical estimation of the required expectations

For the numerical implementation of the proposed model, we need to evaluate expectations of the form of Eqs. 7, 8, and 9. Since we have at most two possible types in our case studies, the summation over the possible types is trivial. Focusing on expectations over \Xi, we evaluate them using a sparse grid quadrature rule [26]. In particular, any expectation of the form \mathbb{E}[g(\Xi)] is approximated by:

\mathbb{E}[g(\Xi)]\approx\sum_{s=1}^{N_{s}}w^{(s)}g\left(\xi^{(s)}\right), (49)

where w^{(s)} and \xi^{(s)} are the weights and nodes of the N_{s}=127-point, level-6 sparse grid quadrature rule constructed from the one-dimensional Gauss-Hermite rule.
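For reference, the same type of approximation can be illustrated with a dense one-dimensional Gauss-Hermite rule; the Python sketch below is not the level-6 sparse grid of [26], only a minimal stand-in that shows the change of variables needed for a standard normal \Xi:

import numpy as np

def gauss_hermite_expectation(g, n=32):
    """Approximate E[g(Xi)] for Xi ~ N(0, 1). The physicists'
    Gauss-Hermite rule integrates against exp(-t^2), so we substitute
    xi = sqrt(2) * t and divide by sqrt(pi)."""
    t, w = np.polynomial.hermite.hermgauss(n)
    return np.sum(w * g(np.sqrt(2.0) * t)) / np.sqrt(np.pi)

# Sanity checks: E[Xi^2] = 1 and E[exp(Xi)] = exp(0.5) ~ 1.6487.
print(gauss_hermite_expectation(lambda x: x ** 2))
print(gauss_hermite_expectation(lambda x: np.exp(x)))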

Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant No. 1728165.

References

  • [1] G. Locatelli, “Why are megaprojects, including nuclear power plants, delivered overbudget and late? reasons and remedies,” arXiv preprint arXiv:1802.07312, 2018.
  • [2] U.S. Government Accountability Office, “Navy shipbuilding: Past performance provides valuable lessons for future investments,” 2018.
  • [3] U.S. Government Accountability Office, “NASA: Assessments of major projects,” 2018.
  • [4] I. Maddox, P. Collopy, and P. A. Farrington, “Value-based assessment of dod acquisitions programs,” Procedia Computer Science, vol. 16, pp. 1161 – 1169, 2013, 2013 Conference on Systems Engineering Research. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1877050913001233
  • [5] P. D. Collopy and P. M. Hollingsworth, “Value-Driven Design,” Journal of Aircraft, vol. 48, no. 3, pp. 749–759, 2011. [Online]. Available: https://doi.org/10.2514/1.C000311
  • [6] A. Bandura, “Social cognitive theory: An agentic perspective,” Annual review of psychology, vol. 52, no. 1, pp. 1–26, 2001.
  • [7] M. Shergadwala, I. Bilionis, K. N. Kannan, and J. H. Panchal, “Quantifying the impact of domain knowledge and problem framing on sequential decisions in engineering design,” Journal of Mechanical Design, vol. 140, no. 10, p. 101402, 2018.
  • [8] A. M. Chaudhari, Z. Sha, and J. H. Panchal, “Analyzing participant behaviors in design crowdsourcing contests using causal inference on field data,” Journal of Mechanical Design, vol. 140, no. 9, p. 091401, 2018.
  • [9] S. D. Vermillion and R. J. Malak, “Using a principal-agent model to investigate delegation in systems engineering,” in ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015, pp. V01BT02A046–V01BT02A046.
  • [10] S. Safarkhani, V. R. Kattakuri, I. Bilionis, and J. Panchal, “A principal-agent model of systems engineering processes with application to satellite design,” arXiv preprint arXiv:1903.06979, 2019.
  • [11] J. Watson, “Contract, mechanism design, and technological detail,” Econometrica, vol. 75, no. 1, pp. 55–81, 2007.
  • [12] S. Safarkhani, I. Bilionis, and J. Panchal, “Understanding the effect of task complexity and problem-solving skills on the design performance of agents in systems engineering,” in ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Aug 2018.
  • [13] J. A. Mirrlees, “The theory of moral hazard and unobservable behaviour: Part i,” The Review of Economic Studies, vol. 66, no. 1, pp. 3–21, 1999. [Online]. Available: http://www.jstor.org/stable/2566946
  • [14] W. P. Rogerson, “The first-order approach to principal-agent problems,” Econometrica, vol. 53, no. 6, pp. 1357–1367, 1985. [Online]. Available: http://www.jstor.org/stable/1913212
  • [15] R. Ke and C. T. Ryan, “A general solution method for moral hazard problems.”
  • [16] R. Myerson, Game Theory: Analysis of Conflict. Harvard University Press, 1991. [Online]. Available: https://books.google.com/books?id=1w5PAAAAMAAJ
  • [17] R. B. Myerson, “Optimal auction design,” Mathematics of operations research, vol. 6, no. 1, pp. 58–73, 1981.
  • [18] D. Kahneman and A. Tversky, “Prospect theory: An analysis of decision under risk,” Econometrica, vol. 47, no. 2, pp. 263–291, 1979.
  • [19] Theano Development Team, “Theano: A Python framework for fast computation of mathematical expressions,” arXiv e-prints, vol. abs/1605.02688, May 2016. [Online]. Available: http://arxiv.org/abs/1605.02688
  • [20] A. Doucet, S. Godsill, and C. Andrieu, “On sequential monte carlo sampling methods for bayesian filtering,” Statistics and computing, vol. 10, no. 3, pp. 197–208, 2000.
  • [21] I. Bilionis, “pysmc: Sequential Monte Carlo working on top of pymc.” [Online]. Available: https://github.com/PredictiveScienceLab/pysmc
  • [22] J. R. Wertz, D. F. Everett, and J. J. Puschell, Space Mission Engineering: The New SMAD. Hawthorne, CA: Microcosm Press, 2011, OCLC: 747731146.
  • [23] “Solid,” Encyclopedia Astronautica. [Online]. Available: http://www.astronautix.com/s/solid.html
  • [24] “NASA Historical Data Books.” [Online]. Available: https://history.nasa.gov/SP-4012/vol6/cover6.html
  • [25] “Propulsion Engineer Salaries.” [Online]. Available: https://www.paysa.com/salaries/propulsion-engineer--t
  • [26] T. Gerstner and M. Griebel, “Numerical integration using sparse grids,” Numerical Algorithms, vol. 18, no. 3, p. 209, Jan 1998. [Online]. Available: https://doi.org/10.1023/A:1019129717644