
Competitive percolation strategies for network recovery

Andrew M. Smith Department of Computer Science and Complexity Sciences Center, University of California, Davis, CA, USA, 95616 amsmi@ucdavis.edu Márton Pósfai Department of Computer Science and Complexity Sciences Center, University of California, Davis, CA, USA, 95616 Martin Rohden Department of Computer Science and Complexity Sciences Center, University of California, Davis, CA, USA, 95616 Andrés D. González School of Industrial and Systems Engineering, University of Oklahoma, Norman, OK, USA, 73019 Leonardo Dueñas-Osorio Department of Civil & Environmental Engineering, Rice University, Houston, TX, USA, 77005 Raissa M. D’Souza Department of Computer Science and Complexity Sciences Center, University of California, Davis, CA, USA, 95616 Department of Mechanical and Aerospace Engineering, University of California, Davis, California, USA, 95616 Santa Fe Institute, Santa Fe, New Mexico, USA, 87501
Abstract

Restoring operation of critical infrastructure systems after catastrophic events is an important issue, inspiring work in multiple fields, including network science, civil engineering, and operations research. We consider the problem of finding the optimal order in which to repair elements of power grids and similar infrastructure. Most existing methods either consider only the system's network structure, potentially ignoring important features, or incorporate component-level details, leading to complex optimization problems with limited scalability. We aim to narrow the gap between the two approaches. Analyzing realistic recovery strategies, we identify over- and undersupply penalties of commodities as the primary contributions to reconstruction cost, and we demonstrate that traditional network science methods, which maximize the largest connected component, are cost-inefficient. We propose a novel competitive percolation recovery model that accounts for node demand and supply as well as network structure. Our model closely approximates realistic recovery strategies, suppressing growth of the largest connected component through a process analogous to explosive percolation. Using synthetic power grids, we investigate the effect of network characteristics on the efficiency of the recovery process. We find that high structural redundancy enables lower total cost and faster recovery; however, it requires more information at each recovery step. We also confirm that decentralized supply in networks generally benefits recovery efforts.

Introduction

Resilience of complex networks is one of the most studied topics of network science, with an expanding literature on the spreading of failures, mitigation of damage, and recovery processes [1, 2, 3, 4, 5, 6]. The level of functionality of a network is typically quantified by its connectedness, e.g., the size of the largest component [1], the average path length [7, 8], or various centrality metrics [9]. Such simple topology-based metrics ensure mathematical tractability and allow us to analyze and compare networks that can be very different in nature, providing general insights into the organization of complex systems. However, such a perspective necessarily ignores important system-specific details. For example, abstracted topological models of infrastructure networks recovering from damage or catastrophic failure aim to rapidly restore the largest component [10, 11, 12, 13]. But extensive connectivity is not a necessary condition to guarantee that all supply and demand can be met. For instance, consumers of a power grid can be served if they are connected to at least one power source and that source satisfies operational constraints [14]. Islanding, the intentional partitioning of the network to avoid cascading failures, is in fact a practical strategy used to improve security and resilience during restoration efforts in power grids [15, 16, 17]. Indeed, after the 2010 earthquake in Chile the recovery process first created five islands, which were only connected to each other in the final steps of reconstruction [18].

The restoration of critical infrastructure operation after a catastrophic event, such as a hurricane or earthquake, is a problem of great practical importance and is the focus of a significant body of work in the civil and industrial engineering disciplines. The goal of engineering-based models of recovery is to provide system-specific predictions and actionable recommendations. This is achieved by incorporating component-level details and realistic transmission dynamics into the models, often in the form of generalized network design problems (NDPs) that enforce network flow constraints. In this context, the objective functions of such NDP-based models aim to minimize the construction and/or operational costs of recovering edges and nodes in a utility network. Basic forms of the NDP have been well studied [19, 20], and have recently been combined with scheduling and resource allocation problems to model the entire restoration process [21, 22, 23]. While these models provide a principled way to obtain optimal, centralized recovery strategies, their complexity (at least NP-complete [19]) makes computation not scalable and restricts interpretation to small instances. More efficient approximate solutions for NDPs have been found using optimization meta-heuristics such as a hybrid ant system [24] and gradient descent [25] methods. Such algorithms are generally applicable to global search problems and were designed to reduce computational complexity by not guaranteeing optimality; they therefore provide limited insight into the mechanisms of network formation during recovery. We will analyze the output of an NDP algorithm and, leveraging these observations, develop a percolation-based model for network recovery with the goal of uncovering important principles of network formation and recoverability.

Percolation processes, often used for studying properties of stochastic network formation, have recently been applied to network recovery [26]. In the kinetic formulation of random percolation [27], we start with N unconnected nodes and consider a discrete time process. At each timestep, an edge is selected at random from the set of all possible edges and added to the network. Initially the largest connected component (LCC) is sublinear in N; above a critical edge density it spans a finite fraction of the network and the LCC is referred to as the giant component. Controlling the location of the critical point is of great interest in many systems – for instance, suppressing the formation of the giant component may reduce the likelihood of virus spreading in social contact networks. This can be achieved by selecting M>1 candidate edges at each timestep and adding the edge that optimizes some criterion. The general class of models that results from this choice is referred to as competitive percolation or an Achlioptas process [28, 29]. While simple, Achlioptas processes often have the benefit of being scalable and numerically analyzable, and they provide a parameter, M, for tuning how closely the formation process matches the desired criterion. Note that when M is equal to the number of possible edges, we always add the edge that is optimal with respect to the selection criterion. Previous percolation-based recovery models typically measure solution quality by how quickly the LCC grows [11, 12], or assume nodes which are not connected to the LCC to be nonfunctional [10, 13]. However, empirical studies of recovery scenarios suggest that these assumptions do not apply to infrastructure networks after catastrophic scenarios [18].
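To make the kinetic formulation concrete, the following is a minimal sketch of a competitive (Achlioptas-type) percolation process with a pluggable selection rule; it is purely illustrative and not the implementation used in this paper. Nodes are assumed to be labeled 0, ..., N-1, and component bookkeeping is done with a standard union-find structure.

import random

class UnionFind:
    """Disjoint-set forest tracking the size of every component."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return
        if self.size[ri] < self.size[rj]:                  # union by size
            ri, rj = rj, ri
        self.parent[rj] = ri
        self.size[ri] += self.size[rj]

def competitive_percolation(candidate_edges, uf, M, score, rng=random):
    """Add one edge per timestep: sample min(M, remaining) candidates uniformly at
    random and add the one maximizing score(uf, i, j), breaking ties at random."""
    remaining = list(candidate_edges)
    order = []
    while remaining:
        candidates = rng.sample(remaining, min(M, len(remaining)))
        scores = [score(uf, i, j) for i, j in candidates]
        best = max(scores)
        i, j = rng.choice([e for e, s in zip(candidates, scores) if s == best])
        uf.union(i, j)
        remaining.remove((i, j))
        order.append((i, j))
    return order

With a constant score function (so that every candidate ties and the choice is uniform), the process reduces to random percolation for any M; the selection rules discussed later are obtained by plugging in different score functions.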

In this work, we aim to narrow the gap between topology-based recovery approaches and computationally difficult optimization approaches by incorporating features which mirror infrastructure restoration processes. We start by applying a generalized version of a well-studied NDP recovery algorithm [22, 23, 30] to a small case study, and we identify the satisfaction of demand as a key driving force in the initial periods of recovery, outranking operational efficiency and direct repair costs of network elements in importance. Motivated by this finding, we define a simple, competitive percolation-based model of recovery that aims to maximize the satisfaction of consumer demand in a greedy manner. We show that component size anti-correlates with the likelihood of further growth, leading to islanding and the suppression of the emergence of large-scale connectivity, similar to explosive percolation transitions [29] and in contrast with traditional recovery models. We apply our recovery algorithm to synthetic power grids to systematically investigate how realistic structural features of the network affect the efficiency of the recovery process. We learn that high structural redundancy (related to the existence of multiple paths between nodes) allows for reduced total cost and faster recovery time; however, to benefit from that redundancy, an increasing amount of information needs to be considered at each step of reconstruction. We also study the role of the ratio of suppliers to consumers and find that decentralized supply generally benefits recovery efforts, unless the fraction of suppliers becomes unrealistically high. Our model deepens our understanding of network formation during recovery and of the relationship between network structure and recoverability. We anticipate that our work can lead to efficient approximations of the NDP algorithm by leveraging the important mechanisms uncovered by our competitive percolation model.

Model

Problem statement and the optimal recovery model

We are interested in the problem of restoring the operation of a critical infrastructure system after sustaining large-scale damage. The infrastructure network is represented by a graph \mathcal{G}=(\mathcal{N},\mathcal{E}), where \mathcal{N} is the set of N nodes corresponding to substations and \mathcal{E} is the set of E edges corresponding to transmission lines, e.g., power lines, water or gas pipes. We introduce the parameter d_{i} representing the commodity demand of node i: if d_{i}<0, node i is a net consumer; if d_{i}>0, node i is a net supplier. We normalize d_{i} such that the total consumption (or production) sums to unity, i.e., \frac{1}{2}\sum_{i=1}^{N}\left|d_{i}\right|=1. Following a catastrophic event, a subset of the network \mathcal{G}^{\prime}=(\mathcal{N}^{\prime},\mathcal{E}^{\prime}) becomes damaged. We study a discrete time reconstruction process: in each timestep we fix one damaged component, and the process ends once the entire network is functional. Our goal is to identify a sequence in which to repair the elements such that the total cost of recovery is minimized. In this manuscript, we focus on the fundamental case where all links are damaged but nodes remain functional, i.e., \mathcal{E}^{\prime}=\mathcal{E} and \mathcal{N}^{\prime}=\emptyset.
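As a tiny illustration of the normalization convention above (supply positive, demand negative, and \frac{1}{2}\sum_{i}|d_{i}|=1), the helper below rescales an arbitrary demand vector; the example values are hypothetical.

def normalize_demands(d):
    """Rescale demands so that half of the total absolute demand equals one."""
    scale = 0.5 * sum(abs(v) for v in d)
    return [v / scale for v in d]

# one supplier feeding three equal consumers:
# normalize_demands([3.0, -1.0, -1.0, -1.0])  ->  [1.0, -0.33..., -0.33..., -0.33...]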

Optimization frameworks are often used to explore the space of possible repair sequences and identify the best solutions. We implement the time-dependent network design problem (td-NDP) algorithm [22, 23, 30], a well-known example of an optimization algorithm for network recovery developed by the civil engineering community. Of the recovery processes we examine in this paper, td-NDP is the most realistic, and therefore the most computationally complex. It is formulated as a mixed integer program which optimizes a cost that includes reconstruction costs of network components, operational costs, and penalties incurred for unsatisfied demand, while taking constraints on flows of commodities into account. In general, mixed integer programs are known to be NP-hard except in special cases. For the td-NDP, this means problems become exponentially harder as the size of the network to be reconstructed increases; it is therefore common practice to break up the recovery into time windows of length T and find the locally optimal solution in each window. A formal definition of td-NDP is provided in the Methods section.

To uncover the key driving factors and properties of the recovery process, we apply the td-NDP to a representative example, the power grid of Shelby County, Tennessee, which consists of 9 suppliers and 37 consumers (with 14 junction nodes where d_{i}=0), connected by 76 transmission lines. Network topology and the necessary parameters were obtained from Refs. [22, 23]. Figure 1a shows the total repair cost as a function of time as we perform the td-NDP with T=5 on a network that was initially completely destroyed, with cost broken down by the type of expense. We see that deficit cost (i.e., the penalty accrued for unsatisfied demand) is overwhelmingly dominant and exponentially decreasing throughout the initial stages of recovery. Investigating how this impacts the growth of the components, Figure 2a shows the commodity deficit or surplus of each connected component throughout the recovery process, with circle sizes representing component size and colors representing whether a component is over- (blue) or undersupplied (red). We see that the td-NDP process initially results in many small components with relatively small surplus/deficit, and only towards the end of the process are all components joined. This is consistent with the islanding techniques discussed in engineering practice. Our goal is to develop a simple and computationally efficient model of the recovery process that captures these key features.

Competitive percolation optimizing for LCC growth

Previous topology-based recovery processes prioritize the rapid growth of the LCC [11, 12, 10, 13]. Models vary in the details, such as the type of failure (random, localized, catastrophic) and additional secondary objectives (such as prioritizing nodes based on population), but the metric for the quality of the solution is directly related to how quickly the LCC grows.

As a representative example of topology-driven recovery strategies, we implement an Achlioptas process using a selection rule that maximizes the size of the resulting component, which we refer to as LCC percolation. In this process, we randomly select M>0 candidate edges out of the set of damaged edges at each discrete timestep. We then examine the impact that each individual edge would have and select the edge that, when added, creates the largest connected component. More specifically, let s_{i} denote the size of the component to which node i belongs. If nodes i and j belong to separate components, repairing edge (i,j) creates a component of size S_{ij}=s_{i}+s_{j}; if they belong to the same component, the size of the component does not change and we set S_{ij}=0. Out of the M candidate edges, we repair the one that maximizes S_{ij}; if multiple candidate links have the same maximal S_{ij}, we select one of them uniformly at random. If M=1, the process is equivalent to traditional percolation. If M=E, the process is largely deterministic: we always repair an edge that is optimal with respect to the selection criterion.
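Expressed as a score callback for the competitive percolation sketch given in the Introduction (again purely illustrative, not the authors' code), the LCC-percolation rule is:

def lcc_score(uf, i, j):
    """S_ij: size of the component created by repairing (i, j), or 0 if the
    endpoints already belong to the same component."""
    ri, rj = uf.find(i), uf.find(j)
    return 0 if ri == rj else uf.size[ri] + uf.size[rj]

# e.g. competitive_percolation(damaged_edges, UnionFind(n), M=10, score=lcc_score)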

We now apply LCC percolation as a model for the recovery of the Shelby County power grid and compare the results to the benchmark td-NDP process. Figure 2b shows that if we use growth of the LCC as our objective, the LCC, represented by the largest circle, grows rapidly throughout the process, as expected. However, the deficit/surplus of this component fluctuates greatly. As indicated by the magnitude of the total commodity deficit D, i.e., the total unsatisfied demand in the network, in Figure 1b, such a recovery algorithm is costly and leaves large portions of the grid without power until the final steps of the recovery algorithm. To conclude, we find that a recovery process based on quick growth of the LCC is neither cost-efficient nor effective at satisfying consumer demand as quickly as possible. As a result, we do not consider this algorithm in further studies.

Competitive percolation optimizing for demand satisfaction

We have shown that the key driving factor in recovery processes is the reduction of the total commodity deficit, i.e., the total unsatisfied demand, and that optimizing for LCC growth fails to capture this. To capture the essence of real recovery strategies, we propose a competitive percolation process, which we refer to as recovery percolation, that aims to directly reduce the unsatisfied demand instead of optimizing for LCC growth. In addition to network topology, this recovery process also takes into account the net demand or production of the individual nodes.

We define D_{i} as the commodity deficit of the connected component to which node i belongs. We assume that the capacities of the transmission lines are sufficient and thus do not limit the flow of commodities during the recovery process, a common practice in the infrastructure recovery literature [14, 22, 23]. Therefore the commodity deficit of a component is the sum of the demand or supply of the individual nodes belonging to the component, i.e., D_{i}=\sum_{j\in\mathcal{C}_{i}}d_{j}, where \mathcal{C}_{i} is the set of nodes belonging to the component containing node i.

We use the commodity deficit of the components as the selection criterion for the competitive percolation model to account for the goal of balancing supply and demand. Similar to the LCC percolation discussed above, we randomly select M>0 damaged candidate edges, from which one is chosen to be repaired and added to the network at each timestep as follows. We first consider how much demand would be met by adding each of the M edges individually to the network. More specifically, if nodes i and j belong to components such that D_{i}D_{j}<0, then repairing edge (i,j) reduces the total commodity deficit by \Delta D=\min(\lvert D_{i}\rvert,\lvert D_{j}\rvert); if D_{i}D_{j}\geq 0, then there is no commodity deficit reduction, i.e., \Delta D=0. Out of the M candidate edges, we repair the one that maximizes \Delta D; if multiple candidates have the same maximal \Delta D, we repair one of them chosen uniformly at random. As M shrinks and approaches 1, the process becomes more stochastic, while if M=E the process is largely deterministic, as we always select an edge that is optimal with respect to the selection criterion.
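In terms of the earlier sketch, recovery percolation only requires carrying each component's commodity deficit alongside its size; the small illustrative extension below (our own naming, not the authors' code) does exactly that and implements the \Delta D rule.

class DeficitUnionFind(UnionFind):
    """Union-find that also tracks the commodity deficit D of every component."""
    def __init__(self, demands):
        super().__init__(len(demands))
        self.deficit = list(demands)            # singleton components: D_i = d_i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        super().union(i, j)
        if ri != rj:                            # merged components: deficits add up
            self.deficit[self.find(ri)] = self.deficit[ri] + self.deficit[rj]

def deficit_score(uf, i, j):
    """Delta D: demand met by repairing (i, j); positive only when a surplus
    component is joined to a deficit component."""
    ri, rj = uf.find(i), uf.find(j)
    Di, Dj = uf.deficit[ri], uf.deficit[rj]
    if ri == rj or Di * Dj >= 0:
        return 0.0
    return min(abs(Di), abs(Dj))

# e.g. competitive_percolation(damaged_edges, DeficitUnionFind(demands), M=10, score=deficit_score)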

Figure 1b shows that for the Shelby County power grid the total commodity deficit during recovery percolation with M=E=76 well approximates that of the td-NDP, especially at the beginning of the recovery process when costs are much higher. We also see that even for M=10, corresponding to only \sim 13\% of the total edges considered at each timestep, the approximation remains very effective. Figure 2c shows dynamical behavior in recovery percolation similar to that of the td-NDP solution (cf. Figure 2a): the formation of large components is delayed, and they tend to have smaller commodity deficits.

Results

In the following, we apply the recovery percolation model to various network topologies to identify important mechanisms driving network formation and to understand how network structure affects the efficiency of recovery efforts. For each synthetically generated network, the demand distribution is chosen to approximate the demand observed in real power grids (details are provided in the Methods section).

Recovery percolation on complete networks

We have shown that recovery percolation follows our benchmark td-NDP solution closely on a real-world topology. We also observed that the growth of connected components is suppressed under recovery percolation as compared to LCC percolation. To understand this behavior, we study large systems with N=10^{4} nodes and allow potential edges to exist between any node pair, removing underlying topology constraints. Note that the td-NDP process is intractable for networks of this size.

Figure 3 (main figure) shows the growth of the LCC for a range of M values. For M=1, the model reduces to random percolation, which has a second-order phase transition at t/N=0.5; above this critical point the LCC becomes proportional to N. As we increase M, the apparent transition point shifts to higher values and approaches 1, indicating that the appearance of large-scale connectivity is suppressed; however, once the transition point is reached, the growth of the LCC becomes increasingly abrupt. This observation is analogous to explosive percolation, where links are explicitly chosen to delay component growth [29]. In recovery percolation, by contrast, the delay is an indirect consequence of the restoration strategy.

To understand the underlying mechanism of component growth, we plot the average component sizes and their corresponding average undersupply at various points during the reconstruction process in the bottom row of plots in Figure 3. Note that the average oversupply behaves in a similar manner, but is omitted for clarity. The left column of plots in Figure 3 shows the same quantities at fixed LCC size. The main trend we observe at any given point in time is that for large enough M there is a negative correlation between component size and undersupply, and this correlation becomes stronger as M increases. This means that as components grow their commodity deficit is reduced, and therefore the likelihood of further growth is also reduced, ultimately suppressing the appearance of large-scale connectivity.

These two observed features also characterize islanding, an intentional strategy in resilience planning and recovery of real-world power grids. This islanding behavior is already observed in early stages of the restoration process, becoming more apparent as t approaches the transition point.

Recovery percolation on synthetic power grids

So far, we have investigated the recovery process on an underlying graph without topological constraints. To analyze more realistic networks, we now turn our attention to synthetic power grids [31, 32]. This allows us to systematically investigate how typical structural features of power grids affect the efficiency of the recovery process.

Power grids are spatially embedded networks, and physical constraints limit the maximum number of connections a node can have; their degree distributions, therefore, have an exponential tail, in contrast to many complex networks that display high levels of degree heterogeneity. Power grids typically have average degree \left\langle k\right\rangle between 2.5 and 3 [32, 33]. An important requirement of power grids is structural redundancy, meaning that the failure of a single link cannot cause the network to fall into disconnected components. A network without redundancy has a tree structure: it has average degree 2 and all node pairs are connected through a unique path. Any additional link creates loops and improves redundancy. Structural redundancy can be characterized locally by counting short-range loops; for example, power grids have a high clustering coefficient c, typically ranging between 0.05 and 0.1 [32]. The algebraic connectivity, denoted by \lambda_{2}, is the second smallest eigenvalue of the network's Laplacian matrix and provides a measure of global redundancy: it is related to the number of links that have to be removed in order to break the network into two similarly sized components, with higher values corresponding to higher redundancy. The exact value of \lambda_{2} depends on system size: for a given number of nodes, \lambda_{2} is minimal for a tree structure and monotonically increases as further links are added [34].
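Both redundancy indicators are straightforward to measure; the snippet below does so with networkx, which is simply our illustrative tool choice here (the clustering coefficient captures short loops, the algebraic connectivity global redundancy).

import networkx as nx

def redundancy_metrics(G):
    c = nx.average_clustering(G)          # local redundancy: density of short loops
    lam2 = nx.algebraic_connectivity(G)   # global redundancy: 2nd-smallest Laplacian eigenvalue
    return c, lam2

# A tree such as nx.path_graph(100) gives c = 0 and a lambda_2 close to zero;
# adding chords to the tree raises both quantities.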

To generate networks that exhibit these features, we use a simplified version of a practical model developed by Schultz et al. [32]. The model generates spatially embedded networks mimicking the growth of real-world power grids. The process is initiated by randomly placing N_{0} nodes on the unit square and connecting them with their minimum spanning tree. To increase redundancy, qN_{0} (0\leq q\leq 1) links are added one by one, such that each link is selected to minimize the redundancy-cost trade-off function

f(i,j)=\frac{(d_{\textrm{net}}(i,j)+1)^{r}}{d_{\textrm{euc}}(i,j)}, \qquad (1)

where i and j are two nodes not connected directly, d_{\textrm{net}}(i,j) is their shortest path distance in the network, and d_{\textrm{euc}}(i,j) is their Euclidean distance. The parameter r\geq 0 controls the trade-off between creating long loops to improve redundancy and the cost of building power lines. After the initialization, we add N-N_{0} nodes through a growth process. In each time step a new node is added: with probability 1-s the node is placed at a random position and connects to the nearest node; with probability s a randomly selected link (i,j) is split, and a new node is placed halfway between nodes i and j and is connected to both of them. To increase redundancy, in each time step an additional link is added with probability q, connecting a randomly selected node i to the node j that minimizes f(i,j). Finally, a fraction p_{\textrm{s}} of nodes is randomly selected to be suppliers; the rest are assigned to be consumers.
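The generator just described is compact enough to sketch directly; the implementation below is our own condensed, unoptimized reading of it (parameter names follow the text, everything else is an illustrative choice), and it makes no attempt to reproduce the full model of Ref. [32].

import math
import random
import networkx as nx

def tradeoff(G, pos, i, j, r):
    """Redundancy-cost trade-off f(i, j) of Eq. (1); smaller is better."""
    return (nx.shortest_path_length(G, i, j) + 1) ** r / math.dist(pos[i], pos[j])

def add_redundant_link(G, pos, r, anchor=None):
    """Add the missing link minimizing f(i, j); if `anchor` is given, one endpoint is fixed."""
    nodes = list(G.nodes())
    if anchor is None:
        pairs = [(i, j) for a, i in enumerate(nodes) for j in nodes[a + 1:] if not G.has_edge(i, j)]
    else:
        pairs = [(anchor, j) for j in nodes if j != anchor and not G.has_edge(anchor, j)]
    if pairs:
        G.add_edge(*min(pairs, key=lambda e: tradeoff(G, pos, *e)))

def synthetic_grid(N, N0=10, q=0.33, r=1.0, s=0.0, p_s=0.3, seed=None):
    rng = random.Random(seed)
    pos = {i: (rng.random(), rng.random()) for i in range(N0)}
    K = nx.complete_graph(N0)                        # initialization: minimum spanning tree
    for i, j in K.edges():
        K[i][j]["weight"] = math.dist(pos[i], pos[j])
    G = nx.minimum_spanning_tree(K)
    for _ in range(int(q * N0)):                     # plus q*N0 redundant links chosen by Eq. (1)
        add_redundant_link(G, pos, r)
    for new in range(N0, N):                         # growth phase
        if rng.random() < s:
            i, j = rng.choice(list(G.edges()))       # split a randomly selected line
            pos[new] = ((pos[i][0] + pos[j][0]) / 2, (pos[i][1] + pos[j][1]) / 2)
            G.remove_edge(i, j)
            G.add_edge(i, new)
            G.add_edge(new, j)
        else:
            pos[new] = (rng.random(), rng.random())  # place randomly, attach to nearest node
            nearest = min(G.nodes(), key=lambda m: math.dist(pos[m], pos[new]))
            G.add_edge(new, nearest)
        if rng.random() < q:                         # extra redundant link with probability q
            add_redundant_link(G, pos, r, anchor=rng.choice(list(G.nodes())))
    suppliers = set(rng.sample(list(G.nodes()), int(p_s * N)))
    return G, pos, suppliers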

Changing the parameters q, r, and s allows us to systematically explore how they impact the structure of these model power grids (Fig. 4): q controls the average degree \left\langle k\right\rangle=2(1+q) and adds redundancy to the network; r controls how loops are formed, where small r favors short-distance connections, leading to high c and low \lambda_{2}, while large r favors long loops, leading to low c and high \lambda_{2}; and s increases typical distances in the network, lowering both c and \lambda_{2}.

Comparing recovery percolation and td-NDP

We first consider a set of parameters that yields typical topologies and compare the performance of recovery percolation to that of the locally optimal td-NDP recovery. We choose the parameters to create networks similar to the Western US grid, following the specifications of Ref. [32] (N=10^{3}, q=0.33, r=1, and s=0). For the td-NDP analysis, we reduce the time window from T=5, as used in the Shelby County model, down to T=2 for tractability reasons, since our synthetic networks are much larger (increasing T causes an exponential increase in complexity). While this results in a suboptimal solution, it still considers future timesteps, a dimension not present in percolation models. Figure 5a shows the growth of the LCC for the td-NDP process and for recovery percolation with M varying from 1 to 100. For recovery percolation we observe behavior similar to that seen in the studies on complete networks: as M is increased, the growth of the LCC is suppressed and the formation of large-scale connectivity is delayed; once it forms, however, it grows more rapidly. For large M, recovery percolation closely resembles the td-NDP recovery in terms of LCC formation.

Since the dominant cost factor in the recovery of infrastructure networks is the total commodity deficit D(t), this, rather than the size of the LCC, is the most important metric of network recoverability. Figure 5b shows the reduction of D(t) throughout the recovery process. As M increases, we see a closer fit with td-NDP, especially in the more expensive early stages of recovery. Surprisingly, M=10 approximates the total commodity deficit quite well, while being significantly less computationally intensive than td-NDP or the deterministic version of recovery percolation (M=E).

To better understand how the choice of M affects the quality of recovery percolation, we calculate the total cost C_{M}, defined here as the area under the curve D(t) over time (i.e., C_{M}=\sum_{t}D(t)), as a function of M. Figure 5c shows that C_{M} rapidly approaches C_{\infty}, its value at M=E. For this particular case, we only need to consider M=20 candidate edges, that is, less than 2\% of the edges, at each timestep to get within 10\% of the optimal cost.

It is worth highlighting that recovery percolation captures the essential properties of the td-NDP process for T=2, despite the fact that recovery percolation only considers the commodity deficit, while td-NDP takes into account details such as heterogeneous repair costs of individual power lines and operational costs, computes network flows, and selects recovery actions that are optimal over a two-timestep horizon.

Effect of network structure on recovery percolation

Recovery percolation, together with the synthetic power grid networks, provides a stylized model to extract key network features that impact the efficiency of the restoration process. To this end, we systematically investigate how typical structural features affect the following quantities (a short sketch for computing these metrics from simulated deficit curves is given after the list):

  1. Total optimal cost of recovery C_{\infty}, which is the minimum cost obtainable with recovery percolation (M=E).

  2. Time to recovery t_{90}, the number of timesteps needed to reduce the total commodity deficit by 90%.

  3. Characteristic M^{*}, which captures the approximability of the process. It is defined as the smallest M value for which C_{M}\leq 1.2C_{\infty}.
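These three quantities are simple summaries of the simulated deficit curves D_{M}(t); the illustrative helpers below show one way to extract them (the curve and dictionary formats are our own assumptions).

def total_cost(deficit_curve):
    """C_M: area under the total-commodity-deficit curve, i.e., the sum over t of D(t)."""
    return sum(deficit_curve)

def time_to_recovery(deficit_curve, fraction=0.9):
    """t_90: first timestep at which the initial deficit has dropped by `fraction`."""
    target = (1.0 - fraction) * deficit_curve[0]
    return next(t for t, D in enumerate(deficit_curve) if D <= target)

def characteristic_M(curves_by_M, tolerance=0.2):
    """M*: smallest M whose total cost is within `tolerance` of the M = E optimum
    (the largest key of `curves_by_M` is assumed to correspond to M = E)."""
    C_inf = total_cost(curves_by_M[max(curves_by_M)])
    return min(M for M, curve in curves_by_M.items()
               if total_cost(curve) <= (1.0 + tolerance) * C_inf)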

Simulations show that the redundancy q, which controls the average degree, has the strongest effect (Figs. 6a-c). Increasing redundancy lowers both the optimal total cost C_{\infty} and the recovery time t_{90}; however, it increases M^{*}, meaning that more edges need to be sampled to approximate the optimal solution. This observation is robust to the choice of the other parameters. Redundancy increases the number of possible ways to reconnect the network, allowing less costly reconstruction strategies, but it also means that more paths must be explored to pick out the optimal one. The effect of r is more subtle: we find that long-range shortcuts (r=10) further decrease C_{\infty} and t_{90}, while short cycles (r=0) have the opposite effect. The value of r has little effect on M^{*}.

The effect of line splitting depends on both the fraction of suppliers p_{\textrm{s}} and the redundancy q (Figs. 6d-f). For centralized supply (p_{\textrm{s}}=0.05), we find that in the case of low redundancy, s increases the cost C_{\infty} and the recovery time t_{90}, while in the case of high redundancy, s has the opposite effect, reducing C_{\infty} and t_{90}. Independent of the value of q, the characteristic M^{*} is significantly increased. For distributed supply (p_{\textrm{s}}=0.3), we find that both C_{\infty} and t_{90} are increased by s independently of the value of q, while the value of M^{*} is increased by s for high q and decreased for low q.

Finally, the fraction of suppliers p_{\textrm{s}} also strongly influences the recovery process (Figs. 6g-i). The total optimal cost C_{\infty} and recovery time t_{90} are high for very centralized (low p_{\textrm{s}}) and very distributed (high p_{\textrm{s}}) supply, with a minimum in between. If the demand and supply follow the same distribution, the minimum is at p^{*}_{\textrm{s}}=0.5. For our choice, the demand is more heterogeneously distributed than the supply, resulting in p^{*}_{\textrm{s}}<0.5. Increasing p_{\textrm{s}} also allows easier approximation of the optimal solution, i.e., M^{*} decreases with increasing p_{\textrm{s}} (with the exception of low q and p_{\textrm{s}}).

Overall, we find that high structural redundancy reduces the optimal cost and time of recovery; however, a higher edge sampling M is needed to benefit from this reduction. Long-range shortcuts in the network further reduce the cost without significantly increasing M. Distributed supply is also beneficial, reducing both cost and recovery time, and, depending on the level of redundancy, it may also improve approximability.

Discussion

We investigated the problem of optimal cost reconstruction of critical infrastructure systems after catastrophic events. We started by analyzing realistic recovery strategies for a small-scale case study, the power grid of Shelby County, TN. We identified the penalty incurred for over- and undersupply of commodities as the main contribution to the cost, outranking operational and repair costs by orders of magnitude in the initial periods of recovery. Motivated by this observation, we introduced the recovery percolation model, a competitive percolation model that, in addition to network structure, also takes the demand and supply associated with each node into account. The advantage of our stylized model is that it is computationally tractable and easy to interpret compared to the complex optimization problems studied in the civil engineering literature, while adequately reproducing important features of realistic recovery processes. This allows us to identify underlying mechanisms of the recovery process. For example, we showed that component size anti-correlates with unsatisfied demand, which suppresses the emergence of large-scale connectivity through a process analogous to explosive percolation. Such a suppression of large-scale connectivity can in fact be observed in real recovery events [18]. The model also allowed us to systematically investigate the effect of typical network characteristics on the efficiency of the recovery process using synthetic power grids.

The computational complexity of identifying actionable reconstruction strategies is an open issue, especially in the case of interdependent and decentralized recovery scenarios, where systems are larger and the optimization problem must be solved numerous times [22, 30, 35]. Our stylized model is efficient, but still ignores details. Similar to td-NDP, these strategies provide scenarios that may be useful for developing recovery-operator-based approaches to mathematically model the dynamics of recovery and enable the development of data-driven control approaches [36]. Further work is needed to extend our model to the simultaneous recovery of multiple critical infrastructure systems, explicitly taking into account interdependencies between the systems. Competitive percolation strategies in general can provide opportunities for modeling real-world processes. For instance, in addition to this application to recovery, there is recent work applying competitive percolation strategies to suppress the outbreak of epidemics via targeted immunization [37].

\text{td-NDP}=\text{minimize}\quad \sum_{t\in\mathcal{T}\mid t>0}\Big(\sum_{(i,j)\in\mathcal{E}^{\prime}}f_{ijt}\widetilde{y}_{ijt}+\sum_{i\in\mathcal{N}^{\prime}}q_{it}\widetilde{w}_{it}\Big)+\sum_{t\in\mathcal{T}}\Big(\sum_{i\in\mathcal{N}}\boxed{(M^{+}_{it}\delta^{+}_{it}+M^{-}_{it}\delta^{-}_{it})}+\sum_{(i,j)\in\mathcal{E}}c_{ijt}x_{ijt}\Big) (2a)

subject to,

\sum_{j:(i,j)\in\mathcal{E}}x_{ijt}-\sum_{j:(j,i)\in\mathcal{E}}x_{jit}=b_{it}-\delta^{-}_{it}+\delta^{+}_{it}, \quad \forall i\in\mathcal{N},\ \forall t\in\mathcal{T}, (2b)
x_{ijt}\leq u_{ijt}w_{it}, \quad \forall(i,j)\in\mathcal{E},\ \forall t\in\mathcal{T}, (2c)
x_{ijt}\leq u_{ijt}w_{jt}, \quad \forall(i,j)\in\mathcal{E},\ \forall t\in\mathcal{T}, (2d)
x_{ijt}\leq u_{ijt}y_{ijt}, \quad \forall(i,j)\in\mathcal{E},\ \forall t\in\mathcal{T}, (2e)
\sum_{i\in\mathcal{N}^{\prime}}\widetilde{w}_{it}+\sum_{(i,j)\in\mathcal{E}^{\prime}}\widetilde{y}_{ijt}\leq 1, \quad \forall t\in\mathcal{T}\mid t>0, (2f)
w_{it}\leq\sum_{\tilde{t}=1}^{t}\widetilde{w}_{i\tilde{t}}, \quad \forall i\in\mathcal{N}^{\prime},\ \forall t\in\mathcal{T}\mid t>0, (2g)
y_{ijt}\leq\sum_{\tilde{t}=1}^{t}\widetilde{y}_{ij\tilde{t}}, \quad \forall(i,j)\in\mathcal{E}^{\prime},\ \forall t\in\mathcal{T}\mid t>0, (2h)
\delta^{+}_{it}\geq 0, \quad \forall i\in\mathcal{N},\ \forall t\in\mathcal{T}, (2i)
\delta^{-}_{it}\geq 0, \quad \forall i\in\mathcal{N},\ \forall t\in\mathcal{T}, (2j)
x_{ijt}\geq 0, \quad \forall(i,j)\in\mathcal{E},\ \forall t\in\mathcal{T}, (2k)
w_{it}\in\{0,1\}, \quad \forall i\in\mathcal{N},\ \forall t\in\mathcal{T}, (2l)
y_{ijt}\in\{0,1\}, \quad \forall(i,j)\in\mathcal{E},\ \forall t\in\mathcal{T}, (2m)
\widetilde{w}_{it}\in\{0,1\}, \quad \forall i\in\mathcal{N}^{\prime},\ \forall t\in\mathcal{T}, (2n)
\widetilde{y}_{ijt}\in\{0,1\}, \quad \forall(i,j)\in\mathcal{E}^{\prime},\ \forall t\in\mathcal{T}. (2o)

Methods

Time-dependent NDP

Here, we define our benchmark model for network recovery: the time-dependent network design problem (td-NDP). Our version follows the more general formulation developed by González et al. [22, 23, 30]. The td-NDP takes a graph \mathcal{G}=(\mathcal{N},\mathcal{E}), where \mathcal{N} is the set of nodes and \mathcal{E} is the set of edges connecting them. At the beginning of the recovery process the td-NDP uses the destroyed graph \mathcal{G}^{\prime}=(\mathcal{N}^{\prime},\mathcal{E}^{\prime}), where \mathcal{N}^{\prime} and \mathcal{E}^{\prime} represent the nodes and edges that are not functioning, respectively. The objective function (cf. Equation (2a)) minimizes the total reconstruction cost over a given time domain \mathcal{T} with t\in\mathcal{T}, which includes the cost to repair nodes, q_{it}, the cost to repair edges, f_{ijt}, the cost of flow on each edge, c_{ijt}, and oversupply and undersupply penalties for each node, M^{+}_{it} and M^{-}_{it}. These costs usually depend on multiple factors, such as the level of damage, the type and size of the components to be restored, their geographical accessibility, the amount of labor and resources required, and the social vulnerability of the affected areas, among others [22, 38, 39]. To keep track of demand satisfaction, each node also has a supply capacity (demand if negative), b_{it}. In the most general formulation of the problem, the node supply b_{it} can depend on time t, but in this paper we only consider constant values. The variables \delta^{+}_{it} (\delta^{-}_{it}) account for the oversupply (undersupply) of node i. We refer to the sum of the absolute values of oversupply and undersupply (\left|\delta^{+}_{it}\right|+\left|\delta^{-}_{it}\right|) as the commodity deficit of node i. The decision variables of the td-NDP are the amount of flow on each edge, x_{ijt}; whether node i [edge (i,j)] is chosen to be recovered at timestep t, \widetilde{w}_{it} (\widetilde{y}_{ijt}); and whether node i [edge (i,j)] is functional at timestep t, w_{it} (y_{ijt}). Constraints (2b)-(2o) are imposed to ensure that flow conservation holds and that only recovered and functional nodes can produce or consume flow.

The td-NDP formulation is a mixed integer program, which has been shown to be, in general, NP-hard (and becomes exponentially harder as \mathcal{T} and \left|\mathcal{G}^{\prime}\right| grow). The number of variables and constraints also grows with the size of the input graph. For many reasonably sized problems, computing a global optimum (i.e., where \mathcal{T} contains the entire time horizon for recovery) is intractable. Therefore, heuristics are used to restrict the size of \mathcal{T} by dividing the total recovery time into smaller windows and finding the locally optimal solutions within these windows [22]. It has been shown that such a heuristic finds solutions very close to the optimum; however, the computational complexity is still relatively high as a result of the underlying mixed-integer program.
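To illustrate the structure of such a formulation without reproducing the full td-NDP of Equations (2a)-(2o), the sketch below sets up a heavily simplified variant: a single commodity, nodes always functional, window length one, a uniform deficit penalty, no operational cost, and non-binding line capacities. The sign convention follows the main text (d_i > 0 supply, d_i < 0 demand) rather than the b_{it}, \delta^{\pm}_{it} notation above, and the open-source PuLP/CBC toolchain is simply our illustrative choice of solver.

import pulp

def repair_one_step(nodes, edges, damaged, d, repair_cost, penalty=1e3):
    """Pick the single damaged edge whose repair minimizes penalty plus repair cost."""
    arcs = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]   # undirected line -> two arcs
    big = sum(abs(v) for v in d.values())                            # non-binding capacity

    prob = pulp.LpProblem("simplified_NDP_window", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("flow", arcs, lowBound=0)
    y = pulp.LpVariable.dicts("repair", list(damaged), cat="Binary")
    over = pulp.LpVariable.dicts("oversupply", list(nodes), lowBound=0)
    under = pulp.LpVariable.dicts("undersupply", list(nodes), lowBound=0)

    # deficit penalties dominate the objective, as observed for the td-NDP
    prob += pulp.lpSum(penalty * (over[i] + under[i]) for i in nodes) \
          + pulp.lpSum(repair_cost[e] * y[e] for e in damaged)

    for i in nodes:                                  # flow conservation with slack variables
        outflow = pulp.lpSum(x[a] for a in arcs if a[0] == i)
        inflow = pulp.lpSum(x[a] for a in arcs if a[1] == i)
        prob += outflow - inflow == d[i] - over[i] + under[i]

    for i, j in edges:                               # damaged lines carry no flow until repaired
        cap = big * y[(i, j)] if (i, j) in damaged else big
        prob += x[(i, j)] <= cap
        prob += x[(j, i)] <= cap

    prob += pulp.lpSum(y[e] for e in damaged) <= 1   # one repair per timestep
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    repaired = [e for e in damaged if y[e].value() > 0.5]
    return repaired[0] if repaired else None

def rolling_recovery(nodes, edges, d, repair_cost):
    """Window-by-window heuristic: re-solve the small program after every repair."""
    damaged, order = set(edges), []                  # all lines initially destroyed
    while damaged:
        e = repair_one_step(nodes, edges, damaged, d, repair_cost)
        if e is None:                                # no repair lowers the cost further
            e = damaged.pop()
            order.append(e)
            continue
        damaged.remove(e)
        order.append(e)
    return order

Because the penalty term dominates, each window effectively selects the repair that most reduces the total commodity deficit, which is precisely the behavior that recovery percolation with large M approximates at a fraction of the computational cost.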

Supply and demand distribution

For our computational experiments, we generate our demand distribution following the load distribution of the European power grid [39]. This dataset was chosen due to its large system size (N=1463, E=2199) and its high resolution. Our goal is not to identify the true analytic form of the load distribution, but to generate statistically similar samples through bootstrapping. We found that an exponentiated Weibull distribution of the form f(x;a,c)=ac\left(1-e^{-x^{c}}\right)^{a-1}e^{-x^{c}}x^{c-1}, with a=3.59 and c=0.8, approximates the features of the demand distribution well. Suppliers' capacities are uniformly distributed to balance the total demand. We also take our ratio of suppliers (0.3) to consumers (0.7) from this dataset.
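A short sketch of this sampling step is given below; scipy.stats.exponweib is one possible implementation of the fitted distribution, and assigning every supplier an equal share of the total demand is a simplifying reading of the balancing rule above.

import numpy as np
from scipy.stats import exponweib

def sample_demands(n_consumers, n_suppliers, a=3.59, c=0.8, seed=None):
    """Return a demand vector d with suppliers first (d > 0) and consumers last (d < 0),
    normalized so that total consumption and total production each equal one."""
    loads = exponweib.rvs(a, c, size=n_consumers, random_state=seed)
    consumers = -loads / loads.sum()                      # total consumption = 1
    suppliers = np.full(n_suppliers, 1.0 / n_suppliers)   # suppliers balance the demand
    return np.concatenate([suppliers, consumers])

# e.g. d = sample_demands(n_consumers=70, n_suppliers=30)  # ~0.3 supplier fraction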

References

  • [1] Albert, R., Jeong, H. & Barabási, A.-L. Error and attack tolerance of complex networks. Nature 406, 378–382 (2000).
  • [2] Cohen, R., Erez, K., Ben-Avraham, D. & Havlin, S. Resilience of the internet to random breakdowns. Physical Review Letters 85, 4626 (2000).
  • [3] Motter, A. E. & Lai, Y.-C. Cascade-based attacks on complex networks. Physical Review E 66, 065102 (2002).
  • [4] Li, D., Yinan, J., Rui, K. & Havlin, S. Spatial correlation analysis of cascading failures: Congestions and blackouts. Scientific Reports 4 (2014).
  • [5] Zhao, J., Li, D., Sanhedrai, H., Cohen, R. & Havlin, S. Spatio-temporal propagation of cascading overload failures in spatially embedded networks. Nature Communications 7 (2016).
  • [6] Zhong, J. Restoration of interdependent network against cascading overload failure. Physica A: Statistical Mechanics and its Applications 514, 884–891 (2018).
  • [7] Latora, V. & Marchiori, M. Efficient behavior of small-world networks. Physical Review Letters 87, 198701 (2001).
  • [8] Ash, J. & Newth, D. Optimizing complex networks for resilience against cascading failure. Physica A: Statistical Mechanics and its Applications 380, 673–683 (2007).
  • [9] Holme, P., Kim, B. J., Yoon, C. N. & Han, S. K. Attack vulnerability of complex networks. Physical Review E 65, 056109 (2002).
  • [10] Majdandzic, A. et al. Spontaneous recovery in dynamical networks. Nature Physics 10, 34 (2014).
  • [11] Hu, F., Yeung, C. H., Yang, S., Wang, W. & Zeng, A. Recovery of infrastructure networks after localised attacks. Scientific Reports 6 (2016).
  • [12] Shang, Y. Localized recovery of complex networks against failure. Scientific Reports 6 (2016).
  • [13] Di Muro, M., La Rocca, C., Stanley, H., Havlin, S. & Braunstein, L. Recovery of interdependent networks. Scientific Reports 6 (2016).
  • [14] Quattrociocchi, W., Caldarelli, G. & Scala, A. Self-healing networks: Redundancy and structure. PLOS One 9 (2014).
  • [15] Panteli, M., Trakas, D. N., Mancarella, P. & Hatziargyriou, N. D. Boosting the power grid resilience to extreme weather events using defensive islanding. IEEE Transactions on Smart Grid 7, 2913–2922 (2016).
  • [16] Mureddu, M., Caldarelli, G., Damiano, A., Scala, A. & Meyer-Ortmanns, H. Islanding the power grid on the transmission level: less connections for more security. Scientific Reports 6, 34797 (2016).
  • [17] National Research Council. Terrorism and the electric power delivery system (National Academies Press, 2012).
  • [18] Rudnick, H., Mocarquer, S., Andrade, E., Vuchetich, E. & Miquel, P. Disaster management. IEEE Power and Energy Magazine 9, 37–45 (2011).
  • [19] Johnson, D., Lenstra, J. & Kan, A. The complexity of the network design problem. Networks 8, 279–285 (1978).
  • [20] Balakrishnan, A., Magnanti, T. L. & Wong, R. T. A dual-ascent procedure for large-scale uncapacitated network design. Operations Research 37, 716–740 (1989).
  • [21] Nurre, S. G., Cavdaroglu, B., Mitchell, J. E., Sharkey, T. C. & Wallace, W. A. Restoring infrastructure systems: An integrated network design and scheduling (INDS) problem. European Journal of Operational Research 223, 794–806 (2012).
  • [22] González, A. D., Dueñas-Osorio, L., Sánchez-Silva, M. & Medaglia, A. L. The interdependent network design problem for optimal infrastructure system restoration. Computer-Aided Civil and Infrastructure Engineering 31, 334–350 (2016).
  • [23] González, A. D., Dueñas-Osorio, L., Sánchez-Silva, M. & Medaglia, A. L. The time-dependent interdependent network design problem (TD-INDP) and the evaluation of multi-system recovery strategies in polynomial time. The 6th Asian-Pacific Symposium on Structural Reliability and its Applications 544–550 (2016).
  • [24] Poorzahedy, H. & Rouhani, O. M. Hybrid meta-heuristic algorithms for solving network design problem. European Journal of Operational Research 182, 578–596 (2007).
  • [25] Gallo, M., D’Acierno, L. & Montella, B. A meta-heuristic approach for solving the urban network design problem. European Journal of Operational Research 201, 144–157 (2010).
  • [26] Li, D., Zhang, Q., Zio, E., Havlin, S. & Kang, R. Network reliability analysis based on percolation theory. Reliability Engineering and System Safety 142, 556–562 (2015).
  • [27] Krapivsky, P. L., Redner, S. & Ben-Naim, E. A kinetic view of statistical physics (Cambridge University Press, 2010).
  • [28] Achlioptas, D., D’Souza, R. M. & Spencer, J. Explosive percolation in random networks. Science 323, 1453–1455 (2009).
  • [29] D’Souza, R. M. & Nagler, J. Anomalous critical and supercritical phenomena in explosive percolation. Nature Physics 11, 531–538 (2015).
  • [30] González, A. D., Chapman, A., Dueñas-Osorio, L., Mesbahi, M. & D’Souza, R. M. Efficient infrastructure restoration strategies using the recovery operator. Computer-Aided Civil and Infrastructure Engineering 32, 991–1006 (2017).
  • [31] Wang, Z., Scaglione, A. & Thomas, R. J. Generating statistically correct random topologies for testing smart grid communication and control networks. IEEE Transactions on Smart Grid 1, 28–39 (2010).
  • [32] Schultz, P., Heitzig, J. & Kurths, J. A random growth model for power grids and other spatially embedded infrastructure networks. The European Physical Journal Special Topics 223, 2593–2610 (2014).
  • [33] Li, J., Dueñas-Osorio, L., Chen, C., Berryhill, B. & Yazdani, A. Characterizing the topological and controllability features of U.S. power transmission networks. Physica A: Statistical Mechanics and its Applications 453, 84–98 (2016).
  • [34] Van Mieghem, P. Graph spectra for complex networks (Cambridge University Press, 2010).
  • [35] Smith, A. M., González, A. D., Dueñas-Osorio, L. & D’Souza, R. M. Interdependent network recovery games. Risk Analysis (2017).
  • [36] Chapman, A., González, A. D., Mesbahi, M., Dueñas-Osorio, L. & D’Souza, R. M. Data-guided control: Clustering, graph products, and decentralized control. In Decision and Control (CDC), 2017 IEEE 56th Annual Conference on, 493–498 (IEEE, 2017).
  • [37] Clusella, P., Grassberger, P., Pérez-Reche, F. J. & Politi, A. Immunization and targeted destruction of networks using explosive percolation. Physical Review Letters 117, 208301 (2016).
  • [38] FEMA. Multi-hazard Loss Estimation Methodology, Earthquake Model - Technical Manual, Hazus - MH 2.1. Tech. Rep., Washington D.C. (2013).
  • [39] Hutcheon, N. & Bialek, J. W. Updated and validated power flow model of the main continental European transmission network. In PowerTech (POWERTECH), 1–5 (IEEE, 2013).

Acknowledgements

We gratefully acknowledge support from the U.S. Army Research Laboratory and the U.S. Army Research Office under MURI award number W911NF-13-1-0340, and from DARPA award W911NF-17-1-0077.

Author contributions statement

A.S. was responsible for experimental design and analysis; M.P. was responsible for problem scoping and providing theoretical insights; M.R. provided guidance for power grid analysis and network formation; A.G. and L.D.O. contributed recovery and optimization algorithms and background; R.D. provided percolation and network science guidance and expertise. All authors contributed to the transcription of this article.

Competing interests

The author(s) declare no competing interests.

Figure 1: Recovery of the Shelby County power network. (a) Breakdown of normalized recovery costs for the locally optimal td-NDP solution. Surplus/deficit costs are the overwhelmingly dominant factor in the early stages of recovery. (b) Total commodity deficit during the recovery of the Shelby County power network, averaged over 100 independent realizations for the percolation results; the recovery percolation model closely approximates the td-NDP baseline surplus/deficit cost curve (especially as M increases), while LCC percolation deviates.
Figure 2: Commodity deficit of each connected component over time throughout various recovery processes on the Shelby County dataset: (a) td-NDP, (b) LCC percolation, and (c) recovery percolation. Each point represents a component, where the size of the point indicates the relative size of the component. Blue circles (above 0 on the y-axis) indicate excess supply, while red circles (below 0 on the y-axis) indicate unmet demand. The final point on the x-axis indicates the first timestep at which all nodes belong to the LCC. The optimization process shown in (a) keeps component size and commodity deficit low throughout the recovery process, consistent with islanding practices. LCC percolation, (b), is vastly different, showing widely varying sizes and commodity deficits throughout the process, resulting in higher recovery costs. Our competitive recovery percolation method, (c), shows a signature closer to (a) and is much more computationally efficient, making a locally optimal choice from a sample of only M=10 candidate edges at each timestep.
Figure 3: Recovery percolation on the complete graph, showing the growth of the LCC with increasing edge density (center figure). We see that as M is increased, the transition point is delayed. Small plots show the average undersupply of connected components as a function of component size. The left column shows these distributions for various M values when the LCC reaches a given size; the bottom row shows the same at fixed time. Rare events (component sizes that occur <1\% of the time) are not plotted to eliminate noise. We note that as M is increased, the slopes trend towards being more negative, indicating that larger components have smaller deficit magnitudes throughout the recovery process when more recovery choices are presented to the recovery percolation process. All data points are the average of 10 independent realizations with N=10^{4} nodes; error bars indicate the standard error of the mean.
Figure 4: Network structure of the synthetic power grids. (a-c) Increasing q increases both local and global redundancy, i.e., the clustering c and the algebraic connectivity \lambda_{2}, while decreasing the diameter d_{\textrm{max}} of the network. High r (solid red) favors long-distance shortcuts, increasing \lambda_{2} and decreasing c, while small r (green) has the opposite effect. Increased line splitting s (light blue and purple) increases d_{\textrm{max}}, especially for low q, while reducing both c and \lambda_{2}. Each data point is an average of 100 independent network realizations with N=1000 nodes, and the error bars indicate the standard error of the mean.
Figure 5: Recovery percolation and td-NDP on synthetic power grids. (a) Size of the LCC as a function of recovery time and (b) total commodity deficit during the restoration. High M values well capture the behavior of the td-NDP process. (c) How well the total cost during recovery percolation approximates the cost of the td-NDP solution as a function of M. All data points are the average of 10 independent realizations with N=1000 nodes; error bars indicate the standard error of the mean and are typically smaller than the markers.
Figure 6: Effect of network structure on recovery. (a-f) Redundancy q reduces the optimal cost C_{\infty} and recovery time t_{90} and increases M^{*}, the number of edges that need to be sampled to approximate the optimal solution. This effect is independent of the value of r, s, and p_{\textrm{s}} (dashed and solid lines). (g-h) Centralized and unrealistically distributed supply both increase C_{\infty} and recovery time t_{90}, with a minimum at intermediately distributed supply. (i) Increasing p_{\textrm{s}} improves approximability. Total cost C_{\infty} is measured in units of (\textrm{total demand}\times\textrm{recovery timestep}). Networks were generated using parameters N=1000, q=0.33, r=1, s=0, and p_{\textrm{s}}=0.3, unless otherwise indicated in the figure. Each data point is an average of 100 independent network realizations, and the error bars indicate the standard error of the mean.