
Mass-Matrix Differential-Algebraic Equation Formulation for Transient Stability Simulation

Hantao Cui, Fangxing (Fran) Li, and Joe H. Chow

H. Cui and F. Li are with the Department of Electrical Engineering and Computer Science, The University of Tennessee, Knoxville, TN 37996 USA (e-mail: fli6@utk.edu). J. H. Chow is with the Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180 USA. This work was supported in part by the Engineering Research Center Program of the National Science Foundation and the Department of Energy under NSF Award Number EEC-1041877 and the CURENT Industry Partnership Program.
Abstract

This letter proposes a mass-matrix differential-algebraic equation (DAE) formulation for transient stability simulation. The formulation has two prominent advantages: it is compatible with a multitude of implicit DAE solvers, and it can be conveniently implemented from the traditional formulation, for example, by moving the parameters in denominators into the diagonal of the mass matrix. It also allows reducing the dynamics by setting time constants to zero. Benchmark studies on the speed and accuracy of 17 implicit solvers applied to the proposed formulation are presented using Kundur's two-area system and a 2,224-bus system.

Index Terms:
Differential-algebraic equations, mass-matrix formulation, transient stability simulation, numerical integration.

I Introduction

Power system transient stability programs traditionally employ the following explicit differential-algebraic equation (DAE) formulation:

\dot{\bm{x}} = \bm{f}(\bm{x}, \bm{y}, \bm{u})
\bm{0} = \bm{g}(\bm{x}, \bm{y}, \bm{u}),    (1)

where $\bm{f}$ ($\bm{f}: \mathbb{R}^{m+n+o} \to \mathbb{R}^{n}$) and $\bm{g}$ ($\bm{g}: \mathbb{R}^{m+n+o} \to \mathbb{R}^{m}$) are the differential and the algebraic equations, respectively, $\bm{x}$ are the differential states, $\bm{y}$ are the algebraic variables, and $\bm{u}$ are discontinuous states from discrete models.

Recent works such as [1] propose an explicit formulation that can convert differential states into algebraic ones. This is referred to as the flexible formulation given by

𝚪𝒙˙=𝒇(𝒙,𝒚,𝒖)𝟎=𝒈(𝒙,𝒚,𝒖),\begin{array}[]{rll}\bm{\Gamma}\dot{\bm{x}}&=&\bm{f}(\bm{x},\bm{y},\bm{u})\\ \bm{0}&=&\bm{g}(\bm{x},\bm{y},\bm{u})\,,\\ \end{array} (2)

where $\bm{\Gamma}$ is an $n \times n$ time-invariant diagonal matrix whose diagonal elements $\gamma_{i,i}$ satisfy

\gamma_{i,i} = \begin{cases} 1, & \text{if } x_i \text{ remains a differential state} \\ 0, & \text{if } x_i \text{ is converted to an algebraic one.} \end{cases}    (3)

More recently, a semi-implicit formulation was proposed in [2] with the ability to 1) increase the sparsity of the Jacobians, 2) reduce the computational effort, and 3) enable state-to-algebraic switching. It is given by

\bm{T}(\bm{x}, \bm{y}) \dot{\bm{x}} = \bm{\tilde{f}}(\bm{x}, \bm{y}, \bm{u})
\bm{R}(\bm{x}, \bm{y}) \dot{\bm{x}} = \bm{\tilde{g}}(\bm{x}, \bm{y}, \bm{u}),    (4)

where $\bm{T}(\bm{x},\bm{y})$ and $\bm{R}(\bm{x},\bm{y})$ are $n \times n$ and $m \times n$ time-variant, highly sparse, non-diagonal, rank-deficient matrices. This formulation, however, can be challenging to implement because of the complexity and effort of writing custom solvers and porting models from (1) to (4), although the porting can be carried out incrementally.

This letter proposes a mass-matrix formulation that can take advantage of numerous DAE solvers and is straightforward to implement. It also allows for converting differential states to algebraic ones by setting associated parameters to zero.

II Mass-Matrix Explicit DAE Formulation

II-A Mass-Matrix DAE Formulation

The proposed mass-matrix formulation is given by

\underbrace{\begin{bmatrix} \bm{M_x} & \bm{0} \\ \bm{0} & \bm{0} \end{bmatrix}}_{\bm{M}} \begin{bmatrix} \dot{\bm{x}} \\ \bm{0} \end{bmatrix} = \begin{bmatrix} \bm{\hat{f}}(\bm{x}, \bm{y}, \bm{u}) \\ \bm{g}(\bm{x}, \bm{y}, \bm{u}) \end{bmatrix},    (5)

where $\bm{M}$ is an $(n+m) \times (n+m)$, possibly diagonal, matrix whose upper-left $n \times n$ block is $\bm{M_x}$, and $\bm{\hat{f}}$ ($\bm{\hat{f}}: \mathbb{R}^{m+n+o} \to \mathbb{R}^{n}$) is chosen based on $\bm{f}$ and determines the diagonality of $\bm{M}$.

The choice of $\bm{\hat{f}}$ is not unique. For transient stability simulations, the simplest choice of $\bm{\hat{f}}$ is the numerators of $\bm{f}$ when each element of $\bm{f}$ is a fraction with a constant parameter as the denominator. In that case, $\bm{M_x}$ becomes a time-invariant diagonal matrix with elements $\mu_{i,i}$ ($i \in [1, n]$) given by

\mu_{i,i} = \hat{f}_i / f_i.    (6)

The proposed mass-matrix formulation in (5) has the following relationships with the existing ones:

  1. (5) is a special case of (4) when one lets $\bm{\tilde{f}} = \bm{\hat{f}}$, $\bm{T}(\bm{x},\bm{y}) = \bm{M_x}$, and $\bm{R}(\bm{x},\bm{y}) = \bm{0}$.

  2. (2) is a special case of (5) when one lets $\bm{\hat{f}} = \bm{f}$ and $\bm{M_x} = \bm{\Gamma}$.

  3. (1) is a special case of (2) when one lets $\bm{\Gamma} = \bm{I_n}$, the $n \times n$ identity matrix. Since the traditional formulation has been proven to be all-encompassing for power system models, (5), as a superset of (1), can naturally accommodate all power system equation formulations.

Such relationships can be represented as:

\{\text{Semi-Implicit}\} \supset \{\text{Mass-Matrix}\} \supset \{\text{Flexible}\} \supset \{\text{Traditional}\}    (7)

The main advantages of (5) are two-fold:

  1. As a general formulation, it is directly supported by a broad spectrum of full-fledged DAE solvers. Power system researchers can thus focus on formulating models without worrying about the details of the solution techniques.

  2. It is simple to implement. The only change required to the traditional formulation is to move the parameters (typically, time or mass constants) from the denominators of $\bm{f}$ into the mass matrix. Mixing models in the mass-matrix and the traditional formulations is allowed, since the latter is a special case of the former.

In addition, the mass-matrix formulation shares with the semi-implicit formulation the advantage of allowing model simplification by setting some time constants to zero.

II-B Modeling Examples

This section presents basic transfer-function blocks and the synchronous generator model in the mass-matrix formulation to demonstrate the simplicity of implementation.

II-B1 First-order lag block

The conventional formulation of the first-order lag block has one differential equation, given by

\dot{y} = (K u - y)/T.    (8)

Using the proposed formulation, the numerator is chosen as the right-hand side (RHS) by moving $T$ to the diagonal of the mass matrix:

T \dot{y} = K u - y,    (9)

which allows using $T = 0$ to convert the lag block into a pure gain. This formulation is the same as that obtained with the semi-implicit formulation.
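As an illustration, the following minimal sketch builds the lag block (9) as a mass-matrix problem for the DifferentialEquations.jl package [5], which is also used for the case studies in Section III. The gain, time constant, and step input are hypothetical values chosen only for demonstration.

```julia
using DifferentialEquations

# First-order lag block in mass-matrix form, T*dy/dt = K*u - y (eq. 9).
# Setting T = 0 turns the state into an algebraic variable (pure gain).
K, T = 2.0, 0.5                     # hypothetical gain and time constant
u(t) = t < 1.0 ? 0.0 : 1.0          # hypothetical unit-step input at t = 1 s

function lag!(dy, y, p, t)
    dy[1] = K * u(t) - y[1]         # numerator of (8); T goes to the mass matrix
end

M = reshape([T], 1, 1)              # 1x1 mass matrix; use 0.0 for a pure gain
prob = ODEProblem(ODEFunction(lag!; mass_matrix = M), [0.0], (0.0, 5.0))
sol = solve(prob, Rodas5(), abstol = 1e-8, reltol = 1e-8, tstops = [1.0])
```

Rosenbrock-type solvers such as Rodas5 accept a singular mass matrix, so the same code covers both the dynamic block and its pure-gain reduction.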

II-B2 Lead-lag block

The lead-lag block can be implemented in the following serial approach:

\dot{x}' = (u - x')/T_2
y = (T_1/T_2)(u - x') + x'.    (10)

The mass-matrix formulation can be given by

T_2 \dot{x}' = u - x'
0 = T_1 T_2' (u - x') + x' - y,    (11)

where $T_2'$ is an auxiliary parameter given by

T_2' = \begin{cases} 1/T_2, & \text{if } T_2 \neq 0 \\ 0, & \text{otherwise.} \end{cases}    (12)

$T_2'$ can be pre-calculated before the simulation and thus adds no computational operations. This implementation retains the ability to convert the lead-lag block into a lag block (when $T_1 = 0$) or into a pass-through block (when $T_1 = T_2 = 0$).
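A corresponding minimal sketch of the lead-lag block (11) as a two-row mass-matrix problem, again with hypothetical parameter values and input, is shown below; the zero row of the mass matrix makes the output equation purely algebraic.

```julia
using DifferentialEquations

# Lead-lag block (11): state vector z = [x', y], mass matrix diag(T2, 0).
T1, T2 = 0.2, 1.0                        # hypothetical time constants
T2p = T2 == 0 ? 0.0 : 1.0 / T2           # auxiliary parameter (12), pre-computed
u(t) = t < 1.0 ? 0.0 : 1.0               # hypothetical step input

function leadlag!(dz, z, p, t)
    xp, y = z
    dz[1] = u(t) - xp                          # T2 * dx'/dt = u - x'
    dz[2] = T1 * T2p * (u(t) - xp) + xp - y    # 0 = (T1/T2)(u - x') + x' - y
end

M = [T2 0.0; 0.0 0.0]
prob = ODEProblem(ODEFunction(leadlag!; mass_matrix = M), [0.0, 0.0], (0.0, 5.0))
sol = solve(prob, Rodas4(), abstol = 1e-8, reltol = 1e-8, tstops = [1.0])
```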

II-B3 Synchronous Generator Model

The round-rotor generator model [3] is conventionally given by

\dot{e}'_q = (-X_{ad}I_{fd} + v_f)/T'_{d0}
\dot{e}'_d = (-X_{aq}I_{1q})/T'_{q0}
\dot{e}''_d = (-I_d (x'_d - x_l) - e''_d + e'_q)/T''_{d0}
\dot{e}''_q = (I_q (x'_q - x_l) - e''_q + e'_d)/T''_{q0}    (13)

where $X_{ad}I_{fd}$ and $X_{aq}I_{1q}$ are calculated algebraically. The proposed mass-matrix DAE formulation is

T'_{d0} \dot{e}'_q = -X_{ad}I_{fd} + v_f
T'_{q0} \dot{e}'_d = -X_{aq}I_{1q}
T''_{d0} \dot{e}''_d = -I_d (x'_d - x_l) - e''_d + e'_q
T''_{q0} \dot{e}''_q = I_q (x'_q - x_l) - e''_q + e'_d    (14)

Generator model reductions can be achieved by setting the corresponding time constants to zero. For example, the one d-axis and one q-axis flux-decay model can be obtained by setting $T''_{d0} = T''_{q0} = 0$.
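For instance, a minimal sketch of the diagonal mass-matrix block for the four electrical states in (14), with illustrative parameter values, shows how zeroing the subtransient time constants turns the two subtransient states into algebraic variables and lowers the rank of the block.

```julia
using LinearAlgebra

# Diagonal mass-matrix block for the generator electrical states in (14).
# Parameter values are illustrative only.
Td0p, Tq0p, Td0pp, Tq0pp = 8.0, 0.4, 0.03, 0.05
M_gen = Diagonal([Td0p, Tq0p, Td0pp, Tq0pp])

# Flux-decay reduction: zero the subtransient time constants, so the two
# subtransient states become algebraic and the block rank drops from 4 to 2.
M_flux_decay = Diagonal([Td0p, Tq0p, 0.0, 0.0])
rank(M_flux_decay)      # == 2
```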

As seen from the examples, (5) is straightforward to implement. In a symbolic modeling framework where parameters, variables, and equations are symbols [4], the conversion from the traditional formulation can be automated by manipulating the equations that have a single parameter as the denominator.

II-C Implicit Trapezoidal Method for Mass-Matrix DAE

The mass-matrix formulation of the DAE can be solved by a variety of numerical integration methods. Implicit integration schemes are known for their good performance on stiff ordinary differential equations (ODEs) and DAEs, which is the case in transient stability simulation. At each step, the implicit trapezoidal method (ITM) solves a set of nonlinear equations consisting of both the differential and the algebraic equations simultaneously through Newton's iteration. This subsection derives the form of the iteration for the mass-matrix formulation.

The nonlinear equations to solve for (1) at time $t$ are given by

\bm{0} = \bm{p}_t = (\bm{x}_t - \bm{x}_{t-h}) - 0.5 h (\bm{f}_t + \bm{f}_{t-h})
\bm{0} = \bm{q}_t = -\gamma \bm{g}_t,    (15)

where $h$ is the step size, and $\bm{x}_{t-h}$ and $\bm{f}_{t-h}$ are the differential states and the RHS of the differential equations computed at the previous time $(t-h)$. Note that for $\bm{g}$, the sign and the scaling factor $\gamma$ can be chosen arbitrarily, and a small $\gamma$ close to $h$ improves convergence. Accordingly, solutions are obtained by iteratively updating the variables using (16), where the increments are calculated from (17).

\begin{bmatrix} \bm{x}^{(i+1)}(t) \\ \bm{y}^{(i+1)}(t) \end{bmatrix} = \begin{bmatrix} \bm{x}^{(i)}(t) \\ \bm{y}^{(i)}(t) \end{bmatrix} + \begin{bmatrix} \Delta\bm{x}^{(i)} \\ \Delta\bm{y}^{(i)} \end{bmatrix}    (16)

\begin{bmatrix} \Delta\bm{x}^{(i)} \\ \Delta\bm{y}^{(i)} \end{bmatrix} = -\Big( \underbrace{\begin{bmatrix} \bm{I}_n - 0.5 h \bm{f_x}^{(i)} & -0.5 h \bm{f_y}^{(i)} \\ -\gamma \bm{g_x}^{(i)} & -\gamma \bm{g_y}^{(i)} \end{bmatrix}}_{\bm{A}^{(i)}} \Big)^{-1} \begin{bmatrix} \bm{p}^{(i)} \\ \bm{q}^{(i)} \end{bmatrix}    (17)

Comparing (5) with (1), one can observe that the mass-matrix formulation is obtained by multiplying both sides of the differential equations by $\bm{M_x}$. Multiplying the first equation of (15) by $\bm{M_x}$ likewise yields

\bm{0} = \bm{\hat{p}}_t = \bm{M_x}(\bm{x}_t - \bm{x}_{t-h}) - 0.5 h (\bm{\hat{f}}_t + \bm{\hat{f}}_{t-h})
\bm{0} = \bm{q}_t = -\gamma \bm{g}_t.    (18)

The Jacobian matrix for calculating increments is given by:

\bm{\hat{A}}^{(i)} = \begin{bmatrix} \bm{M_x} - 0.5 h \bm{\hat{f}_x}^{(i)} & -0.5 h \bm{\hat{f}_y}^{(i)} \\ -\gamma \bm{g_x}^{(i)} & -\gamma \bm{g_y}^{(i)} \end{bmatrix}.    (19)

Comparing (19) with (17), one can see that the only required change to the solver is to substitute the mass matrix $\bm{M_x}$ for the identity matrix $\bm{I_n}$ (and $\bm{\hat{f}}$ for $\bm{f}$). Therefore, existing numerical integration routines can be readily adapted to the proposed formulation.
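For illustration, the following minimal sketch performs one implicit-trapezoid step for (5), following (18) and (19). The functions f, g and the Jacobian blocks fx, fy, gx, gy are assumed to be supplied by the model; all names are illustrative rather than part of any particular library.

```julia
using LinearAlgebra

# One implicit-trapezoid step for the mass-matrix DAE (5), following (18)-(19).
# f, g return the RHS vectors; fx, fy, gx, gy return the Jacobian blocks.
function itm_step(x, y, xprev, fprev, Mx, h, γ, f, g, fx, fy, gx, gy;
                  tol = 1e-8, maxiter = 20)
    nx = length(x)
    for _ in 1:maxiter
        p = Mx * (x - xprev) - 0.5 * h * (f(x, y) + fprev)   # mismatch in (18)
        q = -γ * g(x, y)                                     # mismatch in (18)
        A11 = Mx - 0.5 * h * fx(x, y)                        # blocks of (19)
        A12 = -0.5 * h * fy(x, y)
        A21 = -γ * gx(x, y)
        A22 = -γ * gy(x, y)
        Δ = -([A11 A12; A21 A22] \ vcat(p, q))               # Newton increment
        x = x + Δ[1:nx]
        y = y + Δ[nx+1:end]
        norm(Δ, Inf) < tol && break
    end
    return x, y
end
```

Solver libraries that support mass matrices perform this substitution internally, so no manual Newton iteration is needed when such a library is used.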

II-D Remarks on the Computational Complexity

In terms of computational complexity, (5) is no more demanding than (1). When a constant diagonal mass matrix is used, the number of division operations is slightly reduced when evaluating $\bm{\hat{f}}$ and the corresponding Jacobian elements. This effect, however, is negligible in the tested cases because the number of saved operations is small relative to the total.

III Case Studies

III-A Solvers and Simulation Setup

The most significant advantage of the mass-matrix formulation is its compatibility with fine-tuned DAE solver packages, which usually come with a multitude of solution methods and error control mechanisms and have undergone rigorous tests. Such compatibility allows power system researchers to focus on modeling while utilizing the state-of-the-art numerical solvers.

TABLE I: An overview of the tested solvers

Solver Name     Order  Stability  Remarks
-- Rosenbrock methods (for small-stiffness problems) --
Ros4LStab       4      A
Rodas4          4      A          Order 3 interpolant
Rodas42         4      A          Order 3 interpolant
Rodas4P         4      A          Order 3 interpolant with parabolic correction
Rodas5          5      A          Hermite interpolant
-- Rosenbrock-Wanner methods (allow approximate Jacobian) --
Rosenbrock23    2/3    L
ROS34PW1a       4      L
ROS34PW1b       4      L
ROS34PW2        4      L
ROS34PW3        4      A
-- Implicit Runge-Kutta methods (for stiff problems, low accuracy) --
Trapezoid       2      A          Adaptive time step
ImplicitEuler   1      L          Adaptive time step
-- Multistep methods (for stiff problems) --
QNDF1           1      L          Quasi-constant time step
QNDF2           2      L          Quasi-constant time step
QBDF1           1      L          Equivalent to ImplicitEuler with BDF error estimator
QBDF2           2      L
QNDF            1-5    L

For example, the DifferentialEquations.jl package [5] provides four categories of methods that naturally support the mass-matrix formulation: Rosenbrock methods and Rosenbrock-Wanner methods (for small stiffness systems), and Implicit Runge-Kutta methods and multistep methods (for stiff problems). An overview of the interfaced solvers in this work is shown in Table I, where “A” or “L” in the stability column indicates an algorithm being A-stable or L-stable [6]. Details of the solvers can be found in [5] and the accompanying documentation.

The proposed work has been implemented in ANDES [4], a Python-based hybrid symbolic-numeric library for power systems, and interfaced with the solvers in the Julia language with the Jacobian callback provided. Case studies utilize Python 3.7.6, ANDES 1.0.8, Julia 1.5.0, and DifferentialEquations 6.15.0. Simulations are executed on an Intel Xeon W-2133 CPU with 32 GiB of RAM running Ubuntu 16.04 LTS.
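For reference, a minimal sketch of how the interfaced solvers can be exercised over a grid of tolerance pairs, as done in the benchmarks of the next subsection, is given below. Here, `prob` is assumed to be a mass-matrix ODEProblem assembled from the system equations (for example, as in Section II-B); the solver constructors follow Table I.

```julia
using DifferentialEquations

# Sweep a few of the solvers in Table I over pairs of tolerances.
# `prob` is assumed to be an ODEProblem with a (singular) mass matrix.
solvers = ["Rodas5" => Rodas5(), "Trapezoid" => Trapezoid(),
           "QNDF" => QNDF(), "ImplicitEuler" => ImplicitEuler()]
abstols = [1e-5, 1e-6, 1e-7, 1e-8]
reltols = [1e-1, 1e-2, 1e-3, 1e-4]

for (name, alg) in solvers, (atol, rtol) in zip(abstols, reltols)
    solve(prob, alg, abstol = atol, reltol = rtol)           # warm-up/compile run
    t = @elapsed sol = solve(prob, alg, abstol = atol, reltol = rtol)
    println("$name  abstol=$atol  reltol=$rtol  time=$(round(t; digits = 3)) s")
end
```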

III-B Solver Benchmarks and Statistics

First, Kundur's two-area system modeled in the proposed formulation is used for solver benchmarking. The system contains 52 differential states and 140 algebraic variables, and the rank of the mass matrix is 48 (two generators are reduced with $T''_{d0} = T''_{q0} = 0$). The response to a line trip at $t = 0.1$ s followed by reconnection after 50 ms is simulated for 5 s. The baseline "accurate" solution is obtained using Rodas5, which is efficient at high accuracy, with both the absolute and relative tolerances set to $10^{-12}$. Next, each solver is run with four pairs of absolute and relative tolerances, chosen from $(10^{-5}, 10^{-6}, 10^{-7}, 10^{-8})$ and $(10^{-1}, 10^{-2}, 10^{-3}, 10^{-4})$, respectively. Errors are computed as the difference between the accurate and the obtained solutions at the final simulation step. Each case is run five times to compute the average time. The results are shown in Figure 1, and some observations are:

  1. The commonly used Trapezoid method is balanced in speed and accuracy. ImplicitEuler is neither efficient nor accurate.

  2. Depending on the stiffness, the Rosenbrock methods can be faster than the Trapezoid method at some accuracy levels.

  3. The Rosenbrock-Wanner methods allow approximate Jacobians. When accurate Jacobians are used (as in our case and most other transient stability simulation tools), the Rosenbrock-Wanner methods tend to be slower than the Rosenbrock ones.

  4. The QNDF method has similar speed and accuracy to the Trapezoid method. QBDF1 performs similarly to ImplicitEuler.

Figure 1: Performance of the solvers using Kundur's system.

Next, the Great Britain (GB) 2,224-bus system [7] with dynamic models is used for benchmarking. The system contains 788 state variables and 9,176 algebraic variables. A line-trip disturbance at $t = 1.0$ s followed by reconnection after 100 ms is simulated for 5 s. Benchmark results are shown in Figure 2. The observations from Kundur's system also apply to this case. For a different system, one can perform similar benchmarks to identify the solvers that best satisfy the speed and accuracy requirements.

Figure 2: Performance of the solvers using the GB 2,224-bus system.

IV Conclusions

This letter proposes a mass-matrix formulation for transient stability simulation with the advantages of being compatible with the traditional formulation, being supported by a multitude of solvers, and being simple to implement. Modeling examples for common transfer-function blocks and the synchronous generator are shown, and the implicit trapezoidal method is derived for solving DAEs in the mass-matrix formulation. The solver compatibility is verified by benchmarking 17 solvers on two test systems. In conclusion, the compatibility and simplicity make the proposed formulation highly suitable for transient stability simulation.

References

  • [1] P. Aristidou, D. Fabozzi, and T. Van Cutsem, "Dynamic simulation of large-scale power systems using a parallel Schur-complement-based decomposition method," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 10, pp. 2561–2570, 2013.
  • [2] F. Milano, "Semi-implicit formulation of differential-algebraic equations for transient stability analysis," IEEE Transactions on Power Systems, vol. 31, no. 6, pp. 4534–4543, 2016.
  • [3] J. H. Chow and J. J. Sanchez-Gasca, Power System Modeling, Computation, and Control. John Wiley & Sons, 2020.
  • [4] H. Cui, F. Li, and K. Tomsovic, "Hybrid symbolic-numeric library for power system modeling and analysis," arXiv preprint arXiv:2002.09455, 2020.
  • [5] C. Rackauckas and Q. Nie, "DifferentialEquations.jl: A performant and feature-rich ecosystem for solving differential equations in Julia," Journal of Open Research Software, vol. 5, no. 1, 2017.
  • [6] E. Hairer and G. Wanner, Solving Ordinary Differential Equations II. Springer Berlin Heidelberg, 1996.
  • [7] The University of Edinburgh, "GB network," Aug. 2020. [Online]. Available: https://www.maths.ed.ac.uk/optenergy/NetworkData/fullGB/