Upper limits on the robustness of Turing models
and other multiparametric dynamical systems
Abstract
Traditional linear stability analysis based on matrix diagonalization is a computationally intensive process for n-dimensional systems of differential equations, posing substantial limitations for the exploration of Turing systems of pattern formation, where an additional wave-number parameter needs to be investigated. In this study, we introduce an efficient technique that leverages Gershgorin’s theorem to determine upper limits on regions of parameter space and the wave number beyond which Turing instabilities cannot occur. This method offers a streamlined avenue for exploring the phase diagrams of other complex multiparametric models, such as those found in systems biology.
Deciphering reproducible pattern formation during embryogenesis with multiparameter models calls for mechanisms demonstrating robustness in parameter space. Turing models of diffusion-driven instability [1], which serve as a pivotal category of models, only induce patterns within a highly restricted region of the parameter space [2, 3, 4, 5]. The exploration of such models for increased regions of parameter space and hence enhanced robustness is markedly limited due to their inherent complexity: they incorporate numerous parameters, including rate constants and diffusion constants of activator and inhibitor morphogens, as well as a wave number parameter as part of the analysis.
Here, we consider autonomous spatiotemporal dynamic models that include diffusion, such as the ones describing reaction-diffusion phenomena. Their partial differential equations (PDEs) are written as
∂u/∂t = f(u; Θ) + D ∇²u,  (1)
where u = (u₁, …, uₙ) are system variables, f is an ℝⁿ-valued function defined in the n-dimensional phase space, Θ is the vector of system parameters, D is the diffusion matrix, and ∇² the Laplacian.
Generally, when analyzing a dynamical system, after finding all the fixed points, one needs to study their stability and, eventually, the basins of attraction of all fixed points in the phase space. To elaborate on this procedure, for systems with diffusion-induced instability capable of producing Turing patterns, one requires the system without diffusion to be stable. Hence, one needs to solve the system of equations
f(u*; Θ) = 0  (2)
for a given Θ to find fixed points u*, and then use the Taylor expansion of the equations around these points to linearise the model and study the stability of the system under a small perturbation around u*. This procedure is equivalent to writing the Jacobian matrix J of the system at u* and studying its eigenvalues to classify the fixed-point stability. However, finding the eigenvalues of an n×n matrix is computationally expensive (typically O(n³)), so the above procedure is challenging for high-dimensional phase spaces. The problem becomes more acute when considering that a candidate dynamical system can have parameters that change the system’s stability. In this case, we need to redo all the steps for every point in parameter space to study the dynamical system’s characteristics, e.g. the bifurcation diagram.
Specifically, for our reaction-diffusion system with diffusion-induced instability, the Laplacian is removed by spatial Fourier transformation at the expense of an extra parameter, the wave number k. Hence, the linear stability analysis must be applied for different wave numbers to specify the dominant wavelength that finally shapes the stationary solution [3]. For a diagonal diffusion matrix D = diag(D₁, …, Dₙ), the Jacobian for a given wave number, say k, is
J_k = J − k² D.  (3)
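As a point of reference for the cost the method avoids, the brute-force dispersion relation built on Eq. (3) can be sketched numerically by diagonalizing J − k²D for every wave number on a grid. The Jacobian and diffusion constants below are illustrative numbers, not taken from any model in the text:

```python
import numpy as np

# Hypothetical 2x2 Jacobian J at a fixed point and diagonal diffusion
# matrix D; both are illustrative values, not from the paper.
J = np.array([[0.5, -1.0],
              [2.0, -2.0]])
D = np.diag([0.1, 5.0])

def dispersion(J, D, ks):
    """Largest real part of the eigenvalues of J_k = J - k^2 D
    for each wave number k (the expensive per-k diagonalization)."""
    return np.array([np.linalg.eigvals(J - k**2 * D).real.max() for k in ks])

ks = np.linspace(0.0, 3.0, 301)
lam = dispersion(J, D, ks)
# A Turing instability requires stability at k = 0 plus a positive
# growth rate at some k > 0.
turing = bool(lam[0] < 0 and lam.max() > 0)
```

Every parameter set probed this way pays one eigenvalue decomposition per wave number; the screening introduced below removes most parameter sets before this step is ever needed.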
Hence, any improvement that decreases the procedure’s computational complexity is beneficial for general stability analysis, particularly for pattern formation study.
In this paper, we introduce an efficient methodology based on the well-known Gershgorin’s theorem [6] in linear algebra. Our method serves as an initial screening process to rule out extensive portions of the parameter space when searching for Turing instabilities. This method, applicable numerically and analytically for simplified models, markedly accelerates the exploration process. We illustrate the efficacy of this approach using three 2-morphogen models – a sigmoidal Hill-function-based model, the Brusselator model [7], and the Lengyel-Epstein [8] model.
Gershgorin’s theorem - We start by introducing Gershgorin’s theorem and consider its geometrical interpretation. As we shall see, it is possible to construct an algorithm that checks the rows or columns of a Jacobian matrix to find those that are unstable or remain stable after introducing diffusion and consequently cannot produce a Turing pattern. The theorem [6] states that for an n×n complex matrix A, with R_i = Σ_{j≠i} |A_ij| the sum of moduli of off-diagonal elements in the i-th row, every eigenvalue of A lies in some circle |z − A_ii| ≤ R_i, and each union of circles disjoint from the remaining circles (for i = 1, …, n) contains a number of eigenvalues of A equal to the number of circles used to create the union. The analogous result holds if columns of A are considered. Note that in our case A = J_k, with diagonal terms J_ii without diffusion and J_ii − k²D_i with diffusion.
Let us consider the radius and position of a circle in the complex plane as depicted in Fig. S1. Depending on the sign of the diagonal term A_ii, the circle’s center lies on the negative or positive side of the real axis. Let us say λ is one of the eigenvalues corresponding to the row or column i of the matrix A. Four types of circles may appear: Type 1: As shown in Fig. S1A, we have A_ii < 0 and A_ii + R_i < 0. Therefore, regardless of where the eigenvalue is inside the circle, its real part must be negative, Re(λ) < 0. Type 2: In Fig. S1B, we can see A_ii < 0, and the center of the circle is placed on the negative side of the real axis. However, since A_ii − R_i < 0 and A_ii + R_i > 0, the real part of λ can be negative or positive. In other words, the theorem is inconclusive about the sign of the real part of the corresponding eigenvalue. Type 3: The diagonal element is positive (A_ii > 0), and the center of the circle is on the positive side of the real axis. Yet, A_ii − R_i > 0 and A_ii + R_i > 0. Thus, the real part of the eigenvalue must be positive, Re(λ) > 0. Type 4: Similar to Type 2, the range of Gershgorin’s circle spans from negative to positive values. Therefore, we cannot conclusively decide about the real part of the eigenvalue inside this area.
We must emphasize that when two or more Gershgorin’s circles overlap, the eigenvalues lie in the union of the circles. Let us define the lower and upper bounds of each circle as α_i = A_ii − R_i and β_i = A_ii + R_i, respectively. For instance, in Fig. S1C, the union of two circles shows that the real part of both eigenvalues corresponding to rows i and j must be in the union of their diameters. Correspondingly, for each row (or column) i of the matrix A, there exists an interval
I_i = [α_i, β_i],  (4)
and taking their union and reformulating it as non-overlapping, disjoint intervals results in
∪_{i=1}^{n} I_i = ∪_{s=1}^{m} [α̃_s, β̃_s],  (5)
such that β̃_s < α̃_{s+1}, for s = 1, …, m − 1. Therefore, [α̃_m, β̃_m] is the right-most disjoint interval constructed by the unions of the original intervals, and it is sufficient to study it to find the sign of the real part of the largest eigenvalue. This introduces the following possibilities regarding the stability condition: 1. For β̃_m < 0, the real part of the largest eigenvalue is negative. Consequently, the system is stable. 2. For α̃_m > 0, the real part of the largest eigenvalue is positive. Consequently, the system is unstable. 3. For α̃_m ≤ 0 ≤ β̃_m, the situation is inconclusive.
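The interval construction above translates directly into a small screening routine. The sketch below, under no assumptions beyond the theorem itself, merges the Gershgorin intervals of a real matrix and classifies the sign of the largest eigenvalue's real part from the right-most disjoint interval:

```python
import numpy as np

def gershgorin_classify(A, rows=True):
    """Classify the sign of the largest eigenvalue's real part of A
    from Gershgorin intervals alone: 'stable', 'unstable' or
    'inconclusive'. A sketch of the row-wise (or column-wise) screening."""
    A = np.asarray(A, dtype=float)
    M = A if rows else A.T
    centers = np.diag(M)
    radii = np.abs(M).sum(axis=1) - np.abs(centers)
    lower, upper = centers - radii, centers + radii
    # Merge overlapping intervals; the last merged one is the
    # right-most disjoint interval [alpha_m, beta_m].
    order = np.argsort(lower)
    merged = []
    for lo, up in zip(lower[order], upper[order]):
        if merged and lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], up)
        else:
            merged.append([lo, up])
    a_m, b_m = merged[-1]
    if b_m < 0:
        return "stable"
    if a_m > 0:
        return "unstable"
    return "inconclusive"
```

The routine touches each matrix entry once, so it scales as O(n²) plus an O(n log n) sort, instead of the O(n³) of a full eigenvalue decomposition.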
At this stage, we can use the obtained results in two different ways. The first possibility is when the Jacobian is written in terms of the model’s parameters, Θ, and we may be able to derive the right-most disjoint interval parametrically. Accordingly, our inequalities define the regions in parameter space where the method can conclusively determine the stability/instability of the system. Nevertheless, the region corresponding to the inconclusive range requires different classification techniques. Note that even if the method is not conclusive for all regions of parameter space, it always yields a theoretical lower bound on the volume of the parameter space in which the system is stable/unstable.
The second possibility is when studying a system’s linear stability numerically. We propose an algorithm that classifies a given Jacobian matrix into “stable”, “unstable” and “inconclusive” stability groups (see Algorithm (1) in Supplementary Materials - IV A).
Reaction-diffusion models - To study a pattern-forming system given by Eq. (1) with stationary solution u*, the linear stability of the Jacobian in Eq. (3) is studied by our proposed method. We can write the lower and upper bounds corresponding to row (or column) i as α_i(k) = J_ii − k²D_i − R_i and β_i(k) = J_ii − k²D_i + R_i for i = 1, …, n. Then, the stability/instability criteria of the Jacobian determine the Turing pattern conditions.
It is interesting to see the effect of introducing diffusion. For a given wave number k, the inclusion of diffusion shifts all the diagonal terms by −k²D_i (see Eq. (3)). Effectively, since diagonal terms correspond to the locations of the centers of Gershgorin’s circles, it is geometrically equivalent to saying all the circles shift to the left as shown in Fig. S1D. Note, since the off-diagonal terms have not changed, the circles’ radii remain unchanged.
Indeed, for any given circle partially on the positive side of the real axis, there exists a maximum required shift, attained at the wave number k_i defined by k_i² = β_i(0)/D_i, that transfers the circle entirely to the negative side of the real axis (for details see Supplementary Materials - II). Furthermore, the real part of the eigenvalue corresponding to that circle must be negative for all the higher wave numbers k > k_i. By finding k_i for all rows (or columns), the linear stability analysis of different wave numbers can be restricted to the range k ∈ [0, K] with K = max_i k_i, such that k = 0, the case with no diffusion, specifies the stability condition, and 0 < k ≤ K determines the evolution of the dominant wave number in a perturbed system.
Thus, we must have three different regimes: (1) For K = 0, when β_i(0) < 0 for all i, all the circles are on the negative side of the real line, and including diffusion for any given wave number shifts the circles further to the left. Hence, all real parts of eigenvalues are negative, and diffusion cannot excite any wave number. Consequently, the system is incapable of producing a Turing pattern (“super-stable”). (2) On the contrary, when α̃_m > 0 at k = 0, the initial stationary state is unstable and incapable of producing a Turing pattern (“unstable”). (3) Finally, when J_0 is inconclusive (α̃_m ≤ 0 ≤ β̃_m), it must be studied by finding its eigenvalues. And if one finds that it is stable, the maximum of the dispersion relation gives the dominant wave number for pattern formation, with the search restricted to k ∈ [0, K]. These regimes are included in our algorithm to speed up the process of checking the possibility of Turing-pattern formation for a given parameter set. Only parameters for which the classification is “inconclusive” need further study of their eigenvalues and are in principle able to form patterns.
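The three-regime screening can be sketched as follows; the routine is an illustrative reimplementation of the idea, not the paper's Algorithm (1), and the test matrices are made-up numbers:

```python
import numpy as np

def rightmost_interval(lower, upper):
    # Merge Gershgorin intervals; return the right-most disjoint one.
    order = np.argsort(lower)
    merged = []
    for lo, up in zip(lower[order], upper[order]):
        if merged and lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], up)
        else:
            merged.append([lo, up])
    return merged[-1]

def turing_prescreen(J, D):
    """Three-regime pre-screening of a reaction-diffusion Jacobian J with
    diagonal diffusion matrix D. Returns (verdict, K), where K bounds the
    wave numbers worth studying for the inconclusive cases."""
    J = np.asarray(J, dtype=float)
    d = np.diag(np.asarray(D, dtype=float))
    c = np.diag(J)
    r = np.abs(J).sum(axis=1) - np.abs(c)
    alpha, beta = c - r, c + r          # bounds at k = 0
    a_m, b_m = rightmost_interval(alpha, beta)
    # Per-row wave number beyond which the shifted circle is negative.
    K = float(np.sqrt(np.where(beta > 0, beta / d, 0.0).max()))
    if b_m < 0:
        return "super-stable", 0.0      # no wave number can be excited
    if a_m > 0:
        return "unstable", K            # unstable already without diffusion
    return "inconclusive", K            # study eigenvalues for k in [0, K]
```

Only the "inconclusive" verdicts proceed to the eigenvalue-based dispersion-relation study, and then only over the finite window [0, K].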
Role of diffusing species - The dispersion relation λ(k) of systems in which all species are diffusing always satisfies Re λ(k) < 0 for k > K. In other words, asymptotically, as long as all species are diffusers, we have Re λ(k) → −∞ as k → ∞. When some but not all species in a reaction-diffusion model diffuse, introducing the diffusion coefficients into the Jacobian matrix shifts some circles to the left while the others remain in the same place – see Fig. S2. Although one can calculate the k_i values for the diffuser rows in the matrix, a special situation can arise in the dispersion relation as k grows. For instance, consider two among three species being diffusers as shown in Fig. S2. After shifting the diffuser circles by the amounts k²D₁ and k²D₂, respectively, the eigenvalue corresponding to the non-diffuser row remains positive for all k. Consequently, the real part of the dispersion relation can remain positive with no upper bound on the wave number. As a result, no dominant wave number exists to create a stationary pattern. Fortunately, it is easy to find these cases algorithmically without calculating the eigenvalues.
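One simple sufficient check of this kind follows directly from the circle types: a non-diffusing row whose circle lies entirely on the positive real axis (Type 3) is never shifted by k²D, so its eigenvalue bound stays positive for every wave number. The sketch below implements only that Gershgorin-based test, with an illustrative boolean mask marking which species diffuse:

```python
import numpy as np

def has_positive_nondiffuser(J, diffusing):
    """Detect a non-diffusing row whose Gershgorin circle is entirely on
    the positive real axis: such a row stays positive for every wave
    number, so no dominant wave number (and no stationary pattern) exists.
    `diffusing` is a boolean mask over the species."""
    J = np.asarray(J, dtype=float)
    c = np.diag(J)
    r = np.abs(J).sum(axis=1) - np.abs(c)
    # Type-3 circles (c - r > 0) among the non-diffusing rows.
    return bool(np.any(~np.asarray(diffusing) & (c - r > 0)))
```

This is a sufficient condition only; a non-diffusing row of Type 2 or 4 still needs the full eigenvalue analysis.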
Tightening the bounds - Given an invertible matrix T, the map A ↦ T⁻¹AT introduces an equivalence relation between square matrices such that the matrix T⁻¹AT has the same eigenvalues as A (see Supplementary Materials - III for details). Defining an invertible diagonal matrix T as the identity matrix with the exception of the matrix element T_mm = t, the transformed matrix has the form
(T⁻¹AT)_ij = A_ij for i, j ≠ m,  (T⁻¹AT)_im = t A_im,  (T⁻¹AT)_mj = A_mj / t.  (6)
The transformation’s effect is similar to dividing all the elements of the row m by t and multiplying the elements of the column m by t. Consequently, the diagonal term A_mm remains the same.
Consider the rows of the resulting matrix. The radii of all circles corresponding to rows other than m expand by the amount (t − 1)|A_im| (for t > 1), and the m-th radius shrinks by the factor t, while the centers of all circles stay the same. Since the eigenvalues of the transformed matrix are the same as those of the original one, one can hope the shrunk circle becomes isolated from the rest whenever the expansions of the other radii are smaller than the shrinking of the single radius. In practice, one can find a t that isolates the circle with the largest center from the rest. The interval of every row i ≠ m is [A_ii − R_i − (t − 1)|A_im|, A_ii + R_i + (t − 1)|A_im|], and the interval for row m is [A_mm − R_m/t, A_mm + R_m/t].
To tighten the bounds, we have two distinct cases that can be studied separately. Case (1): When the diagonal term is positive, or A_mm > 0, to isolate the circle corresponding to row m, its leftmost point, A_mm − R_m/t, must be larger than every other circle’s rightmost point, A_ii + R_i + (t − 1)|A_im|. As explained in detail in Supplementary Materials - III, if all the rows i ≠ m and the largest one, m, satisfy the inequalities
|A_im| t² + (A_ii + R_i − |A_im| − A_mm) t + R_m < 0 and t > R_m/A_mm  (7)
simultaneously, there exists a t that isolates the rightmost eigenvalue, and the Jacobian is conclusively unstable. Case (2): When the largest diagonal term is negative, or A_mm < 0, the latter implies all the other diagonal terms are negative too. In this case, we search for a possible shrinkage value t, such that while the circle of the row m shrinks with its upper bound on the negative side of the real axis, the growth of all the other circles keeps them on the negative side. Combined, this leads to two conditioned bounds,
R_m/(−A_mm) < t < 1 + min_{i≠m} (−A_ii − R_i)/|A_im|.  (8)
Hence, if a non-empty interval can be found that satisfies the above inequalities, the Jacobian is conclusively “super-stable” (see Algorithm 2 in Supplementary Materials - IV B).
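Rather than solving the two cases analytically, the same idea can be prototyped by simply scanning a grid of t values and re-applying the Gershgorin test to the transformed matrix; the grid and the test matrices below are illustrative choices, not the paper's Algorithm (2):

```python
import numpy as np

def tightened_classify(A, ts=np.geomspace(1.0, 100.0, 400)):
    """Try to sharpen the Gershgorin verdict with the similarity transform
    T^{-1} A T, T = diag(1, ..., t, ..., 1) at the row m with the largest
    diagonal term. A numerical sketch of the two analytic cases above."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    m = int(np.argmax(np.diag(A)))
    others = np.delete(np.arange(n), m)
    for t in ts:
        T = np.eye(n)
        T[m, m] = t
        B = np.linalg.inv(T) @ A @ T          # same eigenvalues as A
        c = np.diag(B)
        r = np.abs(B).sum(axis=1) - np.abs(c)
        if np.all(c + r < 0):
            return "super-stable"             # Case (2): all circles negative
        if c[m] - r[m] > 0 and c[m] - r[m] > (c[others] + r[others]).max():
            return "unstable"                 # Case (1): isolated positive circle
    return "inconclusive"
```

At t = 1 the loop reproduces the plain Gershgorin test, so the verdict can only improve as t is varied.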
Conclusion - We have introduced an efficient method that uses Gershgorin’s theorem to define robustness bounds in dynamical systems, particularly beneficial for reaction-diffusion models such as diffusion-driven Turing systems [3]. This approach can eliminate unstable non-diffusive cases or solutions that remain stable post-diffusion, while also setting an upper limit for wave numbers capable of pattern formation. When applied to specific models in the appendix, our method not only enhances numerical algorithms’ speed, but also facilitates an analytical study of parameter space. The former capability is exemplified by a Hill-function-based Turing model, which leads to a rejection of more than 99% of the parameter combinations (Table 1), while the latter is demonstrated for the classic Brusselator model in Fig. S3 (and by the Lengyel-Epstein model in Supplementary Materials - V B). The method’s utility increases significantly when accounting for parameters that alter the behavior of a potential dynamical system. For example, generating bifurcation profiles of dynamical systems necessitates repeated linear stability analysis for many different parameter values, a challenging endeavor in high-dimensional phase spaces.
| | Total | Super-stable | Inconclusive | Unstable | No fixed point |
|---|---|---|---|---|---|
| Row-wise | 100% | 85.07% | 14.04% | 0.49% | 0.41% |
| Column-wise | 100% | 48.76% | 51.22% | 0.02% | - |
| Tighten bounds | 100% | 89.85% | 9.72% | 0.43% | - |
| Combined | 100% | 98.37% | 0.70% | 0.52% | 0.41% |

Table 1: Fractions of the sampled parameter combinations in each class after the row-wise, column-wise, tightened-bounds and combined screenings.
Appendix on applications to Turing models — To test the fraction of rejections and, consequently, the speed-up due to Algorithms (1) and (2), we use a biologically inspired model capable of producing Turing patterns [3]. The reaction-diffusion PDE with nine free parameters is written as
(9) |
Note that the nonlinear terms are Hill functions that regulate activation and inhibition of molecule production, e.g., in gene expression. The Jacobian of the linearized form of the above equations is a two-by-two matrix, and in practice, the computational cost of calculating its eigenvalues is not much different from that of our algorithm. However, we selected this model since the algorithm’s correctness can be easily checked by comparison with the determinant and trace of the Jacobian.
In this simulation, we selected one billion parameter combinations and applied Algorithm (1) to classify them into “unstable”, “super-stable”, “inconclusive” and “no fixed point”. Note that the case “no fixed point” refers to the parameter combinations for which the root-finding algorithm could not find any stationary solutions. We first used our Algorithm (1) for a row-wise comparison, and after that, using the inconclusive results from the first run, we used the algorithm again for a column-wise calculation. And finally, we classified the remaining inconclusive cases using our Algorithm (2). These results are presented in Table 1, showing that more than 99% of the parameter combinations were rejected. This provides an upper limit on the robustness of Turing patterns given a certain sampling of parameter space [3].
Brusselator model - Next, we look for inequalities that separate the parameter space of the Brusselator model into “inconclusive“ or otherwise. The Brusselator is a two-species reaction-diffusion model with a set of PDEs given by
∂u/∂t = a − (b + 1)u + u²v + D_u ∇²u,  ∂v/∂t = bu − u²v + D_v ∇²v,  (10)
for two parameters a, b > 0. Using the stability analysis, we can derive the Jacobian of the model at its fixed point (u*, v*) = (a, b/a) and the corresponding row and column intervals as
J = [ b − 1, a² ; −b, −a² ],  I₁ = [b − 1 − a², b − 1 + a²],  I₂ = [−a² − b, −a² + b].  (11)
Note that since the intervals depend on the parameters, it is easier to check the sign of the circles’ centers and classify the conditions than directly taking the union.
For the first row, the circle’s center is at b − 1; therefore, for b < 1, it is on the negative side of the real axis. At the same time, the center of the second row’s circle is always on the negative side (−a² < 0). Consequently, the region of parameter space in which the Jacobian is stable before and after the inclusion of diffusion derives as
b < 1 − a² and b < a²  (12)
for b < 1. At the same time, for b > 1, the center of the first row’s circle is on the positive side of the real axis. Thus, the Jacobian is unstable when the first interval’s lower bound becomes positive, leading to
b > 1 + a².  (13)
The conditions from both Eqs. (12) and (13) are shown in Fig. S3A. Similarly, for the column intervals, if b < 1, both centers are on the negative side of the real axis, and the condition of stability writes as 2b − 1 < 0, i.e. b < 1/2, for b < 1. However, for b > 1, the interval of the first column stays between −1 and 2b − 1 > 0, which means the case is inconclusive. Fig. S3B depicts the columns’ results.
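These parametric regions can be spot-checked against direct eigenvalue computations; the sketch below uses the standard Brusselator Jacobian at the fixed point (a, b/a), consistent with Eq. (11), with illustrative parameter values and diffusion constants:

```python
import numpy as np

def brusselator_jacobian(a, b):
    # Standard Brusselator Jacobian at the fixed point (u*, v*) = (a, b/a);
    # parameters a, b as in Eq. (10).
    return np.array([[b - 1.0, a**2],
                     [-b, -a**2]])

def max_re_eig(J):
    return float(np.linalg.eigvals(J).real.max())

D = np.diag([1.0, 10.0])          # illustrative diffusion constants
ks = np.linspace(0.0, 5.0, 51)

# Eq. (12) region, e.g. (a, b) = (0.5, 0.2): stable for every wave number,
# since the circles only shift further left as k grows.
super_stable = all(max_re_eig(brusselator_jacobian(0.5, 0.2) - k**2 * D) < 0
                   for k in ks)

# Eq. (13) region, e.g. (a, b) = (0.5, 1.5): unstable already at k = 0.
unstable = max_re_eig(brusselator_jacobian(0.5, 1.5)) > 0
```

Both checks agree with the Gershgorin-derived inequalities without requiring any eigenvalue computation in the screening itself.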
References
- Turing [1952] A. M. Turing, Philosophical Transactions of the Royal Society B 237, 37 (1952).
- Maini et al. [2012] P. Maini, T. Woolley, R. Baker, E. Gaffney, and L. S. Seirin, Interface Focus 2, 487 (2012).
- Scholes et al. [2019] N. S. Scholes, D. Schnoerr, M. Isalan, and M. P. Stumpf, Cell Systems 9, 243 (2019).
- Haas and Goldstein [2021] P. A. Haas and R. E. Goldstein, Physical Review Letters 126, 238101 (2021).
- Woolley et al. [2021] T. Woolley, A. Krause, and E. Gaffney, Bulletin of Mathematical Biology 83, 41 (2021).
- Bell [1965] H. E. Bell, Am. Math. Mon. 72, 292 (1965).
- Prigogine and Nicolis [1985] I. Prigogine and G. Nicolis, Self-organisation in nonequilibrium systems: Towards a dynamics of complexity, in Bifurcation Analysis: Principles, Applications and Synthesis, edited by M. Hazewinkel, R. Jurkovich, and J. H. P. Paelinck (Springer Netherlands, Dordrecht, 1985) pp. 3–12.
- Lengyel et al. [1990a] I. Lengyel, G. Rabai, and I. R. Epstein, Journal of the American Chemical Society 112, 9104 (1990a).
- Lengyel et al. [1990b] I. Lengyel, G. Rabai, and I. R. Epstein, Journal of the American Chemical Society 112, 4606 (1990b).
Supplementary Materials:
Upper limits on the robustness of Turing models
and other multiparametric dynamical systems
In these supplementary materials, we provide details on the results presented in the main text. We also present our Algorithms (1) and (2), and additional results on the Lengyel-Epstein model.
I Gershgorin’s Theorem
Theorem 1 (Gershgorin’s Theorem [6])
Let A be an n×n complex matrix, and R_i = Σ_{j≠i} |A_ij| be the sum of moduli of off-diagonal elements in the i-th row. Then, each eigenvalue of A lies in the union of the circles
|z − A_ii| ≤ R_i,  i = 1, …, n.  (S1)
The analogous result holds if columns of are considered.
I.1 Geometrical interpretation
Let us consider the radius and position of a circle in the complex plane regarding the i-th row as depicted in Figs. S1A, S1B, S1C and S1D. Depending on the sign of the diagonal term A_ii, the circle’s center is either on the negative or the non-negative side of the real axis. Let us say λ is one of the eigenvalues corresponding to the row or column i of the matrix A. It follows:
• Type 1: As shown in Fig. S1A, we have A_ii < 0 and A_ii + R_i < 0. Therefore, regardless of where the eigenvalue is inside the circle, its real part must be negative, Re(λ) < 0.
• Type 2: In Fig. S1B, we can see A_ii < 0, and the center of the circle is placed on the negative side of the real axis. However, since A_ii − R_i < 0 and A_ii + R_i > 0, the real part of λ can be negative or positive. In other words, the theorem is inconclusive about the sign of the real part of the corresponding eigenvalue.
• Type 3: The diagonal element is positive (A_ii > 0), and the center of the circle is on the positive side of the real axis. See Fig. S1C. Yet, A_ii − R_i > 0 and A_ii + R_i > 0. Thus, the real part of the eigenvalue must be positive, Re(λ) > 0.
• Type 4: Similar to Type 2, the range of Gershgorin’s circle spans from negative to positive values. Therefore, we cannot conclusively decide about the real part of the eigenvalue inside this area.
I.2 Overlapping circles
We must emphasize that when two or more Gershgorin’s circles overlap, the eigenvalues lie in the union of the circles. For instance, in Fig. S2, the union of two circles shows that the real part of both eigenvalues corresponding to rows i and j must be in the union of their diameters,
Re λ ∈ [A_ii − R_i, A_ii + R_i] ∪ [A_jj − R_j, A_jj + R_j].  (S2)
The lower and upper bounds of each circle must be
α_i = A_ii − R_i  (S3)
and
β_i = A_ii + R_i,  (S4)
respectively, and for each row (or column) i of the matrix A, there exists a corresponding interval
I_i = [α_i, β_i],  (S5)
where taking their unions results in one or more disjoint intervals,
∪_{i=1}^{n} I_i = ∪_{s=1}^{m} [α̃_s, β̃_s],  (S6)
such that
β̃_s < α̃_{s+1},  s = 1, …, m − 1.  (S7)
After taking the unions, we can see that the α̃_s and β̃_s are not necessarily equal to the α_i and β_i. Still, by defining
[α̃_m, β̃_m] with β̃_m = max_i β_i,  (S8)
this interval corresponds to the right-most disjoint interval. Recall that our main goal in studying the linear stability of a system is investigating its Jacobian eigenvalues, and if the real part of the largest eigenvalue is positive, one concludes that the system is unstable. Similarly, a negative real part of the largest eigenvalue implies the stability of the system.
As a result, the right-most disjoint interval in Eq. (S6) is enough to conclusively state the sign of the largest eigenvalue, and it introduces the following possibilities regarding the stability condition:
1. For β̃_m < 0, the real part of the largest eigenvalue is negative. Consequently, the system is stable.
2. For α̃_m > 0, the real part of the largest eigenvalue is positive. Consequently, the system is unstable.
3. For α̃_m ≤ 0 ≤ β̃_m, the situation is inconclusive.
I.3 Linear Stability Analysis
At this stage, we can use the obtained results in two different ways. The first possibility is when the Jacobian is written in terms of the model’s parameters, Θ, and we may be able to derive the right-most disjoint interval parametrically. Accordingly, the inequalities mentioned in the previous section define the regions in parameter space where the method can conclusively determine the stability/instability of the system. Nevertheless, the region corresponding to the inconclusive range requires different classification techniques. We used this technique for the Brusselator [7] (see the main paper) and the Lengyel-Epstein [9, 8] model (see subsection V.2). Note that even if the method is not conclusive for all regions of parameter space, it always yields a theoretical lower bound on the volume of the parameter space in which the system is stable/unstable.
Furthermore, the second possibility is when studying a system’s linear stability numerically. We propose Algorithm (1), which classifies a given Jacobian matrix into “stable”, “unstable” and “inconclusive” groups.
II Reaction-diffusion models
To study a pattern-forming system like the reaction-diffusion model of Eq. (1), for instance, in
∂u/∂t = f(u; Θ) + D ∇²u,  (S9)
after linearising the equations at the stationary solution u*, the Jacobian writes as
(J_k)_ij = J_ij − k² D_i δ_ij,  (S10)
where J = ∂f/∂u evaluated at u*. Thus, to study the linear stability of the matrix by using the proposed method, we can write the lower and upper bounds corresponding to row (or column) i as
α_i(k) = J_ii − k² D_i − R_i  (S11)
and
β_i(k) = J_ii − k² D_i + R_i  (S12)
for i = 1, …, n.
Then, the stability/instability criteria of the Jacobian determine the Turing pattern conditions. For unstable cases, including diffusion will not produce a Turing pattern; therefore, there is no need to complete linear stability analysis. However, it is interesting to see the effect of introducing diffusion for inconclusive and stable cases.
For a given wave number k, the inclusion of diffusion shifts all the diagonal terms accordingly,
J_ii → J_ii − k² D_i.  (S13)
Effectively, since diagonal terms correspond to the locations of the centers of Gershgorin’s circles, and only diagonal terms are affected by including the diffusion, it is geometrically equivalent to saying all the centers of Gershgorin’s circles shift to the left. Simultaneously, since the off-diagonal terms have not changed, the circles’ radii remain unchanged. This effect is shown in Fig. S4.
Indeed, for any circle with a segment on the positive side of the real axis, there exists a maximum required shift to the left, attained at a wave number denoted by k_i, that transfers the circle entirely to the negative side of the real axis. Also, after the shift, the real part of the eigenvalue corresponding to that circle must be negative for all the higher wave numbers k > k_i. Hence, depending on the sign of β_i(0), k_i derives as
k_i² = β_i(0)/D_i for β_i(0) > 0, and k_i = 0 otherwise,  (S14)
for i = 1, …, n.
Subsequently, the linear stability analysis of different wave numbers restricts to the range k ∈ [0, K] with K = max_i k_i, such that k = 0, the case with no diffusion, determines the stability of the initial condition, and 0 < k ≤ K determines the evolution of the dominant wave number in a perturbed system:
• For K = 0, when β_i(0) < 0 for all i, all the circles are on the negative side of the real axis, and including the diffusion for any given wave number shifts the circles further to the left by the amount k² D_i. Hence, all real parts of eigenvalues are negative, and diffusion cannot excite any wave number. Consequently, the system is incapable of producing a Turing pattern (“super-stable”).
• On the contrary, when α̃_m > 0 at k = 0, the initial stationary state is unstable and incapable of producing a Turing pattern (“unstable”).
• Finally, when J_0 is in the inconclusive range, it must be studied by finding its eigenvalues. And if one finds that it is stable, the maximum of the dispersion relation gives the dominant wave number for pattern formation. Meanwhile, by finding k_i for all rows (or columns), the study of the dispersion relation can be restricted to k ∈ [0, K].
Similar to the stability analysis of a single matrix, Algorithm (1) can speed up the process of checking the possibility of a Turing pattern for a given parameter set. However, the algorithm can further speed up the search for Turing pattern-forming systems.
To elaborate, let us restate the linear stability conditions of pattern-forming models: without diffusion, J_0 must be stable, and after the inclusion of diffusion, J_k must be unstable for some k > 0. In contrast, when Algorithm (1) classifies J_k as stable for every k, it is impossible to have a pattern as a stationary solution (this is why we called that class “super-stable”). The pattern-forming parameters are among those that are classified as inconclusive and need further study via their eigenvalues.
Remark 1
The dispersion relation λ(k) of a system in which all species are diffusing always satisfies
Re λ(k) < 0 for k > K.  (S15)
In other words, asymptotically, as long as all species are diffusers, we have
Re λ(k) → −∞ as k → ∞.  (S16)
III Tightening the bounds
Given an invertible matrix T, observe that B = T⁻¹AT introduces an equivalence relation between square matrices A and B such that the matrix B has the same eigenvalues as A. To see that, let us assume λ and v are an eigenvalue of A and its corresponding eigenvector,
Av = λv.  (S17)
Then,
(T⁻¹AT)(T⁻¹v) = T⁻¹Av = λ(T⁻¹v),  (S18)
which shows λ is an eigenvalue of T⁻¹AT with the corresponding eigenvector T⁻¹v. Observing that, we define an invertible diagonal matrix T as the identity matrix except for the element
T_mm = t;  (S19)
the transformed matrix has the form
(T⁻¹AT)_ij = A_ij for i, j ≠ m,  (T⁻¹AT)_im = t A_im,  (T⁻¹AT)_mj = A_mj / t.  (S20)
The transformation’s effect is similar to dividing all the elements of the row m by t and multiplying the elements of the column m by t. Consequently, the diagonal term A_mm remains the same.
Consider the rows of the resulting matrix. The interval of every row i ≠ m will change from
[A_ii − R_i, A_ii + R_i]  (S21)
to
[A_ii − R_i − (t − 1)|A_im|, A_ii + R_i + (t − 1)|A_im|],  (S22)
and the row m to
[A_mm − R_m/t, A_mm + R_m/t].  (S23)
In other words, the radii of all circles corresponding to rows other than m expand by the amount (t − 1)|A_im|, and the m-th radius shrinks by the factor t, while the centers of all circles stay at the same points. Since the eigenvalues of the transformed matrix are the same as those of the original one, one can hope the shrunk circle becomes isolated from the rest whenever the other radius expansions are smaller than that single shrink. In practice, one can find a t that isolates the circle with the largest center from the rest. To tighten the bounds, we have two distinct cases that we study separately:
• Let us assume A_mm > 0 is the largest diagonal term. To isolate the circle corresponding to row m, its leftmost point, A_mm − R_m/t, must be larger than every other circle’s rightmost point, A_ii + R_i + (t − 1)|A_im|. For a given row i, this condition writes as
|A_im| t² + (A_ii + R_i − |A_im| − A_mm) t + R_m < 0.  (S24)
The last result is a quadratic form in t, with all its three coefficients calculated from the rows i and m. To satisfy the inequality, the discriminant of the quadratic form must be positive, or
(A_ii + R_i − |A_im| − A_mm)² − 4 |A_im| R_m > 0.  (S25)
At the same time, since |A_im| > 0, to have one or more positive solutions for the quadratic equation, say t_i⁺, we must have
A_ii + R_i − |A_im| − A_mm < 0.  (S26)
And finally, A_mm − R_m/t must be positive to have an unstable Jacobian, which constrains t as
t > R_m / A_mm.  (S27)
To check this condition, one needs to find the roots of the quadratic Eq. (S24) and check that its largest solution is greater than R_m/A_mm. If all the rows i ≠ m and the largest one, m, satisfy the inequalities
(S25), (S26) and t_i⁺ > R_m/A_mm,  (S28)
there exists a t that isolates the rightmost eigenvalue, and the Jacobian is conclusively unstable.
• When the largest diagonal term is negative, or A_mm < 0, it implies all the other diagonal terms are negative too. In this case, we search for a possible shrinkage value t, such that while the circle of the row m shrinks with its upper bound on the negative side of the real axis, the growth of all the other circles keeps them on the negative side too. This argument is equivalent to a condition for the row m,
A_mm + R_m/t < 0,  (S29)
and for all the other rows i ≠ m,
A_ii + R_i + (t − 1)|A_im| < 0.  (S30)
Combining these conditions derives a bound,
R_m/(−A_mm) < t < 1 + min_{i≠m} (−A_ii − R_i)/|A_im|.  (S31)
So, if a non-empty interval can be found that satisfies the above inequalities, the Jacobian is conclusively “super-stable”.
IV Algorithms
IV.1 Algorithm 1
IV.2 Algorithm 2
V Results
V.1 Numerical comparison
To test the ratio of rejections and, consequently, the speed-up due to Algorithm (1), we use a biologically inspired model capable of producing a Turing pattern [3]. The model’s reaction-diffusion PDE with nine free parameters is written as
(S32) |
Note that the nonlinear terms are Hill functions that regulate activation and inhibition in a cell’s gene expression. The Jacobian of the linearised form of the above equations is a two-by-two matrix, and in practice, the computational cost of calculating its eigenvalues is not much different from that of our algorithm. However, we selected this model for the numerical experiment since it is easy to check the stability/instability by comparing the determinant and trace of the Jacobian and thereby ensure the algorithm’s correctness.
In this simulation, we used fixed diffusion constants and selected one billion parameter combinations from the nine-dimensional mesh grid, and applied Algorithms (1) and (2) to classify them into “unstable”, “super-stable”, “inconclusive” and “no fixed point”. Note that the case “no fixed point” refers to the parameter combinations for which the root-finding algorithm (the “hybrd” routine of the MINPACK library) could not find a stationary solution. We first used Algorithm (1) for a row-wise comparison, and after that, using the inconclusive results from the first run, we used the algorithm again for a column-wise calculation. And finally, we classified the remaining inconclusive cases using Algorithm (2). These results are presented in Table (1).
| | Total | Super-stable | Inconclusive | Unstable | No fixed point |
|---|---|---|---|---|---|
| Row-wise | 100% | 85.07% | 14.04% | 0.49% | 0.41% |
| Column-wise | 100% | 48.76% | 51.22% | 0.02% | - |
| Tighten bounds | 100% | 89.85% | 9.72% | 0.43% | - |
| Combined | 100% | 98.37% | 0.70% | 0.52% | 0.41% |
Table (1) shows that more than 99% of the parameter combinations were rejected. Even in this case, where calculating the eigenvalues is computationally cheap, there is a substantial speed-up in comparison to the usual Turing space search: the rejected super-stable cases do not need a dispersion-relation study, since we know they will stay stable even after the inclusion of diffusion.
V.2 Lengyel-Epstein model
The Lengyel-Epstein model [9, 8] is written for two species, u and v, as
∂u/∂t = a − u − 4uv/(1 + u²) + D_u ∇²u  (S33)
and
∂v/∂t = b [u − uv/(1 + u²)] + D_v ∇²v,  (S34)
for parameters a, b > 0. The Jacobian matrix for the stationary state
(u*, v*) = (a/5, 1 + a²/25)  (S35)
derives as
J = 1/(1 + u*²) [ 3u*² − 5, −4u* ; 2b u*², −b u* ].  (S36)
First, note that the positive factor 1/(1 + u*²) can be eliminated from all the following inequalities. Thus, to simplify the calculations, we do not include it in what follows.
Next, the row and column intervals write
rows: I₁ = [3u*² − 5 − 4u*, 3u*² − 5 + 4u*], I₂ = [−bu* − 2bu*², −bu* + 2bu*²]; columns: I₁ᶜ = [3u*² − 5 − 2bu*², 3u*² − 5 + 2bu*²], I₂ᶜ = [−bu* − 4u*, −bu* + 4u*].  (S37)
For rows, we can see the center of the second row, −bu*, is always negative. As a result, we can have two conclusive cases:
1. The center of the first row is on the positive side of the real line, while the other circle does not overlap the first one. These conditions can be written as
3u*² − 5 > 0 and −bu* + 2bu*² < 3u*² − 5 − 4u*,  (S38)
which simplify to two conditions:
u* > √(5/3) and b u*(2u* − 1) < 3u*² − 4u* − 5.  (S39)
Notice that the excluded region is b-dependent.
2. In the second case, the centers of both circles are on the negative side of the real line, and β_i < 0 for each. Hence, these conditions are expressed in the following inequalities,
−bu* < 0, 3u*² − 5 + 4u* < 0, −bu* + 2bu*² < 0.  (S40)
The first inequality is always satisfied. And the remaining ones find an upper bound for u*,
u* < (−2 + √19)/3 and u* < 1/2.  (S41)
Finally, the three inequalities reduce to one,
u* < 1/2.  (S42)
Again, for columns, the center of the second column is always negative. Using the same argument and without reiterating it, we have the following cases:
1. The center of the first column is on the positive side of the real line, while the other circle does not overlap it:
3u*² − 5 > 0 and −bu* + 4u* < 3u*² − 5 − 2bu*².  (S43)
2. The centers of both circles are on the negative side of the real line, and their radii are smaller than the distance between their centers and the origin:
3u*² − 5 + 2bu*² < 0 and −bu* + 4u* < 0.  (S44)
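The fixed point and the sign of the second-row center can be verified numerically. The explicit kinetics below are the standard two-variable CIMA (Lengyel-Epstein) form, stated here as an assumption consistent with the fixed point u* = a/5 in Eq. (S35); the parameter values are illustrative, and the Jacobian is built by central finite differences rather than symbolically:

```python
import numpy as np

A_PAR, B_PAR = 10.0, 2.0   # illustrative parameter values

def f(u, v, a=A_PAR, b=B_PAR):
    # Standard two-variable Lengyel-Epstein (CIMA) kinetics (assumed form,
    # chosen to match the fixed point u* = a/5 of Eq. (S35)).
    return np.array([a - u - 4.0*u*v/(1.0 + u**2),
                     b*(u - u*v/(1.0 + u**2))])

u_star = A_PAR / 5.0
v_star = 1.0 + u_star**2

def jacobian(u, v, h=1e-6):
    # Central finite differences around the stationary state.
    J = np.zeros((2, 2))
    for j, (du, dv) in enumerate([(h, 0.0), (0.0, h)]):
        J[:, j] = (f(u + du, v + dv) - f(u - du, v - dv)) / (2.0*h)
    return J

J = jacobian(u_star, v_star)
fixed_point_ok = bool(np.allclose(f(u_star, v_star), 0.0, atol=1e-12))
second_row_center_negative = bool(J[1, 1] < 0)   # center -b u* / (1 + u*^2)
```

With a = 10 and b = 2 this gives u* = 2 and J[1, 1] ≈ −b u*/(1 + u*²) = −0.8, matching the claim that the second-row center is always negative.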