
Noise-induced transitions in gene circuits: a perturbative approach for slow noise

Gerardo Aquino (Gerardo.Aquino@apha.gov.uk), Department of Computing, Goldsmiths, University of London, London, UK
Andrea Rocco (A.Rocco@surrey.ac.uk), Department of Physics, University of Surrey, GU2 7XH Guildford, UK, and Department of Microbial Sciences, University of Surrey, GU2 7XH Guildford, UK
Abstract

We consider a generic class of gene circuits affected by nonlinear extrinsic noise. To address this nonlinearity we introduce a general perturbative methodology based on assuming timescale separation between the noise and the gene dynamics, with fluctuations exhibiting a large but finite correlation time. We apply this methodology to the toggle switch, and by considering biologically relevant log-normal fluctuations, we find that the system exhibits noise-induced transitions: it becomes bimodal in regions of parameter space where it would be deterministically monostable. We show that by including higher order corrections our methodology yields correct predictions for the occurrence of transitions even when the correlation time of the fluctuations is not so large, thereby overcoming limitations of previous theoretical approaches. Interestingly, we find that at intermediate noise intensities the noise-induced transition in the toggle switch affects one of the genes involved, but not the other.

I Introduction

Extrinsic noise has been investigated thoroughly over the last decades in a variety of physical and chemical systems Horsthemke84 . It has emerged as an active dynamical player, which can produce so-called noise-induced transitions to bimodal dynamics in systems otherwise deterministically monostable.

In chemical and physical systems, the search for and possible emergence of such transitions rely on the multiplicative nature of the corresponding stochastic dynamics, which is mathematically well characterized when the noise appears linearly in the system’s dynamical equations and is Gaussian and white. In this case the Stratonovich prescription for interpreting the corresponding stochastic integral leads to the emergence of the so-called Stratonovich drift in the Langevin and Fokker-Planck dynamics, which is responsible for dramatic changes in the stability properties of stationary states, and possibly in their number Gardiner85 .

In biological systems, however, none of the assumptions of the noise being linear, Gaussian and white is usually fulfilled. In gene regulatory networks, for instance, the noise is often non-Gaussian Cai06 ; Bengtsson05 , it is usually characterized by correlation times that can easily exceed cell cycle times Rosenfeld05 , and it appears mostly in a nonlinear fashion in the dynamics determining gene expression.

While it can be very hard to treat exactly the corresponding stochastic dynamics under these conditions, a geometric construction based on noise filtering has been introduced Ochab-Marcinek10 ; Ochab-Marcinek17 . In essence, this approach relies on the transformation properties of probability distributions of stochastic variables related by a given input/output transformation. Conservation of probability allows one to express the output probability distribution in terms of the input probability distribution, which results in non-trivial dynamics when the relation between the stochastic variables is nonlinear.

This approach has been rederived dynamically in Aquino20 by extending the so-called switching-curve approximation introduced in Arnold78 ; Reichl82 to nonlinear noise. The dynamical derivation highlights the crucial role played by the timescales of the system. The approximation works well when the input noise is much slower (in the limit, infinitely slower) than the dynamics of the system of interest. In a gene expression process, this translates into extrinsic noise being much slower than protein degradation and synthesis Aquino20 .

In this paper we take up this issue again, and develop a perturbative framework that goes beyond the approximation defined in Aquino20 by extending that approach to large but finite correlation times of the noise. We show that the inclusion of higher order perturbative terms can dramatically modify the analysis of the emergence of noise-induced transitions, by removing transitions that are present at lower orders, and therefore provides a qualitatively and quantitatively more accurate description of the stochastic dynamics.

We apply our formalism to the case of the toggle switch Gardner00 , and show that when log-normal noise is considered the system undergoes a transition in a region of parameter space where the deterministic system would be monostable. We also find that tuning the noise intensity may give rise to a situation in which the marginalized probabilities of the genes involved show different qualitative features, with overlapping unimodal and bimodal behaviours. We name this feature 'partial bimodality', and discuss its origin in the context of noise propagation across the network.

The paper is organized as follows. In Section II we present the genetic circuits considered here. In Section III we introduce our perturbative approach, and develop it up to the second perturbative order for the generic circuit introduced in Section II (with the third perturbative order described for the toggle switch in the Appendix). In Section IV we introduce the toggle switch as an application of the methodology, and assess the perturbative framework by comparing the contributions of different orders to direct stochastic simulations. In Section V we draw conclusions on the methodology introduced, and make some final remarks.

II Genetic circuits

We consider a generic circuit of interacting genes. We make the assumption that the half-life of mRNAs is much shorter than that of the corresponding proteins, so that mRNA dynamics can be adiabatically eliminated, and gene expression can be described in a single step of combined transcription and translation Rosenfeld02 . We then describe the system by decomposing the rate equations for each synthesized protein into a production term and a degradation term as follows:

\frac{d\mathbf{x}}{dt}=\mathbf{g}(\mathbf{x},R)-\mathbf{K}\mathbf{x}\,. \qquad (1)

Here $\mathbf{x}$ is an $n$-dimensional vector whose components are the species $x_i$ involved, while $\mathbf{K}$ is a diagonal matrix whose diagonal elements are the degradation rates $k_i=1/\tau_{x_i}$ of each species $x_i$, with $\tau_{x_i}$ the respective half-lives. The function $\mathbf{g}(\mathbf{x},R)$ is a Hill-like function $\mathbb{R}^n\times\mathbb{R}\to\mathbb{R}^n$ of the concentrations of the $n$ species $\mathbf{x}$, and $R$ is a factor that exerts control on the dynamics of $\mathbf{x}$. The factor $R$ can be thought of as an external control parameter (for instance a binding constant), or any other species at high enough concentration that it is unaffected by the dynamics of $\mathbf{x}$ (for instance an external signalling molecule, or a transcription factor). For the sake of simplicity, we here limit ourselves to a single factor $R$, but the procedure can be generalised to multiple factors.

III Perturbative calculation

Let us now consider fluctuations acting on $R$. We describe them by adding a generic Langevin-type dynamics to Eq. (1),

\frac{d\mathbf{x}}{dt} = \mathbf{g}(\mathbf{x},R)-\mathbf{K}\mathbf{x}\,, \qquad (2)
\frac{dR}{dt} = \frac{\mu(R)}{\tau}+\sqrt{\frac{D}{\tau}}\,\nu(R)\,\xi(t)\,, \qquad (3)

with $\tau$ the timescale of the process $R$. In Eq. (3) the functions $\mu(R)$ and $\nu(R)$ are kept generic for now; their specification allows us to reproduce different noise distributions. Also, the noise $\xi(t)$ is assumed Gaussian and white, with correlator $\langle\xi(t)\xi(t')\rangle=2\delta(t-t')$.

By rescaling time as $t\to t/\tau_{M}$, with $\tau_{M}=1/k_{M}$ the largest degradation time of the species involved (we assume that $\tau_{x_{i}}\lessapprox\tau_{M}$ for $x_{i}\neq x_{M}$), we obtain

\frac{d\mathbf{x}}{dt} = \varepsilon\,\bm{\gamma}(\mathbf{x},R)-\bar{\mathbf{K}}\mathbf{x}\,, \qquad (4)
\frac{dR}{dt} = \varepsilon\,\mu(R)+\sqrt{\varepsilon D}\,\nu(R)\,\xi(t)\,, \qquad (5)

where $\varepsilon=\tau_{M}/\tau$, $\bm{\gamma}=\tau\mathbf{g}$ and $\bar{\mathbf{K}}=\mathbf{K}/k_{M}$, with $\bar{\mathbf{K}}$ still diagonal and $\bar{\mathbf{K}}_{ii}=\kappa_{ii}=k_{x_{i}}/k_{M}=O(1)$. In order for the dynamics of $R$ to be slower than that of all the species $x_{i}$, we consider the condition $\tau_{M}\ll\tau$, implying that $\varepsilon$ is a small parameter, $\varepsilon\ll 1$.

This naturally leads to seeking a solution of the system (4) and (5) in terms of a perturbative expansion in orders of ε\varepsilon. This perturbative approach is inspired by the one presented in Aquino20 , with the important difference that the latter was based on assuming the limit τ\tau\to\infty, while here we consider instead the limit τM0\tau_{M}\to 0 (i.e. ε0\varepsilon\to 0) with τ\tau large but fixed and finite. This different perspective keeps the function 𝜸=τ𝐠{\bm{\gamma}}=\tau{\bf g} in (4) fixed in the limit ε0\varepsilon\to 0, and allows us to obtain a systematic perturbative expansion in ε\varepsilon. As a result, the approach presented here allows us to go beyond the lowest order calculation of Aquino20 , accounting for the noise filtering approach of Ochab-Marcinek10 ; Ochab-Marcinek17 , and to obtain all perturbative corrections analytically at any order in ε\varepsilon.

Let us define the Fokker-Planck operator $L_{\rm FP}$ for the $R$ process as

L_{\rm FP}(R)=\frac{\partial}{\partial R}\left[-(\mu+D\nu\nu^{\prime})+D\frac{\partial}{\partial R}\left(\nu^{2}\right)\right]\,, \qquad (6)

where we have assumed the Stratonovich interpretation for the multiplicative noise equation (5).

The total Fokker-Planck equation for the time-dependent probability density $W_{t}(\mathbf{x},R)$ is therefore:

\frac{\partial}{\partial t}W_{t}(\mathbf{x},R)=\left[-\nabla\cdot\left(\varepsilon\bm{\gamma}(\mathbf{x},R)-\bar{\mathbf{K}}\mathbf{x}\right)+\varepsilon L_{\rm FP}(R)\right]W_{t}(\mathbf{x},R)\,. \qquad (7)

We now aim to compute the stationary solution of Eq. (7), namely the time-independent probability density $W(\mathbf{x},R)$ satisfying

\left[\nabla\cdot\left(\varepsilon\bm{\gamma}(\mathbf{x},R)-\bar{\mathbf{K}}\mathbf{x}\right)-\varepsilon L_{\rm FP}(R)\right]W(\mathbf{x},R)=0\,, \qquad (8)

obtained by imposing $\frac{\partial}{\partial t}W_{t}(\mathbf{x},R)=0$ in (7). We seek a solution of the type

W(\mathbf{x},R)=\bar{W}(R)\,\psi(\mathbf{x},R)\,, \qquad (9)

where $\bar{W}$ is chosen so that $L_{\rm FP}\bar{W}(R)=0$, since the dynamics of the variable $R$ is independent of $\mathbf{x}$ (but not vice versa). In general it can be shown Gardiner85 that $\bar{W}(R)$ can be written as

\bar{W}(R)=\frac{\mathcal{N}}{\nu(R)}\exp\left(\frac{1}{D}\int^{R}dR^{\prime}\,\frac{\mu(R^{\prime})}{\nu^{2}(R^{\prime})}\right)\,, \qquad (10)

while it must also hold that

\int d\mathbf{x}\,W(\mathbf{x},R)=\bar{W}(R)\;\Rightarrow\;\int d\mathbf{x}\,\psi(\mathbf{x},R)=1\,. \qquad (11)
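Eq. (10) can be evaluated numerically for any choice of $\mu(R)$ and $\nu(R)$. As an illustration only (ours, not part of the original analysis), the following Python sketch builds $\bar{W}(R)$ by trapezoidal quadrature of the exponent and fixes $\mathcal{N}$ by normalization on a grid; the specific drift, noise amplitude and grid bounds are placeholder choices.

```python
import numpy as np

def trapz(f, x):
    # simple trapezoidal rule
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def stationary_density(mu, nu, R, D):
    """Numerical evaluation of Eq. (10): W_bar(R) ~ exp[(1/D) int mu/nu^2 dR'] / nu(R),
    normalized on the grid R (assumes nu(R) > 0 on the grid)."""
    integrand = mu(R) / nu(R)**2
    # cumulative trapezoidal integral of mu/nu^2 from R[0] to each grid point
    cumint = np.concatenate(([0.0],
                             np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(R))))
    W = np.exp(cumint / D) / nu(R)
    return W / trapz(W, R)          # the constant N is fixed by normalization

# Example with placeholder choices: Ornstein-Uhlenbeck-like drift, additive noise
R = np.linspace(0.01, 3.0, 2000)
W = stationary_density(lambda r: -(r - 1.0), lambda r: np.ones_like(r), R, D=0.05)
print("normalization check:", trapz(W, R))   # ~1
```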

III.1 Zeroth order solution

We begin by assuming the following perturbative expansion for $W(\mathbf{x},R)$ of generic order $m$ in powers of $\varepsilon$

W(\mathbf{x},R)=\sum^{m}_{j=0}\frac{\varepsilon^{j}}{j!}W^{(j)}(\mathbf{x},R)+O(\varepsilon^{m+1})\,. \qquad (12)

By substituting this expansion into the left-hand side of (8), and keeping only the zeroth order terms in $\varepsilon$, we obtain

\left(\nabla\cdot\mathbf{x}\right)W^{(0)}(\mathbf{x},R)=0\,, \qquad (13)

which immediately leads to the zeroth order solution

W^{(0)}(\mathbf{x},R)=\delta^{n}(\mathbf{x})\,\bar{W}(R)=\bar{W}(R)\prod_{i=1}^{n}\delta(x_{i})\,. \qquad (14)

III.2 First order correction

Having evaluated $W^{(0)}$ we can again substitute the expansion (12) into the left-hand side of (8), this time keeping all terms up to first order in $\varepsilon$. We obtain:

-\nabla\cdot\left(\bm{\gamma}(\mathbf{x},R)\,W^{(0)}-\bar{\mathbf{K}}\mathbf{x}\,W^{(1)}\right)+L_{\rm FP}(R)\,W^{(0)}=0\,,

which, using the property $x\,\frac{d^{n}\delta(x)}{dx^{n}}=-n\,\frac{d^{n-1}\delta(x)}{dx^{n-1}}$ and noticing that $L_{\rm FP}(R)\,W^{(0)}=0$, leads to the solution

W^{(1)}(\mathbf{x},R)=-\bar{W}(R)\sum_{i}\frac{\delta^{\prime}(x_{i})}{\kappa_{ii}}\,\gamma_{i}(\mathbf{x},R)\prod_{j\neq i}\delta(x_{j})\,. \qquad (15)

Therefore, up to first order in $\varepsilon$ we can write

W(\mathbf{x},R) = W^{(0)}(\mathbf{x},R)+\varepsilon W^{(1)}(\mathbf{x},R)
= \bar{W}(R)\left[\delta^{n}(\mathbf{x})-\varepsilon\sum_{i}\frac{\delta^{\prime}(x_{i})}{\kappa_{ii}}\,\gamma_{i}(\mathbf{x},R)\prod_{j\neq i}\delta(x_{j})\right]
\simeq \bar{W}(R)\;\delta^{n}\!\left(\mathbf{x}-\frac{\varepsilon}{\bar{\mathbf{K}}}\bm{\gamma}(\mathbf{x},R)\right)\,, \qquad (16)

where the last equality is meant in the sense of distributions, and is also valid up to first order in $\varepsilon$:

\int d\mathbf{x}\,\delta^{n}\!\left(\mathbf{x}-\frac{\varepsilon}{\bar{\mathbf{K}}}\bm{\gamma}(\mathbf{x},R)\right)h(\mathbf{x})=\int d\mathbf{x}\left[\delta^{n}(\mathbf{x})-\varepsilon\sum_{i}\frac{\delta^{\prime}(x_{i})}{\kappa_{ii}}\,\gamma_{i}(\mathbf{x},R)\prod_{j\neq i}\delta(x_{j})\right]h(\mathbf{x})+O(\varepsilon^{2})\,, \qquad (17)

for any test function $h(\mathbf{x})$. We notice here that this first order result coincides with the result of the nonlinear noise filtering approach proposed in Ochab-Marcinek10 and derived dynamically in Aquino20 . In fact the delta function in Eq. (16) defines the (approximate) steady state solution, determined by the condition

\mathbf{z}(\mathbf{x},R)=\mathbf{x}-\frac{\varepsilon}{\bar{\mathbf{K}}}\bm{\gamma}(\mathbf{x},R)=0\,. \qquad (18)

This solution coincides with the steady state solution of Eqs. (2) and (3) in the limit $\tau\to\infty$ of infinitely slow noise (i.e. constant $R$ on the timescale of the dynamics of $\mathbf{x}$), and corresponds trivially to a change of variable $R\to\mathbf{x}$.

For instance, in the one-dimensional case, when $\gamma(x,R)=\gamma(R)$ (i.e. $g(x,R)=g(R)$) depends monotonically on $R$ only, one can use the one-dimensional version of Eq. (16) and, after integrating over $R$ via

p(x)=\int dR\,W(x,R)=\int dR\,\bar{W}(R)\,\delta\!\left(x-\frac{\varepsilon}{\kappa}\gamma(x,R)\right)=\int dR\,\bar{W}(R)\,\delta\!\left(x-\frac{\varepsilon}{\kappa}\gamma(R)\right)\,, \qquad (19)

obtain the transformation of the probability density function

p(x)=\bar{W}(r(x))\,\left|\frac{dr(x)}{dx}\right|\,. \qquad (20)

In Eq. (20), $r(x)$ descends from the one-dimensional version of (18), i.e. $x=\frac{\varepsilon}{\kappa}\gamma(R)$, and therefore $r(x)=R=\gamma^{-1}(\kappa x/\varepsilon)=g^{-1}(kx)$, with $\kappa=k/k_{M}$ the only entry of the one-dimensional matrix $\bar{\mathbf{K}}$.

This solution corresponds to the nonlinear filtering approach introduced in Ochab-Marcinek10 ; Ochab-Marcinek17 , where the Hill-like function $g$ acts as a transfer function of the regulator $R$, treated as an infinitely slowly fluctuating variable, and produces a corresponding distribution of the stationary protein concentration $x$.
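As a concrete illustration of the one-dimensional filtering formula (20) (ours, not taken from the paper), the sketch below considers a single repressed gene with transfer function $g(R)=g_1/(1+\rho R)$ and a log-normal stationary input distribution of the type used later in Eq. (53); all parameter values ($g_1$, $\rho$, $k$, $\bar R$, $D$) are placeholders.

```python
import numpy as np

# Transfer function of the slow regulator R: repressive Hill with beta = 1
g1, rho, k = 1e-2, 10.0, 1e-4                 # placeholder rates
x_max = g1 / k                                # stationary output x = g(R)/k lies in (0, g1/k)

# Log-normal stationary distribution of R, cf. Eq. (53) (placeholder Rbar, D)
Rbar, D = 0.87, 0.5
W_bar = lambda R: np.exp(-(np.log(R / Rbar) + D / 2)**2 / (2 * D)) / (R * np.sqrt(2 * np.pi * D))

# Eq. (20): p(x) = W_bar(r(x)) |dr/dx|, with r(x) = g^{-1}(k x) the inverse transfer function
def p_x(x):
    r = (g1 / (k * x) - 1.0) / rho            # invert x = g1 / (k (1 + rho R))
    drdx = g1 / (rho * k * x**2)              # |dr/dx|
    return W_bar(r) * drdx

x = np.linspace(0.001, 0.999, 2000) * x_max
px = p_x(x)
print("normalization check:", np.sum(0.5 * (px[1:] + px[:-1]) * np.diff(x)))   # ~1
```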

However, this perturbative approach can be extended to describe noise with finite $\tau$ beyond the first order, providing non-trivial corrections to the resulting dynamics, as shown in the next subsection, as well as to generic $n$-dimensional systems by using the implicit function theorem.

In the multi-dimensional case, in fact, when $\mathbf{z}(\mathbf{x},R)$ is a continuously differentiable function $\mathbb{R}^{n+1}\to\mathbb{R}^{n}$, the implicit function theorem tells us that a locally invertible function $r(\mathbf{x})=R$ exists such that $\mathbf{z}(r^{-1}(R),R)=0$. It follows that the probability density for $\mathbf{x}$ can be determined from $\bar{W}(R)$ via a change of variable as

p(\mathbf{x})=\bar{W}(r(\mathbf{x}))\,|J|\,. \qquad (21)

Here $|J|$ is the Jacobian determinant of the transformation $(R=r(\mathbf{x}),x_{2},\dots,x_{n})\to(x_{1},\dots,x_{n})$, which can be obtained, without knowing the explicit form of $r$, from the function $\mathbf{z}$ in Eq. (18) using the implicit function theorem in $n$ dimensions. Eq. (21) generalises Eq. (20) to the case of $n$ species, with $r(\mathbf{x})$ now depending on all species $x_{1},\dots,x_{n}$ and defined only implicitly via Eq. (18).

III.3 Second order correction

In order to evaluate the second order correction we first need to evaluate $L_{\rm FP}(R)\,W^{(1)}(\mathbf{x},R)$. From Eq. (15) and the property $L_{\rm FP}(R)\,\bar{W}(R)=0$ it follows that

-L_{\rm FP}(R)\,W^{(1)}(\mathbf{x},R) = \sum_{i}\frac{\delta^{\prime}(x_{i})}{\kappa_{ii}}\prod_{k\neq i}\delta(x_{k})\,L_{\rm FP}(R)\left[\gamma_{i}(\mathbf{x},R)\bar{W}(R)\right] = \sum_{i}\frac{\delta^{\prime}(x_{i})}{\kappa_{ii}}\,G_{i}(\mathbf{x},R)\prod_{k\neq i}\delta(x_{k})\,, \qquad (22)

where $G_{i}(\mathbf{x},R)=L_{\rm FP}(R)\left[\gamma_{i}(\mathbf{x},R)\bar{W}(R)\right]$, which, using Eq. (6), leads to:

G_{i}(\mathbf{x},R) = -\left(\mu(R)-3D\nu(R)\nu^{\prime}(R)\right)\bar{W}(R)\,\partial_{R}\gamma_{i}(\mathbf{x},R)+D\nu(R)^{2}\left(2\bar{W}^{\prime}(R)\,\partial_{R}\gamma_{i}(\mathbf{x},R)+\bar{W}(R)\,\partial_{R}^{2}\gamma_{i}(\mathbf{x},R)\right)\,. \qquad (23)

Having evaluated $W^{(1)}(\mathbf{x},R)$ we can now repeat the same procedure as for the first order, i.e. substitute the expansion (12) into the left-hand side of (8) and keep this time all terms up to second order in $\varepsilon$. We obtain:

-\nabla\cdot\left(\bm{\gamma}(\mathbf{x},R)\,W^{(1)}-\bar{\mathbf{K}}\mathbf{x}\,\frac{W^{(2)}}{2}\right)+L_{\rm FP}(R)\,W^{(1)}(\mathbf{x},R)=0\,, \qquad (24)

which, using (22), can be rewritten as

\nabla\cdot\left(\bm{\gamma}(\mathbf{x},R)\,W^{(1)}-\bar{\mathbf{K}}\mathbf{x}\,\frac{W^{(2)}}{2}\right)+\sum_{i}\frac{\delta^{\prime}(x_{i})}{\kappa_{ii}}\,G_{i}(\mathbf{x},R)\prod_{k\neq i}\delta(x_{k})=0\,. \qquad (25)

From the structure of this equation and from (16) we deduce that the solution for $W^{(2)}(\mathbf{x},R)$ must have the following form:

W^{(2)}(\mathbf{x},R) = \bar{W}(R)\sum_{i,q}\left[\left(A_{iq}\,\delta^{\prime}(x_{i})+B_{iq}\,\delta^{\prime\prime}(x_{i})\right)\delta(x_{q})+C_{iq}\,\delta^{\prime}(x_{i})\delta^{\prime}(x_{q})\right]\prod_{k\neq i,q}\delta(x_{k})\,. \qquad (26)

The coefficients $A_{iq}$, $B_{iq}$, $C_{iq}$ can be obtained by inserting (26) into (25), and read

A_{iq} = \left(-\frac{2}{\kappa_{ii}^{2}}\bar{G}_{i}(\mathbf{x},R)+\tilde{A}_{i}(\mathbf{x},R)\right)\delta_{iq}\,,\quad\text{with}\quad \frac{\tilde{A}_{i}(\mathbf{x},R)}{x_{i}}-\partial_{x_{i}}\tilde{A}_{i}(\mathbf{x},R)=-\frac{2}{\kappa_{ii}^{2}}\,\partial_{x_{i}}\bar{G}_{i}(\mathbf{x},R)\,, \qquad (27a)
B_{iq} = \left(\frac{\gamma_{i}(\mathbf{x},R)}{\kappa_{ii}}\right)^{2}\delta_{iq}\,, \qquad (27b)
C_{iq} = \frac{\gamma_{i}(\mathbf{x},R)\,\gamma_{q}(\mathbf{x},R)}{\kappa_{ii}\,\kappa_{qq}}\,(1-\delta_{iq})\,, \qquad (27c)

with $\bar{G}_{i}=G_{i}/\bar{W}(R)$. Therefore we make the following ansatz for the stationary probability density $W(\mathbf{x},R)$:

W(\mathbf{x},R)=\bar{W}(R)\,\delta^{n}\!\left(\mathbf{x}-\frac{\varepsilon}{\bar{\mathbf{K}}}\bm{\gamma}(\mathbf{x},R)-\frac{\varepsilon^{2}}{2}\mathbf{Q}(\mathbf{x},R)\right)+O(\varepsilon^{3})\,, \qquad (28)

with

\mathbf{Q}(\mathbf{x},R)=\frac{2}{\bar{\mathbf{K}}^{2}}\bar{\mathbf{G}}(\mathbf{x},R)-\tilde{\mathbf{A}}(\mathbf{x},R)\,. \qquad (29)

The solution given by the ansatz (28) in fact coincides up to second order in $\varepsilon$ with the solution derived from the perturbative expansion (12) at the same order, i.e. including Eq. (26) with the coefficients (27).

The delta function in Eq. (28) is similar to the one in (16) and therefore, as seen for the first order in (18), it defines the (approximate) steady state solution of Eqs. (2) and (3). This is determined by the condition

\mathbf{z}(\mathbf{x},R)=\mathbf{x}-\frac{\varepsilon}{\bar{\mathbf{K}}}\bm{\gamma}(\mathbf{x},R)-\frac{\varepsilon^{2}}{2}\mathbf{Q}(\mathbf{x},R)=0\,, \qquad (30)

which, in the case that $\nabla\cdot\frac{1}{\bar{\mathbf{K}}}\bm{\gamma}(\mathbf{x},R)=\mathrm{const}$, implying in turn $\nabla\cdot\frac{1}{\bar{\mathbf{K}}}\mathbf{G}(\mathbf{x},R)=0$ and therefore (neglecting null contributions in the distributional sense) $\tilde{\mathbf{A}}(\mathbf{x},R)=0$, simplifies to

\mathbf{z}(\mathbf{x},R)=\mathbf{x}-\frac{\varepsilon}{\bar{\mathbf{K}}}\bm{\gamma}(\mathbf{x},R)-\frac{\varepsilon^{2}}{\bar{\mathbf{K}}^{2}}\bar{\mathbf{G}}(\mathbf{x},R)=0\,. \qquad (31)

Note that both $\mathbf{Q}(\mathbf{x},R)$ and $\bar{\mathbf{G}}(\mathbf{x},R)$ depend linearly on $\bm{\gamma}=\tau\mathbf{g}$. Therefore, when multiplied by their expansion factor $\varepsilon^{2}=\tau_{M}^{2}/\tau^{2}$, they vanish in the limit $\tau\to\infty$, leaving only the first order correction in Eqs. (28), (30) or (31). The same can be verified for all the higher order corrections in this limit. Therefore Eq. (18), obtained as the first order correction within our approach, is actually an exact solution in that limit, corresponding thereby to the noise filtering approach of Ochab-Marcinek10 ; Ochab-Marcinek17 .

Under the assumption that $\mathbf{z}(\mathbf{x},R)$ is a continuously differentiable function $\mathbb{R}^{n+1}\to\mathbb{R}^{n}$, the implicit function theorem can be applied again. As a result, the probability density for $\mathbf{x}$ can be determined from $\bar{W}(R)$ via a change of variable as shown in Eq. (21), where the Jacobian can now be obtained from the function $\mathbf{z}$ in Eq. (30) or (31).

In the following section, we show how this approach is applied with a concrete example.

IV The toggle switch

Figure 1: The toggle switch. The expression level of gene $x$ is downregulated by gene $y$ upon binding with the repressor $R$. We make the assumption that, when active, both genes synthesize the proteins $x$ and $y$ in one single step of combined transcription and translation.

As an application of our methodology we consider the toggle switch, a positive feedback loop of two genes mutually repressing each other (Fig. 1) Gardner00 . The dynamical equations describing each of the synthesized proteins have the form (1), namely:

\frac{dx}{dt} = g(y,R)-k_{1}x=\frac{g_{1}}{1+\rho_{1}(Ry)^{\beta_{1}}}-k_{1}x\,, \qquad (32)
\frac{dy}{dt} = f(x)-k_{2}y=\frac{g_{2}}{1+\rho_{2}x^{\beta_{2}}}-k_{2}y\,. \qquad (33)

Here we have assumed that protein synthesis occurs in one single stage of merged transcription and translation, which, as discussed, is valid when the half-life of the mRNA species is much shorter than the half-life of the protein species Rosenfeld02 . In addition to standard degradation dynamics for the $x$ and $y$ proteins, characterized by degradation rates $k_{1}$ and $k_{2}$ respectively, the structure of the equations is based on two Hill-like functions, which account for each of the two synthesised proteins acting as a repressor of the other gene. Here the parameters $\rho_{1,2}$ are the association constants for the binding of the respective repressing species ($Ry$ and $x$) to the promoter of the target gene, $g_{1,2}$ are the maximal expression rates of each gene, and $\beta_{1,2}$ are cooperativity parameters representing homodimerization of the relevant protein.

The bifurcation diagram of this system has been analyzed in detail in Gardner00 . The system exhibits deterministic multistable behavior for $\beta_{1},\beta_{2}>1$, and is otherwise monostable. To probe the effect of noise in a situation of deterministic monostability, in the following we therefore set $\beta_{1}=\beta_{2}=1$. In this way any transition to bimodal dynamics can only be due to the effect of noise, accounting thereby for the possible emergence of a noise-induced transition in the system.
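The monostability of the deterministic system at $\beta_1=\beta_2=1$ can be checked directly. The sketch below (our own check, not from the paper) counts the fixed points of Eqs. (32)-(33) by scanning for sign changes of the reduced steady-state equation; parameter values follow Section IV.6, with $R$ held fixed at its mean $\bar R$.

```python
import numpy as np

# Parameters as in Sec. IV.6, with beta_1 = beta_2 = 1
g1 = g2 = 1e-2          # maximal expression rates (s^-1)
k1 = k2 = 1e-4          # degradation rates (s^-1)
rho1, rho2 = 10.0, 1.0  # association constants (nM^-1)
R = 0.87                # regulator held fixed at its mean value (nM)

# Steady state: x = g1/(k1 (1 + rho1 R y)),  y = g2/(k2 (1 + rho2 x)).
# Eliminating y gives a single equation h(x) = 0 for the fixed points.
def h(x):
    y = g2 / (k2 * (1.0 + rho2 * x))
    return g1 / (k1 * (1.0 + rho1 * R * y)) - x

x = np.linspace(1e-3, g1 / k1, 20001)       # admissible range 0 < x < g1/k1
signs = np.sign(h(x))
n_roots = np.count_nonzero(np.diff(signs) != 0)
print("number of fixed points:", n_roots)   # 1: deterministically monostable
```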

To explore this further, we have made the assumption that the regulation of $x$ happens upon formation of a heterodimer of $y$ with the external (repressive) transcription factor $R$. Hence fluctuations in $R$ correspond to the insertion of nonlinear extrinsic noise into the system, and are amenable to being studied with the methodology developed here.

IV.1 Perturbative calculation

As in the more general case, let us add Eq. (3) to the equations (32) and (33) for the toggle switch, and let us assume $\tau_{x}=1/k_{1}>\tau_{y}=1/k_{2}$. Rescaling time as $t\to t/\tau_{x}$, we obtain:

\frac{dx}{dt} = \varepsilon\gamma(y,R)-x=\varepsilon\frac{\gamma_{1}}{1+\rho_{1}Ry}-x\,, \qquad (34)
\frac{dy}{dt} = \varepsilon f(x)-\kappa y=\varepsilon\frac{\gamma_{2}}{1+\rho_{2}x}-\kappa y\,, \qquad (35)
\frac{dR}{dt} = \varepsilon\mu(R)+\sqrt{\varepsilon D}\,\nu(R)\,\xi(t)\,, \qquad (36)

where $\varepsilon=\tau_{x}/\tau$, $\gamma_{1,2}=g_{1,2}\tau$ and $\kappa=k_{2}/k_{1}\simeq 1$. For simplicity we here assume $\kappa=1$.

We also assume that the timescale $\tau$ of the dynamics of $R$ satisfies $\tau\gg\tau_{x},\tau_{y}$, and therefore $\varepsilon$ is a small parameter. In this way we are assuming that the dynamics of $R$ is slower than that of $x$ and $y$.

With $L_{\rm FP}$ defined as in Eq. (6), the Fokker-Planck equation (7) becomes

\frac{\partial}{\partial t}W_{t}(x,y,R)=\left[-\frac{\partial}{\partial x}\left(\varepsilon\gamma(y,R)-x\right)-\frac{\partial}{\partial y}\left(\varepsilon f(x)-y\right)+\varepsilon L_{\rm FP}(R)\right]W_{t}(x,y,R)\,, \qquad (37)

whose stationary solution satisfies

\left[\frac{\partial}{\partial x}\left(x-\varepsilon\gamma(y,R)\right)+\frac{\partial}{\partial y}\left(y-\varepsilon f(x)\right)+\varepsilon L_{\rm FP}(R)\right]W(x,y,R)=0\,. \qquad (38)

IV.2 Zeroth order solution

Assuming the perturbative expansion (12), Eq. (13) becomes

\left(\frac{\partial}{\partial x}x+\frac{\partial}{\partial y}y\right)W^{(0)}(x,y,R)=0\,, \qquad (39)

which immediately leads to the zeroth order solution:

W^{(0)}(x,y,R)=\delta(x)\,\delta(y)\,\bar{W}(R)\,. \qquad (40)

IV.3 First order correction

We now proceed with the calculation of the first order correction $W^{(1)}(x,y,R)$. Upon substitution of (12) into (38), and by collecting terms up to first order in $\varepsilon$, we obtain

-\frac{\partial}{\partial x}\left(\gamma(y,R)W^{(0)}-xW^{(1)}\right)-\frac{\partial}{\partial y}\left(f(x)W^{(0)}-yW^{(1)}\right)+L_{\rm FP}(R)W^{(0)}=0\,, \qquad (41)

which leads to the solution

W^{(1)}(x,y,R)=-\bar{W}(R)\left(\delta^{\prime}(x)\delta(y)\gamma(y,R)+\delta^{\prime}(y)\delta(x)f(x)\right)\,, \qquad (42)

as could be derived directly from Eq. (15). Following our general procedure, we can write to first order in $\varepsilon$:

W(x,y,R) = W^{(0)}+\varepsilon W^{(1)} = \bar{W}(R)\left[\left(\delta(x)-\varepsilon\gamma(y,R)\delta^{\prime}(x)\right)\delta(y)-\varepsilon\,\delta^{\prime}(y)\delta(x)f(x)\right] = \bar{W}(R)\,\delta(x-\varepsilon\gamma(y,R))\,\delta(y-\varepsilon f(x))\,, \qquad (43)

analogous to Eq. (16). In fact, if we let

\tilde{\delta}(x,y)=\left(\delta(x)-\varepsilon\gamma(y,R)\delta^{\prime}(x)\right)\delta(y)-\varepsilon\,\delta^{\prime}(y)\delta(x)f(x)\,, \qquad (44)

then from the distributional point of view

\iint dx\,dy\,\tilde{\delta}(x,y)\,h(x,y) = h(0,0)+\varepsilon\left(\gamma(y,R)\,\frac{\partial h}{\partial x}(0,0)+f(x)\,\frac{\partial h}{\partial y}(0,0)\right) = h(\varepsilon\gamma(y,R),\varepsilon f(x))+O(\varepsilon^{2})\,, \qquad (45)

and since

h(\varepsilon\gamma(y,R),\varepsilon f(x)) = \iint dx\,dy\,\delta(x-\varepsilon\gamma(y,R))\,\delta(y-\varepsilon f(x))\,h(x,y) \qquad (46)

for any $h(x,y)$, the last equality in (43) follows. This result coincides at this order with the noise filtering approach proposed in Ochab-Marcinek10 , with $p(x)$ stemming from $\bar{W}(R)$ via the change of variables $R\to x$ induced by the (analytically solvable, in this case) condition $x-\varepsilon\gamma(\varepsilon f(x),R)=0$.

IV.4 Second order correction

We can now carry out the evaluation of the second order correction by computing $L_{\rm FP}(R)W^{(1)}$. Eq. (22) becomes

L_{\rm FP}(R)W^{(1)}(x,y,R) = -\delta^{\prime}(x)\delta(y)\,L_{\rm FP}(R)\left(\gamma(y,R)\bar{W}(R)\right) = -\delta^{\prime}(x)\delta(y)\,G(y,R)\,, \qquad (47)

where

G(y,R) = L_{\rm FP}(R)\left(\gamma(y,R)\bar{W}(R)\right) = -\left(\mu(R)-3D\nu(R)\nu^{\prime}(R)\right)\bar{W}(R)\,\partial_{R}\gamma(y,R)+D\nu(R)^{2}\left(2\bar{W}^{\prime}(R)\,\partial_{R}\gamma(y,R)+\bar{W}(R)\,\partial_{R}^{2}\gamma(y,R)\right)\,.

Eq. (24) now becomes

-\frac{\partial}{\partial x}\left(\gamma(y,R)W^{(1)}-\frac{x}{2}W^{(2)}\right)-\frac{\partial}{\partial y}\left(f(x)W^{(1)}-\frac{y}{2}W^{(2)}\right)+L_{\rm FP}(R)W^{(1)}=0\,, \qquad (49)

which, using (47), can be rewritten as

\frac{\partial}{\partial x}\left(\gamma(y,R)W^{(1)}-\frac{x}{2}W^{(2)}\right)+\frac{\partial}{\partial y}\left(f(x)W^{(1)}-\frac{y}{2}W^{(2)}\right)+\delta^{\prime}(x)\delta(y)\,G(y,R)=0\,. \qquad (50)

The form of the solution $W^{(2)}(x,y,R)$ is analogous to (26), namely

W^{(2)}(x,y,R) = \bar{W}(R)\left(A_{10}\,\delta^{\prime}(x)\delta(y)+A_{11}\,\delta^{\prime}(x)\delta^{\prime}(y)+A_{20}\,\delta^{\prime\prime}(x)\delta(y)+A_{02}\,\delta(x)\delta^{\prime\prime}(y)\right)\,. \qquad (51)

Inserting this expression in (50), one obtains the following conditions on the coefficients $A_{ij}$:

A_{10} = -2\bar{G}(y,R)\,,\quad A_{20}=\gamma(y,R)^{2}\,,\quad A_{11}=2\gamma(y,R)f(x)\,,\quad A_{02}=f(x)^{2}\,,

with $\bar{G}=G/\bar{W}$, as could also be derived directly from Eqs. (27) with the condition $\tilde{\mathbf{A}}=0$. The ansatz for the steady state probability density $W(x,y,R)$ that coincides up to second order in $\varepsilon$ with the perturbative evaluation shown above is therefore

W(x,y,R) = \bar{W}(R)\,\delta\!\left(x-\varepsilon\gamma(y,R)-\varepsilon^{2}\bar{G}(y,R)\right)\delta\!\left(y-\varepsilon f(x)\right)+O(\varepsilon^{3})\,. \qquad (52)

IV.5 Log-normal Noise

The power of the approach proposed here relies also on the fact that we can assume different types of noise, as long as these can be represented in terms of the Langevin dynamics (3). Following Aquino20 , we focus on log-normal noise, which in contrast to Gaussian noise preserves the positivity of the parameter values on which it acts. It also describes well extrinsic fluctuations Rosenfeld05 ; Shahrezaei08 and universal behaviours in bacteria and yeast Salman12 ; Brenner15 .

The stationary log-normal distribution

\bar{W}(R)=\frac{1}{R\sqrt{2\pi D}}\exp\left[-\frac{1}{2D}\left(\log\frac{R}{\bar{R}}+\frac{D}{2}\right)^{2}\right] \qquad (53)

can be derived Aquino20 by defining the stochastic process

R(t)=\bar{R}\,e^{\eta(t)}\,e^{-D/2}\,, \qquad (54)

where $\eta(t)$ is the standard Ornstein-Uhlenbeck noise,

\frac{d\eta}{dt}=-\frac{\eta}{\tau}+\sqrt{\frac{D}{\tau}}\,\xi(t)\,, \qquad (55)

with $\xi(t)$ zero-average Gaussian white noise, and $\bar{R}=\langle R(t)\rangle$, since $\langle e^{\eta(t)}\rangle=e^{D/2}$. Hence, the stochastic process (54) obeys

\frac{dR}{dt}=-\frac{R}{\tau}\left(\log\frac{R}{\bar{R}}+\frac{D}{2}\right)+\sqrt{\frac{D}{\tau}}\,R\,\xi(t)\,, \qquad (56)

which corresponds to (3) with the choice $\mu(R)=-R\left(\log(R/\bar{R})+D/2\right)$ and $\nu(R)=R$. Eq. (10) then allows the derivation of (53).
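As a quick numerical check (ours, with placeholder values for $\tau$, $D$ and $\bar R$), the sketch below generates $\eta(t)$ by Euler-Maruyama integration of Eq. (55), maps it to $R(t)$ through Eq. (54), and compares the sampled moments with those of the stationary distribution (53).

```python
import numpy as np

rng = np.random.default_rng(0)
tau, D, Rbar = 1.0e3, 0.2, 0.87        # placeholder correlation time, intensity, mean
dt, n_steps = tau / 50.0, 500_000

# Euler-Maruyama for the Ornstein-Uhlenbeck process (55);
# the correlator <xi(t)xi(t')> = 2 delta(t-t') makes the Wiener increment variance 2 dt
eta = np.empty(n_steps)
eta[0] = 0.0
kicks = rng.normal(0.0, np.sqrt(2.0 * dt), n_steps - 1)
for i in range(n_steps - 1):
    eta[i + 1] = eta[i] - eta[i] / tau * dt + np.sqrt(D / tau) * kicks[i]

R = Rbar * np.exp(eta) * np.exp(-D / 2.0)     # Eq. (54): log-normal process

burn = n_steps // 10                          # discard the initial transient
print("mean of R    :", R[burn:].mean(), "(expected", Rbar, ")")
print("var of log R :", np.log(R[burn:]).var(), "(expected D =", D, ")")
```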

IV.6 Comparison with numerical simulations

The joint probability density $p(x,y)$ for the protein distributions can be obtained by marginalizing (52) with respect to $R$:

p(x,y)=\int dR\,W(x,y,R)\,. \qquad (57)

The probability density $p_{x}(x)$ for the distribution of protein $x$ can then be obtained by marginalizing (57) with respect to $y$. In this case it is simpler to first integrate over $y$ after replacing (52) in (57); we obtain

p_{x}(x) = \int dR\,\bar{W}(R)\,\delta\!\left(x-\varepsilon\gamma(\varepsilon f(x),R)-\varepsilon^{2}\bar{G}(\varepsilon f(x),R)\right) = \int dR\,\bar{W}(R)\,\delta(z(x,R))\,, \qquad (58)

and the further integration with respect to $R$ amounts to a change of variable $R=r(x)$, where the function $r(x)$ is defined implicitly by the equation

z(x,R)=x-\varepsilon\gamma(\varepsilon f(x),R)-\varepsilon^{2}\bar{G}(\varepsilon f(x),R)=0\,. \qquad (59)

We notice that in this particular case we can also easily develop the third order correction. This is obtained following the same procedure used for the second order (see Appendix for details), leading to the following modified expression for the steady state probability density $W(x,y,R)$:

W(x,y,R)=\bar{W}(R)\,\delta\!\left(x-\varepsilon\gamma(y,R)-\varepsilon^{2}\bar{G}(y,R)-\varepsilon^{3}\bar{G}_{3}(x,y,R)\right)\delta\!\left(y-\varepsilon f(x)\right)+O(\varepsilon^{4})\,. \qquad (60)
Figure 2: Probability distributions $p_{x}(x)$ (top panel) and $p_{y}(y)$ (bottom panel) as given by numerical simulation (black line), compared to the theoretical solution (64) for $p_{x}$ and (67) for $p_{y}$, respectively, at first order (red line), second order via Eq. (59) (blue line), and third order via Eq. (63) (blue continuous line). These solutions are indistinguishable. The same colours and line styles are used in both panels. Here $\alpha=10$, $\bar{R}=0.87$ nM, $g_{0}=0.01$, $\tau=10^{7}$, $\varepsilon=0.001$, $D=0.012$.

Here, we have used (see Appendix):

\bar{G}_{3}(x,y,R)=\bar{G}_{M}(y,R)+\frac{\bar{G}_{2}(y,R)}{2x}\,, \qquad (61)

and therefore by integrating over $y$ we obtain

p_{x}(x)=\int dR\,\bar{W}(R)\,\delta(z(x,R))\,, \qquad (62)

with

z(x,R) = x-\varepsilon\gamma(\varepsilon f(x),R)-\varepsilon^{2}\bar{G}(\varepsilon f(x),R)-\varepsilon^{3}\bar{G}_{3}(x,\varepsilon f(x),R)\,, \qquad (63)

which replaces (59).

According to the implicit function theorem, from the condition $z(x,R)=0$ it follows that a function $r(x)=R$ exists such that

p_{x}(x)=\bar{W}(r(x))\left|\frac{dr}{dx}\right|\,, \qquad (64)

where $dr/dx$ can be computed by applying the implicit function theorem either to (59), for the calculation up to second order, or to (63), for the third order. Hence:

\frac{dr}{dx}=-\frac{\partial_{x}z(x,R)}{\partial_{R}z(x,R)}\,, \qquad (65)

which allows us to evaluate $p_{x}(x)$ explicitly for the toggle switch.
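The recipe of Eqs. (64)-(65) is straightforward to evaluate numerically at any perturbative order: for each $x$ one solves $z(x,R)=0$ for $R$ by root bracketing and computes $dr/dx$ from the partial derivatives of $z$. The sketch below is ours and, for brevity, uses the first order $z(x,R)=x-\varepsilon\gamma(\varepsilon f(x),R)$ with placeholder parameter values; the second and third order versions only require replacing the function `z` with Eq. (59) or (63).

```python
import numpy as np
from scipy.optimize import brentq

# Toggle-switch ingredients at first order (placeholder parameter values)
eps, g0, tau = 1e-3, 1e-2, 1e7
gamma1 = gamma2 = g0 * tau
rho1, rho2 = 10.0, 1.0
Rbar, D = 0.87, 0.012

f     = lambda x: gamma2 / (1.0 + rho2 * x)
gamma = lambda y, R: gamma1 / (1.0 + rho1 * R * y)
W_bar = lambda R: np.exp(-(np.log(R / Rbar) + D / 2)**2 / (2 * D)) / (R * np.sqrt(2 * np.pi * D))

def z(x, R):
    # steady-state condition truncated at first order, cf. Eq. (59) without the G-bar term
    return x - eps * gamma(eps * f(x), R)

def p_x(x, dh=1e-6):
    r = brentq(lambda R: z(x, R), 1e-12, 1e6)        # r(x): root of z(x,R) = 0
    dzdx = (z(x + dh, r) - z(x - dh, r)) / (2 * dh)  # finite-difference partials for Eq. (65)
    dzdR = (z(x, r + dh) - z(x, r - dh)) / (2 * dh)
    return W_bar(r) * abs(dzdx / dzdR)               # Eq. (64)

xs = np.geomspace(0.05, 0.999 * eps * gamma1, 2000)
px = np.array([p_x(x) for x in xs])
print("normalization check:", np.sum(0.5 * (px[1:] + px[:-1]) * np.diff(xs)))   # ~1
```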

Figure 3: Probability distributions $p_{x}(x)$ (top panel) and $p_{y}(y)$ (bottom panel) as given by numerical simulation (black line), compared to the theoretical solution (64) for $p_{x}$ and (67) for $p_{y}$, respectively, at first order (red lines), second order via Eq. (59) (blue dashed lines), and third order via Eq. (63) (blue continuous lines). The same colours and line styles are used in both panels. Including the third order correctly shows only one mode for $p_{x}(x)$ (blue continuous line, top panel). Here $\alpha=10$, $\bar{R}=0.87$ nM, $g_{0}=0.01$, $\tau=10^{5}$, $\varepsilon=0.1$, $D=0.012$.

Finally, in order to derive

p_{y}(y)=\int dx\int dR\,W(x,y,R)\,, \qquad (66)

it suffices to notice that at steady state we have exactly $y=\varepsilon f(x)$, and therefore $p_{y}(y)$ can in this case be derived directly from $p_{x}(x)$ via a change of variable as

p_{y}(y) = \frac{1}{\varepsilon}\,p_{x}\!\left(f^{-1}\!\left(\frac{y}{\varepsilon}\right)\right)\left|\frac{df^{-1}}{dy}\right| = \frac{p_{x}(f^{-1}(y/\varepsilon))}{\varepsilon\left|f^{\prime}(f^{-1}(y/\varepsilon))\right|}\,. \qquad (67)

Equations (64) and (67) so constructed can then be compared, at any given perturbative order, with direct stochastic simulations, in order to highlight the predictive power of the theoretical approach developed here.

We perform direct stochastic simulations of the fluctuations in $R$ by using the standard approach to generate an Ornstein-Uhlenbeck process, converting it via Eq. (54) into log-normal noise, and then performing ordinary integration of the equations (32) and (33). The probability densities are obtained by averaging a single trajectory over long times. Parameters are set so as to reproduce biologically relevant conditions. In all simulations and in the analytical solution, we fix the maximal gene expression rates at the values $g_{1}=g_{2}=g_{0}=10^{-2}\,s^{-1}$, which is representative of typical gene expression in bacteria Thattai01 , and the degradation rates $k_{1}=k_{2}=10^{-4}\,s^{-1}$, which are representative of protein half-lives in the range of 2 hours. We also fix the association constants of the transcription factors to DNA, $\rho_{1}=10\,\mathrm{nM}^{-1}$ and $\rho_{2}=1\,\mathrm{nM}^{-1}$, and $\bar{R}=0.87$ nM Aquino20 .
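For concreteness, a minimal version of this simulation scheme (ours, not the authors' code; the integration step and run length are placeholder choices, and longer runs are needed for well-converged histograms) reads:

```python
import numpy as np

rng = np.random.default_rng(1)

# Rates and constants as quoted in the text
g1 = g2 = 1e-2            # maximal expression rates (s^-1)
k1 = k2 = 1e-4            # degradation rates (s^-1)
rho1, rho2 = 10.0, 1.0    # association constants (nM^-1)
Rbar, D, tau = 0.87, 0.012, 1e7    # log-normal extrinsic noise parameters

dt, n_steps = 100.0, 4_000_000     # placeholder step and run length
kicks = rng.normal(0.0, np.sqrt(2.0 * dt), n_steps)   # <xi xi'> = 2 delta(t-t')
x, y, eta = 0.0, 0.0, 0.0
xs = np.empty(n_steps)

for i in range(n_steps):
    # Ornstein-Uhlenbeck process (55), Euler-Maruyama
    eta += -eta / tau * dt + np.sqrt(D / tau) * kicks[i]
    R = Rbar * np.exp(eta - D / 2.0)           # Eq. (54): log-normal extrinsic noise
    # ordinary Euler step of the toggle switch, Eqs. (32)-(33), with beta_1 = beta_2 = 1
    x += (g1 / (1.0 + rho1 * R * y) - k1 * x) * dt
    y += (g2 / (1.0 + rho2 * x) - k2 * y) * dt
    xs[i] = x

# the time-averaged histogram of the single long trajectory approximates p_x(x)
hist, edges = np.histogram(xs[n_steps // 10:], bins=200, density=True)
```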

We then focus on two different values of $\varepsilon$ in order to explore the contribution of the different perturbative orders. Since we assume $k=10^{-4}\,s^{-1}$, choosing $\tau=10^{7}$ s corresponds to assuming $\varepsilon=0.001$, while the choice $\tau=10^{5}$ s reflects a larger value of $\varepsilon$, namely $\varepsilon=0.1$. It is in this second regime, when the timescale separation between the dynamics of $x$ and the $R$ fluctuations is less pronounced, that we expect higher order terms in the perturbative expansion to become relevant.

In Fig. 2 we show the comparison between the perturbative expansion and stochastic simulations for $\varepsilon=0.001$. The theoretical analysis predicts bimodality for the concentration $x$, and this is excellently confirmed by the numerical result. In this case the first order solution Eq. (43), the second order solution via Eq. (59), and the third order solution via Eq. (63) practically coincide.

In Fig. 3 (top panel) we show a particular case where using the first order solution Eq. (43) erroneously leads to the conclusion that bimodality emerges, while including the second and third order corrections gives a quantitatively and qualitatively correct description of the properties of the density function, correctly predicting a unimodal distribution for both the $x$ and $y$ protein concentrations.

IV.7 Noise-induced transitions: Partial bimodality

Figure 4: Noise-induced transitions and partial bimodality in the toggle switch. The distributions $p_{x}(x)$ and $p_{y}(y)$ as given by numerical simulations (black lines), compared to the analytical solutions (43) (red line and blue line for $p_{y}(y)$ and $p_{x}(x)$, respectively). Here $\alpha=10$, $\bar{R}=0.8$ nM, $g_{0}=0.01$, $\tau=10^{7}$, $\varepsilon=0.001$; from top to bottom $D=0.01$, $0.1$, $10$.

The analysis and simulations shown in Fig. 2 highlight the emergence of a particular phenomenon, which we name 'partial bimodality', whereby bimodal behaviour emerges for the protein concentration $x$, but not for $y$. This is clearly unexpected in standard toggle switches affected by intrinsic noise only, as in that case the bimodality would instead affect both genes involved simultaneously.

In order to obtain further insight into these dynamics, and to clarify the nature of the noise-induced transition identified here, we perform simulations and solve the corresponding first order dynamics for three different values of the noise intensity.

In Fig. 4 we show simulations and analytical predictions for the toggle switch in a regime where at low noise intensity ($D=0.01$) the dynamics are unimodal. In this case we observe that increasing the noise intensity up to $D=10$ makes bimodal behaviour emerge for both genes. Interestingly, however, we also observe that for intermediate values of the noise, $D\simeq 0.1$, a regime exists where bimodality emerges for one gene only and not for the other.

For this case we show two exemplary trajectories for the variables $x$ and $y$ in the upper panel of Fig. 5. While the variable $x$ makes complete transitions between a high value around the second mode ($x=32.5$) and low values close to the first mode ($x=0.24$), the variable $y$ remains more consistently around $y=0.2$, as evidenced by the higher peak of its distribution at the origin compared to that of $p_{x}$ (see inset in the middle panel of Fig. 4). When the $x$ variable transitions to its low state, excursions of $y$ to slightly larger values are permitted, but their amplitude is not large enough to exit the basin of attraction of the low-$y$ state. In the bottom panel of Fig. 5, instead, the case where bimodality occurs for both species is shown. These trajectories correspond to the bimodal distributions in the bottom panel of Fig. 4, with modes at $x,y=0$ and at $x=100$, $y=99$. Since $p_{x}$ has a higher peak at $x=100$ and $p_{y}$ has a higher peak at $y=0$, these species spend most of their time around these values, with sporadic transitions to values around $x=0$ and $y=99$ respectively. The amplitude of the fluctuations of the $y$ variable allows in this case the variable $y$ to exit the basin of attraction of its high state.

We attribute the different noise amplitudes experienced by the variables $x$ and $y$ to a noise propagation effect, whereby the fluctuations of the external parameter $R$ are perceived differently by the two genes. In fact, the variance of the distribution of $x$ is consistently larger than the variance of the distribution of $y$, for all input noise intensities on $R$ (parameterized by $D$), as shown in Table 1:

$D$:          0.01    0.1     10
$\sigma_x$:   56.62   325.31  590.67
$\sigma_y$:   0.88    58.57   191.17
Table 1: Variances of the $x$ and $y$ distributions corresponding to the probability distributions shown in Fig. 4. Parameter values are set as in Fig. 4.

While $x$ is under the direct influence of the fluctuations of $R$, gene $y$ feels those fluctuations, propagated through the repressive action of $x$, at a lower level. This means that while gene $x$ can be above the threshold for the transition to bimodal behavior, gene $y$ may still remain below that threshold, and may therefore still exhibit unimodal behaviour. This mechanism is in agreement with the results of the extensive study performed in Hornung08 , which demonstrated that positive feedback loops limit the range of fluctuations of the target gene.

Figure 5: Exemplary trajectories of the variables $x$ (blue line) and $y$ (red line) for parameters $\alpha=10$, $\bar{R}=0.8$ nM, $g_{0}=0.01$, $\tau=10^{7}$, $\varepsilon=0.001$, and $D=0.1$ (upper panel, corresponding to the distributions with partial bimodality in the middle panel of Fig. 4) and $D=10$ (lower panel, corresponding to the bimodal distributions in the bottom panel of Fig. 4).

V Conclusions

In this paper we have introduced an efficient methodology to deal with nonlinear extrinsic noise when the correlation time of the fluctuations is large. The derivation proposed here is based on a timescale expansion of the master equation and reproduces at first order the phenomenological approach based on nonlinear noise filtering introduced in Ochab-Marcinek10 . Importantly, our perturbative approach allows one to go beyond the dynamical derivation presented in Aquino20 and to obtain correct predictions even in a regime where the timescales characterizing the fluctuations and the gene dynamics are comparable, and where the lowest perturbative orders fail to capture the qualitative dynamical features of the system.

We also find that the toggle switch is capable of partial bimodality, in which one of the two participating genes shows full bimodal behaviour while the other gene still preserves unimodality. In terms of noise intensity, this feature is intermediate between the two extreme cases of no bimodality for either gene and full bimodality for both. As a follow-up to this observation, it will be interesting to analyze the effect of simultaneous intrinsic and extrinsic noise on the system. We expect that the interplay between intrinsic and extrinsic noise will provoke even richer dynamics, in which the balance between noise intensities will be crucial in determining the structure of the phase space.

The novel mechanism for noise-induced transitions here described and the methodology introduced are promising and worthy of further analysis and experimental validation. They will further our understanding of the fundamental and active role played by noise in biological systems.

ACKNOWLEDGMENTS

This study has been funded by the UK Biotechnology and Biological Sciences Research Council (BBSRC), under grant no. BB/L007789/1.

For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.

DATA AVAILABILITY

All details of the mathematical models developed and the data deriving from their simulation are available in the article.

APPENDIX: EVALUATION OF THIRD ORDER PERTURBATIVE CORRECTION FOR THE TOGGLE SWITCH

We can evaluate the third order correction in the perturbative expansion by carrying the procedure introduced in the main text on to the third order. We start with $W(x,y,R)=W^{(0)}+\varepsilon W^{(1)}+\varepsilon^{2}W^{(2)}/2+\varepsilon^{3}W^{(3)}/6$, and this leads to

-\frac{\partial}{\partial x}\left(\gamma(y,R)W^{(2)}-\frac{x\,W^{(3)}}{3}\right)-\frac{\partial}{\partial y}\left(f(x)W^{(2)}-\frac{y\,W^{(3)}}{3}\right)+L_{\rm FP}(R)W^{(2)}=0\,, \qquad (68)

where $W^{(2)}(x,y,R)$ is known from the previous calculation. Therefore we can evaluate $L_{\rm FP}(R)W^{(2)}$, and we obtain

L_{\rm FP}(R)W^{(2)}(x,y,R)=G_{2}(y,R)\,\delta^{\prime\prime}(x)\delta(y)+2f(x)G(y,R)\,\delta^{\prime}(x)\delta^{\prime}(y)-2G_{M}(y,R)\,\delta^{\prime}(x)\delta(y)\,, \qquad (69)

with

G_{2}(y,R) = L_{\rm FP}\left(\gamma^{2}(y,R)\bar{W}(R)\right) = 2\left(3D\nu(R)\nu^{\prime}(R)-\mu(R)\right)\bar{W}(R)\,\gamma(y,R)\,\partial_{R}\gamma(y,R)+D\nu(R)^{2}\left[4\bar{W}^{\prime}(R)\,\gamma(y,R)\,\partial_{R}\gamma(y,R)+2\bar{W}(R)\left(\gamma(y,R)\,\partial_{R}^{2}\gamma(y,R)+(\partial_{R}\gamma(y,R))^{2}\right)\right]\,, \qquad (70)

where it can be shown that

G_{M}(y,R) = L_{\rm FP}(R)\left(G(y,R)\right) = \alpha(R)\,G(y,R)+\sigma(R)\,\partial_{R}G(y,R)+\gamma(R)\,\partial_{R}^{2}G(y,R)\,, \qquad (71)

with

\alpha(R) = -\left(\mu^{\prime}(R)-D\nu^{\prime}(R)^{2}-D\nu(R)\nu^{\prime\prime}(R)\right)\,, \qquad (72)
\sigma(R) = -\left(\mu(R)-3D\nu(R)\nu^{\prime}(R)\right)\,, \qquad (73)
\gamma(R) = D\nu^{2}(R)\,. \qquad (74)

From Eq. (69) we can therefore expect $W^{(3)}(x,y,R)$ to be of the form:

W^{(3)}(x,y,R)=\bar{W}(R)\left(A_{10}\,\delta^{\prime}(x)\delta(y)+A_{11}\,\delta^{\prime}(x)\delta^{\prime}(y)+A_{12}\,\delta^{\prime}(x)\delta^{\prime\prime}(y)+A_{20}\,\delta^{\prime\prime}(x)\delta(y)+A_{21}\,\delta^{\prime\prime}(x)\delta^{\prime}(y)+A_{30}\,\delta^{\prime\prime\prime}(x)\delta(y)+A_{03}\,\delta(x)\delta^{\prime\prime\prime}(y)\right)\,. \qquad (75)

Replacing Eq. (75) in (68) and solving for the $A_{ij}$ leads to

A_{10} = -6\bar{G}_{M}(y,R)-\frac{3}{x}\bar{G}_{2}(y,R)\,, \qquad (76)
A_{11} = 6\bar{G}(y,R)\,f(x)\,, \qquad (77)
A_{12} = 3\gamma(y,R)\,f^{2}(x)\,, \qquad (78)
A_{20} = 6\bar{G}(y,R)\,\gamma(y,R)\,, \qquad (79)
A_{21} = -3\gamma^{2}(y,R)\,f(x)\,, \qquad (80)
A_{30} = -\gamma^{3}(y,R)\,, \qquad (81)
A_{03} = -f^{3}(x)\,. \qquad (82)

References

  • (1) W. Horsthemke and R. Lefever, Noise-Induced Transitions, Springer (1984).
  • (2) C.W. Gardiner, Handbook of stochastic methods, Springer (1985).
  • (3) L. Cai, N. Friedman and X.S. Xie, Nature 440, 358 (2006).
  • (4) M. Bengtsson, A. Stahlberg, P. Rorsman and M. Kubista, Genome Res. 15, 1388 (2005).
  • (5) N. Rosenfeld, J.W. Young, U. Alon, P.S. Swain, M.B. Elowitz, Science 307, 1962 (2005).
  • (6) A. Ochab-Marcinek and M. Tabaka, Proc. Natl. Acad. Sci., 107, 22096 (2010).
  • (7) A. Ochab-Marcinek, J. Jedrak, M. Tabaka, Phys. Chem. Chem. Phys. 19, 22580 (2017).
  • (8) G. Aquino and A. Rocco, Math. Biosci. Eng. 17, 6993 (2020).
  • (9) L. Arnold, W. Horsthemke, R. Lefever, Z. Physics B 29, 367 (1978).
  • (10) L.E. Reichl and W.C. Schieve, Instabilities, Bifurcations, and Fluctuations in Chemical Systems, University of Texas Press, Austin, 1982.
  • (11) T.S. Gardner, C.R. Cantor, J.J. Collins, Nature 403, 339 (2000).
  • (12) N. Rosenfeld, M.B. Elowitz, U. Alon, J. Mol. Biol. 323 785 (2002).
  • (13) V. Shahrezaei, J.F. Ollivier, P.S. Swain, Mol. Sys. Bio. 4, 196 (2008).
  • (14) H. Salman et al., Phys. Rev. Lett. 108, 238105 (2012)
  • (15) N. Brenner et al., Phys. Rev. E 92, 042713 (2015).
  • (16) M. Thattai and A. van Oudenaarden, PNAS 98, 8614 (2001).
  • (17) G. Hornung and N. Barkai, PLOS Comp. Biol. 4, 055 (2008).
  • (18) J. Paulsson, Nature 427, 415 (2004).