
Likelihood-based inference, identifiability and prediction using count data from lattice-based random walk models

Yihan Liu School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia. David J. Warne School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia. Matthew J. Simpson School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia.
Abstract

In vitro cell biology experiments are routinely used to characterize cell migration properties under various experimental conditions. These experiments can be interpreted using lattice-based random walk models to provide insight into underlying biological mechanisms, and continuum limit partial differential equation (PDE) descriptions of the stochastic models can be used to efficiently explore model properties instead of relying on repeated stochastic simulations. Working with efficient PDE models is of high interest for parameter estimation algorithms that typically require a large number of forward model simulations. Quantitative data from cell biology experiments usually involve non-negative cell counts in different regions of the experimental images, and it is not obvious how to relate finite, noisy count data to the solutions of continuous PDE models that correspond to noise-free density profiles. In this work we illustrate how to develop and implement likelihood-based methods for parameter estimation, parameter identifiability and model prediction for lattice-based models describing collective migration with an arbitrary number of interacting subpopulations. We implement a standard additive Gaussian measurement error model as well as a new physically-motivated multinomial measurement error model that relates noisy count data with the solution of continuous PDE models. Both measurement error models lead to similar outcomes for parameter estimation and parameter identifiability, whereas the standard additive Gaussian measurement error model leads to non-physical prediction outcomes. In contrast, the new multinomial measurement error model involves a lower computational overhead for parameter estimation and identifiability analysis, as well as leading to physically meaningful model predictions. Open access Julia software required to replicate the results in this study is available on GitHub.

1 Introduction

Two-dimensional cell migration assays, also called scratch assays, are routinely used to quantify how different populations of cells migrate [1]. These simple experiments, outlined in Figure 1, can be used to study potential drug treatments [2], different nutrient availability conditions [3], or interactions between different populations of cells [4]. Scratch assays involve growing uniform monolayers of cells on tissue culture plates before part of the monolayer is scratched away using a sharp-tipped instrument like a razor blade. The resulting recolonisation of the scratched region is then imaged, and the rate at which the scratch closes provides a simple measure of the ability of cells to migrate [1, 2]. Scratch assays conducted over relatively short periods of time (e.g. less than 24 hours) are often used to focus on cell migration [5], whereas experiments conducted over longer periods of time (e.g. greater than 24 hours) provide insight into the role of cell migration and cell proliferation combined [5].

One way of interpreting scratch assays is to implement a stochastic random walk model describing the motion of individual agents on a lattice [3, 6, 7, 8, 9]. The location of agents can be initialised to mimic the initial geometry of the scratch, and then agents are allowed to undergo a random walk, either biased or unbiased, with a simple exclusion mechanism to model crowding effects. Carefully choosing parameters in the random walk model to replicate experimental observations provides biological insight into the roles of directed and undirected migration among different subpopulations of cells within the experiment. Using a stochastic simulation model to interpret these experiments is advantageous because stochastic models allow us to keep track of individual cells within the population, as well as capturing the role of stochasticity in the experiments [10].

Refer to caption
Figure 1: (a)-(e) Scratch assay schematic. (a)-(b) Cell monolayers are grown in 6-well tissue culture plates, where each well within the tissue culture plate has a diameter of 35 mm. (c) The scratched monolayer is superimposed with the imaging field of view (solid blue). (d)-(e) Schematic showing population invasion over time within the field of view (solid blue). (f)-(g) Experimental images from a scratch assay using a single population of prostate cancer cells at times $t=0$ and $t=48$ hours. The red scale bar corresponds to 300 µm [11]. (h)-(i) Experimental images from a scratch assay using a more complicated population of melanoma cells composed of two subpopulations indicated by red and green fluorescence at times $t=0$ and $t=48$ hours. The red scale bar corresponds to 200 µm [12]. All images are reused with permission.

In the last decade there has been increasing interest in parameter estimation for stochastic models of cell migration experiments [8, 9, 10]. When working with a discrete model of a spatially explicit biological process, such as cell migration, it has become customary to implement some form of Approximate Bayesian Computation (e.g. ABC rejection, ABC-MCMC) [8, 9, 10, 13], which is often motivated by noting that stochastic models are not associated with a tractable likelihood function. A similar trend has emerged in the spatial ecology literature, where parameter estimation has been dominated by different ABC methods [14, 15, 16, 17, 18]. In this work we demonstrate how to take a different approach by using likelihood-based methods for parameter estimation, parameter identifiability and model prediction [19]. Noting that scratch assays are often quantified by reporting cell counts in different spatial regions of experimental images [11, 12, 20], we explore how to use approximate continuum limit PDE models as a computationally efficient process model to describe the key mechanistic features in a scratch assay [21]. We work in a general setting by considering the migration and crowding of a population composed of $S$ potentially distinct subpopulations of agents, indexed $s = 1, 2, 3, \ldots, S$.

The main focus of our work is to compare two approaches for relating the solution of the PDE model to the observed noisy count data: (i) a traditional additive Gaussian measurement error model; and (ii) a new physically-motivated multinomial measurement error model. It is worth noting that most applications of parameter estimation in biology involve working with an additive Gaussian measurement error model, and that this choice is often implemented without critically examining its implications [22, 23]. Using maximum likelihood estimation and the profile likelihood [24, 25, 26, 27, 28], we obtain best-fit parameter estimates and likelihood-based confidence intervals that allow us to examine the practical identifiability of parameter estimates derived from noisy count data. Importantly, we also use a likelihood-based method to map the variability in parameter confidence sets to the variability in model predictions [29]. While both the additive Gaussian and multinomial measurement error models perform similarly in terms of parameter estimation and parameter identifiability, we show that the standard additive Gaussian measurement error model leads to unphysical predictions of negative agent counts, or counts of agents that locally exceed the maximum carrying capacity of the lattice. In contrast, the new physically-motivated multinomial measurement error model is simpler and faster to implement computationally than the usual additive Gaussian measurement error model. Furthermore, the multinomial measurement error model leads to physically realistic model predictions while also avoiding the computational expense of a more standard ABC approach built on a far more expensive stochastic process model.

This manuscript is organised as follows. In Section 2 we introduce the stochastic simulation model and the associated continuum-limit PDE. We also provide a general description of how noisy count data can be generated using the stochastic simulation algorithm to mimic experimental measurements for problems involving both single populations and two subpopulations of distinct agents. The two measurement error models are introduced along with methods for maximum likelihood estimation, practical identifiability analysis using the profile likelihood and likelihood-based prediction. In Section 3 we present specific results for parameter estimation, parameter identifiability analysis and model prediction for a simple case of count data with one population of agents and a more realistic case of count data associated with a population composed of two subpopulations of agents. Finally, in Section 4 we summarise our findings and outline opportunities for future work.

2 Methods

2.1 Mathematical Models

In this study, we use two types of mathematical modelling frameworks. First, we use a stochastic lattice-based random walk model that will be described in Section 2.1.1. The motivation for using a computationally expensive stochastic model is that it provides a high-fidelity means of generating noisy data that is compatible with the kind of data generated experimentally. Second, we use a computationally efficient continuum limit description of the stochastic model. The continuum limit description takes the form of a system of PDEs that will be described in Section 2.1.2.

2.1.1 Stochastic model

Experimental images from scratch assays shown in Figure 1 motivate our stochastic model. Scratch assay experiments are routinely used in experimental cell biology to quantify cell migration. For example, these experiments are often used to explore how different surface coatings or different putative drugs impact the ability of cells to migrate. Scratch assays are performed by growing a uniform population of cells in a tissue culture plate. A sharp-tipped instrument is used to create a scratch within the uniform monolayer, and individual cells within the population undergo random migration, where the motility of individual cells is influenced by crowding effects. The net outcome of this random migration is that cells move into the scratched region, and this closes the scratched region over time.

We use a lattice-based stochastic model to simulate the migration of $S$ distinct populations of agents on a two-dimensional lattice with lattice spacing $\Delta$ [30]. In our simulations we think of each agent as representing an individual cell in the experiment. The size of the lattice is $W \times H$, where $W$ is the width of the lattice and $H$ is the height of the lattice. Each site is indexed $(i,j)$, and each site is associated with a unique Cartesian coordinate $(x_i, y_j)$, where $x_i = (i-1)\Delta$ with $i = 1, 2, 3, \ldots, I$ and $y_j = (j-1)\Delta$ with $j = 1, 2, 3, \ldots, J$.

Stochastic simulations are performed using a random sequential update method [31]. Simulations are initialised by placing a total of $N$ agents from the $S$ distinct agent subpopulations on the lattice. If $N_s$ is the number of agents in the $s$th subpopulation, then we have $N = \sum_{s=1}^{S} N_s$. To evolve the stochastic algorithm from time $t$ to time $t + \tau$, $N$ agents are selected at random, one at a time, with replacement, and given an opportunity to move [30]. When an agent belonging to subpopulation $s$ is chosen, that agent attempts to move with probability $P_s \in [0,1]$. The target site for potential motility events is chosen in the following way. The probability that a motile agent at site $(i,j)$ attempts to move to site $(i, j \pm 1)$ is $1/4$, whereas the probability that a motile agent at site $(i,j)$ attempts to move to site $(i \pm 1, j)$ is $(1 \pm \rho_s)/4$. Here, $\rho_s \in [-1,1]$ is a bias parameter that influences the left-right directional bias in the horizontal direction. Setting $\rho_s = 0$ means that there is no bias for agents in the $s$th subpopulation. Any potential motility event that would place an agent on an occupied site is aborted. This means that our stochastic simulation algorithm is closely related to an exclusion process [31]. Periodic boundary conditions are applied along the horizontal boundaries, and reflecting boundary conditions are imposed along the vertical boundaries. Time steps in the simulations are indexed by $k$ so that $t = k\tau$ for $k = 1, 2, 3, \ldots$. To keep our simulation framework general we always work with dimensionless simulations by setting $\Delta = \tau = 1$. These simulations can be re-scaled using appropriate length and time scales to accommodate cells of different sizes and motility rates [6].
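The random sequential update above can be sketched in a few lines of code. The following Python sketch (the paper's software is written in Julia; this is an illustrative re-implementation, not the authors' code) performs one time step with $\Delta = \tau = 1$, wrapping vertical moves periodically and aborting moves across the left-right boundaries to mimic reflecting conditions:

```python
import numpy as np

def simulation_step(occ, P, rho, rng):
    """One step (t -> t + tau) of the random sequential update with exclusion.

    occ : (I, J) integer array; 0 marks a vacant site, s >= 1 marks an agent
          from subpopulation s.  P[s] and rho[s] hold P_s and rho_s.
    """
    I, J = occ.shape
    agents = list(zip(*np.nonzero(occ)))  # positions of all N agents
    N = len(agents)
    for _ in range(N):                    # N selections, with replacement
        idx = rng.integers(N)
        i, j = agents[idx]
        s = occ[i, j]
        if rng.random() >= P[s]:
            continue                      # agent does not attempt to move
        u = rng.random()                  # choose a target site:
        if u < 0.25:                      # up/down with probability 1/4 each,
            ti, tj = i, (j + 1) % J       # periodic across horizontal boundaries
        elif u < 0.5:
            ti, tj = i, (j - 1) % J
        elif u < 0.5 + (1 + rho[s]) / 4:  # right with probability (1 + rho_s)/4
            ti, tj = i + 1, j
        else:                             # left with probability (1 - rho_s)/4
            ti, tj = i - 1, j
        if not (0 <= ti < I):
            continue                      # reflecting: moves off the lattice abort
        if occ[ti, tj] == 0:              # exclusion: occupied targets abort
            occ[i, j], occ[ti, tj] = 0, s
            agents[idx] = (ti, tj)
    return occ
```

Because aborted moves simply leave the lattice unchanged, the exclusion mechanism conserves the number of agents exactly.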

Although cells in the experiments in Figure 1 are free to move in any direction, scratching a uniform monolayer along the vertical direction means that the density of cells is independent of the vertical location within the image, and that the macroscopic density of cells varies with horizontal position. Therefore, these experimental images have been quantified by superimposing a series of uniformly-spaced columns across each image and counting the number of cells within each column [11, 12, 20]. This approach summarises the outcome of each experiment as a series of count data as a function of the horizontal location of each column. The most standard way of reporting the outcome of a scratch assay is to image the experiment once, at its conclusion, and report the cell counts determined from that single image. Alternatively, it is possible to repeat the imaging and counting process across a number of time points and report a time series of count data. Our modelling framework can be applied to either approach, but we will present our results for the more standard approach of working with data at one time point only, and in Section 4 we will explain how our methodology generalises to working with a time series of count data.

We will now describe how the stochastic random walk model can be used to generate count data in exactly the same way as count data are generated experimentally. In this work we always consider stochastic simulations that mimic the same geometry and design as in the experimental scratch assays. Therefore all simulations are initialised so that the expected occupancy status of lattice sites within the same column of the lattice is identical. Together with the boundary conditions, this ensures that the occupancy status of any lattice site is, on average, independent of vertical location at any time during the simulation.

The outcome of the stochastic simulations is the occupancy of each lattice site for each subpopulation as a function of time. To quantify this we let $C_s^{\star}(i,j,k)$ denote the occupancy of site $(i,j)$ for subpopulation $s$ after $k$ time steps. If site $(i,j)$ is occupied by an agent from subpopulation $s$ we have $C_s^{\star}(i,j,k) = 1$, otherwise $C_s^{\star}(i,j,k) = 0$. With this notation, the observed total agent count from subpopulation $s$ in column $i$ after $k$ time steps is

C_{s}^{\text{o}}(i,k)=\sum_{j=1}^{J}C_{s}^{\star}(i,j,k). \qquad (1)

These counts are bounded since $C_s^{\text{o}}(i,k) \in \{0, 1, \ldots, J\}$ for all $s = 1, 2, 3, \ldots, S$.
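Equation (1) amounts to summing occupancies down each column. A minimal Python helper (illustrative only; the array layout is our own convention) that recovers the count data from a lattice snapshot:

```python
import numpy as np

def column_counts(occ, S):
    """Equation (1): C_s^o(i, k) for each column i and subpopulation s.

    occ : (I, J) array with 0 for vacant sites and s in {1, ..., S} otherwise.
    Returns an (S, I) array; row s-1 holds the counts for subpopulation s.
    """
    return np.array([(occ == s).sum(axis=1) for s in range(1, S + 1)])
```

Each count is automatically bounded by $J$, the number of sites per column.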

To mimic the scratch assay experiment in Figure 1(f)-(g) with a single population we consider stochastic simulations with $S = 1$ on a $200 \times 20$ lattice. Initially the lattice is populated so that each site with $1 \leq i \leq 55$ and $146 \leq i \leq 200$ is fully occupied. Agents move without directional bias by setting $P_1 = 1$ and $\rho_1 = 0$. Results in Figure 2(a) show snapshots of the distribution of agents together with plots of the associated count data at the beginning and conclusion of the simulation. It is relevant to note that the count data at the end of the simulation, $C_1^{\text{o}}(i,k)$ for $i = 1, 2, 3, \ldots, I$, are noisy and exhibit large fluctuations across the columns of the lattice. These fluctuations are similar to those observed in count data generated by quantifying images from a scratch assay performed using a single population of cells, such as the experimental images in Figure 1(f)-(g).

We now explore how to use the stochastic model to mimic the scratch assay experiment in Figure 1(h)-(i) with two subpopulations of cells by initialising a $200 \times 20$ lattice with two subpopulations of agents, $S = 2$. Stochastic simulations are initialised by randomly populating 50\% of the sites in each column with $1 \leq i \leq 55$ and $146 \leq i \leq 200$ with agents from subpopulation 1. The remaining sites in these columns are occupied by agents from subpopulation 2. Simulations are performed with $P_1 = P_2 = 1$ and $\rho_1 = \rho_2 = 0$. For this choice of parameters both subpopulations have the same migration probability and bias parameter, which means that both subpopulations behave identically. Later, in Section 2.2, we will show that the same ideas apply when the subpopulations are characterised by different migration probabilities and bias parameters. Results in Figure 2(b) show snapshots of the distribution of agents and plots of the associated count data. Since we are working with two subpopulations of agents we are able to generate two sets of count data per column, $C_1^{\text{o}}(i,k)$ and $C_2^{\text{o}}(i,k)$ for $i = 1, 2, 3, \ldots, I$, after $k$ time steps. Again, the count data at the end of the simulation are noisy and exhibit large fluctuations.

Refer to caption
Figure 2: Motivating stochastic simulations on a $200 \times 20$ lattice illustrate agent snapshots and count data for simulations involving a single population and populations composed of two interacting subpopulations. Results in (a) correspond to a single population with $(P_1, \rho_1)^{\top} = (1, 0)^{\top}$. The top row shows simulation snapshots at $k = 0$ and $k = 5000$, with the lower row showing the corresponding count data, $C_1^{\text{o}}(i,k)$ for $i = 1, 2, 3, \ldots, 200$. Results in (b) correspond to a population composed of two interacting subpopulations with $(P_1, P_2, \rho_1, \rho_2)^{\top} = (1, 1, 0, 0)^{\top}$. The top row shows the simulation snapshots at $k = 0$ and $k = 5000$, where agents from subpopulation 1 are red and agents from subpopulation 2 are green. The middle row shows the corresponding count data for subpopulation 1, $C_1^{\text{o}}(i,k)$ for $i = 1, 2, 3, \ldots, 200$. The lower row shows the corresponding count data for subpopulation 2, $C_2^{\text{o}}(i,k)$ for $i = 1, 2, 3, \ldots, 200$.

2.1.2 Continuum model

The stochastic model presented in Section 2.1.1 is well-suited to mimicking noisy data from biological experiments. However, the stochastic model is computationally expensive, which means that it is not well-suited to parameter estimation, which can require a very large number of forward simulations. To make progress we will use a computationally efficient continuum limit approximation of the stochastic model. The continuum limit description of the stochastic model is given by a system of PDEs [30] that can be written in conservation form as

\frac{\partial c_{s}}{\partial t}=-\frac{\partial\mathcal{J}_{s}}{\partial x},\quad\text{for}\quad s=1,2,3,\dots,S, \qquad (2)

where $c_s(x,t)$ is the dimensionless density of subpopulation $s$ at location $x$ and time $t$, and the flux of subpopulation $s$ can be written as

\mathcal{J}_{s}=-D_{s}\left(1-\sum_{r=1}^{S}c_{r}\right)\frac{\partial c_{s}}{\partial x}-D_{s}c_{s}\frac{\partial}{\partial x}\left(\sum_{r=1}^{S}c_{r}\right)+v_{s}c_{s}\left(1-\sum_{r=1}^{S}c_{r}\right), \qquad (3)

where

D_{s}=\lim_{\Delta,\tau\to 0}\frac{P_{s}\Delta^{2}}{4\tau}\quad\text{and}\quad v_{s}=\lim_{\Delta,\tau\to 0}\frac{P_{s}\rho_{s}\Delta}{2\tau}. \qquad (4)

In this study we focus on applications involving either one or two subpopulations. For a single population, $S = 1$, the continuum model simplifies to

\frac{\partial c_{1}}{\partial t}=-\frac{\partial\mathcal{J}_{1}}{\partial x},\quad\text{with}\quad\mathcal{J}_{1}=-D_{1}\frac{\partial c_{1}}{\partial x}+v_{1}c_{1}\left(1-c_{1}\right), \qquad (5)

where

D_{1}=\lim_{\Delta,\tau\to 0}\frac{P_{1}\Delta^{2}}{4\tau}\quad\text{and}\quad v_{1}=\lim_{\Delta,\tau\to 0}\frac{P_{1}\rho_{1}\Delta}{2\tau}. \qquad (6)
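For the dimensionless lattice used here ($\Delta = \tau = 1$), evaluating the expressions inside the limits in Equation (6) at finite $\Delta$ and $\tau$ gives the continuum parameters quoted later, e.g. $(P_1, \rho_1) = (1, 0.1)$ maps to $(D_1, v_1) = (0.25, 0.05)$. A one-line sketch (our own helper, not part of the paper's software):

```python
def continuum_params(P, rho, delta=1.0, tau=1.0):
    """Evaluate D_s = P_s * delta^2 / (4 tau) and v_s = P_s * rho_s * delta / (2 tau),
    i.e. the expressions inside the limits in Equations (4) and (6)."""
    return P * delta**2 / (4 * tau), P * rho * delta / (2 * tau)
```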

For two subpopulations, $S = 2$, the continuum model simplifies to

\frac{\partial c_{1}}{\partial t}=-\frac{\partial\mathcal{J}_{1}}{\partial x}\quad\text{and}\quad\frac{\partial c_{2}}{\partial t}=-\frac{\partial\mathcal{J}_{2}}{\partial x}, \qquad (7)

with

\mathcal{J}_{1}=-D_{1}\left(1-c_{1}-c_{2}\right)\frac{\partial c_{1}}{\partial x}-D_{1}c_{1}\frac{\partial}{\partial x}\left(c_{1}+c_{2}\right)+v_{1}c_{1}\left(1-c_{1}-c_{2}\right), \qquad (8)
\mathcal{J}_{2}=-D_{2}\left(1-c_{1}-c_{2}\right)\frac{\partial c_{2}}{\partial x}-D_{2}c_{2}\frac{\partial}{\partial x}\left(c_{1}+c_{2}\right)+v_{2}c_{2}\left(1-c_{1}-c_{2}\right), \qquad (9)

where

D_{1}=\lim_{\Delta,\tau\to 0}\frac{P_{1}\Delta^{2}}{4\tau},\quad v_{1}=\lim_{\Delta,\tau\to 0}\frac{P_{1}\rho_{1}\Delta}{2\tau},\quad D_{2}=\lim_{\Delta,\tau\to 0}\frac{P_{2}\Delta^{2}}{4\tau}\quad\text{and}\quad v_{2}=\lim_{\Delta,\tau\to 0}\frac{P_{2}\rho_{2}\Delta}{2\tau}. \qquad (10)

An implicit assumption in the derivation of the continuum limit model is that we are dealing with a sufficiently large lattice such that fluctuations in count data are negligible. In this idealised scenario it is possible to relate count data to the solution of the continuum limit model by

c_{s}(x_{i},t)=\lim_{J\to\infty}\left[\frac{\sum_{j=1}^{J}C_{s}^{\star}(i,j,k)}{J}\right]=\lim_{J\to\infty}\left[\frac{C_{s}^{\text{o}}(i,k)}{J}\right], \qquad (11)

where $x_i$ is the central position of the $i$th column and $C_s^{\star}(i,j,k)$ indicates the occupancy of lattice site $(i,j)$ for subpopulation $s$ after $k$ time steps of the stochastic model. This relationship, which has been verified computationally [30], indicates that the solution of the continuum limit PDE model approaches the column-averaged density estimates obtained using count data only in the impractical situation where the height of the lattice (or the height of the experimental image) is sufficiently large. In practice, however, experiments always involve relatively small fields of view, and consequently the associated count data can involve relatively large fluctuations, similar to our simulation-derived count data in Figure 2 that are obtained using just $J = 20$. Under these practical conditions the relationship between the observed count data, $C_s^{\text{o}}(i,k)$ for $i = 1, 2, 3, \ldots, I$, and the solution of the continuum limit PDE, $c_s(x,t)$, is unclear. We will make progress in relating these two quantities by introducing two different types of measurement error models that account for the fluctuations in the count data in different ways [29].
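To make the forward model concrete, the single-population model, Equation (5), can be solved with a standard conservative finite-volume scheme. The sketch below is our own discretisation, not the authors' Julia solver; it assumes zero-flux boundaries and an explicit time step satisfying dt <= dx^2/(2 D), where dx and dt are numerical discretisation parameters distinct from the lattice $\Delta$ and $\tau$:

```python
import numpy as np

def solve_single_population(c0, D, v, dx, dt, nt):
    """Explicit conservative scheme for Equation (5):
    dc/dt = -dJ/dx,  with  J = -D dc/dx + v c (1 - c),
    and zero-flux conditions at both ends of the domain."""
    c = np.array(c0, dtype=float)
    for _ in range(nt):
        cm = 0.5 * (c[:-1] + c[1:])                  # interface densities
        flux = -D * np.diff(c) / dx + v * cm * (1 - cm)
        flux = np.concatenate(([0.0], flux, [0.0]))  # no-flux boundaries
        c -= dt * np.diff(flux) / dx
    return c
```

Because the update is written in terms of interface fluxes, the total density (and hence the total agent count it represents) is conserved exactly, which is the discrete analogue of the conservation form of Equation (2).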

2.2 Data

For this simulation study we use the stochastic model described in Section 2.1.1 to generate synthetic count data with similar properties to count data obtained in a scratch assay. With these data we will explore different options for efficient likelihood-based parameter estimation, parameter identifiability analysis and model prediction. We will focus on two different cases: (i) Case 1 involves generating data at a single time point for biased motility of a single homogeneous population, $S = 1$; and (ii) Case 2 involves generating data at a single time point for biased motility of a population composed of two subpopulations, $S = 2$. All data are denoted using the vector $y^{\text{o}}$. For Case 1, $y^{\text{o}}$ is a vector of length $I$; for Case 2, $y^{\text{o}}$ is a vector of length $2I$.

Case 1 involves a single population, $S = 1$, on a $200 \times 20$ lattice where the motion of agents is biased, with $(P_1, \rho_1)^{\top} = (1, 0.1)^{\top}$, which corresponds to $(D_1, v_1)^{\top} = (0.25, 0.05)^{\top}$ in the continuum model under idealised conditions where $J$ is sufficiently large. Since the motion is biased in the positive $x$-direction, the initial placement of agents involves fully occupying sites with $10 \leq i \leq 40$, as shown in Figure 3(a). The placement of agents towards the left boundary of the domain allows us to observe the biased motion in the positive $x$-direction, and data collected after $k = 300$ time steps give rise to the snapshot shown in Figure 3(b). Count data in Figure 3(c)-(d) are given at the beginning and end of the simulation, respectively.

Case 2 involves a population composed of two subpopulations, $S = 2$, on a $200 \times 20$ lattice with $(P_1, P_2, \rho_1, \rho_2)^{\top} = (0.8, 1, 0.2, 0.0)^{\top}$, corresponding to $(D_1, D_2, v_1, v_2)^{\top} = (0.2, 0.25, 0.08, 0.0)^{\top}$ in the continuum model under idealised conditions where $J$ is sufficiently large. The initial placement of agents involves placing a group of agents from subpopulation 1 in the central region of the lattice so that all sites with $80 \leq i \leq 120$ are completely occupied by agents from subpopulation 1. All remaining lattice sites are randomly occupied by agents from subpopulation 2 with probability 0.5, as shown in Figure 3(e). This means that agents from subpopulation 1 will attempt to spread out from their original location with a small bias in their motion in the positive $x$-direction. Since the motion of agents from subpopulation 1 is hindered by the presence of surrounding agents from subpopulation 2, we collect data after a larger number of time steps, $k = 1000$, which gives rise to the snapshot in Figure 3(f). Count data for subpopulation 1 are given in Figure 3(g)-(h), whereas count data for subpopulation 2 are given in Figure 3(i)-(j).

Refer to caption
Figure 3: Snapshots of agent distributions and associated count data from stochastic simulations. Parameters in the stochastic model are $(P_1, \rho_1)^{\top} = (1, 0.1)^{\top}$ for Case 1 and $(P_1, P_2, \rho_1, \rho_2)^{\top} = (0.8, 1, 0.2, 0)^{\top}$ for Case 2. (a)-(b) and (c)-(d) show snapshots and count data for Case 1. (e)-(f) show snapshots for Case 2, whereas (g)-(j) show count data for Case 2.

2.3 Likelihood function

Our count data, summarised in Figure 3, are deliberately generated with $J = 20$ so that the count data involve clear fluctuations, and so the mathematical relationship between the discrete count data and the solution of the corresponding continuum limit model, Equation (11), is not guaranteed to hold. Therefore, we introduce two measurement error models [29] that allow us to relate the solution of the continuum model to the noisy count data in a probabilistic sense. In the first instance we take a standard approach and work with an additive Gaussian measurement error model, because this is the most common approach in the biological physics and mathematical biology literature. Second, we introduce a multinomial measurement error model which, as we will show, is a natural choice for working with count data.

The standard approach for relating data to solutions of differential equations is to assume that the data $y^{\text{o}}$ are normally distributed about the solution of the differential equation of interest. Throughout this work we will refer to this approach as working with an additive Gaussian measurement error model. For example, if we consider Case 1, where we have $y^{\text{o}} = C_1^{\text{o}}(i,k)$ for $i = 1, 2, 3, \ldots, I$ and the solution of the continuum limit mathematical model is $c_1(x_i,t)$ for $i = 1, 2, 3, \ldots, I$, we make the standard assumption that $C_1^{\text{o}}(i,k)/J \sim \mathcal{N}(c_1(x_i,t), \sigma_1^2)$ for $i = 1, 2, 3, \ldots, I$. Since we deal with $S$ counts across the $I$ columns of the lattice, invoking a standard independence assumption gives a log-likelihood function that can be written as

\ell(\theta\,|\,y^{\text{o}})=\sum_{s=1}^{S}\sum_{i=1}^{I}\log\left[\phi\left(C_{s}^{\text{o}}(i,k)/J\mid c_{s}(x_{i},t),\sigma_{s}^{2}\right)\right], \qquad (12)

where $\phi(x \mid \mu, \sigma^2)$ is the Gaussian probability density function with mean $\mu$ and variance $\sigma^2$, and $\theta$ refers to the parameters in the continuum limit PDE model.

Working with an additive Gaussian measurement error model means that we have introduced additional parameters in the noise model, $\sigma_s^2$ for $s = 1, 2, 3, \ldots, S$. Throughout this work we will present results generated using the additive Gaussian noise model in terms of the standard deviations $\sigma_s$ for $s = 1, 2, 3, \ldots, S$. As we will demonstrate, data will be used to estimate the parameters in the continuum limit description of the stochastic model as well as simultaneously estimating the parameters in the measurement error model. For example, in Case 1, where $S = 1$, we have two parameters in the continuum limit model, $D_1$ and $v_1$, and one parameter in the measurement error model, $\sigma_1$. When working with the additive Gaussian measurement error model we include both sets of parameters in the vector $\theta$, which means that with $S = 1$ we have $\theta = (D_1, v_1, \sigma_1)^{\top}$, whereas with $S = 2$ we have $\theta = (D_1, D_2, v_1, v_2, \sigma_1, \sigma_2)^{\top}$.
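Equation (12) is straightforward to evaluate numerically. A sketch in Python (our own array layout: counts and model densities stored as $S \times I$ arrays; not the authors' implementation):

```python
import numpy as np

def gaussian_loglik(counts, c_model, sigma, J):
    """Equation (12): additive Gaussian log-likelihood for scaled counts.

    counts  : (S, I) observed counts C_s^o(i, k).
    c_model : (S, I) continuum-limit densities c_s(x_i, t).
    sigma   : length-S standard deviations sigma_s.
    """
    resid = np.asarray(counts, float) / J - np.asarray(c_model, float)
    sig = np.asarray(sigma, float)[:, None]
    return float(np.sum(-0.5 * np.log(2 * np.pi * sig**2)
                        - 0.5 * (resid / sig) ** 2))
```

Note that nothing in this expression constrains the implied counts to lie in $[0, J]$, which is the root of the unphysical predictions discussed later.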

Instead of simply assuming that the data are normally distributed about the solution of the appropriate continuum limit model, we can take advantage of the structure of our count data and derive a more mechanistically-motivated measurement error model that we will refer to as a multinomial measurement error model. For example, when dealing with a single population, $S = 1$, our count data simply consist of counts of agents within each column of the lattice, which implicitly defines a count of the vacant lattice sites in each column, meaning that our data take the form $C_1^{\text{o}}(i,k)$ and $E^{\text{o}}(i,k) = J - C_1^{\text{o}}(i,k)$ for $i = 1, 2, 3, \ldots, I$. If we interpret $C_1^{\text{o}}(i,k)$ at any given spatial location as a finite number of independent samples from some underlying stochastic process, then these samples approximate a continuous measurement of agent occupancy within that column. These data can be considered as samples from a distribution where the expected occupancy fraction is given by the solution of the continuum limit model, $c_1(x_i,t)$. Together, these measurements imply a binomial likelihood,

\mathcal{L}(\theta\mid y^{\text{o}}(i))\propto c_{1}(x_{i},t)^{C_{1}^{\text{o}}(i,k)}\left(1-c_{1}(x_{i},t)\right)^{E^{\text{o}}(i,k)}. (13)

Assuming all column counts, i=1,2,3,\ldots,I, are conditionally independent given the continuum-limit solution, and taking the logarithm of the likelihood function, gives a log-likelihood function based on the binomial measurement error model for a single population with S=1 that can be written as

\ell(\theta\mid y^{\text{o}})\propto\sum_{i=1}^{I}\log\left[c_{1}(x_{i},t)^{C_{1}^{\text{o}}(i,k)}\left(1-c_{1}(x_{i},t)\right)^{E^{\text{o}}(i,k)}\right]. (14)
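To make the evaluation of this binomial log-likelihood concrete, the following minimal Python sketch (an illustrative reimplementation; the software released with this work is written in Julia) evaluates Equation (14) up to an additive constant:

```python
import numpy as np

def binomial_loglik(c_model, counts, J):
    """Binomial log-likelihood of Eq. (14), up to an additive constant.

    c_model : model occupancy fractions c_1(x_i, t) for each column i
    counts  : observed agent counts C_1^o(i, k) per column
    J       : lattice height (number of sites per column)
    """
    c = np.clip(np.asarray(c_model, dtype=float), 1e-12, 1.0 - 1e-12)
    counts = np.asarray(counts)
    empty = J - counts  # vacant-site counts E^o(i, k)
    return float(np.sum(counts * np.log(c) + empty * np.log(1.0 - c)))
```

Clipping the model density away from 0 and 1 guards against taking the logarithm of zero when the continuum solution predicts empty or fully occupied columns.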

Similar arguments lead to a multinomial-based log-likelihood function for S>1. Under these conditions our count data consists of counts of agents from each subpopulation within each column, C_{s}^{\text{o}}(i,k) for i=1,2,3,\ldots,I and s=1,2,3,\ldots,S. The geometry of the lattice means that the count of vacant sites is given by E^{\text{o}}(i,k)=J-\sum_{s=1}^{S}C_{s}^{\text{o}}(i,k) for i=1,2,3,\ldots,I. Again, we can interpret C_{s}^{\text{o}}(i,k) at any given spatial location as being generated by a finite number of independent samples from some underlying stochastic process, and these samples can be thought of as approximating the expected, noise-free measure of agent occupancy for the sth subpopulation within that column. These data can be considered as samples from a distribution where the expected occupancy fraction is given by the solution of the continuum limit model, c_{s}(x_{i},t). Following the same arguments as above for S=1, these measurements imply a multinomial log-likelihood function which can be written as

\ell(\theta\mid y^{\text{o}})\propto\sum_{i=1}^{I}\log\left[\prod_{s=1}^{S}c_{s}(x_{i},t)^{C_{s}^{\text{o}}(i,k)}\left(1-\sum_{s=1}^{S}c_{s}(x_{i},t)\right)^{E^{\text{o}}(i,k)}\right]. (15)

This multinomial log-likelihood function reduces to the binomial log-likelihood function derived previously when S=1, and here we have written the data using the compact vector notation y^{\text{o}}. In practice, when we evaluate the multinomial log-likelihood function we simply set the proportionality constant to unity. Unlike the additive Gaussian measurement error model, working with the multinomial measurement error model does not involve introducing any new parameters: when we work with S=1 we have \theta=(D_{1},v_{1})^{\top}, whereas S=2 involves \theta=(D_{1},D_{2},v_{1},v_{2})^{\top}.
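The multinomial case in Equation (15) can be sketched in Python in the same illustrative fashion; the array layout below (S rows by I columns) is an assumed convention for this sketch, not the layout used in the released Julia code:

```python
import numpy as np

def multinomial_loglik(c_model, counts, J):
    """Multinomial log-likelihood of Eq. (15), up to an additive constant.

    c_model : (S, I) array of model densities c_s(x_i, t)
    counts  : (S, I) array of observed counts C_s^o(i, k)
    J       : lattice height (number of sites per column)
    """
    c = np.clip(np.asarray(c_model, dtype=float), 1e-12, None)
    counts = np.asarray(counts)
    vac_frac = np.clip(1.0 - c.sum(axis=0), 1e-12, None)  # vacant fraction per column
    empty = J - counts.sum(axis=0)                         # vacant counts E^o(i, k)
    return float(np.sum(counts * np.log(c)) + np.sum(empty * np.log(vac_frac)))
```

With S=1 this reduces to the binomial log-likelihood of Equation (14).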

2.4 Likelihood-based estimation and identifiability

Given a set of count data y^{\text{o}} together with an appropriate process model and measurement error model, with all unknown parameters summarised in the vector \theta, we have access to a log-likelihood function, \ell(\theta\,|\,y^{\text{o}}). Consequently, the choice of parameters that gives the best match to the data is \hat{\theta}, which maximises the log-likelihood, giving rise to the maximum likelihood estimate (MLE),

\hat{\theta}=\underset{\theta}{\arg\sup}\left[\ell(\theta\,|\,y^{\text{o}})\right]. (16)

Throughout this work we always use numerical optimization to calculate \hat{\theta}. In particular, all numerical optimization calculations are performed using the Nelder-Mead algorithm with simple bound constraints [32] implemented within the NLopt routine [33] in Julia. In general we find that our numerical optimization calculations are insensitive to the initial estimate and that the algorithm converges using default stopping criteria.
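The optimization step can be illustrated with SciPy's Nelder-Mead implementation in place of NLopt; the quadratic surrogate below merely stands in for the PDE-based negative log-likelihood, whose solver is not reproduced here, and the "true" minimiser (0.25, 0.05) is chosen to match the idealised Case 1 parameters:

```python
import numpy as np
from scipy.optimize import minimize

# Surrogate negative log-likelihood with its minimum at (D1, v1) = (0.25, 0.05).
# In practice this function would call the PDE solver; the stand-in only
# illustrates the bounded Nelder-Mead optimization call.
def neg_loglik(theta):
    D1, v1 = theta
    return (D1 - 0.25) ** 2 + 100.0 * (v1 - 0.05) ** 2

res = minimize(neg_loglik, x0=[0.4, 0.01], method="Nelder-Mead",
               bounds=[(0.0, 1.0), (-0.5, 0.5)],
               options={"xatol": 1e-8, "fatol": 1e-8})
theta_hat = res.x  # MLE estimate, here close to (0.25, 0.05)
```

As in the Julia implementation, the result is insensitive to the initial estimate provided it lies within the bound constraints.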

Given our estimate \hat{\theta}, we then use the profile likelihood to quantify the precision of our parameter estimates by examining the curvature of the log-likelihood function [24, 25, 26, 27, 28]. To compare our results against asymptotic thresholds we work with a normalised log-likelihood function

\bar{\ell}(\theta\mid y^{\text{o}})=\ell(\theta\mid y^{\text{o}})-\ell(\hat{\theta}\mid y^{\text{o}}), (17)

so that \bar{\ell}(\hat{\theta}\mid y^{\text{o}})=0. To proceed we partition the full parameter \theta into interest parameters \psi and nuisance parameters \omega, so that \theta=(\psi,\omega). In this study we restrict our attention to univariate profile likelihood functions, which means that our interest parameter is always a single parameter. For a set of data y^{\text{o}}, the profile log-likelihood for the interest parameter \psi given the partition (\psi,\omega) is

\bar{\ell}_{p}(\psi\mid y^{\text{o}})=\sup_{\omega\mid\psi}\left[\bar{\ell}(\psi,\omega\mid y^{\text{o}})\right], (18)

which implicitly defines a function \omega^{*}(\psi) of optimal values of \omega for each value of \psi. As for calculating the MLE, all profile likelihood functions in this work are calculated using the Nelder-Mead numerical optimization algorithm with the same bound constraints used to calculate \hat{\theta}. As a concrete example, if we work with the continuum limit model for a single population using the additive Gaussian measurement error model we have \theta=(D_{1},v_{1},\sigma_{1})^{\top}. In this scenario we can compute three univariate profile likelihood functions by choosing the interest parameter to be: (i) the diffusivity, \psi=D_{1} and \omega=(v_{1},\sigma_{1})^{\top}; (ii) the drift velocity, \psi=v_{1} and \omega=(D_{1},\sigma_{1})^{\top}; and (iii) the standard deviation in the measurement error model, \psi=\sigma_{1} and \omega=(D_{1},v_{1})^{\top}. For all univariate profile likelihood calculations we work with a uniformly-discretized interval that contains the MLE. For example, if our interest parameter is D_{1} we identify an interval, D_{1}^{-}<\hat{D}_{1}<D_{1}^{+}, and evaluate \bar{\ell}_{p} across a uniform discretization of the interval to give a simple univariate function that we call the profile likelihood. The degree of curvature of the profile likelihood function provides information about the practical identifiability of the interest parameter. For example, if the profile log-likelihood function is flat then the interest parameter is non-identifiable. In contrast, when the profile log-likelihood function is curved, the degree of curvature indicates inferential precision and we may determine likelihood-based confidence intervals as the region where \bar{\ell}_{p}\geq-\Delta_{q,n}/2, where \Delta_{q,n} denotes the qth quantile of the \chi^{2} distribution with n degrees of freedom, which we take to be the relevant number of unknown parameters [34].
For example, identifying the interval where \bar{\ell}_{p}\geq-\Delta_{0.95,1}/2\approx-1.9207 allows us to identify the asymptotic 95% confidence interval [24, 29] for a univariate profile likelihood function with one free parameter. This procedure gives us a simple way of identifying the width of the interval [D_{1}^{-},D_{1}^{+}]. A simple approach to determine suitable choices of D_{1}^{-} and D_{1}^{+} is to compute \bar{\ell}_{p} across some initial estimate of the interval; if the profile log-likelihood does not intersect the relevant asymptotic threshold, we simply continue to compute the profile log-likelihood function on a wider interval. In contrast, if we compute the profile log-likelihood on some interval and find that this interval is very wide compared to the relevant asymptotic threshold, we simply re-compute the profile log-likelihood function on a narrower interval. Since these computations involve numerical optimisation calculations where we always have a reasonably good starting estimate for the iterative calculations (e.g. \hat{\theta}), we find that these computations are efficient.
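The profiling procedure can be sketched as follows, again with a hypothetical quadratic surrogate standing in for the PDE-based log-likelihood in \theta=(\psi,\omega); the grid evaluation, warm-starting of the nuisance optimization, and thresholding at -1.9207 mirror the procedure described above:

```python
import numpy as np
from scipy.optimize import minimize

# Surrogate log-likelihood in (psi, omega); quadratic and peaked at (0.25, 0.05).
def loglik(psi, omega):
    return -((psi - 0.25) ** 2 / 0.001 + (omega - 0.05) ** 2 / 0.001) / 2

psi_grid = np.linspace(0.15, 0.35, 101)
profile = []
omega_start = 0.05  # warm-start nuisance optimization from the previous optimum
for psi in psi_grid:
    res = minimize(lambda w, p=psi: -loglik(p, w[0]), x0=[omega_start],
                   method="Nelder-Mead")
    omega_start = res.x[0]
    profile.append(-res.fun)
profile = np.array(profile)
profile -= profile.max()  # normalise so the profile peaks at zero

# Asymptotic 95% CI: psi values where the normalised profile exceeds -1.9207
inside = psi_grid[profile >= -1.9207]
ci = (inside.min(), inside.max())
```

If `inside` reaches either end of `psi_grid`, the grid is widened and the profile re-computed, exactly as described in the text.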

2.5 Likelihood-based prediction

The methods outlined so far focus on taking noisy count data, finding MLE parameter estimates, and using an asymptotic log-likelihood threshold to define a parameter confidence set, so that we can understand how variability in count data corresponds to variability in parameter estimates. In this section we explore how variability in parameter estimates maps to variability in model predictions. In particular, our focus is to compare likelihood-based predictions using the traditional additive Gaussian noise model with those using the multinomial noise model, and to explore the differences between these approaches.

Given a set of count data, y^{\text{o}}, and an associated normalised log-likelihood function, \bar{\ell}(\theta\mid y^{\text{o}}), we proceed by identifying the asymptotic 95% log-likelihood threshold, -\Delta_{0.95,n}/2, where n is the number of free parameters. Using rejection sampling we obtain M parameter sets that lie within the 95% log-likelihood threshold, so that we have \theta_{m} satisfying \bar{\ell}(\theta_{m}\,|\,y^{\text{o}})\geq-\Delta_{0.95,n}/2 for m=1,2,3,\ldots,M. For each \theta_{m} within the confidence set we solve the relevant continuum model to give M solutions that correspond to the mean trajectory prediction as defined by the relevant measurement error model [29]. Since all measurement error models considered in this work involve a parametric distribution, we can also construct various data realization predictions [29] by considering measures of the width of that parametric distribution about the mean. For simplicity we consider the 5% and 95% quantiles of the relevant distributions as a simple measure of distribution width. To make a prediction using the single population model we write c_{1}^{(m)}(x,t) for the solution of Equation (5) using the mth parameter sample from within the parameter confidence set. To plot the solutions we discretize the spatial location so that we have c_{1}^{(m)}(x_{r},t), where x_{r}=(r-1)/10 for r=1,2,3,\ldots,2001. For each of the mean trajectories we now have an interval c^{(m)}_{1,0.05}(x_{r},t)<c_{1}^{(m)}(x_{r},t)<c^{(m)}_{1,0.95}(x_{r},t) for r=1,2,3,\ldots,2001, where the lower bound corresponds to the 5% quantile of the measurement error model and the upper bound corresponds to the 95% quantile of the measurement error model.
For a fixed parameter \theta_{m}, the 5% and 95% quantiles of the additive Gaussian measurement error model are constants determined by \sigma_{1}. For the multinomial measurement error model the 5% and 95% quantiles are no longer constants but vary with the mean of the distribution. For S=1 the multinomial distribution reduces to the binomial distribution, and hence the bounds at location x_{r} correspond to the 5% and 95% quantiles of the binomial distribution with J trials and probability of success c_{1}(x_{r},t). For the multinomial measurement error model with S>1 the bounds for subpopulation s at location x_{r} correspond to the 5% and 95% quantiles of the sth marginal distribution, again with J trials and probability of success c_{s}(x_{r},t). After calculating these M intervals around each mean trajectory we take their union to define \left[\min\left(c^{(m)}_{1,0.05}(x_{r},t)\right),\max\left(c^{(m)}_{1,0.95}(x_{r},t)\right)\right], where the minimum of c^{(m)}_{1,0.05}(x_{r},t) and the maximum of c^{(m)}_{1,0.95}(x_{r},t) are computed at each fixed x_{r} across the set of M parameter values, m=1,2,3,\ldots,M. This simple computational procedure gives a complete picture of the prediction interval, which can be interpreted as a form of tolerance interval [35] accounting for both parameter uncertainty and data realization uncertainty [21].
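The construction of this union interval under the binomial (S=1) measurement error model can be sketched as follows; the Gaussian-shaped density profiles are hypothetical stand-ins for the M continuum solutions c_{1}^{(m)}(x_{r},t) sampled from within the likelihood threshold:

```python
import numpy as np
from scipy.stats import binom

J = 20                              # lattice height (trials per column)
x = np.linspace(0, 200, 201)        # spatial discretization
rng = np.random.default_rng(1)
M = 50                              # number of sampled parameter sets
lower = np.full_like(x, np.inf)
upper = np.full_like(x, -np.inf)
for _ in range(M):
    width = rng.uniform(18.0, 22.0)               # sampled "parameter" perturbation
    c = 0.8 * np.exp(-((x - 50.0) / width) ** 2)  # stand-in mean trajectory
    lo = binom.ppf(0.05, J, c) / J  # 5% quantile of counts, as a fraction
    hi = binom.ppf(0.95, J, c) / J  # 95% quantile of counts, as a fraction
    lower = np.minimum(lower, lo)   # running union across the M samples
    upper = np.maximum(upper, hi)
```

By construction the resulting bounds are non-negative and never exceed unity, which is the mechanism behind the physically meaningful prediction intervals reported below.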

3 Results and Discussion

Case 1

Working with the additive Gaussian measurement error model, the MLE for the count data in Case 1 is \hat{\theta}=(\hat{D}_{1},\hat{v}_{1},\hat{\sigma}_{1})^{\top}=(0.2613,0.0480,0.0598)^{\top}. The estimates of the diffusivity and drift velocity differ slightly from the idealised result of D_{1}=0.25 and v_{1}=0.05 for perfect noise-free data with sufficiently large J. We attribute these differences to the role of fluctuations in the count data that are obtained with just J=20. When working with the multinomial measurement error model the MLE for the same count data is \hat{\theta}=(\hat{D}_{1},\hat{v}_{1})^{\top}=(0.2509,0.0454)^{\top}. Overall, comparing the MLEs indicates that the different measurement error models lead to small differences in the estimates of D_{1} and v_{1}. Here, the only practical difference is that working with the additive Gaussian measurement error model involves the additional computational effort required to estimate \hat{\sigma}_{1}.

The broad similarity between our results for the two measurement error models can be partly explained by noting that the multinomial error model simplifies to a binomial error model when S=1, and that the binomial distribution can be approximated by a Gaussian distribution under well-known conditions. The central limit theorem indicates that C/J\to\mathcal{N}(c,c(1-c)/J) when both Jc and J(1-c) are sufficiently large. In Case 1 we have N_{1}=620 agents on a lattice of size 200\times 20, which means the average occupancy across all columns of the lattice is \bar{c}_{1}=620/(200\times 20)=0.155. On average this is equivalent to having 2 or 3 agents per column across the lattice. We will make progress with this estimate of \bar{c}_{1} despite the fact that there is very high variability in the number of agents per column in Figure 3(d), where we see that many columns contain zero agents whereas one column contains 19 agents. Under this clearly questionable approximation, the normal approximation of the binomial distribution implies that \sigma_{1}=\sqrt{\bar{c}_{1}(1-\bar{c}_{1})/J}, or \sigma_{1}\approx 0.08 for our data. Despite these assumptions, this estimate of \sigma_{1} is not dramatically different from the estimate \hat{\sigma}_{1}=0.0598 obtained by using the additive Gaussian measurement error model directly. This is consistent with our observation that the parameter estimates are not very sensitive to the choice of measurement error model.
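The arithmetic behind this back-of-the-envelope estimate is simply:

```python
import math

c_bar = 620 / (200 * 20)  # average occupancy fraction: 0.155
J = 20                    # column height (trials per column)
sigma_1 = math.sqrt(c_bar * (1 - c_bar) / J)
# sigma_1 ≈ 0.0809, i.e. roughly 0.08 as quoted in the text
```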

We now consider the identifiability of our parameter estimates by constructing univariate profile likelihood functions for each parameter, as shown in Figure 4. Regardless of which measurement error model is used, all univariate profiles are well formed about a single distinct peak at the MLE, indicating that all parameters in both log-likelihood functions are practically identifiable. For all parameters we are able to identify an interval where \bar{\ell}\geq-\Delta_{0.95,1}/2\approx-1.9207 to define a 95% asymptotic confidence interval [34]. For example, with the additive Gaussian measurement error model we have \hat{D}_{1}=0.2613 and the 95% confidence interval is 0.2345\leq D_{1}\leq 0.2913, indicating that our estimate is reasonably precise. Since all profile likelihood functions are fairly narrow we conclude that all parameters are reasonably well identified by this data, and again the main difference between the additive Gaussian and multinomial measurement error models is the additional computational effort required to compute the profile likelihood for \sigma_{1} when working with the additive Gaussian measurement error model.

Figure 4: Parameter estimation and parameter identifiability for Case 1. Results in the left column correspond to the additive Gaussian measurement error model, whereas results in the right column correspond to the multinomial measurement error model. Each subfigure shows a univariate profile likelihood function superimposed with a vertical green line at the MLE and a horizontal orange line at the 95% asymptotic threshold. For the additive Gaussian measurement error model the MLE and 95% confidence intervals are: \hat{D}_{1}=0.2613\,\left[0.2345,\,0.2913\right]; \hat{v}_{1}=0.0480\,\left[0.0411,\,0.0553\right]; and \hat{\sigma}_{1}=0.0598\,\left[0.0544,\,0.0662\right]. For the multinomial measurement error model we obtain \hat{D}_{1}=0.2509\,\left[0.2154,\,0.2950\right] and \hat{v}_{1}=0.0454\,\left[0.0355,\,0.0550\right].

Results in Figure 5 illustrate the prediction intervals for data realizations for Case 1, and here we see a significant difference between the two measurement error models. Results in Figure 5(a) for the additive Gaussian measurement error model show that the prediction interval is a smooth interval about the solution evaluated at the MLE, and importantly, in regions where the expected counts are zero (i.e. i\gtrapprox 80), the lower bound of the prediction interval is negative. This result is unhelpful because count data are, by definition, non-negative. This is an important limitation of working with the standard additive Gaussian measurement error model, since the prediction interval does not necessarily obey the physical constraint that C_{1}^{\textrm{o}}(i,k)\in[0,J]. Repeating this exercise using a different initial arrangement of agents on the lattice, or stopping the simulation at an earlier time, can lead to prediction intervals that exceed J. In contrast, results in Figure 5(b) for the multinomial measurement error model show that the prediction interval is not smooth, reflecting the fact that count data are non-negative integers. Importantly, the prediction intervals derived using the multinomial measurement error model obey the physical constraint C_{1}^{\textrm{o}}(i,k)\in[0,J]. Therefore, unlike our results in Figure 4 where the details of the measurement error model made little difference beyond computational efficiency, in terms of prediction the standard approach can lead to non-physical outcomes, whereas the physically-motivated multinomial measurement error model is more computationally efficient to work with as well as leading to physically meaningful prediction intervals.
For the particular results in Figure 5, 94% of the count data lie within the prediction interval for the additive Gaussian measurement error model and 97.5% fall within the prediction interval for the multinomial measurement error model.

Figure 5: Likelihood-based model predictions for data realizations in Case 1. (a) Prediction interval for the additive Gaussian measurement error model; (b) prediction interval for the multinomial measurement error model. Each subfigure shows the MLE prediction as a dashed red curve, together with a shaded red prediction interval constructed using M=500 parameter values sampled from within the 95% confidence set for each log-likelihood function. The 95% confidence set is associated with \bar{\ell}\geq-\Delta_{0.95,3}/2\approx-3.9074 for the additive Gaussian measurement error model and \bar{\ell}\geq-\Delta_{0.95,2}/2\approx-2.996 for the multinomial measurement error model.

We will now repeat this exercise for Case 2 to explore the consequences within the context of working with a population composed of multiple interacting subpopulations.

Case 2

With the additive Gaussian measurement error model the MLE for the count data in Case 2 is \hat{\theta}=(\hat{D}_{1},\hat{D}_{2},\hat{v}_{1},\hat{v}_{2},\hat{\sigma}_{1},\hat{\sigma}_{2})^{\top}=(0.1774,0.0971,0.0717,0.0024,0.0639,0.1094)^{\top}. As for Case 1, these estimates of the diffusivities and drift velocities differ from the idealised results. This difference arises because we are working with noisy data with J=20, as well as from the approximate nature of the mean-field PDE model. When working with the multinomial measurement error model the MLE is \hat{\theta}=(\hat{D}_{1},\hat{D}_{2},\hat{v}_{1},\hat{v}_{2})^{\top}=(0.1664,0.0850,0.0678,0.0023)^{\top}. Comparing the MLEs indicates that the different measurement error models lead only to small differences, and the main practical difference is that working with the additive Gaussian measurement error model involves the additional computational effort required to estimate both \sigma_{1} and \sigma_{2}.

The identifiability of our parameter estimates is explored by constructing univariate profile likelihood functions, shown in Figure 6. Again, regardless of which measurement error model is used, all univariate profiles are well formed about a single distinct peak at the MLE, indicating that all parameters in both log-likelihood functions are practically identifiable with reasonably narrow 95% asymptotic confidence intervals. Again, the main difference between the additive Gaussian and multinomial measurement error models is the additional computational effort required to compute the profile likelihoods for \sigma_{1} and \sigma_{2} when working with the additive Gaussian measurement error model.

Figure 6: Parameter estimation and parameter identifiability for Case 2. Results in the left column correspond to the additive Gaussian measurement error model, whereas results in the right column correspond to the multinomial measurement error model. Each subfigure shows two univariate profile likelihood functions: solid lines correspond to subpopulation s=1 and dashed lines correspond to subpopulation s=2. Each profile likelihood function is superimposed with a vertical green line at the MLE and a horizontal orange line at the 95% asymptotic threshold. For the additive Gaussian measurement error model the MLE and 95% confidence intervals are: \hat{D}_{1}=0.1774\,\left[0.1488,\,0.2089\right]; \hat{D}_{2}=0.0971\,\left[0.0721,\,0.1299\right]; \hat{v}_{1}=0.0717\,\left[0.0671,\,0.0764\right]; \hat{v}_{2}=0.0024\,\left[0.0006,\,0.0043\right]; \hat{\sigma}_{1}=0.0639\,\left[0.0580,\,0.0707\right]; and \hat{\sigma}_{2}=0.1094\,\left[0.0995,\,0.1210\right]. For the multinomial measurement error model the MLE and 95% confidence intervals are: \hat{D}_{1}=0.1664\,\left[0.1382,\,0.2001\right]; \hat{D}_{2}=0.0850\,\left[0.0585,\,0.1211\right]; \hat{v}_{1}=0.0678\,\left[0.0624,\,0.0734\right]; and \hat{v}_{2}=0.0023\,\left[0.0005,\,0.0041\right].

Results in Figure 7 illustrate the prediction intervals for data realizations for Case 2, and again we see a significant difference between the two measurement error models. Results in Figure 7(a) and (c) for the additive Gaussian measurement error model show smooth prediction intervals about the MLE solution, and in regions of low density the lower bound of the prediction interval is negative, which is physically impossible. In contrast, results in Figure 7(b) and (d) for the multinomial measurement error model lead to physically realistic, non-smooth predictions that reflect the fact that count data are non-negative integers obeying the physical constraints C_{1}^{\textrm{o}}(i,k)\in[0,J] and C_{2}^{\textrm{o}}(i,k)\in[0,J]. For the particular results in Figure 7, 94.5% of the count data lie within the prediction interval for the additive Gaussian measurement error model and 96.75% fall within the prediction interval for the multinomial measurement error model.

Figure 7: Likelihood-based model predictions for data realizations in Case 2. (a) and (c) Prediction intervals for the additive Gaussian measurement error model for s=1 and s=2, respectively. (b) and (d) Prediction intervals for the multinomial measurement error model for s=1 and s=2, respectively. Each subfigure shows the MLE prediction as a dashed red or green curve, superimposed with a shaded red or green prediction interval constructed using M=500 parameter values sampled from within the 95% confidence set for each log-likelihood function. The 95% confidence set is associated with \bar{\ell}\geq-\Delta_{0.95,6}/2\approx-6.2958 for the additive Gaussian measurement error model and \bar{\ell}\geq-\Delta_{0.95,4}/2\approx-4.7438 for the multinomial measurement error model.

4 Conclusion and Future Work

In this work we have presented a likelihood-based framework that can be used for parameter estimation, parameter identifiability analysis and model prediction for lattice-based random walk models. In particular, we focus on lattice-based models of biased migration of S potentially distinct subpopulations of agents, which is relevant for interpreting cell biology experiments that involve multiple interacting subpopulations of cells. Many recent investigations of parameter estimation for these kinds of stochastic models have focused on various approximate Bayesian computation methods based on repeated stochastic simulations, and these approaches are often justified because tractable likelihood functions are not always available. Here we take a different approach and note that count data from stochastic simulation algorithms can be described using several likelihood models, and working with a continuum limit PDE approximation means that we have access to a computationally efficient PDE model together with a simple likelihood model that facilitates maximum likelihood estimation, profile likelihood-based analysis of parameter identifiability, and likelihood-based model prediction. Our approach can lead to significant computational improvements: for example, generating the stochastic simulation data in Figure 3(a)-(d) for Case 1 takes approximately ten times longer than solving the corresponding PDE model parameterised by \hat{\theta} with \delta=0.5, and similar computational improvements hold for Case 2.

Many studies focusing on parameter estimation in the mathematical biology and biophysics literature work with an additive Gaussian measurement error model to relate noisy observations to the solution of a continuous differential equation. Here we follow the same approach, noting that it leads to additional parameters, \sigma_{s} for s=1,2,3,\ldots,S, to be estimated along with the parameters in the process model. Alternatively, we note that count data obtained from the stochastic simulation model naturally lead to a multinomial measurement error model that can also be implemented, and the main contribution of our study is to compare the performance of likelihood-based parameter estimation, identifiability analysis and model prediction for the two measurement error models. In general we show that the two approaches lead to very little difference in terms of parameter estimates and profile likelihood functions; however, the outcomes in terms of model prediction are very different. In particular, the standard approach of working with an additive Gaussian measurement error model can lead to non-physical model predictions where the lower bound of the prediction interval is negative or the upper bound exceeds the maximum packing density of agents per column in the random walk model. In contrast, the multinomial measurement error model leads to physically meaningful prediction intervals that obey the physical constraints imposed by the lattice-based modelling framework.

The results presented in this study correspond to a stochastic model describing the motion of S subpopulations of agents on a lattice, where the movement of agents in each subpopulation can be biased or unbiased, and this modelling framework can be described by a system of S nonlinear PDEs for the non-dimensional densities c_{s}(x,t) for s=1,2,3,\ldots,S. Our framework can also be applied to a broader set of mechanisms and more complicated PDE models. For example, incorporating a birth-death process into the stochastic model [36] leads to a more complicated stochastic model where N_{s} can change over time, and the mean-field PDE description involves a source term,

\frac{\partial c_{s}}{\partial t}=-\frac{\partial\mathcal{J}_{s}}{\partial x}+\mathcal{G}_{s},\quad\textrm{for}\quad s=1,2,3,\dots,S, (19)

where c_{s}(x,t) is the dimensionless density of subpopulation s at location x and time t. Here \mathcal{J}_{s} denotes the flux of subpopulation s, which remains the same as in (3) provided that the discrete motility mechanism remains the same, whereas \mathcal{G}_{s} denotes a source/sink term for subpopulation s that models the impact of the birth-death process in the stochastic model, and we note that this source term is related to a generalised logistic growth term. Our approach for relating noisy count data from the stochastic model to the solution of a mean-field PDE model remains unchanged regardless of whether we incorporate a birth-death process. Clearly it is also possible to generalise the motility mechanism beyond the simple biased motility mechanisms explored in this work. Some interesting generalisations would be to incorporate cell pulling/pushing [37, 38] or cell swapping mechanisms [39] into the discrete model. Since the mean-field descriptions of such generalized mechanisms have been previously derived and validated [37, 38, 39], it is straightforward to incorporate these more detailed mechanisms into the same framework outlined in this study. Some caution is warranted, however, as incorporating additional mechanisms into the discrete model runs the risk of encountering identifiability issues as the size of the parameter space increases. Therefore, it is prudent to always construct the univariate profile likelihood functions to ensure that parameter estimates are sufficiently precise before biological mechanisms are associated with parameter estimates.
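As an illustration only (the precise form of \mathcal{G}_{s} depends on the discrete birth-death mechanism, and \lambda_{s} is a hypothetical proliferation rate not used elsewhere in this work), a source term of generalised logistic type could take the form

```latex
\mathcal{G}_{s}=\lambda_{s}\,c_{s}\left(1-\sum_{r=1}^{S}c_{r}\right),
```

so that net growth of subpopulation s is suppressed as the total occupancy approaches the maximum packing density of the lattice.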

A final comment is that we chose to present our case studies in the typical scenario where data are obtained at one time point only. This simplification was motivated by the fact that it is fairly common in simple cell biology experiments to image the experiment at one fixed time point. It is conceptually and computationally straightforward to generalise our approach to work with data collected at K time points, k=1,2,3,\ldots,K, simply by summing over these additional time points in the log-likelihood function. For example, the multinomial log-likelihood function for count data collected at K time points generalises to

\ell(\theta\mid y^{\text{o}})\propto\sum_{i=1}^{I}\sum_{k=1}^{K}\log\left[\prod_{s=1}^{S}c_{s}(x_{i},t_{k})^{C_{s}^{\text{o}}(i,k)}\left(1-\sum_{s=1}^{S}c_{s}(x_{i},t_{k})\right)^{E^{\text{o}}(i,k)}\right], (20)

where t_{k}=k\tau, and the data vector y^{\text{o}} now collects the count data across all K time points. With this modestly generalised log-likelihood function we can employ the same numerical optimization procedures to calculate \hat{\theta} and the associated profile likelihood functions, and the approach for calculating likelihood-based model predictions remains the same. A similar generalisation can also be implemented for the additive Gaussian measurement error model with data collected at K time points.
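As a minimal sketch (again an illustrative Python reimplementation rather than the released Julia code, with an assumed K by S by I array layout), Equation (20) amounts to summing the single-time multinomial log-likelihood over the K time points:

```python
import numpy as np

def multinomial_loglik_kt(c_model, counts, J):
    """Multinomial log-likelihood of Eq. (20), up to an additive constant.

    c_model : (K, S, I) array of model densities c_s(x_i, t_k)
    counts  : (K, S, I) array of observed counts C_s^o(i, k)
    J       : lattice height (trials per column)
    """
    c = np.clip(np.asarray(c_model, dtype=float), 1e-12, None)
    counts = np.asarray(counts)
    vac_frac = np.clip(1.0 - c.sum(axis=1), 1e-12, None)  # (K, I) vacant fractions
    empty = J - counts.sum(axis=1)                         # E^o(i, k) per time point
    return float(np.sum(counts * np.log(c)) + np.sum(empty * np.log(vac_frac)))
```

Setting K=1 recovers the single-time multinomial log-likelihood of Equation (15).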

An important feature of the modelling presented in this study is that we consider a two-dimensional stochastic model of a scratch assay where the density of agents at the beginning of the simulation is independent of vertical location and remains, on average, independent of vertical location during the simulation. This simplification is consistent with scratch assay design, and under these conditions we work with one-dimensional noisy count data, C_{s}^{\textrm{o}}(i,k), that depends upon the horizontal position x_{i}=(i-1)\Delta and time t=k\tau. For other applications we may be interested in genuinely two-dimensional count data, and it is straightforward to generate and interrogate this kind of data using the same stochastic simulation model by performing a suite of Q identically-prepared realisations. If C_{s}^{q}(i,j,k) is a binary variable that denotes the occupancy status for subpopulation s at site (i,j) after k time steps in the qth identically-prepared realisation, then C_{s}^{\textrm{o}}(i,j,k)=\sum_{q=1}^{Q}C_{s}^{q}(i,j,k) is the noisy count data generated by considering Q identically-prepared realisations. Following the same ideas outlined in this work, we can solve a two-dimensional mean-field PDE model to give c_{s}(x,y,t) and use either an additive Gaussian measurement error model or a multinomial measurement error model to calculate \hat{\theta}, the associated profile likelihood-based confidence intervals, and likelihood-based model predictions. The key difference is that in the current work the count data is associated with J trials, where the expected column occupancy fraction is c_{s}(i,k) and J is the height of the lattice.
For applications where agent density varies with both vertical and horizontal position, the count data is associated with Q trials, where the expected site occupancy fraction is c_{s}(i,j,k) and Q is the number of identically-prepared realisations of the stochastic model. The same concepts apply to three-dimensional applications.
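The aggregation of binary occupancies into count data can be sketched as follows. Here the occupancy array is filled with Bernoulli noise purely to illustrate the summation over Q realisations; in practice it would be produced by the stochastic simulations, and all variable names and parameter values are our own.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, I, J = 50, 10, 10                      # realisations, lattice width, lattice height

# binary occupancy C_s^q(i, j, k) for one subpopulation at one time step,
# simulated here as independent Bernoulli(0.3) noise for illustration only
occupancy = rng.random((Q, I, J)) < 0.3

# noisy count data: sum over the Q identically-prepared realisations
counts = occupancy.sum(axis=0)            # shape (I, J), integer values in 0..Q
```

Each entry of `counts` is then naturally interpreted as the outcome of Q trials with success probability given by the two-dimensional PDE solution at that site.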

Appendix A Numerical methods

In this work we consider two different continuum models. For problems involving one population we generate numerical solutions of Equation (5), c_{1}(x,t). For problems involving two subpopulations we generate numerical solutions of Equation (7), c_{1}(x,t) and c_{2}(x,t). We will now describe how we obtain these numerical solutions. In brief, we use a method-of-lines approach where we spatially discretize the PDE models using standard finite difference approximations for the spatial terms, and then solve the resulting system of coupled ordinary differential equations (ODEs) in time using the DifferentialEquations.jl package, taking advantage of automatic time stepping and temporal truncation error control [40].

To solve Equation (5) on 0<x<L we discretize the spatial terms on a uniform grid with grid spacing \delta>0, such that c_{1}(x_{n},t)=c_{1,n} for n=1,2,3,\ldots,N, where x_{n}=(n-1)\delta. At x=0 and x=L, corresponding to mesh points x_{1} and x_{N}, respectively, we impose zero-flux boundary conditions \mathcal{J}_{1}=\mathcal{J}_{N}=0 to give

dc1,1dt\displaystyle\dfrac{\text{d}c_{1,1}}{\text{d}t} =D1δ2(c1,2c1,1)v1δ[c1,2(1c1,2)],\displaystyle=\dfrac{D_{1}}{\delta^{2}}\left(c_{1,2}-c_{1,1}\right)-\dfrac{v_{1}}{\delta}\left[c_{1,2}\left(1-c_{1,2}\right)\right], (21)
dc1,ndt\displaystyle\dfrac{\text{d}c_{1,n}}{\text{d}t} =D1δ2(c1,n+12c1,n+c1,n1)\displaystyle=\dfrac{D_{1}}{\delta^{2}}\left(c_{1,n+1}-2c_{1,n}+c_{1,n-1}\right)
v12δ[c1,n+1(1c1,n+1)c1,n1(1c1,n1)],for n=2,3,4,,N1,\displaystyle-\dfrac{v_{1}}{2\delta}\left[c_{1,n+1}(1-c_{1,n+1})-c_{1,n-1}(1-c_{1,n-1})\right],\quad\text{for }n=2,3,4,\ldots,N-1, (22)
dc1,Ndt\displaystyle\frac{\text{d}c_{1,N}}{\text{d}t} =D1δ2(c1,Nc1,N1)+v1δ[c1,N1(1c1,N1)].\displaystyle=-\dfrac{D_{1}}{\delta^{2}}\left(c_{1,N}-c_{1,N-1}\right)+\dfrac{v_{1}}{\delta}\left[c_{1,N-1}\left(1-c_{1,N-1}\right)\right]. (23)

We solve the system of ODEs given by Equations (21)–(23) using Heun’s method with adaptive time-stepping [40]. All results presented in this work correspond to δ=0.5\delta=0.5. To ensure our results are grid independent we considered a number of test cases and checked that numerical results with δ=0.5\delta=0.5 were indistinguishable from results with δ=0.1\delta=0.1.
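The method-of-lines system in Equations (21)–(23) can be sketched as follows. The study's implementation uses Julia with Heun's method via DifferentialEquations.jl; this illustration instead uses Python with SciPy's adaptive low-order Runge-Kutta solver (`RK23`), and the parameter values and initial condition are our own, chosen for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, c, D1, v1, delta):
    """Right-hand side of the method-of-lines system, Equations (21)-(23)."""
    dc = np.empty_like(c)
    # zero-flux boundary nodes, Equations (21) and (23)
    dc[0] = D1/delta**2 * (c[1] - c[0]) - v1/delta * c[1] * (1 - c[1])
    dc[-1] = -D1/delta**2 * (c[-1] - c[-2]) + v1/delta * c[-2] * (1 - c[-2])
    # interior nodes, Equation (22): central differences for diffusion and advection
    dc[1:-1] = (D1/delta**2 * (c[2:] - 2*c[1:-1] + c[:-2])
                - v1/(2*delta) * (c[2:]*(1 - c[2:]) - c[:-2]*(1 - c[:-2])))
    return dc

# illustrative (not the paper's) parameter values and initial condition
L, delta = 20.0, 0.5
x = np.arange(0.0, L + delta, delta)
c0 = np.where(x < L/2, 0.8, 0.0)          # step initial density
sol = solve_ivp(rhs, (0.0, 5.0), c0, args=(0.25, 0.05, delta),
                method="RK23", rtol=1e-6, atol=1e-8)
```

Because the boundary terms enforce zero flux, the total density \sum_n c_{1,n} is conserved to within the truncation error, which provides a simple sanity check on any implementation.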

To solve Equation (7) on 0<x<L we discretize the spatial terms on a uniform grid with grid spacing \delta>0, such that c_{1}(x_{n},t)=c_{1,n} and c_{2}(x_{n},t)=c_{2,n} for n=1,2,3,\ldots,N. Zero-flux boundary conditions are imposed at x=0 and x=L, leading to

dc1,1dt\displaystyle\dfrac{\text{d}c_{1,1}}{\text{d}t} =D1δ2[(1c1,2c2,2)(c1,2c1,1)+c1,2(c1,2+c2,2c1,1c2,1)]\displaystyle=\dfrac{D_{1}}{\delta^{2}}\left[\left(1-c_{1,2}-c_{2,2}\right)\left(c_{1,2}-c_{1,1}\right)+c_{1,2}\left(c_{1,2}+c_{2,2}-c_{1,1}-c_{2,1}\right)\right]
v1δ[c1,2(1c1,2c2,2)],\displaystyle-\dfrac{v_{1}}{\delta}\left[c_{1,2}\left(1-c_{1,2}-c_{2,2}\right)\right],
dc2,1dt\displaystyle\dfrac{\text{d}c_{2,1}}{\text{d}t} =D2δ2[(1c1,2c2,2)(c2,2c2,1)+c2,2(c1,2+c2,2c1,1c2,1)]\displaystyle=\dfrac{D_{2}}{\delta^{2}}\left[\left(1-c_{1,2}-c_{2,2}\right)\left(c_{2,2}-c_{2,1}\right)+c_{2,2}\left(c_{1,2}+c_{2,2}-c_{1,1}-c_{2,1}\right)\right]
v2δ[c2,2(1c1,2c2,2)],\displaystyle-\dfrac{v_{2}}{\delta}\left[c_{2,2}\left(1-c_{1,2}-c_{2,2}\right)\right], (24)
dc1,ndt\displaystyle\dfrac{\text{d}c_{1,n}}{\text{d}t} =D12δ2[(2c1,n+1c2,n+1c1,nc2,n)(c1,n+1c1,n)]\displaystyle=\dfrac{D_{1}}{2\delta^{2}}\left[\left(2-c_{1,n+1}-c_{2,n+1}-c_{1,n}-c_{2,n}\right)\left(c_{1,n+1}-c_{1,n}\right)\right]
+D12δ2[(c1,n+1+c1,n)(c1,n+1+c2,n+1c1,nc2,n)]\displaystyle+\dfrac{D_{1}}{2\delta^{2}}\left[\left(c_{1,n+1}+c_{1,n}\right)\left(c_{1,n+1}+c_{2,n+1}-c_{1,n}-c_{2,n}\right)\right]
D12δ2[(2c1,nc2,nc1,n1c2,n1)(c1,nc1,n1)]\displaystyle-\dfrac{D_{1}}{2\delta^{2}}\left[\left(2-c_{1,n}-c_{2,n}-c_{1,n-1}-c_{2,n-1}\right)\left(c_{1,n}-c_{1,n-1}\right)\right]
D12δ2[(c1,n+c1,n1)(c1,n+c2,nc1,n1c2,n1)]\displaystyle-\dfrac{D_{1}}{2\delta^{2}}\left[\left(c_{1,n}+c_{1,n-1}\right)\left(c_{1,n}+c_{2,n}-c_{1,n-1}-c_{2,n-1}\right)\right]
v12δ[c1,n+1(1c1,n+1c2,n+1)c1,n1(1c1,n1c2,n1)]\displaystyle-\dfrac{v_{1}}{2\delta}\left[c_{1,n+1}\left(1-c_{1,n+1}-c_{2,n+1}\right)-c_{1,n-1}\left(1-c_{1,n-1}-c_{2,n-1}\right)\right]
dc2,ndt\displaystyle\dfrac{\text{d}c_{2,n}}{\text{d}t} =D22δ2[(2c1,n+1c2,n+1c1,nc2,n)(c2,n+1c2,n)]\displaystyle=\dfrac{D_{2}}{2\delta^{2}}\left[\left(2-c_{1,n+1}-c_{2,n+1}-c_{1,n}-c_{2,n}\right)\left(c_{2,n+1}-c_{2,n}\right)\right]
+D22δ2[(c2,n+1+c2,n)(c1,n+1+c2,n+1c1,nc2,n)]\displaystyle+\dfrac{D_{2}}{2\delta^{2}}\left[\left(c_{2,n+1}+c_{2,n}\right)\left(c_{1,n+1}+c_{2,n+1}-c_{1,n}-c_{2,n}\right)\right]
D22δ2[(2c1,nc2,nc1,n1c2,n1)(c2,nc2,n1)]\displaystyle-\dfrac{D_{2}}{2\delta^{2}}\left[\left(2-c_{1,n}-c_{2,n}-c_{1,n-1}-c_{2,n-1}\right)\left(c_{2,n}-c_{2,n-1}\right)\right]
D22δ2[(c2,n+c2,n1)(c1,n+c2,nc1,n1c2,n1)]\displaystyle-\dfrac{D_{2}}{2\delta^{2}}\left[\left(c_{2,n}+c_{2,n-1}\right)\left(c_{1,n}+c_{2,n}-c_{1,n-1}-c_{2,n-1}\right)\right]
v22δ[c2,n+1(1c1,n+1c2,n+1)c2,n1(1c1,n1c2,n1)],\displaystyle-\dfrac{v_{2}}{2\delta}\left[c_{2,n+1}\left(1-c_{1,n+1}-c_{2,n+1}\right)-c_{2,n-1}\left(1-c_{1,n-1}-c_{2,n-1}\right)\right],
for n=2,3,4,,N1,\displaystyle\text{for }n=2,3,4,\ldots,N-1, (25)
dc1,Ndt\displaystyle\dfrac{\text{d}c_{1,N}}{\text{d}t} =D1δ2[(1c1,N1c2,N1)(c1,Nc1,N1)]\displaystyle=-\dfrac{D_{1}}{\delta^{2}}\left[\left(1-c_{1,N-1}-c_{2,N-1}\right)\left(c_{1,N}-c_{1,N-1}\right)\right]
D1δ2[c1,N1(c1,N+c2,Nc1,N1c2,N1)]\displaystyle-\dfrac{D_{1}}{\delta^{2}}\left[c_{1,N-1}\left(c_{1,N}+c_{2,N}-c_{1,N-1}-c_{2,N-1}\right)\right]
+v1δ[c1,N1(1c1,N1c2,N1)],\displaystyle+\dfrac{v_{1}}{\delta}\left[c_{1,N-1}\left(1-c_{1,N-1}-c_{2,N-1}\right)\right],
dc2,Ndt\displaystyle\dfrac{\text{d}c_{2,N}}{\text{d}t} =D2δ2[(1c1,N1c2,N1)(c2,Nc2,N1)]\displaystyle=-\dfrac{D_{2}}{\delta^{2}}\left[\left(1-c_{1,N-1}-c_{2,N-1}\right)\left(c_{2,N}-c_{2,N-1}\right)\right]
D2δ2[c2,N1(c1,N+c2,Nc1,N1c2,N1)]\displaystyle-\dfrac{D_{2}}{\delta^{2}}\left[c_{2,N-1}\left(c_{1,N}+c_{2,N}-c_{1,N-1}-c_{2,N-1}\right)\right]
+v2δ[c2,N1(1c1,N1c2,N1)].\displaystyle+\dfrac{v_{2}}{\delta}\left[c_{2,N-1}\left(1-c_{1,N-1}-c_{2,N-1}\right)\right]. (26)

We solve the system of ODEs given by Equations (24)–(26) using Heun’s method with adaptive time-stepping [40] with δ=0.5\delta=0.5. Again, we tested these results to ensure they are grid independent.
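The two-subpopulation system in Equations (24)–(26) can be sketched in the same way. Again, this is an illustrative Python translation of the discretisation (the study's software is Julia with Heun's method), with our own parameter values and initial conditions; the state vector stacks the two densities.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs_two(t, u, D1, D2, v1, v2, delta):
    """Right-hand side of the method-of-lines system, Equations (24)-(26).
    The state u stacks the two densities: u = [c1, c2]."""
    N = u.size // 2
    c1, c2 = u[:N], u[N:]
    T = c1 + c2                                        # total density c1 + c2
    d1, d2 = np.empty(N), np.empty(N)

    def interior(D, v, cs):
        # Equation (25) for subpopulation with density cs
        return (D/(2*delta**2) * ((2 - T[2:] - T[1:-1])*(cs[2:] - cs[1:-1])
                                  + (cs[2:] + cs[1:-1])*(T[2:] - T[1:-1])
                                  - (2 - T[1:-1] - T[:-2])*(cs[1:-1] - cs[:-2])
                                  - (cs[1:-1] + cs[:-2])*(T[1:-1] - T[:-2]))
                - v/(2*delta) * (cs[2:]*(1 - T[2:]) - cs[:-2]*(1 - T[:-2])))

    for d, D, v, cs in ((d1, D1, v1, c1), (d2, D2, v2, c2)):
        d[1:-1] = interior(D, v, cs)
        # zero-flux boundary nodes, Equations (24) and (26)
        d[0] = (D/delta**2 * ((1 - T[1])*(cs[1] - cs[0]) + cs[1]*(T[1] - T[0]))
                - v/delta * cs[1]*(1 - T[1]))
        d[-1] = (-D/delta**2 * ((1 - T[-2])*(cs[-1] - cs[-2]) + cs[-2]*(T[-1] - T[-2]))
                 + v/delta * cs[-2]*(1 - T[-2]))
    return np.concatenate([d1, d2])

# illustrative parameters: two subpopulations approaching from opposite sides
L_dom, delta = 20.0, 0.5
x = np.arange(0.0, L_dom + delta, delta)
c1_0 = np.where(x < L_dom/3, 0.4, 0.0)
c2_0 = np.where(x > 2*L_dom/3, 0.4, 0.0)
sol = solve_ivp(rhs_two, (0.0, 5.0), np.concatenate([c1_0, c2_0]),
                args=(0.25, 0.5, 0.05, -0.05, delta),
                method="RK23", rtol=1e-6, atol=1e-8)
```

As in the single-population case, the zero-flux boundary conditions conserve the total density of each subpopulation to within the truncation error, which is a useful check when validating the coupled cross-diffusion terms.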

References

  • [1] Chun-Chi Liang, Ann Y Park and Jun-Lin Guan “In vitro scratch assay: A convenient and inexpensive method for analysis of cell migration in vitro” In Nature Protocols 2, 2007, pp. 329–333 DOI: 10.1038/nprot.2007.30
  • [2] Ayman Grada et al. “Research techniques made simple: Analysis of collective cell migration using the wound healing assay” In Journal of Investigative Dermatology 137, 2017, pp. e11–e16 DOI: 10.1016/j.jid.2016.11.020
  • [3] Evgeniy Khain et al. “Collective behavior of brain tumor cells: The role of hypoxia” In Physical Review E 83, 2011 DOI: 10.1103/PhysRevE.83.031920
  • [4] Carles Falcó, Daniel J. Cohen, José A. Carrillo and Ruth E. Baker “Quantifying tissue growth, shape and collision via continuum models and Bayesian inference” In Journal of the Royal Society Interface 20, 2023 DOI: 10.1098/rsif.2023.0184
  • [5] Stuart T Johnston, Matthew J Simpson and D.L.S. McElwain “How much information can be obtained from tracking the position of the leading edge in a scratch assay?” In Journal of the Royal Society Interface 11, 2014 DOI: 10.1098/rsif.2014.0325
  • [6] M.J. Simpson, K.A. Landman and B.D. Hughes “Cell invasion with proliferation mechanisms motivated by time-lapse data” In Physica A: Statistical Mechanics and its Applications 389, 2010, pp. 3779–3790 DOI: 10.1016/j.physa.2010.05.020
  • [7] Thomas Callaghan, Evgeniy Khain, Leonard M. Sander and Robert M. Ziff “A stochastic model for wound healing” In Journal of Statistical Physics 122, 2006, pp. 909–924 DOI: 10.1007/s10955-006-9022-1
  • [8] R.J.H. Ross et al. “Using approximate Bayesian computation to quantify cell–cell adhesion parameters in a cell migratory process” In NPJ Systems Biology and Applications 3, 2017 DOI: 10.1038/s41540-017-0010-7
  • [9] A.P. Browning, S.W. McCue and M.J. Simpson “A Bayesian computational approach to explore the optimal duration of a cell proliferation assay” In Bulletin of Mathematical Biology 79, 2017, pp. 1888–1906 DOI: 10.1007/s11538-017-0311-4
  • [10] B.N. Vo, C.C. Drovandi, A.N. Pettitt and M.J. Simpson “Quantifying uncertainty in parameter estimates for stochastic models of collective cell spreading using approximate Bayesian computation” In Mathematical Biosciences 263, 2015, pp. 133–142 DOI: 10.1016/j.mbs.2015.02.010
  • [11] W. Jin et al. “Reproducibility of scratch assays is affected by the initial degree of confluence: Experiments, modelling and model selection” In Journal of Theoretical Biology 390, 2016, pp. 136–145 DOI: 10.1016/j.jtbi.2015.10.040
  • [12] S.T. Vittadello et al. “Mathematical models for cell migration with real-time cell cycle dynamics” In Biophysical Journal 114, 2018, pp. 1241–1253 DOI: 10.1016/j.bpj.2017.12.041
  • [13] Brenda N. Vo, Christopher C. Drovandi, Anthony N. Pettitt and Graeme J. Pettet “Melanoma cell colony expansion parameters revealed by approximate Bayesian computation” In PLOS Computational Biology 11, 2015 DOI: 10.1371/journal.pcbi.1004635
  • [14] Nikolai W F Bode “Parameter calibration in crowd simulation models using approximate Bayesian computation” In arXiv preprint, 2020 DOI: 10.48550/arXiv.2001.10330
  • [15] Grant Hamilton et al. “Bayesian estimation of recent migration rates after a spatial expansion” In Genetics 170, 2005, pp. 409–417 DOI: 10.1534/genetics.104.034199
  • [16] Rune Rasmussen and Grant Hamilton “An approximate Bayesian computation approach for estimating parameters of complex environmental processes in a cellular automata” In Environmental Modelling & Software 29, 2012, pp. 1–10 DOI: 10.1016/j.envsoft.2011.10.005
  • [17] Elske van der Vaart, Mark A. Beaumont, Alice S.A. Johnston and Richard M. Sibly “Calibration and evaluation of individual-based models using Approximate Bayesian Computation” In Ecological Modelling 312, 2015, pp. 182–190 DOI: 10.1016/j.ecolmodel.2015.05.020
  • [18] Elske van der Vaart, Dennis Prangle and Richard M. Sibly “Taking error into account when fitting models using Approximate Bayesian Computation” In Ecological Applications 28, 2018, pp. 267–274 DOI: 10.1002/eap.1656
  • [19] Larry Wasserman “All of Statistics: A Concise Course in Statistical Inference” Springer, 2004
  • [20] Alessio Gnerucci, Paola Faraoni, Elettra Sereni and Francesco Ranaldi “Scratch assay microscopy: A reaction–diffusion equation approach for common instruments and data” In Mathematical Biosciences 330, 2020 DOI: 10.1016/j.mbs.2020.108482
  • [21] M.J. Simpson, R.J. Murphy and O.J. Maclaren “Modelling count data with partial differential equation models in biology” In Journal of Theoretical Biology 580, 2024 DOI: 10.1016/j.jtbi.2024.111732
  • [22] Keegan E Hines, Thomas R Middendorf and Richard W Aldrich “Determination of parameter identifiability in nonlinear biophysical models: A Bayesian approach” In Journal of General Physiology 143, 2014, pp. 401–416 DOI: 10.1085/jgp.201311116
  • [23] Matthew J Simpson, Ruth E Baker, Sean T Vittadello and Oliver J Maclaren “Practical parameter identifiability for spatio-temporal models of cell invasion” In Journal of the Royal Society Interface 17, 2020 DOI: 10.1098/rsif.2020.0055
  • [24] Y. Pawitan “In All Likelihood: Statistical Modelling and Inference Using Likelihood” Oxford University Press, 2001
  • [25] A. Raue et al. “Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood” In Bioinformatics 25, 2009, pp. 1923–1929 DOI: 10.1093/bioinformatics/btp358
  • [26] A. Raue, C. Kreutz, F.. Theis and J. Timmer “Joining forces of Bayesian and frequentist methodology: A study for inference in the presence of non-identifiability” In Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 371, 2013 DOI: 10.1098/rsta.2011.0544
  • [27] A.F. Villaverde, N. Tsiantis and J.R. Banga “Full observability and estimation of unknown inputs, states and parameters of nonlinear biological models” In Journal of the Royal Society Interface 16, 2019 DOI: 10.1098/rsif.2019.0043
  • [28] A.F. Villaverde et al. “A protocol for dynamic model calibration” In Briefings in Bioinformatics 23, 2022 DOI: 10.1093/bib/bbab387
  • [29] R.J. Murphy, O.J. Maclaren and M.J. Simpson “Implementing measurement error models with mechanistic mathematical models in a likelihood-based framework for estimation, identifiability analysis and prediction in the life sciences” In Journal of the Royal Society Interface 21, 2024 DOI: 10.1098/rsif.2023.0402
  • [30] M.J. Simpson, K.A. Landman and B.D. Hughes “Multi-species simple exclusion processes” In Physica A: Statistical Mechanics and its Applications 388, 2009, pp. 399–406 DOI: 10.1016/j.physa.2008.10.038
  • [31] D. Chowdhury, A. Schadschneider and K. Nishinari “Physics of transport and traffic phenomena in biology: From molecular motors and cells to organisms” In Physics of Life Reviews 2, 2005, pp. 318–352 DOI: 10.1016/j.plrev.2005.09.001
  • [32] J.A. Nelder and R. Mead “A simplex method for function minimization” In The Computer Journal 7, 1965, pp. 308–313 DOI: 10.1093/comjnl/7.4.308
  • [33] S.G. Johnson “The NLopt module for Julia”, 2018 URL: https://github.com/JuliaOpt/NLopt.jl
  • [34] P. Royston “Profile likelihood for estimation and confidence intervals” In The Stata Journal 7, 2007, pp. 376–387 DOI: 10.1177/1536867x0700700305
  • [35] S.B. Vardeman “What about the other intervals?” In The American Statistician 46, 1992, pp. 193–197 DOI: 10.1080/00031305.1992.10475882
  • [36] Ruth E. Baker and Matthew J. Simpson “Correcting mean-field approximations for birth-death-movement processes” In Physical Review E 82, 2010 DOI: 10.1103/PhysRevE.82.041905
  • [37] G. Chappelle and C.A. Yates “Pulling in models of cell migration” In Physical Review E 99, 2019 DOI: 10.1103/PhysRevE.99.062413
  • [38] C.A. Yates, A. Parker and R.E. Baker “Incorporating pushing in exclusion-process models of cell migration” In Physical Review E 91, 2015 DOI: 10.1103/PhysRevE.91.052711
  • [39] S.R. Noureen, J. Owen, R.L. Mort and C.A. Yates “Swapping in lattice-based cell migration models” In Physical Review E 107, 2023 DOI: 10.1103/PhysRevE.107.044402
  • [40] C Rackauckas and Q Nie “DifferentialEquations.jl – A performant and feature-rich ecosystem for solving differential equations in Julia” In Journal of Open Research Software 5, 2017 DOI: 10.5334/jors.151