
Global Consensus Monte Carlo

Lewis J. Rendell 
Department of Statistics, University of Warwick
and
Adam M. Johansen
Department of Statistics, University of Warwick
and
Anthony Lee
School of Mathematics, University of Bristol
and
Nick Whiteley
School of Mathematics, University of Bristol
The authors gratefully acknowledge the support of The Alan Turing Institute under the EPSRC grant EP/N510129/1, the Lloyd’s Register Foundation–Alan Turing Institute Programme on Data-Centric Engineering, and the EPSRC under grants EP/M508184/1, EP/R034710/1 and EP/T004134/1.
Abstract

To conduct Bayesian inference with large data sets, it is often convenient or necessary to distribute the data across multiple machines. We consider a likelihood function expressed as a product of terms, each associated with a subset of the data. Inspired by global variable consensus optimisation, we introduce an instrumental hierarchical model associating auxiliary statistical parameters with each term; these are conditionally independent given the top-level parameters. One of these top-level parameters controls the unconditional strength of association between the auxiliary parameters. This model leads to a distributed MCMC algorithm on an extended state space yielding approximations of posterior expectations. A trade-off between computational tractability and fidelity to the original model can be controlled by changing the association strength in the instrumental model. We further propose the use of a sequential Monte Carlo (SMC) sampler with a sequence of association strengths, allowing both the automatic determination of appropriate strengths and the application of a bias correction technique. In contrast to similar distributed Monte Carlo algorithms, this approach requires few distributional assumptions. The performance of the algorithms is illustrated with a number of simulated examples.


Keywords: Bayesian inference, Distributed inference, Markov chain Monte Carlo, Sequential Monte Carlo

1 Introduction

Large data sets arising in modern statistical applications present serious challenges for standard computational techniques for Bayesian inference, such as Markov chain Monte Carlo (MCMC) and other approaches requiring repeated evaluations of the likelihood function. We consider here the situation where the data are distributed across multiple computing nodes. This could be because the likelihood function cannot be computed on a single computing node in a reasonable amount of time, for example because the data might not fit into main memory.

We assume that the likelihood function can be expressed as a product of terms, so that the posterior density for the statistical parameter $Z$ satisfies

\pi(z) \propto \mu(z) \prod_{j=1}^{b} f_{j}(z), \qquad (1)

where $Z$ takes values $z \in \mathsf{E} \subseteq \mathbb{R}^{d}$, and $\mu$ is a prior density. We assume that $f_{j}$ is computable on computing node $j$ and involves consideration of $\mathbf{y}_{j}$, the $j$th subset or 'block' of the full data set, which comprises $b$ such blocks.

Many authors have considered 'embarrassingly parallel' MCMC algorithms approximating expectations with respect to (1), following the Consensus Monte Carlo approach of Scott et al. (2016). Such procedures require separate MCMC chains to be run on each computing node, each targeting a distribution dependent only on the associated likelihood contribution $f_{j}$. The samples from each of these chains are then combined in a final post-processing step to generate an approximation of the true posterior $\pi$. Such algorithms require communication between the nodes only at the very beginning and end of the procedure, falling into the MapReduce framework (Dean and Ghemawat, 2008); their use is therefore more advantageous when inter-node communication is costly, for example due to high latency. However, the effectiveness of such approaches at approximating the true posterior $\pi$ depends heavily on the final combination step. Many proposed approaches are based on assumptions on the likelihood contributions $f_{j}$, or employ techniques that may be infeasible in high-dimensional settings. We later review some of these approaches, and some issues surrounding their use, in Section 2.4.

Rather than aiming to minimise inter-node communication entirely, we propose an approach that avoids employing a final aggregation step, thereby avoiding distributional assumptions on $\pi$. This approach is motivated by global variable consensus optimisation (see, e.g., Boyd et al., 2011, Section 7; we give a summary in Section 2.1). Specifically, we introduce an instrumental hierarchical model, associating an auxiliary parameter with each likelihood contribution (and therefore each computing node); these auxiliary parameters are conditionally independent given $Z$. An additional top-level parameter controlling their unconditional strength of association is also introduced. This allows the construction of an MCMC algorithm on an extended state space, yielding estimates of expectations with respect to $\pi$. By tuning the association strength through the top-level parameter, a trade-off between computational tractability and fidelity to the original model can be controlled.

As well as avoiding issues associated with embarrassingly parallel algorithms, our framework presents benefits compared to the simple approach of constructing an MCMC sampler that directly targets (1). In settings where communication latency is non-negligible but the practitioner's time budget is limited, our approach allows a greater proportion of this time to be spent on computation rather than communication, permitting faster exploration of the state space.

Our approach was initially presented in Rendell et al. (2018). A proposal to use essentially the same framework in a serial context has been independently and contemporaneously published by Vono et al. (2019a), who construct a Gibbs sampler via a 'variable splitting' approach. Rather than distributing the computation, those authors focus on the setting where $b=1$ in order to obtain a relaxation of the original simulation problem. An implementation of this approach for problems in binary logistic regression has been proposed in Vono et al. (2018), with a number of non-asymptotic and convergence results presented more recently in Vono et al. (2019b). Our work focuses on distributed settings, providing a sequential Monte Carlo implementation of the framework that may be used to generate bias-corrected estimates.

We introduce the proposed framework and the resulting algorithmic structure in Section 2, including some discussion of issues in its implementation, and comparisons with related approaches in the literature. We then introduce a sequential Monte Carlo implementation of the framework in Section 3. Various simulation examples are presented in Section 4, before conclusions are provided in Section 5.

2 The instrumental model and MCMC

For simplicity, we shall occasionally abuse notation by using the same symbol for a probability measure on $\mathsf{E}$ and for its density with respect to some dominating measure. For the numerical examples presented herein, $\mathsf{E} \subseteq \mathbb{R}^{d}$ and all densities are defined with respect to a suitable version of the Lebesgue measure. We use the notation $x_{m:n} \coloneqq (x_{m},\ldots,x_{n})$ for arbitrary $x_{m},\ldots,x_{n}$. For a probability density function $\nu$ and function $\varphi$ we denote by $\nu(\varphi)$ the expectation of $\varphi(Z)$ when $Z \sim \nu$, i.e.

\nu(\varphi) \coloneqq \int \varphi(z)\,\nu(z)\,\mathrm{d}z.

2.1 The instrumental model

The goal of the present paper is to approximate (1). We take an approach that has also been developed in contemporaneous work by Vono et al. (2019a), although their objectives were somewhat different. Alongside the variable of interest $Z$, we introduce a collection of $b$ instrumental variables each also defined on $\mathsf{E}$, denoted by $X_{1:b}$. On the extended state space $\mathsf{E} \times \mathsf{E}^{b}$, we define the probability density function $\tilde{\pi}_{\lambda}$ by

\tilde{\pi}_{\lambda}(z, x_{1:b}) \propto \mu(z) \prod_{j=1}^{b} K_{j}^{(\lambda)}(z, x_{j})\, f_{j}(x_{j}), \qquad (2)

where for each $j \in \{1,\dots,b\}$, $\{K_{j}^{(\lambda)} : \lambda \in \mathbb{R}_{+}\}$ is a family of Markov transition densities on $\mathsf{E}$. Defining

f_{j}^{(\lambda)}(z) \coloneqq \int_{\mathsf{E}} K_{j}^{(\lambda)}(z, x)\, f_{j}(x)\,\mathrm{d}x, \qquad (3)

the density of the $Z$-marginal of $\tilde{\pi}_{\lambda}$ may be written as

\pi_{\lambda}(z) \coloneqq \int_{\mathsf{E}^{b}} \tilde{\pi}_{\lambda}(z, x_{1:b})\,\mathrm{d}x_{1:b} \propto \mu(z) \prod_{j=1}^{b} f_{j}^{(\lambda)}(z). \qquad (4)

Here, we may view each $f_{j}^{(\lambda)}$ as a smoothed form of $f_{j}$, with $\pi_{\lambda}$ being the corresponding smoothed form of the target density (1).

The role of $\lambda$ is to control the fidelity of $f_{j}^{(\lambda)}$ to $f_{j}$, and so we assume the following in the sequel.

Assumption 1

For all $\lambda > 0$, $f_{j}^{(\lambda)}$ is bounded for each $j \in \{1,\ldots,b\}$; and $f_{j}^{(\lambda)} \to f_{j}$ pointwise as $\lambda \to 0$ for each $j \in \{1,\ldots,b\}$.

For example, Assumption 1 implies that $\pi_{\lambda}$ converges in total variation to $\pi$ by Scheffé's lemma (Scheffé, 1947), and therefore $\pi_{\lambda}(\varphi) \to \pi(\varphi)$ for bounded $\varphi : \mathsf{E} \to \mathbb{R}$. Assumption 1 is satisfied for essentially any kernel that may be used for kernel density estimation; here, $\lambda$ takes a similar role to that of the smoothing bandwidth.

On a first reading one may assume that the $K_{j}^{(\lambda)}$ are chosen to be independent of $j$; for example, with $\mathsf{E} = \mathbb{R}$ one could take $K_{j}^{(\lambda)}(z, x) = \mathcal{N}(x; z, \lambda)$, a Gaussian density in $x$ with mean $z$ and variance $\lambda$. We describe some considerations in choosing these transition kernels in Section LABEL:subsec:choosingTransitionDensities of the supplement, and describe settings in which choosing these to differ with $j$ may be beneficial.
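As a concrete illustration of the smoothing in (3), consider this Gaussian choice together with a Gaussian partial likelihood $f_{j}(x) = \mathcal{N}(\bar{y}_{j}; x, \sigma_{j}^{2})$ (a worked special case of our own, not an example from the original text): the integral is then a convolution of Gaussian densities, so that

f_{j}^{(\lambda)}(z) = \int_{\mathbb{R}} \mathcal{N}(x; z, \lambda)\, \mathcal{N}(\bar{y}_{j}; x, \sigma_{j}^{2})\,\mathrm{d}x = \mathcal{N}(\bar{y}_{j}; z, \sigma_{j}^{2} + \lambda),

a flattened version of $f_{j}$ that recovers it pointwise as $\lambda \to 0$, in accordance with Assumption 1.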

[Figure: two directed acyclic graphs. Left: $Z \to \mathbf{y}_{j}$, $j=1,\dots,b$. Right: $Z$ and $\lambda$ point to each $X_{j}$, with $X_{j} \to \mathbf{y}_{j}$, $j=1,\dots,b$.]
Figure 1: Directed acyclic graphs, representing the original statistical model (left) and the instrumental model we construct (right).

The instrumental hierarchical model is presented diagrammatically in Figure 1. The variables $X_{1:b}$ may be seen as 'proxies' for $Z$ associated with each of the data subsets, which are conditionally independent given $Z$ and $\lambda$. Loosely speaking, $\lambda$ represents the extent to which we allow the local variables $X_{1:b}$ to differ from the global variable $Z$. In terms of computation, it is the separation of $Z$ from the subsets of the data $\mathbf{y}_{1:b}$ given $X_{1:b}$, introduced by the instrumental model, that can be exploited by distributed algorithms.

This approach to constructing an artificial joint target density is easily extended to accommodate random effects models, in which the original statistical model itself contains local variables associated with each data subset. These variables may be retained in the resulting instrumental model, alongside the local proxies $X_{1:b}$ for $Z$. A full description of the resulting model is presented in Rendell (2020).

The framework we describe is motivated by concepts in distributed optimisation, a connection that is also explored in the contemporaneous work of Vono et al. (2019a). The global consensus optimisation problem (see, e.g., Boyd et al., 2011, Section 7) is that of minimising a sum of functions on a common domain, under the constraint that their arguments are all equal to some global common value. If for each $j \in \{1,\ldots,b\}$ one uses the Gaussian kernel density $K_{j}^{(\lambda)}(z, x) = \mathcal{N}(x; z, \lambda)$, then taking the negative logarithm of (2) gives

-\log \tilde{\pi}_{\lambda}(z, x_{1:b}) = C - \log \mu(z) - \sum_{j=1}^{b} \log f_{j}(x_{j}) + \frac{1}{2\lambda} \sum_{j=1}^{b} \left(z - x_{j}\right)^{2}, \qquad (5)

where $C$ is a normalising constant. Maximising $\pi(z)$ is equivalent to minimising this function under the constraint that $z = x_{j}$ for $j \in \{1,\dots,b\}$, which may be achieved using the alternating direction method of multipliers (Bertsekas and Tsitsiklis, 1989). Specifically, (5) corresponds to the use of $1/\lambda$ as the penalty parameter in this procedure.

There are some similarities between this framework and Approximate Bayesian Computation (ABC; see Marin et al., 2012, for a review of such methods). In both cases one introduces a kernel that can be viewed as acting to smooth the likelihood. In the case of (2) the role of $\lambda$ is to control the scale of smoothing that occurs in the parameter space; the tolerance parameter used in ABC, in contrast, controls the extent of a comparable form of smoothing in the observation (or summary statistic) space.

2.2 Distributed Metropolis-within-Gibbs

The instrumental model described forms the basis of our proposed global consensus framework; 'global consensus Monte Carlo' is correspondingly the application of Monte Carlo methods to form an approximation of $\pi_{\lambda}$. We focus here on the construction of a Metropolis-within-Gibbs Markov kernel that leaves $\tilde{\pi}_{\lambda}$ invariant. If $\lambda$ is chosen to be sufficiently small, then the $Z$-marginal $\pi_{\lambda}$ provides an approximation of $\pi$. Therefore, given a chain with values denoted $(Z^{i}, X_{1:b}^{i})$ for $i \in \{1,\ldots,N\}$, an empirical approximation of $\pi$ is given by

\pi_{\lambda}^{N} \coloneqq \frac{1}{N} \sum_{i=1}^{N} \delta_{Z^{i}}, \qquad (6)

where $\delta_{z}$ denotes the Dirac measure at $z$.

The Metropolis-within-Gibbs kernel we consider utilises the full conditional densities

\tilde{\pi}_{\lambda}(x_{j} \mid z) \propto K_{j}^{(\lambda)}(z, x_{j})\, f_{j}(x_{j}) \qquad (7)

for $j \in \{1,\ldots,b\}$, and

\tilde{\pi}_{\lambda}(z \mid x_{1:b}) \propto \mu(z) \prod_{j=1}^{b} K_{j}^{(\lambda)}(z, x_{j}), \qquad (8)

where (7) follows from the mutual conditional independence of $X_{1:b}$ given $Z$. Here we observe that $K_{j}^{(\lambda)}(z, x_{j})$ simultaneously provides a pseudo-prior for $X_{j}$ and a pseudo-likelihood for $Z$.
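For intuition, consider the univariate Gaussian case (a standard conjugacy computation under assumptions of our choosing, not spelled out in the text): if $\mu(z) = \mathcal{N}(z; m_{0}, s_{0}^{2})$ and $K_{j}^{(\lambda)}(z, x) = \mathcal{N}(x; z, \lambda)$ for all $j$, then (8) is available exactly as

\tilde{\pi}_{\lambda}(z \mid x_{1:b}) = \mathcal{N}\left(z;\; \frac{m_{0}/s_{0}^{2} + \lambda^{-1} \sum_{j=1}^{b} x_{j}}{1/s_{0}^{2} + b/\lambda},\; \left(\frac{1}{s_{0}^{2}} + \frac{b}{\lambda}\right)^{-1}\right),

so the global update shrinks the average of the local proxies towards the prior mean, with the influence of the prior vanishing as $\lambda \to 0$.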

We define $M_{1}^{(\lambda)}$ to be a $\tilde{\pi}_{\lambda}$-invariant Markov kernel that fixes $z$; we may write

M_{1}^{(\lambda)}((z, x_{1:b}); \mathrm{d}(z', x'_{1:b})) = \delta_{z}(\mathrm{d}z') \prod_{j=1}^{b} P_{j,z}^{(\lambda)}(x_{j}, \mathrm{d}x'_{j}), \qquad (9)

where for each $j$, $P_{j,z}^{(\lambda)}(x_{j}, \cdot)$ is a Markov kernel leaving (7) invariant. We similarly define $M_{2}^{(\lambda)}$ to be a $\tilde{\pi}_{\lambda}$-invariant Markov kernel that fixes $x_{1:b}$,

M_{2}^{(\lambda)}((z, x_{1:b}); \mathrm{d}(z', x'_{1:b})) = \left[\prod_{j=1}^{b} \delta_{x_{j}}(\mathrm{d}x'_{j})\right] P_{x_{1:b}}^{(\lambda)}(z, \mathrm{d}z'), \qquad (10)

where $P_{x_{1:b}}^{(\lambda)}(z, \cdot)$ is a Markov kernel leaving (8) invariant.

Using these Markov kernels we construct an MCMC kernel that leaves $\tilde{\pi}_{\lambda}$ invariant; we present the resulting sampling procedure as Algorithm 1. In the special case in which one may sample exactly from the conditional distributions (7)–(8), this algorithm takes the form of a Gibbs sampler.

Algorithm 1 Global consensus Monte Carlo: MCMC algorithm

Fix $\lambda > 0$. Set initial state $(Z^{0}, X_{1:b}^{0})$; choose chain length $N$.

For $i = 1,\dots,N$:

  • For $j \in \{1,\dots,b\}$, sample $X_{j}^{i} \sim P_{j,Z^{i-1}}^{(\lambda)}(X_{j}^{i-1}, \cdot)$.

  • Sample $Z^{i} \sim P_{X_{1:b}^{i}}^{(\lambda)}(Z^{i-1}, \cdot)$.

Return $(Z^{i}, X_{1:b}^{i})_{i=1}^{N}$.

The interest from a distributed perspective is that the full conditional density (7) of each $X_{j}$, for given values $x_{j}$ and $z$, depends only on the $j$th block of data (through the partial likelihood $f_{j}$) and may be computed on the $j$th machine. Within Algorithm 1, the sampling of each $X_{j}^{i}$ from $P_{j,Z^{i-1}}^{(\lambda)}(X_{j}^{i-1}, \cdot)$ may therefore occur on the $j$th machine; these $X_{1:b}^{i}$ may then be communicated to a central machine that draws $Z^{i}$.

Our approach has particular benefits when sampling exactly from (7) is not possible, in which case Algorithm 1 takes a Metropolis-within-Gibbs form. One may choose the Markov kernels $P_{j,z}^{(\lambda)}$ to comprise multiple iterations of an MCMC kernel leaving (7) invariant; indeed, multiple computations of each $f_{j}$ (and therefore multiple accept/reject steps) may be conducted on each of the $b$ nodes without requiring communication between machines. This stands in contrast to more straightforward MCMC approaches directly targeting $\pi$, in which such communication is required for each evaluation of (1), and therefore for every accept/reject step. As with such a 'direct' MCMC approach, each iteration of Algorithm 1 requires communication to and from each of the $b$ machines on which the data are stored; but in cases where the communication latency is high, the resulting sampler will spend a greater proportion of time exploring the state space. This may in turn result in faster mixing (e.g. with respect to wall-clock time). We further discuss this comparison, and the role of communication latency, in Section LABEL:subsec:repeatedMCMCkernels of the supplement.
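To make the structure concrete, the following R sketch implements one iteration of Algorithm 1 for scalar $z$, with Gaussian kernels $K_{j}^{(\lambda)}(z, x) = \mathcal{N}(x; z, \lambda)$, a Gaussian prior, and $k$ random walk Metropolis steps for each $P_{j,z}^{(\lambda)}$. The function names, step size and prior parameters are our own illustrative choices, and the serial loop over $j$ stands in for work that would be performed on the $b$ nodes in parallel.

# One iteration of the global consensus MCMC kernel (Algorithm 1), emulating
# the b nodes serially; log_f is a list of log partial likelihood functions.
gcmc_iteration <- function(z, x, log_f, lambda, k = 20, step = 0.5,
                           m0 = 0, s0 = 10) {
  b <- length(x)
  for (j in 1:b) {                 # in a distributed setting, performed on node j
    for (it in 1:k) {              # k Metropolis steps leaving (7) invariant
      prop <- x[j] + step * rnorm(1)
      log_alpha <- log_f[[j]](prop) - log_f[[j]](x[j]) +
        dnorm(prop, mean = z, sd = sqrt(lambda), log = TRUE) -
        dnorm(x[j], mean = z, sd = sqrt(lambda), log = TRUE)
      if (log(runif(1)) < log_alpha) x[j] <- prop
    }
  }
  # Exact draw from the Gaussian full conditional (8) of Z given x_{1:b}
  prec <- 1 / s0^2 + b / lambda
  z <- rnorm(1, mean = (m0 / s0^2 + sum(x) / lambda) / prec, sd = sqrt(1 / prec))
  list(z = z, x = x)
}

Iterating this function and retaining the $z$ components yields the empirical approximation (6); note that only the $x$ updates touch the data.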

The setting in which Algorithm 1 describes a Gibbs sampler, with all variables drawn exactly from their full conditional distributions, is particularly amenable to analysis. We provide such a study in Section LABEL:sec:Gaussians of the supplement. This analysis may also be informative about the more general Metropolis-within-Gibbs setting, when $P_{j,z}^{(\lambda)}$ comprises enough MCMC iterations to exhibit good mixing.

2.3 Choosing the regularisation parameter

For $\varphi : \mathsf{E} \to \mathbb{R}$ we may estimate $\pi(\varphi)$ using (6) as $\pi_{\lambda}^{N}(\varphi)$. The regularisation parameter $\lambda$ here takes the role of a tuning parameter; we can view its effect on the mean squared error of such estimates using the bias–variance decomposition,

\mathbb{E}\left[\left(\pi_{\lambda}^{N}(\varphi) - \pi(\varphi)\right)^{2}\right] = \left[\pi_{\lambda}(\varphi) - \pi(\varphi)\right]^{2} + \operatorname{var}\left[\pi_{\lambda}^{N}(\varphi)\right], \qquad (11)

which holds exactly when $\mathbb{E}[\pi_{\lambda}^{N}(\varphi)] = \pi_{\lambda}(\varphi)$. In many practical cases this decomposition will provide a very accurate approximation for large $N$, as the squared bias of $\pi_{\lambda}^{N}(\varphi)$ is typically asymptotically negligible in comparison to its variance.

The decomposition (11) separates the contributions to the error from the bias introduced by the instrumental model and from the variance associated with the MCMC approximation. If $\lambda$ is too large, the squared bias term in (11) can dominate; if $\lambda$ is too small, the Markov chain may exhibit poor mixing due to strong conditional dependencies between $X_{1:b}$ and $Z$, and so the variance term in (11) can dominate.

It follows that $\lambda$ should ideally be chosen in order to balance these two considerations. We provide a theoretical analysis of the role of $\lambda$ in Section LABEL:sec:Gaussians of the supplement, by considering a simple Gaussian setting. In particular, we show that for a fixed number of data, one should scale $\lambda$ with the number of samples $N$ as $\mathcal{O}(N^{-1/3})$ in order to minimise the mean squared error. We also consider the consistency of the approximate posterior $\pi_{\lambda}$ as the number of data $n$ tends to infinity. Specifically, suppose these are split equally into $b$ blocks; considering $\lambda$ and $b$ as functions of $n$, we find that $\pi_{\lambda}$ exhibits posterior consistency if $\lambda/b$ decreases to $0$ as $n \to \infty$, and that credible intervals have asymptotically correct coverage if $\lambda/b$ decreases at a rate strictly faster than $n^{-1}$.

As an alternative to selecting a single value of $\lambda$, we propose in Section 3 a sequential Monte Carlo sampler employing Markov kernels formed via Algorithm 1. In this manner a decreasing sequence of $\lambda$ values may be considered, which may result in lower-variance estimates for small $\lambda$ values; we also describe a possible bias correction technique.

2.4 Related approaches

As previously mentioned, a Gibbs sampler construction essentially corresponding to Algorithm 1 has independently been proposed by Vono et al. (2019a). Their main objective is to improve algorithmic performance when computation of the full posterior density is intensive, by constructing full conditional distributions that are more computationally tractable; they primarily consider a setting in which $b=1$. In contrast, we focus on the exploitation of this framework in distributed settings (i.e. with $b>1$), in the manner described in Section 2.2.

The objectives of our algorithm are similar, but not identical, to those of the previously introduced 'embarrassingly parallel' approaches proposed by many authors. These reduce the costs of communication latency to a near-minimum by simulating a separate MCMC chain on each machine; typically, the chain on the $j$th machine is invariant with respect to a 'subposterior' distribution with density proportional to $\mu(z)^{1/b} f_{j}(z)$. Communication is necessary only for the final aggregation step, in which an approximation of the full posterior is obtained using the samples from all $b$ chains.

A well-studied approach within this framework is Consensus Monte Carlo (Scott et al., 2016), in which the post-processing step amounts to forming a 'consensus chain' by weighted averaging of the separate chains. In the case that each subposterior density is Gaussian, this approach can be used to produce samples asymptotically distributed according to $\pi$, by weighting each chain using the precision matrices of the subposterior distributions. Motivated by Bayesian asymptotics, the authors suggest using this approach more generally. In cases where the subposterior distributions exhibit near-Gaussianity this performs well, with the final 'consensus chain' providing a good approximation of posterior expectations. However, there are no theoretical guarantees associated with this approach in settings in which the subposterior densities are poorly approximated by Gaussians. In such cases consensus Monte Carlo sometimes performs poorly in forming an approximation of the posterior $\pi$ (as in examples of Wang et al., 2015; Srivastava et al., 2015; Dai et al., 2019), and so the resulting estimates of integrals $\pi(\varphi)$ exhibit high bias.

Various authors (e.g. Minsker et al., 2014; Rabinovich et al., 2015; Srivastava et al., 2015; Wang et al., 2015) have therefore proposed alternative techniques for utilising the values from each of these chains in order to approximate posterior expectations, each of which presents benefits and drawbacks. For example, Neiswanger et al. (2014) propose a strategy based on kernel density estimation; based on this approach, Scott (2017) suggests a strategy based on finite mixture models, though notes that both methods may be impractical in high-dimensional settings.

An aggregation procedure proposed by Wang and Dunson (2013) bears some relation to our proposed framework, being based on the application of Weierstrass transforms to each subposterior density. The resulting smoothed densities are analogous to (3), which represents a smoothed form of the partial likelihood $f_{j}$. As well as proposing an aggregation method based on rejection sampling, the authors propose a technique for 'refining' an initial posterior approximation, which may be expressed in terms of a Gibbs kernel on an extended state space. Comparing with our framework, this is analogous to applying Algorithm 1 for one iteration with $N$ different initial values.

A potential issue common to these approaches is the treatment of the prior density $\mu$. Each subposterior density is typically assigned an equal share of the prior information in the form of a fractionated prior density $\mu(z)^{1/b}$, but it is not clear when this approach is satisfactory. For example, suppose $\mu$ belongs to an exponential family; any property that is not invariant to multiplying the canonical parameters by a constant will not be preserved in the fractionated prior. As such, if $\mu(z)^{1/b}$ is proportional to a valid probability density function of $z$, then the corresponding distribution may be qualitatively very different to the full prior. Although Scott et al. (2016) note that fractionated priors perform poorly on some examples (for which tailored solutions are provided), no other general way of assigning prior information to each block naturally presents itself. In contrast, our approach avoids this problem entirely, with $\mu$ providing prior information for $Z$ at the 'global' level.

Finally, we believe this work is complementary to other distributed algorithms lying outside of the 'embarrassingly parallel' framework (including Xu et al., 2014; Jordan et al., 2019), and to approaches that aim to reduce the amount of computation associated with each likelihood calculation on a single node, e.g. by using only a subsample or batch of the data (as in Korattikara et al., 2014; Bardenet et al., 2014; Huggins et al., 2016; Maclaurin and Adams, 2014).

3 Sequential Monte Carlo approach

As discussed in Section 2.3, as $\lambda$ approaches $0$, estimators $\pi_{\lambda}^{N}(\varphi)$ formed using (6) exhibit lower bias but higher variance, due to poorer mixing of the resulting Markov chain. In order to obtain lower-variance estimators for $\lambda$ values close to $0$, we consider the use of sequential Monte Carlo (SMC) methodology to generate suitable estimates for a sequence of $\lambda$ values.

SMC methodology employs sequential importance sampling and resampling; recent surveys include Doucet and Johansen (2011) and Doucet and Lee (2018). We consider here approximations of a sequence of distributions with densities $\tilde{\pi}_{\lambda_{0}}, \tilde{\pi}_{\lambda_{1}}, \ldots$, where $\lambda_{0}, \ldots, \lambda_{n}$ is a decreasing sequence. The procedure we propose, specified in Algorithm 2, is an SMC sampler within the framework of Del Moral et al. (2006). This algorithmic form was first proposed by Gilks and Berzuini (2001) and Chopin (2002) in different settings, building upon ideas in Crooks (1998) and Neal (2001).

Algorithm 2 Global consensus Monte Carlo: SMC algorithm

Fix a decreasing sequence $(\lambda_{0}, \lambda_{1}, \dots, \lambda_{n})$. Set number of particles $N$.

Initialise:

  • For $i \in \{1,\dots,N\}$, sample $\zeta_{0}^{i} = (Z_{0}^{i}, X_{0,1:b}^{i}) \sim \tilde{\pi}_{\lambda_{0}}$ and set $W_{0}^{i} \leftarrow 1$.

For $p = 1,\dots,n$:

  1. For $i \in \{1,\ldots,N\}$, set $\tilde{W}_{p}^{i} \leftarrow W_{p-1}^{i}\, w_{p}(\zeta_{p-1}^{i})$, where

     w_{p}(z, x_{1:b}) \coloneqq \frac{\tilde{\pi}_{\lambda_{p}}(z, x_{1:b})}{\tilde{\pi}_{\lambda_{p-1}}(z, x_{1:b})} = \prod_{j=1}^{b} \frac{K_{j}^{(\lambda_{p})}(z, x_{j})}{K_{j}^{(\lambda_{p-1})}(z, x_{j})}.
  2. Optionally, carry out a resampling step: for $i \in \{1,\ldots,N\}$,

     • Sample $A_{p-1}^{i} \sim \mathrm{Categorical}(\tilde{W}_{p}^{1}, \ldots, \tilde{W}_{p}^{N})$ independently.

     • Set $W_{p}^{i} \leftarrow 1$.

     Otherwise: for $i \in \{1,\ldots,N\}$ set $A_{p-1}^{i} \leftarrow i$, $W_{p}^{i} \leftarrow \tilde{W}_{p}^{i}$.

  3. For $i \in \{1,\ldots,N\}$, sample $\zeta_{p}^{i} = (Z_{p}^{i}, X_{p,1:b}^{i}) \sim M_{p}(\zeta_{p-1}^{A_{p-1}^{i}}, \cdot)$, where $M_{p}$ is a $\tilde{\pi}_{\lambda_{p}}$-invariant MCMC kernel constructed in the manner of Algorithm 1.

  4. Optionally, store the particle approximation of $\tilde{\pi}_{\lambda_{p}}$,

     \tilde{\pi}_{\lambda_{p}}^{N} \coloneqq \frac{\sum_{i=1}^{N} W_{p}^{i}\, \delta_{\zeta_{p}^{i}}}{\sum_{i=1}^{N} W_{p}^{i}}.

The algorithm presented involves simulating particles using $\tilde{\pi}_{\lambda}$-invariant Markov kernels, and has a genealogical structure imposed by the ancestor indices $A_{p}^{i}$ for $p \in \{0,\ldots,n-1\}$ and $i \in \{1,\ldots,N\}$. The specific scheme for simulating the ancestor indices here is known as multinomial resampling; other schemes can be used (see Douc et al., 2005; Gerber et al., 2019, for a summary of some schemes and their properties). We use this simple scheme here as it validates the use of the variance estimators used in Section 3.1. This optional resampling step is used to prevent the degeneracy of the particle set; a common approach is to carry out this step whenever the particles' effective sample size (Liu and Chen, 1995) falls below a pre-determined threshold.
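A minimal R sketch of the reweighting and ESS-triggered resampling steps follows, assuming Gaussian kernels so that the incremental weight $w_{p}$ is available in closed form; the threshold of $N/2$ is a common default rather than a recommendation from the text. Here `z` is a particle's global variable and `x` its vector of $b$ local proxies.

# Log incremental weight (step 1) under K_j^(lambda)(z, x) = N(x; z, lambda):
# w_p(z, x_{1:b}) = prod_j N(x_j; z, lambda_p) / N(x_j; z, lambda_{p-1}).
log_incremental_weight <- function(z, x, lambda_new, lambda_old) {
  sum(dnorm(x, mean = z, sd = sqrt(lambda_new), log = TRUE) -
      dnorm(x, mean = z, sd = sqrt(lambda_old), log = TRUE))
}

# ESS-triggered multinomial resampling (step 2), given unnormalised weights W.
resample_if_needed <- function(W, threshold = length(W) / 2) {
  W_norm <- W / sum(W)
  ess <- 1 / sum(W_norm^2)  # effective sample size of Liu and Chen (1995)
  if (ess < threshold) {
    list(ancestors = sample.int(length(W), replace = TRUE, prob = W_norm),
         W = rep(1, length(W)))
  } else {
    list(ancestors = seq_along(W), W = W)
  }
}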

Under weak conditions, $\tilde{\pi}_{\lambda_{p}}^{N}(\varphi)$ converges almost surely to $\tilde{\pi}_{\lambda_{p}}(\varphi)$ as $N \to \infty$. One can also define the particle approximations of $\pi_{\lambda_{p}}$ via

\pi_{\lambda_{p}}^{N} \coloneqq \frac{\sum_{i=1}^{N} W_{p}^{i}\, \delta_{Z_{p}^{i}}}{\sum_{i=1}^{N} W_{p}^{i}}, \qquad (12)

where $Z_{p}^{i}$ is the first component of the particle $\zeta_{p}^{i}$.

Although the algorithm is specified for simplicity in terms of a fixed sequence $\lambda_{0}, \ldots, \lambda_{n}$, a primary motivation for the SMC approach is that the sequence used can be determined adaptively while running the algorithm. A number of such procedures have been proposed in the literature in the context of tempering, allowing each value $\lambda_{p}$ to be selected based on the particle approximation of $\tilde{\pi}_{\lambda_{p-1}}$. For example, Jasra et al. (2011) propose a procedure that controls the decay of the particles' effective sample size. Within the examples in Section 4 we employ a procedure proposed by Zhou et al. (2016), which generalises this approach to settings in which resampling is not conducted in every iteration, aiming to control directly the dissimilarity between successive distributions. A possible approach to determining when to terminate the procedure, based on minimising the mean squared error (11), is detailed in Section LABEL:subsec:stoppingRule of the supplement.
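As a sketch of how such adaptation might look in practice, the following is our own minimal implementation of the conditional ESS criterion of Zhou et al. (2016), choosing the next $\lambda$ by bisection so that $\mathrm{CESS} = N \big(\sum_{i} W_{i} w_{i}\big)^{2} / \sum_{i} W_{i} w_{i}^{2}$ hits a target fraction of $N$; the bisection bounds and tolerance are illustrative assumptions.

# Choose the next lambda adaptively: bisect in (0, lambda_prev) so that the
# conditional ESS of Zhou et al. (2016) equals target * N. W: normalised
# weights; log_w(lambda): log incremental weights for each of the N particles.
choose_next_lambda <- function(W, log_w, lambda_prev, target = 0.98,
                               tol = 1e-8) {
  cess <- function(lambda) {
    lw <- log_w(lambda)
    w <- exp(lw - max(lw))       # stabilised; CESS is scale-invariant in w
    length(W) * sum(W * w)^2 / sum(W * w^2)
  }
  lo <- 0; hi <- lambda_prev     # CESS increases towards N as lambda -> lambda_prev
  while (hi - lo > tol) {
    mid <- (lo + hi) / 2
    if (cess(mid) < target * length(W)) lo <- mid else hi <- mid
  }
  hi
}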

With regard to initialisation, if it is not possible to sample from $\tilde{\pi}_{\lambda_{0}}$, one could instead use samples obtained by importance sampling, or one could initialise an SMC sampler with some tractable distribution and use tempering or similar techniques to reach $\tilde{\pi}_{\lambda_{0}}$. At the expense of the introduction of an additional approximation, an alternative would be to run a $\tilde{\pi}_{\lambda_{0}}$-invariant Markov chain, and obtain an initial collection of particles by thinning the output (an approach that may be validated using results of Finke et al., 2020). Specifically, one could use Algorithm 1 to generate such samples for some large $\lambda_{0}$, benefiting from its good mixing and low autocorrelation when $\lambda$ is sufficiently large. The effect of Algorithm 2 may then be seen as refining or improving the resulting estimators, by bringing the parameter $\lambda$ closer to zero.

Other points in favour of this approach are that many of the particle approximations (12) can be used to form a final estimate of $\pi(\varphi)$, as explored in Section 3.1, and that SMC methods can be more robust to multimodality of $\pi$ than simple Markov chain schemes. We finally note that in such an SMC sampler, a careful implementation of the MCMC kernels used may allow the inter-node communication to be interleaved with likelihood computations associated with the particles, thereby reducing the costs associated with communication latency.

3.1 Bias correction using local linear regression

We present an approach to use many of the particle approximations produced by Algorithm 2. A natural idea is to regress the values of $\pi_{\lambda}^{N}(\varphi)$ on $\lambda$, extrapolating to $\lambda = 0$ to obtain an estimate of $\pi(\varphi)$. A similar idea has been used for bias correction in the context of Approximate Bayesian Computation, albeit not in an SMC setting, regressing on the discrepancy between the observed data and simulated pseudo-observations (Beaumont et al., 2002; Blum and François, 2010).

Under very mild assumptions on the transition densities $K_{j}^{(\lambda)}$, $\pi_{\lambda}(\varphi)$ is smooth as a function of $\lambda$. Considering a first-order Taylor expansion of this function, a simple approach is to model the dependence of $\pi_{\lambda}(\varphi)$ on $\lambda$ as linear, for $\lambda$ sufficiently close to $0$. Having determined a subset of the values of $\lambda$ used for which a linear approximation is appropriate (we propose a heuristic approach in Section LABEL:subsec:inclusionProcedure of the supplement), one can use linear least squares to carry out the regression. To account for the SMC estimates $\pi_{\lambda_{p}}^{N}(\varphi)$ having different variances, we propose the use of weighted least squares, with the 'observations' $\pi_{\lambda_{p}}^{N}(\varphi)$ assigned weights inversely proportional to their estimated variances; we describe methods for computing such variance estimates in Section 3.2. A bias-corrected estimate of $\pi(\varphi)$ is then obtained by extrapolating the resulting fit to $\lambda = 0$, which corresponds to taking the estimated intercept term.

To make this explicit, first consider the case in which $\varphi : \mathsf{E} \to \mathbb{R}$, so that the estimates $\pi_{\lambda}^{N}(\varphi)$ are univariate. For each value $\lambda_{p}$, denote the corresponding SMC estimate by $\eta_{p} \coloneqq \pi_{\lambda_{p}}^{N}(\varphi)$, and let $v_{p}$ denote some proxy for the variance of this estimate. Then for some set of indices $S \coloneqq \{p^{*}, \dots, n\}$ chosen such that the relationship between $\eta_{p}$ and $\lambda_{p}$ is approximately linear for $p \in S$, a bias-corrected estimate for $\pi(\varphi)$ may be computed as

\pi_{S}^{\mathrm{BC}}(\varphi) \coloneqq \tilde{\eta}_{S} - \tilde{\lambda}_{S} \frac{\sum_{p \in S} (\lambda_{p} - \tilde{\lambda}_{S})(\eta_{p} - \tilde{\eta}_{S})/v_{p}}{\sum_{p \in S} (\lambda_{p} - \tilde{\lambda}_{S})^{2}/v_{p}}, \qquad (13)

where $\tilde{\lambda}_{S}$ and $\tilde{\eta}_{S}$ denote weighted means given by

\tilde{\lambda}_{S} \coloneqq \frac{\sum_{p \in S} \lambda_{p}/v_{p}}{\sum_{p \in S} 1/v_{p}}, \qquad \tilde{\eta}_{S} \coloneqq \frac{\sum_{p \in S} \eta_{p}/v_{p}}{\sum_{p \in S} 1/v_{p}}.

The formal justification of this estimate assumes that the observations are uncorrelated, which does not hold here. We demonstrate in Section 4 examples on which this simple approach is nevertheless effective, but in principle one could use generalised least squares combined with some approximation of the full covariance matrix of the SMC estimates.

In the more general case where $\varphi : \mathsf{E} \to \mathbb{R}^{d}$ for $d > 1$, we propose simply evaluating (13) for each component of this quantity separately, which corresponds to fitting an independent weighted least squares regression to each component. This facilitates the use of the variance estimators described in the following section, though in principle one could use multivariate weighted least squares or other approaches.
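A direct R transcription of (13) is given below (a sketch: `lambda`, `eta` and `v` hold the values $\lambda_{p}$, $\eta_{p}$ and $v_{p}$ for $p \in S$, with the index set $S$ assumed to have been chosen already).

# Bias-corrected estimate (13): weighted least squares fit of eta on lambda
# with weights 1/v, extrapolated to lambda = 0 (the fitted intercept).
bias_corrected_estimate <- function(lambda, eta, v) {
  lambda_bar <- sum(lambda / v) / sum(1 / v)
  eta_bar <- sum(eta / v) / sum(1 / v)
  slope <- sum((lambda - lambda_bar) * (eta - eta_bar) / v) /
    sum((lambda - lambda_bar)^2 / v)
  eta_bar - lambda_bar * slope
}

Equivalently, `coef(lm(eta ~ lambda, weights = 1 / v))[1]` returns the same intercept using R's built-in weighted least squares.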

3.2 Variance estimation for weighted least squares

We propose the weighted form of least squares here since, as the values of $\lambda$ used in the SMC procedure approach zero, the estimators generated may increase in variance: partly due to poorer mixing of the MCMC kernels as previously described, but also due to the gradual degeneracy of the particle set. In order to estimate the variances of estimates generated using SMC, several recent approaches have been proposed that allow this estimation to be carried out online by considering the genealogy of the particles. Using any such procedure, one may estimate the variance of $\pi_{\lambda}^{N}(\varphi)$ for each $\lambda$ value considered by Algorithm 2, with these values used for each $v_{p}$ in (13).

Within our examples, we use the estimator proposed by Lee and Whiteley (2018), which for fixed $N$ coincides with an earlier proposal of Chan and Lai (2013) (up to a multiplicative constant). Specifically, after each step of the SMC sampler we compute an estimate of the asymptotic variance of each estimate $\pi_{\lambda_{p}}^{N}(\varphi)$; that is, the limit of $N \operatorname{var}[\pi_{\lambda_{p}}^{N}(\varphi)]$ as $N \to \infty$. While this is not equivalent to computing the true variance of each estimate, for fixed large $N$ the relative sizes of these asymptotic variance estimates should provide a useful indicator of the relative variances of each estimate $\pi_{\lambda}^{N}(\varphi)$. In Section LABEL:subsec:gaussianSMCexample of the supplement we show empirically that inversely weighting the SMC estimates according to these estimated variances can result in more stable bias-corrected estimates as the particle set degenerates. We also explain in Section LABEL:subsec:stoppingRule how these estimated variances can be used within a rule to determine when to terminate the algorithm.

The asymptotic variance estimator described is consistent in $N$. However, if in practice resampling at the $p$th time step causes the particle set to degenerate to having a single common ancestor, then the estimator evaluates to zero, and so it is impossible to use this value as the inverse weight $v_{p}$ in (13). Such an outcome may be interpreted as a warning that too few particles have been used for the resulting SMC estimates to be reliable, and that a greater number should be used when re-running the procedure.

4 Examples

4.1 Log-normal toy example

To compare the posterior approximations formed by our global consensus framework with those formed by some embarrassingly parallel approaches discussed in Section 2.4, we conduct a simulation study based on a simple model. Let $\mathcal{LN}(x; \mu, \sigma^{2})$ denote the density of a log-normal distribution with parameters $(\mu, \sigma^{2})$; that is,

\mathcal{LN}(x; \mu, \sigma^{2}) = \frac{1}{x\sqrt{2\pi\sigma^{2}}} \exp\left(-\frac{(\log(x) - \mu)^{2}}{2\sigma^{2}}\right).

One may consider a model with prior density $\mu(z) = \mathcal{LN}(z; \mu_{0}, \sigma_{0}^{2})$ and likelihood contributions $f_{j}(z) = \mathcal{LN}(\log(\mu_{j}); \log(z), \sigma_{j}^{2})$ for $j \in \{1,\dots,b\}$. This may be seen as a reparametrisation of the Gaussian model in Section LABEL:sec:Gaussians of the supplement, in which each likelihood contribution is that of a data subset with a Gaussian likelihood. This convenient setting allows for the target distribution $\pi$ to be expressed analytically. For the implementation of the global consensus algorithm, we choose Markov transition kernels given by $K_{j}^{(\lambda)}(z, x) = \mathcal{LN}(x; \log(z), \lambda)$ for each $j \in \{1,\dots,b\}$, which satisfy Assumption 1; this allows for exact sampling from all the full conditional distributions.
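In R, these kernels are immediate to sample from and evaluate; note that the `*lnorm` functions are parameterised by the log-scale standard deviation, whereas the paper uses the variance $\lambda$ (the function names below are our own).

# Kernel K_j^(lambda)(z, .) = LN( . ; log(z), lambda): sampling and density.
sample_kernel <- function(z, lambda) {
  rlnorm(1, meanlog = log(z), sdlog = sqrt(lambda))
}
kernel_density <- function(x, z, lambda) {
  dlnorm(x, meanlog = log(z), sdlog = sqrt(lambda))
}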

As a toy example to illustrate the effects of non-Gaussian partial likelihoods, we consider a case in which $f_{j}(z) = \mathcal{LN}(\log(\mu_{j}); \log(z), 1)$ for each $j$, and $\mu(z) = \mathcal{LN}(z; 0, 25)$. Here we took $b = 32$, and selected the location parameters $\mu_{j}$ as i.i.d. samples from a standard normal distribution. We ran global consensus Monte Carlo (GCMC) using the Gibbs sampler form of Algorithm 1, for values of $\lambda$ between $10^{-5}$ and $10$. For comparison we also drew samples from each subposterior distribution as defined in Section 2.4, combining the samples using various approaches: the Consensus Monte Carlo (CMC) averaging of Scott et al. (2016); the nonparametric density product estimation (NDPE) approach of Neiswanger et al. (2014); and the Weierstrass rejection sampling (WRS) combination technique of Wang and Dunson (2013), using their R implementation (https://github.com/wwrechard/weierstrass). In each case we ran the algorithm 25 times, drawing $N = 10^{5}$ samples.

Algorithm                   $\varphi(z)=z$            $\varphi(z)=z^{5}$          $\varphi(z)=\log(z)$
                            ($\pi(\varphi)=1.141$)    ($\pi(\varphi)=2.644$)      ($\pi(\varphi)=0.1164$)
GCMC, $\lambda=10^{1}$      $1.329 \pm 0.003$         $121.154 \pm 10.487$        $0.1151 \pm 0.0019$
GCMC, $\lambda=10^{0}$      $1.159 \pm 0.002$         $3.901 \pm 0.037$           $0.1165 \pm 0.0014$*
GCMC, $\lambda=10^{-1}$     $1.144 \pm 0.003$*        $2.763 \pm 0.044$*          $0.1173 \pm 0.0030$
GCMC, $\lambda=10^{-2}$     $1.140 \pm 0.011$         $2.648 \pm 0.143$           $0.1150 \pm 0.0090$
GCMC, $\lambda=10^{-3}$     $1.142 \pm 0.022$         $2.661 \pm 0.295$           $0.1191 \pm 0.0199$
GCMC, $\lambda=10^{-4}$     $1.120 \pm 0.077$         $3.505 \pm 1.136$           $0.1630 \pm 0.0651$
GCMC, $\lambda=10^{-5}$     $1.400 \pm 0.110$         $6.195 \pm 2.217$           $0.3283 \pm 0.0810$
CMC                         $1.073 \pm 0.010$         $16.092 \pm 5.675$          $0.0135 \pm 0.0095$
NDPE                        $1.148 \pm 0.029$         $2.800 \pm 0.385$           $0.1231 \pm 0.0246$
WRS                         $1.111 \pm 0.007$         $2.444 \pm 0.086$           $0.0862 \pm 0.0063$

Table 1: True values and estimates of $\pi(\varphi)$, for various test functions $\varphi$, for the log-normal toy model. Estimates obtained using global consensus Monte Carlo with various values of $\lambda$, and three embarrassingly parallel methods (see main text for abbreviations). For each method the mean estimate $\pm$ Monte Carlo standard error is presented, as computed over 25 replicates; in each column, the estimator with the lowest mean squared error is marked with an asterisk.

To demonstrate the role of $\lambda$ in the bias–variance decomposition (11), Table 1 presents the means and standard deviations of estimates of $\pi(\varphi)$, for various test functions $\varphi$. In estimating the first moment of $\pi$, GCMC generates a low-bias estimator when $\lambda$ is chosen to be sufficiently small; however, as expected, the variance of such estimators increases when very small values of $\lambda$ are chosen. While the other methods produce estimators of reasonably low variance, these exhibit somewhat higher bias. For CMC the bias is especially pronounced when estimating higher moments of the posterior distribution, as exemplified by the estimates of the fifth moment. Note however that high biases are also introduced when using global consensus Monte Carlo with large values of $\lambda$ (as seen here with $\lambda = 10$), for which $\pi_{\lambda}$ is a poor approximation of $\pi$.

Also of note are estimates of $\int \log(z)\,\pi(z)\,\mathrm{d}z$, corresponding to the mean of the Gaussian model of which this is a reparametrisation. While global consensus Monte Carlo performs well across a range of $\lambda$ values, the other methods perform less favourably; Consensus Monte Carlo produces an estimate that is incorrect by an order of magnitude. While this could be solved by a simple reparametrisation of the problem in this case, in more general settings no such straightforward solution may exist.

In Section LABEL:subsec:additionalLognormalExample of the supplement we present a second example based on a log-normal model, demonstrating the robustness of these methods to permutation and repartitioning of the data.

4.2 Logistic regression

Binary logistic regression models are commonly used in settings related to marketing. In web design for example, A/B testing may be used to determine which content choices lead to maximised user interaction, such as the user clicking on a product for sale.

We assume that we have a data set of size $n$ formed of responses $\eta_{\ell} \in \{-1, 1\}$ and vectors $\xi_{\ell} \in \{0, 1\}^{d}$ of binary covariates, where $\ell \in \{1,\dots,n\}$. The likelihood contribution of each block of data then takes the form $f_{j}(z) = \prod_{\ell} S(\eta_{\ell} z^{\mathsf{T}} \xi_{\ell})$, $z \in \mathbb{R}^{d}$, where the product is taken over those indices $\ell$ included in the $j$th block of data, and $S$ denotes the logistic function, $S(x) = (1 + \exp(-x))^{-1}$.
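The log partial likelihood for a block is then straightforward to compute; a sketch in R (our own helper, with `xi` the block's design matrix and `eta` its vector of $\pm 1$ responses):

# Log partial likelihood log f_j(z) = sum_l log S(eta_l * z' xi_l) for one
# block; plogis(t, log.p = TRUE) computes log S(t) in a numerically stable way.
log_fj <- function(z, xi, eta) {
  sum(plogis(eta * as.vector(xi %*% z), log.p = TRUE))
}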

For the prior $\mu$, we use a product of independent zero-mean Gaussians, with standard deviation $20$ for the parameter corresponding to the intercept term, and $5$ for all other parameters. For the Markov transition densities in GCMC, we use multivariate Gaussian densities: $K_{j}^{(\lambda)}(z, x) = \mathcal{N}(x; z, \lambda I)$ for each $j \in \{1,\dots,b\}$.

We investigated several such simulated data sets and the efficacy of various approaches in approximating the true posterior $\pi$. To illustrate the bias–variance trade-off described in Section 2.3, in the presentation of these results we focus on the estimation of the posterior first moment; denoting the identity function on $\mathbb{R}^{d}$ by $\operatorname{Id}$, we may write this as $\pi(\operatorname{Id})$. While our global consensus approach was consistently successful in forming estimators with low mean squared error in each component, in low-dimensional settings the application of Consensus Monte Carlo often resulted in marginal improvements. However, in many higher-dimensional settings, the estimators resulting from CMC exhibited relatively large biases.

We present here an example in which the $d$ predictors correspond to $p$ binary input variables, their pairwise products, and an intercept term, so that $d = 1 + p + \binom{p}{2}$. In settings where the interaction effects corresponding to these pairwise products are of interest, the dimensionality $d$ of the space can be very large compared to $p$. We used a simulated data set with $p = 20$ input variables, resulting in a parameter space of dimension $d = 211$. The data comprise $n = 80{,}000$ observations, split into $b = 8$ equally-sized blocks. Each observation of the $20$ binary variables was generated from a Bernoulli distribution with parameter $0.1$, and for each vector of covariates, the response was generated from the correct model, for a fixed underlying parameter vector $z^{*}$.
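For concreteness, data with this structure can be simulated as follows (a sketch matching the stated design; the function name and the use of `combn` are our own choices, and `z_star` is an arbitrary true parameter vector):

# Simulate n observations: p binary inputs ~ Bernoulli(0.1); design matrix
# with intercept, main effects and pairwise products; responses in {-1, 1}.
simulate_data <- function(n, p, z_star) {
  V <- matrix(rbinom(n * p, size = 1, prob = 0.1), nrow = n, ncol = p)
  pairs <- combn(p, 2)
  interactions <- V[, pairs[1, ]] * V[, pairs[2, ]]  # pairwise products
  xi <- cbind(1, V, interactions)                    # d = 1 + p + choose(p, 2)
  eta <- ifelse(runif(n) < plogis(as.vector(xi %*% z_star)), 1, -1)
  list(xi = xi, eta = eta)
}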

4.2.1 Metropolis-within-Gibbs

We applied GCMC for values of $\lambda$ between $10^{-4}$ and $1$. We used a Metropolis-within-Gibbs formulation of Algorithm 1, sampling directly from the Gaussian conditional distribution of $Z$ given $X_{1:b}$. To sample approximately from the conditional distributions of each $X_{j}$ given $Z$, we used Markov kernels $P_{j,z}^{(\lambda)}$ comprising $k = 20$ iterations of a random walk Metropolis kernel.
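Direct sampling of $Z$ is possible here because, with a Gaussian prior $\mu(z) = \mathcal{N}(z; m_{0}, \Sigma_{0})$ and the kernels above, the full conditional (8) is Gaussian; by the standard conjugacy computation (our own statement of it),

\tilde{\pi}_{\lambda}(z \mid x_{1:b}) = \mathcal{N}\left(z;\; \Sigma_{\lambda}\left(\Sigma_{0}^{-1} m_{0} + \lambda^{-1} \sum_{j=1}^{b} x_{j}\right),\; \Sigma_{\lambda}\right), \qquad \Sigma_{\lambda} = \left(\Sigma_{0}^{-1} + \frac{b}{\lambda} I\right)^{-1},

with $m_{0} = 0$ and $\Sigma_{0}$ diagonal in this example.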

As mentioned in Section 2.2, in settings of high communication latency our approach allows a greater proportion of wall-clock time to be spent on computing likelihood contributions, compared to an MCMC chain directly targeting the full posterior $\pi$. To compare across settings, we therefore consider an abstracted distributed setting as discussed in Section LABEL:subsec:repeatedMCMCkernels of the supplement, here assuming that the latency is $10$ times the time taken to compute each partial likelihood $f_{j}$ (in the notation of Section LABEL:subsec:repeatedMCMCkernels, $C = 10\ell$).

We also compare with the same embarrassingly parallel approaches as in Section 4.1 (CMC, NDPE, WRS), which are comparatively unaffected by communication latency. For these methods, we again used random walk Metropolis to draw samples from each subposterior distribution. To ease computation, we thinned these chains before applying the combination step; in practice, the estimators obtained using these thinned chains behaved very similarly to those obtained using all subposterior samples.

To provide a 'ground truth' against which to compare the results, we ran a random walk Metropolis chain of length $500{,}000$ targeting $\pi$. For all our random walk Metropolis samplers we used Gaussian proposal kernels. To determine the covariance matrices of these, we formed a Laplace approximation of the target density following the approach of Chopin and Ridgway (2017), scaling the resulting covariance matrix optimally according to results of Roberts and Rosenthal (2001).

Algorithm                    Mean sum of squared errors
GCMC, $\lambda=10^{0}$       $0.1835$
GCMC, $\lambda=10^{-1}$      $0.1379$
GCMC, $\lambda=10^{-2}$      $0.0770$
GCMC, $\lambda=10^{-3}$      $0.0478$*
GCMC, $\lambda=10^{-4}$      $0.0662$
CMC                          $0.3710$
NDPE                         $0.8476$
WRS                          $0.6402$
Direct MCMC                  $0.0884$

Table 2: Mean sum of squared errors over all $d$ components of estimates of the posterior mean for the logistic regression model, formed using various algorithmic approaches as described in the main text, during an approximate wall-clock time equal to $200{,}000$ times that required to compute a single partial likelihood $f_{j}$. All values computed over 25 replicates, with the lowest value marked with an asterisk.

For each algorithmic setting, we ran the corresponding sampler 25 times. To compare the resulting estimators of the posterior mean, we computed the mean squared error of each of the $d$ components of the posterior mean, summing these to obtain a 'mean sum of squared errors'.

Table 2 compares the values obtained by each algorithm after an approximate wall-clock time equal to $200{,}000$ times the time taken to compute a single partial likelihood $f_{j}$. Accounting for latency in the abstracted distributed setting described above, the GCMC approach is able to generate $5000$ approximate posterior samples during this time, spending $50\%$ of the time on likelihood computations. In contrast, a direct MCMC approach generates $9523$ samples, but would spend only $4.8\%$ of the time on likelihood computations, with the remainder lost due to latency.

The result is that the estimators generated by GCMC for appropriately chosen $\lambda$ exhibit lower mean sums of squared errors: we conduct many more accept/reject steps in each round of inter-node communication than if we were to directly target $\pi$, and so it becomes possible to achieve faster mixing of the $Z$-chain (and a better estimator) compared to such a direct approach. This may be seen when comparing the effective sample size (ESS) of each chain, which we estimate via the 'batch means' approach of Vats et al. (2019): we find that the average ESS of the direct MCMC chains is only $1111$, while depending on the choice of $\lambda$, the shorter GCMC chains have average ESS values between $1327$ and $4577$.
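For reference, ESS estimates of this kind can be computed with the mcmcse R package accompanying Vats et al. (2019); a sketch, assuming its `ess`/`multiESS` interface and using a placeholder chain:

# Batch-means effective sample size for a chain stored with one iteration
# per row, via the mcmcse package of Vats et al.
library(mcmcse)
chain <- matrix(rnorm(5000 * 3), ncol = 3)  # placeholder chain for illustration
ess_componentwise <- ess(chain)             # one ESS value per coordinate
ess_multivariate <- multiESS(chain)         # single multivariate ESS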

Despite being unaffected by latency, and therefore allowing many more samples to be drawn, the embarrassingly parallel approaches (CMC, NDPE, WRS) perform poorly compared to GCMC. This is particularly true of the nonparametric density product estimation (NDPE) method of Neiswanger et al. (2014): while asymptotically exact even in non-Gaussian settings, the resulting estimator is based on kernel density estimators and is not effective in this high-dimensional setting.

Figure 2: Mean sum of squared errors over all $d$ components of estimates of the posterior mean for the logistic regression model, formed using various algorithmic approaches as described in the main text. Values plotted against the approximate wall-clock time, relative to the time taken to compute a single partial likelihood term. All values computed over 25 replicates.

Figure 2 shows the mean sums of squared errors as a function of the approximate wall-clock time (for simplicity we include only the best-performing of the three embarrassingly parallel methods, omitting the results for NDPE and WRS). We see that for large enough $\lambda$, the GCMC estimators $\pi_{\lambda}^{N}(\operatorname{Id})$ exhibit rather lower values than the corresponding CMC and 'direct' MCMC estimators. As the number of samples used grows, the squared bias of these estimators begins to dominate, and so smaller $\lambda$ values result in lower mean squared errors. As $\lambda$ becomes smaller, the autocorrelation of the resulting $Z$-chain increases; indeed, we found that for $\lambda$ too small, the GCMC estimator $\pi_{\lambda}^{N}(\operatorname{Id})$ will always have a greater mean squared error than the 'direct' MCMC estimator, no matter how much time is used. Of course, since an MCMC estimator formed by directly targeting $\pi$ is consistent in $N$, given sufficient time such an estimator will always outperform estimators formed using GCMC, which are biased for any $\lambda$. However, in many practical big data settings it may be infeasible to draw large numbers of samples using the available time budget.

4.2.2 Sequential Monte Carlo

We also applied the SMC procedure to this logistic regression model. While we found that the SMC approach was most effective in lower-dimensional settings (see Section LABEL:subsec:gaussianSMCexample of the supplement for a simple example) in which it is less computationally expensive, the SMC procedure can be more widely useful as a means of ‘refining’ the estimator formed using a single λ\lambda value, as discussed in Section 3.

We used $N = 1250$ particles, initialising the particle set by thinning the chain generated by the Metropolis-within-Gibbs procedure with $\lambda = 10^{-1}$. To generate a sequence of subsequent $\lambda$ values we used the adaptive procedure of Zhou et al. (2016), with tuning parameter $\mathrm{CESS}^{\star} = 0.98$. For the Markov kernels $M_{p}$ we used Metropolis-within-Gibbs kernels as previously, with each update of $X_{j}$ given $Z$ comprising $k = 50$ iterations of a random walk Metropolis kernel.

The estimator $\pi_{\lambda_{0}}^{N}(\operatorname{Id})$ formed using the initial particle set was found to have a mean sum of squared errors of $0.0692$. After a fixed number of iterations ($n = 100$), the resulting SMC estimate exhibited a mean sum of squared errors of $0.0418$; this represents a decrease of $40\%$, and has the benefit of avoiding the need to carefully specify a single value for $\lambda$.

Used alone, the bias correction procedure of Section 3.1 was found to perform best in lower-dimensional settings (as in Section LABEL:subsec:gaussianSMCexample of the supplement); here, it resulted in a mean sum of squared errors of $0.0682$ after $100$ iterations. However, improved results were obtained using the stopping rule we propose in Section LABEL:subsec:stoppingRule of the supplement (with stopping parameter $\kappa = 15$), which is based on our proposed bias correction procedure. The estimator selected by this stopping rule, which automatically determines when to terminate the algorithm, obtained a mean sum of squared errors of $0.0367$, a decrease of $47\%$ from the estimator generated using the initial particle set.

5 Conclusion

We have presented a new framework for sampling in distributed settings, demonstrating its application on some illustrative examples in comparison to embarrassingly parallel approaches. Given that our proposed approach makes no additional assumptions on the form of the likelihood beyond its factorised form as in (1), we expect that our algorithm will be most effective in those big data settings for which approximate Gaussianity of the likelihood contributions may not hold. These may include high-dimensional settings, for which some subsets of the data may be relatively uninformative about the parameter. In such cases the likelihood contributions may be highly non-Gaussian, so that the consensus Monte Carlo approach of averaging across chains results in estimates of high bias; simultaneously, the high dimensionality may preclude the use of alternative combination techniques (e.g. the use of kernel density estimates).

This framework may also be of use in serial settings. As previously noted, the contemporaneous work of Vono et al., (2019a) presents an example in which an analogous framework (with $b=1$) yields more efficient simulation than approaches that directly target the true posterior. Our proposed SMC sampler implementation and the associated bias correction technique may equally be applied in such settings, reducing the need to specify a single value of the regularisation parameter $\lambda$.

There is scope for further improvements to the procedures presented here. In the SMC case, for example, while our proposed use of weighted least squares as a bias correction technique is simple, non-linear procedures (such as those proposed in an ABC context by Blum and François, 2010) may provide a more robust alternative with some theoretical guarantees. We also stress that the MCMC and SMC procedures presented here constitute only two possible approaches to inference within the instrumental hierarchical model we propose; there is considerable scope for alternative sampling algorithms to be employed within this global consensus framework.


SUPPLEMENTARY MATERIAL

Supplement to ‘Global Consensus Monte Carlo’:

“GCMCsupplement.pdf” includes supplementary material covering implementation considerations, theoretical analysis for a simple model, and heuristic procedures for the proposed SMC sampler (with numerical demonstrations).

R code for examples:

“GCMCexamples.zip” contains R scripts used to generate the numerical results and figures presented here and in the supplement.

References

  • Bardenet et al., (2014) Bardenet, R., Doucet, A., and Holmes, C. (2014). Towards scaling up Markov chain Monte Carlo: an adaptive subsampling approach. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 405–413.
  • Beaumont et al., (2002) Beaumont, M. A., Zhang, W., and Balding, D. J. (2002). Approximate Bayesian computation in population genetics. Genetics, 162(4):2025–2035.
  • Bertsekas and Tsitsiklis, (1989) Bertsekas, D. P. and Tsitsiklis, J. N. (1989). Parallel and Distributed Computation: Numerical Methods. Prentice-Hall.
  • Blum and François, (2010) Blum, M. G. and François, O. (2010). Non-linear regression models for approximate Bayesian computation. Statistics and Computing, 20(1):63–73.
  • Boyd et al., (2011) Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122.
  • Chan and Lai, (2013) Chan, H. P. and Lai, T. L. (2013). A general theory of particle filters in hidden Markov models and some applications. Annals of Statistics, 41(6):2877–2904.
  • Chopin, (2002) Chopin, N. (2002). A sequential particle filter method for static models. Biometrika, 89(3):539–552.
  • Chopin and Ridgway, (2017) Chopin, N. and Ridgway, J. (2017). Leave Pima Indians alone: binary regression as a benchmark for Bayesian computation. Statistical Science, 32(1):64–87.
  • Crooks, (1998) Crooks, G. E. (1998). Nonequilibrium measurements of free energy differences for microscopically reversible Markovian systems. Journal of Statistical Physics, 90(5–6):1481–1487.
  • Dai et al., (2019) Dai, H., Pollock, M., and Roberts, G. (2019). Monte Carlo fusion. Journal of Applied Probability, 56(1):174–191.
  • Dean and Ghemawat, (2008) Dean, J. and Ghemawat, S. (2008). MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107–113.
  • Del Moral et al., (2006) Del Moral, P., Doucet, A., and Jasra, A. (2006). Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3):411–436.
  • Douc et al., (2005) Douc, R., Cappé, O., and Moulines, E. (2005). Comparison of resampling schemes for particle filtering. In Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, pages 64–69. IEEE.
  • Doucet and Johansen, (2011) Doucet, A. and Johansen, A. M. (2011). A tutorial on particle filtering and smoothing: Fifteen years later. In Crisan, D. and Rozovskii, B., editors, Handbook of Nonlinear Filtering, pages 656–704. Oxford University Press.
  • Doucet and Lee, (2018) Doucet, A. and Lee, A. (2018). Sequential Monte Carlo methods. In Maathuis, M., Drton, M., Lauritzen, S. L., and Wainwright, M., editors, Handbook of Graphical Models, pages 165–189. CRC Press.
  • Finke et al., (2020) Finke, A., Doucet, A., and Johansen, A. M. (2020). Limit theorems for sequential MCMC methods. Advances in Applied Probability, 52(2). In press.
  • Gerber et al., (2019) Gerber, M., Chopin, N., and Whiteley, N. (2019). Negative association, ordering and convergence of resampling methods. Annals of Statistics, 47:2236–2260.
  • Gilks and Berzuini, (2001) Gilks, W. R. and Berzuini, C. (2001). Following a moving target — Monte Carlo inference for dynamic Bayesian models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(1):127–146.
  • Huggins et al., (2016) Huggins, J., Campbell, T., and Broderick, T. (2016). Coresets for scalable Bayesian logistic regression. In Advances in Neural Information Processing Systems, pages 4080–4088.
  • Jasra et al., (2011) Jasra, A., Stephens, D. A., Doucet, A., and Tsagaris, T. (2011). Inference for Lévy-driven stochastic volatility models via adaptive sequential Monte Carlo. Scandinavian Journal of Statistics, 38(1):1–22.
  • Jordan et al., (2019) Jordan, M. I., Lee, J. D., and Yang, Y. (2019). Communication-efficient distributed statistical inference. Journal of the American Statistical Association, 114(526):668–681.
  • Korattikara et al., (2014) Korattikara, A., Chen, Y., and Welling, M. (2014). Austerity in MCMC land: Cutting the Metropolis-Hastings budget. In Proceedings of the 31st International Conference on Machine Learning (ICML-14).
  • Lee and Whiteley, (2018) Lee, A. and Whiteley, N. (2018). Variance estimation in the particle filter. Biometrika, 105(3):609–625.
  • Liu and Chen, (1995) Liu, J. S. and Chen, R. (1995). Blind deconvolution via sequential imputations. Journal of the American Statistical Association, 90(430):567–576.
  • Maclaurin and Adams, (2014) Maclaurin, D. and Adams, R. P. (2014). Firefly Monte Carlo: Exact MCMC with subsets of data. In Proceedings of the 30th International Conference on Uncertainty in Artificial Intelligence (UAI-14), pages 543–552.
  • Marin et al., (2012) Marin, J. M., Pudlo, P., Robert, C. P., and Ryder, R. J. (2012). Approximate Bayesian computational methods. Statistics and Computing, 22(6):1167–1180.
  • Minsker et al., (2014) Minsker, S., Srivastava, S., Lin, L., and Dunson, D. (2014). Scalable and robust Bayesian inference via the median posterior. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1656–1664.
  • Neal, (2001) Neal, R. M. (2001). Annealed importance sampling. Statistics and Computing, 11(2):125–139.
  • Neiswanger et al., (2014) Neiswanger, W., Wang, C., and Xing, E. (2014). Asymptotically exact, embarrassingly parallel MCMC. In Proceedings of the 30th International Conference on Uncertainty in Artificial Intelligence (UAI-14), pages 623–632.
  • Rabinovich et al., (2015) Rabinovich, M., Angelino, E., and Jordan, M. I. (2015). Variational consensus Monte Carlo. In Advances in Neural Information Processing Systems, pages 1207–1215.
  • Rendell, (2020) Rendell, L. J. (2020). Sequential Monte Carlo variance estimators and global consensus. PhD thesis, University of Warwick.
  • Rendell et al., (2018) Rendell, L. J., Johansen, A. M., Lee, A., and Whiteley, N. (2018). Global consensus Monte Carlo. arXiv preprint arXiv:1807.09288.
  • Roberts and Rosenthal, (2001) Roberts, G. O. and Rosenthal, J. S. (2001). Optimal scaling for various Metropolis–Hastings algorithms. Statistical Science, 16(4):351–367.
  • Scheffé, (1947) Scheffé, H. (1947). A useful convergence theorem for probability distributions. Annals of Mathematical Statistics, 18(3):434–438.
  • Scott, (2017) Scott, S. L. (2017). Comparing consensus Monte Carlo strategies for distributed Bayesian computation. Brazilian Journal of Probability and Statistics, 31(4):668–685.
  • Scott et al., (2016) Scott, S. L., Blocker, A. W., Bonassi, F. V., Chipman, H. A., George, E. I., and McCulloch, R. E. (2016). Bayes and big data: The consensus Monte Carlo algorithm. International Journal of Management Science and Engineering Management, 11(2):78–88.
  • Srivastava et al., (2015) Srivastava, S., Cevher, V., Dinh, Q., and Dunson, D. (2015). WASP: Scalable Bayes via barycenters of subset posteriors. In Artificial Intelligence and Statistics, pages 912–920.
  • Vats et al., (2019) Vats, D., Flegal, J. M., and Jones, G. L. (2019). Multivariate output analysis for Markov chain Monte Carlo. Biometrika, 106(2):321–337.
  • Vono et al., (2018) Vono, M., Dobigeon, N., and Chainais, P. (2018). Sparse Bayesian binary logistic regression using the split-and-augmented Gibbs sampler. In Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing.
  • Vono et al., (2019a) Vono, M., Dobigeon, N., and Chainais, P. (2019a). Split-and-augmented Gibbs sampler – application to large-scale inference problems. IEEE Transactions on Signal Processing, 67(6):1648–1661.
  • Vono et al., (2019b) Vono, M., Paulin, D., and Doucet, A. (2019b). Efficient MCMC sampling with dimension-free convergence rate using ADMM-type splitting. arXiv preprint arXiv:1905.11937.
  • Wang and Dunson, (2013) Wang, X. and Dunson, D. B. (2013). Parallelizing MCMC via Weierstrass sampler. arXiv preprint arXiv:1312.4605.
  • Wang et al., (2015) Wang, X., Guo, F., Heller, K. A., and Dunson, D. B. (2015). Parallelizing MCMC with random partition trees. In Advances in Neural Information Processing Systems, pages 451–459.
  • Xu et al., (2014) Xu, M., Lakshminarayanan, B., Teh, Y. W., Zhu, J., and Zhang, B. (2014). Distributed Bayesian posterior sampling via moment sharing. In Advances in Neural Information Processing Systems, pages 3356–3364.
  • Zhou et al., (2016) Zhou, Y., Johansen, A. M., and Aston, J. A. (2016). Towards automatic model comparison: an adaptive sequential Monte Carlo approach. Journal of Computational and Graphical Statistics, 25(3):701–726.