
MixFlows: principled variational inference via mixed flows

Zuheng Xu    Naitong Chen    Trevor Campbell
Abstract

This work presents mixed variational flows (MixFlows), a new variational family that consists of a mixture of repeated applications of a map to an initial reference distribution. First, we provide efficient algorithms for i.i.d. sampling, density evaluation, and unbiased ELBO estimation. We then show that MixFlows have MCMC-like convergence guarantees when the flow map is ergodic and measure-preserving, and provide bounds on the accumulation of error for practical implementations where the flow map is approximated. Finally, we develop an implementation of MixFlows based on uncorrected discretized Hamiltonian dynamics combined with deterministic momentum refreshment. Simulated and real data experiments show that MixFlows can provide more reliable posterior approximations than several black-box normalizing flows, as well as samples of comparable quality to those obtained from state-of-the-art MCMC methods.


1 Introduction

Bayesian statistical modelling and inference provides a principled approach to learning from data. However, for all but the simplest models, exact inference is not possible and computational approximations are required. A standard methodology for Bayesian inference is Markov chain Monte Carlo (MCMC) [Robert & Casella, 2004; Robert & Casella, 2011; Gelman et al., 2013, Ch. 11,12], which involves simulating a Markov chain whose stationary distribution is the Bayesian posterior distribution, and then treating the sequence of states as draws from the posterior. MCMC methods are supported by theory that guarantees that if one simulates the chain for long enough, Monte Carlo averages based on the sequence of states will converge to the exact posterior expectation of interest (e.g., (Roberts & Rosenthal, 2004)). This “exactness” property is quite compelling: regardless of how well one is able to tune the Markov chain, the method is guaranteed to eventually produce an accurate result given enough computation time. Nevertheless, it remains a challenge to assess and optimize the performance of MCMC in practice with a finite computational budget. One option is to use a Stein discrepancy (Gorham & Mackey, 2015; Liu et al., 2016; Chwialkowski et al., 2016; Gorham & Mackey, 2017; Anastasiou et al., 2021), which quantifies how well a set of MCMC samples approximates the posterior distribution. But standard Stein discrepancies are not reliable in the presence of multimodality (Wenliang & Kanagawa, 2020), are computationally expensive to estimate, and suffer from the curse of dimensionality, although recent work addresses the latter two issues to an extent (Huggins & Mackey, 2018; Gong et al., 2021). The other option is to use a traditional diagnostic, e.g., Gelman–Rubin (Gelman & Rubin, 1992; Brooks & Gelman, 1998), effective sample size (Gelman et al., 2013, p. 286), Geweke (1992), or others (Cowles & Carlin, 1996). These diagnostics detect mixing issues, but do not comprehensively quantify how well the MCMC samples approximate the posterior.

Variational inference (VI) (Jordan et al., 1999; Wainwright & Jordan, 2008; Blei et al., 2017) is an alternative to MCMC that does provide a straightforward quantification of posterior approximation error. In particular, VI involves approximating the posterior with a probability distribution—typically selected from a parametric family—that enables both i.i.d. sampling and density evaluation (Wainwright & Jordan, 2008; Rezende & Mohamed, 2015a; Ranganath et al., 2016; Papamakarios et al., 2021). Because one can both obtain i.i.d. draws and evaluate the density, one can estimate the ELBO (Blei et al., 2017), i.e., the Kullback-Leibler (KL) divergence (Kullback & Leibler, 1951) to the posterior up to a constant. The ability to estimate this quantity, in turn, enables scalable tuning via straightforward stochastic gradient descent algorithms (Hoffman et al., 2013; Ranganath et al., 2014), optimally mixed approximations (Jaakkola & Jordan, 1998; Gershman et al., 2012; Zobay, 2014; Guo et al., 2016; Wang, 2016; Miller et al., 2017; Locatello et al., 2018b, a; Campbell & Li, 2019), model selection (Corduneau & Bishop, 2001; Masa-aki, 2001; Constantinopoulos et al., 2006; Ormerod et al., 2017; Chérief-Abdellatif & Alquier, 2018; Tao et al., 2018), and more. However, VI typically does not possess the same “exactness regardless of tuning” that MCMC does. The optimal variational distribution is not usually equal to the posterior due to the use of a limited parametric variational family; and even if it were, one could not reliably find it due to nonconvexity of the KL objective (Xu & Campbell, 2022). Recent work addresses this problem by constructing variational families from parametrized Markov chains targeting the posterior. Many are related to annealed importance sampling (Salimans et al., 2015; Wolf et al., 2016; Geffner & Domke, 2021; Zhang et al., 2021; Thin et al., 2021b; Jankowiak & Phan, 2021); these methods introduce numerous auxiliary variables and only have convergence guarantees in the limit of increasing dimension of the joint distribution. Those based on flows (Neal, 2005; Caterini et al., 2018; Chen et al., 2022) avoid the increase in dimension with flow length, but typically do not have guaranteed convergence to the target. Methods involving the final-state marginal of finite simulations of standard MCMC methods (e.g., Zhang & Hernández-Lobato, 2020) do not enable density evaluation.

The first contribution of this work is a new family of mixed variational flows (MixFlows), constructed via averaging over repeated applications of a pushforward map to an initial reference distribution. We develop efficient methods for i.i.d. sampling, density evaluation, and unbiased ELBO estimation. The second contribution is a theoretical analysis of MixFlows. We show (Theorems 4.1 and 4.2) that when the map family is ergodic and measure-preserving, MixFlows converge to the target distribution for any value of the variational parameter, and hence have guarantees regardless of tuning as in MCMC. We then extend these results (Theorems 4.3 and 4.4) to MixFlows based on approximated maps—which are typically necessary in practice—with bounds on error with increasing flow length. The third contribution of the work is an implementation of MixFlows via uncorrected discretized Hamiltonian dynamics. Simulated and real data experiments compare performance to the No-U-Turn sampler (NUTS) (Hoffman & Gelman, 2014), standard Hamiltonian Monte Carlo (HMC) (Neal, 2011), and several black-box normalizing flows (Rezende & Mohamed, 2015b; Dinh et al., 2017). Our results demonstrate a comparable sample quality to NUTS, similar computational efficiency to HMC, and more reliable posterior approximations than standard normalizing flows.

Related work.

Averages of Markov chain state distributions in general were studied in early work on shift-coupling (Aldous & Thorisson, 1993), with convergence guarantees established by Roberts & Rosenthal (1997). However, these guarantees involve minorization and drift conditions that are designed for stochastic Markov chains, and are not applicable to MixFlows. Averages of deterministic pushforwards specifically to enable density evaluation have also appeared in more recent work. Rotskoff & Vanden-Eijnden (2019); Thin et al. (2021a) use an average of pushforwards generated by simulating nonequilibrium dynamics as an importance sampling proposal. The proposal distribution itself does not come with any convergence guarantees—due to the use of non-measure-preserving, non-ergodic damped Hamiltonian dynamics—or tuning guidance. Our work provides comprehensive convergence theory and establishes a convenient means of optimizing hyperparameters.

MixFlows are also related to past work on deterministic MCMC (Murray & Elliott, 2012; Neal, 2012; ver Steeg & Galstyan, 2021; Neklyudov et al., 2021). Murray & Elliott (2012) developed a Markov chain Monte Carlo procedure based on an arbitrarily dependent random stream via augmentation and measure-preserving bijections. ver Steeg & Galstyan (2021) designed a specialized momentum distribution that generates valid Monte Carlo samples solely through the simulation of deterministic Hamiltonian dynamics. Neklyudov et al. (2021) proposed a general form of measure-preserving dynamics that can be utilized to construct deterministic Gibbs samplers. These works all involve only deterministic updates, but do not construct variational approximations, provide total variation convergence guarantees, or provide guidance on hyperparameter tuning. Finally, some of these works involve discretization of dynamical systems, but do not characterize the resultant error (ver Steeg & Galstyan, 2021; Neklyudov et al., 2021). In contrast, our work provides a comprehensive convergence theory, with error bounds for when approximate maps are used.

2 Background

2.1 Variational inference with flows

Consider a set \mathcal{X}\subseteq\mathbb{R}^{d} and a target probability distribution \pi on \mathcal{X} whose density with respect to the Lebesgue measure we denote \pi(x) for x\in\mathcal{X}. In the setting of Bayesian inference, \pi is the posterior distribution that we aim to approximate, and we are only able to evaluate a function p(x) such that p(x)=Z\cdot\pi(x) for some unknown normalization constant Z>0. Throughout, we assume all distributions have densities with respect to the Lebesgue measure on \mathcal{X}, and we use the same symbol to denote a distribution and its density; the meaning will be clear from context.

Variational inference involves approximating the target distribution \pi by minimizing the Kullback-Leibler (KL) divergence from members of a parametric family \{q_{\lambda}:\lambda\in\Lambda\}, \Lambda\subseteq\mathbb{R}^{p}, to \pi, i.e.,

\lambda^{\star} = \operatorname*{arg\,min}_{\lambda\in\Lambda} \mathrm{D_{KL}}(q_{\lambda}\,\|\,\pi) = \operatorname*{arg\,min}_{\lambda\in\Lambda} \int q_{\lambda}(x)\log\frac{q_{\lambda}(x)}{p(x)}\,\mathrm{d}x.   (2)

The two objective functions in Equation 2 differ only by the constant \log Z. In order to optimize \lambda using standard techniques, the variational family q_{\lambda}, \lambda\in\Lambda, must enable both i.i.d. sampling and density evaluation. A common approach to constructing such a family is to pass draws from a simple reference distribution q_{0} through a measurable function T_{\lambda}:\mathcal{X}\to\mathcal{X}; T_{\lambda} is often referred to as a flow when comprised of repeated composed functions (Tabak & Turner, 2013; Rezende & Mohamed, 2015a; Kobyzev et al., 2021). If T_{\lambda} is a diffeomorphism, i.e., differentiable with a differentiable inverse, then we can express the density of X=T_{\lambda}(Y), Y\sim q_{0} as

\forall x\in\mathcal{X},\quad q_{\lambda}(x)=\frac{q_{0}(T_{\lambda}^{-1}(x))}{J_{\lambda}(T_{\lambda}^{-1}(x))},\qquad J_{\lambda}(x)=\left|\det\nabla_{x}T_{\lambda}(x)\right|.   (3)

In this case the optimization in Equation 2 can be rewritten using a transformation of variables as

\lambda^{\star} = \operatorname*{arg\,min}_{\lambda\in\Lambda} \int q_{0}(x)\log\frac{q_{0}(x)}{J_{\lambda}(x)\,p(T_{\lambda}(x))}\,\mathrm{d}x.   (4)

One can solve the optimization problem in Equation 4 using unbiased stochastic estimates of the gradient with respect to \lambda based on draws from q_{0} (Salimans & Knowles, 2013; Kingma & Welling, 2014) (we assume throughout that differentiation and integration can be swapped wherever necessary),

\nabla_{\lambda}\mathrm{D_{KL}}(q_{\lambda}\,\|\,\pi) \approx \nabla_{\lambda}\log\frac{q_{0}(X)}{J_{\lambda}(X)\,p(T_{\lambda}(X))},\qquad X\sim q_{0}.   (5)
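As a concrete illustration, the following is a minimal sketch (not the paper's implementation) of the Monte Carlo estimate of the objective in Equation 4 for a hypothetical affine flow T_{\lambda}(x)=\mu+e^{\log\sigma}x with a standard normal reference q_{0}; the unnormalized target log p is an assumed Gaussian placeholder. Estimates of this form can then be differentiated with respect to \lambda (e.g., by automatic differentiation) to obtain the gradient estimate in Equation 5.

  import numpy as np

  def log_q0(x):                      # standard normal reference log-density
      return -0.5 * np.sum(x**2) - 0.5 * x.size * np.log(2 * np.pi)

  def log_p(x):                       # unnormalized target log-density (placeholder assumption)
      return -0.5 * np.sum((x - 3.0)**2)

  def T(lam, x):                      # hypothetical affine flow map T_lambda
      mu, log_sigma = lam
      return mu + np.exp(log_sigma) * x

  def log_jac(lam, x):                # log |det grad T_lambda(x)| for the affine map
      _, log_sigma = lam
      return x.size * log_sigma

  def kl_estimate(lam, n_samples=1000, d=2, rng=np.random.default_rng(0)):
      """Monte Carlo estimate of Equation 4 (KL divergence up to log Z)."""
      xs = rng.standard_normal((n_samples, d))
      vals = [log_q0(x) - log_jac(lam, x) - log_p(T(lam, x)) for x in xs]
      return np.mean(vals)

  print(kl_estimate(np.array([0.0, 0.0])))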

2.2 Measure-preserving and ergodic maps

Variational flows are often constructed from a flexible, general-purpose parametrized family \{T_{\lambda}:\lambda\in\Lambda\} that is not specialized to any particular target distribution (Papamakarios et al., 2021); it is the job of the KL divergence minimization in Equation 4 to adapt the parameter \lambda so that q_{\lambda} becomes a good approximation of the target \pi. However, there are certain functions—in particular, those that are both measure-preserving and ergodic for \pi—that naturally provide a means to approximate expectations of interest under \pi without the need for tuning. Intuitively, a measure-preserving map T does not change the distribution of draws from \pi: if X\sim\pi, then T(X)\sim\pi. And an ergodic map T, when applied repeatedly, will not get “stuck” in a subset of \mathcal{X} unless that subset has probability either 0 or 1 under \pi. The precise definitions are given in Definitions 2.1 and 2.2.

Definition 2.1 (Measure-preserving map (Eisner et al., 2015, pp. 73, 105)).

T:\mathcal{X}\to\mathcal{X} is measure-preserving for \pi if T\pi=\pi, where T\pi is the pushforward measure given by T\pi(A)=\pi(T^{-1}(A)) for each measurable set A\subseteq\mathcal{X}.

Definition 2.2 (Ergodic map (Eisner et al., 2015, pp. 73, 105)).

T:\mathcal{X}\to\mathcal{X} is ergodic for \pi if for all measurable sets A\subseteq\mathcal{X}, T(A)=A implies that \pi(A)\in\{0,1\}.

If a map T satisfies both Definitions 2.1 and 2.2, then long-run averages resulting from repeated applications of T will converge to expectations under \pi, as shown by Theorem 2.3. When \mathcal{X} is compact, this result shows that the discrete measure \frac{1}{N}\sum_{n=0}^{N-1}\delta_{T^{n}x} converges weakly to \pi (Dajani & Dirksin, 2008, Theorem 6.1.7).

Theorem 2.3 (Ergodic Theorem [Birkhoff, 1931; Eisner et al., 2015, p. 212]).

Suppose T:\mathcal{X}\to\mathcal{X} is measure-preserving and ergodic for \pi, and f\in L^{1}(\pi). Then

\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}x)=\int f\,\mathrm{d}\pi \qquad \text{for } \pi\text{-a.e. } x\in\mathcal{X}.   (6)
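A quick numerical illustration of Theorem 2.3 (not from the paper): the irrational rotation T(x)=(x+\alpha)\mod 1 with irrational \alpha is a standard example of a map that is measure-preserving and ergodic for the uniform distribution on [0,1), so orbit averages of a test function converge to its uniform expectation.

  import numpy as np

  alpha = np.sqrt(2)                        # irrational shift
  f = lambda x: np.sin(2 * np.pi * x)**2    # test function; E_Unif[f] = 0.5

  for N in [10, 100, 10_000, 1_000_000]:
      x, avg = 0.3, 0.0
      for n in range(N):
          avg += f(x) / N
          x = (x + alpha) % 1.0             # one application of the ergodic map
      print(N, avg)                         # approaches 0.5 as N grows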

Based on this result, one might reasonably consider building a measure-preserving variational flow, i.e., X=T(X_{0}) where X_{0}\sim q_{0}. However, it is straightforward to show that measure-preserving bijections T do not decrease the KL divergence (or any other f-divergence, e.g., total variation and Hellinger (Qiao & Minematsu, 2010, Theorem 1)):

\mathrm{D_{KL}}(Tq\,\|\,\pi)=\mathrm{D_{KL}}(Tq\,\|\,T\pi)=\mathrm{D_{KL}}(q\,\|\,\pi).   (7)

3 Mixed variational flows (MixFlows)

In this section, we develop a general family of mixed variational flows (MixFlows) as well as algorithms for tractable i.i.d. sampling, density evaluation, and ELBO estimation. MixFlows consist of a mixture of normalizing flows obtained via repeated application of a pushforward map. This section introduces general MixFlows; later in Sections 4 and 5 we will provide convergence guarantees and examples based on Hamiltonian dynamics.

3.1 Variational family

Define a reference distribution q_{0} on \mathcal{X} for which i.i.d. sampling and density evaluation are tractable, and a collection of measurable functions T_{\lambda}:\mathcal{X}\to\mathcal{X} parametrized by \lambda\in\Lambda. Then the MixFlow family generated by q_{0} and T_{\lambda} is

q_{\lambda,N}=\frac{1}{N}\sum_{n=0}^{N-1}T^{n}_{\lambda}q_{0} \qquad \text{for} \qquad \lambda\in\Lambda,\ N\in\mathbb{N},   (8)

where T^{n}_{\lambda}q_{0} denotes the pushforward of the distribution q_{0} under n repeated applications of T_{\lambda}.

3.2 Density evaluation and sampling

If T_{\lambda}:\mathcal{X}\to\mathcal{X} is a diffeomorphism with Jacobian J_{\lambda}:\mathcal{X}\to\mathbb{R}, we can express the density of q_{\lambda,N} by applying the transformation-of-variables formula to each component in the mixture:

q_{\lambda,N}(x) = \frac{1}{N}\sum_{n=0}^{N-1}\frac{q_{0}(T_{\lambda}^{-n}x)}{\prod_{j=1}^{n}J_{\lambda}(T_{\lambda}^{-j}x)}.   (9)

This density can be computed efficiently using N-1 evaluations each of T^{-1}_{\lambda} and J_{\lambda} (Algorithm 2). For sampling, we can obtain an independent draw X\sim q_{\lambda,N} by treating q_{\lambda,N} as a mixture of N distributions:

K \sim {\sf Unif}\{0,1,\dots,N-1\},\qquad X_{0}\sim q_{0},\qquad X=T^{K}_{\lambda}(X_{0}).   (10)

The procedure is given in Algorithm 1. On average, this computation requires \mathbb{E}[K]=\frac{N-1}{2} applications of T_{\lambda}, and at most it requires N-1 applications. However, one often takes samples from q_{\lambda,N} to estimate the expectation of some test function f:\mathcal{X}\to\mathbb{R}. In this case, one can use all intermediate states over a single pass of a trajectory rather than individual i.i.d. draws. In particular, we obtain an unbiased estimate of \int f(x)q_{\lambda,N}(\mathrm{d}x) via

X_{0}\sim q_{0},\qquad \frac{1}{N}\sum_{n=0}^{N-1}f(T_{\lambda}^{n}X_{0}).   (11)

We refer to this estimate as the trajectory-averaged estimate of f. The trajectory-averaged estimate of f is preferred over the naïve estimate based on a single draw, f(X), X\sim q_{\lambda,N}, as its cost is of the same order (N-1 applications of T_{\lambda}) and its variance is bounded above by that of the naïve estimate f(X), as shown in Proposition 3.1. See Section E.2, and Figure 6 in particular, for empirical verification of Proposition 3.1.
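The following is a minimal sketch of i.i.d. sampling (Equation 10) and the trajectory-averaged estimator (Equation 11); here `T` and `sample_q0` are placeholders for a generic flow map and a reference sampler, not the paper's released code.

  import numpy as np

  def sample_mixflow(T, sample_q0, N, rng):
      """One i.i.d. draw from the N-component mixture q_{lambda,N} (Equation 10)."""
      K = rng.integers(N)              # uniform component index in {0, ..., N-1}
      x = sample_q0(rng)
      for _ in range(K):
          x = T(x)
      return x

  def traj_average(T, sample_q0, N, f, rng):
      """Unbiased trajectory-averaged estimate of E_{q_{lambda,N}}[f] (Equation 11)."""
      x = sample_q0(rng)
      total = 0.0
      for _ in range(N):               # visits T^0 x0, ..., T^{N-1} x0
          total += f(x)
          x = T(x)
      return total / N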

Proposition 3.1.
\operatorname{Var}\left[\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}X_{0})\right]\leq\operatorname{Var}\left[f(X)\right].   (12)

3.3 ELBO estimation

We can minimize the KL divergence from q_{\lambda,N} to \pi by maximizing the ELBO (Blei et al., 2017), given by

\mathrm{ELBO}(\lambda,N) = \int q_{\lambda,N}(x)\log\frac{p(x)}{q_{\lambda,N}(x)}\,\mathrm{d}x   (13)
 = \int q_{0}(x)\left(\frac{1}{N}\sum_{n=0}^{N-1}\log\frac{p(T_{\lambda}^{n}x)}{q_{\lambda,N}(T^{n}_{\lambda}x)}\right)\mathrm{d}x.   (14)

The trajectory-averaged ELBO estimate is thus

X_{0}\sim q_{0},\qquad \widehat{\mathrm{ELBO}}(\lambda,N)=\frac{1}{N}\sum_{n=0}^{N-1}\log\frac{p(T_{\lambda}^{n}X_{0})}{q_{\lambda,N}(T_{\lambda}^{n}X_{0})}.   (15)

The naïve method to compute this estimate—sampling X_{0} and then computing the log density ratio for each term—requires O(N^{2}) computation because each evaluation of q_{\lambda,N}(x) is O(N). Algorithm 3 provides an efficient way of computing \widehat{\mathrm{ELBO}}(\lambda,N) in O(N) operations, which is on par with taking a single draw from q_{\lambda,N} or evaluating q_{\lambda,N}(x) once. The key insight in Algorithm 3 is that we can evaluate the collection of values \{q_{\lambda,N}(X_{0}),q_{\lambda,N}(T_{\lambda}X_{0}),\dots,q_{\lambda,N}(T_{\lambda}^{N-1}X_{0})\} incrementally, starting from q_{\lambda,N}(X_{0}) and computing each subsequent q_{\lambda,N}(T_{\lambda}^{n}X_{0}) in constant time. Specifically, one computes q_{\lambda,N}(x) once at a cost of O(N), and then iteratively updates q_{\lambda,N}(T_{\lambda}^{k}x) and the Jacobian for k=1,2,\dots,N-1. Each update requires only constant cost if one precomputes and stores T^{-N+1}_{\lambda}x,\dots,T^{N-1}_{\lambda}x and J_{\lambda}(T_{\lambda}^{-N+1}x),\dots,J_{\lambda}(T_{\lambda}^{N-1}x). This implies that Algorithm 3 requires O(N) memory; Algorithm 5 in Appendix B provides a slightly more complicated O(N) time, O(1) memory implementation of Equation 15.

Algorithm 1 Sample(q_{\lambda,N}): Take a draw from q_{\lambda,N}
  Input: reference distribution q_{0}, flow map T_{\lambda}, number of steps N
  K \leftarrow Sample({\sf Unif}\{0,1,\dots,N-1\})
  x_{0} \leftarrow Sample(q_{0})
  Return: T_{\lambda}^{K}(x_{0})
Algorithm 2 \log q_{\lambda,N}(x): Evaluate the log-density of q_{\lambda,N}
  Input: location x, reference distribution q_{0}, flow map T_{\lambda}, Jacobian J_{\lambda}, number of steps N
  L \leftarrow 0
  w_{0} \leftarrow \log q_{0}(x)
  for n=1,\dots,N-1 do
     x \leftarrow T^{-1}_{\lambda}(x)
     L \leftarrow L + \log J_{\lambda}(x)
     w_{n} \leftarrow \log q_{0}(x) - L
  end for
  Return: LogSumExp(w_{0},\dots,w_{N-1}) - \log N
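A sketch of Algorithm 2 in Python follows; `T_inv`, `log_jac`, and `log_q0` are assumed user-supplied callables for the inverse map, \log|\det\nabla T_{\lambda}|, and the reference log-density (this mirrors the pseudocode above, not the authors' released code).

  import numpy as np
  from scipy.special import logsumexp

  def log_density_mixflow(x, log_q0, T_inv, log_jac, N):
      """Evaluate log q_{lambda,N}(x) via Equation 9 / Algorithm 2."""
      ws = [log_q0(x)]             # n = 0 term
      L = 0.0
      for _ in range(1, N):
          x = T_inv(x)             # pull x back one flow step
          L += log_jac(x)          # accumulate log-Jacobian along the inverse path
          ws.append(log_q0(x) - L)
      return logsumexp(ws) - np.log(N)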
Algorithm 3 EstELBO(\lambda,N): Obtain an unbiased estimate of the ELBO for q_{\lambda,N}
  Input: reference q_{0}, unnormalized target p, flow map T_{\lambda}, Jacobian J_{\lambda}, number of flow steps N
  x_{0} \leftarrow Sample(q_{0}),\quad J_{0} \leftarrow J_{\lambda}(x_{0})
  for n=1,\dots,N-1 do
     x_{n} \leftarrow T_{\lambda}(x_{n-1}),\quad J_{n} \leftarrow J_{\lambda}(x_{n})
     x_{-n} \leftarrow T_{\lambda}^{-1}(x_{-n+1}),\quad J_{-n} \leftarrow J_{\lambda}(x_{-n})
  end for
  z_{0} \leftarrow q_{\lambda,N}(x_{0}) (via Algorithm 2)
  J \leftarrow \prod_{j=1}^{N-1} J_{-j}
  for n=1,\dots,N-1 do
     \bar{q}_{n} \leftarrow \frac{1}{N} q_{0}(x_{-N+n}) / J
     z_{n} \leftarrow (z_{n-1} - \bar{q}_{n}) / J_{n-1} + \frac{1}{N} q_{0}(x_{n})
     if n<N-1 then
        J \leftarrow J \cdot J_{n-1} / J_{-N+n}
     end if
  end for
  \widehat{\mathrm{ELBO}}(\lambda,N) \leftarrow \frac{1}{N}\sum_{n=0}^{N-1}(\log p(x_{n}) - \log z_{n})
  Return: \widehat{\mathrm{ELBO}}(\lambda,N)
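For exposition, a naïve O(N^2) reference implementation of the trajectory-averaged ELBO estimate in Equation 15 is sketched below; Algorithm 3 computes the same quantity in O(N). It reuses `log_density_mixflow` from the log-density sketch above, and `T`, `log_p`, and `sample_q0` are again assumed placeholders.

  def elbo_naive(sample_q0, T, T_inv, log_q0, log_jac, log_p, N, rng):
      """Naive O(N^2) estimate of Equation 15 (for exposition only)."""
      x = sample_q0(rng)
      total = 0.0
      for _ in range(N):
          # each log-density call costs O(N), giving O(N^2) overall
          total += log_p(x) - log_density_mixflow(x, log_q0, T_inv, log_jac, N)
          x = T(x)
      return total / N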

4 Guarantees for MixFlows

In this section, we show that when T_{\lambda} is measure-preserving and ergodic for \pi—or approximately so—MixFlows come with convergence guarantees and bounds on their approximation error as a function of flow length. Proofs for all results may be found in Appendix A.

4.1 Ergodic MixFlows

Ergodic MixFlow families are those for which T_{\lambda} is measure-preserving and ergodic for \pi for all \lambda\in\Lambda. In this setting, Theorems 4.1 and 4.2 show that MixFlows converge to the target weakly and in total variation as N\to\infty regardless of the choice of \lambda (i.e., regardless of parameter tuning effort). Thus, ergodic MixFlow families provide the same compelling convergence result as MCMC, but with the added benefit of unbiased ELBO estimates. We first demonstrate setwise and weak convergence. Recall that a sequence of distributions q_{n} converges weakly to \pi if for all bounded, continuous f:\mathcal{X}\to\mathbb{R}, \lim_{n\to\infty}\int f(x)q_{n}(\mathrm{d}x)=\int f(x)\pi(\mathrm{d}x), and converges setwise to \pi if for all measurable A\subseteq\mathcal{X}, \lim_{n\to\infty}q_{n}(A)=\pi(A).

Theorem 4.1.

Suppose q_{0}\ll\pi and T_{\lambda} is measure-preserving and ergodic for \pi. Then q_{\lambda,N} converges both setwise and weakly to \pi as N\to\infty.

Using Theorem 4.1 as a stepping stone, we can obtain convergence in total variation. Recall that a sequence of distributions q_{n} converges in total variation to \pi if \mathrm{D_{TV}}(q_{n},\pi)=\sup_{A}\left|q_{n}(A)-\pi(A)\right|\to 0 as n\to\infty. Note that similar nonasymptotic results exist for the ergodic average law of Markov chains (Roberts & Rosenthal, 1997), but as previously mentioned, these results do not apply to deterministic Markov kernels.

Theorem 4.2.

Suppose q_{0}\ll\pi and T_{\lambda} is measure-preserving and ergodic for \pi. Then q_{\lambda,N} converges in total variation to \pi as N\to\infty.

4.2 Approximated MixFlows

In practice, it is rare to be able to evaluate a measure-preserving map exactly; but approximations are commonly available. For example, in Section 5 we will create a measure-preserving map using Hamiltonian dynamics, and then approximate that map with a discretized leapfrog integrator. We therefore need to extend the result in Section 4.1 to apply to approximated maps.

Suppose we have two maps, T and \hat{T}, with corresponding MixFlow families q_{n} and \hat{q}_{n} (suppressing \lambda for brevity). Theorem 4.3 shows that the error of the MixFlow family \hat{q}_{N} reflects an accumulation of the difference between the pushforwards of each q_{n} under \hat{T} and T.

Theorem 4.3.

Suppose \hat{T} is a bijection. Then

\mathrm{D_{TV}}(\hat{q}_{N},\pi) \leq \mathrm{D_{TV}}(q_{N},\pi) + \sum_{n=0}^{N-1}\frac{n}{N}\,\mathrm{D_{TV}}(Tq_{n},\hat{T}q_{n}).   (16)

Suppose T is measure-preserving and ergodic for \pi and q_{0}\ll\pi. Then Theorem 4.2 implies that \mathrm{D_{TV}}(q_{N},\pi)\to 0, and for the second term, we (very loosely) expect that \mathrm{D_{TV}}(Tq_{n},\hat{T}q_{n})\approx\mathrm{D_{TV}}(\pi,\hat{T}\pi) for large n. Therefore, the second term should behave like O(N\cdot\mathrm{D_{TV}}(\pi,\hat{T}\pi)), i.e., increase linearly in N in proportion to how “non-measure-preserving” \hat{T} is. This presents a trade-off between flow length and approximation quality: better approximations enable longer flows and lower minimal error bounds. Our empirical findings in Section 6 generally confirm this behavior. Corollary 4.4 further provides a more explicit characterization of this trade-off when \hat{T} and its log-Jacobian \log\hat{J} are uniformly close to their exact counterparts T and \log J, and \log q_{n} and \log J are uniformly (over n\in\mathbb{N}) Lipschitz continuous. The latter assumption is reasonable in practical settings where we observe that the log-density of q_{n} closely approximates that of \pi (see the experimental results in Section 6).

Corollary 4.4.

Suppose that for all x\in\mathcal{X},

\max\left\{\|\hat{T}^{-1}x-T^{-1}x\|,\;|\log\hat{J}(x)-\log J(x)|\right\}\leq\epsilon,   (17)

for all n\in[N-1], \log q_{n} and \log J are \ell-Lipschitz continuous, and that T,\hat{T} are diffeomorphisms. Then

\mathrm{D_{TV}}(\hat{q}_{N},\pi)\leq\mathrm{D_{TV}}(q_{N},\pi)+N\epsilon(\ell+1)e^{(2\ell+1)\epsilon}.   (18)

This result states that the overall map-approximation error is O(N\epsilon) when \epsilon is small. Theorem 1 of Butzer & Westphal (1971) also suggests that \mathrm{D_{TV}}(q_{N},\pi)=O(1/N) in many cases, which hints that the bound should decrease roughly until N=O(1/\sqrt{\epsilon}), with error bound O(\sqrt{\epsilon}). We leave a more careful investigation of this trade-off for future work.

4.3 Discussion

Our main convergence results (Theorems 4.1 and 4.2) require that \pi dominates the reference q_{0}, and that T is both measure-preserving and ergodic. It is often straightforward to design a dominated reference q_{0}. Designing a measure-preserving map T with an implementable approximation \hat{T}\approx T is also often feasible. However, conclusively verifying the ergodicity of T is typically very challenging. As a consequence, most past work involving measure-preserving transformations simply asserts the ergodic hypothesis without proof; see the discussions in ver Steeg & Galstyan (2021, p. 4) and Tupper (2005, pp. 2–3).

But because MixFlows provide the ability to estimate the Kullback-Leibler divergence up to a constant, it is not critical to prove convergence a priori (as it is in the case of MCMC, for example). Instead, we suggest using the results of Theorems 4.1, 4.2, 4.3 and 4.4 as a guiding recipe for constructing principled variational families. First, design a family of measure-preserving maps T_{\lambda}, \lambda\in\Lambda. Next, approximate T_{\lambda} with some tractable T_{\lambda,\epsilon} that includes a tunable fidelity parameter \epsilon\geq 0, such that as \epsilon\to 0, T_{\lambda,\epsilon} becomes closer to T_{\lambda}. Finally, build a MixFlow from T_{\lambda,\epsilon}, and tune both \lambda and \epsilon by maximizing the ELBO. We follow this recipe in Section 5 and verify it empirically in Section 6.

5 Uncorrected Hamiltonian MixFlows

In this section, we provide an example of how to design a MixFlow by starting from an exactly measure-preserving map and then creating a tunable approximation to it. The construction is inspired by Hamiltonian Monte Carlo (HMC) (Neal, 2011, 1996), in which each iteration involves simulating Hamiltonian dynamics followed by a stochastic momentum refreshment; our method replaces the stochastic refreshment with a deterministic transformation. In particular, consider the augmented target density on \mathcal{X}\times\mathbb{R}^{d}\times[0,1],

\bar{\pi}(x,\rho,u)=\pi(x)\,m(\rho)\,\mathds{1}[0\leq u\leq 1],\qquad m(\rho)=\prod_{i=1}^{d}r(\rho_{i}),   (19)

with auxiliary variables \rho\in\mathbb{R}^{d}, u\in[0,1], and some almost-everywhere differentiable univariate probability density r:\mathbb{R}\to\mathbb{R}_{+}. The x-marginal distribution of \bar{\pi} is the original target distribution \pi.

5.1 Measure-preserving map via Hamiltonian dynamics

We construct T_{\lambda} by composing the following three steps, which are all measure-preserving for \bar{\pi}:

(1) Hamiltonian dynamics

We first apply

(x^{\prime},\rho^{\prime})\leftarrow H_{L}(x,\rho),   (20)

where H_{L}:\mathcal{X}\times\mathbb{R}^{d}\to\mathcal{X}\times\mathbb{R}^{d} is the map given by simulating Hamiltonian dynamics with position x and momentum \rho for a time interval of length L\in\mathbb{R}_{+},

\frac{\mathrm{d}\rho}{\mathrm{d}t}=\nabla\log\pi(x),\qquad \frac{\mathrm{d}x}{\mathrm{d}t}=-\nabla\log m(\rho).   (21)

One can show that this map preserves the density (and hence is measure-preserving) and has unit Jacobian (Neal, 2011).

(2) Pseudotime shift

Second, we apply a constant shift to the pseudotime variable u,

u^{\prime}\leftarrow u+\xi \mod 1,   (22)

where \xi\in\mathbb{R} is a fixed irrational number (say, \xi=\pi/16). As this is a constant shift, it has unit Jacobian and preserves the density (and hence is measure-preserving). The u component acts as a notion of “time” for the flow, and ensures that the refreshment of \rho in step (3) below takes a different form even if x visits the same location again.

(3) Momentum refreshment

Finally, we refresh each of the momentum variables via

\forall i=1,\dots,d,\quad \rho^{\prime\prime}_{i}\leftarrow R^{-1}\!\left(R(\rho_{i}^{\prime})+z(x_{i}^{\prime},u^{\prime}) \mod 1\right),   (23)

where R is the cumulative distribution function (CDF) of the density r, and z:\mathcal{X}\times[0,1]\to\mathbb{R} is any differentiable function; this generalizes the map from Neal (2012); Murray & Elliott (2012) to enable the shift to depend on the state x and pseudotime u. Step (3) is an attempt to replicate the independent resampling of \rho\sim m from HMC using only deterministic maps. This map is measure-preserving as it maps the momentum to a {\sf Unif}[0,1] random variable via the CDF, shifts by an amount that depends only on x,u, and then maps back using the inverse CDF. The Jacobian is the momentum density ratio m(\rho^{\prime})/m(\rho^{\prime\prime}).
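A minimal sketch of the refreshment in Equation 23 follows, assuming a standard Laplace momentum density r (the choice used in the experiments) and a hypothetical shift function z(x,u)=\sin(2\pi(x+u)); neither choice is prescribed by the paper beyond the stated requirements on r and z.

  import numpy as np

  def laplace_cdf(r):                   # CDF R of the standard Laplace density
      return 0.5 + 0.5 * np.sign(r) * (1.0 - np.exp(-np.abs(r)))

  def laplace_cdf_inv(p):               # inverse CDF R^{-1}
      return np.where(p < 0.5, np.log(2.0 * p), -np.log(2.0 * (1.0 - p)))

  def z(x, u):                          # hypothetical differentiable shift function
      return np.sin(2 * np.pi * (x + u))

  def refresh_momentum(rho, x, u):
      """Deterministic, coordinatewise momentum refreshment (Equation 23)."""
      return laplace_cdf_inv((laplace_cdf(rho) + z(x, u)) % 1.0)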

5.2 Approximation via uncorrected leapfrog integration

In practice, we cannot simulate the dynamics in step (1) exactly. Instead, we approximate the dynamics in Equation 21 by running L steps of the leapfrog method, where each leapfrog map \hat{H}_{\epsilon}:\mathbb{R}^{2d}\to\mathbb{R}^{2d} interleaves three discrete transformations with step size \epsilon>0,

\begin{aligned}\hat{\rho}_{k+1}&=\rho_{k}+\tfrac{\epsilon}{2}\nabla\log\pi(x_{k})\\ x_{k+1}&=x_{k}-\epsilon\nabla\log m(\hat{\rho}_{k+1})\\ \rho_{k+1}&=\hat{\rho}_{k+1}+\tfrac{\epsilon}{2}\nabla\log\pi(x_{k+1}).\end{aligned}   (24)
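For illustration, here is a sketch of the leapfrog integrator in Equation 24; `grad_log_pi` is a placeholder for the target score, and the momentum is assumed to be standard Laplace so that \nabla\log m(\rho)=-\mathrm{sign}(\rho).

  import numpy as np

  def grad_log_m(rho):                          # score of the standard Laplace momentum (assumption)
      return -np.sign(rho)

  def leapfrog(x, rho, eps, L, grad_log_pi):
      """L uncorrected leapfrog steps approximating the Hamiltonian flow."""
      for _ in range(L):
          rho = rho + 0.5 * eps * grad_log_pi(x)   # half-step momentum update
          x = x - eps * grad_log_m(rho)            # full-step position update
          rho = rho + 0.5 * eps * grad_log_pi(x)   # half-step momentum update
      return x, rho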

Denote by T_{\lambda,\epsilon} the composition of the three steps with the Hamiltonian dynamics replaced by the leapfrog integrator; Algorithm 6 in Appendix C provides the pseudocode. The final variational tuning parameters for the MixFlow are the step size \epsilon—which controls how close T_{\lambda,\epsilon} is to being measure-preserving for \bar{\pi}—the Hamiltonian simulation length L\in\mathbb{N}, and the flow length N. In our experiments we tune the number of leapfrog steps L and the step size \epsilon by maximizing the ELBO. We also tune the number of refreshments N to achieve a desirable computation–quality trade-off by visually inspecting the convergence of the ELBO.

5.3 Numerical stability

Density evaluation (Algorithm 2) and ELBO estimation (Algorithm 3) both involve repeated applications of T_{\lambda} and T_{\lambda}^{-1}. This poses no issue in theory, but on a computer—with floating point computations—one should implement the map T_{\lambda} and its inverse T^{-1}_{\lambda} in a numerically precise way such that (T^{K}_{\lambda})\circ(T^{-K}_{\lambda}) is the identity map for large K\in\mathbb{N}. Figure 8 in Section E.2 displays the severity of the numerical error when T_{\lambda} and T^{-1}_{\lambda} are not carefully implemented. In practice, we check the stability limits of our flow by taking draws from q_{0} and evaluating T^{K}_{\lambda} followed by T^{-K}_{\lambda} (and vice versa) for increasing K until we can no longer reliably invert the flow. See Figure 7 in the appendix for an example usage of this diagnostic. Note that for sample generation specifically (Algorithm 1), numerical stability is not a concern as it only requires forward evaluation of the map T_{\lambda}.
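A sketch of this invertibility diagnostic is given below; `T_fwd`/`T_inv` are placeholders for the forward and inverse flow maps, `sample_q0` draws a (numpy-array) state from the reference, and the reported drift of (T^{-K}\circ T^{K})(x_{0}) from x_{0} indicates the flow length at which floating-point error makes the flow unreliable to invert.

  import numpy as np

  def stability_check(sample_q0, T_fwd, T_inv, Ks, rng):
      x0 = sample_q0(rng)
      for K in Ks:
          x = x0.copy()
          for _ in range(K):
              x = T_fwd(x)                        # push forward K steps
          for _ in range(K):
              x = T_inv(x)                        # pull back K steps
          print(K, np.linalg.norm(x - x0))        # should be near 0 for stable K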

6 Experiments

Figure 1: Marginal samples (first row) and pairs of exact and approximate joint log density (second row) for Gaussian (1(a)), mixture (1(b)), and Cauchy (1(c)) targets. Marginal samples (third row) and pairs of sliced exact and approximate joint log density (fourth row) for banana (1(d)), funnel (1(e)), cross (1(f)), and warped Gaussian (1(g)).

In this section, we demonstrate the performance of our method (MixFlow) on 7 synthetic targets and 7 real data targets. See Appendix E for the details of each target. Both our synthetic and real data examples are designed to cover a range of challenging features such as heavy tails, high dimensions, multimodality, and weak identifiability. We compare posterior approximation quality to several black-box normalizing flow methods (NF): PlanarFlow, RadialFlow, and RealNVP with various architectures (Papamakarios et al., 2021). To make the methods comparable via the ELBO, we train all NFs on the same joint space as MixFlow. We also compare the marginal sample quality of MixFlow against 5,000 samples from NUTS and the NFs. Finally, we compare sampling time with all competitors, and effective sample size (ESS) per second with HMC. For all experiments, we use the standard Laplace distribution as the momentum distribution due to its numerical stability (see Figure 7 in Appendix E). Additional comparisons to variational inference based on uncorrected Hamiltonian annealing (UHA) (Geffner & Domke, 2021) and nonequilibrium orbit sampling (NEO) (Thin et al., 2021a, Algorithm 2) may be found in Appendix E. All experiments were conducted on a machine with an AMD Ryzen 9 3900X and 32GB of RAM. Code is available at https://github.com/zuhengxu/Ergodic-variational-flow-code.

6.1 Qualitative assessment

We begin with a qualitative examination of the i.i.d. samples and the approximated targets produced by MixFlow initialized at q_{0}=\mathcal{N}(0,1) for three one-dimensional synthetic distributions: a Gaussian, a mixture of Gaussians, and the standard Cauchy. We excluded the pseudotime variable u here in order to visualize the full joint density of (x,\rho) in 2 dimensions. More details can be found in Section E.1. Figures 1(a), 1(b) and 1(c) show histograms of 10,000 i.i.d. x-marginal samples generated by MixFlow for each of the three targets, which nearly perfectly match the true target marginals. Figures 1(a), 1(b) and 1(c) also show that \log q_{\lambda,N} is generally a good approximation of the log target density.

We then present similar visualizations for four more challenging synthetic target distributions: the banana (Haario et al., 2001), Neal’s funnel (Neal, 2003), a cross-shaped Gaussian mixture, and a warped Gaussian. All four examples have a 2-dimensional state x\in\mathbb{R}^{2}, and hence (x,\rho,u)\in\mathbb{R}^{5}. In each example we set the initial distribution q_{0} to be the mean-field Gaussian approximation. More details can be found in Section E.2. Figures 1(d), 1(e), 1(f) and 1(g) show scatter plots of 1,000 i.i.d. x-marginal samples drawn from MixFlow, as well as the approximated MixFlow log density and the exact log density sliced as a function of x\in\mathbb{R}^{2} for a single value of (\rho,u) chosen randomly via (\rho,u)\sim{\sf Lap}(0,I)\times{\sf Unif}[0,1] (which is required for visualization, as (x,\rho,u)\in\mathbb{R}^{5}). We see that, qualitatively, both the samples and approximated densities from MixFlow closely match the target. Finally, Figure 9 in Section E.2 provides a more comprehensive set of sample histograms (showing the x-, \rho-, and u-marginals). These visualizations support our earlier theoretical analysis.

6.2 Posterior approximation quality

Figure 2: ELBO and KSD comparison for real data examples: (a) linear regression, (b) linear regression (heavy), (c) logistic regression, (d) Poisson regression, (e) student t regression, (f) sparse regression, and (g) sparse regression (high dim). Figure 2(a) displays the effect of step size for the linear regression problem: \epsilon_{1}=0.0001, \epsilon_{2}=0.001, and tuned \epsilon=0.0005; see Figure 17 for step sizes for all other experiments. Lines indicate the median, and error regions indicate the 25th to 75th percentile from 5 runs. Figure 2(b) does not include ELBOs of PlanarFlow as its values are significantly worse than those of all other methods and are hard to visualize (see its ELBOs in Figure 17(b)).

Next, we provide a quantitative comparison of MixFlow, NFs, and NUTS on 7 real data experiments outlined in Section E.5. We tune each NF method under various settings (Tables 1, 2 and 3), and present the best one for each example. ELBOs of MixFlow are estimated with Algorithm 3, averaging over 1,000 independent trajectories. ELBOs of NFs are based on 2,000 samples. To obtain an assessment of the target marginal distribution itself (rather than the augmented target), we also compare methods using the kernel Stein discrepancy (KSD) with an inverse multiquadric (IMQ) kernel (Gorham & Mackey, 2017). NUTS and the NFs use 5,000 samples for KSD estimation, while MixFlow uses 2,000 i.i.d. draws (Algorithm 1). For the KSD comparisons, all variational methods are tuned by maximizing the ELBO (Figures 2 and 17).

Augmented target distribution

Figure 2 displays the ELBO comparison. First, Figure 2(a) shows how different step sizes \epsilon and numbers of refreshments N affect approximation quality. Overly large step sizes typically result in errors due to the use of discretized Hamiltonian dynamics, while overly small step sizes result in a flow that makes slow progress towards the target. MixFlow with a tuned step size generally shows a joint ELBO value comparable to the best NF method, yielding a competitive target approximation. Similar comparisons and assessments of the effect of step size for the synthetic examples are presented in Figures 5, 10 and 16. Note that in three examples (Figures 2(a), 2(d) and 2(g)), the tuned RealNVP ELBO exceeds that of MixFlow by a small amount; but this required an expensive architecture search and parameter tuning, and as we describe next, MixFlow is actually more reliable in terms of target marginal sample quality and density estimation.

Original target distribution

The second row of Figure 2 displays a comparison of KSD for the target distribution itself (instead of the augmented target). In particular, MixFlow produces comparable sample quality to that from NUTS—an exact MCMC method—and clearly outperforms all of the NF competitors. The scatter plots of samples in Figure 20 confirm the improvement in sample quality of MixFlow over the variational competitors. Further, Figure 3 shows the (sliced) densities on two difficult real data examples: Bayesian student-t regression (with a heavy-tailed posterior), and a high-dimensional sparse regression (parameter dimension 84). This result demonstrates that the densities provided by MixFlow more closely match those of the original target distribution than those of the best NF. Notice that MixFlow accurately captures the skew and heavy tails of the exact target, while the NF density fails to do so and contains many spurious modes.

6.3 Ease of tuning

In order to tune MixFlow, we simply run a 1-dimensional parameter sweep over the step size \epsilon, and use a visual inspection of the ELBO to set an appropriate number of flow steps N. Tuning an NF requires optimizing its architecture, number of layers, and its (typically many) parameters. Not only is this time-consuming—in our experiments, tuning took from 10 minutes to roughly 1 hour (Figure 4)—but the optimization can also behave in unintuitive ways. For example, performance can be heavily dependent on the number of flow layers, and adding more layers does not necessarily improve quality. Figures 16, 1, 2 and 3 show that using more layers does not necessarily help, and slows tuning considerably. In the case of RealNVP specifically, tuning can be unstable, especially for more complex models: the optimizer often returns NaN values for flow parameters during training (see Table 1). This instability has been noted in earlier work (Dinh et al., 2017, Sec. 3.7).
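For concreteness, a minimal sketch of such a 1-dimensional step-size sweep is shown below; `est_elbo(eps, N)` is a hypothetical wrapper around an ELBO estimator such as Algorithm 3, and the grid of candidate step sizes is illustrative only.

  import numpy as np

  def tune_step_size(est_elbo, N, eps_grid=(1e-4, 5e-4, 1e-3, 5e-3, 1e-2)):
      """Pick the step size with the largest estimated ELBO over a small grid."""
      elbos = [est_elbo(eps, N) for eps in eps_grid]
      best = int(np.argmax(elbos))
      return eps_grid[best], elbos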

Figure 3: Sliced log conditional densities for student-t regression (3(a)) and high-dimensional sparse regression (3(b)). We visualize the log conditional density of the first coordinate by fixing the other coordinates to 0. The NF methods are chosen to be the best performing ones from Figures 2(e) and 2(g). Since we only know the log posterior density up to an unknown constant, we shift all lines to have maximal value 0 for visualization.
Figure 4: Timing results (100 trials) for high-dimensional sparse regression (4(a)) and student t regression (4(b)), showing sampling time (first row) and ESS per second (second row).

6.4 Time efficiency

Finally, Figure 4 presents timing results for two of the real data experiments (additional comparisons in Figures 14 and 19). In this figure, we use MixFlow iid to refer to i.i.d. sampling from MixFlow, and MixFlow single to refer to collecting all intermediate states on a single trajectory. This result shows that the per sample time of MixFlow single is similar to NUTS and HMC as one would expect. MixFlow iid is the slowest because each sample is generated by passing through the entire flow. The NF generates the fastest draws, but recall that this comes at the cost of significant initial tuning time; in the time it takes NF to generate its first sample, MixFlow single has generated millions of samples in Figure 4. See Section E.4 for a detailed discussion of this trade-off.

Figures 4 and 19 further show computational efficiency in terms of ESS per second on the real data examples, which reflects the autocorrelation between drawn samples. The results show that MixFlow produces ESS per second comparable to HMC. MixFlow single behaves similarly to HMC, as expected, since the pseudo-momentum refreshment we propose (steps (2)–(3) of Section 5) resembles the momentum resampling step of HMC. The ESS efficiency of MixFlow iid depends on the trade-off between its slower sampling time and the i.i.d. nature of its draws. In these real data examples, MixFlow iid typically produces a high ESS per second; in the synthetic examples (Figure 14(b)), MixFlow iid is similar to the others.

7 Conclusion

This work presented MixFlows, a new variational family constructed from mixtures of pushforward maps. Experiments demonstrate a comparable sample quality to NUTS and more reliable posterior approximations than standard normalizing flows. A main limitation of our methodology is numerical stability; reversing the flow for long trajectories can be unstable in practice. Future work includes developing more stable momentum refreshment schemes and extensions via involutive MCMC (Neklyudov et al., 2020; Spanbauer et al., 2020; Neklyudov & Welling, 2022).

Acknowledgements

The authors gratefully acknowledge the support of a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant and a UBC four year doctoral fellowship.

References

  • Aldous & Thorisson (1993) Aldous, D. and Thorisson, H. Shift-coupling. Stochastic Processes and their Applications, 44:1–14, 1993.
  • Anastasiou et al. (2021) Anastasiou, A., Barp, A., Briol, F.-X., Ebner, B., Gaunt, R., Ghaderinezhad, F., Gorham, J., Gretton, A., Ley, C., Liu, Q., Mackey, L., Oates, C., Reinert, G., and Swan, Y. Stein’s method meets statistics: a review of some recent developments. arXiv:2105.03481, 2021.
  • Birkhoff (1931) Birkhoff, G. Proof of the ergodic theorem. Proceedings of the National Academy of Sciences, 17(12):656–660, 1931.
  • Blei et al. (2017) Blei, D., Kucukelbir, A., and McAuliffe, J. Variational inference: a review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
  • Brooks & Gelman (1998) Brooks, S. and Gelman, A. General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7(4):434–455, 1998.
  • Butzer & Westphal (1971) Butzer, P. and Westphal, U. The mean ergodic theorem and saturation. Indiana University Mathematics Journal, 20(12):1163–1174, 1971.
  • Campbell & Li (2019) Campbell, T. and Li, X. Universal boosting variational inference. In Advances in Neural Information Processing Systems, 2019.
  • Carpenter et al. (2017) Carpenter, B., Gelman, A., Hoffman, M., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P., and Riddell, A. Stan: A probabilistic programming language. Journal of Statistical Software, 76(1), 2017.
  • Caterini et al. (2018) Caterini, A., Doucet, A., and Sejdinovic, D. Hamiltonian variational auto-encoder. In Advances in Neural Information Processing Systems, 2018.
  • Chen et al. (2022) Chen, N., Xu, Z., and Campbell, T. Bayesian inference via sparse Hamiltonian flows. In Advances in Neural Information Processing Systems, 2022.
  • Chérief-Abdellatif & Alquier (2018) Chérief-Abdellatif, B.-E. and Alquier, P. Consistency of variational Bayes inference for estimation and model selection in mixtures. Electronic Journal of Statistics, 12(2):2995–3035, 2018.
  • Chwialkowski et al. (2016) Chwialkowski, K., Strathmann, H., and Gretton, A. A kernel test of goodness of fit. In International Conference on Machine Learning, 2016.
  • Constantinopoulos et al. (2006) Constantinopoulos, C., Titsias, M., and Likas, A. Bayesian feature and model selection for Gaussian mixture models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(6):1013–1018, 2006.
  • Corduneau & Bishop (2001) Corduneau, A. and Bishop, C. Variational Bayesian model selection for mixture distributions. In Artificial Intelligence and Statistics, 2001.
  • Cowles & Carlin (1996) Cowles, M. K. and Carlin, B. Markov chain Monte Carlo convergence diagnostics: a comparative review. Journal of the American Statistical Association, 91(434):883–904, 1996.
  • Dajani & Dirksin (2008) Dajani, K. and Dirksin, S. A simple introduction to ergodic theory, 2008. URL: https://webspace.science.uu.nl/ kraai101/lecturenotes2009.pdf.
  • Dinh et al. (2017) Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. In International Conference on Learning Representations, 2017.
  • Eisner et al. (2015) Eisner, T., Farkas, B., Haase, M., and Nagel, R. Operator Theoretic Aspects of Ergodic Theory. Graduate Texts in Mathematics. Springer, 2015.
  • Fjelde et al. (2020) Fjelde, T. E., Xu, K., Tarek, M., Yalburgi, S., and Ge, H. Bijectors.jl: Flexible transformations for probability distributions. In Symposium on Advances in Approximate Bayesian Inference, pp.  1–17, 2020.
  • Geffner & Domke (2021) Geffner, T. and Domke, J. MCMC variational inference via uncorrected Hamiltonian annealing. In Advances in Neural Information Processing Systems, 2021.
  • Gelman & Rubin (1992) Gelman, A. and Rubin, D. Inference from iterative simulation using multiple sequences. Statistical Science, 7(4):457–472, 1992.
  • Gelman et al. (2013) Gelman, A., Carlin, J., Stern, H., Dunson, D., Vehtari, A., and Rubin, D. Bayesian data analysis. CRC Press, 3rd edition, 2013.
  • Gershman et al. (2012) Gershman, S., Hoffman, M., and Blei, D. Nonparametric variational inference. In International Conference on Machine Learning, 2012.
  • Geweke (1992) Geweke, J. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. In Bernardo, J. M., Berger, J. O., and Dawid, A. P. (eds.), Bayesian Statistics, volume 4, pp.  169–193, 1992.
  • Gong et al. (2021) Gong, W., Li, Y., and Hernández-Lobato, J. M. Sliced kernelized Stein discrepancy. In International Conference on Learning Representations, 2021.
  • Gorham & Mackey (2015) Gorham, J. and Mackey, L. Measuring sample quality with Stein’s method. In Advances in Neural Information Processing Systems, 2015.
  • Gorham & Mackey (2017) Gorham, J. and Mackey, L. Measuring sample quality with kernels. In International Conference on Machine Learning, 2017.
  • Guo et al. (2016) Guo, F., Wang, X., Fan, K., Broderick, T., and Dunson, D. Boosting variational inference. In Advances in Neural Information Processing Systems, 2016.
  • Haario et al. (2001) Haario, H., Saksman, E., and Tamminen, J. An adaptive Metropolis algorithm. Bernoulli, pp.  223–242, 2001.
  • Hamidieh (2018) Hamidieh, K. A data-driven statistical model for predicting the critical temperature of a superconductor. Computational Materials Science, 154:346–354, 2018.
  • Harrison Jr. & Rubinfeld (1978) Harrison Jr., D. and Rubinfeld, D. Hedonic housing prices and the demand for clean air. Journal of Environmental Economics and Management, 5(1):81–102, 1978.
  • Hoffman & Gelman (2014) Hoffman, M. and Gelman, A. The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1):1593–1623, 2014.
  • Hoffman et al. (2013) Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. Stochastic variational inference. Journal of Machine Learning Research, 14:1303–1347, 2013.
  • Huggins & Mackey (2018) Huggins, J. and Mackey, L. Random feature Stein discrepancies. In Advances in Neural Information Processing Systems, 2018.
  • Jaakkola & Jordan (1998) Jaakkola, T. and Jordan, M. Improving the mean field approximation via the use of mixture distributions. In Learning in graphical models, pp.  163–173. Springer, 1998.
  • Jankowiak & Phan (2021) Jankowiak, M. and Phan, D. Surrogate likelihoods for variational annealed importance sampling. arXiv:2112.12194, 2021.
  • Jordan et al. (1999) Jordan, M., Ghahramani, Z., Jaakkola, T., and Saul, L. An introduction to variational methods for graphical models. Machine Learning, 37:183–233, 1999.
  • Kakutani (1938) Kakutani, S. Iteration of linear operations in complex Banach spaces. Proceedings of the Imperial Academy, 14(8):295–300, 1938.
  • Kingma & Welling (2014) Kingma, D. and Welling, M. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.
  • Kobyzev et al. (2021) Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: an introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11):3964–3979, 2021.
  • Kullback & Leibler (1951) Kullback, S. and Leibler, R. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86, 1951.
  • Liu & Rubin (1995) Liu, C. and Rubin, D. ML estimation of the t distribution using EM and its extensions, ECM and ECME. Statistica Sinica, pp.  19–39, 1995.
  • Liu et al. (2016) Liu, Q., Lee, J., and Jordan, M. A kernelized Stein discrepancy for goodness-of-fit tests and model evaluation. In International Conference on Machine Learning, 2016.
  • Locatello et al. (2018a) Locatello, F., Dresdner, G., Khanna, R., Valera, I., and Rätsch, G. Boosting black box variational inference. In Advances in Neural Information Processing Systems, 2018a.
  • Locatello et al. (2018b) Locatello, F., Khanna, R., Ghosh, J., and Rätsch, G. Boosting variational inference: an optimization perspective. In International Conference on Artificial Intelligence and Statistics, 2018b.
  • Masa-aki (2001) Masa-aki, S. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681, 2001.
  • Miller et al. (2017) Miller, A., Foti, N., and Adams, R. Variational boosting: iteratively refining posterior approximations. In International Conference on Machine Learning, 2017.
  • Moro et al. (2014) Moro, S., Cortez, P., and Rita, P. A data-driven approach to predict the success of bank telemarketing. Decision Support Systems, 62:22–31, 2014.
  • Murray & Elliott (2012) Murray, I. and Elliott, L. Driving Markov chain Monte Carlo with a dependent random stream. arXiv:1204.3187, 2012.
  • Neal (1996) Neal, R. Bayesian Learning for Neural Networks. Lecture Notes in Statistics, No. 118. Springer-Verlag, 1996.
  • Neal (2003) Neal, R. Slice sampling. The Annals of Statistics, 31(3):705–767, 2003.
  • Neal (2005) Neal, R. Hamiltonian importance sampling. Banff International Research Station (BIRS) Workshop on Mathematical Issues in Molecular Dynamics, 2005.
  • Neal (2011) Neal, R. MCMC using Hamiltonian dynamics. In Brooks, S., Gelman, A., Jones, G., and Meng, X.-L. (eds.), Handbook of Markov chain Monte Carlo, chapter 5. CRC Press, 2011.
  • Neal (2012) Neal, R. How to view an MCMC simulation as permutation, with applications to parallel simulation and improved importance sampling. arXiv:1205.0070, 2012.
  • Neklyudov & Welling (2022) Neklyudov, K. and Welling, M. Orbital MCMC. In Artificial Intelligence and Statistics, 2022.
  • Neklyudov et al. (2020) Neklyudov, K., Welling, M., Egorov, E., and Vetrov, D. Involutive MCMC: a unifying framework. In International Conference on Machine Learning, 2020.
  • Neklyudov et al. (2021) Neklyudov, K., Bondesan, R., and Welling, M. Deterministic Gibbs sampling via ordinary differential equations. arXiv:2106.10188, 2021.
  • Ormerod et al. (2017) Ormerod, J., You, C., and Müller, S. A variational Bayes approach to variable selection. Electronic Journal of Statistics, 11(2):3549–3594, 2017.
  • Papamakarios et al. (2021) Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22:1–64, 2021.
  • Qiao & Minematsu (2010) Qiao, Y. and Minematsu, N. A study on invariance of f-divergence and its application to speech recognition. IEEE Transactions on Signal Processing, 58(7):3884–3890, 2010.
  • Ranganath et al. (2014) Ranganath, R., Gerrish, S., and Blei, D. Black box variational inference. In International Conference on Artificial Intelligence and Statistics, 2014.
  • Ranganath et al. (2016) Ranganath, R., Tran, D., and Blei, D. Hierarchical variational models. In International Conference on Machine Learning, 2016.
  • Rezende & Mohamed (2015a) Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In International Conference on Machine Learning, 2015a.
  • Rezende & Mohamed (2015b) Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In International Conference on Machine Learning, pp. 1530–1538. PMLR, 2015b.
  • Riesz (1938) Riesz, F. Some mean ergodic theorems. Journal of the London Mathematical Society, 13(4):274–278, 1938.
  • Robert & Casella (2004) Robert, C. and Casella, G. Monte Carlo Statistical Methods. Springer, 2nd edition, 2004.
  • Robert & Casella (2011) Robert, C. and Casella, G. A short history of Markov Chain Monte Carlo: subjective recollections from incomplete data. Statistical Science, 26(1):102–115, 2011.
  • Roberts & Rosenthal (1997) Roberts, G. and Rosenthal, J. Shift-coupling and convergence rates of ergodic averages. Stochastic Models, 13(1):147–165, 1997.
  • Roberts & Rosenthal (2004) Roberts, G. and Rosenthal, J. General state space Markov chains and MCMC algorithms. Probability Surveys, 1:20–71, 2004.
  • Rotskoff & Vanden-Eijnden (2019) Rotskoff, G. and Vanden-Eijnden, E. Dynamical computation of the density of states and Bayes factors using nonequilibrium importance sampling. Physical Review Letters, 122(15):150602, 2019.
  • Salimans & Knowles (2013) Salimans, T. and Knowles, D. Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4):837–882, 2013.
  • Salimans et al. (2015) Salimans, T., Kingma, D., and Welling, M. Markov chain Monte Carlo and variational inference: bridging the gap. In International Conference on Machine Learning, 2015.
  • Spanbauer et al. (2020) Spanbauer, S., Freer, C., and Mansinghka, V. Deep involutive generative models for neural MCMC. arXiv:2006.15167, 2020.
  • Tabak & Turner (2013) Tabak, E. and Turner, C. A family of non-parametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164, 2013.
  • Tao et al. (2018) Tao, C., Chen, L., Zhang, R., Henao, R., and Carin, L. Variational inference and model selection with generalized evidence bounds. In International Conference on Machine Learning, 2018.
  • Thin et al. (2021a) Thin, A., Janati, Y., Le Corff, S., Ollion, C., Doucet, A., Durmus, A., Moulines, É., and Robert, C. NEO: non equilibrium sampling on the orbit of a deterministic transform. Advances in Neural Information Processing Systems, 2021a.
  • Thin et al. (2021b) Thin, A., Kotelevskii, N., Durmus, A., Panov, M., Moulines, E., and Doucet, A. Monte Carlo variational auto-encoders. In International Conference on Machine Learning, 2021b.
  • Tupper (2005) Tupper, P. Ergodicity and the numerical simulation of Hamiltonian systems. SIAM Journal on Applied Dynamical Systems, 4(3):563–587, 2005.
  • U.S. Department of Justice Federal Bureau of Investigation (1995) U.S. Department of Justice Federal Bureau of Investigation. Crime in the United States, 1995. URL: https://ucr.fbi.gov/crime-in-the-u.s/1995.
  • ver Steeg & Galstyan (2021) ver Steeg, G. and Galstyan, A. Hamiltonian dynamics with non-Newtonian momentum for rapid sampling. In Advances in Neural Information Processing Systems, 2021.
  • Wainwright & Jordan (2008) Wainwright, M. and Jordan, M. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
  • Wang (2016) Wang, X. Boosting variational inference: theory and examples. Master’s thesis, Duke University, 2016.
  • Wenliang & Kanagawa (2020) Wenliang, L. and Kanagawa, H. Blindness of score-based methods to isolated components and mixing proportions. arXiv:2008.10087, 2020.
  • Wolf et al. (2016) Wolf, C., Karl, M., and van der Smagt, P. Variational inference with Hamiltonian Monte Carlo. arXiv:1609.08203, 2016.
  • Xu et al. (2020) Xu, K., Ge, H., Tebbutt, W., Tarek, M., Trapp, M., and Ghahramani, Z. AdvancedHMC.jl: A robust, modular and efficient implementation of advanced HMC algorithms. In Symposium on Advances in Approximate Bayesian Inference, 2020.
  • Xu & Campbell (2022) Xu, Z. and Campbell, T. The computational asymptotics of variational inference and the Laplace approximation. Statistics and Computing, 32(4):1–37, 2022.
  • Yosida (1938) Yosida, K. Mean ergodic theorem in Banach spaces. Proceedings of the Imperial Academy, 14(8):292–294, 1938.
  • Zhang et al. (2021) Zhang, G., Hsu, K., Li, J., Finn, C., and Grosse, R. Differentiable annealed importance sampling and the perils of gradient noise. In Advances in Neural Information Processing Systems, 2021.
  • Zhang & Hernández-Lobato (2020) Zhang, Y. and Hernández-Lobato, J. M. Ergodic inference: accelerate convergence by optimisation. arXiv:1805.10377, 2020.
  • Zobay (2014) Zobay, O. Variational Bayesian inference with Gaussian-mixture approximations. Electronic Journal of Statistics, 8:355–389, 2014.

Appendix A Proofs

Proof of Proposition 3.1.

Because both estimators are unbiased, it suffices to show that $\mathbb{E}[f^{2}(X)]\geq\mathbb{E}\left[\left(\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}X_{0})\right)^{2}\right]$, which follows from Jensen's inequality:

\displaystyle\mathbb{E}\left[\left(\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}X_{0})\right)^{2}\right]\leq\mathbb{E}\left[\frac{1}{N}\sum_{n=0}^{N-1}f^{2}(T^{n}X_{0})\right]=\mathbb{E}[f^{2}(X)]. (25) ∎
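The inequality can also be checked numerically. Below is a minimal Julia sanity check under a toy setup assumed purely for illustration: $T$ is the irrational rotation $x\mapsto(x+\varphi)\bmod 1$ (measure-preserving and ergodic for $q_{0}=\mathsf{Unif}[0,1)$) and $f(x)=\sin^{2}(2\pi x)$; both estimators target the same expectation, and the trajectory average exhibits the smaller variance.

    using Statistics, Random
    Random.seed!(1)

    φ = (sqrt(5) - 1) / 2          # irrational shift: T is ergodic for Unif[0,1)
    T(x) = mod(x + φ, 1)
    f(x) = sin(2π * x)^2
    N = 50                         # flow length

    # single draw X ~ q_N: X = T^K X0 with X0 ~ q0 and K ~ Unif{0,…,N-1}
    function iid_draw()
        x = rand()
        for _ in 1:rand(0:N-1)
            x = T(x)
        end
        return f(x)
    end

    # trajectory average (1/N) Σ_{n=0}^{N-1} f(T^n X0) with X0 ~ q0
    function traj_avg()
        x, s = rand(), 0.0
        for _ in 1:N
            s += f(x)
            x = T(x)
        end
        return s / N
    end

    iid  = [iid_draw() for _ in 1:10_000]
    traj = [traj_avg() for _ in 1:10_000]
    println("means:     ", mean(iid), "  ", mean(traj))   # both ≈ E[f(X)]
    println("variances: ", var(iid), "  ", var(traj))     # var(traj) ≤ var(iid)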

Proof of Theorem 4.1.

Since setwise convergence implies weak convergence, we will focus on proving setwise convergence. We have that $q_{\lambda,N}$ converges setwise to $\pi$ if and only if for all measurable bounded $f:\mathcal{X}\to\mathbb{R}$,

\displaystyle\mathbb{E}f(X_{N})\to\mathbb{E}f(X),\qquad X_{N}\sim q_{\lambda,N},\quad X\sim\pi. (26)

The proof proceeds by directly analyzing $\mathbb{E}f(X_{N})$:

\displaystyle\mathbb{E}f(X_{N})=\int f(x)\,q_{\lambda,N}(\mathrm{d}x) (27)
\displaystyle=\frac{1}{N}\sum_{n=0}^{N-1}\int f(x)\,(T^{n}_{\lambda}q_{0})(\mathrm{d}x) (28)
\displaystyle=\frac{1}{N}\sum_{n=0}^{N-1}\int f(T^{n}_{\lambda}x)\,q_{0}(\mathrm{d}x) (29)
\displaystyle=\int\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}_{\lambda}x)\,q_{0}(\mathrm{d}x). (30)

Since $q_{0}\ll\pi$, by the Radon–Nikodym theorem there exists a density of $q_{0}$ with respect to $\pi$, so

\displaystyle\mathbb{E}f(X_{N})=\int\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}_{\lambda}x)\,\frac{\mathrm{d}q_{0}}{\mathrm{d}\pi}(x)\,\pi(\mathrm{d}x). (31)

By the pointwise ergodic theorem (Theorem 2.3), $f_{N}(x)=\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}_{\lambda}x)$ converges pointwise $\pi$-a.e. to $\int f\,\mathrm{d}\pi$; and because $f$ is bounded, $f_{N}$ is uniformly bounded over $N\in\mathbb{N}$. Hence, by the Lebesgue dominated convergence theorem,

\displaystyle\lim_{N\to\infty}\mathbb{E}f(X_{N})=\lim_{N\to\infty}\int\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}_{\lambda}x)\,\frac{\mathrm{d}q_{0}}{\mathrm{d}\pi}(x)\,\pi(\mathrm{d}x) (32)
\displaystyle=\int\left(\int f\,\mathrm{d}\pi\right)\frac{\mathrm{d}q_{0}}{\mathrm{d}\pi}(x)\,\pi(\mathrm{d}x) (33)
\displaystyle=\left(\int f\,\mathrm{d}\pi\right)\cdot 1=\mathbb{E}f(X). (34) ∎

Theorem A.1 (Mean ergodic theorem in Banach spaces [Yosida, 1938; Kakutani, 1938; Riesz, 1938; Eisner et al., 2015, Theorem 8.5]).

Let $T$ be a bounded linear operator on a Banach space $E$, define the operator

\displaystyle A_{N}=\frac{1}{N}\sum_{n=0}^{N-1}T^{n}, (35)

and let

\displaystyle\mathrm{fix}(T)=\left\{v\in E:Tv=v\right\}=\ker(I-T). (36)

Suppose that $\sup_{N\in\mathbb{N}}\|A_{N}\|<\infty$ and that $\frac{1}{N}T^{N}v\to 0$ for all $v\in E$. Then the subspace

\displaystyle V=\left\{v\in E:\lim_{N\to\infty}A_{N}v\text{ exists}\right\} (37)

is closed, $T$-invariant, and decomposes into a direct sum of closed subspaces

\displaystyle V=\mathrm{fix}(T)\oplus\overline{\mathrm{range}}(I-T). (38)

The operator $\left.T\right|_{V}$ on $V$ is mean ergodic. Furthermore, the operator

\displaystyle A:V\to\mathrm{fix}(T),\qquad Av=\lim_{N\to\infty}A_{N}v (39)

is a bounded projection with kernel $\ker(A)=\overline{\mathrm{range}}(I-T)$ and $AT=A=TA$.

Proof of Theorem 4.2.

We suppress $\lambda$ subscripts for brevity. Consider the Banach space $\mathcal{M}(\pi)$ of signed finite measures dominated by $\pi$, endowed with the total variation norm $\|m\|=\sup_{A\subseteq\mathcal{X}}m(A)-\inf_{A\subseteq\mathcal{X}}m(A)$. Then the pushforward $Tm$ of $m\in\mathcal{M}(\pi)$ under $T$ is dominated by $\pi$, since

\displaystyle\pi(A)=0\implies\pi(T^{-1}(A))=0\implies m(T^{-1}(A))=0. (40)

Note the slight abuse of notation in using the same symbol for the map $T:\mathcal{X}\to\mathcal{X}$ and the associated pushforward operator. Hence $T:\mathcal{M}(\pi)\to\mathcal{M}(\pi)$ is a linear operator on $\mathcal{M}(\pi)$. Further,

\displaystyle\|Tm\|=\sup_{A\subseteq\mathcal{X}}m(T^{-1}(A))-\inf_{A\subseteq\mathcal{X}}m(T^{-1}(A))\leq\sup_{A\subseteq\mathcal{X}}m(A)-\inf_{A\subseteq\mathcal{X}}m(A)=\|m\|, (41)

and so $T$ is bounded with $\|T\|\leq 1$. Hence $\frac{1}{N}T^{N}m\to 0$ for all $m\in\mathcal{M}(\pi)$, and if we define the operators

\displaystyle\forall N\in\mathbb{N},\quad A_{N}=\frac{1}{N}\sum_{n=0}^{N-1}T^{n}, (42)

we have that $\sup_{N\in\mathbb{N}}\|A_{N}\|\leq 1$. Therefore, by the mean ergodic theorem in Banach spaces [Yosida, 1938; Kakutani, 1938; Riesz, 1938; Eisner et al., 2015, Theorem 8.5],

\displaystyle A:V\to\mathrm{fix}(T),\qquad Am=\lim_{N\to\infty}A_{N}m, (43)

where $V=\left\{m\in\mathcal{M}(\pi):\lim_{N\to\infty}A_{N}m\text{ exists}\right\}$. Eisner et al. (2015, Theorem 8.20) guarantees that $V=\mathcal{M}(\pi)$ as long as the weak limit of $A_{N}m$ exists for each $m\in\mathcal{M}(\pi)$. Since $\mathcal{M}(\pi)$ and $L^{1}(\pi)$ are isometric (via the map $m\mapsto\frac{\mathrm{d}m}{\mathrm{d}\pi}$), and $\pi$ is $\sigma$-finite, the dual of $\mathcal{M}(\pi)$ is the set of linear functionals

\displaystyle m\mapsto\int\phi(x)\,m(\mathrm{d}x),\qquad\phi\in L^{\infty}(\pi). (44)

Therefore $V=\mathcal{M}(\pi)$ if for each $m\in\mathcal{M}(\pi)$ there exists a $g\in\mathcal{M}(\pi)$ such that

\displaystyle\forall\phi\in L^{\infty}(\pi),\qquad\int\phi(x)\,(A_{N}m)(\mathrm{d}x)\to\int\phi(x)\,g(\mathrm{d}x). (45)

The same change-of-variables and pointwise ergodic theorem argument used in the proof of Theorem 4.1 provides the desired weak convergence. Therefore $A_{N}m$ converges in total variation to an element of $\mathrm{fix}(T)$ for all $m\in\mathcal{M}(\pi)$; in particular, $A_{N}q_{0}$ converges in total variation to an element of $\mathrm{fix}(T)$. Furthermore, Theorem 4.1 guarantees weak convergence of $A_{N}q_{0}$ to $\pi$, and $\pi\in\mathrm{fix}(T)$, so $\mathrm{D_{TV}}(q_{\lambda,N},\pi)\to 0$. ∎

Proof of Theorem 4.3.

We suppress $\lambda$ subscripts for brevity. By the triangle inequality,

\displaystyle\mathrm{D_{TV}}(\hat{q}_{N},\pi)\leq\mathrm{D_{TV}}(\hat{q}_{N},q_{N})+\mathrm{D_{TV}}(q_{N},\pi). (46)

Focusing on the error term, we have that

\displaystyle\mathrm{D_{TV}}(\hat{q}_{N},q_{N})=\mathrm{D_{TV}}\left(\frac{1}{N}\sum_{n=1}^{N-1}\hat{T}^{n}q_{0},\;\frac{1}{N}\sum_{n=1}^{N-1}T^{n}q_{0}\right) (47)
\displaystyle=\frac{N-1}{N}\mathrm{D_{TV}}\left(\frac{1}{N-1}\sum_{n=1}^{N-1}\hat{T}^{n}q_{0},\;\frac{1}{N-1}\sum_{n=1}^{N-1}T^{n}q_{0}\right) (48)
\displaystyle=\frac{N-1}{N}\mathrm{D_{TV}}\left(\hat{T}\frac{1}{N-1}\sum_{n=1}^{N-1}\hat{T}^{n-1}q_{0},\;T\frac{1}{N-1}\sum_{n=1}^{N-1}T^{n-1}q_{0}\right) (49)
\displaystyle=\frac{N-1}{N}\mathrm{D_{TV}}\left(\hat{T}\frac{1}{N-1}\sum_{n=0}^{N-2}\hat{T}^{n}q_{0},\;T\frac{1}{N-1}\sum_{n=0}^{N-2}T^{n}q_{0}\right) (50)
\displaystyle=\frac{N-1}{N}\mathrm{D_{TV}}\left(\hat{T}\hat{q}_{N-1},\;Tq_{N-1}\right) (51)
\displaystyle=\frac{N-1}{N}\mathrm{D_{TV}}\left(\hat{q}_{N-1},\;\hat{T}^{-1}Tq_{N-1}\right), (52)

where the last equality is due to the fact that $\hat{T}$ is a bijection. The triangle inequality yields

\displaystyle\mathrm{D_{TV}}(\hat{q}_{N},q_{N})\leq\frac{N-1}{N}\left(\mathrm{D_{TV}}(\hat{q}_{N-1},q_{N-1})+\mathrm{D_{TV}}(q_{N-1},\hat{T}^{-1}Tq_{N-1})\right) (53)
\displaystyle=\frac{N-1}{N}\left(\mathrm{D_{TV}}(\hat{q}_{N-1},q_{N-1})+\mathrm{D_{TV}}(\hat{T}q_{N-1},Tq_{N-1})\right). (54)

Iterating this argument yields

\displaystyle\mathrm{D_{TV}}(\hat{q}_{N},q_{N})\leq\sum_{n=1}^{N-1}\frac{n}{N}\mathrm{D_{TV}}(\hat{T}q_{n},Tq_{n}), (55)

which completes the proof. ∎

Proof of Corollary 4.4.

Examining the total variation in its $L^{1}$ distance formulation yields that for $n=1,\dots,N-1$,

\displaystyle\mathrm{D_{TV}}(\hat{T}q_{n},Tq_{n})=\int\left|\frac{\hat{T}q_{n}(x)}{Tq_{n}(x)}-1\right|Tq_{n}(\mathrm{d}x) (56)
\displaystyle=\int\left|\exp\left\{\log q_{n}(\hat{T}^{-1}x)-\log q_{n}(T^{-1}x)+\log J(T^{-1}x)-\log\hat{J}(\hat{T}^{-1}x)\right\}-1\right|Tq_{n}(\mathrm{d}x). (57)

By assumption,

\displaystyle\left|\log q_{n}(\hat{T}^{-1}x)-\log q_{n}(T^{-1}x)\right|\leq\ell\left\|\hat{T}^{-1}x-T^{-1}x\right\|\leq\ell\epsilon, (58)

and

\displaystyle\left|\log J(T^{-1}x)-\log\hat{J}(\hat{T}^{-1}x)\right|\leq\left|\log J(T^{-1}x)-\log J(\hat{T}^{-1}x)\right|+\left|\log J(\hat{T}^{-1}x)-\log\hat{J}(\hat{T}^{-1}x)\right| (59)
\displaystyle\leq\ell\left\|\hat{T}^{-1}x-T^{-1}x\right\|+\epsilon (60)
\displaystyle\leq(\ell+1)\epsilon. (61)

Combining these two bounds yields

\displaystyle\mathrm{D_{TV}}(\hat{T}q_{n},Tq_{n})\leq\exp((2\ell+1)\epsilon)-1\leq(2\ell+1)\epsilon\exp((2\ell+1)\epsilon). (62)

Therefore, by Equation 55, we obtain that

\displaystyle\mathrm{D_{TV}}(\hat{q}_{N},q_{N})\leq\sum_{n=1}^{N-1}\frac{n}{N}\mathrm{D_{TV}}(\hat{T}q_{n},Tq_{n})\leq\frac{N-1}{2}e^{(2\ell+1)\epsilon}(2\ell+1)\epsilon\leq N\epsilon(\ell+1)e^{(2\ell+1)\epsilon}. (63)

Finally, combining the above with Theorem 4.3 yields the desired result. ∎

Appendix B Memory efficient ELBO estimation

Algorithm 4 $\texttt{DensityTriple}(x)$: evaluate $T_{\lambda}^{-N+1}(x)$, $q_{\lambda,N}(x)$, and $\prod_{j=1}^{N-1}J_{\lambda}(T_{\lambda}^{-j}x)$ in $O(N)$ time and $O(1)$ memory
  Input: location $x$, reference distribution $q_{0}$, flow map $T_{\lambda}$, Jacobian $J_{\lambda}$, number of steps $N$
  $L\leftarrow 0$
  $w\leftarrow\log q_{0}(x)$
  for $n=1,\dots,N-1$ do
     $x\leftarrow T^{-1}_{\lambda}(x)$
     $L\leftarrow L+\log J_{\lambda}(x)$
     $w\leftarrow\texttt{LogSumExp}(w,\log q_{0}(x)-L)$
  end for
  $w\leftarrow w-\log N$
  Return: $x$, $\exp(w)$, $\exp(L)$
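Below is a minimal Julia sketch of Algorithm 4, assuming user-supplied callables logq0 (log reference density), Tinv ($T_{\lambda}^{-1}$), and logJ ($\log J_{\lambda}$); these names are placeholders for illustration, not the paper's implementation.

    # numerically stable log(exp(a) + exp(b))
    logaddexp(a, b) = max(a, b) + log1p(exp(-abs(a - b)))

    function density_triple(x, logq0, Tinv, logJ, N)
        L = 0.0                            # running log ∏_j J_λ(T_λ^{-j} x)
        w = logq0(x)                       # running log of the unnormalized mixture density
        for _ in 1:N-1
            x = Tinv(x)                    # walk backwards along the flow
            L += logJ(x)
            w = logaddexp(w, logq0(x) - L)
        end
        # T_λ^{-N+1}(x), q_{λ,N}(x), ∏_{j=1}^{N-1} J_λ(T_λ^{-j} x)
        return x, exp(w - log(N)), exp(L)
    end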
Algorithm 5 $\texttt{EstELBO}(\lambda,N)$: estimate the ELBO for $q_{\lambda,N}$ in $O(N)$ time and $O(1)$ memory
  Input: reference $q_{0}$, unnormalized target $p$, flow map $T_{\lambda}$, Jacobian $J_{\lambda}$, number of flow steps $N$
  $x\leftarrow\texttt{Sample}(q_{0})$
  $x^{\prime},z,J\leftarrow\texttt{DensityTriple}(x)$
  $f\leftarrow\log p(x)$
  $g\leftarrow\log z$
  for $n=1,\dots,N-1$ do
     $\bar{q}\leftarrow\frac{1}{N}q_{0}(x^{\prime})/J$
     $J_{n-1}\leftarrow J_{\lambda}(x)$
     $z\leftarrow(z-\bar{q})/J_{n-1}$
     $x\leftarrow T_{\lambda}(x)$
     $z\leftarrow z+\frac{1}{N}q_{0}(x)$
     $f\leftarrow f+\log p(x)$
     $g\leftarrow g+\log z$
     if $n<N-1$ then
        $J\leftarrow J\cdot J_{n-1}/J_{\lambda}(x^{\prime})$
     end if
     $x^{\prime}\leftarrow T_{\lambda}(x^{\prime})$
  end for
  Return: $\widehat{\mathrm{ELBO}}(\lambda,N)\leftarrow\frac{1}{N}(f-g)$

Appendix C Hamiltonian flow pseudocode

Algorithm 6 Compute $T_{\lambda,\epsilon}$ and its Jacobian $J_{\lambda,\epsilon}$ for the Hamiltonian flow with leapfrog integrator
  Input: initial state $x\in\mathcal{X}$, $\rho\in\mathbb{R}^{d}$, $u\in[0,1]$, step size $\epsilon$, shift $\xi\in\mathbb{R}$, pseudorandom shift $z(\cdot,\cdot)$
  $x_{0},\rho_{0}\leftarrow x,\rho$
  for $\ell=1,\dots,L$ do
     $x_{\ell},\rho_{\ell}\leftarrow\hat{H}_{\epsilon}(x_{\ell-1},\rho_{\ell-1})$
  end for
  $x^{\prime},\rho^{\prime}\leftarrow x_{L},\rho_{L}$
  $u^{\prime}\leftarrow u+\xi\mod 1$
  for $i=1,\dots,d$ do
     $\rho^{\prime\prime}_{i}\leftarrow R^{-1}(R(\rho_{i}^{\prime})+z(x_{i}^{\prime},u^{\prime})\mod 1)$
  end for
  $J\leftarrow m(\rho^{\prime})/m(\rho^{\prime\prime})$
  Return: $(x^{\prime},\rho^{\prime\prime},u^{\prime}),J$
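Below is a minimal Julia sketch of one flow step in the spirit of Algorithm 6, assuming a standard Laplace momentum $m(\rho)=\prod_{i}\tfrac{1}{2}e^{-|\rho_{i}|}$ (so that $R$ is the Laplace CDF and the leapfrog position update uses $\nabla K(\rho)=\operatorname{sign}(\rho)$) and a user-supplied gradient gradlogp of the target log density; the default step size, leapfrog count, shift, and pseudorandom shift mirror the experimental settings, but the concrete implementation details are our assumptions.

    laplace_cdf(ρ)    = ρ < 0 ? 0.5 * exp(ρ) : 1 - 0.5 * exp(-ρ)
    laplace_icdf(p)   = p < 0.5 ? log(2p) : -log(2 * (1 - p))
    laplace_logpdf(ρ) = -abs(ρ) - log(2)

    function hamflow_step(x, ρ, u, gradlogp; ε = 0.01, L = 50, ξ = π / 16,
                          z = (xi, u) -> 0.5 * sin(2xi + u) + 0.5)
        # L leapfrog steps of the Laplace-momentum Hamiltonian dynamics
        for _ in 1:L
            ρ += (ε / 2) .* gradlogp(x)
            x += ε .* sign.(ρ)             # ∇K(ρ) = sign(ρ) for a Laplace momentum
            ρ += (ε / 2) .* gradlogp(x)
        end
        u_new = mod(u + ξ, 1)              # pseudotime shift
        # deterministic momentum refreshment through the Laplace CDF R
        ρ_new = laplace_icdf.(mod.(laplace_cdf.(ρ) .+ z.(x, u_new), 1))
        J = exp(sum(laplace_logpdf, ρ) - sum(laplace_logpdf, ρ_new))   # m(ρ') / m(ρ'')
        return x, ρ_new, u_new, J
    end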

Appendix D Extensions

Tunable reference

So far we have assumed that the reference distribution $q_{0}$ for the flow is fixed. Since $q_{0}$ is often quite far from the target $\pi$, this forces the variational flow to spend some of its steps just moving the bulk of the mass toward $\pi$. This can be accomplished more easily with, say, a simple linear transformation that efficiently allows large global moves of mass. For example, if $q_{0}=\mathcal{N}(0,I)$, we can include a map $M_{\theta}$,

\displaystyle q_{\lambda,N}=\frac{1}{N}\sum_{n=0}^{N-1}T_{\lambda}^{n}M_{\theta}q_{0}, (64)

where $M_{\theta}(x)=\theta_{1}x+\theta_{2}$ with $\theta_{1}\in\mathbb{R}^{d\times d}$ and $\theta_{2}\in\mathbb{R}^{d}$. Note that it is possible to optimize the reference and flow jointly, or to optimize the reference distribution by itself first and then use that fixed reference in the flow optimization.
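As a concrete illustration, a minimal Julia sketch of such an affine reference map and its log-Jacobian is given below; restricting $\theta_{1}$ to a lower-triangular matrix with positive diagonal (so the log-determinant is cheap) is our assumption, not part of the formulation above.

    using LinearAlgebra

    struct AffineRef{M<:AbstractMatrix,V<:AbstractVector}
        θ1::M   # assumed lower triangular with positive diagonal entries
        θ2::V
    end

    (A::AffineRef)(x)          = A.θ1 * x + A.θ2       # forward map M_θ(x)
    inv_map(A::AffineRef, y)   = A.θ1 \ (y - A.θ2)     # M_θ^{-1}(y)
    logabsdetjac(A::AffineRef) = sum(log, diag(A.θ1))  # log |det θ1|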

Automated burn-in

A common practice in MCMC is to discard an initial fraction of the states in the sequence so that the remaining samples start in a high-probability region of the target distribution, thus reducing the bias from initialization (“burn-in”). Usually one needs a diagnostic to check when burn-in is complete. In the case of MixFlow, we can monitor the burn-in phase in a principled way by evaluating the ELBO. Once the flow is trained, the variational distribution with $M$ burn-in steps is simply

\displaystyle q_{\lambda,M,N}=\frac{1}{N-M}\sum_{n=M}^{N-1}T_{\lambda}^{n}q_{0}. (65)

We can then select $M$ by estimating the ELBO of $q_{\lambda,M,N}$ for each $M=1,\dots,N$ and choosing the value that maximizes it.

Mixtures of MixFlows

One can build multiple MixFlows starting from different initial reference distributions; when the posterior is multimodal, some of these MixFlows may converge to different modes without mixing across modes. In this scenario, it can be helpful to average several MixFlows, i.e., to build an approximation of the form

\displaystyle q^{\star}_{\lambda,N}=\sum_{k=1}^{K}w_{k}(q_{\lambda,N})_{k},\quad\sum_{k=1}^{K}w_{k}=1. (66)

Because each component flow provides access to i.i.d. samples and density evaluation, MixFlows provide the ability to optimize the weights by maximizing the ELBO (i.e., minimizing the KL divergence).
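A minimal Julia sketch of estimating the ELBO of such a weighted mixture is given below, assuming hypothetical per-component interfaces sample_q[k]() (i.i.d. sampling) and dens_q[k](x) (density evaluation) and an unnormalized target log density logp; the weights can then be chosen to maximize this estimate.

    using Statistics

    function mixture_elbo(w, sample_q, dens_q, logp; S = 1_000)
        K, cw = length(w), cumsum(w)
        vals = Float64[]
        for _ in 1:S
            k = findfirst(cw .>= rand())                  # component index k ~ Categorical(w)
            x = sample_q[k]()                             # i.i.d. draw from component k
            qx = sum(w[j] * dens_q[j](x) for j in 1:K)    # mixture density at x
            push!(vals, logp(x) - log(qx))
        end
        return mean(vals)    # ELBO estimate, up to the unknown log normalizer of p
    end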

Appendix E Additional experimental details

In this section, we provide details for each experiment presented in the main text, as well as additional results regarding numerical stability and a high-dimensional synthetic experiment. Aside from the univariate synthetic examples, all examples include a pseudotime shift step with $\xi=\pi/16$ and a momentum refreshment with $z(x,u)=0.5\sin(2x+u)+0.5$. For the kernel Stein discrepancy, we use an IMQ kernel $k(x,y)=(c^{2}+\|x-y\|_{2}^{2})^{\beta}$ with $\beta=-0.5$, $c=1$, the same setting as in Gorham & Mackey (2017).
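For reference, a minimal Julia sketch of a kernelized Stein discrepancy estimate with this IMQ kernel is given below; it uses the standard Langevin Stein kernel and a V-statistic, assuming a user-supplied score function $s(x)=\nabla\log\pi(x)$, and is not necessarily the exact estimator used to produce the reported KSD values.

    using LinearAlgebra

    function ksd(xs::Vector{<:AbstractVector}, score; c = 1.0, β = -0.5)
        n, d = length(xs), length(xs[1])
        ss = score.(xs)
        total = 0.0
        for i in 1:n, j in 1:n
            δ    = xs[i] - xs[j]
            r2   = dot(δ, δ)
            base = c^2 + r2
            k    = base^β                                  # IMQ kernel value
            gxk  = (2β) .* δ .* base^(β - 1)               # ∇_x k(x, y)
            gyk  = -gxk                                    # ∇_y k(x, y)
            tr_k = -2β * (d * base^(β - 1) + 2 * (β - 1) * r2 * base^(β - 2))  # ∇_x ⋅ ∇_y k
            total += dot(ss[i], ss[j]) * k + dot(ss[i], gyk) + dot(ss[j], gxk) + tr_k
        end
        return sqrt(total) / n      # V-statistic estimate of the KSD
    end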

For all experiments, unless otherwise stated, NUTS uses 20,000 steps for adaptation, targeting an average acceptance ratio of 0.7, and generates 5,000 samples for KSD estimation. The KSD for MixFlow is estimated using 2,000 samples. The KSD estimation for NEO is based on 5,000 samples generated by a tuned NEO after 20,000 burn-in steps. We adopted two tuning strategies for NEO: (1) choosing among combinations of several fixed settings of the discretization step ($\epsilon=0.2,0.5,1.0$), the friction parameter $\gamma$ of the nonequilibrium Hamiltonian dynamics ($\gamma=0.2,0.5,1.0$), the number of integration steps ($K=10,20$), and the mass matrix of the momentum distribution ($M=I$); and (2) fixing the number of integration steps ($K=10,20$) and the friction parameter ($\gamma=0.2,0.5,1.0$), and using windowed adaptation (Carpenter et al., 2017) to adapt the mass matrix and integration step size, targeting an average acceptance ratio of 0.7. The optimal setting of NEO is taken to be the one that produces the lowest average marginal KSD over 3 runs with no NaN values encountered. The performance of NEO across various settings for each example is summarized in Table 4. In each NEO MCMC transition, we run 10 deterministic orbits and compute the corresponding normalizing constant estimates in parallel. Aside from NEO, all other methods are deployed using a single processor.

As for NUTS, we use the Julia package AdvancedHMC.jl (Xu et al., 2020) with all default settings. NEO adaptation is also conducted using AdvancedHMC.jl with the number of simulation steps set to the number of integration steps of NEO. The ESS is computed using the R package mcmcse, which is based on batch means estimation. We implement all NFs using the Julia package Bijectors.jl (Fjelde et al., 2020). The flow layers of PlanarFlow and RadialFlow, as well as the coupling layers of RealNVP, are implemented in Bijectors.jl. The affine coupling functions of RealNVP (a scaling function and a shifting function) are both parameterized using fully connected neural networks with three layers, LeakyReLU activations, and a number of hidden units equal to the dimension of the target distribution by default, unless otherwise stated.

E.1 Univariate synthetic examples

The three target distributions tested in this experiment were

  • normal: $\mathcal{N}(2,2^{2})$,

  • synthetic Gaussian mixture: $0.5\mathcal{N}(-3,1.5^{2})+0.3\mathcal{N}(0,0.8^{2})+0.2\mathcal{N}(3,0.8^{2})$, and

  • Cauchy: ${\sf{Cauchy}}(0,1)$.

For all three examples, we use a momentum refreshment without introducing the pseudotime variable; this enables us to plot the full joint density of $(x,\rho)\in\mathbb{R}^{2}$:

\displaystyle\rho^{\prime\prime}\leftarrow R_{\text{Lap}}^{-1}\left(R_{\text{Lap}}(\rho^{\prime})+(\sin(2x^{\prime})+1)/2\mod 1\right). (67)

For all three examples, we used a leapfrog step size of $\epsilon=0.05$ and ran $L=50$ leapfrogs between each refreshment. For both the Gaussian and Gaussian mixture targets, we used 100 refreshments; for the Cauchy, we used 1,000 refreshments.

E.2 Multivariate synthetic examples

The four target distributions tested in this experiment were

  • the banana distribution (Haario et al., 2001), for which a sampling sketch is given after this list:

    \displaystyle y=\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}\sim\mathcal{N}\left(0,\begin{bmatrix}100&0\\ 0&1\end{bmatrix}\right),\quad x=\begin{bmatrix}y_{1}\\ y_{2}+by_{1}^{2}-100b\end{bmatrix},\quad b=0.1; (68)

  • Neal's funnel (Neal, 2003):

    \displaystyle x_{1}\sim\mathcal{N}\left(0,\sigma^{2}\right),\quad x_{2}\mid x_{1}\sim\mathcal{N}\left(0,\exp\left(\frac{x_{1}}{2}\right)\right),\quad\sigma^{2}=36; (69)

  • a cross-shaped distribution: in particular, a Gaussian mixture of the form

    \displaystyle x\sim\frac{1}{4}\mathcal{N}\left(\begin{bmatrix}0\\ 2\end{bmatrix},\begin{bmatrix}0.15^{2}&0\\ 0&1\end{bmatrix}\right)+\frac{1}{4}\mathcal{N}\left(\begin{bmatrix}-2\\ 0\end{bmatrix},\begin{bmatrix}1&0\\ 0&0.15^{2}\end{bmatrix}\right) (70)
    \displaystyle\qquad+\frac{1}{4}\mathcal{N}\left(\begin{bmatrix}2\\ 0\end{bmatrix},\begin{bmatrix}1&0\\ 0&0.15^{2}\end{bmatrix}\right)+\frac{1}{4}\mathcal{N}\left(\begin{bmatrix}0\\ -2\end{bmatrix},\begin{bmatrix}0.15^{2}&0\\ 0&1\end{bmatrix}\right); (71)

  • and a warped Gaussian distribution

    \displaystyle y=\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}\sim\mathcal{N}\left(0,\begin{bmatrix}1&0\\ 0&0.12^{2}\end{bmatrix}\right),\quad x=\begin{bmatrix}\|y\|_{2}\cos\left(\operatorname{atan2}(y_{2},y_{1})-\frac{1}{2}\|y\|_{2}\right)\\ \|y\|_{2}\sin\left(\operatorname{atan2}(y_{2},y_{1})-\frac{1}{2}\|y\|_{2}\right)\end{bmatrix}, (72)

    where $\operatorname{atan2}(y,x)$ is the angle, in radians, between the positive $x$ axis and the ray to the point $(x,y)$.
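Below is a minimal Julia sketch of sampling the banana target from its pushforward definition above; the other three targets can be sampled analogously from their definitions.

    function sample_banana(n; b = 0.1)
        y1 = 10 .* randn(n)                           # y1 ~ N(0, 100)
        y2 = randn(n)                                 # y2 ~ N(0, 1)
        return hcat(y1, y2 .+ b .* y1 .^ 2 .- 100b)   # x = (y1, y2 + b y1² − 100 b)
    end

    xs = sample_banana(5_000)    # n × 2 matrix of draws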

We used flows with 500 and 2,000 refreshments for the banana distribution and Neal's funnel, respectively, and flows with 1,000 refreshments for both the cross distribution and the warped Gaussian. Between refreshments we used 200 leapfrog steps for the banana distribution, 60 for the cross distribution, and 80 for the funnel and warped Gaussian. Note that in all four examples, we individually tuned the step size $\epsilon$ by maximizing the estimated ELBO, as shown in Figure 5(a). Figure 5(b) also demonstrates how the ELBO varies with the number of refreshments $N$. For small step sizes $\epsilon$, the discretized Hamiltonian dynamics approximate the continuous dynamics, and the ELBO generally increases with $N$, indicating convergence to the target. For larger step sizes $\epsilon$, the ELBO increases to a peak and then decreases, indicating that the discretized dynamics do not exactly target the desired distribution.

Figure 6 presents a comparison of the uncertainty involved in estimating the expectation of a test function $f$ for both MixFlow iid and MixFlow single. Specifically, it examines the streaming estimation of $\mathbb{E}[f(X)]$, $X\sim q_{N}$, where $f(x)=\|x\|_{1}$, based on samples generated from MixFlow iid and MixFlow single under 50,000 flow map evaluations over 10 independent trials. Note that we assess computational cost via the number of flow map evaluations, not the number of draws, because the cost per draw is random in MixFlow iid (due to $K\sim{\sf{Unif}}\{0,\dots,N-1\}$), while the cost per draw is fixed at $N$ evaluations for the trajectory estimate in MixFlow single. The results indicate that, given equivalent computational resources, the trajectory-averaged estimates generally exhibit lower variance across trials than the naïve i.i.d. Monte Carlo estimate. This observation validates Proposition 3.1.

Figure 7 shows why we opted to use a Laplace momentum distribution as opposed to a Gaussian momentum. In particular, the numerical errors of composing $T_{\lambda}^{K}\circ T_{\lambda}^{-K}$ and $T_{\lambda}^{-K}\circ T_{\lambda}^{K}$ (denoted “Bwd” and “Fwd” in the legend, respectively) indicate that the flow using the Laplace momentum is more reliably invertible for larger numbers of refreshments than the flow with a Gaussian momentum. Figure 8 uses a high-precision (256-bit) floating point representation to further illustrate the rapid escalation of numerical error when evaluating forward and backward trajectories of Gaussian Hamiltonian MixFlows. For all four synthetic examples, after approximately 100 flow transformations, MixFlows with Gaussian momentum exhibit an error on the scale of the target distribution itself (per the contour plots (d)-(g) in Figure 1). This may be due to the fact that the normal distribution has very light tails; for large momentum values in the right tail, the CDF is $\approx 1$, which gets rounded to exactly 1 in floating point representation. Our implementation of the CDF and inverse CDF was fairly naïve, so it is possible that stability could be improved with more care taken to prevent rounding error. We leave a more careful examination of this error to future work; for this work, using a Laplace momentum distribution sufficed to maintain low numerical error.

Finally, Figure 9 provides a more comprehensive set of sample histograms for these experiments, showing the $x$-, $\rho$-, and $u$-marginals. It is clear that MixFlow generates samples for each variable reliably from the target marginal.

E.3 Higher-dimensional synthetic experiment

We also tested two higher-dimensional Neal's funnel target distributions of $x\in\mathbb{R}^{d}$ with $d=5,20$. In particular, we used the target distribution

\displaystyle x_{1}\sim\mathcal{N}\left(0,\sigma^{2}\right),\quad\text{and}\quad\forall i\in\{2,\cdots,d\},\,\,x_{i}\mid x_{1}\overset{\textrm{i.i.d.}}{\sim}\mathcal{N}\left(0,\exp\left(\frac{x_{1}}{2}\right)\right), (73)

with $\sigma^{2}=36$. We use 80 leapfrogs between refreshments and $\epsilon=0.0009$ when $d=5$, and 100 leapfrogs between refreshments and $\epsilon=0.001$ when $d=20$ (Figures 10(a) and 10(c) show the ELBO comparison used to tune the step size $\epsilon$). Figures 10(b) and 10(d) confirm via the KSD that the method performs as well as NUTS in higher-dimensional cases.

Figure 5: MixFlow tuning for the four multivariate synthetic examples. (a): ELBO versus step size $\epsilon$. (b): ELBO of MixFlow versus number of refreshments $N$ over different step sizes, and comparison to tuned PlanarFlow.
Figure 6: Comparison of Monte Carlo estimates of $\mathbb{E}[f(X)]$, $X\sim q_{N}$, $f(x)=\|x\|_{1}$, based on individual i.i.d. draws (blue) and the trajectory-averaged estimates in Equation 11 (orange) on the four synthetic examples. The vertical axis indicates the estimate of $\mathbb{E}[f]$, and the horizontal axis indicates the total number of flow transformations evaluated, i.e., the total computational cost. Note that in each example, $N$ is fixed; the number of refreshments on the horizontal axis increases because we average over increasingly many draws $X\overset{\textrm{i.i.d.}}{\sim}q_{N}$ (blue) or increasingly many trajectory averages (orange). We run 10 trials to assess the quality of each estimate: lines indicate the mean, and error regions indicate the standard deviation.
Figure 7: Stability of composing $T^{-K}_{\lambda}\circ T^{K}_{\lambda}$ (Fwd) and $T^{K}_{\lambda}\circ T^{-K}_{\lambda}$ (Bwd) for the four multivariate experiments with flows constructed using Gaussian (Gauss) and Laplace (Lap) momentum distributions. The vertical axis shows the 2-norm error of reconstructing $(x,\rho,u)$ sampled from $q_{0}$; the horizontal axis shows increasing numbers of refreshments $K$. The lines indicate the median, and error regions indicate the 25th to 75th percentile over 100 independent samples.
Figure 8: Large numerical errors exhibited by Hamiltonian MixFlows with Gaussian momentum on the synthetic examples: (a) banana distribution, (b) Neal's funnel, (c) cross distribution, (d) warped Gaussian. The figure shows the forward (fwd) error $\|T^{k}x-\hat{T}^{k}x\|$ and backward (bwd) error $\|T^{-k}x-\hat{B}^{k}x\|$ comparing $k$ transformations of the forward approximate/exact maps $\hat{T}\approx T$ and backward approximate/exact maps $\hat{B}\approx T^{-1}$. For the exact maps we use a 256-bit BigFloat representation, and for the numerical approximations we use a 64-bit Float representation. The lines indicate the median, and error regions indicate the 25th to 75th percentile over 100 initialization draws from the reference distribution $q_{0}$.
Figure 9: Scatter plots and histograms for the $x$-, $\rho$-, and $u$-marginals in the four synthetic experiments: (a) banana distribution, (b) Neal's funnel, (c) cross distribution, (d) warped Gaussian.
Figure 10: ELBO versus step size (10(a), 10(c)) and KSD comparison with NUTS (10(b), 10(d)) for the 5- and 20-dimensional Neal's funnel examples.

E.4 Additional experiments for synthetic examples

Figure 11: Visualization of the sliced exact joint log density (left), the MixFlow approximation (middle), and the sliced joint log density of the tuned NEO importance sampling proposal (right) for the banana (11(a)), funnel (11(b)), cross (11(c)), and warped Gaussian (11(d)) targets.
Figure 12: Marginal KSD of tuned MixFlow versus number of refreshments $N$, and comparison to NUTS and NEO.
Figure 13: Scatter plots of 5,000 i.i.d. samples generated from the trained NF.
Figure 14: Timing results for the four multivariate synthetic examples (from left to right: banana, Neal's funnel, cross, and warped Gaussian). (a): Sampling time comparison to NF, NUTS, and HMC (100 trials). (b): ESS per second comparison to NUTS and HMC (100 trials).
Figure 15: Streaming mean and standard deviation (SD) estimation (50,000 samples) for the four 2-dimensional synthetic distributions. For each distribution, the two plots in the top row correspond to the two marginal means, and the two plots in the bottom row correspond to the two marginal SDs. Each plot shows the evolution of coordinate-wise mean/SD estimates using samples from MixFlow single (green), NUTS (blue), HMC (red), and NF (purple); the black dashed line indicates the true target mean/SD, estimated using 50,000 samples from the actual synthetic target distribution. The lines indicate the median, and error regions indicate the 25th to 75th percentile over 10 runs.

In this section, we provide additional comparisons of MixFlow against NUTS, HMC, NEO, and a generic normalizing flow method, the planar flow (NF) (Rezende & Mohamed, 2015b), on all four synthetic examples from Section E.2. For this set of experiments, all settings for MixFlow are the same as outlined in Section E.2. HMC uses the same leapfrog step size and number of leapfrog steps (between refreshments) as MixFlow. For NF, 5 planar layers (Rezende & Mohamed, 2015b), containing 45 parameters to be optimized (9 parameters for each planar layer of a 4-dimensional planar flow), are used unless otherwise stated. We train NF using ADAM until convergence (100,000 iterations except where otherwise noted) with the initial step size set to 0.001, and at each step, 10 samples are used for gradient estimation. The initial distribution for MixFlow, NF, and NUTS is set to the mean-field Gaussian approximation, and HMC and NUTS are initialized at the learned mean of the mean-field Gaussian approximation. All parameters of NF are initialized using random samples from the standard Gaussian distribution.

Figure 11 compares the sliced log density of tuned MixFlow and the importance sampling proposal of tuned NEO, visualized in a similar fashion to Figure 1 in Section 6.1. It is clear that NEO does not provide a high-quality density approximation, while the log density of MixFlow visually matches the target. Indeed, unlike our method, due to its use of nonequilibrium dynamics, NEO does not provide a comparable convergence guarantee for its proposal distribution as the simulation length increases.

Figure 12 presents comparisons of the marginal sample quality of MixFlow to NUTS and NEO, both of which are exact MCMC methods. The results show that MixFlow produces marginal sample quality comparable to, if not slightly better than, both MCMC methods. Note that achieving the displayed performance of NEO involves a nontrivial tuning process; concrete tuning strategies are explained at the beginning of Appendix E. In fact, finding a good combination of discretization step, friction parameter, simulation length, and momentum mass matrix is necessary for NEO to behave reasonably; Table 4 summarizes the performance of NEO under various settings for both the synthetic and real data experiments. The standard deviation of the marginal sample KSD can change drastically across settings, meaning that the marginal sample quality can be sensitive to the choice of hyperparameters. Moreover, due to the use of unstable Hamiltonian dynamics (with friction), one must choose hyperparameters carefully to avoid NaN values. The last column of Table 4 shows the number of hyperparameter combinations that lead to NaN values during sampling.

Figure 5(b) compares the joint ELBO values across various leapfrog step sizes for MixFlow against those of NF. We see that in all four examples, NF produces a smaller ELBO than MixFlow with a reasonable step size, which implies a lower-quality target approximation. Indeed, Figure 13 shows that the trained NFs fail to capture the shape of the target distributions. Although one may expect the performance of NF to improve if it were given more layers, we show in a later paragraph that this is not the case.

Figure 14(a) compares the sampling efficiency of MixFlow against NUTS, HMC, and NF. We see that MixFlow iid is the slowest, because each sample is generated by passing through the entire flow. However, by taking all intermediate samples as in MixFlow single, we can generate samples just as fast as NUTS and HMC. On the other hand, while NF is the fastest for sampling, it requires roughly 2 minutes for training, during which MixFlow single, NUTS, and HMC can each generate over 1 million samples in these examples. A more detailed discussion of this trade-off for NF is presented later.

Figure 14(b) further shows the computational efficiency in terms of effective sample size (ESS) per second. The smaller ESS per second of MixFlow iid is due to its slower sampling time; however, we emphasize that these samples are i.i.d. NUTS overall achieves a higher ESS per second. NUTS is performant because of the much longer trajectories it produces (it only terminates once it detects a “U-turn”). This is actually an illustration of a limitation of ESS per unit time as a measure of performance: because NUTS generates longer trajectories, it has lower sample autocorrelation and a higher ESS, yet Figure 15 shows that the actual estimation performance of NUTS is comparable to the other methods. Note that it is also possible to incorporate the techniques used in NUTS into our method, which we leave for future work.

As mentioned above, ESS mainly serves as a practical index of the relative efficiency of using a sequence of dependent samples, as opposed to independent samples, to estimate certain summary statistics of the target distribution. In this case, MixFlow single can be very useful. Figure 15 demonstrates the performance of MixFlow single, NUTS, HMC, and NF when estimating the coordinate-wise means and standard deviations (SD) of the target distributions. We see that MixFlow single, NUTS, and HMC generally show similar performance in terms of convergence speed and estimation precision. While NF converges very quickly due to i.i.d. sampling, it struggles more at identifying the correct statistics, particularly the standard deviation, given its limited approximation quality. It is worth noting that, unlike MixFlow and general MCMC methods, sample estimates of target summaries obtained from NF are typically not asymptotically exact, as the sample quality is fundamentally limited by the choice of variational family and how well the flow is trained.

Finally, we provide an additional set of results for NF (Figure 16), examining its performance as we increase the number of planar layers (5, 10, 20, 50). All settings for NF are identical to the above, except that we increase the number of optimization iterations to 500,000 to ensure the convergence of flows with more layers. As demonstrated in the second and third columns of Figure 16, both training time and sampling time scale roughly linearly with the number of layers. Although a trained NF is still generally faster for sample generation (see also Figure 14(a)), for these synthetic examples with 4-dimensional joint target distributions, training can take up to 30 minutes. More importantly, the corresponding target approximations of NF are still not as good as those of MixFlow in all four examples, even when we increase the number of layers to 50, which corresponds to optimizing 450 flow parameters. One may also notice from Figure 16 that a more complex normalizing flow does not necessarily lead to better performance. This is essentially because the (usually non-convex) KL optimization problem of a standard NF becomes harder to solve as the flow becomes more flexible. As a result, even though NF theoretically becomes more expressive with more layers, there is no guarantee on how well it approximates the target distribution. In contrast, MixFlow is optimization-free and asymptotically exact: with a proper choice of hyperparameters, more computation typically leads to better performance (Figure 5(b)). In the 30 minutes of NF training time, MixFlow iid can generate 18,000 i.i.d. samples and MixFlow single can generate over 10 million samples, both of which are more than sufficient for most estimation tasks under these target distributions.

Figure 16: Additional NF results with increasing numbers of planar layers (5, 10, 20, 50) for (a) the banana distribution, (b) Neal's funnel, (c) the cross distribution, and (d) the warped Gaussian. The first column shows the ELBO value; each ELBO is estimated using 2,000 i.i.d. samples from the trained flow. The second and third columns show the training time and per-sample time (100 trials) of NF with increasing numbers of planar layers.

E.5 Real data examples

All settings for MixFlow, NUTS, NEO, and HMC are identical to those used in the synthetic examples. All NFs are trained using ADAM for 200,000 iterations with the initial step size set to 0.001. We remark that the comparison to NEO is not included for the high-dimensional sparse regression example (the most challenging example) due to its high RAM consumption, which exceeds our computational resources. Overall, we observe phenomena similar to those in the synthetic examples of Section E.2. Additionally, we include a comparison to UHA on the real data examples. The tuning procedure of UHA involves an architectural search (i.e., over the number of leapfrog steps between refreshments and the number of refreshments) and hyperparameter optimization (i.e., over the leapfrog step size and the tempering path). We choose among combinations of several fixed architectural settings of leapfrog steps between refreshments ($L=10,20,50$) and number of refreshments ($N=5,10$), and optimize the hyperparameters using ADAM for 20,000 iterations (each gradient estimate is based on 5 independent samples); each setting is repeated 5 times. Table 5 presents the median and IQR of the resulting ELBOs of UHA under the different architectural settings. We pick the best setting for UHA based on the ELBO performance and compare UHA under the selected setting with our method in terms of target approximation quality and marginal sample quality. Both the ELBO and KSD of UHA are estimated using 5,000 samples.

Figure 17 offers a comparison of the ELBO performance of UHA and MixFlow. While MixFlow demonstrates similar or superior ELBO performance on the linear regression, logistic regression, Student-t regression, and sparse regression problems, it is outperformed by UHA on the remaining three real data examples. The higher ELBO of UHA in these examples might be due to its incorporation of annealing, which facilitates exploration of complex target distributions. We leave the study of annealing in MixFlows to future work.

However, it is essential to note that for variational methods that augment the original target space, a higher ELBO does not necessarily equate to improved marginal sample quality. As demonstrated in Figure 18, the marginal sample quality of UHA, as evaluated by the marginal KSD, is worse than that of MixFlow and the two Monte Carlo methods (NUTS and NEO), although it surpasses that of parametric flows (e.g., RealNVP).

It is also worth noting that tuning augmentation methods like UHA can be very expensive. For instance, on the Student-t regression problem, UHA required about 20 minutes (1223.8 seconds) to run 20,000 optimization steps, with each gradient estimated using 5 Monte Carlo samples. This is approximately 40 times slower than RealNVP, which required around 10 minutes for 200,000 optimization steps with stochastic gradients based on 10 Monte Carlo samples. Conversely, evaluating one ELBO curve (averaged over 1,000 independent trajectories) for MixFlow across six different flow lengths (100, 200, 500, 1000, 1500, and 2000) took only 8 minutes, while adapting NUTS with 20,000 iterations completed within a few seconds.

E.5.1 Bayesian linear regression

We consider two Bayesian linear regression problems, one with a standard normal prior and one with a heavy-tailed prior, using two sets of real data. The two statistical models take the following form:

\displaystyle\log\sigma^{2}\sim\mathcal{N}(0,1),\quad y_{j}\mid\beta,\sigma^{2}\overset{\textrm{indep}}{\sim}\mathcal{N}\left(x_{j}^{T}\beta,\sigma^{2}\right), (74)
\displaystyle\textbf{Normal: }\beta\overset{\textrm{i.i.d.}}{\sim}\mathcal{N}(0,1),\qquad\textbf{Cauchy: }\beta\overset{\textrm{i.i.d.}}{\sim}{\sf{Cauchy}}(0,1), (75)

where $y_{j}$ is the response and $x_{j}\in\mathbb{R}^{p}$ is the feature vector for data point $j$. For the linear regression problem with a normal prior, we use the Boston housing prices dataset (Harrison Jr. & Rubinfeld, 1978), available in the MLDatasets Julia package at https://github.com/JuliaML/MLDatasets.jl, containing $J=506$ suburbs/towns in the Boston area; the goal is to use suburb information to predict the median house price. We standardize all features and responses. For linear regression with a heavy-tailed prior, we use the communities and crime dataset (U.S. Department of Justice Federal Bureau of Investigation, 1995), available at http://archive.ics.uci.edu/ml/datasets/Communities+and+Crime. The original dataset contains 122 attributes that potentially relate to crime; the goal is to predict per-capita violent crimes using information about the community, such as the median family income and the per-capita number of police officers. For data preprocessing, we drop observations with missing values and use principal component analysis for feature dimension reduction, selecting the 50 principal components with the leading eigenvalues. The posterior dimensions of the two linear regression inference problems are 15 and 52, respectively.
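A minimal Julia sketch of the unnormalized log joint for the normal-prior model above is given below, in the unconstrained parameterization $z=(\beta,\log\sigma^{2})$; the design matrix X and response y are assumed to hold the standardized data, and the Cauchy-prior variant simply swaps the prior on $\beta$.

    using Distributions

    function linreg_log_joint(z, X, y)
        p = size(X, 2)
        β, logσ2 = z[1:p], z[p + 1]
        σ = sqrt(exp(logσ2))
        lp  = logpdf(Normal(0, 1), logσ2)            # prior on log σ²
        lp += sum(logpdf.(Normal(0, 1), β))          # N(0, 1) prior on β
        lp += sum(logpdf.(Normal.(X * β, σ), y))     # Gaussian likelihood
        return lp
    end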

E.5.2 Bayesian generalized linear regression

We then consider two Bayesian generalized linear regression problems: a hierarchical logistic regression and a Poisson regression:

\textbf{Logis. Reg.:}\quad \alpha \sim {\sf{Gam}}(1, 0.01), \quad \beta\mid\alpha \sim \mathcal{N}(0,\alpha^{-1}I), \qquad (76)
y_{j}\mid\beta \overset{\textrm{\tiny{indep}}}{\sim} {\sf{Bern}}\left(\frac{1}{1+e^{-x_{j}^{T}\beta}}\right), \qquad (77)
\textbf{Poiss. Reg.:}\quad \beta \sim \mathcal{N}(0,I), \qquad (78)
y_{j}\mid\beta \overset{\textrm{\tiny{indep}}}{\sim} {\sf{Poiss}}\left(\log\left(1+e^{-x_{j}^{T}\beta}\right)\right). \qquad (79)

For logistic regression, we use a bank marketing dataset (Moro et al., 2014) downsampled to J=400 data points; the original dataset is available at https://archive.ics.uci.edu/ml/datasets/bank+marketing. The goal is to use client information to predict whether the client subscribes to a term deposit. We include 8 features: client age, marital status, balance, housing loan status, duration of last contact, number of contacts during the campaign, number of days since last contact, and number of contacts before the current campaign. For each of the binary variables (marital status and housing loan status), all entries with unknown values are removed. All features are standardized. Hence the posterior dimension of the logistic regression problem is 9, and the overall (x,\rho,u) state dimension of the logistic regression inference problem is 19.

For the Poisson regression problem, we use an airport delays dataset with 15 features and J=500 data points (subsampled), resulting in a 16-dimensional posterior distribution. The airport delays dataset was constructed using flight delay data from http://stat-computing.org/dataexpo/2009/the-data.html and historical weather information from https://www.wunderground.com/history/; it relates daily weather information to the number of flights leaving an airport with a delay of more than 15 minutes. All features are standardized as well.
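As a concrete illustration of the two likelihoods in (76)–(79), the following minimal Python sketch evaluates the corresponding unnormalized log-joints. This is not our experimental code; the names X and y, the log-\alpha parameterization of the hierarchical precision (with its Jacobian term), and the shape/rate convention for Gam(1, 0.01) are assumptions made for illustration.

```python
# Minimal sketch of the log-joints for the generalized linear models in
# (76)-(79); not the paper's implementation. Assumptions: Gam(1, 0.01) is
# shape/rate (scale = 1/0.01), and the precision alpha is handled on the
# log scale with the corresponding change-of-variables term.
import numpy as np
from scipy.stats import gamma, norm
from scipy.special import expit, gammaln

def log_joint_logistic(z, X, y):
    """z = (beta_1, ..., beta_p, log_alpha); y in {0, 1}."""
    beta, log_alpha = z[:-1], z[-1]
    alpha = np.exp(log_alpha)
    # Gamma prior on alpha plus log-transform Jacobian; N(0, alpha^{-1} I) on beta.
    lp = gamma.logpdf(alpha, a=1.0, scale=1.0 / 0.01) + log_alpha
    lp += norm.logpdf(beta, 0.0, 1.0 / np.sqrt(alpha)).sum()
    p = expit(X @ beta)  # Bernoulli probability 1 / (1 + exp(-x_j^T beta))
    ll = np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
    return lp + ll

def log_joint_poisson(beta, X, y):
    """beta: regression coefficients; y: nonnegative counts."""
    rate = np.log1p(np.exp(-X @ beta))  # softplus rate from (79)
    lp = norm.logpdf(beta, 0.0, 1.0).sum()
    ll = np.sum(y * np.log(rate) - rate - gammaln(y + 1.0))
    return lp + ll
```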

E.5.3 Bayesian Student-t regression

We also consider a Bayesian Student-t regression problem, whose posterior distribution is heavy-tailed. The Student-t regression model is as follows:

y_{i}\mid X_{i},\beta \sim \mathcal{T}_{5}\left(X_{i}^{T}\beta, 1\right), \quad \beta \overset{\textrm{\tiny{i.i.d.}}}{\sim} {\sf{Cauchy}}(0,1). \qquad (80)

In this example, we use the creatinine dataset (Liu & Rubin, 1995), a clinical trial on 34 male patients with 3 covariates; the original dataset is available at https://github.com/faosorios/heavy/blob/master/data/creatinine.rda. The 3 covariates are body weight in kg (WT), serum creatinine concentration (SC), and age in years. The goal is to predict the endogenous creatinine clearance (CR) from these covariates. We apply the data transformation recommended by Liu & Rubin (1995): the response is transformed to \log(\text{CR}), and the covariates are transformed to \log(\text{WT}), \log(\text{SC}), and \log(140-\text{age}).
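A minimal Python sketch of this setup, combining the Liu & Rubin (1995) transformation with the unnormalized log-joint of (80), is given below. The raw column names WT, SC, age, and CR are illustrative assumptions about the data; this is not the code used in our experiments.

```python
# Minimal sketch of the Student-t regression example: the Liu & Rubin (1995)
# transformation followed by the unnormalized log-joint of (80).
# Column names (WT, SC, age, CR) are illustrative assumptions.
import numpy as np
from scipy.stats import t as student_t, cauchy

def transform_creatinine(WT, SC, age, CR):
    """Log-transform covariates and response as recommended by Liu & Rubin (1995)."""
    X = np.column_stack([np.log(WT), np.log(SC), np.log(140.0 - age)])
    y = np.log(CR)
    return X, y

def log_joint_student_t(beta, X, y, df=5):
    """Cauchy(0,1) prior on each coefficient; T_5 likelihood with unit scale."""
    lp = cauchy.logpdf(beta, 0.0, 1.0).sum()
    ll = student_t.logpdf(y, df, loc=X @ beta, scale=1.0).sum()
    return lp + ll
```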

E.5.4 Bayesian sparse regression

Finally, we compare the methods on the Bayesian sparse regression problem applied to two datasets: a prostate cancer dataset containing 9 covariates and 97 observations, and a superconductivity dataset (Hamidieh, 2018) containing 83 features and 100 observations (subsampled). The prostate cancer dataset is available at https://hastie.su.domains/ElemStatLearn/datasets/prostate.data. The superconductivity dataset is available at https://archive.ics.uci.edu/ml/datasets/superconductivty+data. The model is as follows:

\log\sigma^{2} \overset{\textrm{\tiny{i.i.d.}}}{\sim} \mathcal{N}(0,1), \quad \beta_{i} \overset{\textrm{\tiny{i.i.d.}}}{\sim} \tfrac{1}{2}\mathcal{N}(0,\tau_{1}^{2}) + \tfrac{1}{2}\mathcal{N}(0,\tau_{2}^{2}), \qquad (81)
y_{j}\mid\beta,\sigma^{2} \overset{\textrm{\tiny{indep}}}{\sim} \mathcal{N}\left(x_{j}^{T}\beta,\,\sigma^{2}\right). \qquad (82)

For both datasets, we set \tau_{1}=0.1 and \tau_{2}=10; the resulting posterior dimensions are 10 and 84, respectively. When the data are only weakly informative, the posterior distribution of this model typically contains multiple modes (Xu & Campbell, 2022). We standardize the covariates for both datasets during preprocessing.
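For reference, a minimal Python sketch of the unnormalized log-joint in (81)–(82), with the equal-weight two-component Gaussian mixture prior evaluated via log-sum-exp, is given below. It is illustrative only (not our experimental code); X and y denote the standardized design matrix and responses.

```python
# Minimal sketch of the sparse regression log-joint (81)-(82) with
# tau_1 = 0.1, tau_2 = 10 as in the text; not the paper's implementation.
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

TAU1, TAU2 = 0.1, 10.0

def log_joint_sparse_reg(z, X, y):
    """z = (beta_1, ..., beta_p, log sigma^2)."""
    beta, log_sigma2 = z[:-1], z[-1]
    sigma = np.exp(0.5 * log_sigma2)
    lp = norm.logpdf(log_sigma2, 0.0, 1.0)
    # Equal-weight mixture prior: 0.5 N(0, tau_1^2) + 0.5 N(0, tau_2^2) per coefficient.
    comps = np.stack([norm.logpdf(beta, 0.0, TAU1),
                      norm.logpdf(beta, 0.0, TAU2)])
    lp += np.sum(logsumexp(comps, axis=0) - np.log(2.0))
    ll = norm.logpdf(y, X @ beta, sigma).sum()
    return lp + ll
```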

E.5.5 Additional experiment for real data examples

[Figure 17 panels: (a) Linear regression; (b) Linear regression (heavy tail); (c) Logistic regression; (d) Poisson regression; (e) Student-t regression; (f) Sparse regression; (g) Sparse regression (high dim)]
Figure 17: ELBO comparison against NFs for the real data examples (except for linear regression, which is displayed in Figure 2(a)). Each NF method is tuned under various settings, and only the best is presented for each example. Each figure also shows the effect of step size on MixFlow; overly large or small step sizes degrade MixFlow's performance. Lines indicate the median, and error regions indicate the 25th to 75th percentile from 5 runs.
[Figure 18 panels: (a) Linear regression; (b) Linear regression (heavy); (c) Logistic regression; (d) Poisson regression; (e) Student-t regression; (f) Sparse regression; (g) Sparse regression (high dim)]
Figure 18: KSD comparison for real data examples.
[Figure 19 panels (each with two subplots): (a) Linear regression; (b) Linear regression (heavy tail); (c) Logistic regression; (d) Poisson regression; (e) Sparse regression]
Figure 19: Timing results (sampling time comparison with NF, NUTS, NEO, and HMC, and per-second ESS comparison with HMC; each repeated for 100 trials) for two linear regression problems (Figures 19(a) and 19(b)), two generalized linear regression problems (Figures 19(c) and 19(d)), and one sparse regression problem (Figure 19(e)).
[Figure 20 panels: (a) Linear reg.: first 5 dimensions; (b) Linear reg. (Cauchy): first 10 dimensions; (c) Poisson reg.: all 8 dimensions; (d) Sparse reg. (high dim): first 10 dimensions]
Figure 20: Sample quality visualization of 2,000 i.i.d. samples drawn from each of MixFlow (green scatter) and NF (orange scatter) on 4 real data examples: linear regression, linear regression with a heavy-tailed prior, Poisson regression, and high-dimensional sparse regression. Pairwise kernel density estimates (KDE) are based on 20,000 NUTS samples; NUTS is initialized at the mean of a mean-field Gaussian approximation to the posterior and uses 20,000 adaptation steps (targeting an average acceptance rate of 0.7). The NF is the same one compared in Figures 4 and 19.
Table 1: Comparison of ELBO between MixFlow and RealNVP with different numbers of layers on the real data examples. Each RealNVP setting is run for 5 trials. ELBOs of MixFlow are estimated using 1000 independent trajectories, and ELBOs of RealNVP are estimated using 2000 independent samples.
MixFlow
STEPSIZE #LEAPFROG #REFRESHMENT ELBO #SAMPLE
Lin Reg 0.0005 30 2000 -429.98 1000
Lin Reg Heavy 0.000025 50 2500 117.1 1000
Log Reg 0.002 50 1500 -250.5 1000
Poiss Reg 0.0001 50 2000 82576.9 1000
Student t Reg 0.0008 80 2000 -145.41 1000
Sparse Reg 0.001 30 600 -125.9 1000
Sparse Reg High Dim 0.0012 50 2000 -538.36 2000
RealNVP
#LAYER #HIDDEN MEDIAN IQR #NAN
Lin Reg 5 15 -429.43 (-429.56, -429.42) 0
Lin Reg 10 15 -429.41 (-429.43, -429.36) 1
Lin Reg Heavy 5 52 116.47 (116.34, 116.49) 0
Lin Reg Heavy 8 52 116.65 (116.56, 116.73) 3
Lin Reg Heavy 10 52 116.26 (116.11,116.41) 3
Log Reg 5 9 -250.76 (-250.76, -250.75) 3
Log Reg 8 9 -250.65 (-250.75, -250.64) 2
Poiss Reg 3 16 82576.87 (82575.57, 82577.53) 1
Poiss Reg 5 16 82580.77 (82580.13, 82583.72) 2
Poiss Reg 8 16 N/A N/A 5
Student t Reg 5 4 -145.52 (-145.53, -145.51) 0
Student t Reg 10 4 -145.52 (-145.52, -145.49) 0
Sparse Reg 5 10 -126.33 (-126.36, -126.32) 0
Sparse Reg 8 10 -126.35 (-126.36, -126.33) 1
Sparse Reg High Dim 5 20 -531.75 (-533.41, -522.60) 0
Sparse Reg High Dim 8 20 N/A N/A 5
Table 2: Comparison of ELBO between MixFlow and PlanarFlow with different numbers of layers on the real data examples. Each PlanarFlow setting is run for 5 trials. ELBOs of MixFlow are estimated using 1000 independent trajectories, and ELBOs of PlanarFlow are estimated using 2000 independent samples.
MixFlow
STEPSIZE #LEAPFROG #REFRESHMENT ELBO #SAMPLE
Lin Reg 0.0005 30 2000 -429.98 1000
Lin Reg Heavy 0.000025 50 2500 117.1 1000
Log Reg 0.002 50 1500 -250.5 1000
Poiss Reg 0.0001 50 2000 82576.9 1000
Student t Reg 0.0008 80 2000 -145.41 1000
Sparse Reg 0.001 30 600 -125.9 1000
Sparse Reg High Dim 0.0012 50 2000 -538.36 2000
PlanarFlow
#LAYER MEDIAN IQR #NAN
Lin Reg 5 -434.73 (-435.01, -434.65) 0
Lin Reg 10 -438.79 (-439.25, -436.77) 0
Lin Reg 20 -442.4384675 (-444.64, -440.93) 0
Lin Reg Heavy 5 79.65 (-75.92, 99.10) 0
Lin Reg Heavy 10 23.13 ( 21.29, 100.10) 0
Lin Reg Heavy 20 -879.1909019 (-1641.19, -392.80) 0
Log Reg 5 -251.71 (-251.85, -251.41) 0
Log Reg 10 -251.8 (-252.03, -251.80) 0
Log Reg 20 -252.0937812 (-252.20, -251.93) 0
Poiss Reg 5 82569.58 (82569.33, 82569.68) 0
Poiss Reg 10 82568.49 (82568.12, 82569.57) 0
Poiss Reg 20 82566.40441 (82552.57, 82568.05) 2
Student t Reg 5 -145.69 (-145.71, -145.64) 0
Student t Reg 10 -145.72 (-145.73, -145.69) 0
Student t Reg 20 -145.7579931 (-145.77, -145.72) 0
Sparse Reg 5 -127.58 (-127.60, -127.42) 0
Sparse Reg 10 -127.75 (-127.87, -127.68) 0
Sparse Reg 20 -127.6677415 (-128.48, -127.65) 0
Sparse Reg High Dim 5 -571.97 (-572.09, -565.10) 0
Sparse Reg High Dim 10 -571.7 (-573.22, -565.55) 0
Sparse Reg High Dim 20 -577.1338885 (-580.30, -569.39) 0
Table 3: Comparison of ELBO between MixFlow and RadialFlow with different numbers of layers on the real data examples. Each RadialFlow setting is run for 5 trials. ELBOs of MixFlow are estimated using 1000 independent trajectories, and ELBOs of RadialFlow are estimated using 2000 independent samples.
MixFlow
STEPSIZE #LEAPFROG #REFRESHMENT ELBO #SAMPLE
Lin Reg 0.0005 30 2000 -429.98 1000
Lin Reg Heavy 0.000025 50 2500 117.1 1000
Log Reg 0.002 50 1500 -250.5 1000
Poiss Reg 0.0001 50 2000 82576.9 1000
Student t Reg 0.0008 80 2000 -145.41 1000
Sparse Reg 0.001 30 600 -125.9 1000
Sparse Reg High Dim 0.0012 50 2000 -538.36 2000
RadialFlow
#LAYER MEDIAN IQR #NAN
Lin Reg 5 -434.14 (-434.36, -434.12) 0
Lin Reg 10 -434.12 (-434.19, -434.12) 0
Lin Reg 20 -433.82 (-434.21, -433.73) 0
Lin Reg Heavy 5 111.99 (111.91, 112.01) 0
Lin Reg Heavy 10 111.99 (111.89 , 112.00) 0
Lin Reg Heavy 20 111.7 (111.61, 111.71) 0
Log Reg 5 -251.33 (-251.49, -251.31) 0
Log Reg 10 -251.17 (-251.21 , -251.06) 0
Log Reg 20 -250.96 (-250.97, -250.95) 0
Poiss Reg 5 82570.5 (82570.19, 82570.57) 0
Poiss Reg 10 82571.02 (82570.76 , 82571.05) 0
Poiss Reg 20 82571.08 (82570.90, 82571.12) 0
Student t Reg 5 -145.61 (-145.62, -145.61) 0
Student t Reg 10 -145.59 (-145.60, -145.59) 0
Student t Reg 20 -145.58 (-145.60, -145.57) 0
Sparse Reg 5 -127.28 (-127.37, -127.27) 0
Sparse Reg 10 -127 ( -127.03, -126.88) 0
Sparse Reg 20 -126.68 ( -126.69, -126.61) 0
Sparse Reg High Dim 5 -548.05 (-548.54, -547.98) 0
Sparse Reg High Dim 10 -547.2 (-547.74, -547.16) 0
Sparse Reg High Dim 20 -547.03 (-547.78, -546.76) 0
Table 4: Minimum, average, and standard deviation of the KSD obtained from NEO for each of the examples across all tuning settings. #Fail indicates the number of tuning settings that resulted in NaN outputs. The final KSDs obtained from MixFlow are also included for comparison.
MIXFLOW KSD MIN. KSD AVG. KSD SD. KSD #FAIL
banana 0.06 0.06 1.3 1.69 7/24
cross 0.13 0.11 2.78 3.09 12/24
Neal’s funnel 0.04 0.04 0.76 1.84 6/24
warped Gaussian 0.15 0.23 0.55 0.25 0/14
linear regression 1.64 9.37 143.56 84.16 18/24
linear regression (heavy tail) 51.43 396.97 1589.71 599.47 18/24
logistic regression 0.34 0.72 19.28 11.7 18/24
Poisson regression 10.57 639.08 1642.81 670.8 18/24
t regression 0.11 0.15 4.79 5.03 12/24
sparse regression 0.44 1.26 43.83 25.58 18/24
Table 5: ELBO results of UHA with different numbers of leapfrog steps and refreshments on the real data examples. Each UHA setting is run for 5 trials. ELBOs of UHA are estimated using 5000 independent samples.
#LFRG #REF MEDIAN IQR #SAMPLE
Lin Reg 10 5 -430.438 (-430.488, -430.411) 5000
Lin Reg 20 5 -430.589 (-430.596, -430.541) 5000
Lin Reg 50 5 -432.215 (-432.268, -432.146) 5000
Lin Reg 10 10 -430.468 (-430.511, -430.457) 5000
Lin Reg 20 10 -431.404 (-431.441, -431.202) 5000
Lin Reg 50 10 -432.462 (-432.56, -432.327) 5000
Lin Reg Heavy 10 5 120.188 (120.08, 120.401) 5000
Lin Reg Heavy 20 5 119.126 (118.806, 119.796) 5000
Lin Reg Heavy 50 5 119.134 (118.994, 119.247) 5000
Lin Reg Heavy 10 10 118.222 (118.125, 118.245) 5000
Lin Reg Heavy 20 10 118.191 (118.151, 118.202) 5000
Lin Reg Heavy 50 10 118.059 (117.718, 118.175) 5000
Log Reg 10 5 -250.588 (-250.607, -250.553) 5000
Log Reg 20 5 -250.555 (-250.565, -250.551) 5000
Log Reg 50 5 -250.601 (-250.64, -250.567) 5000
Log Reg 10 10 -250.643 (-250.662, -250.584) 5000
Log Reg 20 10 -250.578 (-250.615, -250.563) 5000
Log Reg 50 10 -250.913 (-251.006, -250.839) 5000
Poiss Reg 10 5 82578.439 (82578.43, 82578.459) 5000
Poiss Reg 20 5 82578.453 (82578.372, 82578.522) 5000
Poiss Reg 50 5 82578.421 (82578.342, 82578.465) 5000
Poiss Reg 10 10 82578.45 (82578.435, 82578.488) 5000
Poiss Reg 20 10 82578.449 (82578.368, 82578.458) 5000
Poiss Reg 50 10 82578.369 (82578.322, 82578.414) 5000
Student t Reg 10 5 -145.46 (-145.461, -145.438) 5000
Student t Reg 20 5 -145.424 (-145.451, -145.423) 5000
Student t Reg 50 5 -145.427 (-145.43, -145.421) 5000
Student t Reg 10 10 -145.441 (-145.454, -145.404) 5000
Student t Reg 20 10 -145.439 (-145.466, -145.411) 5000
Student t Reg 50 10 -145.41 (-145.417, -145.407) 5000
Sparse Reg 10 5 -125.936 (-125.953, -125.931) 5000
Sparse Reg 20 5 -125.954 (-125.981, -125.944) 5000
Sparse Reg 50 5 -126.279 (-126.28, -126.226) 5000
Sparse Reg 10 10 -125.978 (-125.988, -125.931) 5000
Sparse Reg 20 10 -126.032 (-126.057, -125.961) 5000
Sparse Reg 50 10 -126.557 (-126.559, -126.525) 5000
Sparse Reg High Dim 10 5 -501.201 (-501.201, -500.778) 5000
Sparse Reg High Dim 20 5 -499.467 (-499.471, -499.387) 5000
Sparse Reg High Dim 50 5 -498.968 (-499.001, -498.656) 5000
Sparse Reg High Dim 10 10 -488.7 (-488.799, -488.561) 5000
Sparse Reg High Dim 20 10 -487.216 (-487.662, -487.042) 5000
Sparse Reg High Dim 50 10 -486.423 (-486.477, -485.83) 5000