
Energetic Variational Gaussian Process Regression for Computer Experiments

Lulu Kang (lulukang@umass.edu), Department of Mathematics and Statistics, University of Massachusetts, Amherst
Yuanxing Cheng, Department of Applied Mathematics, Illinois Institute of Technology
Yiwei Wang, Department of Mathematics, University of California, Riverside
Chun Liu, Department of Applied Mathematics, Illinois Institute of Technology
Abstract

The Gaussian process (GP) regression model is a widely employed surrogate modeling technique for computer experiments, offering precise predictions and statistical inference for the computer simulators that generate experimental data. Estimation and inference for GP models can be performed in both frequentist and Bayesian frameworks. In this chapter, we construct the GP model through variational inference, particularly employing the recently introduced energetic variational inference method by Wang et al. (2021). Adhering to the GP model assumptions, we derive posterior distributions for its parameters. The energetic variational inference approach bridges Bayesian sampling and optimization and enables approximation of the posterior distributions as well as identification of the posterior mode. By incorporating a normal prior on the mean component of the GP model, we also apply shrinkage estimation to the parameters, facilitating mean function variable selection. To showcase the effectiveness of the proposed GP model, we present results from three benchmark examples.

1 Introduction

Uncertainty Quantification (Ghanem et al., 2017) is a highly interdisciplinary research domain that involves mathematics, statistics, optimization, advanced computing technology, and various science and engineering disciplines. It provides a computational framework for quantifying input and response uncertainties and making model-based predictions and their inferences for complex science or engineering systems/processes. One fundamental research problem in uncertainty quantification is to analyze computer experimental data and build a surrogate model for the computer simulation model. Gaussian process (GP) regression model, sometimes known as “kriging”, has been widely used for this purpose since the seminal paper by Sacks et al. (1989).

In this book chapter, we re-introduce Gaussian process models under the Bayesian framework. Among the various model assumptions, we choose the one from Santner et al. (2003). It has a mean function that can be helpful for model interpretation and an anisotropic covariance function that provides more flexibility to improve prediction accuracy. Different from most existing works on Bayesian GP models, we provide an alternative computational tool based on variational inference to replace the common Markov chain Monte Carlo (MCMC) sampling method. Specifically, we propose using the particle-based energetic variational inference (EVI) approach developed in Wang et al. (2021) to generate samples and approximate the posterior distribution. For short, we name the method EVI-GP. Depending on the number of particles, EVI-GP can be used to compute the maximum a posteriori (MAP) estimate of the parameters or to obtain a large number of samples to approximate the posterior distribution of the parameters. The estimation and inference of the parameters and predictions can be obtained from the MAP estimate or from the posterior samples. Thanks to the conjugate prior on the regression coefficients of the mean function, $l_{2}$-regularization is adopted to achieve model sparsity for the mean function of the GP.

The rest of the chapter is organized as follows. In Section 2, we review the Gaussian process model, including its assumption and prior distributions, and derive the posterior and posterior predictive distributions. In Section 3, the preliminary background on variational inference and the particle energetic variational inference methods are briefly reviewed. The EVI-GP method is also summarized at the end of this section. Section 4 shows three main simulation examples in which different versions of EVI-GP are compared with other existing methods. The chapter concludes in Section 5.

2 Gaussian Process Regression: Bayesian Approach

2.1 Gaussian Process Assumption

Denote by $\{\bm{x}_{i},y_{i}\}_{i=1}^{n}$ the $n$ pairs of input and output data from a certain computer experiment, where $\bm{x}_{i}\in\Omega\subseteq\mathbb{R}^{d}$ is the $i$th experimental input value and $y_{i}\in\mathbb{R}$ is the corresponding output. In this chapter, we only consider the case of a univariate response, but the proposed EVI-GP can be applied to the multi-response GP model, which involves the cross-covariance between responses (Cressie, 2015).

Gaussian process regression is built on the following model assumption of the response,

y_{i}=\bm{g}(\bm{x}_{i})^{\top}\bm{\beta}+Z(\bm{x}_{i})+\epsilon_{i},\quad i=1,\ldots,n, (1)

where $\bm{g}(\bm{x})$ is a $p$-dimensional vector of user-specified basis functions and $\bm{\beta}$ is the $p$-dimensional vector of linear coefficients corresponding to these basis functions. Usually, $\bm{g}(\bm{x})$ contains the polynomial basis functions of $\bm{x}$ up to a certain order. For example, if $\bm{x}\in\mathbb{R}^{2}$, $\bm{g}(\bm{x})=[1,x_{1},x_{2},x_{1}^{2},x_{1}x_{2},x_{2}^{2}]$ contains the intercept, the linear, the quadratic, and the interaction effects of the two input variables. The random noise terms $\epsilon_{i}$ are independently and identically distributed as $\mathcal{N}(0,\sigma^{2})$ and are independent of the other stochastic components of (1). We assume a GP prior on the stochastic function $Z(\bm{x})$, denoted as $Z(\cdot)\sim GP(0,\tau^{2}K)$, i.e., $\mathbb{E}[Z(\bm{x})]=0$ and the covariance function is

\mbox{cov}[Z(\bm{x}_{1}),Z(\bm{x}_{2})]=\tau^{2}K(\bm{x}_{1},\bm{x}_{2};\bm{\omega}).

In most applications of computer experiments, we adopt the stationarity assumption for $Z(\bm{x})$, and thus the variance $\tau^{2}$ is a constant. The function $K(\cdot,\cdot;\bm{\omega}):\Omega\times\Omega\mapsto\mathbb{R}_{+}$ is the correlation function of the stochastic process with hyperparameters $\bm{\omega}$. For it to be valid, $K(\cdot,\cdot;\bm{\omega})$ must be a symmetric positive definite kernel function. The Gaussian kernel is one of the most commonly used kernel functions and is defined as

K(\bm{x}_{1},\bm{x}_{2};\bm{\omega})=\exp\left\{-\sum_{j=1}^{d}\omega_{j}(x_{1j}-x_{2j})^{2}\right\},

with $\bm{\omega}\in\mathbb{R}^{d}$ and $\bm{\omega}\geq 0$. Although we only demonstrate the examples using the Gaussian kernel, the EVI-GP method can be applied in the same way to other kernel functions.
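To make the kernel concrete, the following short Python sketch builds the kernel matrix implied by the Gaussian kernel above; the function name gaussian_kernel and the array conventions are our illustrative choices, not part of the released EVI-GP code.

import numpy as np

def gaussian_kernel(X1, X2, omega):
    """Anisotropic Gaussian kernel K(x1, x2; omega) = exp(-sum_j omega_j (x1j - x2j)^2).
    X1: (n1, d), X2: (n2, d), omega: (d,). Returns the (n1, n2) kernel matrix."""
    diff = X1[:, None, :] - X2[None, :, :]            # (n1, n2, d) pairwise differences
    return np.exp(-np.einsum('ijk,k->ij', diff ** 2, omega))

# Example: kernel matrix of 5 random points in [0, 1]^2
rng = np.random.default_rng(0)
X = rng.uniform(size=(5, 2))
K = gaussian_kernel(X, X, omega=np.array([1.0, 2.0]))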

In terms of the response $Y(\bm{x})$, it follows a Gaussian process with the following mean and covariance,

\mathbb{E}[Y(\bm{x})] = \bm{g}(\bm{x})^{\top}\bm{\beta},\quad\forall\,\bm{x}\in\Omega, (2)
\mbox{cov}[Y(\bm{x}_{1}),Y(\bm{x}_{2})] = \tau^{2}K(\bm{x}_{1},\bm{x}_{2};\bm{\omega})+\sigma^{2}\delta(\bm{x}_{1},\bm{x}_{2}),\quad\forall\,\bm{x}_{1},\bm{x}_{2}\in\Omega, (3)
 = \tau^{2}\left[K(\bm{x}_{1},\bm{x}_{2};\bm{\omega})+\eta\,\delta(\bm{x}_{1},\bm{x}_{2})\right], (4)

where $\delta(\bm{x}_{1},\bm{x}_{2})=1$ if $\bm{x}_{1}=\bm{x}_{2}$ and $0$ otherwise, and $\eta=\sigma^{2}/\tau^{2}$, so $\eta$ is interpreted as the noise-to-signal ratio. For deterministic computer experiments, the noise component is not part of the model, i.e., $\sigma^{2}=0$. However, a nugget effect, i.e., a small $\eta$ value, is usually included in the covariance function to avoid singularity of the covariance matrix (Peng and Wu, 2014). The unknown parameters of the GP model are $\bm{\theta}=(\bm{\beta},\bm{\omega},\tau^{2},\eta)$. We now show how to obtain the estimation and inference of the parameters under the Bayesian framework.
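As a small illustration of the model components defined in this subsection, the sketch below assembles the basis matrix with rows $\bm{g}(\bm{x}_{i})^{\top}$ and the covariance matrix $\tau^{2}(\bm{K}_{n}+\eta\bm{I}_{n})$ in (3)-(4); quadratic_basis and response_covariance are hypothetical helper names, and gaussian_kernel refers to the sketch above.

import numpy as np

def quadratic_basis(x):
    """Full quadratic basis g(x) for one input point: intercept, linear,
    quadratic, and two-way interaction effects (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    d = len(x)
    terms = [1.0] + list(x)
    terms += [x[i] * x[j] for i in range(d) for j in range(i, d)]
    return np.array(terms)

def response_covariance(X, omega, tau2, eta):
    """Covariance matrix of Y at the design points X, i.e.
    tau^2 (K_n + eta I_n) from (3)-(4); uses gaussian_kernel defined above."""
    n = X.shape[0]
    return tau2 * (gaussian_kernel(X, X, omega) + eta * np.eye(n))

# Example: basis matrix G with rows g(x_i)^T and the covariance of y_n
rng = np.random.default_rng(1)
X = rng.uniform(size=(8, 2))
G = np.vstack([quadratic_basis(x) for x in X])
Cov_y = response_covariance(X, omega=np.array([1.0, 2.0]), tau2=1.0, eta=1e-6)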

2.2 GP under Bayesian Framework

We assume the following prior distributions for the parameters $\bm{\theta}=(\bm{\beta},\bm{\omega},\tau^{2},\eta)$,

\bm{\beta} \sim \mathcal{MVN}_{p}(\bm{0},\nu^{2}\bm{R}), (5)
\omega_{i} \overset{\text{i.i.d.}}{\sim} \text{Gamma}(a_{\omega},b_{\omega}), \text{ for } i=1,\ldots,d, (6)
\tau^{2} \sim \text{Inverse-}\chi^{2}(df_{\tau^{2}}), (7)
\eta \sim \text{Gamma}(a_{\eta},b_{\eta}). (8)

These distribution families are commonly used in the literature, such as Gramacy and Lee (2008) and Gramacy and Apley (2015). However, the parameters of the prior distributions require fine-tuning using testing data or cross-validation procedures. In some literature, the parameters $\bm{\omega}$, $\tau^{2}$, and $\eta$ are considered hyperparameters. The conditional posterior distribution of $\bm{\beta}$ given the data and $(\bm{\omega},\tau^{2},\eta)$ is a multivariate normal distribution, which is shown later.

Next, we derive the posterior distributions and some conditional posterior distributions. Based on the data, the sampling distribution is

\bm{y}_{n}|\bm{\theta}\sim\mathcal{MVN}_{n}\left(\bm{G}\bm{\beta},\tau^{2}(\bm{K}_{n}+\eta\bm{I}_{n})\right), (9)

where $\bm{y}_{n}$ is the vector of the $y_{i}$'s and $\bm{G}$ is the matrix whose rows are $\bm{g}(\bm{x}_{i})^{\top}$. The matrix $\bm{K}_{n}$ is the $n\times n$ kernel matrix with entries $K_{n}[i,j]=K(\bm{x}_{i},\bm{x}_{j})$, which is symmetric and positive definite, and $\bm{I}_{n}$ is the $n\times n$ identity matrix. The density function of $\bm{y}_{n}|\bm{\theta}$ is

p(\bm{y}_{n}|\bm{\theta})\propto(\tau^{2})^{-\frac{n}{2}}\det(\bm{K}_{n}+\eta\bm{I}_{n})^{-1/2}\exp\left(-\frac{1}{2\tau^{2}}(\bm{y}_{n}-\bm{G}\bm{\beta})^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}(\bm{y}_{n}-\bm{G}\bm{\beta})\right). (10)

Following Bayes’ Theorem, the joint posterior distribution of all parameters is

p(\bm{\theta}|\bm{y}_{n}) \propto p(\bm{\theta})\,p(\bm{y}_{n}|\bm{\theta})
 \propto p(\bm{\beta})\left(\prod_{j=1}^{d}p(\omega_{j})\right)p(\tau^{2})\,p(\eta)\,p(\bm{y}_{n}|\bm{\theta}).

The conditional posterior distribution of $\bm{\beta}$ can be easily obtained through conjugacy. It is also straightforward to obtain the posterior distribution $p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n})$, as shown in the appendix. The results are summarized in Proposition 1.

Proposition 1.

Using the prior distributions of $\bm{\theta}=(\bm{\beta},\bm{\omega},\tau^{2},\eta)$ in (5)-(8), the conditional posterior distribution of $\bm{\beta}$ is

\bm{\beta}|\bm{y}_{n},\bm{\omega},\tau^{2},\eta\sim\mathcal{MVN}_{p}\left(\hat{\bm{\beta}}_{n},\bm{\Sigma}_{\bm{\beta}|n}\right),

where

\bm{\Sigma}_{\bm{\beta}|n} = \left[\frac{1}{\tau^{2}}\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}+\frac{1}{\nu^{2}}\bm{R}^{-1}\right]^{-1}, (11)
\hat{\bm{\beta}}_{n} = \tau^{-2}\bm{\Sigma}_{\bm{\beta}|n}\left[\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\right]\bm{y}_{n}. (12)

The marginal posterior distribution of $(\bm{\omega},\tau^{2},\eta)$ is

p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n}) \propto \det(\bm{\Sigma}_{\bm{\beta}|n})^{1/2}\exp\left[\frac{1}{2}\hat{\bm{\beta}}_{n}^{\top}\bm{\Sigma}_{\bm{\beta}|n}^{-1}\hat{\bm{\beta}}_{n}-\frac{1}{2\tau^{2}}\bm{y}_{n}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{y}_{n}\right]
 \times(\tau^{2})^{-n/2}\det(\bm{K}_{n}+\eta\bm{I}_{n})^{-1/2}\,p(\tau^{2})\,p(\bm{\omega})\,p(\eta). (13)
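For illustration, a direct translation of (11)-(12) into Python might look like the following sketch; beta_conditional_posterior is a hypothetical name, and the inputs G, K_n, and R are assumed to be assembled as in the earlier sketches.

import numpy as np

def beta_conditional_posterior(G, Kn, y, tau2, eta, nu2, R):
    """Conditional posterior of beta given (omega, tau^2, eta), Proposition 1:
    returns (beta_hat, Sigma_beta) from equations (11)-(12). Illustrative sketch."""
    n = Kn.shape[0]
    C_inv = np.linalg.inv(Kn + eta * np.eye(n))              # (K_n + eta I_n)^{-1}
    Sigma_beta = np.linalg.inv(G.T @ C_inv @ G / tau2 + np.linalg.inv(R) / nu2)
    beta_hat = Sigma_beta @ (G.T @ C_inv @ y) / tau2
    return beta_hat, Sigma_beta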

The posterior distribution of $\tau^{2}$ can be significantly simplified if a non-informative prior is used for $\bm{\beta}$, as described in Proposition 2.

Proposition 2.

If we use a non-informative prior distribution for $\bm{\beta}$, i.e., $p(\bm{\beta})\propto 1$, and the prior distributions (6)-(8) for the remaining parameters, the conditional posterior distribution of $\bm{\beta}$ is still $\mathcal{MVN}_{p}\left(\hat{\bm{\beta}}_{n},\bm{\Sigma}_{\bm{\beta}|n}\right)$, but the covariance and mean become

\bm{\Sigma}_{\bm{\beta}|n} = \tau^{2}\left[\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right]^{-1},
\hat{\bm{\beta}}_{n} = \left[\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right]^{-1}\left[\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\right]\bm{y}_{n}.

The conditional posterior distribution of $\tau^{2}$ is

\tau^{2}|\bm{\omega},\eta,\bm{y}_{n}\sim\text{Scaled Inverse-}\chi^{2}(df_{\tau^{2}}+n-p,\hat{\tau}^{2}), (14)

where

\hat{\tau}^{2} = \frac{1+s_{n}^{2}}{df_{\tau^{2}}+n-p},
s^{2}_{n} = (\bm{y}_{n}-\bm{G}\hat{\bm{\beta}}_{n})^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}(\bm{y}_{n}-\bm{G}\hat{\bm{\beta}}_{n}).

The marginal posterior of $(\bm{\omega},\eta)$ is

p(\bm{\omega},\eta|\bm{y}_{n})\propto(\hat{\tau}^{2})^{-\frac{1}{2}(df_{\tau^{2}}+n-p)}\det\left(\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right)^{-1/2}\det(\bm{K}_{n}+\eta\bm{I}_{n})^{-1/2}\,p(\bm{\omega})\,p(\eta). (15)

If we use non-informative prior distributions for all the parameters, i.e., $p(\bm{\beta})\propto 1$, $\omega_{i}\overset{\text{i.i.d.}}{\sim}\text{Uniform}[a_{\omega},b_{\omega}]$ for $i=1,\ldots,d$, $p(\tau^{2})\propto\tau^{-2}$, and $\eta\sim\text{Uniform}[a_{\eta},b_{\eta}]$, the Bayesian framework is equivalent to the empirical Bayesian or maximum likelihood estimation method. The GP regression model estimated via this frequentist approach is common in both methodology research and applications (Santner et al., 2003; Fang et al., 2005; Gramacy, 2020). In this chapter, we consider the empirical Bayesian estimation as a special case of the Bayesian GP model. The choice between the two types of prior distributions for $\bm{\beta}$, informative or non-informative, depends on the dimension of the input variables, the assumption on the basis functions $\bm{g}(\bm{x})$, the goal of GP modeling (accurate prediction vs. interpretation), and sometimes the application of the computer experiment. Both types have their unique advantages and shortcomings. The non-informative prior distribution for $\bm{\beta}$ reduces the computation involved in the posterior sampling for $\tau^{2}$, but we lose the $l_{2}$-regularization effect on $\bm{\beta}$ brought by the informative prior.
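As an example of how this marginal posterior can be evaluated in practice, the sketch below computes $\log p(\bm{\omega},\eta|\bm{y}_{n})$ in (15) up to an additive constant. The function name, the use of the gaussian_kernel helper above, and the rate parameterization of the Gamma priors are our assumptions, not part of the authors' released code.

import numpy as np
from scipy.stats import gamma

def log_post_omega_eta(omega, eta, X, y, G, df_tau2=0,
                       a_om=1.0, b_om=0.5, a_eta=1.0, b_eta=0.5):
    """Log of the marginal posterior p(omega, eta | y_n) in (15), up to an
    additive constant, under the non-informative prior for beta.
    Illustrative sketch; gaussian_kernel is the helper defined earlier, and the
    Gamma priors are assumed to use the rate parameterization."""
    n, p = G.shape
    C = gaussian_kernel(X, X, omega) + eta * np.eye(n)       # K_n + eta I_n
    C_inv = np.linalg.inv(C)
    GtCiG = G.T @ C_inv @ G
    beta_hat = np.linalg.solve(GtCiG, G.T @ C_inv @ y)
    resid = y - G @ beta_hat
    s2 = resid @ C_inv @ resid                               # s_n^2
    tau2_hat = (1.0 + s2) / (df_tau2 + n - p)                # hat{tau}^2
    logdet = lambda A: np.linalg.slogdet(A)[1]
    log_prior = (gamma.logpdf(omega, a_om, scale=1.0 / b_om).sum()
                 + gamma.logpdf(eta, a_eta, scale=1.0 / b_eta))
    return (-0.5 * (df_tau2 + n - p) * np.log(tau2_hat)
            - 0.5 * logdet(GtCiG) - 0.5 * logdet(C) + log_prior)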

One issue with the informative prior distribution is how to choose its parameters, i.e., the constant variance $\nu^{2}$ and the correlation matrix $\bm{R}$. Here we recommend using a cross-validation procedure to select $\nu^{2}$. If the mean function $\bm{g}(\bm{x})^{\top}\bm{\beta}$ is a polynomial function of the input variables, we specify the matrix $\bm{R}$ to be a diagonal matrix $\bm{R}=\mbox{diag}\{1,r,\ldots,r,r^{2},\ldots,r^{2},\ldots\}$, where $r\in(0,1)$ is a user-specified parameter. The power index of $r$ is the same as the order of the corresponding polynomial term. For example, if $\bm{g}(\bm{x})^{\top}\bm{\beta}$ with $\bm{x}\in\mathbb{R}^{2}$ is a full quadratic model and contains the terms $\bm{g}(\bm{x})=[1,x_{1},x_{2},x_{1}^{2},x_{2}^{2},x_{1}x_{2}]^{\top}$, the corresponding prior correlation matrix should be specified as $\bm{R}=\mbox{diag}\{1,r,r,r^{2},r^{2},r^{2}\}$. In this way, the prior variance of an effect decreases exponentially as the order of the effect increases, following the effect hierarchy principle (Hamada and Wu, 1992; Wu and Hamada, 2021), which states that lower-order effects are more important than higher-order effects, and effects of the same order are equally important. The effect hierarchy principle can reduce the size of the model and avoid including higher-order and less significant model terms. Such a prior distribution was first proposed by Joseph (2006), and later used in Kang and Joseph (2009); Ai et al. (2009); Kang et al. (2018); Kang and Huang (2019); Kang et al. (2021, 2023).
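A possible construction of the diagonal prior correlation matrix $\bm{R}$ described above is sketched below; prior_corr_matrix is a hypothetical name, and the term ordering is assumed to group effects by polynomial order (the power of $r$ depends only on each term's order, so the ordering within an order does not matter).

import numpy as np
from itertools import combinations_with_replacement

def prior_corr_matrix(d, r, order=2):
    """Diagonal prior correlation matrix R = diag{1, r, ..., r, r^2, ...} for a
    polynomial mean model of the given order, following the effect hierarchy
    principle: the power of r equals the order of the corresponding term.
    Illustrative sketch consistent with the quadratic_basis sketch above."""
    powers = [0]                                      # intercept has order 0
    powers += [1] * d                                 # linear effects
    for k in range(2, order + 1):                     # higher-order terms
        powers += [k] * len(list(combinations_with_replacement(range(d), k)))
    return np.diag([r ** a for a in powers])

# Example: d = 2, full quadratic mean model, r = 1/3 gives
# R = diag{1, r, r, r^2, r^2, r^2}
R = prior_corr_matrix(d=2, r=1/3, order=2)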

Proposition 3.

Given the parameters $(\bm{\omega},\tau^{2},\eta)$, the posterior predictive distribution of $y(\bm{x})$ at any query point $\bm{x}$ is the following normal distribution,

y(\bm{x})|\bm{y}_{n},\bm{\omega},\tau^{2},\eta\sim\mathcal{N}(\hat{\mu}(\bm{x}),\sigma^{2}_{n}(\bm{x})), (16)

where

\hat{\mu}(\bm{x}) = \bm{g}(\bm{x})^{\top}\hat{\bm{\beta}}_{n}+K(\bm{x},\mathcal{X}_{n})(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}(\bm{y}_{n}-\bm{G}\hat{\bm{\beta}}_{n}), (17)
\sigma_{n}^{2}(\bm{x}) = \tau^{2}\left\{1-K(\bm{x},\mathcal{X}_{n})(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}K(\mathcal{X}_{n},\bm{x})+\bm{c}(\bm{x})^{\top}\left[\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right]^{-1}\bm{c}(\bm{x})\right\}, (18)

where

\bm{c}(\bm{x}) = \bm{g}(\bm{x})-\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}K(\mathcal{X}_{n},\bm{x}),
K(\bm{x},\mathcal{X}_{n}) = K(\mathcal{X}_{n},\bm{x})^{\top}=[K(\bm{x},\bm{x}_{1}),\ldots,K(\bm{x},\bm{x}_{n})].

For the non-informative prior, the posterior predictive distribution $y(\bm{x})|\bm{y}_{n},\bm{\omega},\eta$ is the same except that $\tau^{2}$ is replaced by $\hat{\tau}^{2}$.

A detailed proof can be found in Santner et al. (2003) and Rasmussen and Williams (2006).
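For illustration, the predictive equations (17)-(18) under the non-informative prior can be coded directly as in the sketch below; gp_predict and gfun are hypothetical names, and gaussian_kernel refers to the earlier sketch.

import numpy as np

def gp_predict(xnew, X, y, G, gfun, omega, tau2, eta):
    """Posterior predictive mean and variance (17)-(18) at a query point xnew,
    with beta_hat plugged in as in Proposition 3 (non-informative prior).
    Illustrative sketch; gfun(x) returns the basis vector g(x)."""
    n = X.shape[0]
    C = gaussian_kernel(X, X, omega) + eta * np.eye(n)       # K_n + eta I_n
    C_inv = np.linalg.inv(C)
    k = gaussian_kernel(xnew[None, :], X, omega).ravel()     # K(x, X_n)
    GtCiG_inv = np.linalg.inv(G.T @ C_inv @ G)
    beta_hat = GtCiG_inv @ (G.T @ C_inv @ y)                 # hat{beta}_n
    g = gfun(xnew)
    mu = g @ beta_hat + k @ C_inv @ (y - G @ beta_hat)       # eq. (17)
    c = g - G.T @ C_inv @ k                                  # c(x)
    var = tau2 * (1.0 - k @ C_inv @ k + c @ GtCiG_inv @ c)   # eq. (18)
    return mu, var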

Thanks to the Gaussian process assumption and the conditionally conjugate prior distributions for $\bm{\beta}$ and $\tau^{2}$, Propositions 1, 2, and 3 give explicit and easy-to-sample conditional posterior distributions of $\bm{\beta}$ (and $\tau^{2}$) as well as the posterior predictive distribution. Therefore, generating samples from $p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n})$ in (13) or $p(\bm{\omega},\eta|\bm{y}_{n})$ in (15) is the computational bottleneck for GP models. Since $p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n})$ and $p(\bm{\omega},\eta|\bm{y}_{n})$ do not belong to any known distribution families, the Metropolis-Hastings (MH) algorithm (Robert et al., 1999; Gelman et al., 2014), Hamiltonian Monte Carlo (HMC) (Neal, 1996), or the Metropolis-adjusted Langevin algorithm (MALA) (Roberts and Rosenthal, 1998) can be used for sampling. In this chapter, we introduce readers to an alternative computational tool, namely a variational inference approach, to approximate the posterior distribution $p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n})$ or $p(\bm{\omega},\eta|\bm{y}_{n})$. More specifically, we use energetic variational inference, a particle method, to generate posterior samples.

3 Energetic Variational Inference Gaussian Process

Variational inference-based GP models have been explored in prior works such as Tran et al. (2016), Cheng and Boots (2017), and Wynne and Wild (2022). Despite sharing the variational inference idea, the proposed Energetic Variational Inference (EVI) GP differs significantly from these existing methods regarding the specific variational techniques employed. Tran et al. (2016) utilized auto-encoding, while Cheng and Boots (2017) and Wynne and Wild (2022) employed the mean-field variational inference (Blei et al., 2017).

The EVI approach presented in this chapter is a newly introduced particle-based method. It offers simplicity in implementation across diverse applications without the need for training any neural networks. Notably, EVI establishes a connection between the MAP procedure and posterior sampling through a user-specified number of particles. In contrast to the complexity of auto-encoding variational methods, the particle-based approach is much simpler, devoid of any network intricacies. Moreover, in comparison to mean-field methods, both particle-based and MAP-based approaches (auto-encoding falls into the MAP-based category) can exhibit enhanced accuracy, as they do not impose any parametric assumptions on a feasible family of distributions in the optimization to solve the variational problem.

This section provides a brief introduction to the Energetic Variational Inference (EVI) approach. Subsequently, we adapt this method for the estimation and prediction of the GP regression model.

3.1 Preliminary: Energetic Variational Inference

Motivated by the energetic variational approaches for modeling the dynamics of non-equilibrium thermodynamical systems (Giga et al., 2017), the energetic variational inference framework uses a continuous energy-dissipation law to specify the dynamics of minimizing the objective function in machine learning problems. Under the EVI framework, a practical algorithm can be obtained by introducing a suitable discretization to the continuous energy-dissipation law. This idea was introduced and applied to variational inference by Wang et al. (2021). It can also be applied to other machine learning problems similar to Trillos and Sanz-Alonso (2020) and E et al. (2020).

We first introduce EVI using the continuous formulation. Let $\bm{\phi}_{t}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ be the dynamic flow map at time $t$ that continuously transforms the $d$-dimensional distribution from an initial distribution toward the target one; we require the map $\bm{\phi}_{t}$ to be smooth and one-to-one. The functional $\mathcal{F}(\bm{\phi}_{t})$ is a user-specified divergence or other machine learning objective functional, such as the KL-divergence in Wang et al. (2021). Taking the analogy of a thermodynamic system, $\mathcal{F}(\bm{\phi}_{t})$ is the Helmholtz free energy. Following the First and Second Laws of thermodynamics (Giga et al., 2017), with the kinetic energy set to zero, the energy-dissipation law is

\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}(\bm{\phi}_{t})=-\triangle(\bm{\phi}_{t},\dot{\bm{\phi}}_{t}), (19)

where $\triangle(\bm{\phi}_{t},\dot{\bm{\phi}}_{t})$ is a user-specified functional representing the rate of energy dissipation, and $\dot{\bm{\phi}}_{t}$ is the derivative of $\bm{\phi}_{t}$ with respect to time $t$, so $\dot{\bm{\phi}}_{t}$ can be interpreted as the "velocity" of the transformation. Each variational formulation gives a natural path of decreasing the objective functional $\mathcal{F}(\bm{\phi}_{t})$ toward an equilibrium.

The dissipation functional should satisfy $\triangle(\bm{\phi}_{t},\dot{\bm{\phi}}_{t})\geq 0$ so that $\mathcal{F}(\bm{\phi}_{t})$ decreases with time. As discussed in Wang et al. (2021), there are many ways to specify $\triangle(\bm{\phi}_{t},\dot{\bm{\phi}}_{t})$; the simplest among them is a quadratic functional in terms of $\dot{\bm{\phi}}_{t}$,

\triangle(\bm{\phi}_{t},\dot{\bm{\phi}}_{t})=\int_{\Omega_{t}}\rho_{[\bm{\phi}_{t}]}\|\dot{\bm{\phi}}_{t}\|_{2}^{2}\,\mathrm{d}\bm{x},

where $\rho_{[\bm{\phi}_{t}]}$ denotes the density of the current distribution, i.e., the initial distribution transformed by $\bm{\phi}_{t}$, $\Omega_{t}$ is the current support, and $\|{\bm{a}}\|_{2}^{2}={\bm{a}}^{\top}{\bm{a}}$ for any ${\bm{a}}\in\mathbb{R}^{d}$. This quadratic functional is appealing since it has a simple derivative, i.e.,

\frac{\delta\triangle(\bm{\phi}_{t},\dot{\bm{\phi}}_{t})}{\delta\dot{\bm{\phi}}_{t}}=2\rho_{[\bm{\phi}_{t}]}\dot{\bm{\phi}}_{t},

where $\delta$ is the variation operator, i.e., the functional derivative.

With the specified energy-dissipation law (19), the energetic variational approach derives the dynamics of the system through two variational procedures, the Least Action Principle (LAP) and the Maximum Dissipation Principle (MDP), which lead to

\frac{\delta\frac{1}{2}\triangle}{\delta\dot{\bm{\phi}}_{t}}=-\frac{\delta\mathcal{F}}{\delta\bm{\phi}_{t}}.

The approach is motivated by the seminal works of Rayleigh (Rayleigh, 1873) and Onsager (Onsager, 1931a, b). Using the quadratic dissipation $\triangle(\bm{\phi}_{t},\dot{\bm{\phi}}_{t})$ above, the dynamics of decreasing $\mathcal{F}$ is

\rho_{[\bm{\phi}_{t}]}\dot{\bm{\phi}}_{t}=-\frac{\delta\mathcal{F}}{\delta\bm{\phi}_{t}}. (20)

In general, this continuous formulation is difficult to solve, since the manifold of $\bm{\phi}_{t}$ is infinite-dimensional. Naturally, there are different approaches to approximating an infinite-dimensional manifold by a finite-dimensional one. One such approach, used in Wang et al. (2021), is to use particles (or samples) to approximate $\rho_{[\bm{\phi}_{t}]}$ in (19) with kernel regularization, before any variational steps. This leads to a discrete version of the energy-dissipation law,

\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_{h}(\{\bm{x}_{i}(t)\}_{i=1}^{N})=-\triangle_{h}(\{\bm{x}_{i}(t)\}_{i=1}^{N},\{\dot{\bm{x}}_{i}(t)\}_{i=1}^{N}). (21)

Here $\{\bm{x}_{i}(t)\}_{i=1}^{N}$ are the locations of the $N$ particles at time $t$, and $\dot{\bm{x}}_{i}(t)$ is the derivative of $\bm{x}_{i}$ with respect to $t$ and thus the velocity of the $i$th particle as it moves toward the target distribution. The subscript $h$ of $\mathcal{F}$ and $\triangle$ denotes the bandwidth parameter of the kernel function used in the kernelization. Applying the variational steps to (21), we obtain the dynamics of decreasing $\mathcal{F}$ at the particle level,

\frac{\delta\frac{1}{2}\triangle_{h}}{\delta\dot{\bm{x}}_{i}(t)}=-\frac{\delta\mathcal{F}_{h}}{\delta\bm{x}_{i}},\quad\text{for }i=1,\ldots,N. (22)

This leads to an ODE system for $\{\bm{x}_{i}(t)\}_{i=1}^{N}$, which can be solved by different numerical schemes. The solution is the set of particles approximating the target distribution.

Due to limited space, we only briefly review the EVI framework and explain it intuitively. Readers can find rigorous and concrete explanations in Wang et al. (2021), which also suggests different ways to specify the energy dissipation, to approximate the continuous formulation, and to solve the ODE system.

3.2 EVI-GP

In this chapter, we use the KL-divergence as the energy functional, as demonstrated in Wang et al. (2021),

D_{\text{KL}}(\rho\,||\,\rho^{*})=\int_{\Omega}\rho(\bm{x})\log\left(\frac{\rho(\bm{x})}{\rho^{*}(\bm{x})}\right)\mathrm{d}\bm{x}, (23)

where $\rho^{*}(\bm{x})$ is the density function of the target distribution with support region $\Omega$ and $\rho(\bm{x})$ is the density that approximates $\rho^{*}(\bm{x})$. For EVI-GP, $\rho^{*}$ is the posterior distribution $p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n})$ or $p(\bm{\omega},\eta|\bm{y}_{n})$.

Using the KL-divergence, the divergence functional at time $t$ is

\mathcal{F}(\bm{\phi}_{t})=\int\left(\rho_{[\bm{\phi}_{t}]}(\bm{x})\log\rho_{[\bm{\phi}_{t}]}(\bm{x})+\rho_{[\bm{\phi}_{t}]}(\bm{x})V(\bm{x})\right)\mathrm{d}\bm{x},

where $V(\bm{x})=-\log\rho^{*}(\bm{x})$, which is known up to an additive constant. The discrete version of the energy becomes

\mathcal{F}_{h}\left(\{\bm{x}_{i}\}_{i=1}^{N}\right)=\frac{1}{N}\sum_{i=1}^{N}\left(\ln\left(\frac{1}{N}\sum_{j=1}^{N}K_{h}(\bm{x}_{i},\bm{x}_{j})\right)+V(\bm{x}_{i})\right), (24)

and the discrete dissipation is

-2\mathcal{D}_{h}\left(\{\bm{x}_{i}\}_{i=1}^{N}\right)=-\frac{1}{N}\sum_{i=1}^{N}|\dot{\bm{x}}_{i}(t)|^{2}. (25)

Applying the variational step to (21), we obtain (22), which is equivalent to the following nonlinear ODE system:

\dot{\bm{x}}_{i}(t)=-\left(\frac{\sum_{j=1}^{N}\nabla_{\bm{x}_{i}}K_{h}(\bm{x}_{i},\bm{x}_{j})}{\sum_{j=1}^{N}K_{h}(\bm{x}_{i},\bm{x}_{j})}+\sum_{k=1}^{N}\frac{\nabla_{\bm{x}_{i}}K_{h}(\bm{x}_{k},\bm{x}_{i})}{\sum_{j=1}^{N}K_{h}(\bm{x}_{k},\bm{x}_{j})}+\nabla_{\bm{x}_{i}}V(\bm{x}_{i})\right), (26)

for $i=1,\ldots,N$. The iterative update of the $N$ particles $\{\bm{x}_{i}\}_{i=1}^{N}$ involves solving the nonlinear ODE system (26) via the optimization problem (27) at the $m$th iteration step,

\{\bm{x}^{m+1}_{i}\}_{i=1}^{N}=\text{argmin}_{\{\bm{x}_{i}\}_{i=1}^{N}}J_{m}(\{\bm{x}_{i}\}_{i=1}^{N}), (27)

where

J_{m}(\{\bm{x}_{i}\}_{i=1}^{N}):=\frac{1}{2\tau}\sum_{i=1}^{N}\|\bm{x}_{i}-\bm{x}_{i}^{m}\|^{2}/N+\mathcal{F}_{h}(\{\bm{x}_{i}\}_{i=1}^{N}). (28)

Wang et al. (2021) emphasized the advantages of using the implicit Euler solver for enhanced numerical stability in this process. We summarize the algorithm using the implicit Euler scheme to solve the ODE system (26) in Algorithm 1, where MaxIter is the maximum number of iterations of the outer loop. The optimization problem in the inner loop is solved by L-BFGS in our implementation.

Algorithm 1 EVI with Implicit Euler Scheme (EVI-Im)
  Input: The target distribution $\rho^{*}(\bm{x})$ and a set of initial particles $\{\bm{x}_{i}^{0}\}_{i=1}^{N}$ drawn from a prior $\rho_{0}(\bm{x})$.
  Output: A set of particles $\{\bm{x}_{i}^{*}\}_{i=1}^{N}$ approximating $\rho^{*}$.
  for $m=0$ to MaxIter do
     Solve $\{\bm{x}^{m+1}_{i}\}_{i=1}^{N}=\text{argmin}_{\{\bm{x}_{i}\}_{i=1}^{N}}J_{m}(\{\bm{x}_{i}\}_{i=1}^{N})$.
     Update $\{\bm{x}_{i}^{m}\}_{i=1}^{N}$ by $\{\bm{x}^{m+1}_{i}\}_{i=1}^{N}$.
  end for
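Below is a minimal PyTorch sketch of Algorithm 1 with the discrete free energy (24) and the proximal objective (28). The function name evi_im, the specific Gaussian form of $K_{h}$, and the default settings are illustrative assumptions rather than the released EVI-GP implementation; log_rho_star must be a differentiable function returning $\log\rho^{*}(\bm{x})$ up to a constant (e.g., the log of (13) or (15)).

import torch

def evi_im(log_rho_star, x0, h=0.02, tau=1.0, max_iter=500, inner_iter=100, tol=1e-8):
    """Minimal sketch of Algorithm 1 (EVI-Im): particles x (N, d) approximate
    rho^* by repeatedly minimizing J_m in (28), with free energy F_h in (24).
    log_rho_star(z) returns log rho^*(z_i) for each row of z, so V = -log_rho_star."""
    x = x0.clone()
    N = x.shape[0]

    def free_energy(z):
        sq = torch.cdist(z, z) ** 2                     # pairwise squared distances
        Kh = torch.exp(-sq / (2.0 * h))                 # one common choice of K_h (assumed form)
        return (torch.log(Kh.mean(dim=1)) - log_rho_star(z)).mean()

    for m in range(max_iter):
        x_prev = x.detach().clone()
        x = x_prev.clone().requires_grad_(True)
        opt = torch.optim.LBFGS([x], max_iter=inner_iter, line_search_fn="strong_wolfe")

        def closure():
            opt.zero_grad()
            # J_m = (1/(2 tau)) sum_i ||x_i - x_i^m||^2 / N + F_h({x_i})
            loss = ((x - x_prev) ** 2).sum() / (2.0 * tau * N) + free_energy(x)
            loss.backward()
            return loss

        opt.step(closure)
        if (x - x_prev).norm(dim=1).mean() < tol:       # convergence check
            break
    return x.detach()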

We propose two adaptations of the EVI-Im algorithm for GP model estimation and prediction. The first approach generates $N$ particles using Algorithm 1 to approximate the posterior $p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n})$ when an informative normal prior distribution is adopted for $\bm{\beta}$, or $p(\bm{\omega},\eta|\bm{y}_{n})$ when a non-informative prior distribution is used for $\bm{\beta}$. These $N$ particles serve as posterior samples. Conditional on their values, we can generate samples of $\bm{\beta}$ from its conditional posterior distribution (Proposition 1), or samples of both $\bm{\beta}$ and $\tau^{2}$ from their conditional posterior distributions (Proposition 2). Following Proposition 3, we can then predict $y(\bm{x})$ and construct confidence intervals conditional on the posterior samples.

In the second approach, we employ EVI-Im solely as an optimization tool for maximum a posteriori (MAP) estimation, i.e., the minimization of $V(\bm{x})=-\log\rho^{*}(\bm{x})$. This can be done by simply setting the free energy to $\mathcal{F}(\bm{x})=V(\bm{x})$ and $N=1$. As a result, the optimization problem (27) at the $m$th iteration becomes

\bm{x}^{m+1}=\text{argmin}_{\bm{x}}\ \frac{1}{2\tau}\|\bm{x}-\bm{x}^{m}\|^{2}+V(\bm{x}),

which is the celebrated proximal point algorithm (PPA) (Rockafellar, 1976). Therefore, EVI is a method that connects posterior sampling and MAP estimation under the same general framework. Based on the MAP estimate, we can obtain the posterior mode of $\bm{\beta}$ (or $\bm{\beta}$ and $\tau^{2}$), the mode of the prediction, and the corresponding inference. For short, we name the first approach EVI-post and the second EVI-MAP.
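The EVI-MAP special case can be sketched as the proximal point iteration below, again with L-BFGS as the inner solver; evi_map is a hypothetical name, and the code is only a sketch of the $N=1$ case of Algorithm 1.

import torch

def evi_map(log_rho_star, x0, tau=1.0, max_iter=500, inner_iter=100, tol=1e-8):
    """Minimal sketch of EVI-MAP: the proximal point iteration
    x^{m+1} = argmin_x (1/(2 tau)) ||x - x^m||^2 + V(x), with V = -log rho^*."""
    x = x0.clone()
    for m in range(max_iter):
        x_prev = x.detach().clone()
        x = x_prev.clone().requires_grad_(True)
        opt = torch.optim.LBFGS([x], max_iter=inner_iter, line_search_fn="strong_wolfe")

        def closure():
            opt.zero_grad()
            loss = ((x - x_prev) ** 2).sum() / (2.0 * tau) - log_rho_star(x)
            loss.backward()
            return loss

        opt.step(closure)
        if (x - x_prev).norm() < tol:
            break
    return x.detach()   # MAP estimate of (omega, eta) or (omega, tau^2, eta)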

4 Numerical Examples

In this section, we demonstrate the performance of EVI-GP and compare it with three commonly used GP packages in R: GPfit (MacDonald et al., 2015), mlegp (Dancik and Dorman, 2008), and laGP (Gramacy and Apley, 2015; Gramacy, 2016). The comparison is illustrated via three examples. The first is a one-dimensional toy example, and the other two are the OTL Circuit and Borehole examples from the online library built by Surjanovic and Bingham (2013). All three examples are frequently used benchmarks in the computer experiment literature. The code for EVI-GP and all the examples is available on GitHub at https://github.com/XavierOwen/EVIGP.

EVI-GP is implemented in Python. The proximal point optimization in Algorithm 1 is solved by the LBFGS function of the PyTorch library (Paszke et al., 2019). Some arguments of the EVI-GP code are set the same for all three examples. The argument history_size of the LBFGS function is set to 50, and to perform line search, line_search_fn is set to strong_wolfe, i.e., the strong Wolfe conditions. In the EVI algorithm, the maximum allowed number of iterations of the inner loop is set to 100 and the maximum allowed number of iterations of the outer loop (or maximum epoch) is set to 500. The outer loop of the EVI is terminated either when the maximum epoch is reached or when the convergence condition is achieved. We consider convergence to be achieved if $\frac{1}{N}\sum_{i=1}^{N}||\bm{x}_{i}(t)-\bm{x}_{i}(t-1)||_{2}<\verb|Tol|$ with $\verb|Tol|=10^{-8}$. These parameter settings were chosen through many experiments and lead to satisfactory performance in the following examples.

In each example, we use the same pair of training and testing datasets for all the methods under comparison. The designs for both datasets are generated via the maximinLHS procedure from the lhs package in R (Carnell, 2022). We run 100 simulations. In each simulation, we compute the standardized root mean square prediction error (RMSPE) on the test data set, defined as

\text{standardized RMSPE}=\left(\sqrt{\frac{1}{n_{\text{test}}}\sum_{i}(\hat{y}_{i}-y_{i})^{2}}\right)\Bigg/\text{standard deviation of the test responses }\bm{y},

where $\hat{y}_{i}$ is the predicted value at the test point $\bm{x}_{i}$ and $y_{i}$ is the corresponding true value. Box plots of the 100 standardized RMSPEs of all methods are shown for comparison.
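For completeness, the standardized RMSPE can be computed as in the following sketch (standardized_rmspe is a hypothetical helper name).

import numpy as np

def standardized_rmspe(y_pred, y_true):
    """Standardized RMSPE on a test set: RMSPE divided by the standard
    deviation of the test responses (illustrative sketch)."""
    rmspe = np.sqrt(np.mean((y_pred - y_true) ** 2))
    return rmspe / np.std(y_true)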

4.1 One-Dim Toy Example

In the one-dimensional example, the data are generated from the test function $y(x)=x\sin(x)+\epsilon$ for $x\in[0,10]$. The sizes of the training and test data sets are $n=11$ and $m=100$, respectively. The variance of the noise is $\sigma^{2}=0.5^{2}$.

The parameters of the prior distributions of the EVI-GP are $a_{\omega}=a_{\eta}=1$, $b_{\omega}=b_{\eta}=0.5$, and $df_{\tau^{2}}=0$, which is equivalent to $p(\tau^{2})\propto 1/\tau^{2}$. We consider two possible mean functions for the GP model, the constant mean $\mu(x)=\beta_{0}$ and the linear mean $\mu(x)=\beta_{0}+\beta_{1}x$. There is no need for parameter regularization for these simple mean functions, and thus we use the non-informative prior for $\bm{\beta}$. We use EVI-post to approximate $p(\omega,\eta|\bm{y}_{n})$ and set the number of particles $N=100$, kernel bandwidth $h=0.02$, and stepsize $\tau=1$ in the EVI procedure. The initial particles are sampled from the uniform distribution on $[0,0.1]\times[0.1,0.4]$. Figures 1 and 2 show the posterior modes and particles returned by EVI-MAP and EVI-post, as well as the predictions and predictive confidence intervals returned by both methods.

Figure 1: One-dimensional toy example: (a) GP with constant mean, EVI-MAP; (b) GP with constant mean, EVI-post; (c) GP with linear mean, EVI-MAP; (d) GP with linear mean, EVI-post. In panels 1(a)-1(d), the contours are plotted according to $p(\omega,\eta)$ evaluated on mesh points without the normalizing constant, the red points are the modes of the posterior distribution of $(\omega,\eta)$ returned by the EVI-MAP approach, and the black dots in panels 1(b) and 1(d) are the particles returned by the EVI-post approach.
Figure 2: The Gaussian process prediction. Black dots are the training data, the red curve is the true $y(x)=x\sin(x)$, the blue curve is the predicted curve, and the light blue area shows the $95\%$ predictive confidence interval.

We compare EVI-GP with the three R packages. For the GPfit package, we set the nugget threshold to $[0,25]$, corresponding to $\eta$ in our method, and use the Gaussian kernel as the correlation function. For the mlegp package, we set the argument constMean to 0 for the constant mean model and 1 for the linear mean model. The optimization-related arguments in mlegp are set as follows: the number of simplex tries is 5, the maximum number of iterations for each simplex is 5000, the relative tolerance is $10^{-8}$, the BFGS maximum number of iterations is 500, the BFGS tolerance is 0.01, and the BFGS bandwidth is $10^{-10}$. Other parameters in these two packages are set to their defaults. For the laGP package, we set the argument start to 6, end to 10, d to NULL, g to $10^{-4}$, method to each of "alc", "alcray", "mspe", "nn", and "fish", and Xi.ret to TRUE. The comparison in terms of prediction accuracy is shown in Table 1 and Figure 3. Table 1 shows the mean of the standardized RMSPEs from the 100 simulations and Figure 3 shows the box plots of the RMSPEs. In the third column of Table 1, we only list the best result among the three R packages. Both versions of EVI-GP perform almost equally well in terms of prediction accuracy, and both significantly outperform the three R packages.

Table 1: Standardized RMSPE of the one-dimensional toy example.
Mean Model   EVI-post   EVI-MAP   GPfit/mlegp/laGP
Constant     0.1310     0.1311    0.1500
Linear       0.1195     0.1194    0.1584
Figure 3: Box plots of the standardized RMSPEs of the toy example with all the different mean models and methods. The labels of the proposed EVI-GP methods are underlined.

4.2 OTL-Circuit function

We test the proposed method using the OTL circuit function, which has input dimension $d=6$. The OTL circuit function models an output transformerless push-pull circuit and is defined as follows,

V_{m}(\bm{x}) = \frac{(V_{b1}+0.74)\,I\,(R_{c2}+9)}{I(R_{c2}+9)+R_{f}}+\frac{11.35R_{f}}{I(R_{c2}+9)+R_{f}}+\frac{0.74R_{f}\,I\,(R_{c2}+9)}{(I(R_{c2}+9)+R_{f})R_{c1}},

where $V_{b1}=12R_{b2}/(R_{b1}+R_{b2})$, $R_{b1}\in[50,150]$ is the resistance of b1 (K-Ohms), $R_{b2}\in[25,70]$ is the resistance of b2 (K-Ohms), $R_{f}\in[0.5,3]$ is the resistance of f (K-Ohms), $R_{c1}\in[1.2,2.5]$ is the resistance of c1 (K-Ohms), $R_{c2}\in[0.25,1.2]$ is the resistance of c2 (K-Ohms), and $I\in[50,300]$ is the current gain (Amperes).
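A direct Python implementation of the OTL circuit function, as we read the formula above, is sketched below (otl_circuit is a hypothetical name).

import numpy as np

def otl_circuit(Rb1, Rb2, Rf, Rc1, Rc2, I):
    """OTL circuit function V_m(x) as defined above (illustrative sketch)."""
    Vb1 = 12.0 * Rb2 / (Rb1 + Rb2)
    denom = I * (Rc2 + 9.0) + Rf
    return ((Vb1 + 0.74) * I * (Rc2 + 9.0) / denom
            + 11.35 * Rf / denom
            + 0.74 * Rf * I * (Rc2 + 9.0) / (denom * Rc1))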

The sizes of the training and testing datasets are 200 and 1000, respectively. We set the variance of the noise to $\sigma^{2}=0.02^{2}$. For the prior distributions of $\bm{\omega}$, we let $\omega_{i}\overset{\text{i.i.d.}}{\sim}\text{Gamma}(a_{\omega}=1,b_{\omega}=2)$ for $i=2,\ldots,8$, but $\omega_{1}\sim\text{Gamma}(a_{\omega}=4,b_{\omega}=2)$. The prior distribution of $\eta$ is $\text{Gamma}(a_{\eta}=1,b_{\eta}=2)$. Both non-informative and informative prior distributions are considered. When using the informative prior, we set $r=1/3$ and $df_{\tau^{2}}=7$ for the prior distributions of $\bm{\beta}$ and $\tau^{2}$. For the variance of the prior distribution of $\bm{\beta}$, we choose $\nu=4.35$, which was the result of 5-fold cross-validation when the biggest mean model was used. After variable selection, the 5-fold cross-validation is conducted again to find the optimal $\nu$ for fitting the finalized GP model, which leads to $\nu=4.05$. In both cross-validation procedures, $\nu$ is searched over an evenly spaced grid from 0 to 5 with grid size 0.05.

For both EVI-post and EVI-MAP, we set $h=0.001$ and $\tau=0.1$. For the EVI-post approach, we use $N=100$ particles, and the initial particles of $(\bm{\omega},\eta)$ are sampled uniformly in $[0,0.1]^{7}$. The two versions of EVI-GP perform very similarly, so we only report the results of EVI-MAP. The initial mean model of the GP is assumed to be a quadratic function of the input variables, including the two-way interactions. Based on the $95\%$ posterior confidence intervals of $\bm{\beta}$ in Figure 4, we select $x_{2}$ and $x_{2}^{2}$ as the significant terms to be kept for the final model.

For comparison, we choose the R package mlegp because it is the only package that allows the specification of the mean model. We set the argument constMean to 0 for the constant mean and 1 for the linear mean model. For the full quadratic mean model, we manually add all the second-order effect terms as inputs and set constMean to 1. Unfortunately, the inputs of the mean and correlation functions cannot be specified separately in mlegp, so once we specify the mean part to include all the quadratic terms, the input of the correlation function also contains the quadratic terms. We believe this might even give mlegp an advantage because the correlation involves more input variables; if some of the variables are not needed, their corresponding correlation parameters estimated by mlegp can be close to zero. The other arguments in mlegp are set in the same way as in the previous example. Table 2 compares the means of the standardized RMSPEs over the 100 simulations for EVI-GP and mlegp, and Figure 5 shows the box plots of the RMSPEs of the different models returned by the two methods. EVI-GP is much more accurate than the mlegp package in terms of prediction. However, for this example, variable selection does not improve the model in terms of prediction; the most accurate model is the GP with a linear function of the input variables as the mean model.

Figure 4: The $95\%$ posterior confidence intervals of $\bm{\beta}$ for the full quadratic mean model.
Table 2: Standardized RMSPE of the OTL-Circuit example.
Type of mean                 EVI-GP    mlegp
Constant                     0.01608   0.04882
Linear                       0.01399   0.03584
Quadratic                    0.01792   0.03006
Quadratic, after selection   0.01625   N/A
Figure 5: Box plots of the standardized RMSPEs of the OTL-Circuit example.

4.3 Borehole function

In this subsection, we test the EVI-GP with the famous Borehole function, which is defined as follows:

f(\bm{x})=\frac{2\pi T_{u}(H_{u}-H_{l})}{\ln(r/r_{\omega})\left(1+\frac{2LT_{u}}{\ln(r/r_{\omega})r_{\omega}^{2}K_{\omega}}+\frac{T_{u}}{T_{l}}\right)},

where $r_{\omega}\in[0.05,0.15]$ is the radius of the borehole (m), $r\in[100,50000]$ is the radius of influence (m), $T_{u}\in[63070,115600]$ is the transmissivity of the upper aquifer (m$^2$/yr), $H_{u}\in[990,1110]$ is the potentiometric head of the upper aquifer (m), $T_{l}\in[63.1,116]$ is the transmissivity of the lower aquifer (m$^2$/yr), $H_{l}\in[700,820]$ is the potentiometric head of the lower aquifer (m), $L\in[1120,1680]$ is the length of the borehole (m), and $K_{\omega}\in[9855,12045]$ is the hydraulic conductivity of the borehole (m/yr).
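Similarly, the Borehole function above can be coded as the following sketch (borehole is a hypothetical name).

import numpy as np

def borehole(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """Borehole function f(x) as defined above (illustrative sketch)."""
    log_ratio = np.log(r / rw)
    return (2.0 * np.pi * Tu * (Hu - Hl)
            / (log_ratio * (1.0 + 2.0 * L * Tu / (log_ratio * rw**2 * Kw) + Tu / Tl)))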

We use a training dataset of size 200 and 100 testing datasets of size 100. The noise variance is $\sigma^{2}=0.02^{2}$. We use the same parameter settings for the prior distributions and the EVI-GP method as in the OTL-Circuit example. Regarding the variance of the prior distribution of $\bm{\beta}$, $\nu=4.55$ was the result of 5-fold cross-validation when the biggest mean model was used. After variable selection, the 5-fold cross-validation is conducted again to find the optimal $\nu$ for fitting the finalized GP model; coincidentally, the optimal $\nu$ is also $4.55$. The initial mean model of the GP is assumed to be a quadratic function of the input variables, including the two-way interactions. Based on the $95\%$ posterior confidence intervals of $\bm{\beta}$ in Figure 6, we select the intercept, $x_{1}$, $x_{4}$, and $x_{1}x_{4}$ as the significant terms to be kept for the final model. Similarly, we compare EVI-GP with the mlegp package, and the RMSPEs of the 100 simulations are shown in Table 3 and Figure 7. Again, EVI-GP is much better than the R package. More importantly, variable selection proves to be essential for the Borehole example, as the final GP model with the reduced mean model is more accurate than both the quadratic and linear mean models.

Figure 6: The $95\%$ posterior confidence intervals of $\bm{\beta}$ for the full quadratic mean model.
Table 3: Standardized RMSPE of the Borehole example.
Type of mean                 EVI-GP    mlegp
Constant                     0.01151   0.3354
Linear                       0.04191   0.1810
Quadratic                    0.01212   0.2019
Quadratic, after selection   0.01019   N/A
Figure 7: Box plots of the standardized RMSPEs of the Borehole example.

5 Conclusion

In this book chapter, we review the conventional Gaussian process regression model under the Bayesian framework. More importantly, we propose a new variational inference approach, called energetic variational inference, as an alternative to traditional MCMC approaches for estimating and making inferences with the GP regression model. In comparisons with some commonly used R packages, the new EVI-GP performs better in terms of prediction accuracy. Although not fully revealed in this book chapter, the true potential of variational inference lies in transforming the MCMC sampling problem into an optimization problem. As a result, it can be used to solve more complicated Bayesian frameworks. For instance, we can adapt GP regression and classification by adding fairness constraints on the parameters so that they meet the ethical requirements of many social and economic contexts. In this case, variational inference can easily solve the constrained optimization, whereas it is very challenging to do MCMC sampling with fairness constraints. We will pursue this direction in future work.

Acknowledgment

The authors’ work is partially supported by NSF Grant # 2153029 and # 1916467.

Appendix

Derivation of Proposition 1

Proof.

The marginal posterior distribution of $(\bm{\omega},\tau^{2},\eta)$ can be obtained by integration as follows.

p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n}) \propto \int p(\bm{\beta})\,p(\bm{y}_{n}|\bm{\theta})\,p(\bm{\omega})\,p(\tau^{2})\,p(\eta)\,\mathrm{d}\bm{\beta}
 \propto \int\left\{\exp\left[-\frac{1}{2}(\bm{\beta}-\hat{\bm{\beta}}_{n})^{\top}\bm{\Sigma}_{\bm{\beta}|n}^{-1}(\bm{\beta}-\hat{\bm{\beta}}_{n})\right]\det(\bm{\Sigma}_{\bm{\beta}|n})^{-1/2}\right.
 \quad\times\det(\bm{\Sigma}_{\bm{\beta}|n})^{1/2}\exp\left[\frac{1}{2}\hat{\bm{\beta}}_{n}^{\top}\bm{\Sigma}_{\bm{\beta}|n}^{-1}\hat{\bm{\beta}}_{n}-\frac{1}{2\tau^{2}}\bm{y}_{n}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{y}_{n}\right]
 \quad\left.\times(\tau^{2})^{-n/2}\det(\bm{K}_{n}+\eta\bm{I}_{n})^{-1/2}\,p(\tau^{2})\,p(\bm{\omega})\,p(\eta)\right\}\mathrm{d}\bm{\beta}
 \propto \det(\bm{\Sigma}_{\bm{\beta}|n})^{1/2}\exp\left[\frac{1}{2}\hat{\bm{\beta}}_{n}^{\top}\bm{\Sigma}_{\bm{\beta}|n}^{-1}\hat{\bm{\beta}}_{n}-\frac{1}{2\tau^{2}}\bm{y}_{n}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{y}_{n}\right]
 \quad\times(\tau^{2})^{-n/2}\det(\bm{K}_{n}+\eta\bm{I}_{n})^{-1/2}\,p(\tau^{2})\,p(\bm{\omega})\,p(\eta).

Derivation of Proposition 2

Based on the non-informative prior,

\bm{\Sigma}_{\bm{\beta}|n} = \tau^{2}\left[\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right]^{-1},
\hat{\bm{\beta}}_{n} = \left[\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right]^{-1}\left[\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\right]\bm{y}_{n}.

Define

s^{2}_{n} = \bm{y}_{n}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{y}_{n}-\hat{\bm{\beta}}_{n}^{\top}\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\hat{\bm{\beta}}_{n}
 = \bm{y}_{n}^{\top}\left[(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}-(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\left(\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right)^{-1}\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\right]\bm{y}_{n}
 = (\bm{y}_{n}-\bm{G}\hat{\bm{\beta}}_{n})^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}(\bm{y}_{n}-\bm{G}\hat{\bm{\beta}}_{n}).

Then $p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n})$ is

p(\bm{\omega},\tau^{2},\eta|\bm{y}_{n}) \propto (\tau^{2})^{p/2}\exp\left(-\frac{s_{n}^{2}}{2\tau^{2}}\right)(\tau^{2})^{-n/2}p(\tau^{2})\det\left(\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right)^{-1/2}\det(\bm{K}_{n}+\eta\bm{I}_{n})^{-1/2}p(\bm{\omega})p(\eta)
 \propto (\tau^{2})^{-\left[\frac{1}{2}(df_{\tau^{2}}+n-p)+1\right]}\exp\left(-\frac{1+s_{n}^{2}}{2\tau^{2}}\right)\det\left(\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right)^{-1/2}\det(\bm{K}_{n}+\eta\bm{I}_{n})^{-1/2}p(\bm{\omega})p(\eta).

Due to the conditional conjugacy, we can see that the conditional posterior distribution of $\tau^{2}|\bm{\omega},\eta,\bm{y}_{n}$ is

\tau^{2}|\bm{\omega},\eta,\bm{y}_{n}\sim\text{Scaled Inverse-}\chi^{2}(df_{\tau^{2}}+n-p,\hat{\tau}^{2}), (29)

where

\hat{\tau}^{2}=\frac{1+s_{n}^{2}}{df_{\tau^{2}}+n-p}.

Based on this, we can integrate out $\tau^{2}$ and obtain

p(\bm{\omega},\eta|\bm{y}_{n}) \propto (\hat{\tau}^{2})^{-\frac{1}{2}(df_{\tau^{2}}+n-p)}\det\left(\bm{G}^{\top}(\bm{K}_{n}+\eta\bm{I}_{n})^{-1}\bm{G}\right)^{-1/2}\det(\bm{K}_{n}+\eta\bm{I}_{n})^{-1/2}\,p(\bm{\omega})\,p(\eta).

References

  • Ai et al. (2009) Ai, M., Kang, L., and Joseph, V. R. (2009), “Bayesian optimal blocking of factorial designs,” Journal of Statistical Planning and Inference, 139, 3319–3328.
  • Blei et al. (2017) Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. (2017), “Variational inference: A review for statisticians,” Journal of the American statistical Association, 112, 859–877.
  • Carnell (2022) Carnell, R. (2022), lhs: Latin Hypercube Samples, r package version 1.1.5.
  • Cheng and Boots (2017) Cheng, C.-A. and Boots, B. (2017), “Variational Inference for Gaussian Process Models with Linear Complexity,” in Advances in Neural Information Processing Systems, eds. Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., Curran Associates, Inc., vol. 30.
  • Cressie (2015) Cressie, N. (2015), Statistics for spatial data, John Wiley & Sons.
  • Dancik and Dorman (2008) Dancik, G. M. and Dorman, K. S. (2008), “mlegp: statistical analysis for computer models of biological systems using R,” Bioinformatics, 24, 1966–1967.
  • E et al. (2020) E, W., Ma, C., and Wu, L. (2020), “Machine learning from a continuous viewpoint, I,” Science China Mathematics, 1–34.
  • Fang et al. (2005) Fang, K.-T., Li, R., and Sudjianto, A. (2005), Design and modeling for computer experiments, CRC press.
  • Gelman et al. (2014) Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2014), Bayesian Data Analysis, Third Edition, New York: Chapman and Hall/CRC.
  • Ghanem et al. (2017) Ghanem, R., Higdon, D., and Owhadi, H. (2017), Handbook of uncertainty quantification, Springer.
  • Giga et al. (2017) Giga, M.-H., Kirshtein, A., and Liu, C. (2017), “Variational Modeling and Complex Fluids,” in Handbook of Mathematical Analysis in Mechanics of Viscous Fluids, eds. Giga, Y. and Novotny, A., Springer International Publishing, pp. 1–41.
  • Gramacy (2016) Gramacy, R. B. (2016), “laGP: Large-Scale Spatial Modeling via Local Approximate Gaussian Processes in R,” Journal of Statistical Software, 72, 1–46.
  • Gramacy (2020) — (2020), Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences, Boca Raton, Florida: Chapman Hall/CRC, http://bobby.gramacy.com/surrogates/.
  • Gramacy and Apley (2015) Gramacy, R. B. and Apley, D. W. (2015), “Local Gaussian process approximation for large computer experiments,” Journal of Computational and Graphical Statistics, 24, 561–578.
  • Gramacy and Lee (2008) Gramacy, R. B. and Lee, H. K. H. (2008), “Bayesian treed Gaussian process models with an application to computer modeling,” Journal of the American Statistical Association, 103, 1119–1130.
  • Hamada and Wu (1992) Hamada, M. and Wu, C. J. (1992), “Analysis of designed experiments with complex aliasing,” Journal of quality technology, 24, 130–137.
  • Joseph (2006) Joseph, V. R. (2006), “A Bayesian approach to the design and analysis of fractionated experiments,” Technometrics, 48, 219–229.
  • Kang et al. (2023) Kang, L., Deng, X., and Jin, R. (2023), “Bayesian D-Optimal Design of Experiments with Quantitative and Qualitative Responses,” The New England Journal of Statistics in Data Science, 1–15.
  • Kang and Huang (2019) Kang, L. and Huang, X. (2019), “Bayesian A-Optimal Design of Experiment with Quantitative and Qualitative Responses,” Journal of Statistical Theory and Practice, 13, 1–23.
  • Kang and Joseph (2009) Kang, L. and Joseph, V. R. (2009), “Bayesian optimal single arrays for robust parameter design,” Technometrics, 51, 250–261.
  • Kang et al. (2018) Kang, L., Kang, X., Deng, X., and Jin, R. (2018), “A Bayesian hierarchical model for quantitative and qualitative responses,” Journal of Quality Technology, 50, 290–308.
  • Kang et al. (2021) Kang, X., Ranganathan, S., Kang, L., Gohlke, J., and Deng, X. (2021), “Bayesian auxiliary variable model for birth records data with qualitative and quantitative responses,” Journal of statistical computation and simulation, 91, 3283–3303.
  • MacDonald et al. (2015) MacDonald, B., Ranjan, P., and Chipman, H. (2015), “GPfit: An R Package for Fitting a Gaussian Process Model to Deterministic Simulator Outputs,” Journal of Statistical Software, 64, 1–23.
  • Neal (1996) Neal, R. M. (1996), Bayesian Learning for Neural Networks, New York, NY: Springer New York, chap. Monte Carlo Implementation, pp. 55–98.
  • Onsager (1931a) Onsager, L. (1931a), “Reciprocal relations in irreversible processes. I.” Phys. Rev., 37, 405.
  • Onsager (1931b) — (1931b), “Reciprocal relations in irreversible processes. II.” Phys. Rev., 38, 2265.
  • Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. (2019), “PyTorch: An Imperative Style, High-Performance Deep Learning Library,” in Advances in Neural Information Processing Systems 32, Curran Associates, Inc., pp. 8024–8035.
  • Peng and Wu (2014) Peng, C.-Y. and Wu, C. F. J. (2014), “On the Choice of Nugget in Kriging Modeling for Deterministic Computer Experiments,” Journal of Computational and Graphical Statistics, 23, 151–168.
  • Rasmussen and Williams (2006) Rasmussen, C. E. and Williams, C. K. I. (2006), Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning, MIT Press.
  • Rayleigh (1873) Rayleigh, L. (1873), “Some General Theorems Relating to Vibrations,” Proceedings of the London Mathematical Society, 4, 357–368.
  • Robert et al. (1999) Robert, C. P., Casella, G., and Casella, G. (1999), Monte Carlo statistical methods, vol. 2, Springer.
  • Roberts and Rosenthal (1998) Roberts, G. O. and Rosenthal, J. S. (1998), “Optimal scaling of discrete approximations to Langevin diffusions,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60, 255–268.
  • Rockafellar (1976) Rockafellar, R. T. (1976), “Monotone operators and the proximal point algorithm,” SIAM J. Control Optim., 14, 877–898.
  • Sacks et al. (1989) Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P. (1989), “Design and Analysis of Computer Experiments,” Statistical Science, 4.
  • Santner et al. (2003) Santner, T. J., Williams, B. J., Notz, W. I., and Williams, B. J. (2003), The design and analysis of computer experiments, vol. 1, Springer.
  • Surjanovic and Bingham (2013) Surjanovic, S. and Bingham, D. (2013), “Virtual Library of Simulation Experiments: Test Functions and Datasets,” Retrieved October 1, 2023, from http://www.sfu.ca/~ssurjano.
  • Tran et al. (2016) Tran, D., Ranganath, R., and Blei, D. M. (2016), “The variational Gaussian process,” in 4th International Conference on Learning Representations, ICLR 2016.
  • Trillos and Sanz-Alonso (2020) Trillos, N. G. and Sanz-Alonso, D. (2020), “The Bayesian Update: Variational Formulations and Gradient Flows,” Bayesian Analysis, 15, 29 – 56.
  • Wang et al. (2021) Wang, Y., Chen, J., Liu, C., and Kang, L. (2021), “Particle-based energetic variational inference,” Statistics and Computing, 31, 1–17.
  • Wu and Hamada (2021) Wu, C. J. and Hamada, M. S. (2021), Experiments: planning, analysis, and optimization, Hoboken, NJ: John Wiley & Sons, Inc.
  • Wynne and Wild (2022) Wynne, G. and Wild, V. (2022), “Variational gaussian processes: A functional analysis view,” in International Conference on Artificial Intelligence and Statistics, PMLR, pp. 4955–4971.