Geometry of Score Based Generative Models
Abstract
In this work, we examine score-based generative models (also called diffusion generative models) from a geometric perspective. From this new viewpoint, we prove that both the forward process of adding noise and the backward process of generating from noise are Wasserstein gradient flows in the space of probability measures. We are the first to prove this connection. Our understanding of score-based (and diffusion) generative models has matured and become more complete by drawing ideas from different fields such as Bayesian inference, control theory, stochastic differential equations, and Schrodinger bridges. However, many open questions and challenges remain; one example is how to decrease the sampling time. We demonstrate that the geometric perspective enables us to answer many of these questions and provides new interpretations of some known results. Furthermore, it enables us to devise an intuitive geometric solution to the problem of faster sampling. By augmenting traditional score-based generative models with a projection step, we show that we can generate high quality images with significantly fewer sampling steps.
1 Introduction
Score-based (or diffusion) models are a new class of generative models in computer vision and machine learning, achieving state-of-the-art results in image synthesis (Dhariwal & Nichol, 2021) and log-likelihood (Kingma et al., 2021). They have recently gained popularity due to interesting applications such as text-to-image generation (DALL-E (Ramesh et al., 2022), Rombach et al. (2022), and Imagen (Saharia et al., 2022)), image super-resolution, image editing (Meng et al., 2022), etc. Score-based generative models have enjoyed diverse perspectives from different fields. Originally, diffusion models (DDPM) were derived by maximizing an evidence lower bound (ELBO) on the data log-likelihood (Ho et al., 2020). Song & Ermon (2019) showed that we can learn the gradient of the log-likelihood (called the score function) and use it to generate images. Song et al. (2021) showed that the epsilon function in DDPM is in fact a scaled version of the score function, and further generalized these models to a continuous setting as stochastic differential equations. More recent works have connected score-based generative models with the Schrodinger bridge problem (De Bortoli et al., 2021) and with control-theoretic perspectives (Chen et al., 2022; Huang et al., 2021).
In this work, we present a completely different viewpoint on score-based generative models: a geometric perspective. To the best of our knowledge, we are the first to explore the geometric connection of these generative models. Applying the well-developed mathematical framework of Wasserstein gradient flows (Jordan et al., 1998; Ambrosio et al., 2005; Wibisono, 2018; Salim et al., 2020; Korba et al., 2020), we show that the forward process of adding noise and the backward process of generating images from noise are in fact equivalent to moving along a gradient-flow-path in a metric space of probability distributions following the Wasserstein gradient flow equation.
While our understanding of score-based generative models has matured over time, a few important questions remain unanswered. For example, why is it a good idea to choose the reverse variance equal to the forward variance? Can we choose the reverse variance differently? Are score-based generative models the same as energy-based models (Xie et al., 2016; Gao et al., 2020; Du et al., 2021)? Furthermore, new models have been proposed, such as WaveFit (Koizumi et al., 2022), which generalizes diffusion sampling to a proximal-gradient type of update. How can we explain this type of algorithm? In this work, we demonstrate that the geometric connection investigated here helps answer these questions from a geometric point of view.
In addition to conceptual advantages and novel perspectives, the geometric framework enables us to design practical algorithms with faster sampling capability. Score-based generative models work remarkably well when the number of sampling steps is large (i.e., the step size is small). However, the sampling time is also large for such fine schemes. As we decrease the number of sampling steps, the samples move away from the gradient-flow-path, incurring error in each step and resulting in high overall error. To minimize such error and achieve high-quality samples even with a small number of sampling steps, we propose to project intermediate samples back onto the gradient-flow-path after every step. To achieve this, we propose an efficient estimation of the Wasserstein gradient used to descend towards the flow-path. As demonstrated in the results section, our proposed method significantly reduces error for smaller numbers of sampling steps. Below we summarize our contributions. All complete proofs are included in the appendix.
1. We prove that both the forward diffusion process and the reverse generation process in score-based generative models are (accelerated) Wasserstein gradient flows in the space of probability measures.
2. This connection sheds light on several interesting questions: 1) the choice of reverse variance in score-based generative models, 2) the connection between score-based models and energy-based models, and 3) the use of proximal gradient algorithms as proposed in recent works.
3. Based on these insights, we propose a new algorithm which generalizes the score-based model and allows for significantly faster sampling, which would otherwise be very difficult to achieve. To this end, we also propose an efficient Wasserstein gradient estimation algorithm.
2 Related Works
Early works on diffusion models matched the forward and reverse joint distributions through bounds on the log-likelihood (Ho et al., 2020; Sohl-Dickstein et al., 2015). Song & Ermon (2019) proposed a score-based generative model motivated by Langevin dynamics and estimation of the score function. Later, Song et al. (2021) showed that the two approaches are actually equivalent and can be generalized further to a continuous-time setting through stochastic differential equations. On the theoretical side, score-based training has been shown to be equivalent to likelihood maximization through the Feynman-Kac theorem (Chen et al., 2022; Huang et al., 2021). Other notable works interpret forward diffusion and generation as solving the Schrodinger bridge problem (De Bortoli et al., 2021). Many approaches have been proposed to speed up the sampling process through clever ways of solving the differential equations (Lu et al., 2022).
In their seminal work, Jordan, Kinderlehrer, and Otto (JKO) proved the connection between Wasserstein gradient flows and diffusion systems governed by Fokker-Planck equations (Jordan et al., 1998). This result has been vastly generalized and formalized by Villani (2003, 2009) and Ambrosio et al. (2005), giving birth to the theory of Wasserstein gradient flows and optimization on the space of probability measures. Several notable works in machine learning have followed (Wibisono, 2018; Korba et al., 2020; Salim et al., 2020), for example on sampling and generative modeling.
3 Preliminaries
3.1 Notations
Let $\mathcal{B}(\mathbb{R}^d)$ denote the Borel $\sigma$-algebra over $\mathbb{R}^d$, and let $\mu$ denote a probability measure on $\mathbb{R}^d$. $\mathcal{P}_2(\mathbb{R}^d)$ denotes the space of probability measures on $\mathbb{R}^d$ with finite second-order moment. For any $\mu \in \mathcal{P}_2(\mathbb{R}^d)$, $L^2(\mu)$ is the space of functions $f:\mathbb{R}^d \to \mathbb{R}^d$ such that $\int \|f(x)\|^2\, d\mu(x) < \infty$ (Ambrosio et al., 2005; Korba et al., 2020). Let $T:\mathbb{R}^d \to \mathbb{R}^d$ be a measurable map; then $T_{\#}\mu$ denotes the pushforward measure of $\mu$ by $T$, such that the transfer lemma $\int \phi(T(x))\, d\mu(x) = \int \phi(y)\, dT_{\#}\mu(y)$ holds for any measurable bounded function $\phi$. We use the Wasserstein-2 distance as a metric on this space of probability measures. It is defined as $W_2^2(\mu, \nu) = \inf_{s \in \Gamma(\mu, \nu)} \int \|x - y\|^2\, ds(x, y)$, where $\Gamma(\mu, \nu)$ is the set of couplings between $\mu$ and $\nu$, i.e., the set of nonnegative measures $s$ over $\mathbb{R}^d \times \mathbb{R}^d$ whose projections on the first and second components are $\mu$ and $\nu$: $P_{\#}s = \mu$ and $Q_{\#}s = \nu$, where $P:(x, y) \mapsto x$ and $Q:(x, y) \mapsto y$ (Villani, 2003).
3.2 Wasserstein Gradient Flow
Let $(\mu_t)_{t \in [0, T]}$ denote a family of probability measures. This family satisfies a continuity equation if there exists a family of velocity fields $(v_t)_{t \in [0, T]}$ such that

$$\frac{\partial \mu_t}{\partial t} + \nabla \cdot (v_t\, \mu_t) = 0 \tag{1}$$

in a distributional sense. The family is absolutely continuous if $\|v_t\|_{L^2(\mu_t)}$ is integrable over $[0, T]$. Among all possible velocity fields, there is one with minimum $L^2(\mu_t)$ norm; it lies on the tangent space of $\mathcal{P}_2(\mathbb{R}^d)$ and is called the tangent vector field (Ambrosio et al., 2005, Chapter 8).
We consider functionals on the space of probability measures, $\mathcal{F}: \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$. The Wasserstein gradient of such a functional captures the change in its value under small perturbations of the probability measure. It can be expressed in the following form (Ambrosio et al., 2005, Chapter 10):

$$\nabla_{W_2} \mathcal{F}(\mu) = \nabla \frac{\partial \mathcal{F}(\mu)}{\partial \mu}, \tag{2}$$

where $\frac{\partial \mathcal{F}(\mu)}{\partial \mu}$ denotes the first variation of $\mathcal{F}$ at $\mu$.
Consider the KL divergence $\mathcal{F}(\mu) = \mathrm{KL}(\mu \| \pi)$ between a measure $\mu$ and a base measure $\pi$. One can show that the Wasserstein gradient of this functional at $\mu$ is

$$\nabla_{W_2} \mathrm{KL}(\mu \| \pi) = \nabla \log \frac{\mu}{\pi}. \tag{3}$$
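For instance, writing $\mathrm{KL}(\mu \| \pi) = \int \log\frac{\mu}{\pi}\, d\mu$, its first variation is $\log\frac{\mu}{\pi} + 1$, and applying eq.(2) gives

$$\nabla_{W_2} \mathrm{KL}(\mu \| \pi) = \nabla\Big(\log\frac{\mu}{\pi} + 1\Big) = \nabla \log \mu - \nabla \log \pi.$$

In particular, for $\pi = \mathcal{N}(0, I)$ this equals $\nabla \log \mu(x) + x$, a fact used repeatedly below.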
In the family $(\mu_t)_t$, let the initial measure be $\mu_0$ and the final measure be $\mu_T$. There exists a geodesic between $\mu_0$ and $\mu_T$ with respect to the Wasserstein metric. If we choose the velocity field equal to the negative of the Wasserstein gradient (i.e., $v_t = -\nabla_{W_2}\mathcal{F}(\mu_t)$), then the path traced by the probability measures is the geodesic between $\mu_0$ and $\mu_T$ (Ambrosio et al., 2005, Chapter 7), and the flow is known as the Wasserstein gradient flow. Using the functional $\mathcal{F}$ and the continuity equation, we obtain the equation of the Wasserstein gradient flow:

$$\frac{\partial \mu_t}{\partial t} = \nabla \cdot \big(\mu_t\, \nabla_{W_2}\mathcal{F}(\mu_t)\big). \tag{4}$$
The Wasserstein gradient flow is a differential equation on probability measures. Consider a Wasserstein gradient flow with initial measure $\mu_0$ satisfying the continuity equation (1). Let $x_0$ be a sample from the initial measure. The differential equation for the samples can be derived from the continuity equation as follows (Ambrosio et al., 2005):

$$\frac{dx_t}{dt} = -\nabla_{W_2}\mathcal{F}(\mu_t)(x_t). \tag{5}$$
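As a concrete illustration of eqs.(3) and (5), the following is a minimal particle sketch (not taken from the paper; it assumes $\mu_t$ remains approximately Gaussian so that $\nabla \log \mu_t$ can be estimated from the particles) of the Wasserstein gradient flow of $\mathrm{KL}(\mu \| \pi)$ with $\pi = \mathcal{N}(0, 1)$:

```python
# Minimal particle sketch of the Wasserstein gradient flow of KL(mu || N(0,1)).
# Assumption (not from the paper): mu_t is treated as Gaussian, so that
# grad log mu_t(x) = -(x - m_t) / s_t^2 can be computed from particle statistics.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=10_000)   # particles from mu_0 = N(5, 2^2)
dt = 0.01

for _ in range(1_000):
    m, s = x.mean(), x.std()
    score_mu = -(x - m) / s**2        # grad log mu_t(x) under the Gaussian assumption
    score_pi = -x                     # grad log pi(x) for pi = N(0, 1)
    v = -(score_mu - score_pi)        # v_t = -grad_W KL(mu_t || pi) = -grad log(mu_t / pi)
    x = x + dt * v                    # Euler step of dx/dt = v_t(x), cf. eq.(5)

print(x.mean(), x.std())              # approaches (0, 1), i.e. mu_t converges to pi
```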
3.3 Score Based Generative Model
The score-based generative model (Song et al., 2021) extends diffusion models to a continuous-time setting using stochastic differential equations (SDEs). The forward process of adding noise and the reverse process of generating images are interpreted as forward and reverse diffusion processes with the following differential equations:

$$dx = f(x, t)\, dt + g(t)\, dw \tag{6}$$
$$dx = \big[f(x, t) - g(t)^2 \nabla_x \log p_t(x)\big]\, dt + g(t)\, d\bar{w} \tag{7}$$

where $f(x, t)$ is the forward drift function, $p_t$ is the marginal distribution at time $t$, and $w$ is the Brownian motion. Note that the flow of time in the two SDEs is different: time flows from $0$ to $T$ in the forward process, whose initial distribution is the data distribution, while time flows from $T$ to $0$ in the reverse process. The time direction is crucial in a stochastic differential equation because in the forward process $dw$ is independent of the future, while in the reverse direction $d\bar{w}$ is independent of the past (Anderson, 1982). To keep time always flowing in the positive direction, we can equivalently use the positive time notation indexed by $\tau = T - t$ (the following is equivalent to eq.(7)):

$$dx = \big[-f(x, T{-}\tau) + g(T{-}\tau)^2 \nabla_x \log p_{T-\tau}(x)\big]\, d\tau + g(T{-}\tau)\, dw \tag{8}$$

Note that we use $t$ for the forward flow of time and $\tau$ for the backward flow of time, so that $\tau$ now flows from $0$ to $T$. With this notation, the distribution at reverse time $\tau$ is $\nu_\tau = p_{T-\tau}$, with $\nu_0 = p_T$ and $\nu_T = p_0$ equal to the data distribution. Euler-Maruyama discretization of the reverse SDE with step size $h$ yields:

$$x_{\tau+h} = x_\tau + h\big[-f(x_\tau, T{-}\tau) + g(T{-}\tau)^2 \nabla_x \log p_{T-\tau}(x_\tau)\big] + g(T{-}\tau)\sqrt{h}\, z, \tag{9}$$

where $z \sim \mathcal{N}(0, I)$ is a standard normal sample.
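As a minimal sketch (not from the paper) of how eq.(9) is used in practice, assuming generic callables `f`, `g`, and a score estimate `score_fn` (all hypothetical placeholders), one reverse Euler-Maruyama step can be written as:

```python
import torch

def reverse_em_step(x, tau, h, f, g, score_fn, T=1.0):
    """One Euler-Maruyama step of the reverse SDE, eq.(9).
    f, g, score_fn are assumed callables: forward drift, diffusion, and score estimate."""
    t = T - tau                                      # map reverse time tau to forward time t
    drift = -f(x, t) + g(t) ** 2 * score_fn(x, t)    # reverse-time drift of eq.(8)
    noise = g(t) * (h ** 0.5) * torch.randn_like(x)  # Brownian increment over step h
    return x + h * drift + noise
```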
4 Forward Diffusion as Gradient Flow
Instead of taking the velocity vector to be the negative of the Wasserstein gradient, we consider an accelerated flow where, at any time $t$, the velocity equals the negative Wasserstein gradient scaled by a time-varying factor $\alpha_t > 0$.

Proposition 1 (Accelerated Wasserstein Gradient Flow).

We define the accelerated gradient flow with respect to the functional $\mathcal{F}$ as the gradient flow whose velocity field is $v_t = -\alpha_t \nabla_{W_2}\mathcal{F}(\mu_t)$. Consequently, the continuity equation is given by:

$$\frac{\partial \mu_t}{\partial t} = \alpha_t\, \nabla \cdot \big(\mu_t\, \nabla_{W_2}\mathcal{F}(\mu_t)\big). \tag{10}$$
Using this accelerated Wasserstein gradient flow, we can establish a connection with the forward process of the score-based generative model. We start from the Fokker-Planck equation corresponding to the forward diffusion SDE in eq.(6):

$$\frac{\partial \mu_t}{\partial t} = -\nabla \cdot \big(\mu_t\, f(x, t)\big) + \frac{1}{2} g(t)^2\, \Delta \mu_t, \tag{11}$$

where the initial measure is the data distribution $\mu_0$. Following this SDE, the process ends up at the final measure $\mu_\infty = \mathcal{N}(0, I)$. The next theorem shows that the forward Fokker-Planck equation and the accelerated Wasserstein gradient flow are equivalent.
Theorem 1.
Consider the accelerated gradient flow in eq.(10) with initial measure $\mu_0$ (the data distribution), target measure $\mu_\infty = \mathcal{N}(0, I)$, and the functional on the Wasserstein space defined by $\mathcal{F}(\mu) = \mathrm{KL}(\mu \| \mu_\infty)$. The family of measures corresponding to this gradient flow is equivalent to the family of measures corresponding to the forward Fokker-Planck equation in eq.(11), given that $f$ and $g$ take the following form: $f(x, t) = -\alpha_t x$, $g(t) = \sqrt{2\alpha_t}$.
Remark 1.1.
This implies that the forward diffusion process considered in diffusion generative models such as DDPM (Ho et al., 2020; Song et al., 2021) can equivalently be thought of as an accelerated Wasserstein gradient flow starting from the initial measure $\mu_0$ corresponding to the data distribution and following the negative Wasserstein gradient of $\mathrm{KL}(\cdot \| \mu_\infty)$ towards the target measure $\mu_\infty = \mathcal{N}(0, I)$.
Remark 1.2.
We can also think of the accelerated Wasserstein gradient flow as a regular Wasserstein gradient flow under a non-uniform time discretization, i.e., the step at time $t$ is scaled by $\alpha_t$.
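To make the forward process concrete, here is a small self-contained sketch (not from the paper; it uses the parameterization $f(x, t) = -\alpha x$, $g(t) = \sqrt{2\alpha}$ of Theorem 1 with a constant $\alpha$ for simplicity) that simulates the forward SDE and checks its marginal against the closed-form Gaussian kernel implied by that SDE:

```python
# Simulate the forward SDE dx = -alpha * x dt + sqrt(2 alpha) dw (constant alpha assumed)
# and compare the empirical marginal with the closed-form kernel
# N(e^{-alpha t} x0, (1 - e^{-2 alpha t}) I).
import numpy as np

rng = np.random.default_rng(0)
alpha, dt, T = 1.0, 1e-3, 2.0
x = np.full(100_000, 3.0)                 # all particles start at x0 = 3 (point-mass data)

t = 0.0
while t < T:
    x = x - alpha * x * dt + np.sqrt(2 * alpha * dt) * rng.standard_normal(x.shape)
    t += dt

print(x.mean(), 3.0 * np.exp(-alpha * T))            # empirical vs closed-form mean
print(x.std(), np.sqrt(1 - np.exp(-2 * alpha * T)))  # empirical vs closed-form std
```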
Next, we investigate the geometric interpretation of the generation process, i.e., of the reverse SDE.
5 Generation as Reverse Gradient Flow
The next theorem establishes the equivalence between the reverse SDE and a Wasserstein gradient flow.
Theorem 2.
The reverse SDE in eq.(8) is equivalent to the accelerated Wasserstein gradient flow in the space of probability measures with respect to the functional $\mathcal{F}(\nu) = -\mathrm{KL}(\nu \| \mu_\infty)$, starting from the initial measure $\nu_0 = \mu_\infty = \mathcal{N}(0, I)$ towards the target measure $\nu_T = \mu_0$ (the data distribution).
Proof.
The accelerated Wasserstein gradient flow of $\mathcal{F}(\nu) = -\mathrm{KL}(\nu \| \mu_\infty)$ with $\mu_\infty = \mathcal{N}(0, I)$ satisfies

$$\frac{\partial \nu_\tau}{\partial \tau} = -\alpha_\tau\, \nabla \cdot \big(\nu_\tau\, \nabla_{W_2}\mathrm{KL}(\nu_\tau \| \mu_\infty)\big) \tag{13}$$
$$= -\alpha_\tau\, \nabla \cdot \big(\nu_\tau (\nabla \log \nu_\tau + x)\big) \tag{14}$$
$$= \underbrace{-\alpha_\tau\, \nabla \cdot (\nu_\tau\, x) - 2\alpha_\tau \Delta \nu_\tau}_{\text{drift part}} + \underbrace{\alpha_\tau \Delta \nu_\tau}_{\text{diffusion part}}, \tag{15}$$

where in eq.(15) we have added and subtracted the entropy term $\alpha_\tau \Delta \nu_\tau$. Here, we apply the forward-backward splitting scheme due to (Wibisono, 2018; Salim et al., 2020): a forward (explicit) step for the drift part,

$$\tilde{\nu}_{\tau+h} = \big(I - h\,\alpha_\tau\, \nabla_{W_2}\mathcal{F}_1(\nu_\tau)\big)_{\#}\, \nu_\tau, \tag{16}$$

followed by a backward (JKO) step for the diffusion part,

$$\nu_{\tau+h} = \mathrm{JKO}_{h \alpha_\tau \mathcal{H}}(\tilde{\nu}_{\tau+h}), \tag{17}$$

where $\mathcal{F}_1(\nu) = -2\int \nu \log \nu - \int \frac{1}{2}\|x\|^2\, d\nu$ generates the drift part, $\mathcal{H}(\nu) = \int \nu \log \nu$ is the negative entropy, and $\nabla_{W_2}\mathcal{F}_1$ is the Wasserstein gradient, the expression for which can be obtained as:

$$\nabla_{W_2}\mathcal{F}_1(\nu)(x) = -2\nabla \log \nu(x) - x. \tag{18}$$

In eq.(16), we move in the negative direction of the Wasserstein gradient. Let $x_\tau$ be samples from the distribution $\nu_\tau$. Transforming the differential equation in measure space to sample space, similar to eq.(5), yields:

$$\tilde{x}_{\tau+h} = x_\tau + h\,\alpha_\tau\big(x_\tau + 2\nabla \log \nu_\tau(x_\tau)\big). \tag{19}$$

In eq.(17), we use the JKO operator as the solution of the flow of the negative entropy functional $\mathcal{H}(\nu) = \int \nu \log \nu$, where the JKO operator is defined as:

$$\mathrm{JKO}_{h\mathcal{H}}(\mu) = \underset{\nu \in \mathcal{P}_2(\mathbb{R}^d)}{\mathrm{argmin}}\; \mathcal{H}(\nu) + \frac{1}{2h} W_2^2(\mu, \nu).$$

For the negative entropy functional, the exact solution is given by Brownian motion (Jordan et al., 1998; Wibisono, 2018; Salim et al., 2020). Letting $z \sim \mathcal{N}(0, I)$, we obtain

$$\nu_{\tau+h} = \tilde{\nu}_{\tau+h} * \mathcal{N}(0,\, 2h\alpha_\tau I) \tag{20}$$
$$x_{\tau+h} = \tilde{x}_{\tau+h} + \sqrt{2h\alpha_\tau}\, z. \tag{21}$$

Combining both, we obtain

$$x_{\tau+h} = x_\tau + h\,\alpha_\tau\big(x_\tau + 2\nabla \log \nu_\tau(x_\tau)\big) + \sqrt{2h\alpha_\tau}\, z. \tag{22}$$

In the limiting case as $h \to 0$, we obtain

$$dx = \alpha_\tau\big(x + 2\nabla \log \nu_\tau(x)\big)\, d\tau + \sqrt{2\alpha_\tau}\, dw,$$

which coincides exactly with the reverse SDE in eq.(8) for $f(x, t) = -\alpha_t x$ and $g(t) = \sqrt{2\alpha_t}$. ∎
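For concreteness, a minimal sketch (not from the paper) of one such forward-backward step under the parameterization above, assuming a hypothetical `score_fn(x, t)` approximating $\nabla \log \nu_\tau = \nabla \log p_{T-\tau}$ and a schedule `alpha_fn(t)`:

```python
import torch

def forward_backward_step(x, tau, h, alpha_fn, score_fn, T=1.0):
    """One forward-backward splitting step, cf. eqs.(16)-(22):
    explicit gradient step on the drift part, then exact JKO (Brownian) step on the entropy part."""
    a = alpha_fn(T - tau)
    # Forward step, eq.(19): x <- x - h * alpha * grad_W F1(nu)(x), with grad_W F1 = -(2 grad log nu + x)
    x = x + h * a * (x + 2 * score_fn(x, T - tau))
    # Backward step, eq.(21): exact JKO solution of the entropy flow = Gaussian noise of variance 2 h alpha
    return x + (2 * h * a) ** 0.5 * torch.randn_like(x)
```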
The reverse SDE, i.e., the score-based model, reverses the forward process by tracing the path followed in the forward process in the opposite direction. One important implication of this theorem is that, since the forward process moves towards the target measure $\mu_\infty = \mathcal{N}(0, I)$, the reverse process is simply moving away from $\mu_\infty$, which is realized as the accelerated Wasserstein gradient flow with the functional $-\mathrm{KL}(\cdot \| \mu_\infty)$. A gradient flow path traversed with constant velocity is a geodesic. Since we consider a gradient flow path with acceleration, it is not exactly the geodesic, but a similar path traced by the gradient flow. We call it the gradient-flow-path in the rest of the paper.
6 Insights, Connections, Discussion
We have shown that both the forward and reverse diffusion processes involved in score-based generative models are gradient flows on the space of probability measures. This geometric interpretation yields several insights, which we discuss below.
6.1 Alternative Interpretation of Reverse SDE equation
The score-based generative model uses the fact that for every forward SDE of the form in eq.(6), there exists a reverse SDE as in eq.(8), a remarkable result due to Anderson (1982). Theorem 2 provides an interesting interpretation of this result from a completely different perspective. In eq.(15), we added and subtracted the negative entropy term in the diffusion and drift terms, respectively. This allowed us to design a forward-backward algorithm instead of a purely forward algorithm for the Wasserstein gradient flow. The backward term essentially added the Brownian motion term, yielding a reverse stochastic differential equation. Note that if we had not added and subtracted the term $\alpha_\tau \Delta \nu_\tau$, we would have obtained the following iterative scheme:

$$x_{\tau+h} = x_\tau + h\,\alpha_\tau\big(x_\tau + \nabla \log \nu_\tau(x_\tau)\big). \tag{23}$$

Note that this is a discretized version of the following ODE:

$$\frac{dx}{d\tau} = \alpha_\tau\big(x + \nabla \log \nu_\tau(x)\big). \tag{24}$$
Comparing this equation with eq.(8), observe that eq.(8) has an additional $\alpha_\tau \nabla \log \nu_\tau(x)$ term in the drift part, which is compensated by the Brownian motion term. Eq.(8) and eq.(24) yield the same family of marginal distributions $\nu_\tau$, even though the former is a stochastic differential equation and the latter is deterministic. Perhaps the advantage of the stochastic formulation in score-based models is that stochasticity helps generate diverse samples when the number of samples is small.
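As a sketch (same assumptions as the forward-backward step above), the deterministic counterpart of eq.(24) differs only in dropping the noise and halving the score contribution:

```python
def deterministic_step(x, tau, h, alpha_fn, score_fn, T=1.0):
    """Discretized ODE of eqs.(23)-(24): a single score term in the drift and no Brownian noise,
    in contrast to forward_backward_step above (doubled score term compensated by noise)."""
    a = alpha_fn(T - tau)
    return x + h * a * (x + score_fn(x, T - tau))
```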
6.2 Why is the reverse variance the same as the forward variance?
In the DDPM model (Ho et al., 2020), it was not clear how to choose the variance of the reverse differential equation, and why choosing the reverse variance equal to the forward variance is a good strategy. From the previous analysis, we see that the reverse variance must equal the forward one because we added and subtracted the same negative entropy term $\alpha_\tau \Delta \nu_\tau$ in the drift and diffusion parts. However, it is possible to choose a different reverse-time variance. For example, we can instead add and subtract $\lambda_\tau \Delta \nu_\tau$ in eq.(15). Then the reverse SDE diffusion coefficient becomes $\sqrt{2\lambda_\tau}$, but the drift term in eq.(8) is also modified to $\alpha_\tau x + (\alpha_\tau + \lambda_\tau)\nabla \log \nu_\tau(x)$ instead of $\alpha_\tau x + 2\alpha_\tau \nabla \log \nu_\tau(x)$.
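Written out explicitly (under the parameterization $f(x, t) = -\alpha_t x$, $g(t) = \sqrt{2\alpha_t}$ used above), this yields a one-parameter family of reverse SDEs,

$$dx = \big[\alpha_\tau x + (\alpha_\tau + \lambda_\tau)\,\nabla \log \nu_\tau(x)\big]\, d\tau + \sqrt{2\lambda_\tau}\, dw,$$

all of whose members share the same marginals $\nu_\tau$: the choice $\lambda_\tau = \alpha_\tau$ recovers the reverse SDE in eq.(8), while $\lambda_\tau = 0$ recovers the deterministic flow in eq.(24).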
6.3 Contrasting Score-based with Energy-based Model
Assume that the probability measure of the data can be written in the form $\mu_{data}(x) \propto e^{-E(x)}$ for some energy function $E$. Consider Wasserstein gradient descent with the functional $\mathcal{F}(\mu) = \mathrm{KL}(\mu \| \mu_{data})$:

$$\frac{\partial \mu_t}{\partial t} = \nabla \cdot \big(\mu_t\, \nabla_{W_2}\mathrm{KL}(\mu_t \| \mu_{data})\big) \tag{25}$$
$$= \nabla \cdot \big(\mu_t (\nabla \log \mu_t + \nabla E)\big). \tag{26}$$

We can use the same forward-backward splitting scheme as in the proof of Theorem 2, and with similar reasoning we recover the Langevin dynamics:

$$x_{k+1} = x_k - h\, \nabla E(x_k) + \sqrt{2h}\, z_k, \qquad z_k \sim \mathcal{N}(0, I). \tag{27}$$
This demonstrates the critical difference between energy-based models and score-based models: while the energy-based model moves towards the data distribution with the functional $\mathrm{KL}(\mu \| \mu_{data})$, the score-based model moves away from the isotropic Gaussian distribution $\mu_\infty = \mathcal{N}(0, I)$ with the functional $-\mathrm{KL}(\mu \| \mu_\infty)$. The score-based generative model traces the forward diffusion path in the reverse direction, thereby avoiding the need to work with the data distribution $\mu_{data}$ directly. In the energy-based model, however, we need to either estimate an energy function as in (Gao et al., 2020; Du et al., 2021) or the KL divergence with respect to the data, $\mathrm{KL}(\mu \| \mu_{data})$.
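A minimal sketch of eq.(27) (not from the paper), assuming only that a gradient `grad_E` of the energy is available:

```python
import torch

def langevin_sample(grad_E, x0, step=1e-2, n_steps=1000):
    """Unadjusted Langevin dynamics, eq.(27): targets mu_data(x) proportional to exp(-E(x))."""
    x = x0.clone()
    for _ in range(n_steps):
        x = x - step * grad_E(x) + (2 * step) ** 0.5 * torch.randn_like(x)
    return x

# Example: E(x) = ||x||^2 / 2, so grad_E(x) = x and the target is the standard Gaussian.
samples = langevin_sample(lambda x: x, torch.randn(5000, 2))
```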
6.4 Proximal Algorithms in Diffusion models
WaveFit (Koizumi et al., 2022) generalizes the iteration in diffusion models to a proximal algorithm. Motivated by fixed-point iteration, it improves upon the DDPM model by drawing ideas from GANs and proposes a proximal-algorithm type of approach which generates samples faster than DDPM without losing quality. Here, we show that, starting from the geometric perspective, we arrive at the proximal algorithm as a way to perform Wasserstein gradient descent. Consider a functional, for example $\mathcal{F}(\mu) = \mathrm{KL}(\mu \| \mu_{data})$, where $\mu_{data}$ is the data distribution. Discretizing the Wasserstein gradient flow in a proximal (JKO) fashion yields the iteration (Jordan et al., 1998; Salim et al., 2020)

$$\mu_{k+1} = \underset{\mu \in \mathcal{P}_2(\mathbb{R}^d)}{\mathrm{argmin}}\; \mathcal{F}(\mu) + \frac{1}{2h} W_2^2(\mu, \mu_k), \tag{28}$$
which is a proximal algorithm in the space of probability measures. This justifies why proximal algorithms make sense in the context of diffusion or score-based generative models: we are trying to reach the data distribution by descending in the direction of the Wasserstein gradient. Jordan et al. (1998); Wibisono (2018); Salim et al. (2020) have shown that such proximal algorithms converge to the target distribution, here $\mu_{data}$. As for the choice of the functional $\mathcal{F}$, it can be any convex functional that decreases as we descend towards the target measure $\mu_{data}$. WaveFit (Koizumi et al., 2022) shows that using a much stronger GAN-type objective as the functional yields good results.
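One standard instantiation of such a scheme is the Wasserstein proximal gradient (forward-backward) algorithm of Salim et al. (2020), stated here under the assumption that the functional splits as $\mathcal{F}(\mu) = \int V\, d\mu + \int \mu \log \mu$ (a smooth potential term plus negative entropy):

$$\mu_{k+1} = \mathrm{JKO}_{h\mathcal{H}}\Big(\big(I - h\nabla V\big)_{\#}\, \mu_k\Big), \qquad \mathcal{H}(\mu) = \int \mu \log \mu,$$

i.e., an explicit gradient step on the potential term followed by a proximal (JKO) step on the entropy term.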
7 Challenges with Faster Sampling
Once the connection between gradient flows and score-based generative models is established, we can interpret generation as a process walking along the gradient-flow-path. If we follow the flow-path with small steps, we can reliably reach the initial data distribution, as demonstrated by the success of score-based generative and diffusion models. However, this strategy degrades as we increase the step size. The score-based increment is a linear approximation and therefore accrues more error as the step size grows. It has been experimentally observed that samples get poorer as we increase the step size in score-based models (Dhariwal & Nichol, 2021; Ho et al., 2020; Song et al., 2021). From the geometric point of view, we are taking Wasserstein gradient steps using the forward-backward strategy. While this strategy works well when the step size is small, it converges to a biased measure for large step sizes. The bias associated with the forward-backward strategy for large step sizes has been studied in the context of Wasserstein gradient flows (Wibisono, 2018). In our case, this issue is further exacerbated by the fact that the functional we are trying to minimize, $-\mathrm{KL}(\cdot \| \mu_\infty)$, is actually concave with respect to the measure, being the negative of a convex KL functional.
To mitigate this issue, we propose an intuitive, geometric idea: projection. As shown in Fig. 3, when we sample from score-based models with a large step size, the error grows and the trajectory deviates away from the gradient-flow-path. We propose to resolve this problem by projecting back onto the gradient-flow-path before taking the next step.
8 Projection to Gradient-flow-path
A score-based generative model first trains a score model $s_\theta(x, t)$ such that $s_\theta(x, t) \approx \nabla_x \log p_t(x)$ using a score matching strategy. Once the score model is trained, the discretized Euler-Maruyama step (eq.(9)) is used to generate samples, where the score function $\nabla_x \log p_t(x)$ is replaced by $s_\theta(x, t)$:

$$x_{\tau+h} = \underbrace{x_\tau + h\big[-f(x_\tau, T{-}\tau) + g(T{-}\tau)^2\, s_\theta(x_\tau, T{-}\tau)\big]}_{\text{predict}} + \underbrace{g(T{-}\tau)\sqrt{h}\, z}_{\text{diffuse}}. \tag{29}$$

This can be interpreted as predict and diffuse steps, where the predict step is the deterministic drift update and the diffuse step adds the Gaussian noise. Since generation tries to trace the gradient-flow-path, after each of these predict-diffuse steps we should obtain samples from the measure $\nu_{\tau+h}$ on the gradient-flow-path. Because of discretization error and bias, these samples do not lie on the gradient-flow-path; in fact, they deviate away from it. To pull these samples back towards the measure $\nu_{\tau+h}$ on the gradient-flow-path, we use the fact that we can sample from the measures on the gradient-flow-path directly. Using the forward SDE with $f(x, t) = -\alpha_t x$ and $g(t) = \sqrt{2\alpha_t}$, we can write the closed-form conditional distribution of $x_t$ given a data sample $x_0$ as follows:

$$p(x_t \mid x_0) = \mathcal{N}\big(x_t;\; m(t)\, x_0,\; \sigma^2(t)\, I\big), \qquad m(t) = e^{-\int_0^t \alpha_s ds}, \quad \sigma^2(t) = 1 - e^{-2\int_0^t \alpha_s ds}.$$

Sampling from this conditional distribution is given by the following equation:

$$x_t = m(t)\, x_0 + \sigma(t)\, z, \qquad z \sim \mathcal{N}(0, I). \tag{30}$$

Comparing eq.(30) with eq.(29), we note that pulling the predict part of eq.(29) close to the mean $m(t)\, x_0$ may be enough to pull the samples in eq.(29) towards the gradient-flow-path, assuming that the diffuse noise in eq.(29) is close to the noise term $\sigma(t)\, z$ in eq.(30). In terms of measures, we consider the measure associated with the predicted samples and the measure associated with the target means. Let us define the measure corresponding to the predicted samples as $\hat{\mu}^{\tau}_{pred}$ and the measure corresponding to the means $m(t)\, x_0$ as $\mu^{\tau}_{mean}$. We can sample from $\mu^{\tau}_{mean}$ by first sampling $x_0$ from the data and then passing it through the map $x_0 \mapsto m(t)\, x_0$. Our strategy to project the samples in eq.(29) onto the gradient-flow-path is to project the predicted measure $\hat{\mu}^{\tau}_{pred}$ onto the mean measure $\mu^{\tau}_{mean}$. We achieve this through Wasserstein gradient descent in the space of probability measures. For that, we need an efficient way to estimate the Wasserstein gradient, which we describe in the next subsection.
8.1 Efficient Estimation of Wasserstein Gradient
Suppose that we want to estimate the Wasserstein gradient of a functional $\mathcal{F}$ at a measure $\mu$, i.e., $\nabla_{W_2}\mathcal{F}(\mu)$. For that, we can use the following Taylor expansion for a perturbation of $\mu$ along a vector field $v$:

$$\mathcal{F}\big((I + h\, v)_{\#}\, \mu\big) = \mathcal{F}(\mu) + h\, \big\langle \nabla_{W_2}\mathcal{F}(\mu),\, v \big\rangle_{L^2(\mu)} + o(h), \tag{31}$$

where $\nabla_{W_2}\mathcal{F}(\mu)$ is the Wasserstein gradient of $\mathcal{F}$ at $\mu$. To estimate the Wasserstein gradient, consider the following optimization problem:

$$\min_{v \in L^2(\mu)}\; \big\langle \nabla_{W_2}\mathcal{F}(\mu),\, v \big\rangle_{L^2(\mu)} + \frac{1}{2}\, \|v\|^2_{L^2(\mu)}. \tag{32}$$

It is easy to see that $v^* = -\nabla_{W_2}\mathcal{F}(\mu)$ is the solution of this problem. Plugging the first-order approximation from eq.(31) into eq.(32) and parameterizing the vector field as a neural network $g_\theta$, we solve the following optimization problem:

$$\min_{\theta}\; \frac{1}{h}\Big[\mathcal{F}\big((I + h\, g_\theta)_{\#}\, \mu\big) - \mathcal{F}(\mu)\Big] + \frac{1}{2}\, \mathbb{E}_{x \sim \mu}\big[\|g_\theta(x)\|^2\big]. \tag{33}$$

This optimization is efficient and amenable to parallel processing because: 1) it only requires samples from the measure $\mu$, and 2) we can use a minibatch from $\mu$ to update the neural network parameters at each iteration. This removes the need to process all samples at once, leading to stochastic gradient descent optimization of $\theta$.
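A hypothetical training sketch of eq.(33) (not from the paper): it assumes PyTorch, stands in an RBF-kernel MMD for the functional $\mathcal{F}$ (any functional that can be evaluated from samples works; the paper's own choice is defined in Sec. 8.2), a small MLP for $g_\theta$, and placeholder samplers `sample_pred_measure` and `sample_mean_measure` for minibatches from $\mu$ and the target measure:

```python
# Hypothetical sketch of the Wasserstein-gradient estimator of eq.(33).
import torch
import torch.nn as nn

def rbf_mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate with an RBF kernel; stands in for a sample-based functional F(mu)."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def sample_pred_measure(n):   # placeholder: stands in for samples from mu (e.g. predicted samples)
    return torch.randn(n, 2) + torch.tensor([3.0, 0.0])

def sample_mean_measure(n):   # placeholder: stands in for samples from the target measure
    return torch.randn(n, 2)

g_theta = nn.Sequential(nn.Linear(2, 128), nn.SiLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(g_theta.parameters(), lr=1e-3)
h = 0.1  # perturbation size from the Taylor expansion, eq.(31)

for _ in range(1000):
    x = sample_pred_measure(256)          # minibatch from mu
    y = sample_mean_measure(256)          # minibatch from the target measure
    v = g_theta(x)
    pushed = x + h * v                    # samples from (I + h g_theta)_# mu
    # eq.(33): first-order surrogate of <grad_W F(mu), g_theta> plus 0.5 E||g_theta||^2
    loss = (rbf_mmd2(pushed, y) - rbf_mmd2(x, y)) / h + 0.5 * (v ** 2).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# After training, g_theta(x) approximates -grad_W F(mu)(x), i.e. a descent direction for F.
```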
8.2 Predict-Project Algorithm
Table 1: FID scores (lower is better) for different numbers of sampling steps N.

| Dataset | Method | N = 1000 | N = 100 | N = 40 | N = 20 |
|---|---|---|---|---|---|
| Celeb-A | Score Model | 6.331 | 35.14 | 149.42 | 222.71 |
| Celeb-A | Predict-Project | - | 20.54 | 68.23 | 121.12 |
| LSUN | Score Model | 15.12 | 34.62 | 122.23 | 246.17 |
| LSUN | Predict-Project | - | 25.35 | 66.61 | 164.32 |
| SVHN | Score Model | 18.95 | 146.63 | 183.30 | 285.61 |
| SVHN | Predict-Project | - | 149.56 | 174.34 | 152.94 |
With the Wasserstein gradient estimation method in hand, we now move on to projecting $\hat{\mu}^{\tau}_{pred}$ onto $\mu^{\tau}_{mean}$. We define the functional as a discrepancy between a measure and the mean measure on the gradient-flow-path:

$$\mathcal{F}_\tau(\mu) = \mathcal{D}\big(\mu,\, \mu^{\tau}_{mean}\big), \tag{34}$$

where $\mathcal{D}$ is chosen so that it can be estimated from samples of both measures.
Note that $\mathcal{F}_\tau$ is indexed by time. Instead of learning a different projection function for every time step, we parameterize it by time, $g_\theta(x, \tau)$, as is done for the score function (Song et al., 2021). Similarly, we choose the perturbation size $h$ in eq.(33) to be different for different $\tau$. To train the projection function $g_\theta$, we sample $\tau$ uniformly from the interval $[0, T]$ and optimize the objective in eq.(33) in expectation over $\tau$.

After training, we have $g_\theta(\cdot, \tau) \approx -\nabla_{W_2}\mathcal{F}_\tau(\hat{\mu}^{\tau}_{pred})$. Using this relation, we update the samples as

$$x_\tau \leftarrow x_\tau - \lambda\, \nabla_{W_2}\mathcal{F}_\tau(\hat{\mu}^{\tau}_{pred})(x_\tau) \tag{35}$$
$$= x_\tau + \lambda\, g_\theta(x_\tau, \tau), \tag{36}$$

where $\lambda$ is a small scalar by which we move along the descent direction, and the negative sign in eq.(35) is present because the Wasserstein gradient flow with velocity field $v = -\nabla_{W_2}\mathcal{F}$ corresponds to the sample dynamics in eq.(5). See Algorithm 1 for the full sampling algorithm.
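Since Algorithm 1 itself is not reproduced here, the following is a hypothetical sketch of the predict-project sampling loop described above, assuming the parameterization $f(x, t) = -\alpha_t x$, $g(t) = \sqrt{2\alpha_t}$ and pretrained callables `score_fn(x, t)` and `proj_fn(x, t)` (the projection network $g_\theta$), with `alpha_fn` and `lam_fn` as placeholder schedules:

```python
import torch

def predict_project_sample(score_fn, proj_fn, alpha_fn, lam_fn, n_steps, shape, T=1.0):
    """Hypothetical predict-project sampler (cf. Algorithm 1)."""
    h = T / n_steps
    x = torch.randn(shape)                              # start from N(0, I)
    for k in range(n_steps):
        tau = k * h                                     # reverse time; forward time t = T - tau
        t = T - tau
        a = alpha_fn(t)
        # Predict-diffuse: Euler-Maruyama step of the reverse SDE, eq.(29)
        x = x + h * (a * x + 2 * a * score_fn(x, t)) + (2 * a * h) ** 0.5 * torch.randn_like(x)
        # Project: move along the estimated negative Wasserstein gradient, eq.(36)
        x = x + lam_fn(t) * proj_fn(x, t)
    return x
```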
9 Experimental Results
To demonstrate the efficacy of our algorithm, we train and generate samples on three datasets: 1) Celeb-A, 2) LSUN-church, and 3) SVHN, with all images resized to the same resolution. Our neural network architecture for both the score model and the projection model is the standard U-Net architecture with attention due to (Dhariwal & Nichol, 2021). We use the publicly available code from (Song et al., 2021) as the score-based model. In these experiments, we demonstrate that as the number of sampling steps decreases, the sample quality of the score-based generative model degrades, whereas our method maintains reasonable quality even when the number of sampling steps is reduced to as low as 20. We use the FID metric (Heusel et al., 2017) to measure sample quality.
In Table 1, we compare the FID scores of images generated by the score-based method and by our Predict-Project method. We outperform the score-based method by a large margin in all cases except SVHN (N = 100). This underperformance could be because our SVHN model was not trained to convergence (see appendix) due to limited training time. For qualitative comparisons, please see Fig. 4 and Fig. 2. These results support our claim that projecting onto the gradient-flow-path improves sample quality, especially when the number of sampling steps is low.
10 Conclusion
We presented a novel geometric perspective on score-based generative models (also called diffusion generative models) by showing that they are in fact gradient flows in the space of probability measures. The geometric insight gained from this connection helped us answer and clarify some critical open questions. We also demonstrated that it can help us design faster sampling algorithms. We believe that this connection will help diffuse knowledge between the Wasserstein gradient flow literature and the score-based generative modeling literature, inspiring interesting solutions to problems in both areas. Similarly, the connections with energy-based models, proximal algorithms, and the reverse SDE could help design better algorithms in general and better generative models in particular. Energy-based models, for example, could be combined with score-based models in light of this geometric understanding.
References
- Ambrosio et al. (2005) Ambrosio, L., Gigli, N., and Savaré, G. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2005.
- Anderson (1982) Anderson, B. D. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313–326, 1982.
- Chen et al. (2022) Chen, T., Liu, G.-H., and Theodorou, E. Likelihood training of schrödinger bridge using forward-backward SDEs theory. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=nioAdKCEdXB.
- De Bortoli et al. (2021) De Bortoli, V., Thornton, J., Heng, J., and Doucet, A. Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695–17709, 2021.
- Dhariwal & Nichol (2021) Dhariwal, P. and Nichol, A. Diffusion models beat gans on image synthesis. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 8780–8794. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf.
- Du et al. (2021) Du, Y., Li, S., Sharma, Y., Tenenbaum, J., and Mordatch, I. Unsupervised learning of compositional energy concepts. Advances in Neural Information Processing Systems, 34:15608–15620, 2021.
- Gao et al. (2020) Gao, R., Nijkamp, E., Kingma, D. P., Xu, Z., Dai, A. M., and Wu, Y. N. Flow contrastive estimation of energy-based models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7518–7528, 2020.
- Heusel et al. (2017) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
- Ho et al. (2020) Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, volume 33, 2020. URL https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf.
- Huang et al. (2021) Huang, C.-W., Lim, J. H., and Courville, A. C. A variational perspective on diffusion-based generative models and score matching. Advances in Neural Information Processing Systems, 34:22863–22876, 2021.
- Jordan et al. (1998) Jordan, R., Kinderlehrer, D., and Otto, F. The variational formulation of the fokker–planck equation. SIAM journal on mathematical analysis, 29(1):1–17, 1998.
- Kingma et al. (2021) Kingma, D., Salimans, T., Poole, B., and Ho, J. Variational diffusion models. Advances in neural information processing systems, 34:21696–21707, 2021.
- Koizumi et al. (2022) Koizumi, Y., Yatabe, K., Zen, H., and Bacchiani, M. Wavefit: An iterative and non-autoregressive neural vocoder based on fixed-point iteration. arXiv preprint arXiv:2210.01029, 2022.
- Korba et al. (2020) Korba, A., Salim, A., Arbel, M., Luise, G., and Gretton, A. A non-asymptotic analysis for stein variational gradient descent. Advances in Neural Information Processing Systems, 33:4672–4682, 2020.
- Lu et al. (2022) Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., and Zhu, J. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. arXiv preprint arXiv:2206.00927, 2022.
- Meng et al. (2022) Meng, C., He, Y., Song, Y., Song, J., Wu, J., Zhu, J.-Y., and Ermon, S. SDEdit: Guided image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations, 2022.
- Ramesh et al. (2022) Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
- Rombach et al. (2022) Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022.
- Saharia et al. (2022) Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, S. K. S., Ayan, B. K., Mahdavi, S. S., Lopes, R. G., et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.
- Salim et al. (2020) Salim, A., Korba, A., and Luise, G. The wasserstein proximal gradient algorithm. Advances in Neural Information Processing Systems, 33:12356–12366, 2020.
- Sohl-Dickstein et al. (2015) Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256–2265. PMLR, 2015.
- Song & Ermon (2019) Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019.
- Song et al. (2021) Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=PxTIG12RRHS.
- Villani (2003) Villani, C. Topics in optimal transportation, volume 58 of Graduate Studies in Mathematics. American Mathematical Society, 2003.
- Villani (2009) Villani, C. Optimal transport: old and new, volume 338. Springer, 2009.
- Wibisono (2018) Wibisono, A. Sampling as optimization in the space of measures: The langevin dynamics as a composite optimization problem. In Conference on Learning Theory, pp. 2093–3027. PMLR, 2018.
- Xie et al. (2016) Xie, J., Lu, Y., Zhu, S.-C., and Wu, Y. A theory of generative convnet. In International Conference on Machine Learning, pp. 2635–2644. PMLR, 2016.
Appendix A Proof of Theorems
A.1 Forward Diffusion as Gradient Flow
Theorem 3.
Consider the accelerated gradient flow in eq.(10) with initial measure $\mu_0$ (the data distribution), target measure $\mu_\infty = \mathcal{N}(0, I)$, and the functional on the Wasserstein space defined by $\mathcal{F}(\mu) = \mathrm{KL}(\mu \| \mu_\infty)$. The family of measures corresponding to this gradient flow is equivalent to the family of measures corresponding to the forward Fokker-Planck equation in eq.(11), given that $f$ and $g$ take the following form: $f(x, t) = -\alpha_t x$, $g(t) = \sqrt{2\alpha_t}$.
A.2 Generation as Reverse Gradient Flow
Theorem 4.
The reverse SDE in eq.(8) is equivalent to the accelerated Wasserstein gradient flow in the space of probability measures with respect to the functional $\mathcal{F}(\nu) = -\mathrm{KL}(\nu \| \mu_\infty)$, starting from the initial measure $\nu_0 = \mu_\infty = \mathcal{N}(0, I)$ towards the target measure $\nu_T = \mu_0$ (the data distribution).
Proof.
The accelerated Wasserstein gradient flow of $\mathcal{F}(\nu) = -\mathrm{KL}(\nu \| \mu_\infty)$ with $\mu_\infty = \mathcal{N}(0, I)$ satisfies

$$\frac{\partial \nu_\tau}{\partial \tau} = -\alpha_\tau\, \nabla \cdot \big(\nu_\tau\, \nabla_{W_2}\mathrm{KL}(\nu_\tau \| \mu_\infty)\big) \tag{45}$$
$$= -\alpha_\tau\, \nabla \cdot \big(\nu_\tau (\nabla \log \nu_\tau + x)\big) \tag{46}$$
$$= \underbrace{-\alpha_\tau\, \nabla \cdot (\nu_\tau\, x) - 2\alpha_\tau \Delta \nu_\tau}_{\text{drift part}} + \underbrace{\alpha_\tau \Delta \nu_\tau}_{\text{diffusion part}}, \tag{47}$$

where in eq.(47) we have added and subtracted the entropy term $\alpha_\tau \Delta \nu_\tau$. Here, we apply the forward-backward splitting scheme due to (Wibisono, 2018; Salim et al., 2020): a forward (explicit) step for the drift part,

$$\tilde{\nu}_{\tau+h} = \big(I - h\,\alpha_\tau\, \nabla_{W_2}\mathcal{F}_1(\nu_\tau)\big)_{\#}\, \nu_\tau, \tag{48}$$

followed by a backward (JKO) step for the diffusion part,

$$\nu_{\tau+h} = \mathrm{JKO}_{h \alpha_\tau \mathcal{H}}(\tilde{\nu}_{\tau+h}), \tag{49}$$

where $\mathcal{F}_1(\nu) = -2\int \nu \log \nu - \int \frac{1}{2}\|x\|^2\, d\nu$ generates the drift part, $\mathcal{H}(\nu) = \int \nu \log \nu$ is the negative entropy, and $\nabla_{W_2}\mathcal{F}_1$ is the Wasserstein gradient. We can compute the expression for the Wasserstein gradient using the following relation:

$$\nabla_{W_2}\mathcal{F}_1(\nu) = \nabla \frac{\partial \mathcal{F}_1(\nu)}{\partial \nu}. \tag{50}$$

First, the first variation is given by

$$\frac{\partial \mathcal{F}_1(\nu)}{\partial \nu} = -2(\log \nu + 1) - \frac{1}{2}\|x\|^2. \tag{51}$$

Therefore,

$$\nabla_{W_2}\mathcal{F}_1(\nu)(x) = -2\nabla \log \nu(x) - x. \tag{52}$$

In eq.(48), we move in the negative direction of the Wasserstein gradient. Let $x_\tau$ be samples from the distribution $\nu_\tau$. Transforming the differential equation in measure space to sample space, similar to eq.(5), yields:

$$\tilde{x}_{\tau+h} = x_\tau + h\,\alpha_\tau\big(x_\tau + 2\nabla \log \nu_\tau(x_\tau)\big). \tag{53}$$

In eq.(49), we use the JKO operator as the solution of the flow of the negative entropy functional $\mathcal{H}(\nu) = \int \nu \log \nu$, where the JKO operator is defined as:

$$\mathrm{JKO}_{h\mathcal{H}}(\mu) = \underset{\nu \in \mathcal{P}_2(\mathbb{R}^d)}{\mathrm{argmin}}\; \mathcal{H}(\nu) + \frac{1}{2h} W_2^2(\mu, \nu).$$

For the negative entropy functional, the exact solution is given by Brownian motion (Jordan et al., 1998; Wibisono, 2018; Salim et al., 2020). Letting $z \sim \mathcal{N}(0, I)$, we obtain

$$\nu_{\tau+h} = \tilde{\nu}_{\tau+h} * \mathcal{N}(0,\, 2h\alpha_\tau I) \tag{54}$$
$$x_{\tau+h} = \tilde{x}_{\tau+h} + \sqrt{2h\alpha_\tau}\, z. \tag{55}$$

Combining both, we obtain

$$x_{\tau+h} = x_\tau + h\,\alpha_\tau\big(x_\tau + 2\nabla \log \nu_\tau(x_\tau)\big) + \sqrt{2h\alpha_\tau}\, z. \tag{56}$$

In the limiting case as $h \to 0$, we obtain

$$dx = \alpha_\tau\big(x + 2\nabla \log \nu_\tau(x)\big)\, d\tau + \sqrt{2\alpha_\tau}\, dw,$$

which coincides exactly with the reverse SDE in eq.(8) for $f(x, t) = -\alpha_t x$ and $g(t) = \sqrt{2\alpha_t}$. ∎
Appendix B Experimental Details
We jointly train the score model and the projection model $g_\theta$. They use the same U-Net-with-attention architecture, following (Dhariwal & Nichol, 2021). We apply minibatch optimization to both the score model and the projection model, which keeps the computational burden low. In terms of parameters, since we have an additional projection model, the parameter count is twice that of a regular score model.
We train the Celeb-A model up to 450K iterations and the LSUN model up to 300K iterations and report the FID score. We had to terminate SVHN training early at 50K iterations (batch size = 32) due to time constraints. We will continue to train this model and will update the score later.
B.1 Hyperparameter
While projecting measures onto the gradient-flow-path, we scale the Wasserstein gradient step towards the path by a factor $\lambda$. This factor intuitively represents how much error is likely to be present in the prediction step of the score model. The error is large when the number of sampling steps N is small and small when N is large, so we choose $\lambda$ accordingly: larger for small N and smaller for large N. At the moment, $\lambda$ is a hyperparameter, which we tune keeping in mind that it should correspond to the level of error in the score model prediction. In future work, we will estimate this hyperparameter from the training loss.
We are cleaning up the code and will make it publicly available.