
Sparse Representation Learning with Modified q-VAE towards Minimal Realization of World Model

Taisuke Kobayashi^{a,b} and Ryoma Watanuki^{b}
CONTACT: T. Kobayashi. Email: kobayashi@nii.ac.jp
^{a} National Institute of Informatics, Japan; and The Graduate University for Advanced Studies (SOKENDAI), Japan
^{b} Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
Abstract

Extraction of a low-dimensional latent space from high-dimensional observation data is essential for constructing a real-time robot controller with a world model on the extracted latent space. However, there is no established method for automatically tuning the dimension size of the latent space, and finding the necessary and sufficient dimension size, i.e. the minimal realization of the world model, remains an open problem. In this study, we analyze and improve the Tsallis-based variational autoencoder (q-VAE), and reveal that, under an appropriate configuration, it always facilitates making the latent space sparse. Even if the dimension size of the pre-specified latent space is redundant compared to the minimal realization, this sparsification collapses the unnecessary dimensions, allowing them to be easily removed. We experimentally verified the benefit of the sparsification by the proposed method: it easily finds the necessary and sufficient six dimensions for a reaching task with a mobile manipulator that requires a six-dimensional state space. Moreover, by planning with such a minimal-realization world model learned in the extracted dimensions, the proposed method generated a more optimal action sequence in real time, reducing the reaching accomplishment time by around 20 %.

keywords:
Variational autoencoder; World model; Model predictive control
articletype: Full Papers

1 Introduction

Expectations for robots are increasing along with the rapid development of robot and AI technologies, and coupled with the shortage of labor force, robots are beginning to be required to accomplish tasks more complex than conventional ones such as factory automation. Examples include manipulation of flexible objects [1, 2]; (physical) human-robot interaction [3, 4]; and autonomous driving based on high-dimensional observation data from cameras and LiDAR [5, 6]. In these tasks, modeling is a major obstacle to the use of conventional model-based control, in which the whole behavior in the task is mathematically modeled in advance for planning the optimal action sequence of the robot [7]. This is because the state that can adequately represent the whole behavior is unknown and must somehow be extracted from observations.

Recently, the so-called world model, which simulates prediction and evaluation of the whole behavior at each time step, has been attracting attention [8, 9, 10, 11, 12]. The world model is acquired from experienced data under a state whose extraction is also learned from the data, in most cases simultaneously, mainly by a variant of the variational autoencoder (VAE) [13, 14, 15]. VAE compresses high-dimensional observation data into a low-dimensional latent space, with each axis of the obtained latent space serving as the state. A low-dimensional latent space with appropriately compressed observation data can capture the behavior while eliminating unnecessary calculations; therefore, the world model constructed in this space is suitable for control applications because it enables accurate future predictions at low computational cost.

For optimal control using the acquired world model, sampling-based nonlinear model predictive control (MPC) [6, 7, 16] is often employed for its generality. This methodology randomly generates candidates of the optimal action sequence and finds the better candidates based on their evaluation results simulated by the world model. While this methodology can be applied to arbitrary world models because it does not require gradient information, its optimization process relies entirely on re-evaluating numerous candidates many times, and requires a very large computational cost. In particular, this computational cost is correlated with the dimensionality of the state of the world model, and it is intractable to generate the (near) optimal action sequence in real time if a sufficiently low-dimensional state is not extracted. On the other hand, of course, if the state dimension is set too small, the whole behavior cannot be simulated by the world model, and the accuracy of the planning itself would be greatly reduced.

Thus, in order to construct a world model that can accomplish the task in real time, it is essential to keep the size of dimensions of the extracted latent space to a necessary and sufficient level. In other words, it is desirable to achieve minimal realization [17] of the world model. Most developers to date have adjusted this manually, changing the size of dimensions of the latent space little by little and re-learning to find the minimum size of dimensions that leaves enough information in the state to recover the observation. Unfortunately, this fine-tuning is highly time-consuming and should be automated.

For this automation, disentangled representation learning [18, 19] (or, more directly, the independence and sparsification of the latent space) may play an important role. In this concept, the latent space should be divided into independent state dimensions and unnecessary state dimensions. To this end, it is required to eliminate as much as possible the dependencies among dimensions. In addition, it is required to collapse the state dimensions that can only be dependent so that they are always zero. If these requirements are always fulfilled correctly, we can hypothesize that the uncollapsed state dimensions correspond to the minimal realization.

In this paper, we focus on one of the latest disentangled representation learning methods, q-VAE [19], for promoting such independence and sparsity. q-VAE is derived by replacing the log-likelihood maximization of the observed data, which is the starting point of the conventional VAE, with the $q$-log-likelihood maximization given in Tsallis statistics [20, 21]. As a characteristic of q-VAE, adaptive learning is performed to balance the term that improves the reconstruction accuracy of the observed data and the term that refines the latent space, and experimental results have reported that the independence of the latent space is increased. However, although the cause of this independence could be understood qualitatively, it was not clear whether it was always mathematically valid. In addition, numerical stability had to be guaranteed by ad-hoc constraints.

Figure 1: Proposed framework: with the collected dataset, the latent space is first extracted using the modified q-VAE; since its latent space should be sparse, the state for minimal realization is further extracted by masking the latent space; the world model is then trained using the collected dataset and the extracted state; finally, the robot plans the optimal action by simulating the future states and rewards using the world model.

Therefore, we deepen the analysis of this q-VAE to establish a new formulation that increases numerical stability and implementation flexibility by eliminating the ad-hoc constraints. To this end, we exclude a common factor that causes instability from all the terms found by further decomposing the q-VAE objective. We also consider a further lower bound to prevent numerical divergence. After these modifications, we reveal the conditions under which sparsification is always facilitated, based on the inter-axis dependence and the finite lower bound of the $q$-logarithm.

Using the modified q-VAE, the pre-specified size of dimensions of the latent space can be increased to ensure the reconstruction accuracy of the observed data, and the unnecessary state dimensions, which can be easily discriminated thanks to sparsification, can be masked. By constructing a world model based on the masked state, which may satisfy the minimal realization, we can expect to accomplish the task in real time using MPC with the trained world model. The proposed framework for the above processes is illustrated in Fig. 1. Note that, unlike conventional methods [9, 10, 11, 12], it is not possible to train all neural networks at the same time; instead, by dividing the optimization problems as in [8], the advantages of step-by-step performance analysis and verification can be obtained.

The proposed framework is empirically validated in an autonomous driving simulation and in a reaching task to a target object by a mobile manipulator. In both tasks, we show that the modified q-VAE improves the sparsity over a conventional method while ensuring the reconstruction accuracy. We also confirm that the modified q-VAE can make the latent space sparse down to six dimensions and achieve almost the minimal realization in the reaching task. With the world model constructed after masking the unnecessary dimensions, the prediction accuracy is maintained at the level before masking. We finally report that the world model with masking contributes to the improvement of control performance in real time.

2 Preliminaries

2.1 Model predictive control with world model

Figure 2: Framework of sampling-based MPC: given the current state, the proposal distribution $\pi$ first samples the action sequence $a_{t:t+H}$; the world model simulates the future states and rewards according to $a_{t:t+H}$ step by step, obtaining the sum of rewards $R$; $\pi$ is then improved to obtain larger $R$, and this sampling-improvement process is repeated until convergence or the time limit.

Before describing the method of extracting the latent space, which is the main topic of this paper, we briefly introduce the world model for the given state and the use of MPC with it [7]. Here, we first define the state as $s\in\mathcal{S}\subset\mathbb{R}^{|\mathcal{S}|}$, the robot action as $a\in\mathcal{A}\subset\mathbb{R}^{|\mathcal{A}|}$, and the reward (or cost) as $r\in\mathbb{R}$ (with the state and action spaces $\mathcal{S}$ and $\mathcal{A}$, respectively). Note that $|\cdot|$ applied to a space denotes the size of dimensions of that space. In addition, a discrete-time system is usually assumed in the world model and MPC. Therefore, the time step is given as $t\in\mathbb{N}$, and it can be noted as a subscript to the above variables to clarify their time.

Following the above definitions, we set the world model $\mathcal{W}_{\theta}$ with a set of parameters $\theta$ as follows:

\mathcal{W}_{\theta}:\begin{cases}p_{s}(s^{\prime}_{t}\mid s_{t},a_{t};\theta)&\mathrm{dynamics}\\ p_{r}(r_{t}\mid s_{t},a_{t};\theta)&\mathrm{reward}\end{cases}   (1)

where $p_{\cdot}(\cdot\mid\cdot)$ denotes a conditional probability. That is, with the current state-action pair $(s_{t},a_{t})$, the world model predicts the future state $s_{t}^{\prime}=s_{t+1}$ and evaluates the current situation as $r_{t}$. With this structure (i.e. a Markov decision process), the transited state $s^{\prime}$ and the evaluated reward $r$ obtained by an action $a$ given to the "actual" environment in a state $s$ are combined into a tuple $(s,a,s^{\prime},r)$, and a dataset with $N$ tuples, $\mathcal{D}_{\mathcal{W}}=\{(s_{i},a_{i},s^{\prime}_{i},r_{i})\}_{i=1}^{N}$, can be constructed and used to train the world model. Specifically, we can find $\theta\to\theta^{\ast}$ that solves the following negative log-likelihood minimization problem.

\theta^{\ast}=\arg\min_{\theta}\mathbb{E}_{\mathcal{D}_{\mathcal{W}}}[-\ln p_{s}(s^{\prime}_{i}\mid s_{i},a_{i};\theta)-\ln p_{r}(r_{i}\mid s_{i},a_{i};\theta)]   (2)

where $\mathbb{E}_{\mathcal{D}_{\mathcal{W}}}[\cdot]$ denotes the expectation operation by randomly sampling tuples from $\mathcal{D}_{\mathcal{W}}$.
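
To make this training step concrete, the following is a minimal sketch in PyTorch (not the authors' implementation; all class and function names are ours), assuming Gaussian dynamics and reward heads with fixed unit variance so that the negative log-likelihood in eq. (2) reduces to squared errors up to constants.

import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Sketch of eq. (1): mean predictors for p_s(s'|s,a) and p_r(r|s,a)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU())
        self.dynamics_head = nn.Linear(hidden, state_dim)   # mean of p_s(s'|s,a)
        self.reward_head = nn.Linear(hidden, 1)             # mean of p_r(r|s,a)

    def forward(self, s, a):
        h = self.trunk(torch.cat([s, a], dim=-1))
        return self.dynamics_head(h), self.reward_head(h).squeeze(-1)

def world_model_loss(model, s, a, s_next, r):
    # eq. (2) with unit-variance Gaussians: -ln p = 0.5 * squared error + const.
    s_pred, r_pred = model(s, a)
    return 0.5 * ((s_pred - s_next) ** 2).sum(-1).mean() + 0.5 * ((r_pred - r) ** 2).mean()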

It is important to note that the world model includes the action only as one of the conditions; namely, if the robot freely plans and generates the action sequence $a_{t:t+H}=[a_{t},a_{t+1},\ldots,a_{t+H}]$ with $H\in\mathbb{N}$ the horizon length, its value can be evaluated by simulating the world model so as to improve the way $a_{t:t+H}$ is generated. This mechanism is utilized in the sampling-based nonlinear MPC used in this paper, the so-called cross entropy method (CEM) [7] (see Fig. 2). Based on the evaluation of $a_{t:t+H}$, the optimal $a_{t:t+H}^{\ast}$ is eventually obtained by repeatedly modifying and re-evaluating $a_{t:t+H}$ in the direction of improving the evaluation. Note that although MPC optimizes the whole action sequence, only $a_{t}^{\ast}$ is actually used, since this optimization is conducted at every time step.

Specifically, CEM samples $K$ candidates of $a_{t:t+H}$, $\{a_{t:t+H}^{k}\}_{k=1}^{K}$, from a proposal distribution (or policy) $\pi$ at each iteration (i.e. evaluation and improvement), and evaluates all of them using the world model. The score of each candidate is given as the sum of rewards $R^{k}=\sum_{h=0}^{H}r_{t+h}$. With this score, the $K$ candidates are sorted in descending (or ascending, if cost is used instead of reward) order, and the top $\nu K$ candidates ($\nu\in(0,1)$ denotes the elite ratio) are extracted as the elites. Since these elites should be actively sampled, a new policy $\pi^{\prime}$ is obtained through the following maximum likelihood estimation.

\pi^{\prime}=\arg\max_{\pi}\sum_{k=1}^{K}\mathbb{I}(R^{k}\geq R_{\mathrm{threshold}})\ln\pi(a_{t:t+H}^{k})   (3)

where $R_{\mathrm{threshold}}$ denotes the minimum score among the elites. $\mathbb{I}(\cdot)$ is defined as the indicator function, which returns one if the condition in the bracket is satisfied and zero otherwise. If $\pi$ is modeled as a normal distribution with location $\mu$ and scale $\sigma$, this can be analytically solved by the mean and standard deviation of the elites, respectively.

Note that this improvement $\pi=\pi^{\prime}$ is largely sample-dependent; hence, if the samples are biased, $\pi^{\prime}$ would overfit to one of the local optima. To mitigate this issue, the following smooth update is often employed.

\theta_{\pi}\leftarrow\eta\theta_{\pi}+(1-\eta)\theta_{\pi^{\prime}}   (4)

where $\theta_{\pi}$ denotes the set of parameters in $\pi$ (in the case of a normal distribution, $\theta_{\pi}=[\mu,\sigma]$). A larger $\eta\in(0,1)$ makes the update smoother. The above process (sampling, evaluation, and improvement) is iterated until the specified number of iterations or the specified time is exceeded, and the mean of the final updated $\pi$, or the candidate with the highest score among those sampled so far, is returned as the optimal action sequence.
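
A minimal CEM sketch corresponding to the above procedure is shown below, reusing the WorldModel interface sketched earlier; this is an illustration under the assumption of a Gaussian proposal, and the default numbers of candidates, elites, and iterations are placeholders rather than the values used in the experiments.

import torch

def cem_plan(world_model, s0, horizon=5, action_dim=2, n_candidates=100,
             elite_ratio=0.1, n_iters=4, eta=0.5):
    """Sampling-based MPC by CEM (eqs. (3)-(4)) on a learned world model."""
    mu = torch.zeros(horizon, action_dim)
    sigma = torch.ones(horizon, action_dim)
    n_elite = max(2, int(elite_ratio * n_candidates))
    for _ in range(n_iters):
        # sample K candidate action sequences a_{t:t+H} from the proposal pi
        a = mu + sigma * torch.randn(n_candidates, horizon, action_dim)
        # roll out the world model and accumulate the score R^k
        s = s0.expand(n_candidates, -1)
        R = torch.zeros(n_candidates)
        for h in range(horizon):
            s, r = world_model(s, a[:, h])
            R = R + r
        elite = a[R.topk(n_elite).indices]         # eq. (3): keep the top nu*K candidates
        mu = eta * mu + (1 - eta) * elite.mean(0)  # eq. (4): smooth update of theta_pi
        sigma = eta * sigma + (1 - eta) * elite.std(0)
    return mu[0]   # only a_t* is actually executed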

2.2 Tsallis statistics

Figure 3: $q$-logarithm: with $q>0$, this function is concave; while the output is 0 at $x=1$ for any $q$, the smaller $q$ is, the larger the output.

Let us briefly introduce several important properties of Tsallis statistics [20, 21], which are utilized in this paper. First of all, the well-known natural logarithm $\ln(\cdot)$ is extended to the $q$-logarithm $\ln_{q}(\cdot)$ with $q\in\mathbb{R}$ in Tsallis statistics.

\ln_{q}(x)=\begin{cases}\ln(x)&q=1\\ \frac{x^{1-q}-1}{1-q}&q\neq 1\end{cases}   (5)

where $x>0$. As illustrated in Fig. 3, the $q$-logarithm with $q>0$ is concave. While the natural logarithm has infinite upper and lower bounds $\pm\infty$, in the $q$-logarithm, one of them is finite depending on $q$.

\lim_{x\to 0}\ln_{q}(x)=\begin{cases}-\infty&q\geq 1\\ -\frac{1}{1-q}&q<1\end{cases}   (6)

\lim_{x\to\infty}\ln_{q}(x)=\begin{cases}\frac{1}{q-1}&q>1\\ \infty&q\leq 1\end{cases}   (7)

In addition, the following inequality holds for $q_{1}<q_{2}$.

\ln_{q_{1}}(x)\geq\ln_{q_{2}}(x)   (8)

The equality is satisfied only when $x=1$.
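
For reference, eq. (5) and its properties in eqs. (6) and (8) can be checked numerically with a few lines of Python (the function name ln_q is our own):

import math

def ln_q(x, q):
    """q-logarithm of eq. (5); reverts to the natural logarithm for q = 1."""
    if q == 1.0:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

# eq. (6): for q < 1 the lower bound is finite, -1/(1-q)
print(ln_q(1e-12, 0.5), -1.0 / (1.0 - 0.5))    # both close to -2.0
# eq. (8): ln_{q1}(x) >= ln_{q2}(x) for q1 < q2
print(ln_q(2.0, 0.5) >= ln_q(2.0, 0.999))      # True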

Several important tricks used in the derivation of q-VAE are described below. First, for the $q$-logarithm, pseudo-additivity holds instead of additivity.

\ln_{q}(x_{1}x_{2}) = \ln_{q}(x_{1})+\ln_{q}(x_{2})+(1-q)\ln_{q}(x_{1})\ln_{q}(x_{2})
= x_{2}^{1-q}\ln_{q}(x_{1})+\ln_{q}(x_{2})
= \ln_{q}(x_{1})+x_{1}^{1-q}\ln_{q}(x_{2})   (9)

For the reciprocal, the following formula holds.

\ln_{q}(x^{-1})=-x^{q-1}\ln_{q}(x)   (10)

Finally, the $q$-deformed Kullback-Leibler (KL) divergence (or Tsallis divergence) is given as follows:

\mathrm{KL}_{q}(p_{1}\|p_{2})=-\int p_{1}(x)\ln_{q}\frac{p_{2}(x)}{p_{1}(x)}dx   (11)

Note that, for some probability distribution models such as exponential families, closed-form solutions exist even for the Tsallis divergence [22].
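
A small numerical check of the pseudo-additivity in eq. (9) and a Monte Carlo estimate of the Tsallis divergence in eq. (11) (between two univariate Gaussians chosen arbitrarily for illustration) might look as follows:

import numpy as np

def ln_q(x, q):
    return np.log(x) if q == 1.0 else (x ** (1.0 - q) - 1.0) / (1.0 - q)

q, x1, x2 = 0.9, 2.0, 3.0
# pseudo-additivity (eq. (9))
lhs = ln_q(x1 * x2, q)
rhs = ln_q(x1, q) + ln_q(x2, q) + (1 - q) * ln_q(x1, q) * ln_q(x2, q)
print(np.isclose(lhs, rhs))   # True

# Monte Carlo estimate of KL_q(p1 || p2) = -E_{p1}[ln_q(p2/p1)] (eq. (11))
rng = np.random.default_rng(0)
mu1, s1, mu2, s2 = 0.0, 1.0, 0.5, 1.2
x = rng.normal(mu1, s1, size=100_000)   # samples from p1
p1 = np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / (s1 * np.sqrt(2 * np.pi))
p2 = np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / (s2 * np.sqrt(2 * np.pi))
print(-np.mean(ln_q(p2 / p1, q)))       # non-negative; tends to the standard KL as q -> 1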

2.3 Original VAE and q-VAE

As a comparison to q-VAE, we first derive the original VAE [13]. For VAE, a dataset $\mathcal{D}_{\mathcal{X}}=\{x_{i}\}_{i=1}^{M}$ is prepared, where $x\in\mathcal{X}$ is the data observed by sensors and $M$ of them are collected. Here, $\mathcal{D}_{\mathcal{X}}$ is by definition distinguished from the dataset for the world model, $\mathcal{D}_{\mathcal{W}}$. However, in practice, the dataset $\mathcal{D}=\{(x_{i},a_{i},x^{\prime}_{i},r_{i})\}_{i=1}^{N}$ can be reused for both of them by extracting $\mathcal{D}_{\mathcal{X}}\subset\mathcal{D}$ from it to train VAE, and converting it into $\mathcal{D}_{\mathcal{W}}$ by mapping $x_{i},x^{\prime}_{i}\to s_{i},s^{\prime}_{i}$ with VAE.

Anyway, for $\mathcal{D}_{\mathcal{X}}$, in order to obtain a generative distribution $p(x)$, the problem of maximizing its log-likelihood is considered. In variational inference, $x$ is supposed to be generated stochastically depending on the corresponding latent variable $z\in\mathcal{Z}\subset\mathbb{R}^{|\mathcal{Z}|}$ (in general, $|\mathcal{Z}|<|\mathcal{X}|$). In that case, $p(x)$ can be represented as $p(x)=\int p(x\mid z;\phi)p(z)dz$ with a pre-designed prior distribution $p(z)$ and a decoder $p(x\mid z;\phi)$ with the set of parameters $\phi$. From this relation, the variational lower bound $-\mathcal{L}(\phi;\mathcal{D}_{\mathcal{X}})$ is derived as follows:

\mathbb{E}_{\mathcal{D}_{\mathcal{X}}}[\ln p(x_{i})] = \mathbb{E}_{\mathcal{D}_{\mathcal{X}}}\left[\ln\int p(x_{i}\mid z;\phi)p(z)dz\right]
= \mathbb{E}_{\mathcal{D}_{\mathcal{X}}}\left[\ln\int p(z\mid x_{i};\phi)\frac{p(x_{i}\mid z;\phi)p(z)}{p(z\mid x_{i};\phi)}dz\right]
\geq \mathbb{E}_{\mathcal{D}_{\mathcal{X}},z_{i}\sim p(z\mid x_{i};\phi)}\left[\ln p(x_{i}\mid z_{i};\phi)\right]-\mathbb{E}_{\mathcal{D}_{\mathcal{X}}}\left[\mathrm{KL}(p(z\mid x_{i};\phi)\|p(z))\right]
= -\mathcal{L}(\phi;\mathcal{D}_{\mathcal{X}})   (12)

where $p(z\mid x_{i};\phi)$ denotes the variational posterior distribution (or encoder). Note that $q(\cdot)$ is generally used instead of $p(\cdot)$ to denote the variational distribution, but since $q$ appears in Tsallis statistics, it is unified to $p(\cdot)$ to avoid confusion. The inequality in the above derivation is given by Jensen's inequality using the fact that the natural logarithm is a concave function. In order to minimize $\mathcal{L}(\phi;\mathcal{D}_{\mathcal{X}})$, the computational graph for $\phi$ is constructed using the reparameterization trick [13], etc., and one of the stochastic gradient descent methods [23, 24] is used to optimize $\phi$. Furthermore, by considering the minimization of $\mathcal{L}(\phi;\mathcal{D}_{\mathcal{X}})$ as a constrained optimization problem with the KL divergence, $\beta$-VAE [14], which multiplies the KL divergence by a weight $\beta>0$, is derived via Lagrange's method of undetermined multipliers.

For convenience, the first term is called the reconstruction term, which increases the accuracy of reconstructing the observed data from the encoded latent variable, and the second term is called the regularization term, which attempts to match the encoder to the prior. The regularization term shapes the latent space according to the prior, and the design of the prior promotes disentangled representation (i.e. independence and sparsification). For implementation, for many reasons (e.g. the closed-form solution of the KL divergence can be obtained, the reparameterization trick is well established, and the computational cost is small), $p(z)$ is frequently given by the standard normal distribution $\mathcal{N}(0,I)$, and $p(z\mid x_{i};\phi)$ is accordingly modeled by a diagonal normal distribution. Note that the model of $p(x\mid z;\phi)$ depends on $x$: for real-valued data such as robot coordinates, a diagonal normal distribution (with fixed variance in some cases) or another real-space distribution such as the student-t distribution [25] is used; for image data (normalized to $[0,1]$ for each pixel), the Bernoulli distribution (recently, the continuous Bernoulli distribution [26]) is adopted.
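
As an illustration of these two terms, a minimal per-batch loss for the (beta-)VAE with a unit-variance Gaussian decoder and the standard normal prior could be written as follows (a sketch, not the authors' implementation):

import torch

def reparameterize(z_mu, z_logvar):
    # reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I)
    return z_mu + (0.5 * z_logvar).exp() * torch.randn_like(z_mu)

def beta_vae_loss(x, x_recon_mean, z_mu, z_logvar, beta=1.0):
    """Negative of eq. (12); beta = 1 is the standard VAE, beta != 1 gives beta-VAE."""
    # reconstruction term: -ln p(x|z) up to constants (unit-variance Gaussian decoder)
    recon = 0.5 * ((x - x_recon_mean) ** 2).sum(dim=-1)
    # regularization term: KL(N(mu, diag(exp(logvar))) || N(0, I)) in closed form
    kl = 0.5 * (z_mu ** 2 + z_logvar.exp() - z_logvar - 1.0).sum(dim=-1)
    return (recon + beta * kl).mean()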

Finally, we introduce the original q-VAE [19], which uses the same variables and probability distributions, but replaces the starting point of the maximization problem with the $q$-log-likelihood. By restricting $q>0$ to make the $q$-logarithm concave, the variational lower bound for this problem can be derived as in the usual VAE (note the pseudo-additivity).

\mathbb{E}_{\mathcal{D}_{\mathcal{X}}}[\ln_{q}p(x_{i})] = \mathbb{E}_{\mathcal{D}_{\mathcal{X}}}\left[\ln_{q}\int p(z\mid x_{i};\phi)\frac{p(x_{i}\mid z;\phi)p(z)}{p(z\mid x_{i};\phi)}dz\right]
\geq \mathbb{E}_{\mathcal{D}_{\mathcal{X}},z_{i}\sim p(z\mid x_{i};\phi)}\left[\rho(x_{i},z_{i})^{1-q}\ln_{q}p(x_{i}\mid z_{i};\phi)\right]-\mathbb{E}_{\mathcal{D}_{\mathcal{X}}}\left[\mathrm{KL}_{q}(p(z\mid x_{i};\phi)\|p(z))\right]
= -\mathcal{L}_{q}(\phi;\mathcal{D}_{\mathcal{X}})   (13)

where $\rho(x,z)=p(z)/p(z\mid x;\phi)>0$ (note the identity $\rho^{1-q}=(1-q)\ln_{q}\rho+1$). Under the condition $q<1$, when $\rho$ is small (i.e. $p(z)<p(z\mid x;\phi)$), the influence of the reconstruction term is suppressed and the regularization term dominates, thus promoting $p(z\mid x;\phi)\to p(z)$. Otherwise, the reconstruction term becomes dominant and $p(z\mid x;\phi)$ tries to extract the information needed for the reconstruction. In the original paper, this behavior is regarded as an adaptive $\beta$ in $\beta$-VAE, automatically adjusting the trade-off between the disentangled representation promoted by large $\beta$ and the reconstruction accuracy impaired by it. Indeed, experimental results in that paper showed that the reconstruction accuracy can be retained while increasing the independence among latent variables compared to $\beta$-VAE. Note that, with $q=1$, the above problem reverts to the standard VAE.

3 Modified q-VAE

3.1 Stability issues

In the original q-VAE, numerical stability issues were found, and two ad-hoc tricks were introduced to address them. The first is the removal of the computational graph leading to $\rho$, making $\rho$ merely an adaptive coefficient. In this way, $\rho$ can be regarded as a part of $\beta$ in the original version, but $\phi$ should naturally be updated through the computational graph to $\rho$. This trick may cause large biases in the behavior during training and in the obtained latent space.

The other is the limitation of the decoder model. In many cases, the observed data handled by VAE are of very high dimension, and the following alternative representation of the $q$-logarithm reveals that it tends to place relatively large values inside the exponential function, causing numerical divergence.

\frac{p(x)^{1-q}-1}{1-q} = \frac{\exp\{(1-q)\ln p(x)\}-1}{1-q} = \frac{\exp\{(1-q)\sum_{k=1}^{|\mathcal{X}|}\ln p(x_{k})\}-1}{1-q}   (14)

That is, if $p(x)$ is given as a probability density function (i.e. $x$ is in a continuous space), $p(x)$ can exceed one, resulting in a positive log-likelihood. Even if the log-likelihood for each dimension is only slightly positive, the value accumulated over tens of thousands of dimensions would easily make the above exponential function diverge numerically. To avoid this issue, the original q-VAE limits the decoder to models that cancel the $q$-logarithm, such as the $q$-Gaussian distribution [21]. Of course, this is not desirable, because various studies have reported performance gains by utilizing different decoder models [25, 26], as mentioned above.

For these two open issues, this paper deepens the analysis of the original q-VAE and decomposes it into a new surrogate variational lower bound. In addition, as a part of the flexibility of the decoder model, we also derive a formulation that takes into account the case of mixed observations that should be represented by different models. Note that we keep the modified q-VAE in a general form by guaranteeing that it reverts to the standard VAE when $q=1$ (and the other hyperparameters are appropriately given).

3.1.1 Alternative to removal of computational graph

First, the Tsallis divergence is decomposed as follows, making full use of its definition in eq. (11), the pseudo-additivity in eq. (9), and the formula for the reciprocal in eq. (10).

\mathrm{KL}_{q}(p_{1}\|p_{2}) = -\int p_{1}(x)\ln_{q}\frac{p_{2}(x)}{p_{1}(x)}dx
= -\mathbb{E}_{p_{1}}[p_{1}(x)^{q-1}\ln_{q}p_{2}(x)+\ln_{q}p_{1}(x)^{-1}]
= -\mathbb{E}_{p_{1}}[p_{1}(x)^{q-1}\ln_{q}p_{2}(x)-p_{1}(x)^{q-1}\ln_{q}p_{1}(x)]
= -\mathbb{E}_{p_{1}}[p_{1}(x)^{q-1}\{\ln_{q}p_{2}(x)-\ln_{q}p_{1}(x)\}]   (15)

By applying this to eq. (13) and decomposing $\rho$ as $p(z)/p(z\mid x;\phi)$, we see that $p(z\mid x;\phi)^{q-1}$ is multiplied to every term.

(13) = \mathbb{E}_{\mathcal{D}_{\mathcal{X}},z_{i}\sim p(z\mid x_{i};\phi)}\left[\rho(x_{i},z_{i})^{1-q}\ln_{q}p(x_{i}\mid z_{i};\phi)+p(z_{i}\mid x_{i};\phi)^{q-1}\left\{\ln_{q}p(z_{i})-\ln_{q}p(z_{i}\mid x_{i};\phi)\right\}\right]
= \mathbb{E}_{\mathcal{D}_{\mathcal{X}},z_{i}\sim p(z\mid x_{i};\phi)}\left[p(z_{i}\mid x_{i};\phi)^{q-1}\left\{p(z_{i})^{1-q}\ln_{q}p(x_{i}\mid z_{i};\phi)+\ln_{q}p(z_{i})-\ln_{q}p(z_{i}\mid x_{i};\phi)\right\}\right]   (16)

Here, using the facts that $p(z_{i}\mid x_{i};\phi)>0$ and $p(z_{i}\mid x_{i};\phi)^{q-1}\simeq 1$ when $q\simeq 1$, $p(z_{i}\mid x_{i};\phi)^{q-1}$ can be ignored, yielding a slightly biased but consistent surrogate problem.

(16) \propto \mathbb{E}_{\mathcal{D}_{\mathcal{X}},z_{i}\sim p(z\mid x_{i};\phi)}\left[p(z_{i})^{1-q}\ln_{q}p(x_{i}\mid z_{i};\phi)+\ln_{q}p(z_{i})-\ln_{q}p(z_{i}\mid x_{i};\phi)\right]   (17)

In this case, the first term can be regarded as the reconstruction term, the second as the regularization term that brings the encoder closer to the prior, and the third as an entropy term that maximizes the entropy of the encoder. Although the surrogate problem induces only a small bias, the elimination of $p(z_{i}\mid x_{i};\phi)^{q-1}$ simplifies the gradient considerably, making it numerically much more stable. Note that the reconstruction term is not yet sufficiently stable numerically at this stage, since the direction to be updated is not unique except under certain conditions, as described later.

3.1.2 Alternative to limitation of decoder model

Based on eq. (17), the decoder model is first decomposed to represent several types of observations. To this end, $x\in\mathcal{X}$ is classified into $C\in\mathbb{N}$ classes, i.e. $x=[x_{1},x_{2},\ldots,x_{C}]$ with $x_{c}\in\mathcal{X}_{c}$ ($c=1,2,\ldots,C$). Note that, at this stage, the classes are unordered. Supposing that the classes are mutually independent, the decoder can be decomposed as follows:

p(x\mid z;\phi)=\prod_{c=1}^{C}p(x_{c}\mid z;\phi)   (18)

where $p(x_{c}\mid z;\phi)$ is modeled by an appropriate distribution, such as a diagonal normal distribution or a continuous Bernoulli distribution. Although a product of likelihoods can be converted into a sum through the natural logarithm, the $q$-logarithm requires the pseudo-additivity defined in eq. (9). By applying the pseudo-additivity iteratively, the $q$-logarithm of the decomposed decoders is derived as follows:

\ln_{q}p(x\mid z;\phi)=\sum_{c=1}^{C}p(x_{<c}\mid z;\phi)^{1-q}\ln_{q}p(x_{c}\mid z;\phi)   (19)

where

p(x_{<c}\mid z;\phi)=\begin{cases}1&c=1\\ \prod_{j=1}^{c-1}p(x_{j}\mid z;\phi)&c>1\end{cases}   (20)

To avoid numerical divergence of the decomposed decoders, we pay attention to $1-q$, which appears as a coefficient inside the exponential function of eq. (14). That is, the larger $q$ is, the smaller the scale of the values inside the exponential function becomes. Therefore, a condition for no numerical divergence can be found by increasing $q$. In addition, as introduced in eq. (8), the $q$-logarithm becomes smaller for larger $q$. From these facts, the following lower bound is gained by introducing $q_{c}$ ($q\leq q_{1}\leq q_{2}\leq\ldots\leq q_{C}\leq 1$).

\ln_{q}p(x\mid z;\phi) \geq \sum_{c=1}^{C}p(x_{<c}\mid z;\phi)^{1-q_{1}}\ln_{q_{1}}p(x_{c}\mid z;\phi)
= \ln_{q_{1}}p(x_{1}\mid z;\phi)+p(x_{1}\mid z;\phi)^{1-q_{1}}\sum_{c=2}^{C}p(x_{<c}\mid z;\phi)^{1-q_{1}}\ln_{q_{1}}p(x_{c}\mid z;\phi)
\geq \ln_{q_{1}}p(x_{1}\mid z;\phi)+p(x_{1}\mid z;\phi)^{1-q_{1}}\sum_{c=2}^{C}p(x_{<c}\mid z;\phi)^{1-q_{2}}\ln_{q_{2}}p(x_{c}\mid z;\phi)
\geq \sum_{c=1}^{C}p(x_{<c}\mid z;\phi)^{1-q_{<c}}\ln_{q_{c}}p(x_{c}\mid z;\phi)   (21)

where

p(x_{<c}\mid z;\phi)^{1-q_{<c}}=\begin{cases}1&c=1\\ \prod_{j=1}^{c-1}p(x_{j}\mid z;\phi)^{1-q_{j}}&c>1\end{cases}   (22)

Note that although $x_{c}$ was assumed to be unordered, its order can be determined by finding each non-divergent $q_{c}$ and arranging them in ascending order. Since a continuous space with larger $|\mathcal{X}_{c}|$ tends to diverge more easily, this fact can serve as a guide for roughly adjusting $q_{c}$.

By substituting this lower bound into eq. (17), a modified q-VAE, which can be numerically stable under appropriate conditions, is obtained. Here, for convenience, all terms are weighted respectively, as in $\beta$-VAE [14] and its variants such as [15]. Specifically, the modified q-VAE aims to find $\phi$ that maximizes the following equation (i.e. minimizes $\mathcal{\tilde{L}}_{q}(\phi;\mathcal{D}_{\mathcal{X}})$) weighted by $\zeta_{c}>0$, $\beta>0$, and $\gamma>0$.

(17) \geq \mathbb{E}_{\mathcal{D}_{\mathcal{X}},z_{i}\sim p(z\mid x_{i};\phi)}\Bigl[p(z_{i})^{1-q}\sum_{c=1}^{C}p(x_{<c}\mid z;\phi)^{1-q_{<c}}\ln_{q_{c}}p(x_{c}\mid z;\phi)+\ln_{q}p(z_{i})-\ln_{q}p(z_{i}\mid x_{i};\phi)\Bigr]
\propto \mathbb{E}_{\mathcal{D}_{\mathcal{X}},z_{i}\sim p(z\mid x_{i};\phi)}\Bigl[p(z_{i})^{1-q}\sum_{c=1}^{C}\zeta_{c}p(x_{<c}\mid z;\phi)^{1-q_{<c}}\ln_{q_{c}}p(x_{c}\mid z;\phi)+\beta\ln_{q}p(z_{i})-\gamma\ln_{q}p(z_{i}\mid x_{i};\phi)\Bigr]
= -\mathcal{\tilde{L}}_{q}(\phi;\mathcal{D}_{\mathcal{X}})   (23)

3.2 Analysis for sparsification

In the original q-VAE, the computational graph of $\rho$ is removed to stabilize the computation, and it becomes merely an adaptive coefficient. Thus, overall, the original q-VAE solves the multi-objective optimization problem of the reconstruction and regularization terms by scalarizing them with an (adaptive) linear weighted sum. On the other hand, in the modified q-VAE (with $q<1$), the likelihood of the encoder, which was the denominator of $\rho$, is eliminated, but the regularization by the prior distribution in the numerator is retained, preserving the computational graph. This means that the multi-objective optimization problem with the reconstruction and regularization terms is scalarized and solved in the form of a product. This product is important: both terms must be satisfied simultaneously to obtain a high value.

However, in a maximization problem involving such a product, each term must always be non-negative. That is, if the sign of one term is reversed to negative, the other term has a negative coefficient, converting its maximization problem into a minimization problem. Of course, this switching is also a factor that makes learning unstable. Since the regularization term $p(z_{i})^{1-q}$ is non-negative by definition, we need to reveal the conditions for the reconstruction term to be non-negative.

Now, we first consider the case with $C=1$ for simplicity. The reconstruction term $\ln_{q_{1}}p(x_{1}\mid z;\phi)$ becomes negative with $p(x_{1}\mid z;\phi)<1$ and non-negative with $p(x_{1}\mid z;\phi)\geq 1$. Since sign reversal may occur depending on the performance of the decoder, we substitute the definition of the $q$-logarithm for $q\neq 1$ (see eq. (5)) and rearrange eq. (23) as follows:

\mathbb{E}_{\mathcal{D}_{\mathcal{X}},z_{i}\sim p(z\mid x_{i};\phi)}\Bigl[p(z_{i})^{1-q}\zeta_{1}\ln_{q_{1}}p(x_{1}\mid z;\phi)+\beta\ln_{q}p(z_{i})-\gamma\ln_{q}p(z_{i}\mid x_{i};\phi)\Bigr]
= \mathbb{E}_{\mathcal{D}_{\mathcal{X}},z_{i}\sim p(z\mid x_{i};\phi)}\Bigl[p(z_{i})^{1-q}\Bigl\{\zeta_{1}\ln_{q_{1}}p(x_{1}\mid z;\phi)+\frac{\beta}{1-q}\Bigr\}-\frac{\beta}{1-q}-\gamma\ln_{q}p(z_{i}\mid x_{i};\phi)\Bigr]   (24)

Using this form and the fact that the $q$-logarithm with $q<1$ has a finite lower bound as shown in eq. (6), the condition for the value in the brace $\{\cdot\}$ to always be non-negative is revealed.

-\frac{\zeta_{1}}{1-q_{1}}+\frac{\beta}{1-q}\geq 0
\therefore\ \beta\geq\frac{1-q}{1-q_{1}}\zeta_{1}   (25)

Similarly, the conditions required when $C>1$ are also considered. The terms inside the brace $\{\cdot\}$ of eq. (24) are derived as follows:

\zeta_{C}p(x_{<C}\mid z;\phi)^{1-q_{<C}}\frac{p(x_{C}\mid z;\phi)^{1-q_{C}}}{1-q_{C}}+\sum_{c=1}^{C}p(x_{<c}\mid z;\phi)^{1-q_{<c}}\left(\frac{\zeta_{c-1}}{1-q_{c-1}}-\frac{\zeta_{c}}{1-q_{c}}\right)   (26)

where $q_{0}=q$ and $\zeta_{0}=\beta$ to simplify the description. The first term is always non-negative, so the required conditions are obtained from the second term.

\frac{\zeta_{c-1}}{1-q_{c-1}}-\frac{\zeta_{c}}{1-q_{c}}\geq 0,\ c=1,2,\ldots,C
\therefore\ \beta\geq\frac{1-q}{1-q_{1}}\zeta_{1}\geq\frac{1-q}{1-q_{2}}\zeta_{2}\geq\ldots\geq\frac{1-q}{1-q_{C}}\zeta_{C}   (27)

When the above conditions are satisfied, the improvement in the reconstruction accuracy of the observed data (i.e. compression of important information into the latent space) and the regularization of the encoder (i.e. organization of the latent space) should be promoted simultaneously. It is also important that this regularization is given in terms of the likelihood, rather than the log-likelihood, to the prior. That is, even if the prior is assumed to be a diagonal distribution such as $p(z)=\mathcal{N}(0,I)$, its likelihood is given by the product of the likelihoods for the respective dimensions. As in the discussion above, if the regularization for each dimension is taken as a multi-objective problem, the regularization for all dimensions should be established simultaneously. This suggests that each dimension of the latent space is sparsified as much as possible to match the prior (basically, centered at the origin), while still extracting sufficient information into the latent space to reconstruct the observed data. In addition, the regularization status of the respective dimensions influences each other, thus preventing duplication of information and encouraging independence. As a result, the latent variable $z$ on the latent space constructed by the modified q-VAE is expected to coincide with the state $s$, which is the minimal realization for the whole behavior of the system.
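
To make the modified objective concrete, the following is a rough sketch (not the authors' released implementation) of how eq. (23) could be evaluated for $C=2$ observation classes from per-sample log-likelihoods; all function names are ours, and the default hyperparameter values are merely examples that satisfy eq. (27).

import torch

def ln_q_from_log(log_p, q):
    """ln_q of a likelihood given its log-likelihood, via eq. (14)."""
    if q == 1.0:
        return log_p
    return (torch.exp((1.0 - q) * log_p) - 1.0) / (1.0 - q)

def modified_qvae_loss(log_p_x1, log_p_x2, log_p_z, log_p_z_post,
                       q=0.99, q1=0.99, q2=0.999,
                       zeta1=10.0, zeta2=1.0, beta=10.0, gamma=3.0):
    """Negative of eq. (23) for C=2 (e.g. x1 = image, x2 = velocity).
    Arguments are per-sample log-likelihoods: log p(x1|z), log p(x2|z),
    log p(z) of the prior, and log p(z|x) of the encoder at the sampled z."""
    w_prior = torch.exp((1.0 - q) * log_p_z)     # p(z)^{1-q}
    w_x1 = torch.exp((1.0 - q1) * log_p_x1)      # p(x1|z)^{1-q1}, i.e. p(x_{<2}|z)^{1-q_{<2}}
    recon = zeta1 * ln_q_from_log(log_p_x1, q1) + zeta2 * w_x1 * ln_q_from_log(log_p_x2, q2)
    objective = (w_prior * recon
                 + beta * ln_q_from_log(log_p_z, q)
                 - gamma * ln_q_from_log(log_p_z_post, q))
    return -objective.mean()   # minimize tilde{L}_q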

4 Simulation

4.1 Task

Figure 4: highway-env: the agent in yellow aims to drive within the lanes while avoiding the other blue cars.

Figure 5: Network architecture of VAE: after encoding the image $x_{\mathrm{img}}$ and the velocity $x_{\mathrm{vel}}$ separately to some extent, the two are concatenated and further encoded; from the obtained posterior distribution $p(z\mid x)$, the two observations are reconstructed separately after sampling the latent variable $z$ (with the reparameterization trick); details of the respective modules ①–④ are shown in Table 4.

As a statistical validation in simulation, we use CEM to control highway-env [27]. Specifically, as shown in Fig. 4, the task is to avoid colliding with the other blue car(s) and going out of the lane by operating the accelerator and steering wheel (i.e. a two-dimensional continuous action space) of the yellow car. Usually, geometric information between cars can be used as the observation, but in this verification, the RGB image of 300$\times$150$\times$3 in Fig. 4 is resized to 64$\times$64$\times$3 and used as an observation. This modification requires extracting the latent state from the high-dimensional observation. In addition, to show that multiple types of sensors can be integrated as described above, the yellow car's velocity $v_{x,y}$ is given as another observation.

The control by CEM is real-time oriented and outputs the currently optimal action in each control period even if the maximum number of iterations has not been completed. In addition, the network architecture of the VAE implemented in PyTorch [28] is illustrated in Fig. 5. These details and the procedure for collecting the dataset are described in Appendix A.

4.2 Results

Table 1: Comparisons for highway-env

Method              $q$     $q_{1}$   $q_{2}$   $\zeta_{1}$   $\zeta_{2}$   $\beta$   $\gamma$
VAE                 1       –         –         50            1             1         –
$\beta$-VAE         1       –         –         50            1             0.3       –
q-VAE (proposal)    0.99    0.99      0.999     10            1             10        3

Under the common settings described above, the three methods shown in Table 1 are compared. Note that these values were adjusted by trial and error to increase the accuracy of observation reconstruction as much as possible. In addition, q-VAE is set to satisfy the sparsity condition indicated by eq. (27) while confirming the stability of the numerical computation.
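
As a quick sanity check (a hypothetical snippet, not part of the evaluation code), the q-VAE setting in Table 1 can be verified against eq. (27):

q, q1, q2, zeta1, zeta2, beta = 0.99, 0.99, 0.999, 10.0, 1.0, 10.0
bound1 = (1 - q) / (1 - q1) * zeta1   # = 10
bound2 = (1 - q) / (1 - q2) * zeta2   # = 10
tol = 1e-9
print(beta + tol >= bound1 and bound1 + tol >= bound2)   # True: eq. (27) holds (with equality here)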

4.3 Sparse extraction of latent state

Figure 6: Learning curves of the reconstruction performance for (a) the image $x_{\mathrm{img}}$ and (b) the velocity $x_{\mathrm{vel}}$: the standard VAE insufficiently reconstructs $x_{\mathrm{img}}$; q-VAE improves the reconstruction performance faster than the other methods.

Figure 7: Examples of five reconstructed observations for (a) the image $x_{\mathrm{img}}$ and (b) the velocity $x_{\mathrm{vel}}$: the standard VAE failed to reconstruct the other blue cars; the remaining two methods were able to reconstruct both observations to the same degree.

Figure 8: Sparsity of the latent space: $\beta$-VAE lost its sparsity, although it improved the reconstruction performance over the standard VAE; q-VAE achieved higher sparsity than $\beta$-VAE while holding the same level of reconstruction; the sparsity of the standard VAE came at the expense of reconstruction performance.

The learning curves of the statistical reconstruction performance (for image and velocity, respectively) over 25 trials are depicted in Fig. 6. With the well-tuned parameters, $\beta$-VAE and q-VAE eventually achieved approximately the same level of reconstruction performance, while the standard VAE (with $\beta=1$) performed poorly in reconstructing images. This is probably due to the strong regularization toward the prior distribution $p(z)$. In fact, $\beta$-VAE succeeded in image reconstruction by setting $\beta=0.3<1$.

Another feature of q-VAE is that its learning speed tends to be faster than the others. Although it is difficult to draw a definitive conclusion because its parameter settings differ greatly from the others, we consider that $\gamma$ over the entropy term of the encoder can be set separately from $\beta$, and $\gamma<\beta$ mitigates the loss of information from the encoder (and the latent space). In fact, previous studies have pointed out the negative effects of the entropy term found in the decomposition of the regularization term in VAE [15], and this is consistent with those reports.

Next, five samples are selected to illustrate the post-learning reconstruction accuracy in Fig. 7. As can be seen, while all methods succeeded in reconstructing the velocity with good accuracy, in the image reconstruction, the standard VAE in the second row failed to visualize the other blue car(s). In other words, it can be said that the encoder of the standard VAE did not properly incorporate the information of the other blue cars into the latent state. In contrast, $\beta$-VAE and q-VAE properly embedded the important information needed for the reconstruction contained in the observation into the latent space.

Finally, the sparsity of the acquired latent space is evaluated. As the sparsity measure, we use the following definition from the literature [29].

\mathrm{sparse}(\mathcal{Z}) = \frac{1}{N}\sum_{i=1}^{N}\frac{\sqrt{|\mathcal{Z}|}-\mathrm{ratio}(z_{i})}{\sqrt{|\mathcal{Z}|}-1},\qquad \mathrm{ratio}(z_{i}) = \frac{\sum_{j=1}^{|\mathcal{Z}|}|z_{i,j}|}{\sqrt{\sum_{j=1}^{|\mathcal{Z}|}z_{i,j}^{2}}}   (28)

where $z_{i}=\mathbb{E}_{p(z\mid x_{i};\phi)}[z]$ denotes the location of the encoder for $x_{i}$ in the dataset. If all components of $z_{i}$ have the same magnitude, this definition returns zero; in contrast, if only one component has a non-zero value and the others are zero, $\mathrm{sparse}(\mathcal{Z})$ converges to one.
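
Eq. (28) amounts to the following few lines of NumPy (a sketch; the array of encoder locations is assumed to be stacked row-wise):

import numpy as np

def sparsity(Z):
    """Sparsity of eq. (28) for an (N, |Z|) array of encoder locations z_i."""
    dim = Z.shape[1]
    ratio = np.abs(Z).sum(axis=1) / np.sqrt((Z ** 2).sum(axis=1))
    return np.mean((np.sqrt(dim) - ratio) / (np.sqrt(dim) - 1.0))

print(sparsity(np.ones((4, 50))))          # all components equal in magnitude -> ~0
Z = np.zeros((4, 50)); Z[:, 0] = 1.0
print(sparsity(Z))                         # single active dimension -> ~1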

With this definition of sparsity, we evaluated each method as shown in Fig. 8. As a result, only $\beta$-VAE obtained low sparsity. This is due to the fact that $\beta<1$ is used to improve the reconstruction accuracy. The standard VAE with $\beta=1$ achieved the same level of sparsity as the proposed q-VAE, but at the expense of the reconstruction accuracy as mentioned above, extracting a meaningless latent space. This trend is also consistent with the previous study [19]. From the above, it can be concluded that q-VAE mitigates the trade-off between the reconstruction accuracy and sparsity and increases both of them sufficiently; namely, it enables acquisition of the important information contained in the observation with the smallest dimension size of the latent space (i.e. minimal realization).

4.4 Control performance

Figure 9: Importance of the latent dimensions for (a) $\beta$-VAE and (b) q-VAE: (a) $\beta$-VAE kept around half of the latent dimensions as important ones due to its low sparsity; (b) q-VAE revealed that only eight dimensions (with over 0.15 standard deviation) are important for this task by collapsing the other dimensions through sparsification.
Figure 10: Control performance in terms of (a) the number of iterations and (b) the number of steps: (a) with masking, MPC could stably iterate its optimization process four times; (b) the masked q-VAE achieved better performance than the others.
Table 2: Network size for highway-env

                 #Inputs and outputs for all layers      #Parameters
Model            w/o masking    w/ masking               w/o masking    w/ masking
Dynamics         302            92                       5,810          1,526
Reward           155            71                       3,752          1,232
Total            457            163                      9,562          2,758
Figure 11: Negative log-likelihood of the world model for (a) the dynamics and (b) the reward: the masked q-VAE achieved stable prediction performance, while the others encountered worst-case prediction errors; in particular, the masked $\beta$-VAE failed to learn the dynamics since it lost essential information for predicting future states.

The control performance by CEM is compared between $\beta$-VAE and q-VAE, both of which obtained a sufficiently meaningful latent space. Here, $|\mathcal{Z}|$ was set to 50, which cannot reduce the computational cost sufficiently. Therefore, in order to confirm the benefit of sparsification, the unnecessary latent dimensions are excluded by masking, and the state space with minimal realization is extracted. As a criterion for judging unnecessary dimensions, the sample standard deviation of the locations of the encoder is evaluated (see Fig. 9). A dimension with a small sample standard deviation takes almost zero for most of the data, and can be eliminated as an unnecessary dimension. As can be seen in the figure, it is easy to expect that the top eight dimensions (with a standard deviation over 0.15) are important in the case of q-VAE. In line with this, the top eight dimensions are also extracted for $\beta$-VAE, but there is concern that necessary information may be truncated.
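
The masking itself is straightforward; a sketch under the same 0.15 threshold (array names are hypothetical) is:

import numpy as np

def select_dimensions(Z, threshold=0.15):
    """Keep the latent dimensions whose sample standard deviation over the dataset
    exceeds the threshold; the collapsed dimensions are dropped before training
    the world model."""
    return Z.std(axis=0) > threshold       # boolean mask over the |Z| dimensions

# Z: (N, |Z|) array of encoder locations over the collected dataset (synthetic here)
Z = np.random.randn(1000, 50) * np.concatenate([np.ones(8), 0.01 * np.ones(42)])
mask = select_dimensions(Z)
states = Z[:, mask]                        # masked states used as s in the world model
print(mask.sum())                          # ~8 important dimensions in this synthetic example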

The world model constructed on the latent space before and after masking is used to implement control with CEM. Each method was tested with different random seeds for 300 episodes, and if no failure occurred during the episode, it was successfully completed after 200 steps. The statistical number of steps and the number of CEM iterations in each episode are depicted in Fig. 10. It is remarkable that masking reduced the computational cost and increased the number of iterations. In fact, the number of inputs/outputs and parameters of the world model are reduced by masking, as shown in Table 2.

The number of steps reveals the performance difference between the methods. First, $\beta$-VAE clearly reduced its maximum performance by masking. This is because the necessary information was lost due to masking, and the optimization by CEM did not work properly. On the other hand, for q-VAE, masking improved the maximum performance more than in the other cases. One of the reasons for this is simply that the necessary information is retained even after masking, facilitating the optimization by increasing the number of iterations.

In addition, it was confirmed that q-VAE without masking requires a larger learning rate than the others (from $10^{-3}$ to $10^{-2}$) to make learning of the world model progress. This may be due to the fact that most of the inputs and outputs in the training dataset were zero, resulting in over-learning that generates zeros. Therefore, the performance of q-VAE without masking may have been insufficient. In fact, the negative log-likelihood of the world model on the test dataset, shown in Fig. 11, revealed that q-VAE without masking increased the worst-case loss of the dynamics model. Although the learning rate could be adjusted to make learning progress, over-learning still occurred in favor of the majority zeros, overlooking the important features. Note that the masked $\beta$-VAE reduced the accuracy of the dynamics model, as expected above.

5 Experiment

5.1 Task

Figure 12: Experimental setup: (a) Stretch RE1: the motion of Stretch RE1 is limited to $yz$-axes arm movement by two linear actuators, while the task scene is observed by a camera on its top; (b) Task: the tip of the arm has to reach a target position, 2 cm above a target object (around its head).

As a demonstration, we conduct a reaching task, named stretch-reach, with a Stretch RE1 developed by Hello Robot [30]. Specifically, as shown in Fig. 12(a), Stretch RE1 is a kind of mobile manipulator with a camera on its top. For simplicity, the motion of this robot is limited to $yz$-axes arm movement (within $[0.87,1]$ in the $y$ direction and $[0,0.5]$ in the $z$ direction). The target position is 2 cm above an object randomly placed along the $z$ direction (within $[0.15,0.45]$), and the task is to move the arm to that position (see Fig. 12(b)). Similar to the above simulation, one of the observations is an RGB image (originally 424$\times$240$\times$3), which is pre-processed to 64$\times$64$\times$3. In addition, since the arm position and velocity can be easily measured from the encoders of the respective actuators, this 4D information is also added as an observation.

The task is considered successful when the arm stops near the target position. Specifically, the following reward function $r$ is provided, $r\geq-0.02$ is regarded as success, and 20 steps without success is regarded as failure.

r=-(e+0.3v)   (29)

where $e$ denotes the distance between the current arm position and the target position, and $v$ denotes the arm velocity. With this setup, although the task itself is simple, the random target position has to be estimated from the RGB image to compute $e$ and to obtain the world model.
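
For completeness, eq. (29) is a one-line function (variable names are ours):

def reward(e, v):
    """Reward of eq. (29): e is the distance to the target position and v the arm speed;
    r >= -0.02 is treated as task success."""
    return -(e + 0.3 * v)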

Note that the details of the other configurations are described in Appendix A, similar to those for the simulation.

5.2 Results

Table 3: Comparisons for stretch-reach

Method              $q$     $q_{1}$   $q_{2}$   $\zeta_{1}$   $\zeta_{2}$   $\beta$   $\gamma$
$\beta$-VAE         1       –         –         50            1             0.3       –
q-VAE (proposal)    0.95    0.95      0.999     50            1             50        3
Figure 13: Importance of the latent dimensions for (a) $\beta$-VAE and (b) q-VAE: as in the simulation shown in Fig. 9, only q-VAE succeeded in easily selecting the important latent dimensions.

Since the standard VAE is insufficient for extracting the important information, only two methods, $\beta$-VAE and q-VAE, are tested as described in Table 3. Note again that q-VAE is set to satisfy the sparsity condition indicated by eq. (27) while confirming the stability of the numerical computation.

As in the simulation, after confirming that the reconstruction accuracies were comparable to each other, the importance of the latent dimensions was evaluated in terms of the sample standard deviation (see Fig. 13). From the figure, q-VAE can select six dimensions (with the same threshold of 0.15). In theory, this task may have five dimensions for the minimal realization (i.e. the 2D arm position and velocity and the $z$-axis target position), but in reality, other environmental noise (e.g. misalignment of the target object in the $x$ direction) may occur. Therefore, a total of six dimensions can reasonably be extracted: five dimensions for the theoretical minimal realization and one dimension to handle other noise in practice. On the other hand, with $\beta$-VAE it is difficult to find the important dimensions.

For practical use, the size of dimensions extracted by q-VAE cannot be known when using $\beta$-VAE alone, so masking of $\beta$-VAE is omitted in this experiment. In addition, the simulation results described before indicated that q-VAE without masking is prone to make learning of the world model unstable due to the unnecessary axes, so we omit it as well. Therefore, the performance comparison in this experiment is limited to $\beta$-VAE without masking and q-VAE with masking.

5.3 Accuracy of world model

Figure 14: Prediction of future observations ($H=5$ steps ahead) for (a) the image $x_{\mathrm{img}}$ and (b) the arm state $x_{\mathrm{arm}}$: for both observations, the masked q-VAE was comparable to $\beta$-VAE.

Figure 15: Accuracy of the reward prediction: both methods could approximate the reward function.

First, we confirm that the performances of the world models are comparable to each other. The dynamics prediction up to the $H=5$ horizon handled in CEM is illustrated in Fig. 14. It can be seen that the differences between the predictions and the true observations for $\beta$-VAE and q-VAE are comparable for both the image and the arm state. The predictions and true values of the rewards are also compared in Fig. 15. Similarly, the predictions are mostly consistent with the true values, suggesting that both methods achieved good accuracy. From the above, it can be expected that the difference in control performance between the two methods (to be confirmed in the next section) is only due to the masking (from 30 to six dimensions) obtained by the sparsity of q-VAE.

5.4 Control performance

Figure 16: Experimental results: (a)–(c) the number of steps, reward, and number of iterations for the 0.2 m target position, (d)–(f) for 0.3 m, and (g)–(i) for 0.4 m: in (c), (f), and (i), as in the simulation result shown in Fig. 10, the number of iterations was increased by masking; the reduction in computational cost and the increase in the number of iterations due to masking contributed significantly to the control performance, accelerating the time to reach the target position and/or increasing the sum of rewards.
Figure 17: Snapshots of the acquired motions: the masked q-VAE made the tip of the arm reach the target position faster than $\beta$-VAE.

Using the learned world model, 5 trials of reaching were performed for each of the 3 target object positions (i.e. 0.2, 0.3, and 0.4 m, respectively). The number of steps until task termination (20 steps at maximum), the average reward, and the average number of iterations are listed in Fig. 16. An example of the trials is shown in Fig. 17 and the attached video.

When the object was placed at 0.2 m, q-VAE succeeded in reaching the target earlier than $\beta$-VAE, although there is little difference in reward due to the overall smaller distance penalty. Even in the case of the 0.3 m target object position, q-VAE accomplished the task earlier than $\beta$-VAE, and also increased the reward due to smoother acceleration/deceleration. When the target object was placed as far as 0.4 m, $\beta$-VAE failed in all trials, while q-VAE succeeded in most cases.

As noted above, these performance gains are not due to differences in the prediction accuracy of the world model, but rather due to the reduced computational cost of the compact world model, which achieves almost the minimal realization. In fact, the number of iterations of q-VAE is more than double that of $\beta$-VAE under all conditions. Thus, we confirmed that q-VAE facilitates the minimal realization of the world model through sparsification and contributes to improving the control performance of computationally expensive optimal control such as CEM in real time.

6 Conclusion and discussion

6.1 Conclusion

In this paper, we improved and analyzed q-VAE, a deep learning technique for extracting a sparse latent space, aiming at the minimal realization of the world model. In particular, we clarified the hyperparameter condition under which the modified q-VAE always sparsifies the latent space. In both simulations and experiments, the modified q-VAE, trained according to this condition, successfully collapsed many latent dimensions to zero while maintaining the same level of reconstruction accuracy as the baseline method, $\beta$-VAE. The world model with almost the minimal realization was obtained by masking the unnecessary latent dimensions and utilized in CEM, a sampling-based optimal control method. Consequently, the optimization of CEM was facilitated by the reduction in computational cost, resulting in better control performance.

6.2 Discussion for future work

Two major open issues arise from our investigation. One is how to adjust the hyperparameters. The number of hyperparameters in the modified q-VAE has increased due to the decomposition from the conventional q-VAE, and their optimal values are clearly task-specific, although the sparsification condition limits the degrees of freedom in their design. In particular, q \to 0 is desirable to strongly promote sparsification, but as reported in the literature [31], a small q would exclude many data from training as outliers. In fact, in the highway-env simulations, we observed cases where the other blue cars could not be reconstructed as in the standard VAE, depending on the value of q_{2}. A framework that is adaptive or robust to this trade-off, such as meta-optimization [32] or ensemble learning with multiple combinations of hyperparameters [33], would be useful.

The other open issue is the simultaneous learning of the latent space and the world model. In this paper, the latent space extraction phase and the world model acquisition phase were conducted independently, giving priority to ease of analysis. However, to obtain a state holding the minimal realization, it is desirable to extract the latent space while considering the world model (and the controller). Since the simultaneous learning of multiple modules is generally difficult, it would be important to introduce curriculum learning [34] or similar techniques.

In future work, we will apply our method to more practical tasks and enhance its practicality by resolving the above open issues.

Acknowledgement

This work was supported by JSPS KAKENHI, Grant-in-Aid for Scientific Research (B), Grant Number JP20H04265 and JST, PRESTO Grant Number JPMJPR20C3, Japan.

References

  • [1] Sanchez J, Corrales JA, Bouzgarrou BC, et al. Robotic manipulation and sensing of deformable objects in domestic and industrial applications: a survey. The International Journal of Robotics Research. 2018;37(7):688–716.
  • [2] Tsurumine Y, Cui Y, Uchibe E, et al. Deep reinforcement learning with smooth policy update: Application to robotic cloth manipulation. Robotics and Autonomous Systems. 2019;112:72–83.
  • [3] Modares H, Ranatunga I, Lewis FL, et al. Optimized assistive human–robot interaction using reinforcement learning. IEEE transactions on cybernetics. 2015;46(3):655–667.
  • [4] Kobayashi T, Dean-Leon E, Guadarrama-Olvera JR, et al. Whole-body multicontact haptic human–humanoid interaction based on leader–follower switching: A robot dance of the “box step”. Advanced Intelligent Systems. 2022;4(2):2100038.
  • [5] Paden B, Čáp M, Yong SZ, et al. A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on intelligent vehicles. 2016;1(1):33–55.
  • [6] Williams G, Drews P, Goldfain B, et al. Information-theoretic model predictive control: Theory and applications to autonomous driving. IEEE Transactions on Robotics. 2018;34(6):1603–1622.
  • [7] Botev ZI, Kroese DP, Rubinstein RY, et al. The cross-entropy method for optimization. In: Handbook of statistics. Vol. 31. Elsevier; 2013. p. 35–59.
  • [8] Ha D, Schmidhuber J. World models. arXiv preprint arXiv:180310122. 2018;.
  • [9] Hafner D, Lillicrap T, Ba J, et al. Dream to control: Learning behaviors by latent imagination. In: International Conference on Learning Representations; 2020.
  • [10] Hafner D, Lillicrap TP, Norouzi M, et al. Mastering atari with discrete world models. In: International Conference on Learning Representations; 2021.
  • [11] Okada M, Kosaka N, Taniguchi T. Planet of the bayesians: Reconsidering and improving deep planning network by incorporating bayesian inference. In: IEEE/RSJ International Conference on Intelligent Robots and Systems; IEEE; 2020. p. 5611–5618.
  • [12] Okada M, Taniguchi T. Dreaming: Model-based reinforcement learning by latent imagination without reconstruction. In: IEEE International Conference on Robotics and Automation; IEEE; 2021. p. 4209–4215.
  • [13] Kingma DP, Welling M. Auto-encoding variational bayes. In: International Conference on Learning Representations; 2014.
  • [14] Higgins I, Matthey L, Pal A, et al. beta-vae: Learning basic visual concepts with a constrained variational framework. In: International Conference on Learning Representations; 2017.
  • [15] Mathieu E, Rainforth T, Siddharth N, et al. Disentangling disentanglement in variational autoencoders. In: International Conference on Machine Learning; PMLR; 2019. p. 4402–4412.
  • [16] Okada M, Taniguchi T. Variational inference mpc for bayesian model-based reinforcement learning. In: Conference on robot learning; PMLR; 2020. p. 258–272.
  • [17] Williams RL, Lawrence DA, et al. Linear state-space control systems. John Wiley & Sons; 2007.
  • [18] Higgins I, Amos D, Pfau D, et al. Towards a definition of disentangled representations. arXiv preprint arXiv:181202230. 2018;.
  • [19] Kobayashi T. q-vae for disentangled representation learning and latent dynamical systems. IEEE Robotics and Automation Letters. 2020;5(4):5669–5676.
  • [20] Tsallis C. Possible generalization of boltzmann-gibbs statistics. Journal of statistical physics. 1988;52(1-2):479–487.
  • [21] Suyari H, Tsukada M. Law of error in tsallis statistics. IEEE Transactions on Information Theory. 2005;51(2):753–757.
  • [22] Gil M, Alajaji F, Linder T. Rényi divergence measures for commonly used univariate continuous distributions. Information Sciences. 2013;249:124–131.
  • [23] Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:14126980. 2014;.
  • [24] Ilboudo WEL, Kobayashi T, Sugimoto K. Robust stochastic gradient descent with student-t distribution based first-order momentum. IEEE Transactions on Neural Networks and Learning Systems. 2020;.
  • [25] Takahashi H, Iwata T, Yamanaka Y, et al. Student-t variational autoencoder for robust density estimation. In: International Joint Conference on Artificial Intelligence; 2018. p. 2696–2702.
  • [26] Loaiza-Ganem G, Cunningham JP. The continuous bernoulli: fixing a pervasive error in variational autoencoders. Advances in Neural Information Processing Systems. 2019;32.
  • [27] Leurent E. An environment for autonomous driving decision-making [https://github.com/eleurent/highway-env]; 2018.
  • [28] Paszke A, Gross S, Chintala S, et al. Automatic differentiation in pytorch. In: Advances in Neural Information Processing Systems Workshop; 2017.
  • [29] Hoyer PO. Non-negative matrix factorization with sparseness constraints. Journal of machine learning research. 2004;5(9).
  • [30] Kemp CC, Edsinger A, Clever HM, et al. The design of stretch: A compact, lightweight mobile manipulator for indoor human environments. In: International Conference on Robotics and Automation; IEEE; 2022. p. 3150–3157.
  • [31] Kobayashi T, Enomoto T. Towards autonomous driving of personal mobility with small and noisy dataset using tsallis-statistics-based behavioral cloning. arXiv preprint arXiv:211114294. 2021;.
  • [32] Aotani T, Kobayashi T, Sugimoto K. Meta-optimization of bias-variance trade-off in stochastic model learning. IEEE Access. 2021;9:148783–148799.
  • [33] Sagi O, Rokach L. Ensemble learning: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. 2018;8(4):e1249.
  • [34] Graves A, Bellemare MG, Menick J, et al. Automated curriculum learning for neural networks. In: International conference on machine learning; PMLR; 2017. p. 1311–1320.
  • [35] Elfwing S, Uchibe E, Doya K. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks. 2018;107:3–11.
  • [36] Ba JL, Kiros JR, Hinton GE. Layer normalization. arXiv preprint arXiv:160706450. 2016;.

Appendix A Learning configurations

Table 4: Network designs for VAE
Meaning Setting
Kernel sizes for convolutional layers [4, 3, 3, 3, 3]
Channel sizes for convolutional layers [8, 16, 32, 64, 128]
Unit sizes for FC layers ① [8, 32, 128]
Unit sizes for FC layers ② [100, 80, 60]
Kernel sizes for deconvolutional layers [3, 3, 3, 3, 4]
Channel sizes for deconvolutional layers [128, 64, 32, 16, 8]
Unit sizes for FC layers ③ [128, 32, 8]
Unit sizes for FC layers ④ [60, 80, 100]
The size of latent space |\mathcal{Z}| {50, 30}
Activation function for all layers Swish [35] + Layer normalization [36]
Table 5: Network designs for world model
Meaning Setting in simulation Setting in experiment
Unit sizes for dynamics [50, 50] [10, 10]
Unit sizes for reward [20, 10] [10, 10]
Activation function for all layers Tanh + Layer normalization [36]
Table 6: Learning configurations
Meaning Setting for VAE Setting for dynamics Setting for reward
Optimizer t-Adam [24] Adam [23] Adam [23]
Learning rate 1\times 10^{-4} 1\times 10^{-3} 3\times 10^{-4}
Batch size 256 512 512

In the VAE, a continuous Bernoulli distribution [26] is employed for the image decoder p(x_{\mathrm{img}}\mid z), and a diagonal Gaussian distribution is employed for the velocity decoder p(x_{\mathrm{vel}}\mid z). Since |\mathcal{X}_{\mathrm{img}}|\gg|\mathcal{X}_{\mathrm{vel}}|, c=1 in eq. (23) corresponds to \mathcal{X}_{\mathrm{vel}} and c=2 corresponds to \mathcal{X}_{\mathrm{img}}. The encoder p(z\mid x) is given by a diagonal Gaussian distribution.
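A minimal PyTorch sketch of these output distributions is given below, assuming the networks produce logits for the image and mean/log-std pairs for the velocity and the latent state; the helper names are hypothetical.

from torch.distributions import ContinuousBernoulli, Independent, Normal

def image_decoder_dist(logits):
    # p(x_img | z): continuous Bernoulli over pixels, treating the
    # (C, H, W) dimensions as a single event
    return Independent(ContinuousBernoulli(logits=logits), 3)

def velocity_decoder_dist(mean, log_std):
    # p(x_vel | z): diagonal Gaussian over the low-dimensional velocity
    return Independent(Normal(mean, log_std.exp()), 1)

def encoder_dist(mean, log_std):
    # p(z | x): diagonal Gaussian posterior over the latent state
    return Independent(Normal(mean, log_std.exp()), 1)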

The details of each module in the figure are summarized in Table 4. The network architecture for the world model is summarized in Table 5, and the learning conditions for the respective modules are summarized in Table 6. With these settings, multiple VAEs are first trained with different random seeds, the median model among them is selected, and the world model is then trained using it.
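As a rough illustration of the convolutional encoder trunk implied by Table 4, the following sketch stacks the listed kernels and channels; the stride, the padding, and the use of GroupNorm(1, ·) as a shape-agnostic stand-in for layer normalization over feature maps are assumptions not specified in the table.

import torch.nn as nn

def conv_block(in_ch, out_ch, kernel):
    # Conv -> normalization -> Swish (SiLU), following Table 4;
    # GroupNorm(1, out_ch) stands in for layer normalization over feature maps
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride=2, padding=1),
        nn.GroupNorm(1, out_ch),
        nn.SiLU(),
    )

kernels  = [4, 3, 3, 3, 3]
channels = [8, 16, 32, 64, 128]
blocks, in_ch = [], 3                     # 64x64x3 RGB input
for k, c in zip(kernels, channels):
    blocks.append(conv_block(in_ch, c, k))
    in_ch = c
image_encoder_trunk = nn.Sequential(*blocks)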

A.1 Configurations for simulation

Table 7: Configurations for highway-env and CEM
Symbol Meaning Value
|\mathcal{A}| The size of action space 2
|\mathcal{X}_{\mathrm{img}}| The size of image observation 64\times64\times3
|\mathcal{X}_{\mathrm{vel}}| The size of velocity observation 2
Control frequency 10 Hz
H Horizon step 10
K The number of candidates 10,000
Maximum iteration 10
\nu Elite ratio 0.01
\eta Smooth update 0.4

The network architecture of the VAE, implemented in PyTorch [28], is illustrated in Fig. 5. Since the simulation task is more complex than the experimental one, the latent dimension size |\mathcal{Z}| is set to a larger value of 50, and the world model is also designed to be slightly larger than in the experiment (although still small enough).

The control frequency is set to 10 Hz. Because it is necessary to predict some time ahead for safe driving, the horizon is set to H=10. Other configurations are summarized in Table 7.
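For reference, the following is a minimal sketch of CEM using the values in Table 7; the exact form of the smooth update with \eta and the cost function interface are assumptions for illustration, not the exact implementation used in this work.

import numpy as np

def cem_plan(cost_fn, act_dim, H=10, K=10000, iters=10, elite_ratio=0.01, eta=0.4):
    """Cross-entropy method over an H-step action sequence (values from Table 7).

    cost_fn: evaluates (K, H, act_dim) candidate sequences with the learned
             world model and returns one cost per candidate, shape (K,).
    """
    mean = np.zeros((H, act_dim))
    std = np.ones((H, act_dim))
    n_elite = max(1, int(K * elite_ratio))
    for _ in range(iters):
        candidates = mean + std * np.random.randn(K, H, act_dim)
        elites = candidates[np.argsort(cost_fn(candidates))[:n_elite]]
        # smooth update of the sampling distribution with eta
        mean = (1.0 - eta) * mean + eta * elites.mean(axis=0)
        std = (1.0 - eta) * std + eta * elites.std(axis=0)
    return mean[0]   # execute only the first action (receding horizon)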

For collecting the dataset, CEM with the true world model and the true state (i.e. the position and velocity of each car) is utilized. Using it, 52,365 tuples for training, 971 tuples for validation, and 2,863 tuples for testing are collected. Note that noise \epsilon\sim\mathcal{N}(0,I) is injected into the actions from CEM to generate diverse data.

A.2 Configurations for experiment

Table 8: Configurations for stretch-reach and CEM
Symbol Meaning Value
|\mathcal{A}| The size of action space 2
|\mathcal{X}_{\mathrm{img}}| The size of image observation 64\times64\times3
|\mathcal{X}_{\mathrm{arm}}| The size of arm observation 4
Control frequency 1 Hz
H Horizon step 5
K The number of candidates 10,000
Maximum iteration 20
\nu Elite ratio 0.01
\eta Smooth update 0.4

The network architecture for the VAE is almost identical to Fig. 5 and Table 4, except that x_{\mathrm{vel}} is replaced with x_{\mathrm{arm}} and the latent dimension size is |\mathcal{Z}|=30. In contrast, the network architecture for the world model is lightened due to the simplicity of the task and the importance of real-time control, as shown in Table 5.

For the control by CEM, the control frequency is set to 1 Hz. However, taking other processes, such as encoding the observation, into account, the allowable computational time for CEM is limited to 0.56 s. Since predicting far into the future does not contribute significantly to control performance in this task, H=5 is set to reduce the computational cost. Other configurations are summarized in Table 8.

For collecting the dataset, a simple feedback controller with the explicit target position is employed. Using it, 2,983 tuples for training, 63 tuples for validation, and 107 tuples for testing are collected. Note that the amount of data is far smaller than that for the simulation due to the lack of diversity. Accordingly, the noise injected into the actions is also reduced to \epsilon\sim\mathcal{N}(0,0.025^{2}I).