α-divergence improves the entropy production estimation via machine learning
Abstract
Recent years have seen a surge of interest in the algorithmic estimation of stochastic entropy production (EP) from trajectory data via machine learning. A crucial element of such algorithms is the identification of a loss function whose minimization guarantees the accurate EP estimation. In this study, we show that there exists a host of loss functions, namely those implementing a variational representation of the α-divergence, which can be used for the EP estimation. By fixing α to a value between −1 and 0, the α-NEEP (Neural Estimator for Entropy Production) exhibits a much more robust performance against strong nonequilibrium driving or slow dynamics, which adversely affects the existing method based on the Kullback-Leibler divergence (α = 0). In particular, the choice α = −1/2 tends to yield the optimal results. To corroborate our findings, we present an exactly solvable simplification of the EP estimation problem, whose loss function landscape and stochastic properties give deeper intuition into the robustness of the α-NEEP.
I Introduction
How irreversible does a process look? One may pose this question for two distinct reasons. First, whether a biological process requires energy dissipation is often a subject of much debate [1, 2]. To resolve this issue, it is useful to note that irreversibility suggests energy dissipation. Various hallmarks of irreversibility, such as the breaking of the fluctuation-dissipation theorem [3] and the presence of nonequilibrium probability currents in the phase space [4, 5], have been used to determine whether energy is dissipated. Second, whether a nonequilibrium system allows for an effective equilibrium description is an important issue. For instance, in active matter, despite the energy dissipation at the microscopic level, it has been argued that the large-scale phenomena allow for an effective equilibrium description [6, 7, 8, 9, 10]. If we can quantify the irreversibility of an empirical process at various levels of coarse-graining [11, 12], it will provide us with helpful clues as to whether we should look for an effective equilibrium theory for the process.
Based on the framework of stochastic thermodynamics, modern thermodynamics assigns entropy production (EP) to each stochastic trajectory based on its irreversibility [13]. Thus, empirically measuring the irreversibility of a process is closely tied to the problem of estimating EP from sampled trajectories [14, 15, 16, 17, 18, 19, 20, 21]. A straightforward approach to the problem is to evaluate the relevant transition probabilities by directly counting the number of trajectory segments, which is called the plug-in method [14, 15]. The method, readily applicable to discrete systems, can also be applied to continuous systems through the use of kernel functions [16]. However, while this method is simple and intuitive, it requires a huge ensemble of lengthy trajectories for accurate estimations (curse of dimensionality). More recent studies proposed methods based on universal lower bounds of the average EP, such as the thermodynamic uncertainty relations [16, 17, 18, 19] and the entropic bound [20]. While these methods do not suffer from the curse of dimensionality and are applicable even to non-stationary processes [19, 20], their accuracy is impaired when the underlying bounds are not tight. Moreover, these methods are applicable only to the estimation of the average EP, not the EP of each trajectory.
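The plug-in idea can be sketched for a discrete system in a few lines. The snippet below is a minimal illustration, not the cited implementations: it counts consecutive state pairs along a trajectory of a 3-state Markov chain (a hypothetical transition matrix chosen so that detailed balance is broken) and compares the empirical joint probabilities with their time-reversed counterparts.

```python
import numpy as np

# Hypothetical 3-state transition matrix with a nonzero cyclic current.
P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5],
              [0.5, 0.2, 0.3]])

# Stationary distribution: left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()

# Exact average EP per step from the stationary joint probabilities.
exact_ep = sum(pi[i] * P[i, j] * np.log(pi[i] * P[i, j] / (pi[j] * P[j, i]))
               for i in range(3) for j in range(3))

# Plug-in estimate: count ordered transition pairs along a sampled trajectory.
rng = np.random.default_rng(0)
cum = P.cumsum(axis=1)
counts = np.zeros((3, 3))
s = 0
for u in rng.random(200_000):
    sp = min(int(np.searchsorted(cum[s], u)), 2)  # sample next state
    counts[s, sp] += 1
    s = sp
joint = counts / counts.sum()
plugin_ep = float((joint * np.log(joint / joint.T)).sum())
```

For a 3-state chain the counting is trivial; the curse of dimensionality mentioned above appears once the same counting must cover a high-dimensional or continuous state space.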
Meanwhile, with the advent of machine learning techniques in physics, a novel method for EP estimation using artificial neural networks has been developed [21]. This method, called the Neural Estimator for Entropy Production (NEEP), minimizes the loss function based on a variational representation of the Kullback-Leibler (KL) divergence. Without any presupposed discretization of the phase space and using the rich expressivity of neural networks, the NEEP suffers far less from the complications of the sampling issues and is applicable to a diverse range of stochastic processes [19].
Still, the NEEP has its limits. Its accuracy deteriorates when the nonequilibrium driving is strong or when the dynamics slows down so that the phase space is poorly sampled. In this study, we show that the NEEP can be significantly improved by changing the loss function. Toward this purpose, we propose the α-NEEP, which generalizes the NEEP. Instead of the KL divergence, the α-NEEP utilizes the α-divergence, which has been mainly used in the machine learning community [22, 23, 24, 25]. We demonstrate that the α-NEEP with nonzero values of α shows much more robust performance for a broader range of nonequilibrium driving and sampling quality, with α = −1/2 showing the optimal performance overall. This is corroborated by an analytically tractable simplification of the α-NEEP that shows the optimality of α = −1/2.
The rest of this paper is organized as follows. After reviewing the original NEEP and its limitations (Sec. II), we introduce the α-NEEP (Sec. III) and demonstrate its enhanced performance for three different examples of nonequilibrium systems (Sec. IV). Then we investigate the rationale behind the observed results using a simplified model describing how the α-NEEP works (Sec. V). Finally, we sum up the results and discuss their implications (Sec. VI).

II Overview of the Original NEEP
We first give a brief overview of how the original NEEP [21] estimates EP at the trajectory level. Suppose our goal is to estimate EP of a Markov process in discretized time, s_t, in a d-dimensional space. For every ordered pair of states, denoted by (s, s'), there is EP associated with the transition between them, which is given by the ratio between the forward and the backward path probabilities

(1) ΔS[s → s'] = ln [ p(s, s') / p(s', s) ],

where p(s, s') denotes the probability of observing the consecutive pair of states (s, s'). Note that, throughout this study, we use the unit system in which the Boltzmann constant can be set to unity (k_B = 1). Then it follows that the ensemble average of this EP is equivalent to the KL divergence, which satisfies the inequality
(2) ⟨ΔS⟩ = D_KL(p ‖ p̃) ≥ ⟨ln h⟩_p − ⟨h⟩_p̃ + 1

for any positive function h(s, s'), where p̃(s, s') ≡ p(s', s) denotes the reverse transition probability and ⟨·⟩_p (⟨·⟩_p̃) the average with respect to the distribution p (p̃). This inequality can be proven as follows: since ln x is a concave function, the line tangent to it at any point never falls below the function. Thus, ln x ≤ ln x_0 + (x − x_0)/x_0 for any x, x_0 > 0. By putting x = h and x_0 = p/p̃ and taking the average with respect to p, we get the inequality. In this derivation, we immediately note that the equality condition is satisfied if and only if h = p/p̃. Hence, by varying h to maximize the right-hand side of Eq. (2), we accurately estimate the average EP ⟨ΔS⟩. For this reason, Eq. (2) is called the variational representation of the KL divergence. Moreover, as a byproduct, we also obtain the maximizer h* = p/p̃, which yields an accurate estimate for trajectory-level EP by ΔS[s → s'] = ln h*(s, s').
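The variational bound is easy to verify numerically. The sketch below checks it by Monte Carlo for two unit-variance Gaussians shifted by 1, for which the KL divergence is exactly 1/2; the one-parameter family h_c(x) = exp(cx − c²/2) is a hypothetical ansatz that contains the exact density ratio at c = 1.

```python
import numpy as np

# p = N(1, 1), q = N(0, 1), so D_KL(p||q) = 1/2 exactly.
rng = np.random.default_rng(0)
n = 400_000
xp = rng.normal(1.0, 1.0, n)   # samples from p
xq = rng.normal(0.0, 1.0, n)   # samples from q

def lower_bound(c):
    # variational lower bound <ln h>_p - <h>_q + 1 for h_c(x) = exp(c x - c^2/2)
    return (c * xp - c**2 / 2).mean() - np.exp(c * xq - c**2 / 2).mean() + 1.0

kl_exact = 0.5
```

At c = 1 the bound saturates (up to Monte Carlo noise), while any other c yields a strictly smaller value.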
Kim et al. [21] used these properties to construct the loss function of the NEEP. More specifically, they introduce h_θ(s, s'), an estimator for trajectory-level EP parametrized by θ, and put h = e^{h_θ}. Then, Eq. (2) can be rewritten as

(3) ⟨ΔS⟩ ≥ ⟨h_θ(s, s')⟩_p − ⟨e^{h_θ(s, s')}⟩_p̃ + 1,

where h = e^{h_θ} has been used based on the one-to-one correspondence between h and h_θ. Furthermore, since EP is odd under time reversal, i.e., ΔS[s' → s] = −ΔS[s → s'], it is natural to impose the same condition on h_θ. This leads to the inequality

(4) ⟨ΔS⟩ ≥ ⟨h_θ(s, s') − e^{−h_θ(s, s')}⟩_p + 1,
which motivates the loss function
(5) L(θ) = ⟨−h_θ(s, s') + e^{−h_θ(s, s')}⟩_p − 1,

so that the minimization of L ensures the accurate EP estimation h_θ(s, s') = ΔS[s → s'].
It is notable that L defined above is a convex functional of h_θ. Thus, as long as the θ-dependence of h_θ is well behaved, any gradient-descent algorithm can reach the global minimum of L without getting trapped in a local minimum. In this regard, the rugged loss function landscape is not a major issue of the NEEP.
However, the performance of the NEEP strongly depends on how well p(s, s') is sampled. Since the second term of L depends exponentially on −h_θ, rare transitions with minute p(s, s') can make nonnegligible contributions to L when e^{−h_θ} is extremely large. Since the frequency of rare events is subject to considerable sampling noise, the performance of the original NEEP deteriorates in the presence of a strong nonequilibrium driving, which induces rare transitions with large negative EP. In the following section, we propose a loss function that remedies this weakness of the NEEP.
III Formulation of the α-NEEP
Here we formulate a generalization of the NEEP loss function with the goal of mitigating its strong sampling-noise dependence. We note that the loss function need not be an estimator of the average EP ⟨ΔS⟩, for our goal is to estimate EP at the level of each trajectory. Thus, while the original NEEP uses the variational representation of the KL divergence corresponding to α = 0, we propose a different approach based on the variational representation of the α-divergence, which quantifies the difference between a pair of probability distributions p and q as

(6) D_α(p ‖ q) = [1/(α(1+α))] ∫ dx [ p(x)^{1+α} q(x)^{−α} − p(x) ].

Since this reduces to the KL divergence in the limit α → 0, our approach generalizes the NEEP by introducing an extra parameter α. To emphasize this aspect, we term our method the α-NEEP.
The goal of the α-NEEP is to find the function h that minimizes the loss function

(7) L_α[h] = [1/(1+α)] ⟨h(x)^{1+α}⟩_q − (1/α) ⟨h(x)^α⟩_p + 1/(α(1+α)),

where p and q are probability density functions, and α is a real number other than 0 and −1. See Appendix B for discussions of these two exceptional cases. It can be rigorously shown (see Appendix B) that L_α satisfies the inequality

(8) L_α[h] ≥ −D_α(p ‖ q),

where the equality is achieved if and only if h(x) = p(x)/q(x) for all x. In other words, by minimizing L_α[h] to find its minimizer h*, we also obtain an estimate for the ratio p(x)/q(x). We note that the properties of L_α used here are also valid for a much more general class of loss functions, as discussed in [22, 23] (also see Appendix B).
Based on Eq. (8), we can construct a loss function

(9) L_α(θ) = ⟨ [1/(1+α)] e^{−(1+α) h_θ(s, s')} − (1/α) e^{α h_θ(s, s')} ⟩_p + 1/(α(1+α)),

which follows from Eq. (7) by putting p → p(s, s'), q(s, s') → p(s', s), and h = e^{h_θ} with the antisymmetric h_θ. Note that this reduces to the loss function of the original NEEP shown in Eq. (5) in the limit α → 0. If the θ-dependence of h_θ is sufficiently well behaved, the minimization of L_α yields the minimizer θ* which satisfies L_α(θ*) = −D_α and h_{θ*}(s, s') = ΔS[s → s']. The former is generally not equal to minus the average EP (unless α → 0), but the latter ensures the accurate estimation of trajectory-level EP.

Comparing Eqs. (5) and (9), one readily observes that the exponential dependence on h_θ can be made much weaker in L_α by choosing the value of α between −1 and 0. Since this mitigates the detrimental effects of the sampling error associated with rare trajectories with large negative ΔS, one can naturally expect that the performance of the α-NEEP is much more robust against strong nonequilibrium driving. This is confirmed in the following sections.
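For concreteness, the empirical forms of the two loss functions can be sketched as below (numpy, with h an array of estimator outputs over sampled forward transitions). For −1 < α < 0 every exponent appearing in the loss has magnitude smaller than |h|, with α = −1/2 halving it; the sketch also makes the α → 0 limit and the exact identity L_α = L_{−(1+α)} visible numerically.

```python
import numpy as np

def alpha_neep_loss(h, alpha):
    # Empirical alpha-NEEP loss evaluated on an array h of estimator
    # outputs h_theta(s, s') over sampled forward transitions.
    return (np.exp(-(1 + alpha) * h).mean() / (1 + alpha)
            - np.exp(alpha * h).mean() / alpha
            + 1.0 / (alpha * (1 + alpha)))

def neep_loss(h):
    # alpha -> 0 limit: the original NEEP loss
    return (np.exp(-h) - h).mean() - 1.0

rng = np.random.default_rng(1)
h = rng.normal(1.0, 2.0, 1000)  # mock estimator outputs for illustration
```

Sending α → 0 numerically reproduces the NEEP loss, and evaluating at α and −(1+α) gives the same value, reflecting the symmetry discussed below.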
Before proceeding, a few remarks are in order:
1. The loss function satisfies L_α(θ) = L_{−(1+α)}(θ), so the α-NEEP is symmetric under the exchange α ↔ −(1+α). For this reason, in the rest of this paper, we focus on the regime −1/2 ≤ α ≤ 0 (the regime α > 0 leads to very poor performance and is left out).
2. From the antisymmetry h_θ(s', s) = −h_θ(s, s'), we may set the estimator to be related to the feedforward neural network (FNN) output g_θ as

(10) h_θ(s, s') = g_θ(s, s') − g_θ(s', s),

so that the neural network focuses on the estimators that satisfy the antisymmetry of EP for more efficient training. The method described so far is schematically illustrated in Fig. 1.
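The antisymmetrization in Eq. (10) can be sketched in numpy with a small randomly initialized network standing in for the trained FNN (the layer sizes here are illustrative, not those used in our training):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2   # phase-space dimension; the input to g is the concatenated pair (s, s')
W1 = rng.normal(0.0, 0.1, (2 * D, 32))
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1))

def g_net(s, sp):
    # plain feedforward network g_theta acting on the pair (s, s')
    z = np.maximum(np.concatenate([s, sp]) @ W1 + b1, 0.0)  # ReLU hidden layer
    return float(z @ W2)

def h_est(s, sp):
    # antisymmetrized estimator: odd under (s, s') -> (s', s) by construction
    return g_net(s, sp) - g_net(sp, s)

s, sp = rng.normal(size=D), rng.normal(size=D)
```

Whatever the weights are, h_est(s, s') = −h_est(s', s) and h_est(s, s) = 0 hold identically, so the antisymmetry of EP is built in rather than learned.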
3. We emphasize that the minimized L_α is not directly related to the average EP. In all cases, we compute the average EP by averaging the estimate h_θ(s, s') over the sampled transitions.
IV Examples
To assess the performance of the α-NEEP for various values of α, we apply the method to toy models of nonequilibrium systems, namely the two-bead model, the Brownian gyrator, and the driven Brownian particle.

(i) The two-bead model. This model has been used in a number of previous studies as a benchmark for testing EP estimators [4, 16, 18, 21]. The model consists of two one-dimensional (1D) overdamped beads which are connected to each other and to the walls on both sides by identical springs, see Fig. 2(a). The beads are in contact with heat baths at temperatures T_h and T_c with T_h ≥ T_c. Denoting by x_h (x_c) the position of the bead in contact with the hot (cold) bath, the stochastic equations of motion are given by

(11a) γ ẋ_h = −k(2x_h − x_c) + ξ_h(t),
(11b) γ ẋ_c = −k(2x_c − x_h) + ξ_c(t).

Here k is the spring constant, γ the friction coefficient, and ξ_i the Gaussian thermal noise with zero mean and ⟨ξ_i(t) ξ_j(t')⟩ = 2γT_i δ_{ij} δ(t − t') for i, j ∈ {h, c}. For infinitesimal displacements (dx_h, dx_c), the associated EP is given by
(12) ΔS = Σ_{i=h,c} (F_i ∘ dx_i)/T_i + ΔS_sys,

where F_i denotes the deterministic force on bead i appearing in Eq. (11), ∘ denotes the Stratonovich product, and ΔS_sys the change of the system's Shannon entropy, namely

(13) ΔS_sys = ln p_ss(x_h, x_c) − ln p_ss(x_h + dx_h, x_c + dx_c)

for the steady-state distribution p_ss. Since the system is fully linear, p_ss can be calculated analytically. Thus the EP of this model can be calculated exactly using Eq. (12) and compared with the α-NEEP result.
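A minimal Euler–Maruyama sketch of this setup (with illustrative parameter values k = γ = 1, not necessarily those used in our experiments) accumulates the environmental part of Eq. (12) using midpoint forces for the Stratonovich products. It checks only the qualitative facts: the EP rate is clearly positive for T_h > T_c and consistent with zero at equilibrium.

```python
import numpy as np

def ep_rate(Th, Tc, k=1.0, gamma=1.0, dt=2e-3, steps=500_000, seed=0):
    # Euler-Maruyama integration of Eq. (11); the environmental entropy
    # production F o dx / T is accumulated with midpoint (Stratonovich) forces.
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((steps, 2))
    sh = np.sqrt(2 * Th * dt / gamma)
    sc = np.sqrt(2 * Tc * dt / gamma)
    xh = xc = 0.0
    s_env = 0.0
    for t in range(steps):
        Fh = -k * (2 * xh - xc)
        Fc = -k * (2 * xc - xh)
        dxh = Fh / gamma * dt + sh * noise[t, 0]
        dxc = Fc / gamma * dt + sc * noise[t, 1]
        # midpoint forces for the Stratonovich products F o dx
        Fh_mid = -k * (2 * (xh + dxh / 2) - (xc + dxc / 2))
        Fc_mid = -k * (2 * (xc + dxc / 2) - (xh + dxh / 2))
        s_env += Fh_mid * dxh / Th + Fc_mid * dxc / Tc
        xh += dxh
        xc += dxc
    return s_env / (steps * dt)
```

In the steady state the average ΔS_sys vanishes, so this environmental rate equals the average total EP rate up to sampling noise.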
To see how the predicted EP differs from the true EP, we observe the behavior of the mean square error (MSE) ⟨(h_θ − ΔS)²⟩. In Fig. 2(b), we observe that strengthening the nonequilibrium driving (by increasing T_h while keeping T_c fixed) tends to impair the EP estimation. This is because a stronger driving makes the reverse trajectories of typical trajectories rarer, lowering the sample quality. The adverse effects of the nonequilibrium driving are the strongest for the original NEEP (α = 0) and are mitigated by choosing different values of α. Remarkably, choosing α = −1/2 leads to the most robust performance against the driving.
As an alternative measure of the estimator's performance, we also observe the ratio ⟨h_θ⟩/⟨ΔS⟩ between the predicted average EP and the exact average EP. The results are shown in Fig. 2(c), which exhibit two different regimes. As T_h increases, there is a regime where the estimator overestimates the average EP, which is followed by an underestimation regime. A detailed explanation for this behavior will be given in Sec. V using a simplified model. At the moment, we note that the ratio tends to deviate away from 1 most strongly for the original NEEP (α = 0), while choosing different values of α makes the ratio stay closer to 1. Again, the optimal value of α seems to be −1/2.

(ii) The Brownian gyrator. This simple model of a single-particle heat engine allows us to check the effects of a nonequilibrium driving apart from the temperature difference. The dynamics of the model is governed by

(14a) γ ẋ = −∂_x V(x, y) + F_x^nc(x, y) + ξ_x(t),
(14b) γ ẏ = −∂_y V(x, y) + F_y^nc(x, y) + ξ_y(t),

where V(x, y) is the harmonic potential confining the particle, and F^nc(x, y) = ε(y, −x) is a nonconservative force that drives the system out of equilibrium and enables work extraction. The Gaussian thermal noises satisfy ⟨ξ_i(t) ξ_j(t')⟩ = 2γT_i δ_{ij} δ(t − t') for i, j ∈ {x, y}. See Fig. 3(a) for an illustration of this system. For infinitesimal displacements (dx, dy), the associated EP is given by
(15) ΔS = (F_x ∘ dx)/T_x + (F_y ∘ dy)/T_y + ΔS_sys,

where

(16a) F_x = −∂_x V(x, y) + ε y,
(16b) F_y = −∂_y V(x, y) − ε x,

and ΔS_sys is the change of the system entropy. Again, the system is fully linear and the steady-state distribution can be calculated analytically, allowing exact calculations of EP at the trajectory level.
Fixing the temperatures T_x and T_y, we vary the magnitude of ε to assess the robustness of the α-NEEP in terms of the MSE and the ratio ⟨h_θ⟩/⟨ΔS⟩, as shown in Figs. 3(b) and (c), respectively. The results are qualitatively similar to the case of the two-bead model: as the nonconservative driving gets stronger, the performance of the original NEEP (α = 0) deteriorates the most, while other values of α yield more robust results. Again, α = −1/2 seems to be the optimal choice.
(iii) The driven Brownian particle. While the two examples given above were both linear systems, we also consider a nonlinear system featuring a 1D overdamped Brownian particle in a periodic potential V(x) = V(x + L), driven by a constant force f. The motion of the particle is described by the Langevin equation

(17) γ ẋ = f − V'(x) + √(2γT) ξ(t),

where ξ(t) is a Gaussian white noise with zero mean and unit variance, ⟨ξ(t) ξ(t')⟩ = δ(t − t'). See Fig. 4(a) for an illustration of the model. For a sufficiently large amplitude of V, this model can approximate the behaviors of the Markov jump process on a discrete chain. For this model, the EP associated with the infinitesimal displacement dx is given by

(18) ΔS = (1/T) [f − V'(x)] ∘ dx + ΔS_sys,

where ΔS_sys again denotes the Shannon entropy change for the steady-state distribution p_ss. Since the system is 1D, it is straightforward to obtain p_ss by numerical integration. Thus, the EP of this model can also be calculated exactly and compared to the α-NEEP result.
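The numerical integration for p_ss can be sketched with the standard quadrature formula for a periodically driven overdamped particle (here with γ = 1), p_ss(x) ∝ e^{−φ(x)/T} ∫_x^{x+L} e^{φ(y)/T} dy with the tilted potential φ(x) = V(x) − fx. The sinusoidal form V(x) = A sin(2πx/L) is an assumed choice for illustration; the check below verifies that the resulting probability current J = [f − V'(x)] p_ss − T p_ss' is constant and positive.

```python
import numpy as np

# Illustrative parameters: amplitude A, period L, drive f, temperature T.
A, L, f, T = 1.0, 2 * np.pi, 1.0, 1.0
n = 4000
dx = L / n
x = np.arange(n) * dx

def phi(z):
    # tilted potential phi(z) = V(z) - f z with V(z) = A sin(2 pi z / L)
    return A * np.sin(2 * np.pi * z / L) - f * z

# I(x) = int_x^{x+L} e^{phi(y)/T} dy via a cumulative trapezoid rule on [0, 2L]
y = np.arange(2 * n + 1) * dx
ey = np.exp(phi(y) / T)
cum = np.concatenate(([0.0], np.cumsum((ey[1:] + ey[:-1]) / 2 * dx)))
I = cum[n:2 * n] - cum[:n]

p_ss = np.exp(-phi(x) / T) * I
p_ss /= p_ss.sum() * dx          # normalize over one period [0, L)

# steady-state probability current J = [f - V'(x)] p_ss - T p_ss'
dV = A * (2 * np.pi / L) * np.cos(2 * np.pi * x / L)
dp = (np.roll(p_ss, -1) - np.roll(p_ss, 1)) / (2 * dx)   # periodic derivative
J = (f - dV) * p_ss - T * dp
```

The constancy of J is a direct check that the quadrature indeed solves the stationary Fokker-Planck equation on the ring.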

Fixing f, the performance of the α-NEEP for this model is shown in Figs. 4(b) and (c) in terms of the MSE and the ratio ⟨h_θ⟩/⟨ΔS⟩, respectively. Due to the presence of a strong background driving f, there are already considerable differences among the different methods at zero potential amplitude. But it is worth noting that increasing the amplitude of the periodic potential clearly increases the MSE and makes ⟨h_θ⟩/⟨ΔS⟩ deviate farther away from 1 for the original NEEP (α = 0). This may be the consequence of rarer movements (jumps from one potential well to the next) across the system as the potential well gets deeper, which means rare trajectories are even more poorly sampled. The α-NEEPs with nonzero values of α are much more robust against the increase of the potential amplitude, with α = −1/2 showing the best performance overall.

V Simple Gaussian Model
The results shown thus far clearly indicate that, by choosing a nonzero value of α, the α-NEEP can exhibit a much more robust performance against the adverse effects of the nonequilibrium driving. Moreover, α = −1/2 seems to exhibit the best performance in many cases. To gain more intuition into these results, we simplify the EP estimation problem to the density-ratio estimation problem for a 1D random variable. To be specific, we estimate the log ratio σ(x) = ln[p(x)/p(−x)] given samples drawn from the distribution p(x). It is intuitively clear that this problem is structurally equivalent to EP estimation.
For further simplification, we set
(19) p(x) = N exp[−(x − μ)²/(2s²)] for |x| ≤ x_c, and p(x) = 0 otherwise.

Here N is a suitable normalization factor, μ the positive mean of the distribution, s the width of the distribution, and x_c a positive number truncating the tails of the distribution. While x_c → ∞ corresponds to the perfect sampling of a Gaussian distribution, a finite x_c corresponds to the case where the tails of the distribution are poorly sampled.

For x_c → ∞, the correct answer to the problem is the linear function σ(x) = w* x, where w* = 2μ/s². Thus, for further simplicity, we focus on the one-parameter model σ_w(x) = w x, which estimates σ(x) using only a single parameter w. For this problem, the suitable loss function is obtained as an analog of Eq. (9):
(20) L_α(w) = ⟨ [1/(1+α)] e^{−(1+α) w x} − (1/α) e^{α w x} ⟩_p + 1/(α(1+α)).
If x_c is large but finite, the minimum of this loss function shifts to w* + δw, where δw can be expanded to the leading orders in the weight of the truncated tails:
(21) |
This clearly shows that α = −1/2 gives the least shift |δw|, as also illustrated by various results shown in Fig. 5.
In Fig. 5(a), we show that the shift |δw| of the minimum tends to increase as the tail sampling becomes poorer (i.e., x_c decreases). The landscapes of the loss function, shown in the inset of Fig. 5(a), also confirm this observation. The increase of the error with the potential depth observed in Sec. IV may primarily be due to the same effect.
In Fig. 5(b), we plot the ratio w_min/w* between the estimated minimum w_min and the true minimum w* as a function of the mean μ, which is an analog of the nonequilibrium driving. We note that w_min here is the lowest value of w at which the slope of the loss function becomes less than a fixed small threshold. We observe that an overestimation regime (w_min/w* > 1) crosses over to an underestimation regime (w_min/w* < 1) as μ grows. This is in striking agreement with the overestimation-to-underestimation crossover observed in Sec. IV. The reason why w_min underestimates w* for large μ can be understood from the flattened loss function landscapes shown in the inset of Fig. 5(b). In this regime, the gradient-descent dynamics of w (starting from w = 0) slows down, ending up at a value (filled diamonds) even lower than w* (empty diamonds). This effect is due to the loss gradient nearly vanishing over a broad range of w when μ is too large. We expect that a similar mechanism might be at play behind the observed behavior of ⟨h_θ⟩/⟨ΔS⟩ in Sec. IV. If we had used a broader range of nonequilibrium driving, the same behaviors might have been observed for the other models as well, although this remains to be checked.
The one-parameter model also allows us to examine the effects of the finite minibatch size N. While the ideal loss function is given in Eq. (20), the loss function used in the actual training looks like

(22) L̂_α(w) = (1/N) Σ_{i=1}^{N} [ [1/(1+α)] e^{−(1+α) w x_i} − (1/α) e^{α w x_i} ] + 1/(α(1+α)),

where the x_i are i.i.d. Gaussian random variables of mean μ and variance s². When N is large but finite, using the central limit theorem (CLT), the gradient of this loss function can be approximated as [26, 27]

(23) ∂_w L̂_α(w) ≈ ∂_w L_α(w) + √(V_α(w)/N) η,

where η is a standard Gaussian random variable, g_α(w, x) ≡ −x [e^{−(1+α)wx} + e^{αwx}] is the single-sample gradient, and V_α(w) ≡ Var[g_α(w, x)] its variance. When the stochastic gradient descent reaches the steady state, the MSE of w is given by

(24) ⟨(w − w*)²⟩ ≈ λ V_α(w*) / [2N L_α''(w*)],

where λ is the learning rate.
This leading-order behavior is shown in Fig. 5(c) for various values of μ. For all cases, the MSE of w is minimized at α = −1/2, which is consistent with the smallest error bars observed at α = −1/2 in the figures of Sec. IV. Hence, α = −1/2 yields the most consistent EP estimator.
Direct measurements of the loss function gradient at the minimum also confirm the above result. As shown in Fig. 5(d), the gradient is far more broadly distributed for α = 0 than for α = −1/2. Moreover, due to the subleading effects (beyond the CLT) of the finite N, the gradient for α = 0 features a large skewness. These show that the training dynamics for the original NEEP (α = 0) tends to be far more volatile and unstable than for the α-NEEP with α = −1/2.
VI Summary and outlook
We proposed the α-NEEP, a generalization of the NEEP for estimating steady-state EP at the trajectory level. By choosing a value of α between −1 and 0, the α-NEEP weakens the exponential dependence of the loss function on the EP estimator, effectively mitigating the adverse effects induced by the poor sampling of transitions associated with large negative EP in the presence of strong nonequilibrium driving and/or deep potential wells. We also observed that α = −1/2 tends to exhibit the optimal performance, which can be understood via a simplification of the original EP estimation problem, whose loss function landscape and relaxation properties are analytically tractable. The α-NEEP thus provides a powerful method for estimating the EP for a much broader range of the nonequilibrium driving force and the time scale of the dynamics. The identification of even better loss functions and the optimization of other hyperparameters (network size, number of iterations, etc.) are left as future works. It would also be interesting to apply the α-NEEP to estimations of the EP of the Brownian movies [28] and stochastic systems with odd-parity variables [29], which have been studied using the original NEEP method.
Acknowledgments. — This work was supported by the POSCO Science Fellowship of the POSCO TJ Park Foundation. E.K. and Y.B. also thank Junghyo Jo and Sangyun Lee for helpful comments.


Appendix A Training details
We always use a fully connected network (FCN) with three hidden layers. Each training dataset consists of sampled trajectories. The network uses the ReLU activation function, and its parameters are updated by the Adam optimizer with fixed learning rate and weight decay. We halt the training after a fixed number of iterations, except for the results shown in Figs. 8 and 9 (see Appendix C), where we continue the training for a longer time to check the overfitting effects. All training is done in PyTorch on an NVIDIA GeForce RTX 3090 GPU.
Appendix B Density ratio estimation via f-divergence
Here we show that the loss function given in Eq. (7), whose minimization allows us to estimate the ratio between two probability density functions, can be generalized even further using the concept of f-divergence. Consider a convex, twice-differentiable real-valued function f(r). Then, the inequality

(25) h f'(h) − f(h) − r f'(h) ≥ −f(r)

holds for any h, r > 0. We can verify this by differentiating the left-hand side (LHS) with respect to h, which yields f''(h)(h − r). Thus, the LHS has a local minimum at h = r, and this is the only local minimum since f is convex. In addition, the second derivative of the LHS at h = r equals f''(r), which is positive by the convexity. This proves the inequality (25).
Using this result, we can design a loss function whose minimum is equal to the negative f-divergence between two probability distributions p and q. To be specific, for any positive function h(x), we define

(26) L_f[h] = ⟨h(x) f'(h(x)) − f(h(x))⟩_q − ⟨f'(h(x))⟩_p.

Using Eq. (25) with r = p(x)/q(x), we conclude that

(27) L_f[h] ≥ −D_f(p ‖ q),

where D_f(p ‖ q) ≡ ∫ dx q(x) f(p(x)/q(x)) is the f-divergence between the distributions p and q, and the equality holds if and only if h(x) = p(x)/q(x) for all x. By minimizing L_f[h], we can estimate the ratio p(x)/q(x) as well as D_f(p ‖ q).
The loss function and the associated α-divergence discussed in the main text are obtained by choosing the function f to be

(28) f(r) = (r^{1+α} − r)/(α(1+α)).

Note that f(1) = 0 and f''(r) = r^{α−1} > 0. It is straightforward to obtain Eq. (9) and its extensions to the cases α = 0 and α = −1 from this choice.
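This construction can be verified by quadrature for a pair of distributions whose α-divergence is known in closed form. For two unit-variance Gaussians with means 1 and 0, ∫ p^{1+α} q^{−α} dx = e^{α(1+α)/2}, so D_α = (e^{α(1+α)/2} − 1)/(α(1+α)); the sketch below (for α = −1/2) checks that the loss of Eq. (26) with the choice of Eq. (28) attains −D_α at h = p/q, while a perturbed ratio gives a strictly larger loss.

```python
import numpy as np

a = -0.5
x = np.linspace(-12.0, 13.0, 200_001)
dx = x[1] - x[0]
p = np.exp(-(x - 1.0) ** 2 / 2) / np.sqrt(2 * np.pi)   # p = N(1, 1)
q = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)           # q = N(0, 1)

f = lambda r: (r ** (1 + a) - r) / (a * (1 + a))       # Eq. (28)
fp = lambda r: ((1 + a) * r ** a - 1) / (a * (1 + a))  # f'(r)

def loss(h):
    # Eq. (26): L_f[h] = <h f'(h) - f(h)>_q - <f'(h)>_p, by quadrature
    return ((h * fp(h) - f(h)) * q).sum() * dx - (fp(h) * p).sum() * dx

D_exact = (np.exp(a * (1 + a) / 2) - 1) / (a * (1 + a))
```

The same check goes through for any α other than 0 and −1, and for any convex twice-differentiable f with its corresponding D_f.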


Appendix C Extra numerical results
C.1 Coefficient of determination
In the literature, the extent of agreement between a prediction and the true value is often expressed by the coefficient of determination R². Here we check how the behaviors of R² differ as the value of α changes for the cases of the two-bead model and the driven 1D Brownian particle.
For the two-bead model, as shown in Fig. 6(a), R² exhibits a nonmonotonic behavior as a function of T_h. The decrease of R² with increasing T_h reflects the deterioration of the α-NEEP performance as the nonequilibrium driving gets stronger. Meanwhile, the decrease of R² as T_h decreases (getting closer to equilibrium T_h = T_c) is due to the overfitting phenomenon discussed in the next section, which disrupts the linear relationship between the predicted EP and the true EP.
For the driven Brownian particle, as shown in Fig. 6(b), R² always increases with the potential amplitude. This may seem contradictory to how the MSE tends to increase or stay constant with an increasing amplitude in Fig. 4(b). Indeed, a higher R² only means that there is a good linear relationship between the EP estimate h_θ and the true EP ΔS, not that h_θ and ΔS are close to each other. When the amplitude is increased, due to the slower dynamics, we may have h_θ > ΔS for transitions with positive EP and h_θ < ΔS for transitions with negative EP, which can make the linear relationship between h_θ and ΔS appear stronger. This example clearly shows that R² is not an adequate measure of the performance of EP estimators.
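The point can be made with a two-line numerical example, reading R² here as the squared Pearson correlation between prediction and truth: a uniformly shrunken prediction keeps a perfect linear relationship while its MSE stays large. (The synthetic "EP" values below are illustrative stand-ins, not data from the models above.)

```python
import numpy as np

rng = np.random.default_rng(0)
dS_true = rng.normal(0.0, 2.0, 10_000)   # stand-in for true trajectory EPs
dS_pred = 0.2 * dS_true                  # systematic shrinkage of the estimate

r2 = np.corrcoef(dS_true, dS_pred)[0, 1] ** 2   # squared Pearson correlation
mse = np.mean((dS_true - dS_pred) ** 2)
```

Here r2 is essentially 1 even though every prediction is off by a factor of five, so the MSE remains of the order of the EP variance.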
C.2 Effects of the minibatch size
The minibatch refers to the group of samples used for computing the gradient of the loss function. Smaller (larger) minibatches increase (decrease) the noisy component of the gradient, which in turn affects the performance of the α-NEEP.
We explicitly check the effects of the minibatch size using the two-bead model with fixed T_h and T_c, as shown in Fig. 7. We use the ratio ⟨h_θ⟩/⟨ΔS⟩ and the MSE as two different measures of the α-NEEP performance. For small minibatches, the highly skewed distribution of the stochastic gradient shown in Fig. 5(d) causes an underestimation of the EP. For large minibatches, the noisy component of the loss-function gradient decreases, revealing the properties of the loss function landscape of the training dataset. As discussed using the Gaussian model in Sec. V, the loss function landscape at a moderately strong nonequilibrium driving leads to an overestimation of the EP. Thus, as the minibatch size is increased, ⟨h_θ⟩/⟨ΔS⟩ grows beyond 1.

The nonmonotonic behaviors of the MSE also hint at the existence of an optimal minibatch size at the tradeoff between the skewed noise in the gradient (which drives the neural network towards underestimation) and the loss function landscape tilted towards overestimation. For both measures, the superiority of α = −1/2 to α = 0 is manifest.
C.3 Effects of overfitting
In many cases, when the training continues for too many iterations, artificial neural networks are known to exhibit overfitting behaviors. As shown in Figs. 8 and 9, we checked whether the α-NEEP is also subject to the same phenomena as the training continues for a much larger number of iterations. Towards this end, we created two independent datasets of trajectories exhibited by the two-bead model, namely the training set and the test set. Only the former was used during the training of the α-NEEP, and we measured the MSE and the ratio ⟨h_θ⟩/⟨ΔS⟩ to assess the performance of the α-NEEP for each dataset.
In Fig. 8, we show the results for the weak nonequilibrium driving (T_h close to T_c). The first and the third columns show the two different measures of performance for the training dataset and the test dataset. Meanwhile, the second and the fourth columns show the difference between the corresponding measures obtained for the two datasets. The overfitting phenomena are manifest from the increase of the MSE towards the end of the training. Interestingly, overfitting leads to an overestimation of the average EP only for the training dataset. We also note that the value of α is largely irrelevant to the extent of overfitting. This phenomenon can be explained as follows. Near equilibrium, the neural network swiftly reaches the loss function minimum. However, as the training continues, the neural network starts to see the detailed fluctuations of the training dataset. This makes the functional form of the estimator very rough, leading to the increase of the MSE for both datasets. But while the neural network now believes all trajectories in the training dataset to be highly irreversible and assigns high EP to them, the EP assigned to the trajectories in the test dataset stays unbiased. Thus, ⟨h_θ⟩/⟨ΔS⟩ grows larger only for the training dataset.
In Fig. 9, we show the results for the strong nonequilibrium driving (a larger T_h − T_c). The subfigures are organized in exactly the same way as in Fig. 8. In this case, the overfitting effects do exist. But they are not as pronounced as in the case of the weaker nonequilibrium driving, and the differences between the training and the test datasets stay small. Note that the curves for α = 0 exhibit strong fluctuations, which is in agreement with the large fluctuations of the gradient shown in Fig. 5(d).
References
- Brangwynne et al. [2008] C. P. Brangwynne, G. H. Koenderink, F. C. MacKintosh, and D. A. Weitz, J. Cell Biol. 183, 583 (2008).
- Weber et al. [2012] S. C. Weber, A. J. Spakowitz, and J. A. Theriot, Proc. Natl. Acad. Sci. U. S. A. 109, 7338 (2012).
- Mizuno et al. [2007] D. Mizuno, C. Tardin, C. F. Schmidt, and F. C. MacKintosh, Science 315, 370 (2007).
- Battle et al. [2016] C. Battle, C. P. Broedersz, N. Fakhri, V. F. Geyer, J. Howard, C. F. Schmidt, and F. C. MacKintosh, Science 352, 604 (2016).
- Gladrow et al. [2016] J. Gladrow, N. Fakhri, F. C. MacKintosh, C. F. Schmidt, and C. P. Broedersz, Phys. Rev. Lett. 116, 248301 (2016).
- Tailleur and Cates [2008] J. Tailleur and M. E. Cates, Phys. Rev. Lett. 100, 218103 (2008).
- Speck et al. [2014] T. Speck, J. Bialké, A. M. Menzel, and H. Löwen, Phys. Rev. Lett. 112, 218304 (2014).
- Takatori et al. [2014] S. C. Takatori, W. Yan, and J. F. Brady, Phys. Rev. Lett. 113, 028103 (2014).
- Farage et al. [2015] T. F. F. Farage, P. Krinninger, and J. M. Brader, Phys. Rev. E 91, 042310 (2015).
- Solon et al. [2018] A. P. Solon, J. Stenhammar, M. E. Cates, Y. Kafri, and J. Tailleur, Phys. Rev. E 97, 020602(R) (2018).
- Fodor et al. [2016] E. Fodor, C. Nardini, M. E. Cates, J. Tailleur, P. Visco, and F. van Wijland, Phys. Rev. Lett. 117, 038103 (2016).
- Nardini et al. [2017] C. Nardini, E. Fodor, E. Tjhung, F. van Wijland, J. Tailleur, and M. E. Cates, Phys. Rev. X 7, 021007 (2017).
- Seifert [2012] U. Seifert, Rep. Prog. Phys. 75, 126001 (2012).
- Roldán and Parrondo [2010] E. Roldán and J. M. R. Parrondo, Phys. Rev. Lett. 105, 150607 (2010).
- Roldán and Parrondo [2012] E. Roldán and J. M. R. Parrondo, Phys. Rev. E 85, 031129 (2012).
- Li et al. [2019] J. Li, J. M. Horowitz, T. R. Gingrich, and N. Fakhri, Nat. Commun. 10, 1666 (2019).
- Van Vu et al. [2020] T. Van Vu, V. T. Vo, and Y. Hasegawa, Phys. Rev. E 101, 042138 (2020).
- Otsubo et al. [2020] S. Otsubo, S. Ito, A. Dechant, and T. Sagawa, Phys. Rev. E 101, 062106 (2020).
- Otsubo et al. [2022] S. Otsubo, S. K. Manikandan, T. Sagawa, and S. Krishnamurthy, Commun. Phys. 5, 11 (2022).
- Lee et al. [2023] S. Lee, D.-K. Kim, J.-M. Park, W. K. Kim, H. Park, and J. S. Lee, Phys. Rev. Res. 5, 013194 (2023).
- Kim et al. [2020] D.-K. Kim, Y. Bae, S. Lee, and H. Jeong, Phys. Rev. Lett. 125, 140604 (2020).
- Basu et al. [1998] A. Basu, I. R. Harris, N. L. Hjort, and M. C. Jones, Biometrika 85, 549 (1998).
- Sugiyama et al. [2012] M. Sugiyama, T. Suzuki, and T. Kanamori, Density Ratio Estimation in Machine Learning (Cambridge University Press, 2012).
- Nowozin et al. [2016] S. Nowozin, B. Cseke, and R. Tomioka, in Proceedings of the 30th International Conference on Neural Information Processing Systems (Curran Associates, Inc., 2016) pp. 271–279.
- Belghazi et al. [2018] M. I. Belghazi, A. Baratin, S. Rajeshwar, S. Ozair, Y. Bengio, A. Courville, and D. Hjelm, in International Conference on Machine Learning (PMLR, 2018) pp. 531–540.
- Li et al. [2017] Q. Li, C. Tai, and W. E, in Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 70, edited by D. Precup and Y. W. Teh (PMLR, 2017) pp. 2101–2110.
- Chaudhari and Soatto [2018] P. Chaudhari and S. Soatto, in 2018 Information Theory and Applications Workshop (ITA) (IEEE, 2018) pp. 1–10.
- Bae et al. [2022] Y. Bae, D.-K. Kim, and H. Jeong, Phys. Rev. Res. 4, 033094 (2022).
- Kim et al. [2022] D.-K. Kim, S. Lee, and H. Jeong, Phys. Rev. Res. 4, 023051 (2022).